A new report by public strategy and media group Gatefield has projected that up to 30 million Nigerian women and girls could face AI-facilitated online attacks annually by 2030 if urgent regulatory and digital safety measures are not implemented.
The projection is contained in Gatefield’s State of Online Harms 2025 report, which examines the growing intersection between emerging technologies and digital violence in Nigeria. The report warns that advances in artificial intelligence, particularly generative AI tools, are increasing the scale, speed, and sophistication of online abuse targeting women and girls.
According to Gatefield, AI-enabled threats may include the creation of non-consensual sexualised deepfakes, impersonation scams, automated harassment campaigns, and coordinated disinformation efforts. The report argues that as internet penetration continues to rise in Nigeria, the number of potential victims could increase significantly without stronger legal safeguards and platform accountability.
Gatefield’s analysis is based on projected internet user growth in Nigeria by 2030, combined with current trends in online harassment and digital gender-based violence. The report suggests that millions of women and girls may be directly targeted each year, while many more could be indirectly exposed to harmful AI-generated content.
The organisation notes that AI tools have lowered the technical barriers for malicious actors, allowing individuals or groups to produce realistic, manipulated images, videos, and messages at scale. It adds that such tools can amplify existing patterns of gender-based abuse, particularly against women in politics, journalism, activism, and other public-facing roles.
Gatefield is calling for urgent reforms, including updated cybercrime legislation, clearer regulatory frameworks for AI deployment, stronger enforcement mechanisms, and improved collaboration between technology companies and Nigerian authorities. The report also recommends expanded digital literacy programs to help users identify and respond to AI-generated misinformation and abuse.
The findings were released as part of broader conversations around online safety and digital rights, with advocates warning that technological innovation must be matched by protections that safeguard vulnerable populations.
While the report presents forward-looking projections rather than confirmed figures, Gatefield maintains that proactive intervention is critical to prevent a potential escalation of “industrialised” online harm in the coming years. The full report is available here.

