Unsurprisingly, the rise of artificial intelligence has driven progress across sectors, but in Nigeria it has also enabled disinformation. Deepfakes, synthetic images, and manipulated videos are increasingly used to mislead the public, sway opinion, and erode trust in institutions.
From elections to social issues and financial scams, AI-generated disinformation poses serious challenges to national security, democratic processes, and societal cohesion. It is therefore vital to examine the major AI-related disinformation incidents in Nigeria and draw lessons that can help prevent their recurrence.
Major AI-Generated Disinformation Incidents in Nigeria
1. Viral Deepfake of Political Figures (2025 Election Season — June 2025)
During the 2025 elections, deepfake videos portraying political leaders in controversial situations went viral. Fact-checkers confirmed these were AI-generated, designed to delegitimise candidates and exacerbate political tensions.
Lesson: Election periods are especially vulnerable to AI-generated manipulation, demanding heightened vigilance from voters, the media, and electoral institutions alike.
2. Fake Nigerian Army Clip in Benue (March 2025)
A deepfake clip showed Nigerian soldiers prioritising cattle over citizens in Benue State, sparking outrage. Despite being debunked, it spread rapidly.
Lesson: AI-generated content can inflame ethnic and social tensions, even before verification.
3. Misleading Video of Dr Florence Ajimobi (February 2025)
A video falsely depicting Dr Florence Ajimobi making inflammatory political remarks circulated widely, prompting official clarifications.
Lesson: Impersonating public figures erodes trust in leadership and democratic discourse, undermining confidence in government.
4. Ponzi Scheme Ad Featuring President Tinubu (April 2025)
An AI-edited advert used President Bola Tinubu’s likeness to promote a fraudulent investment scheme, misleading the public.
Lesson: Deepfakes are not limited to politics; they can also facilitate economic scams.
5. Celebrity and Public Health Manipulations (May 2025)
AI-generated videos placed celebrities and public figures, such as Wole Soyinka, in fabricated scenarios, creating confusion and reputational damage.
Lesson: False content affects public perception beyond politics, including culture and health.
6. Lekki Flooding AI Clip Causes Confusion (July 2025)
A hyper-realistic AI-generated video depicted severe flooding in Lekki. Although intended as satire, many treated it as real.
Lesson: AI blurs the line between reality and satire, particularly for audiences with low media literacy.
7. Misattributed Videos of President Tinubu (August 2025)
Doctored videos circulated showing President Tinubu allegedly threatening citizens over AI-generated content.
Lesson: Deepfakes can be weaponised to suppress criticism and manipulate public perception.
8. Amplification of Foreign Propaganda (September 2025)
Some Nigerian accounts shared AI-generated material portraying foreign leaders in fabricated political narratives, spreading disinformation across borders.
Lesson: Disinformation is transnational, and local audiences can unintentionally amplify global propaganda.
9. AI-Generated Investment Endorsements (June 2025)
Videos falsely showed politicians endorsing dubious investment platforms, misleading citizens into financial scams.
Lesson: AI manipulation exploits public figures’ trust to commit fraud.
10. Political Propaganda Networks (October 2025)
Social media accounts deployed AI tools to generate fabricated content in support of political narratives, polarising public opinion.
Lesson: Automation lowers the barrier for large-scale propaganda, making it harder to trace and counter.
Common Trends and Effects
1. Deepfakes as Weapons of Distrust
AI-generated media makes it hard to distinguish truth from falsehood, gradually eroding public trust in news, leaders, and institutions.
2. Low Media Literacy Amplifies Harm
Limited digital literacy means many Nigerians cannot easily detect AI-manipulated content, allowing misinformation to spread rapidly.
3. Institutional and Security Concerns
Fake content targeting government or security agencies can provoke outrage, weaken confidence, and threaten national cohesion.
4. Legal and Regulatory Gaps
Existing laws do not sufficiently address AI disinformation or identity abuse, leaving perpetrators largely unaccountable.
5. Gendered Digital Harm
Women face disproportionate risks from AI-driven harassment, including impersonation and deepfake abuse, affecting safety and reputation.
Future Lessons
1. Promote Digital and Media Literacy
Educate citizens to recognise and critically evaluate AI-generated content to reduce the spread of misinformation.
2. Strengthen Legal Frameworks
Implement clear laws to tackle deepfakes, fraud, and online impersonation, ensuring accountability for perpetrators.
3. Encourage Platform Responsibility
Require social media platforms to act transparently, moderate content swiftly, and provide tools to verify information.
4. Support Fact-Checking Institutions
Empower civil society and fact-checkers with resources and authority to verify content and counter disinformation effectively.
5. Invest in AI Detection Tools
Deploy AI-powered verification technologies across newsrooms, government agencies, and NGOs to detect and combat synthetic content.
Epilogue
AI has opened incredible possibilities for innovation, but it has also created new avenues for disinformation.
Nigeria’s experience shows that deepfakes and synthetic content are not just technical challenges; they are societal ones, affecting trust, security, and vulnerable communities.
Addressing this threat requires a combined effort: informed citizens, robust laws, responsible platforms, empowered fact-checkers, and advanced detection tools.
Only through awareness, vigilance, and coordinated action can the country harness AI’s benefits while safeguarding truth and public confidence.

Senior Reporter/Editor
Bio: Ugochukwu is a freelance journalist and Editor at AIbase.ng, with a strong professional focus on investigative reporting. He holds a degree in Mass Communication and brings extensive experience in news gathering, reporting, and editorial writing. With over a decade of active engagement across diverse news outlets, he contributes in-depth analytical, practical, and expository articles exploring artificial intelligence and its real-world impact. His seasoned newsroom experience and well-established information networks provide AIbase.ng with credible, timely, and high-quality coverage of emerging AI developments.
