As discussions about artificial intelligence and Nigeria’s 2027 elections grow, much of the attention has focused on deepfake videos. While those threats are real, experts warn that the most dangerous forms of AI manipulation may be the ones voters never clearly see.
One emerging concern is the use of AI to spread political propaganda in local languages. In Nigeria, political conversations increasingly take place via WhatsApp voice notes in languages such as Hausa, Yoruba, Igbo, and Pidgin. Modern AI tools can now translate and generate speech in these languages, making it possible to produce convincing audio messages that sound like respected clerics, community leaders, or local politicians.
A fake voice message shared days before an election, for example, could claim that a religious leader has endorsed a particular candidate. By the time such content is debunked, it may have already influenced thousands of voters.
Another overlooked risk is the rise of synthetic online support. AI can generate large networks of automated social media accounts that interact like real people, promoting certain candidates or hashtags. To ordinary users, and even journalists, these conversations can appear to represent genuine public opinion when they are actually orchestrated by automated systems.
Artificial intelligence also allows political actors to tailor messages to specific groups of voters. Instead of broadcasting a single campaign message to everyone, AI can craft highly targeted narratives based on people’s concerns, identities, or frustrations. In a country where ethnic, regional and religious identities already play strong roles in politics, such targeted messaging could deepen divisions.
Beyond campaigning, AI could also affect public trust after elections. Fabricated videos showing ballot manipulation or election violence could circulate online, potentially triggering protests or casting doubt on legitimate results. At the same time, genuine evidence of wrongdoing might be dismissed as fake, creating confusion about what information can be trusted.
The challenge is made even more complex by the platforms where political conversations often take place. Messaging apps such as WhatsApp and Telegram allow information to spread rapidly through private networks of friends and family. Because these platforms are encrypted, monitoring misinformation becomes far more difficult.
To address these risks, analysts say Nigeria will need stronger collaboration between electoral authorities, technology companies, civil society groups and fact-checking organisations. Expanding multilingual monitoring of online content and improving digital literacy among citizens could also help limit the impact of AI-driven misinformation.
Ultimately, protecting Nigeria’s democratic process will depend not only on securing ballot boxes but also on safeguarding the information environment around elections. As the 2027 vote approaches, the challenge may be less about visible digital manipulation and more about the subtle forms of AI influence already shaping political conversations behind the scenes.

Senior Reporter/Editor
Bio: Ugochukwu is a freelance journalist and Editor at AIbase.ng, with a strong professional focus on investigative reporting. He holds a degree in Mass Communication and brings extensive experience in news gathering, reporting, and editorial writing. With over a decade of active engagement across diverse news outlets, he contributes in-depth analytical, practical, and expository articles exploring artificial intelligence and its real-world impact. His seasoned newsroom experience and well-established information networks provide AIbase.ng with credible, timely, and high-quality coverage of emerging AI developments.

