New Election Threats Emerge Beyond Deepfakes as AI Content Farms Target UK

A recent investigation has uncovered overseas content farms using AI to spread fake political content targeting UK audiences, highlighting how AI-generated messaging is already shaping domestic political discourse. As local elections approach, there is still no mandatory requirement to label AI-generated content, leaving a regulatory gap.

Advances in generative AI now allow political messaging to be produced at scale: not just as isolated deepfakes, but as coordinated networks of human-like personas capable of influencing public discourse in real time.

This experiment clearly shows what the future dangers could be. Imagine someone creating thousands of AI avatars that look just like us, spreading a political message we've never even heard of. At scale, this kind of influence won't just distort public opinion — it could fundamentally undermine trust in elections. These AI agents can now look, speak, and react just like real people. There's no reliable way for the average person to tell the difference.

Donatas Smailys, CEO of Billo App

AI-Supported Election Interference Is No Longer Hypothetical

According to research published by Harvard Business School, deepfakes circulated throughout the 2024 US presidential election campaign, with fabricated news outlets shaping debates. In the UK, a survey of adults found that 34.1% had seen deepfakes featuring politicians during the same period.

The HBS research demonstrates that AI in its current form can manipulate beliefs and behaviours at the population level. Large language models and autonomous agents allow influence campaigns to operate with unprecedented reach and precision, enabling propaganda to be generated cheaply while remaining credible and human-like.

In one experiment, participants were exposed to just ten AI-generated social media posts containing a political slogan. Afterwards, more than 80% began repeating the same slogan themselves, demonstrating how quickly AI-driven messaging can spread through human users.

Regulatory Gaps

Experts warn that unaccountable AI influencers are set to exploit weaknesses in regulation and digital literacy, making it harder to distinguish real people from fake voices in real-world elections. The absence of a coherent national policy leaves a gap that AI could exploit at scale.