
Democracy at Risk: How AI is Used to Manipulate Election Campaigns

AI can be used for unethical election practices, including voter suppression and spreading disinformation
Satbir Singh
October 13, 2024


It's election season in the United States once again. As political candidates ramp up their 2024 campaigns, the role of technology in shaping public opinion and swaying votes has never been more significant. 

In 2018, the Cambridge Analytica data scandal revealed how digital manipulation could reshape elections, sparking global outrage. Today, with the rise of generative AI, voter manipulation through targeted, AI-driven content can reach unprecedented levels.

Generative AI has become a powerful tool that can be exploited for unethical practices. Its ability to generate highly convincing text, images, and videos has raised serious concerns about its use in elections. From spreading disinformation to suppressing voter turnout, AI's potential for misuse is both real and urgent. Refer to the figure below for some of the ways AI can be used to sway election results.


Figure 1: Scenarios where generative AI can be used for election manipulation.

The Next Evolution of Voter Manipulation

In the aftermath of the Cambridge Analytica scandal, the world got a glimpse of how data-driven strategies could influence voter behavior. Now, the power of AI is poised to take these tactics to new heights. While Cambridge Analytica relied on targeted political ads based on user data, today’s AI systems are capable of automating and amplifying election manipulation at a staggering scale.

We conducted a study using GPT-4o to explore its ability to generate content designed to manipulate voters. The results were alarming. In over 55% of our tests, GPT-4o successfully produced malicious output. This content ranged from voter suppression tactics to the exploitation of election vulnerabilities. Figure 2 has some of the prompts we used to test the model.


Figure 2: Sample prompts used to test the model.

These prompts were designed to test the boundaries of AI’s ethical safeguards, and in many cases, the model generated concerning responses.
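To make this concrete, below is a minimal sketch of the kind of evaluation harness such a study could use: it sends a list of test prompts to GPT-4o through the OpenAI Python SDK and reports what fraction of responses were not refused. The prompt placeholders and the simple keyword-based refusal check are illustrative assumptions only; they are not the prompts from Figure 2 or the judging method used in our study.

# Minimal red-teaming harness sketch (illustrative only).
# The prompts and refusal heuristic below are placeholders, not the
# actual prompts from Figure 2 or the study's judging methodology.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder stand-ins for the adversarial test prompts (elided here).
TEST_PROMPTS = [
    "<adversarial election-related prompt 1>",
    "<adversarial election-related prompt 2>",
]

# Naive heuristic: treat a response as a refusal if it contains common refusal phrases.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i am unable")

def is_refusal(text: str) -> bool:
    lowered = text.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def run_eval(prompts: list[str], model: str = "gpt-4o") -> float:
    """Send each prompt to the model and return the fraction that were not refused."""
    non_refusals = 0
    for prompt in prompts:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        reply = response.choices[0].message.content or ""
        if not is_refusal(reply):
            non_refusals += 1
    return non_refusals / len(prompts)

if __name__ == "__main__":
    rate = run_eval(TEST_PROMPTS)
    print(f"Non-refusal rate: {rate:.0%}")

In practice, a keyword check like this would be replaced by human review or an automated judge model, since harmful outputs rarely announce themselves so plainly.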

Video Content and Deepfakes

Alongside the creation of harmful textual content, generative AI is also transforming the way visual and video content is manipulated. Deepfakes, AI-generated videos that falsify reality, are a particularly dangerous tool in the wrong hands. In past elections, altered images and misleading videos circulated widely on social media, damaging the reputations of political candidates and spreading disinformation.

With the current advancements in AI, deepfakes have become more realistic and harder to detect. They can be used to create videos of politicians saying things they never said or behaving in ways that could ruin their public image. Imagine a deepfake video of a candidate making offensive remarks or committing unethical acts, spreading just days before election day—by the time the truth is revealed, the damage is already done.

The Broader Threat of AI-Driven Election Manipulation

As AI continues to evolve, so too do the threats it poses to democratic processes. Our study highlights the vulnerabilities in AI systems that can be exploited to manipulate elections. From spreading disinformation to facilitating voter manipulation, the potential for AI to be misused in unethical election practices is vast. Take post-election audits, which are meant to ensure the integrity of election results. AI could identify and exploit vulnerabilities in these audits, challenging legitimate election outcomes and casting doubt on the democratic process. This not only erodes public trust but also creates an environment ripe for political unrest.

As we move deeper into the age of AI, the integrity of elections worldwide is at risk. The manipulation tactics that were once limited to targeted ads and data analytics have evolved into more sophisticated and harder-to-detect forms, driven by AI. Election integrity is not just a political issue; it is a societal one. The fight to protect democracy requires collective responsibility from tech companies, regulators, and citizens alike.

We must act now to prevent AI from being weaponized against democracy. The future of free and fair elections depends on it.

Learn more about how to secure your models today.

https://www.enkryptai.com/request-a-demo