A discussion of AI's influence on democratic processes in election cycles around the world

Generative AI systems’ capacity to create visual and audio content that could pass as “real” has raised widespread concern about the technology’s potential impact on the integrity of our information ecosystems. In particular, many worry that generative AI lowers the barrier to entry for launching disinformation campaigns designed to skew people’s perception of the world or steer them away from the accurate, reliable information that is critical to election processes.

On August 8, 2024, fellows participated in a discussion on the topic of AI and Democracy. To seed the conversation, three program participants presented concrete case studies and research findings that examined the use of AI in democratic processes and helped fellows understand the associated complexities and risks.

Discussion Summary

The discussion session on AI and Democracy kicked off the second iteration of the Responsible AI Fellowship project. It began with an acknowledgment that 2024 is a historic election year globally – with over forty percent of the world’s population eligible to vote in their respective elections. Yet even though seventy-eight percent of citizens in democratic countries consider democracy an important part of their daily lives, democracies worldwide are affected by increasing polarization, erosion of civil liberties, and declining public trust.

When AI is used in maladaptive ways, it can degrade information ecosystems and accelerate the dangerous trends outlined above. AI-powered tools, such as generative AI deepfakes and bots, can amplify false information and influence public opinion as well as election outcomes. Presenters examined case studies spanning multiple regions of the world, with a particular focus on the Global South. Each case study looked at a use of AI that either directly influenced the outcome of an election or otherwise impacted the democratic process and civilian life.

Brazil (2018) – The impact of AI on voter behavior. Using publicly available data, campaigns micro-targeted voters with tailored political messaging on a wide scale. This practice both manipulated and polarized voters, leaving little common ground for conversation.

India (2019 and 2024) – AI as a tool to mislead and deceive voters. In 2019, AI was used heavily in misinformation campaigns, with automated bots, fake accounts, deepfakes, and manipulated media influencing voter behavior and contributing to false narratives. Increased skepticism and fragmentation, along with the democratization of AI tools in India, created a new set of opportunities and concerns in the 2024 election. On the one hand, AI has been used “constructively” to better connect with voters, launch multilingual campaigns, and provide context about the deceptive use of deepfakes; on the other hand, the misuse of AI has fueled an increase in trolling and amplified meme wars online.

Indonesia (2024) – AI as an assist in electoral processes. AI voting assistants are being introduced to the world’s third-largest democracy, offering voters help in choosing a candidate suited to their voting profile. AI deepfakes have also been used to portray presidential candidates speaking foreign languages – a practice that, while potentially helpful for diplomacy, could alienate and mislead domestic voters.

Kenya (2022) – The need for governance over AI systems. In 2022, Huawei Safe City helped to install 200 surveillance cameras across Nairobi, later aiding in establishing a national police command center for managing the technology. Countries like Uganda and Zambia have deployed similar Safe City technologies but have started using them to surveil and track those opposed to the government.  

Nigeria (2014-24) – The need for a National Artificial Intelligence Strategy (NAIS). Over the past decade, Nigerian governments have expanded spending on surveillance capabilities, collecting localized data and analyzing it for suspicious communication patterns. During the 2023 elections, AI deepfakes were allegedly used to stir religious unrest among a population that is roughly half Christian and half Muslim. Nigeria’s NAIS, released in August 2024, applies the US NIST AI Risk Management Framework and identifies future AI risks for the country; those risks, however, rank lower than other domestic concerns such as cybersecurity, concentration of power, and job displacement.

Pakistan (2023) – Steps toward a national AI strategy. With the 2023 Draft National AI Policy, Pakistan’s Ministry of Information Technology & Telecommunications established the need to invest in public awareness and data sharing, further proposing an AI Regulatory Directorate (ARD). The ARD would be responsible for both the oversight of AI and the creation of regulatory guidelines to combat the spread of disinformation.

Senegal (2024) – AI deepfake technology in elections. In this case, the mentor to the current president – himself a former candidate – was the subject of an AI deepfake video in which he condemned France. This condemnation of the country’s former colonizer ignited nationalist sentiments and contributed to the success of the now-president’s campaign. In the absence of extensive information access laws, Senegal has witnessed increased internet shutdowns and restricted access to social media, coupled with heightened surveillance-related legislation.

Slovakia (2023) – AI interference in the Global North. In 2023, a deepfaked audio recording circulated in Slovakia in which a candidate appeared to confess to rigging the election; the candidate subsequently lost the election.

United States (2023-24) – AI voter suppression and manipulation. In January 2024, a fake robocall impersonating President Joe Biden went out to Democratic voters in New Hampshire ahead of the state’s primary election, telling them not to vote. Later in 2024, a deepfake video of presidential candidate Kamala Harris was recirculated by Elon Musk in a post on X, garnering more than 150 million views. Musk failed to disclose that the video was fake and intended as parody – which is allowed in the United States – blurring the line between voter manipulation and parody in elections.

The use of AI technologies to increasingly surveil citizens in democratic countries, automate and disseminate disinformation, and target and manipulate voters has led to various ethical and legal implications in the Global South. AI has also been disproportionately weaponized against women in politics, becoming a tool for harassment of politicians and journalists. In extreme cases, women are made the subjects of non-consensual deepfake pornography, which accounts for ninety to ninety-five percent of online deepfakes.

In the Global North, the direct use of AI in politics has emerged as an entirely new challenge. In both Russia and the UK, AI systems have been put forward as candidates, with “AI Steve” standing in the UK, represented by businessman Steve Endacott. In Denmark, the Danish AI Party is led by an AI and presents an opportunity for alternative political leadership and quicker decision-making. Though unnerving to some, theoretical applications of AI in democracy, such as individualized “AI agents,” could offer voters increased representation in parliamentary and governing bodies.

Presenters highlighted technical solutions developed to detect fake news and deepfakes, automated bots and trolls, and online microtargeting and manipulation. AI detection systems are used to identify deepfakes and analyze AI-generated imagery, audio, or video that may be difficult for humans to recognize as fake. Bot detection systems similarly analyze the behavioral patterns of suspected bot accounts. Ad transparency tools and algorithm audits give users insight into their online ad preferences, an effort to combat microtargeting and manipulation.

Furthermore, there are other promising opportunities at the intersection of AI and democracy. The 2024 AI Election Accord, signed by 25 of the biggest global tech companies, could be considered a step in the right direction, committing signatories to preventing the misuse of their sophisticated tools in democratic processes. POLIS is an open-source tool that uses advanced statistics and machine learning to analyze varying political climates; it has been used around the world to better understand democracies. There has also been a rise in Kenyan-made custom GPT tools designed to enhance accountability in the economic and political sectors.

While the misuse of AI technology in democratic processes has increased distrust and muddied public perceptions of truth, there is hope that AI could act as a future bridge between governments and citizens. As one participant pointed out, elections have always involved manipulation; AI has simply made that manipulation easier to scale. If regulated effectively, AI could help establish common ground between governments and their citizens. Intentionally built AI systems could bridge a government that is hard to fully understand with a civic population that is hard to fully respond to, narrowing the cognitive gap. Fellows pointed out opportunities such as AI live translation, AI translation of public service announcements, and the use of AI agents to deliver informed political summaries and verify political information. However, AI continues to develop at breathtaking speed, and proper governance will be key to the functioning of the democracies in which it is deployed.

This summary captures the extensive discussion and insights shared during the meeting held on August 8, 2024, at 10:00am EST while maintaining the anonymity of individual participants as per the Chatham House Rule.

A special thank you to Isaac Halaszi, Research Assistant for the Strategic Foresight Hub at the Stimson Center, who contributed to this event summary.