By Trey Grayson

This week the Senate Rules Committee will hold a hearing dedicated to “AI and the Future of Elections,” featuring testimony from state election officials as well as tech policy experts. Sen. Amy Klobuchar (D-Minn.), the Committee’s chairwoman, has called for federal legislation that would “prohibit the distribution of materially deceptive AI-generated [content intended] to influence a federal election.” And earlier this month Senate Majority Leader Chuck Schumer organized a closed-door meeting with some of Silicon Valley’s top executives to discuss the future of AI regulation. 

It’s clear that we’ve entered the era of AI elections. As the 2024 election draws near, artificial intelligence has emerged as a potential threat to election security and voter confidence, and, in some limited cases, a potential asset.

The electoral implications of AI demand our collective attention and scrutiny. These technologies have the potential to disrupt our democratic processes and sow further distrust in the outcome of elections. But by automating certain tasks and making them scalable, they also offer potential benefits for election administrators and others wishing to disseminate reliable election-related information.

Below we’ve outlined a series of questions to guide the conversation around AI and elections going forward. AI will play a role in the elections of tomorrow. It can serve as a valuable tool in certain contexts and as a destabilizing force in others. It’s on all of us—from the news media, to election officials, to ordinary citizens—to ensure that its impact on elections is a positive one. That starts with understanding the landscape before us.

What are AI’s impacts on the dissemination of election-related information?

Deep-fakes and misinformation are the most notable threats AI poses going forward. (A deep-fake refers to manipulated media made to look as if a person, e.g., a candidate for office, did or said something that they didn’t.) Recent AI-generated incidents include a deep-fake video of President Biden declaring a national draft of American citizens to aid Ukraine’s war effort. While the post was initially marked as a deep-fake, it was later reposted without that warning and garnered over 8 million views on Twitter.

Another deep-fake incident saw former President Trump sharing a fabricated video of CNN host Anderson Cooper telling viewers they had seen “Trump ripping us a new [expletive] here on CNN’s live presidential town hall.” While this was widely received as a joke, it demonstrated that anyone can now create convincing deep-fake videos with minimal barriers to entry.

These new capabilities risk sowing false but eminently believable narratives that could alter how people vote on Election Day. A deep-fake video of Sen. Elizabeth Warren insisting that Republicans should be barred from voting in 2024 recently circulated online, garnering millions of views. Were such a video to surface widely in the critical days leading up to an election, it could determine the outcome.

Never before has such distortive power been available at such low cost to so many potential bad actors. The misuse of AI, particularly in the context of deep-fakes, raises concerns about its potential to manipulate public opinion, suppress votes, and compromise the integrity of elections.

What role will AI training data play?

AI tools are driven by their training data. These systems, known as large language models (LLMs), learn by studying vast troves of text, and they generate content by predicting which word is most likely to come next in a given sentence based on those inputs. They are, for all intents and purposes, indifferent to truth; they are built to reproduce whatever patterns their training data contains.
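To make that prediction step concrete, here is a deliberately tiny sketch in Python of the underlying idea: a bigram frequency model that emits whatever word its training text makes most likely. (Real LLMs use neural networks trained on billions of documents, but the indifference to truth is the same: the output is whatever the data statistically suggests.)

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the vast troves of text an LLM trains on.
corpus = (
    "polls open at seven . polls close at eight . "
    "ballots are counted after polls close ."
).split()

# Count which word follows each word: a bigram model, a radically
# simplified stand-in for the next-token prediction inside an LLM.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str | None:
    """Return the statistically most likely next word, true or not."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("polls"))  # -> "close", because that pairing appears most often
```

If the training text had instead said something false about when polls close, the model would repeat the falsehood just as confidently; nothing in the mechanism checks the claim against reality.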

Given this context, past false statements related to elections present a significant challenge, as AI systems carry the potential to amplify future instances of election-related falsehoods. Take, for example, the 2016 U.S. presidential election, during which state-affiliated entities in Russia and elsewhere orchestrated large-scale disinformation campaigns aiming to sway the outcome. Those false claims are plausibly part of the training data behind most of the LLMs in use today.

In the present day, the emergence of generative AI could enable similar campaigns to be run with significantly fewer resources, potentially democratizing the ability to wage information warfare and extending the opportunity to more entities seeking to disrupt our democratic process. The consequences of such a scenario are far-reaching: online platforms could be inundated with fabricated content, including AI-generated images, videos, and text.

What are the potential upsides of AI in election administration?

At the same time, AI presents valuable opportunities to enhance public trust in elections. For example, the rapidly developing suite of AI-enabled translation tools can be a valuable asset for election administrators looking to disseminate information to many communities simultaneously. In areas like New York or Los Angeles, where voters collectively speak dozens or even hundreds of languages, this presents a real opportunity if utilized responsibly.
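As an illustration, here is a minimal sketch of how such a tool might be wired up using the open-source Hugging Face transformers library. The model named is one freely available English-to-Spanish example; responsible use would mean choosing a model per target language and having bilingual staff review every output before it reaches voters.

```python
# A minimal sketch of AI-assisted translation of an election notice.
# The model shown (Helsinki-NLP/opus-mt-en-es) is one example among many;
# a human reviewer should check every translation before publication.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

notice = "Polls are open from 6 a.m. to 8 p.m. on Election Day."
print(translator(notice)[0]["translation_text"])
```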

Likewise, AI can be used to sift through vast troves of voter data and identify patterns for election administrators. Multi-state programs like the Electronic Registration Information Center (ERIC) have long sought to identify inconsistencies in voter rolls in order to eliminate potential fraud. It isn’t hard to imagine a new and improved version of these systems that enlists a trustworthy AI to conduct this vital work more quickly, more nimbly, and more accurately.
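To give a flavor of the underlying task, here is a minimal sketch, with invented records and a hypothetical likely_same_voter helper, of the kind of fuzzy record matching such a system performs. Production systems like ERIC compare many more fields, use far more sophisticated matching, and keep humans in the loop.

```python
from difflib import SequenceMatcher

# Two voter records from different state rolls (illustrative data only).
record_a = {"name": "John Q. Smith", "dob": "1980-03-14", "state": "KY"}
record_b = {"name": "Jon Q. Smith", "dob": "1980-03-14", "state": "OH"}

def name_similarity(a: str, b: str) -> float:
    """Fuzzy name match in [0, 1] using difflib's sequence ratio."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def likely_same_voter(a: dict, b: dict, threshold: float = 0.85) -> bool:
    """Flag records sharing a birth date with very similar names.
    A real system would weigh many more fields and defer to human review."""
    return a["dob"] == b["dob"] and name_similarity(a["name"], b["name"]) >= threshold

if likely_same_voter(record_a, record_b):
    print("Possible duplicate registration; flag for review.")
```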

And AI can be used to sniff out AI. Just as these models can be perniciously used to churn out misleading content, AI systems can be trained to recognize and flag election-related narratives that are willfully intended to mislead. In 2016, human moderators shouldered much of the burden of identifying false information online. Going forward, that work could be outsourced to automated models better equipped to handle the task at scale.
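As a bare-bones illustration of the idea, the sketch below trains a toy text classifier with the open-source scikit-learn library. The handful of labeled posts is invented for demonstration; a real moderation model would train on a large, carefully curated corpus and route its flags to human reviewers rather than acting on its own.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny invented dataset: 0 = benign, 1 = misleading (toy labels only).
posts = [
    "Polling places are open until 8 p.m. tonight",
    "Find your polling location on the county website",
    "Members of the other party must vote on Wednesday",
    "Voting machines secretly delete ballots after you leave",
]
labels = [0, 0, 1, 1]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

new_post = "Voting machines secretly delete your ballots"
print(model.predict([new_post])[0])  # expected: 1, i.e., flag for human review
```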

What role should executive action play?

Addressing these complex challenges demands a comprehensive government approach. One suggested course of action is to create a lead agency overseeing AI-related matters in the context of elections. The Cybersecurity and Infrastructure Security Agency could play a pivotal role by equipping election offices with resources tailored to combat disinformation campaigns fueled by deep-fakes and other AI-generated content. But a dedicated agency or sub-agency may be necessary to tackle the problem at scale.

Additionally, the Federal Election Commission should extend its current political advertising disclosure requirements to encompass the full spectrum of online communications allowed under federal law, including AI-generated content disseminated by paid influencers and the promotion of such content through paid online channels. We have already witnessed a Ron DeSantis-affiliated PAC use AI voice replication to have Donald Trump “read aloud” one of his own Truth Social posts. That instance did not involve misinformation, but it highlights where and how the technology could be used.

What training and infrastructure upgrades do our elections need?

The nation’s election administration infrastructure—the back-end technology that powers our elections—is in many places outmoded. Coping with modern-day threats requires updating election systems to meet the challenges posed by AI and other next-generation risks. The update to the nationwide Voluntary Voting System Guidelines currently in progress, as mandated by federal law, is a positive step in this direction. But systems cannot be allowed to atrophy over time.

Election workers are the most important asset in our voting system. The men and women who run and administer elections do so under immense pressure, often working with shoestring budgets and small staffs. It is critical that local election administration agencies equip personnel with adequate, cutting-edge training, software, and resources to keep pace with threats as they evolve. Our response to the challenges outlined above will only be as strong as we choose to make it, and our investment must reflect the gravity of the threat.

Trey Grayson serves as Advisory Board Co-Chair of the Secure Elections Project. He is a former Kentucky Secretary of State, and former Chair of the Republican Association of Secretaries of State.