September 11, 2024

America Is Primed for an AI Election Backlash

Illustration of an eye made out of blue and red ballots

During last night’s presidential debate, Donald Trump once again baselessly insisted that the only reason he lost in 2020 was coordinated fraud. “Our elections are bad,” Trump declared—gesturing to the possibility that, should he lose in November, he will again contest the results.

After every presidential election nowadays, roughly half the nation is in disbelief at the outcome—and many, in turn, search for excuses. Some of those claims are outright fabricated, such as Republican cries that 2020 was “stolen,” which culminated in the riot at the Capitol on January 6. Others are rooted in facts but blown out of proportion, such as Democrats’ outrage over Russian propaganda and the abject failure of Facebook’s content moderation in 2016. Come this November, the malcontents will need targets for their ire—and either side could find an alluring new scapegoat in generative AI.

Over the past several months, multiple polls have shown that large swaths of Americans fear that AI will be used to sway the election. In a survey conducted in April by researchers at Elon University, 78 percent of participants said they believed AI would be used to affect the presidential election by running fake social-media accounts, generating misinformation, or persuading people not to vote. More than half thought all three were at least somewhat likely to happen. Research conducted by academics in March found that half of Americans think AI will make elections worse. Another poll from last fall found that 74 percent of Americans were worried that deepfakes would be used to manipulate public opinion. These worries make sense: Articles and government notices warning that AI could threaten election security in 2024 are legion.

There are, to be clear, very real reasons to worry that generative AI could influence voters, as I have written: Chatbots regularly assert incorrect but believable claims with confidence, and AI-generated photos and videos can be challenging to detect immediately. The technology could be used to manipulate people’s beliefs, impersonate candidates, or spread disenfranchising false information about how to vote. An AI robocall has already been used to try to dissuade people from voting in the New Hampshire primary. And an AI-generated image of Taylor Swift appearing to endorse Trump helped prompt her to endorse Kamala Harris right after last night’s debate.

Politicians and public figures have begun to invoke AI-generated disinformation, legitimately and not, as a way to brush off criticism, disparage opponents, and stoke the culture wars. Democratic Representative Shontel Brown recently introduced legislation to safeguard elections from AI, stating that “deceptive AI-generated content is a threat to elections, voters, and our democracy.” Others have been more inflammatory, if not fantastical: Trump has falsely claimed that images of a Harris rally were AI-generated, and large tech companies have more broadly been subject to his petulance: He recently called Google “a Crooked, Election Interference Machine.” Roger Stone, an architect of Trump’s efforts to overturn the 2020 election, has denounced allegedly incriminating audio recordings of him as “AI manipulation.” Right-wing concerns about “woke AI” have proliferated amid claims that tech companies are preventing their bots from expressing conservative viewpoints; Elon Musk created a whole AI start-up in part to make an “uncensored” chatbot, echoing how he purchased Twitter in the name of free speech but functionally to protect far-right accounts.

The seeds of an AI election backlash were sown even before this election. The process started in the late 2010s, when fears of a deepfake apocalypse began to take hold, or perhaps even earlier, when Americans finally noticed the rapid spread of mis- and disinformation on social media. But if AI actually becomes a postelection scapegoat, it likely won’t be because the technology singlehandedly determined the results. In 2016, the Facebook–Cambridge Analytica scandal was real, but there are plenty of other reasons Hillary Clinton lost. With AI, fact and fiction about election tampering may be hard to separate for people of all political persuasions. Evidence that generative AI turned people away from polling booths or influenced their political beliefs, in favor of either candidate, may well emerge. OpenAI says it has already shut down a covert Iranian operation that used ChatGPT to write content about the 2024 election, and the Department of Justice announced last week that it had disrupted a Russian campaign to influence U.S. elections that also deployed AI-generated content to spread pro-Kremlin narratives about Ukraine.

Appropriate and legitimate applications of AI to converse with and persuade potential voters—such as automatically translating a campaign message into dozens of different languages—will be mixed up with less well-intentioned uses. All of it could be appropriated as evidence of wrongdoing at scales large and small. Already, the GOP, seizing on Mark Zuckerberg’s recent statement to Congress that Meta suppressed certain content about the pandemic in response to “government pressure,” is stoking claims that tech companies and the government have conspired to control the news cycle or even tried to “rig” the 2020 election.

Generative AI has so far not upended society so much as accelerated its existing dysfunctions. Concerns about AI products, which many members of both major parties seem to share, might simply rip the nation further apart, much as disinformation on Facebook reshaped both American political discourse and the company’s trajectory after 2016. As with many claims that past elections were fraudulent, the future and effects of AI will be decided not just by computer code, laws, and facts, but also by millions of people’s emotions.