November 24, 2024

AI’s Fingerprints Were All Over the Election


The images and videos were hard to miss in the days leading up to November 5. There was Donald Trump with the chiseled musculature of Superman, hovering over a row of skyscrapers. Trump and Kamala Harris squaring off in bright-red uniforms (McDonald’s logo for Trump, hammer-and-sickle insignia for Harris). People had clearly used AI to create these—an effort to show support for their candidate or to troll their opponents. But the images didn’t stop after Trump won. The day after polls closed, the Statue of Liberty wept into her hands as a drizzle fell around her. Trump and Elon Musk, in space suits, stood on the surface of Mars; hours later, Trump appeared at the door of the White House, waving goodbye to Harris as she walked away, clutching a cardboard box filled with flags.

Every federal election since at least 2018 has been plagued with fears about potential disruptions from AI. Perhaps a computer-generated recording of Joe Biden would swing a key county, or doctored footage of a poll worker burning ballots would ignite riots. Those predictions never materialized, but many of them were also made before the arrival of ChatGPT, DALL-E, and the broader category of advanced, cheap, and easy-to-use generative-AI models—all of which seemed much more threatening than anything that had come before. Not even a year after ChatGPT was released in late 2022, generative-AI programs were used to target Trump, Emmanuel Macron, Biden, and other political leaders. In May 2023, an AI-generated image of smoke billowing out of the Pentagon caused a brief dip in the U.S. stock market. Weeks later, Ron DeSantis’s presidential primary campaign appeared to have used the technology to make an advertisement.

And so a trio of political scientists at Purdue University decided to get a head start on tracking how generative AI might influence the 2024 election cycle. In June 2023, Christina Walker, Daniel Schiff, and Kaylyn Jackson Schiff started to track political AI-generated images and videos in the United States. Their work is focused on two particular categories: deepfakes, referring to media made with AI, and “cheapfakes,” which are produced with more traditional editing software, such as Photoshop. Now, more than a week after polls closed, their database, along with the work of other researchers, paints a surprising picture of how AI appears to have actually influenced the election—one that is far more complicated than previous fears suggested.

The most visible generated media this election have not exactly planted convincing false narratives or otherwise deceived American citizens. Instead, AI-generated media have been used for transparent propaganda, satire, and emotional outpourings: Trump, wading in a lake, clutches a duck and a cat (“Protect our ducks and kittens in Ohio!”); Harris, enrobed in a coppery blue, struts before the Statue of Liberty and raises a matching torch. In August, Trump posted an AI-generated video of himself and Musk doing a synchronized TikTok dance; a follower responded with an AI image of the duo riding a dragon. The pictures were fake, sure, but they weren’t feigning otherwise. In their analysis of election-week AI imagery, the Purdue team found that such posts were far more frequently intended for satire or entertainment than false information per se. Trump and Musk have shared political AI illustrations that got hundreds of millions of views. Brendan Nyhan, a political scientist at Dartmouth who studies the effects of misinformation, told me that the AI images he saw “were obviously AI-generated, and they were not being treated as literal truth or evidence of something. They were treated as visual illustrations of some larger point.” And this usage isn’t new: In the Purdue team’s entire database of fabricated political imagery, which includes hundreds of entries, satire and entertainment were the two most common goals.

That doesn’t mean these images and videos are merely playful or innocuous. Outrageous and false propaganda, after all, has long been an effective way to spread political messaging and rile up supporters. Some of history’s most effective propaganda campaigns have been built on images that simply project the strength of one leader or nation. Generative AI offers a low-cost and easy tool to produce huge amounts of tailored images that accomplish just this, heightening existing emotions and channeling them to specific ends.

These sorts of AI-generated cartoons and agitprop could well have swayed undecided minds, driven turnout, galvanized “Stop the Steal” plotting, or fueled harassment of election officials or racial minorities. An illustration of Trump in an orange jumpsuit emphasizes Trump’s criminal convictions and perceived unfitness for the office, while an image of Harris speaking to a sea of red flags, a giant hammer-and-sickle above the crowd, smears her as “woke” and a “Communist.” An edited image showing Harris dressed as Princess Leia kneeling before a voting machine and captioned “Help me, Dominion. You’re my only hope” (an altered version of a famous Star Wars line) stirs up conspiracy theories about election fraud. “Even though we’re noticing many deepfakes that seem silly, or just seem like simple political cartoons or memes, they might still have a big impact on what we think about politics,” Kaylyn Jackson Schiff told me. It’s easy to imagine someone’s thought process: That image of “Comrade Kamala” is AI-generated, sure, but she’s still a Communist. That video of people shredding ballots is animated, but they’re still shredding ballots. That’s a cartoon of Trump clutching a cat, but immigrants really are eating pets. Viewers, especially those already predisposed to find and believe extreme or inflammatory content, may be further radicalized and siloed. The especially photorealistic propaganda might even fool someone if reshared enough times, Walker told me.

There were, of course, also a number of fake images and videos that were intended to directly change people’s attitudes and behaviors. The FBI has identified several fake videos intended to cast doubt on election procedures, such as false footage of someone ripping up ballots in Pennsylvania. “Our foreign adversaries were clearly using AI” to push false stories, Lawrence Norden, the vice president of the Elections & Government Program at the Brennan Center for Justice, told me. He did not see any “super innovative use of AI,” but said the technology has augmented existing strategies, such as creating fake-news websites, stories, and social-media accounts, as well as helping plan and execute cyberattacks. But it will take months or years to fully parse the technology’s direct influence on 2024’s elections. Misinformation in local races is much harder to track, for example, because there is less of a spotlight on them. Deepfakes in encrypted group chats are also difficult to track, Norden said. Experts had also wondered whether the use of AI to create highly realistic, yet fake, videos showing voter fraud might have been deployed to discredit a Trump loss. This scenario has not yet been tested.

Although it appears that AI did not directly sway the results last week, the technology has eroded Americans’ overall ability to know or trust information and one another—not deceiving people into believing a particular thing so much as advancing a nationwide descent into believing nothing at all. A new analysis by the Institute for Strategic Dialogue of AI-generated media during the U.S. election cycle found that users on X, YouTube, and Reddit inaccurately assessed whether content was real roughly half the time, and more frequently thought authentic content was AI-generated than the other way around. With so much uncertainty, using AI to convince people of alternative facts seems like a waste of time—far more useful to exploit the technology to directly and forcefully send a motivated message. Perhaps that’s why, of the election-week AI-generated media the Purdue team analyzed, pro-Trump and anti-Kamala content was most common.

More than a week after Trump’s victory, the use of AI for satire, entertainment, and activism has not ceased. Musk, who will soon co-lead a new extragovernmental organization, routinely shares such content. The morning of November 6, Donald Trump Jr. put out a call for memes that was met with all manner of AI-generated images. Generative AI is changing the nature of evidence, yes, but also that of communication—providing a new, powerful medium through which to illustrate charged emotions and beliefs, broadcast them, and rally even more like-minded people. Instead of an all-caps thread, you can share a detailed and personalized visual effigy. These AI-generated images and videos are instantly legible and, by explicitly targeting emotions instead of information, obviate the need for falsification or critical thinking at all. No need to refute, or even consider, a differing view—just make an angry meme about it. No need to convince anyone of your adoration of J. D. Vance—just use AI to make him, literally, more attractive. Veracity is beside the point, which makes the technology perhaps the nation’s most salient mode of political expression. In a country where facts have gone from irrelevant to detestable, of course deepfakes—fake news made by deep-learning algorithms—don’t matter; to growing numbers of people, everything is fake but what they already know, or rather, feel.