The Year of the AI Election Wasn’t Quite What Everyone Expected


Many pieces of AI-generated content were used to express support for or fandom of certain candidates. For instance, an AI-generated video of Donald Trump and Elon Musk dancing to the Bee Gees song “Stayin’ Alive” was shared millions of times on social media, including by Senator Mike Lee, a Utah Republican.

“It’s all about social signaling. It’s all the reasons why people share this stuff. It’s not AI. You’re seeing the effects of a polarized electorate,” says Bruce Schneier, a public interest technologist and lecturer at the Harvard Kennedy School. “It’s not like we had perfect elections throughout our history and now suddenly there’s AI and it’s all misinformation.”

But make no mistake: misleading deepfakes did spread during this election. In the days before Bangladesh’s elections, for instance, deepfakes circulated online encouraging supporters of one of the country’s political parties to boycott the vote. Sam Gregory is program director of the nonprofit Witness, which helps people use technology to support human rights and runs a rapid-response deepfake detection program for civil society organizations and journalists. He says his team saw an increase in deepfake cases this year.

“In multiple election contexts,” he says, “there have been examples of both real deceptive or confusing use of synthetic media in audio, video, and image format that have puzzled journalists or have not been possible for them to fully verify or challenge.” What this reveals, he says, is that the tools and systems currently in place to detect AI-generated media are still lagging behind the pace at which the technology is developing. In places outside the US and Western Europe, these detection tools are even less reliable.

“Fortunately, AI in deceptive ways was not used at scale in most elections or in pivotal ways, but it’s very clear that there’s a gap in the detection tools and access to them for the people who need it the most,” says Gregory. “This is not the time for complacency.”

The very existence of synthetic media, he says, has meant that politicians can allege that real media is fake—a phenomenon known as the “liar’s dividend.” In August, Donald Trump alleged that images showing large crowds turning out to rallies for Vice President Kamala Harris were AI-generated. (They weren’t.) Gregory says that in an analysis of all the reports to Witness’s deepfake rapid-response force, about a third of the cases involved politicians using AI to deny evidence of a real event—many of them leaked conversations.
