2024 Is the Year of the Generative AI Election

Experts know that generative AI is poised to drastically change the information landscape, and problems that have long plagued tech platforms—like mis- and disinformation, scams, and hateful content—are likely to be amplified, despite the guardrails that companies say they’ve put in place.

There are a few ways to know whether something was made or manipulated using AI: People or campaigns may have confirmed its use; fact-checkers may have analyzed and debunked it as it circulated; or the AI content may be clearly used for something like satire. Sometimes, if we’re lucky, it’s watermarked, meaning it carries a marker indicating that it was generated or changed by AI. But the reality is that this likely accounts for only some of what’s already out there. Even our own dataset is almost certainly an undercount.
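If you’re curious what “watermarked” can mean in practice, one emerging standard is C2PA’s Content Credentials, which embeds a signed manifest in a file. Below is a minimal sketch in Python of what a first-pass check might look like. To be clear, this is a crude byte-level heuristic, not a real verifier (a proper tool cryptographically validates the manifest), and the file name is hypothetical. Most AI content carries no watermark at all, and markers can be stripped in transit.

```python
# Minimal sketch: look for hints of an embedded C2PA "Content Credentials"
# manifest in an image file. This is a crude heuristic, NOT verification:
# a real check cryptographically validates the manifest, and a missing
# marker proves nothing (most AI content is never watermarked, and
# markers can be stripped when files are re-encoded or screenshotted).
from pathlib import Path

def may_have_content_credentials(path: str) -> bool:
    data = Path(path).read_bytes()
    # C2PA manifests are stored in JUMBF metadata boxes; these byte
    # strings commonly appear when a manifest is embedded in the file.
    return b"c2pa" in data or b"jumb" in data

if __name__ == "__main__":
    # "suspect.jpg" is a hypothetical file name for illustration.
    print(may_have_content_credentials("suspect.jpg"))
```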

And that leads us to another issue: As British journalist Peter Pomerantsev has said, “When nothing is true, everything is possible.” In an information ecosystem where anything can be generative AI, it’s easy for politicians or public figures to say that something real is fake—what’s known as the “liar’s dividend.” That means people may be less likely to believe information even when it’s true. As for fact-checkers and journalists, many don’t have the tools readily available to assess whether something has been made or manipulated by AI. Whatever this year brings, it’s likely going to be only the tip of the iceberg.

But just because something is fake doesn’t mean it’s bad. Deepfakes have found a home in satire, chatbots can (sometimes) provide good information, and personalized campaign outreach can make people feel seen by their political representatives.

It’s a brave new world, but that’s why we’re tracking it.

The Chatroom

As part of our AI project, we’re asking readers to submit any instances of generative AI they encounter out in the wild this election year.

To get a better sense of how we’ll evaluate submissions (and the things we find ourselves), or to send one our way, check out this link. If you’re not sure whether something was made with generative AI or is just a run-of-the-mill cheapfake, send it anyway and we’ll look into it.

💬 Leave a comment below this article.

What Else We’re Reading

🔗 TikTok says it removed an influence campaign originating in China: TikTok said last week that it had taken down thousands of accounts linked to 15 Chinese influence campaigns on its platform. (The Washington Post)

🔗 Ramaswamy Urges BuzzFeed to Cut Jobs, Air More Conservative Voices: Vivek Ramaswamy, the former Republican presidential candidate, is now an activist investor in BuzzFeed. He wants the publication to court conservative readers and to say it “lied” in its reporting about Donald Trump and Covid, among other topics. (Bloomberg)

🔗 OpenAI Creates Oversight Board Featuring Sam Altman After Dissolving Safety Team: The new board will make recommendations about safety and security, and will have 90 days to “further develop OpenAI’s processes and safeguards,” according to the company’s blog. (Bloomberg)

The Download

One last thing! This week on the podcast, I spoke with our editor and host Leah Feiger about the AI elections project. Give it a listen!

In addition to talking about the new project (can you tell I’m excited?), Leah and I were joined by Nilesh Christopher, who has reported on the role of deepfakes in India’s elections for WIRED. The biggest takeaway: The Indian elections are wrapping up soon, and many of the country’s burgeoning generative AI companies are looking for new markets that might be interested in their tools—possibly even coming to an election near you.

That’s it for today. Thanks again for subscribing. You can get in touch with me via email and X.

Source image: Getty Images


