The Biden admin has no firm plan to call out domestic disinformation in the 2024 election

The Biden administration has no firm plans to alert the public about deepfakes or other false information during the 2024 election unless it is clearly coming from a foreign actor and poses a sufficiently grave threat, according to current and former officials.

Although cyber experts in and outside of government expect an onslaught of disinformation and deepfakes during this year’s election campaign, officials in the FBI and the Department of Homeland Security remain worried that if they weigh in, they will face accusations that they are attempting to tilt the election in favor of President Joe Biden’s re-election.

Lawmakers from both parties have urged the Biden administration to take a more assertive stance.

“I’m worried that you may be overly concerned with appearing partisan and that that will freeze you in terms of taking the actions that are necessary,” Sen. Angus King, a Maine independent who caucuses with the Democrats, told cybersecurity and intelligence officials at a hearing last month.

Image: A voter walks toward the entrance of the Boys and Girls Clubs of the Great Lakes Bay Region to cast their ballot in Bay City, Mich., during Election Day on Nov. 3, 2020. (Kaytie Boomer / The Bay City Times via AP file)

Sen. Marco Rubio, R-Fla., asked how the government would react to a deepfake video. “If this happens, who’s in charge of responding to it? Have we thought through the process of what do we do when one of these scenarios occurs?” he asked. “‘We just want you to know that video is not real.’ Who would be in charge of that?”

A senior U.S. official familiar with government deliberations said federal law enforcement agencies, particularly the FBI, are reluctant to call out disinformation with a domestic origin.

The FBI will investigate possible election law violations, the official said, but does not feel equipped to make public statements about disinformation or deepfakes generated by Americans.

“The FBI is not in the truth detection business,” the official said.

In interagency meetings about the issue, the official said, it’s clear that the Biden administration has no specific plan for dealing with domestic election disinformation, whether it’s a deepfake impersonating a candidate or a false report of violence or closed polling places that could dissuade people from voting.

In a statement to NBC News, the FBI acknowledged that even when it investigates possible criminal violations involving false information, the bureau is unlikely to immediately flag what’s false.

“The FBI can and does investigate allegations of Americans spreading disinformation that are intended to deny or undermine someone’s ability to vote,” the statement said. “The FBI takes these allegations seriously, and that requires that we follow logical investigative steps to determine if there is a violation of federal law. Those investigative steps cannot be completed ‘in the moment.’”

The bureau added that it will “work closely with state and local election officials to share information in real time. But since elections are administered at the state level, the FBI would defer to state-level election officials about their respective plans to address disinformation in the moment.”

A senior official at the Cybersecurity and Infrastructure Security Agency (CISA), the federal entity charged with protecting election infrastructure, said state and local election agencies were best placed to inform the public about false information spread by other Americans, but the official would not rule out the possibility that CISA might issue a public warning if necessary.

“I won’t say that we wouldn’t speak publicly about something. I would not say that categorically. No, I think it just depends,” the official said.

“Is this something that’s specific to one state or jurisdiction? Is this something that’s happening in multiple states? Is this something that’s actually impacting election infrastructure?” the official said.

CISA has focused on helping educate the public and train state and local election officials about the tactics employed in disinformation campaigns, the official said.

“At CISA, we certainly have not stopped prioritizing this as a threat vector that we take very seriously for this election cycle,” the official said.

The late-breaking deepfake

Robert Weissman, president of Public Citizen, a pro-democracy group that has been urging states to criminalize political deepfakes, said that the current federal approach is a recipe for chaos.

The biggest fear, he said, is a late-breaking deepfake that reflects poorly on a candidate and could influence the outcome of an election. Right now, government bodies — from county election boards to federal authorities — have no plans to respond to such a development, he said.

Image: Joe Biden campaigns in western Pennsylvania one day before Election Day. (Drew Angerer / Getty Images file)

“If political operatives have a tool they can use and it’s legal, even if it’s unethical, they are pretty likely to use it,” Weissman said. “We are foolish if we expect anything other than a tsunami of deepfakes.”

Disinformation designed to keep people from voting is illegal, but deepfakes mischaracterizing the actions of candidates are not prohibited under federal law, nor under the laws of 30 states.

The Department of Homeland Security has warned election officials across the country that generative artificial intelligence could allow bad actors, foreign or domestic, to impersonate election officials and spread false information, something that has happened in other countries in recent months.

At a recent meeting with tech executives and nonpartisan watchdog groups, a senior federal cybersecurity official acknowledged that fake videos or audio clips generated by AI pose a risk in an election year. But the official said CISA would not try to intervene to warn the public, because of the polarized political climate.

Intelligence agencies say they are closely tracking false information spread by foreign adversaries. Officials said recently that they are prepared, if necessary, to issue a public statement about disinformation if its author is clearly a foreign actor and the threat is sufficiently “severe” that it could jeopardize the outcome of the election. But they have not defined what “severe” means.

At a Senate Intelligence Committee hearing last month on the disinformation threat, senators said the government needed a more coherent plan for handling a potentially damaging deepfake during the election campaign.

Sen. Mark Warner, D-Va., the committee’s chair, told NBC News that the threat posed by generative AI is “serious and rampant” and that the federal government needed to be ready to respond.

“While I continue to push tech companies to do more to curb nefarious AI content of all varieties, I think it’s appropriate for the federal government to have a plan in place to alert the public when a serious threat comes from a foreign adversary,” he said. “In domestic contexts, state and federal law enforcement may be positioned to determine if election-related disinformation constitutes criminal activity, such as voter suppression.”

How other countries respond

Unlike the U.S. government, Canada has published an explanation of its decision-making protocol for how Ottawa will respond to an incident that could put an election at risk. The government website promises to “communicate clearly, transparently and impartially with Canadians during an election in the event of an incident or a series of incidents that threatened the election’s integrity.”

Some other democracies, including Taiwan, France and Sweden, have adopted a more proactive approach to disinformation, flagging false reports or collaborating closely with nonpartisan groups that fact-check and try to educate the public, experts said.


Sweden, for example, set up a special government agency in 2022 to combat disinformation — prompted by Russia’s information warfare — and has tried to educate the public about what to look out for and how to recognize attempts to spread falsehoods.

France has set up a similar agency, the Vigilance and Protection Service against Foreign Digital Interference, known as Viginum, which regularly issues detailed public reports on Russian-backed propaganda and falsehoods, describing fake government websites, news sites and social media accounts.

The European Union, following the lead of France and other member states, has set up a center for sharing information and research between government agencies and nonprofit civil society groups that track the issue.

But those countries are not plagued by the same degree of political division as the United States, according to David Salvo, a former U.S. diplomat and now managing director of the Alliance for Securing Democracy at the German Marshall Fund think tank.

“It’s tough, because the best practices tend to be in places where either trust in government is a hell of a lot higher than it is here,” he said.

Discord derailed U.S. effort

After the 2016 election in which Russia spread disinformation through social media, U.S. government agencies began working with social media companies and researchers to help identify potentially violent or volatile content. But a federal court ruling in 2023 discouraged federal agencies from even communicating with social media platforms about content.

The Supreme Court is due to take up the case as soon as this week, and if the lower court ruling is overturned, more regular communication between federal agencies and the tech firms could resume.

Early in President Joe Biden’s term, the administration sought to tackle the danger posed by false information circulating on social media, with DHS setting up the Disinformation Governance Board, a working group led by an expert from a nonpartisan Washington think tank. But Republican lawmakers denounced the board as a threat to free speech with an overly vague role and threatened to cut off its funding.

Under political pressure, DHS shut the board down in August 2022. Nina Jankowicz, the expert who ran it, said she and her family received numerous death threats during her brief tenure.

Even informal cooperation between the federal government and private nonprofit groups is more politically fraught in the U.S. due to the polarized landscape, experts say.

Nonpartisan organizations potentially face accusations of partisan bias if they collaborate or share information with a federal or state government agency, and many have faced allegations that they are stifling freedom of speech by merely tracking online disinformation.

Threats of lawsuits and intense political attacks from pro-Trump Republicans have led many organizations and universities to pull back from disinformation research in recent years. Stanford University’s Internet Observatory, which had produced influential research on how false information moved through social media platforms during elections, recently laid off most of its staff after a spate of legal challenges and political criticism.

The university on Monday denied that it was shutting down the center because of outside political pressure. The center does, however, “face funding challenges as its founding grants will soon be exhausted,” it said in a statement.

Given the federal government’s reluctance to speak publicly about disinformation, state and local election officials likely will be in the spotlight during the election, having to make decisions quickly about whether to issue a public warning. Some already have turned to a coalition of nonprofit organizations that have hired technical experts to help detect AI-generated deepfakes and provide accurate information about voting.

Two days before New Hampshire’s presidential primary in January, the state attorney general’s office put out a statement warning the public about AI-produced robocalls using fake audio clips that sounded like Biden telling voters not to go to the polls. New Hampshire’s secretary of state then spoke to news outlets to provide accurate information about voting.

This article was originally published on NBCNews.com
