Klobuchar presses Congress to regulate use of AI in elections 

28 September 2023

WASHINGTON — Sen. Amy Klobuchar shared the story of an AI-generated post on Twitter that featured someone who looked like Democratic Sen. Elizabeth Warren saying that people from the opposing political party should not be allowed to vote.

“She never said that, but it looked like her,” Klobuchar said.

Klobuchar cited the fraudulent post as an example of how artificial intelligence can disrupt elections as the new technology becomes more involved in political speech.

She also cited another social media post that featured a photo of former President Trump hugging Anthony Fauci, the former head of the National Institute of Allergy and Infectious Diseases. The post went viral but was an AI-generated “deep fake.”

On Wednesday, Klobuchar, the chairwoman of the Senate Rules Committee, held a hearing to determine what Congress can do to prevent artificial intelligence from undermining elections and attacking the democratic process, especially as the contest for the next U.S. president heats up.

“Like any emerging technology, AI comes with risk, and we’d better be ready,” she said.

Earlier this month, Klobuchar introduced the Protect Elections from Deceptive AI Act, bipartisan legislation that would prohibit the distribution of materially deceptive AI-generated audio, images, or video relating to federal candidates in political ads or fundraising efforts. It would require that such content be taken down and would allow its victims to seek damages in federal court.

Klobuchar is also the sponsor of another bill, the REAL Political Ads Act, which would require a disclaimer on political ads that use images or video generated by artificial intelligence. 

But, in finding the best way to combat the increasing threat posed by AI to American elections, Klobuchar must fend off arguments that new regulations would erode First Amendment rights to free speech.

“As we learn more about this technology, we must also keep in mind the important protections of free speech in this country. These protections are needed to preserve our democracy,” said Sen. Deb Fischer of Nebraska, the top Republican on the Rules Committee.  

A witness at Wednesday’s hearing was Minnesota Secretary of State Steve Simon, who said “we are talking about an old problem – election misinformation and disinformation – that can now more easily be amplified.”

Minnesota Secretary of State Steve Simon testifying during the Senate Rules Committee hearing on AI and elections. (Photo: Graeme Sloan/Sipa USA)

Simon said some disinformation could come not from an attempt to willfully hurt a political opponent but simply through “an innocent circumstance.”

He cited an experiment by Max Hailperin, a retired computer science professor in Minnesota, who had asked ChatGPT questions about Minnesota’s election laws. The program gave wrong answers to several of them.

“Intentional misdirection? Probably not,” Simon said. “Still, it is a danger to voters who may get bad information about critical election rules.”

Simon also said that “in the wrong hands, AI could be used to misdirect intentionally – and in ways that are more advanced than ever.”

Minnesota’s election chief also said the state has taken “proactive steps” to fight this latest danger to democracy. He said his office is putting out accurate information while taking steps to debunk misinformation quickly.

Simon said his agency has worked with local and federal allies to monitor and respond to inaccuracies “that could morph into conspiracy theories on election-related topics” and urged voters to acquire “media literacy” and rely on “trusted sources” of information provided by the National Association of Secretaries of State.

Simon said federal legislation to regulate AI would also be helpful, as would federal dollars to help states shore up protections.

Yet those who say Congress should be cautious about putting guardrails on artificial intelligence argue that federal elections have never been free from “deep fakes” and other falsehoods and mistaken beliefs.

“While powerful tools can be misused, generative AI tools seem unlikely to materially affect election results, because political speech already uses AI tools and has for years,” testified Neil Chilson, a research fellow at the Center for Growth and Opportunity at Utah State University.

Chilson said AI will lower the cost of generating creative content, and he predicted “we’ll see more speech, including political speech.”

“But we should not expect a shift in the truth-to-lies ratio,” Chilson said. “Repackaging past proposals to control and censor political speech as ‘AI regulation’ will not solve misinformation in ads and will chill political speech.”

Meanwhile, Trevor Potter, the president of the Campaign Legal Center and a former Republican member of the Federal Election Commission, said congressional legislation should be “a carefully crafted, narrow statute that would withstand Supreme Court scrutiny.”

Klobuchar appeared to have a powerful ally in her effort to push forward AI legislation. At the hearing, Senate Majority Leader Chuck Schumer, D-New York, said he backed the effort to protect the electoral system – and said he is concerned “foreign adversaries” would use AI to undermine U.S. democracy. Schumer, however, said the effort should be bipartisan.

“Once damaging information is sent to 100 million homes, it’s hard to put that genie back in the bottle,” Schumer said.

Meanwhile, Klobuchar said she hoped the Senate would approve legislation by the end of the year. 

“The election is upon us, these things are happening now,” Klobuchar said.
