California Laws Cracking Down on Election Deepfakes by AI Face Legal Challenges

New Laws Target AI-Generated Election Deepfakes

California has implemented some of the strictest laws in the U.S. to combat the spread of AI-generated election disinformation ahead of the 2024 election. Governor Gavin Newsom signed three landmark bills at an AI conference in San Francisco, making it illegal to use artificial intelligence to create and circulate false images and videos in political ads close to Election Day.

The legislation aims to address the rising threat of “deepfakes,” realistic but fabricated images and videos created with AI. These laws impose penalties for creating and distributing false election-related content, with one law allowing individuals to sue for damages. They specifically target misleading depictions of candidates, election workers, and voting machines.

Legal Challenges and Free Speech Concerns

Despite the intent to safeguard election integrity, two of the three laws are already facing legal challenges. A lawsuit filed in Sacramento on Tuesday claims the new legislation violates free speech rights. The complainant, who previously created parody videos with altered audio of Vice President Kamala Harris, argues that the laws allow anyone to take legal action against content they find objectionable.

The lawsuit, backed by attorney Theodore Frank, contends that the new laws stretch existing law “beyond recognition” and are designed to “force social media companies to censor” users. Other states, including Alabama, have passed similar laws without drawing the controversy now emerging in California.

Governor Newsom Defends the Legislation

In response to the lawsuit, Governor Newsom’s office clarified that the laws do not prohibit satire or parody but require that AI-generated content be clearly labeled as altered. Newsom spokesperson Izzy Gardon stated, “This new disclosure law for election misinformation isn’t any more onerous than laws already passed in other states, including Alabama.”

One of the laws requires large online platforms, such as X (formerly Twitter), to remove AI-generated election disinformation starting next year. Elon Musk, the platform’s owner, criticized the legislation, amplifying an AI-altered video of Harris and claiming the new law infringes on the First Amendment. Musk’s involvement has intensified the debate, with critics calling the new laws unconstitutional.

Election Disinformation Threat Grows Nationwide

California is not alone in its efforts to combat election disinformation. Lawmakers in more than a dozen states have proposed similar measures, driven by the emergence of AI tools that can quickly generate and spread fake content. Experts say the speed at which AI can produce convincing deepfakes poses a significant threat to the credibility of elections.

One of the three laws signed by Newsom bans election deepfakes in the 120 days before Election Day and for 60 days afterward. The law also allows courts to block the distribution of false materials. Violators may face civil penalties, though the legislation exempts parody and satire content that is clearly labeled as such.