New York’s proposed political deepfake ban suppresses speech and violates the First Amendment

Commentary


Libel and slander laws already exist and can be used by lawmakers worried about how deepfakes could harm their reputations or spread misinformation.

New York is proposing a ban on political deepfakes, joining more than 30 states that have attempted to limit them over the past year. States have been pushing for these bans to combat the spread of misinformation that could harm election integrity. While the effort to combat false information about elections is well-intentioned, the broad language of New York’s Assembly Bill A235 suppresses First Amendment-protected political speech, including parody and satire. Instead of creating new, overreaching legislation to address potential harms, New York legislators should look to libel and slander laws already in place to address the risks associated with this technology.

Synthetic media, or “deepfakes,” are videos or audio recordings that appear to depict real people or events but are entirely produced by generative artificial intelligence (AI). Unlike other AI systems, generative AI specializes in producing original content, including images, text, and musical compositions. Deepfakes are created using machine-learning techniques, primarily deep learning, that analyze and synthesize visual and audio data. The technology has advanced rapidly in recent years, becoming more accessible and more realistic. A common worry is that people may struggle to differentiate between real and fake.

New York’s proposed bill, currently in committee in the state Assembly, strictly targets political deepfakes, specifically depictions of political officials. The state hopes to join 20 other states that have passed legislation banning depictions of elected officials and candidates prior to an election, including the 15 states that did so in 2024. Under Assembly Bill A235, generative AI system owners, licensees, and operators would have to “implement a reasonable method” to prevent users from creating deepfakes of New York public officials or candidates. Once a candidate or public official notifies them that they do not want to be depicted, the owners have 60 days to prohibit depictions of that official. Owners must also create a notification process for public officials that is “easy to access, understand, complete and send and that such method provides clear updates to the sender on the status of their request in a timely manner.”

Most, if not all, public officials and candidates will likely opt out of being depicted in deepfakes. This potential ban on political deepfakes violates the First Amendment right to political speech, traditionally one of the most constitutionally protected forms of individual expression in the United States. The bill would prevent citizens from expressing their political opinions through deepfake parody or satire whenever the subject they wish to depict objects to being portrayed.

First Amendment concerns were a point of pushback against state political deepfake bans in 2024. In June 2024, Republican Louisiana Gov. Jeff Landry cited the First Amendment when he vetoed House Bill 154, which would have banned political deepfakes of candidates and elected officials. In October 2024, a California judge issued a preliminary injunction against Assembly Bill 2839, which would have prohibited all AI depictions of candidates, including those used for satire. In his ruling, the judge stated that California’s bill violates the First Amendment and serves “as a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.”

Beyond the constitutional violations, New York would also have difficulty enforcing these provisions. The internet does not have borders, and information flows freely around the globe, transcending physical and political boundaries. This makes it effectively impossible for a state government to ban certain media within its jurisdiction. People anywhere, including foreign actors, can use a virtual private network (VPN), which routes their traffic through a server in another location and masks their IP address, making them difficult to trace. With this technology, they can easily continue to make and spread political deepfakes in a state where doing so is outlawed. Before New York passes any political deepfake ban, legislators should consider the logistics of enforcing the law when someone outside the state disseminates a deepfake to individuals within it.

If New York lawmakers are worried about deepfakes that could harm their reputations or spread misinformation, they could always look to libel and slander laws already on the books. If politicians believe someone has written (libel) or spoken (slander) false and defamatory content about them with the intent to harm their reputations, they can file a lawsuit and seek damages. Someone may well create a political deepfake to make an official look bad, but there is no need to ban an entire technology when libel and slander laws have existed for decades.

Political deepfakes can be both a tool for free expression and a vector for misinformation. The New York legislature’s proposed solution would effectively ban all political deepfakes, which suppresses First Amendment-protected speech and leaves no room for parody or satire. Instead, New York should counter potential misinformation by being more transparent with voters. Politicians and candidates hold a unique public platform from which they can provide information to voters, clarify misconceptions about themselves, and remain easily accessible to the voting population. And for those who have trouble deciphering whether a message is real, there are emerging services and products that help users identify “fake” generative AI content.

Prior to the 2024 presidential election, the Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA) released a public statement warning about the threat of political deepfakes. They recommended that voters “critically evaluate the sources of the information they consume and to seek out reliable and verified information from trusted sources, such as state and local election officials.” By providing more direct transparency to voters, New York can combat any potential misinformation while allowing citizens to express themselves using whatever technological means they choose.