Introduction
The rapid advancement of artificial intelligence (AI) has fueled a recent rise in “deepfakes,” in which AI is used to manipulate or fabricate audio, video, or images with striking realism. The lifelike quality of deepfake technology has raised fears of misuse, such as damaging reputations through deliberate misrepresentation or spreading misinformation online.
Thirty-nine states have passed laws regulating the spread of intimate or erotic deepfakes, including bans on the creation and distribution of fabricated images or videos depicting child sexual abuse material (CSAM) and revenge porn. Revenge porn, or nonconsensual pornography, refers to sexual or pornographic images of individuals distributed without their consent. These laws expand currently existing statutes and reiterate that such activities remain illegal when carried out with this new technology.
In addition, more than 30 states have proposed laws to regulate political deepfakes, which would restrict certain fabricated depictions of political candidates or officeholders. These proposals have different goals from restrictions on sexual deepfakes: they aim to prevent the creation and spread of images that may deceive voters, spread misinformation, and potentially influence elections.
Most of these state laws allow political deepfakes so long as they include clear disclosures or watermarks identifying them as synthetic media. Deepfakes of candidates created or distributed before an election without these disclosures are outlawed. The watermark approach reflects a growing consensus that disclosure requirements are a key tool for combating malicious political deepfakes.
While these proposals may be well intentioned, restricting political deepfakes risks limiting political speech that is protected by the First Amendment. Many state laws are intended to combat political deepfakes that attempt to deceive voters, but such regulations may end up targeting deepfakes meant as parody or satire. Not all deepfakes are created with the intent to deceive. Parody and satire often use exaggerated or fabricated imagery to critique public figures or highlight social issues, and the line between humor and deception can be highly subjective. States may attempt to ban satirical political deepfakes that are realistic enough to potentially mislead some viewers, regardless of the creator’s actual purpose. Ambiguity about which political deepfakes a law regulates can create a chilling effect, deterring artists, comedians, and political commentators from engaging in creative expression out of fear that their work could be mischaracterized as deceptive and subject to legal action.
It is crucial that policymakers approach this issue with caution. Deepfakes can be used as a form of self-expression, parody, and satire, all of which are protected speech under the First Amendment. Regulatory responses must not inadvertently infringe on free speech. Rather than rushing to impose restrictions, policymakers should focus on an approach that recognizes the protections already provided by libel and slander laws and encourages transparency and accountability without undermining fundamental rights.
This policy brief explores the potential dangers of deepfakes while advocating for solutions that prioritize technological advancement, self-regulation, and public education over government intervention. A nuanced response will allow society to address the challenges of deepfakes while preserving the benefits of creative and expressive digital technologies.
Deepfakes, AI, and Existing Laws
By Richard Sill, Technology Policy Fellow