AI Deepfakes & Synthetic Media Legal FAQ (2026)

Criminal Penalties, Right of Publicity, Platform Liability & Your Legal Rights Against Deepfake Exploitation

AI-powered deepfakes and synthetic media have created unprecedented legal challenges for individuals, businesses, and society. From non-consensual intimate imagery to election disinformation, the ability to generate hyper-realistic fake audio, video, and images raises critical questions about consent, liability, and free expression. This FAQ covers the current legal landscape for AI deepfakes in 2026, including California's landmark statutes, proposed federal legislation, FTC enforcement, criminal penalties, platform responsibilities, and practical steps for victims seeking legal remedies.

Key Deepfake Laws Comparison (2026)

| Law / Proposal | Scope | Remedy Type | Key Provision | Status |
| --- | --- | --- | --- | --- |
| CA AB 602 | Non-consensual intimate deepfakes | Civil | Private right of action + damages | Enacted 2019 |
| CA AB 730 | Election deepfakes (60-day window) | Civil + Injunctive | Pre-election distribution ban | Enacted 2019 |
| Cal. Civ. Code 3344 | Right of publicity (commercial use) | Civil | Consent required for likeness use | Enacted (amended) |
| NO FAKES Act | All unauthorized AI replicas | Civil + Platform liability | Federal right in voice/likeness + $50K statutory damages | Proposed |
| DEFIANCE Act | Non-consensual intimate deepfakes | Civil (federal) | Up to $150K statutory damages + 10-year SOL | Proposed |
| DEEPFAKES Accountability Act | All synthetic media | Criminal + Civil | Mandatory disclosure/labeling requirement | Proposed |

Status current as of early 2026. Federal proposals are subject to amendment during the legislative process. Additional state laws exist in 40+ states. Always consult current statutes for the most up-to-date provisions.

Frequently Asked Questions

Q: What are AI deepfakes and why are they a legal concern?

AI deepfakes are synthetic media created using artificial intelligence, typically deep learning neural networks, to generate or manipulate images, video, or audio that convincingly depict real people saying or doing things they never actually said or did. The technology uses generative adversarial networks (GANs), diffusion models, and other machine learning techniques to produce hyper-realistic forgeries that can be nearly indistinguishable from authentic media to the average viewer.
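For readers who want a concrete sense of the adversarial training described above, the following is a minimal, purely illustrative PyTorch sketch of a GAN trained on toy one-dimensional data. The model sizes, learning rates, and toy data distribution are arbitrary assumptions chosen for brevity; real deepfake systems use far larger image, video, or audio models, but the generator-versus-discriminator structure is the same.

```python
# Illustrative toy GAN: a generator learns to mimic samples from a simple
# 1-D Gaussian while a discriminator learns to tell real samples from
# generated ones. All sizes and hyperparameters are arbitrary assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0      # "authentic" data: N(3, 0.5)
    fake = generator(torch.randn(64, 8))       # synthetic samples from noise

    # Discriminator step: label real samples 1, generated samples 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# Generated samples should drift toward the real distribution (~3.0).
print(generator(torch.randn(5, 8)).detach().squeeze())
```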

Deepfakes pose serious legal concerns across multiple domains. In the context of privacy and dignity, non-consensual deepfake pornography has become the most prevalent harmful application, disproportionately targeting women and causing severe emotional and reputational harm. In the political sphere, deepfake videos of candidates and officials can spread disinformation and undermine democratic processes, particularly during election periods. For businesses, deepfake audio and video can be used for fraud, including impersonating executives to authorize wire transfers, manipulate stock prices, or conduct social engineering attacks against employees.

The legal framework for addressing deepfakes is rapidly evolving, with states like California, Texas, Virginia, and New York enacting specific deepfake legislation, while federal proposals including the NO FAKES Act and DEFIANCE Act aim to create nationwide standards. Existing legal tools including right of publicity statutes, defamation law, fraud statutes, and harassment laws also apply to deepfake scenarios, though they were not specifically designed for synthetic media challenges.

Legal Reference: Cal. Civ. Code § 1708.86 (AB 602, non-consensual deepfake pornography); Cal. Elec. Code § 20010 (AB 730, election deepfakes); 15 U.S.C. § 45 (FTC Act Section 5, deceptive practices); proposed NO FAKES Act, DEFIANCE Act, DEEPFAKES Accountability Act.

Q: What does California AB 602 do about non-consensual deepfake pornography?

California AB 602 (codified as Civil Code Section 1708.86) creates a civil cause of action for individuals depicted in non-consensual sexually explicit deepfake material. Signed into law in 2019, this statute allows victims to sue the creators and distributors of digitally altered or synthetic sexually explicit images or videos that use their likeness without consent. The law covers material created through "digital alteration" including AI-generated deepfakes that place a person's face or likeness onto sexually explicit content.

Under AB 602, a plaintiff can recover economic damages (such as lost income and medical expenses), non-economic damages (including emotional distress, humiliation, and loss of reputation), punitive damages if the conduct was malicious or oppressive, and reasonable attorney's fees and costs. The statute applies whether or not the depicted person is identifiable by the general public, meaning that even private individuals whose deepfakes are shared among a limited audience have standing to sue.

The law also covers scenarios where a person's face is superimposed onto existing pornographic content or where entirely synthetic sexually explicit images are generated using a person's likeness through AI tools. AB 602 was groundbreaking legislation when enacted, making California one of the first states to create specific civil liability for deepfake pornography, and it has served as a model for similar legislation enacted in over a dozen other states since 2019.

Legal Reference: Cal. Civ. Code § 1708.86 (AB 602); see also Cal. Penal Code § 632.01 (criminal penalties for non-consensual deepfake pornography distribution); Cal. Penal Code § 647(j)(4) (revenge porn statute applicable to digitally altered content).

Q: How does California AB 730 regulate deepfakes in elections?

California AB 730 (codified as Elections Code Section 20010) prohibits the distribution of materially deceptive audio or visual media of a candidate for public office within 60 days of an election. Enacted in 2019, this law specifically targets deepfake videos, altered images, and synthetic audio designed to damage a candidate's reputation or deceive voters about a candidate's actions or statements during the critical pre-election period. The statute defines "materially deceptive" media as content that would falsely appear to a reasonable person to depict the candidate doing or saying something they did not actually do or say.

The 60-day window was chosen to cover the most sensitive campaign period when deceptive media could have the greatest impact on voter decision-making and when there may be insufficient time for corrections or fact-checking to reach all affected voters. Violations of AB 730 can result in injunctive relief, where a court orders removal of the deceptive content, as well as damages. Candidates depicted in deceptive deepfakes can seek emergency injunctions to compel removal of content before Election Day.
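As a simple illustration of how the 60-day window operates, the short Python sketch below computes the date on which AB 730's pre-election period would begin for a hypothetical election date. The date used is an assumption for illustration only; consult the statute for how days are actually counted.

```python
# Illustrative only: compute the start of AB 730's 60-day pre-election
# window for a hypothetical election date.
from datetime import date, timedelta

election_day = date(2026, 11, 3)                  # hypothetical election date
window_start = election_day - timedelta(days=60)

print(f"Restricted period runs from {window_start} through {election_day}")
# Restricted period runs from 2026-09-04 through 2026-11-03
```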

The law includes exemptions for satire, parody, and content clearly labeled as manipulated or synthetic. News organizations that unknowingly broadcast deceptive content are also generally protected if they exercised reasonable diligence in their editorial process. Critics of AB 730 have raised First Amendment concerns, arguing that the law's restriction on political speech during the pre-election period may be subject to strict scrutiny analysis. However, courts have generally upheld narrowly tailored, time-limited election integrity regulations as serving compelling government interests.

Legal Reference: Cal. Elec. Code § 20010 (AB 730); see also Cal. Elec. Code § 20009 (disclosure requirements for political advertisements using AI-generated content); First Amendment, U.S. Const. amend. I (strict scrutiny for content-based restrictions on political speech).

Q: What is the right of publicity and how does it apply to AI deepfakes?

The right of publicity is a legal right that protects an individual's ability to control the commercial use of their name, likeness, image, voice, and other identifying characteristics. In California, this right is codified in Civil Code Section 3344, which provides a statutory cause of action and has been supplemented by robust common law protections developed through decades of case law. Under Section 3344, any person who knowingly uses another's name, voice, signature, photograph, or likeness for commercial purposes without prior consent is liable for damages, including the greater of $750 statutory damages or actual damages, plus any profits attributable to the unauthorized use.
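To make the damages structure concrete, here is a minimal sketch of the "greater of statutory or actual damages, plus attributable profits" calculation described above. The dollar figures are hypothetical; actual recovery depends on proof, offsets, and the court's findings, and punitive damages and attorney's fees are determined separately.

```python
# Illustrative sketch of the Cal. Civ. Code § 3344(a) damages structure:
# the greater of $750 statutory damages or actual damages, plus profits
# attributable to the unauthorized use that are not already counted.
STATUTORY_MINIMUM = 750

def section_3344_recovery(actual_damages: float, attributable_profits: float) -> float:
    base = max(STATUTORY_MINIMUM, actual_damages)
    return base + attributable_profits

# Hypothetical example: $10,000 in proven actual damages and $25,000 in
# profits the defendant earned from an unauthorized deepfake endorsement.
print(section_3344_recovery(10_000, 25_000))   # 35000.0
```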

When applied to AI deepfakes, the right of publicity provides a powerful legal tool for individuals whose likenesses are used without authorization. If someone creates a deepfake video using a celebrity's face to endorse a product, or generates synthetic audio mimicking a singer's voice for a commercial track, or uses an actor's likeness in an AI-generated advertisement, the depicted individual can bring a right of publicity claim seeking compensatory damages, injunctive relief, and disgorgement of profits derived from the unauthorized use.

California's right of publicity is particularly strong because it survives death under Civil Code Section 3344.1, lasting for 70 years after a person's death. This means estates can pursue claims against deepfakes depicting deceased individuals, which is especially relevant given the growing use of AI to recreate the likenesses and voices of deceased performers. The right of publicity analysis in deepfake cases considers whether the use is commercial in nature, whether the depicted person is identifiable, and whether First Amendment defenses such as newsworthiness, transformative use, or commentary apply.

Legal Reference: Cal. Civ. Code § 3344 (statutory right of publicity for living persons); Cal. Civ. Code § 3344.1 (post-mortem right of publicity, 70 years after death); Comedy III Productions v. Gary Saderup, 25 Cal. 4th 387 (2001) (transformative use test for right of publicity).

Q: What is the federal NO FAKES Act and what would it do?

The NO FAKES Act (Nurture Originals, Foster Art, and Keep Entertainment Safe Act) is proposed federal legislation that would create a nationwide right to control the use of one's voice and visual likeness in AI-generated content. Introduced with bipartisan support in both chambers of Congress, the NO FAKES Act aims to address the patchwork of state laws by establishing a uniform federal standard for combating unauthorized AI replicas of individuals.

The key provisions of the NO FAKES Act include: creating a federal intellectual property right in a person's voice and likeness that would exist alongside state rights of publicity; making it unlawful to produce or distribute an AI-generated replica of a person's voice or likeness without authorization when the replica is used in a way that is likely to cause confusion or appears authentic; providing a private right of action with statutory damages of up to $50,000 per violation (or actual damages if greater), plus attorney's fees and litigation costs; extending protection for 70 years after death to protect the interests of estates and heirs; and including platform liability provisions that would require hosting platforms to remove unauthorized AI replicas upon proper notification.

The NO FAKES Act includes exemptions for news reporting, documentary purposes, commentary, criticism, satire, parody, and bona fide academic or scientific research. It also carves out protections for platforms that act in good faith to remove infringing content upon receiving valid notice, similar to the DMCA safe harbor framework. If enacted, this legislation would provide the first comprehensive federal framework for addressing AI-generated impersonation and would preempt conflicting state laws while allowing states to provide additional protections beyond the federal floor.

Legal Reference: Proposed NO FAKES Act (S. 3875 / H.R. 6943, 118th Congress); see also 17 U.S.C. § 512 (DMCA safe harbor provisions, serving as the model for the NO FAKES Act's notice-and-takedown framework); Restatement (Third) of Unfair Competition § 46 (right of publicity principles).

Q: How does the FTC enforce against deceptive AI-generated media?

The Federal Trade Commission has authority to take enforcement action against deceptive AI-generated media under Section 5 of the FTC Act (15 U.S.C. Section 45), which prohibits unfair or deceptive acts or practices in or affecting commerce. The FTC has increasingly focused on AI-related deception, issuing guidance, proposing rules, and taking enforcement actions against companies and individuals who use synthetic media to mislead consumers or facilitate fraud.

The FTC's enforcement approach covers several categories of deepfake misuse. Deceptive advertising using AI-generated testimonials, endorsements, or product demonstrations that falsely depict real people or fabricate results violates the FTC's Endorsement Guides and Section 5. Impersonation fraud involving deepfake audio or video to impersonate business executives, government officials, or other trusted figures for financial gain falls squarely within the FTC's authority. In 2024, the FTC finalized a rule specifically addressing AI-enabled impersonation scams, extending existing protections against government and business impersonation to expressly cover AI-generated impersonation.

The FTC can seek injunctive relief requiring cessation of deceptive practices, civil penalties of up to $50,120 per violation (a figure adjusted annually for inflation, with each day of a continuing violation counted as a separate violation), consumer redress and restitution, and disgorgement of ill-gotten gains. The FTC has also used its authority under Section 13(b) of the FTC Act to obtain temporary restraining orders and preliminary injunctions against ongoing deceptive AI media operations. For businesses using AI-generated content in marketing or advertising, the FTC's guidance makes clear that AI-generated endorsements and testimonials must comply with the same truthfulness standards as traditional advertising, and failure to disclose the AI-generated nature of content may itself constitute a deceptive practice.
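Because penalties accrue per violation, and each day of a continuing violation is typically treated as a separate violation, exposure can grow quickly. The sketch below illustrates that arithmetic using the figure cited above; the violation counts are hypothetical and the per-violation maximum changes with annual inflation adjustments.

```python
# Illustrative only: rough upper-bound penalty exposure where each deceptive
# ad (or each day of a continuing violation) is counted separately.
PER_VIOLATION_MAX = 50_120   # figure cited in the text; adjusted annually

def max_exposure(violations_per_day: int, days: int) -> int:
    return violations_per_day * days * PER_VIOLATION_MAX

# Hypothetical: 3 distinct deceptive AI-generated ads running for 30 days.
print(f"${max_exposure(3, 30):,}")   # $4,510,800
```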

Legal Reference: FTC Act § 5, 15 U.S.C. § 45 (unfair or deceptive acts or practices); FTC Endorsement Guides, 16 C.F.R. Part 255 (endorsements and testimonials in advertising); FTC Impersonation Rule (2024) (AI-enabled impersonation scams); FTC Act § 13(b), 15 U.S.C. § 53(b) (injunctive relief authority).

Q: What criminal penalties exist for creating or distributing deepfakes?

Criminal penalties for deepfake creation and distribution are expanding rapidly as states enact targeted legislation addressing the most harmful applications of synthetic media. In California, non-consensual deepfake pornography can be prosecuted under multiple statutes. Penal Code Section 632.01 (enacted through AB 602's companion criminal provisions) makes it a crime to intentionally disclose sexually explicit material that has been digitally altered to depict a person participating in a sexual act without their consent. Violations can result in imprisonment in county jail for up to one year and fines up to $1,000 for a first offense, with enhanced penalties for repeat offenders.

California Penal Code Section 647(j)(4) also criminalizes the intentional distribution of intimate images without consent, which courts have applied to digitally altered content including AI-generated deepfakes. Penalties include up to six months in county jail for a first offense, escalating to up to one year for subsequent offenses. Additional California criminal provisions address the use of deepfakes for fraud, extortion, and blackmail under existing criminal statutes.

At the federal level, while no comprehensive deepfake criminal statute has been enacted as of early 2026, several bills are under active consideration. The DEEPFAKES Accountability Act would require creators to label synthetic media and impose criminal penalties for failure to disclose that content is AI-generated. Existing federal laws also apply to deepfake scenarios: wire fraud statutes (18 U.S.C. Section 1343) cover deepfake-enabled financial fraud, identity theft statutes (18 U.S.C. Section 1028) apply when deepfakes are used to assume another person's identity for unlawful purposes, and child exploitation laws (18 U.S.C. Section 2256) cover AI-generated child sexual abuse material regardless of whether a real child was depicted in the creation of the material.

Legal Reference: Cal. Penal Code § 632.01 (criminal deepfake pornography distribution); Cal. Penal Code § 647(j)(4) (non-consensual intimate images); 18 U.S.C. § 1343 (wire fraud); 18 U.S.C. § 1028 (identity theft); 18 U.S.C. § 2256 (child exploitation material, including AI-generated CSAM); proposed DEEPFAKES Accountability Act.

Q: Are social media platforms liable for deepfake content posted by users?

Platform liability for user-posted deepfake content is governed primarily by Section 230 of the Communications Decency Act (47 U.S.C. Section 230), which provides broad immunity to interactive computer services for content created by third-party users. Under current law, social media platforms like Meta, X (formerly Twitter), TikTok, and YouTube generally cannot be held liable as publishers of deepfake content uploaded by their users, provided the platforms did not create or materially contribute to the development of the content itself.

However, this immunity is not absolute and is subject to several important limitations. Section 230 does not protect platforms from federal criminal liability, meaning platforms could face prosecution for knowingly facilitating the distribution of illegal deepfake content such as child sexual abuse material. Additionally, several state deepfake laws specifically include platform liability provisions that may test the boundaries of Section 230 protections, creating legal uncertainty that will likely require judicial resolution.

Many platforms have implemented voluntary policies against deepfakes as part of their community standards. Meta prohibits manipulated media that is likely to mislead the average person, YouTube requires disclosure of realistic altered content depicting real people, and TikTok prohibits synthetic media that misleads viewers about real-world events or causes harm to the depicted subject. These policies create a framework where platforms can remove deepfake content under their terms of service even without a legal obligation to do so. The proposed NO FAKES Act would create notice-and-takedown frameworks specifically for deepfake content, similar to the DMCA's framework for copyright infringement, potentially requiring platforms to remove reported deepfakes within specified timeframes or forfeit their Section 230 protections for that specific content.

Legal Reference: 47 U.S.C. § 230 (Communications Decency Act, platform immunity); 47 U.S.C. § 230(e)(1) (no effect on federal criminal law); 17 U.S.C. § 512 (DMCA notice-and-takedown, model for proposed deepfake legislation); proposed NO FAKES Act platform liability provisions.

Q: How does the federal DEFIANCE Act address non-consensual intimate deepfakes?

The DEFIANCE Act (Disrupt Explicit Forged Images and Non-Consensual Edits Act) is proposed federal legislation that would create a civil cause of action for victims of non-consensual intimate deepfake imagery. The Act would allow individuals who are depicted in AI-generated sexually explicit content without their consent to sue the creators and knowing distributors of such content in federal court, providing access to the federal judicial system regardless of the amount in controversy.

The DEFIANCE Act would provide for significant damages, including compensatory damages for emotional distress, economic harm, and reputational injury, as well as punitive damages for particularly egregious or malicious conduct. Statutory damages of up to $150,000 would be available for cases where actual damages are difficult to quantify, providing meaningful deterrence even when specific financial losses cannot be precisely calculated. Prevailing plaintiffs could also recover reasonable attorney's fees and litigation costs, removing a significant financial barrier for victims seeking legal recourse.

The bill sets a 10-year statute of limitations running from the date of discovery, recognizing that victims may not become aware of deepfake content depicting them for extended periods, particularly when content circulates in private channels or on obscure platforms. It defines covered content broadly to include any visual depiction that appears to authentically depict a real, identifiable individual engaged in sexually explicit conduct, regardless of the specific technology used to create it. This technology-neutral definition encompasses traditional image manipulation, GAN-generated deepfakes, diffusion model outputs, and any future technology that produces realistic synthetic intimate imagery. The DEFIANCE Act also includes exemptions for law enforcement activities, bona fide medical or scientific research, and content where the depicted individual provided informed written consent.
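A simple way to see how a discovery-based limitations period operates is to compute the filing deadline from the date the victim first learned of the content, as in the sketch below. The dates are hypothetical, the 10-year figure reflects the proposal as described above (which could change before enactment), and real deadline calculations follow court rules rather than a script.

```python
# Illustrative only: a discovery-rule limitations period of 10 years,
# measured from when the depicted person discovered the content.
from datetime import date

def filing_deadline(discovery_date: date, years: int = 10) -> date:
    # Simple year substitution; actual deadline computation follows court rules.
    return discovery_date.replace(year=discovery_date.year + years)

print(filing_deadline(date(2026, 3, 15)))   # 2036-03-15
```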

Legal Reference: Proposed DEFIANCE Act (S. 3696, 118th Congress); see also Violence Against Women Act (VAWA) provisions on non-consensual intimate images; 28 U.S.C. § 1331 (federal question jurisdiction for DEFIANCE Act claims).

Q: What legal protections exist for deepfake victims and how can they seek removal?

Victims of deepfake exploitation have multiple legal avenues for seeking removal of content and obtaining compensation, though the process can be complex and time-sensitive. The first step is typically reporting the content directly to the hosting platform through its dedicated reporting mechanisms. Major platforms including Google, Meta, X, TikTok, and Pornhub have dedicated reporting channels for non-consensual intimate imagery, including synthetic content. Google also accepts requests to remove non-consensual explicit deepfakes from search results, which can limit the content's discoverability even if the original hosting site does not remove it.

Emergency court orders are a powerful tool when platform reporting is insufficient or too slow. Victims can file for temporary restraining orders and preliminary injunctions requiring platforms and individuals to remove deepfake content immediately. In cases involving non-consensual intimate imagery, courts have generally been willing to grant emergency relief given the severe and ongoing nature of the harm. California's AB 602 and similar state laws expressly authorize injunctive relief as a remedy. DMCA takedown notices can also be effective in some circumstances, particularly when the deepfake incorporates copyrighted photographs or other protected works.
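For readers preparing a DMCA takedown notice for a deepfake that incorporates a copyrighted photograph or other protected work, the sketch below collects the elements that 17 U.S.C. § 512(c)(3) requires a notice to contain. The field names and sample values are illustrative assumptions, not an official form or legal advice, and the notice must be sent to the platform's designated DMCA agent.

```python
# Illustrative checklist of 17 U.S.C. § 512(c)(3) notice elements.
# Field names and sample values are hypothetical.
from dataclasses import dataclass

@dataclass
class DmcaTakedownNotice:
    copyrighted_work: str          # identification of the work claimed to be infringed
    infringing_material_url: str   # identification/location of the infringing material
    contact_info: str              # name, address, phone, email of the complaining party
    good_faith_statement: str      # belief the use is not authorized by the owner or law
    accuracy_statement: str        # accuracy and authority, under penalty of perjury
    signature: str                 # physical or electronic signature

notice = DmcaTakedownNotice(
    copyrighted_work="Original portrait photograph taken by the claimant in 2024",
    infringing_material_url="https://example.com/deepfake-video",  # placeholder URL
    contact_info="Jane Doe, jane@example.com, +1-555-0100",
    good_faith_statement="I have a good-faith belief the use is not authorized "
                         "by the copyright owner, its agent, or the law.",
    accuracy_statement="The information in this notice is accurate and I am "
                       "authorized to act, under penalty of perjury.",
    signature="/s/ Jane Doe",
)
print(notice.infringing_material_url)
```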

Beyond platform removal, victims can pursue civil lawsuits under multiple legal theories: right of publicity claims under California Civil Code Section 3344, intentional infliction of emotional distress, invasion of privacy under both false light and appropriation of likeness theories, defamation if the deepfake conveys false statements of fact, and violations of state-specific deepfake statutes. Victims should also report deepfake content to the FBI's Internet Crime Complaint Center (IC3) and local law enforcement, as criminal prosecution may be available. Documenting the deepfake content through screenshots, web archives, and forensic preservation before it is removed is critical for both criminal prosecution and civil litigation.
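When documenting deepfake content before it disappears, one basic preservation step is to save a copy and record a cryptographic hash and timestamp so the file's integrity can later be demonstrated. The following is a minimal sketch of that idea; the URL and file names are placeholders, and victims working with counsel or law enforcement should follow their forensic-preservation guidance rather than rely on a script like this.

```python
# Illustrative only: save a copy of online content and record a SHA-256
# hash plus a UTC timestamp so the preserved file can later be shown to
# be unaltered. The URL and file names are placeholders.
import hashlib
import json
from datetime import datetime, timezone
from urllib.request import urlopen

SOURCE_URL = "https://example.com/reported-page"   # placeholder URL

data = urlopen(SOURCE_URL).read()                  # fetch the content
with open("evidence_copy.bin", "wb") as f:
    f.write(data)

record = {
    "source_url": SOURCE_URL,
    "sha256": hashlib.sha256(data).hexdigest(),
    "captured_at_utc": datetime.now(timezone.utc).isoformat(),
    "size_bytes": len(data),
}
with open("evidence_record.json", "w") as f:
    json.dump(record, f, indent=2)

print(record["sha256"])
```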

Legal Reference: Cal. Civ. Code § 1708.86 (AB 602, injunctive relief for deepfake victims); Cal. Civ. Code § 3344 (right of publicity claims); 17 U.S.C. § 512(c) (DMCA takedown notice procedures); Restatement (Second) of Torts §§ 652A-652E (privacy torts applicable to deepfakes).

Q: What First Amendment defenses apply to AI-generated deepfakes?

The First Amendment protects freedom of speech and expression, and several defenses based on free speech principles may apply to certain categories of AI-generated deepfakes, though these defenses have significant limitations when the deepfake is designed to deceive or harm. Satire and parody represent the strongest First Amendment defenses for deepfake content. If an AI-generated video is clearly satirical and would not be understood by a reasonable viewer as depicting actual events, it may receive robust constitutional protection. California's AB 730 explicitly exempts satire and parody from its election deepfake prohibition, recognizing the long American tradition of political satire.

Newsworthy commentary and criticism may also receive First Amendment protection. Journalists and commentators who use deepfake examples to discuss the technology itself, illustrate the dangers of synthetic media, report on deepfake incidents, or critique public figures' responses to the technology may argue their use is protected speech. The key factor is whether the deepfake serves a legitimate communicative purpose beyond mere deception or exploitation. Political speech receives the highest level of First Amendment protection, but courts have consistently held that deliberately deceptive speech designed to defraud or cause material harm may be regulated without violating the First Amendment.

Non-consensual intimate deepfakes receive essentially no First Amendment protection. Courts have generally held that non-consensual pornographic content, including synthetic material, does not constitute protected speech because the severe harm to the depicted individual far outweighs any marginal expressive value. Similarly, deepfakes used to commit fraud, extortion, or identity theft are criminal conduct, not protected expression. The transformative use test from Comedy III Productions v. Saderup may provide some protection for artistic works that use a person's likeness in a sufficiently transformative way, but realistic deepfakes intended to deceive generally fail this test.

Legal Reference: U.S. Const. amend. I (freedom of speech); Cal. Elec. Code § 20010(b) (satire/parody exemption in AB 730); Comedy III Productions v. Gary Saderup, 25 Cal. 4th 387 (2001) (transformative use test); United States v. Alvarez, 567 U.S. 709 (2012) (false speech and First Amendment limits).

Q: What state laws beyond California address AI deepfakes?

While California was among the earliest states to enact deepfake-specific legislation, numerous other states have followed with their own laws addressing various aspects of synthetic media. As of 2026, over 40 states have enacted some form of deepfake legislation, creating a complex patchwork of state-level protections with varying scope, penalties, and enforcement mechanisms.

Texas was the first state to criminalize the creation of deepfake videos intended to harm political candidates or influence elections, with SB 751 making it a Class A misdemeanor punishable by up to one year in jail and fines up to $4,000. Virginia was among the first states to explicitly include deepfakes within its revenge porn statute, making it a Class 1 misdemeanor to disseminate or sell sexually explicit deepfake content without consent. New York enacted legislation requiring informed consent before creating digital replicas of individuals for commercial purposes and providing civil remedies for victims of non-consensual deepfake pornography. Georgia, Minnesota, Hawaii, and Illinois have all enacted laws addressing non-consensual intimate deepfakes with varying criminal penalties and civil remedies.

Washington state's legislation addresses both non-consensual intimate images and election-related deepfakes, with a focus on requiring disclosure labels on synthetic media used in political advertising. Florida's deepfake law targets both sexually explicit deepfakes and politically deceptive content, with civil penalties and criminal sanctions. For individuals and businesses operating across state lines, this patchwork creates significant compliance challenges, as the applicable law depends on where the content is created, where it is distributed, and where the depicted person resides. This complexity is a primary argument for comprehensive federal legislation like the NO FAKES Act and DEFIANCE Act, which would establish uniform national standards while allowing states to provide supplemental protections.

Legal Reference: Tex. Elec. Code § 255.004 (SB 751, election deepfakes); Va. Code § 18.2-386.2 (revenge porn statute covering deepfakes); N.Y. Civ. Rights Law § 52-c (digital replica consent); Wash. Rev. Code § 9A.86.010 (non-consensual intimate images); see also National Conference of State Legislatures (NCSL) tracking of state deepfake legislation.

Need Help With a Deepfake Legal Issue?

Generate professional cease-and-desist letters, DMCA takedown notices, and legal demand documents in minutes.
