Private members-only forum

ElevenLabs Voice Cloning Consent Policy 2026: Legal Requirements & Commercial Use

Started by PodcastPro_Marcus · Nov 30, 2025 · 58 replies
Voice cloning laws vary by state and are evolving rapidly. ElevenLabs terms change frequently. Always verify current TOS and consult an attorney for specific situations.
PM
PodcastPro_Marcus OP

I run a podcast network and we're looking at using ElevenLabs to clone my own voice for ad reads. The idea is I'd record a few hours of training audio, then use the AI to generate sponsor spots so I don't have to personally record every single ad.

Is this legal for commercial use? I've seen stories about people getting sued over voice cloning and I want to make sure I'm doing this right. It's my own voice, I'm not trying to clone anyone else.

Also wondering about the ElevenLabs license - if I'm on their paid plan, do I actually own the voice clone or do they have rights to it?

RS
Rachel_MediaLaw Attorney

Good question, and smart to think about this before diving in. Let me break down the legal landscape for voice cloning in 2025:

Cloning your OWN voice - generally legal:

  • You have the right to use your own voice commercially
  • Creating an AI clone of your voice for your own business is legally similar to recording yourself
  • No consent issues when it's your voice and your commercial project

State voice rights laws to be aware of:

  • California (Civil Code 3344): Strong right of publicity protections. You can use YOUR voice freely, but cloning others requires written consent
  • New York (Civil Rights Law 50-51): Similar protections. The state expanded these to cover AI-generated voice replicas in 2024
  • Tennessee (ELVIS Act 2024): Explicitly covers AI voice cloning - requires consent for using anyone's voice
  • Other states: Many have general right of publicity laws that courts are interpreting to cover voice cloning

For your use case - cloning your own voice for your own podcast ads - you're in the clear legally. The issues arise when people clone voices without the voice owner's consent.

VT
VoiceTech_Nina

I've been using ElevenLabs professionally for about 18 months now. Here's what you need to know about their terms:

ElevenLabs License Terms (as of January 2025):

  • On paid plans (Starter, Creator, Pro, Enterprise), you get commercial use rights to audio you generate
  • You must own or have rights to any voice you clone - this is in their TOS Section 5
  • They require you to confirm you have consent before cloning any voice
  • Free tier is for non-commercial use only

Who owns the voice clone?

This is where it gets nuanced. You retain rights to audio content you create using their service. However, ElevenLabs retains rights to the underlying model and technology. Your custom voice profile is essentially "your voice interpreted through their tech" - you can use it commercially per the license, but you don't own the actual AI model.

For podcasters like yourself, the practical answer is: yes, you can use your cloned voice commercially on paid plans. Just keep records proving the voice is yours.

JW
AudiobookJay

I actually do this commercially and can share my experience. I'm an audiobook narrator and I use ElevenLabs to clone my voice for certain projects.

My setup:

  • ElevenLabs Pro plan ($99/month)
  • Cloned my own voice using about 3 hours of clean audio
  • Use it for audiobook narration on specific projects where the publisher agrees to AI-assisted production
  • Revenue is around $4-5k/month from AI-generated audiobooks

What I've learned:

  1. Always disclose AI involvement to clients/publishers - it's becoming industry standard
  2. ACX (Amazon's audiobook platform) now requires disclosure of AI narration
  3. Some publishers explicitly prohibit AI voices - check contracts before pitching
  4. Quality still requires human editing - I spend time cleaning up pronunciation and pacing

The technology is good enough for commercial work now, but transparency is key. Most issues I've seen come from people trying to hide the AI involvement, not from the actual use of voice cloning.

CW
ContentWarning_Dave

Just want to add a warning here since this thread is talking about commercial use - do NOT clone celebrity voices or public figures, even for "parody" purposes.

Recent cases that went badly:

  • A YouTuber got sued for using an AI-generated "Morgan Freeman" voice in sponsored content - settled for undisclosed amount but had to remove all videos
  • A podcast that used AI-cloned voices of politicians for "comedy" got cease and desist letters
  • Multiple Twitch streamers have been DMCA'd for using celebrity voice clones

The law is very clear on this: using someone else's voice without consent, especially commercially, violates their right of publicity. "Parody" is not a blanket defense for voice cloning the way it can be for visual works.

ElevenLabs actually bans cloning public figures without consent in their TOS. They've suspended accounts for this.

RS
Rachel_MediaLaw Attorney

@ContentWarning_Dave raises an important point. Let me expand on the consent requirements since this is where most legal trouble happens:

Consent requirements for voice cloning:

  • Your own voice: No consent needed beyond agreeing to platform TOS
  • Employees/contractors: Get written consent. Include it in employment agreements or as a separate release
  • Third parties: Explicit written consent required. Should specify permitted uses, duration, compensation
  • Deceased individuals: Rights may pass to estate - varies by state. California extends publicity rights 70 years after death

What proper consent documentation should include:

  1. Clear identification of the voice being cloned
  2. Specific permitted uses (commercial, non-commercial, platforms, etc.)
  3. Duration of the license
  4. Compensation terms if any
  5. Right to revoke (or not)
  6. Acknowledgment that AI/synthetic voice technology will be used

For @PodcastPro_Marcus's situation - since it's your own voice, you just need to document that fact. Keep a simple signed statement confirming you're the voice owner, along with your original training recordings. This protects you if anyone ever questions the authenticity.

PM
PodcastPro_Marcus OP

This is incredibly helpful. So to summarize my understanding:

  • Cloning my own voice = legal
  • Commercial use on ElevenLabs paid plan = allowed per their TOS
  • Keep documentation proving the voice is mine
  • Probably should disclose to sponsors that ads use AI voice

One more question - what about if I want to license the voice clone to other shows in my network? Like if another podcast host wants to use MY cloned voice for their ads because sponsors like my voice. Any issues there?

VT
VoiceTech_Nina

@PodcastPro_Marcus that gets into interesting territory. If you're licensing your voice (even the AI clone) to third parties, you're essentially creating a voice talent licensing arrangement.

Things to consider for licensing your cloned voice:

  • You can absolutely do this - your voice, your rights to license it
  • Create a proper licensing agreement specifying permitted uses
  • Consider whether you want approval rights over the actual content
  • Make sure licensees aren't creating content that could damage your reputation
  • Check ElevenLabs TOS on sublicensing - some plans have restrictions

Traditional voice actors license their work all the time. AI cloning just makes it more scalable. But that scalability means you need to think carefully about where your voice appears. You don't want your cloned voice showing up in ads for products you wouldn't personally endorse.

I'd suggest working with an entertainment attorney to draft a standard licensing template if you're going to do this regularly. Protects both you and the licensees.

KL
KateLegalTech Attorney

Late to this thread, but I wanted to add some important updates on the regulatory landscape since voice cloning law is a hot topic right now:

Recent Regulatory Developments:

  • FTC enforcement: The FTC has started taking action against deceptive uses of AI voice cloning in advertising. Make sure any AI-generated content is clearly identified as such
  • State legislation: Besides CA, NY, and TN, at least 12 other states introduced voice cloning bills in 2025. Some passed, others pending
  • NO AI FRAUD Act (federal): Still in committee but would create federal standards for voice cloning consent
  • Platform policies: YouTube, TikTok, and Meta all updated policies requiring disclosure of synthetic voices

Best practices for commercial voice cloning in 2025:

  1. Only clone voices you have clear rights to
  2. Document consent thoroughly
  3. Disclose AI involvement to business partners and platforms
  4. Monitor where your voice content appears
  5. Stay updated on changing laws - this area is evolving fast

For legitimate use cases like @PodcastPro_Marcus's - using your own voice commercially through proper channels - the legal risk is minimal. Just stay transparent and document everything.

TG
TinaGriffin_VO

I'm a full-time voice actor with 12 years of experience and a SAG-AFTRA member. I have to say, this thread makes me deeply uncomfortable. Everyone is talking about voice cloning like it's just another productivity tool, but for those of us who make a living with our voices, this technology is an existential threat.

SAG-AFTRA has been very clear on this: the union's position is that AI voice cloning must require informed consent, fair compensation, and ongoing residual payments to the voice talent. The 2023 strike wasn't just about screen actors - voice performers were a huge part of the conversation. We won contract language requiring consent for AI use of our performances, but enforcement is another story entirely.

The problem isn't people cloning their own voices. It's that tools like ElevenLabs make it trivially easy for anyone to upload a few minutes of audio and create a clone. I've already had two instances where clients tried to clone my voice from demo reels I sent them, then cancelled the contract because they "didn't need me anymore." That's theft, plain and simple.

DX
DevOps_Xander

Game developer here. We've been evaluating ElevenLabs and competitors for NPC dialogue in our upcoming RPG. The volume of voice lines needed for an open-world game is staggering - we're talking 40,000+ lines across hundreds of characters. Traditional voice acting for that scope would cost us north of $2 million.

Our legal team spent three months reviewing the landscape before we moved forward. Here's what we settled on: we hired 15 voice actors, got explicit written consent and paid them a flat fee plus a revenue share, then used ElevenLabs Enterprise to create voice clones for generating the bulk NPC dialogue. The actors recorded about 2 hours each of training audio, plus they did all the main storyline dialogue themselves.

The key legal document was our "Voice Likeness License Agreement" which specifies exactly how the clones can be used, limits it to this specific game title, and gives actors the right to audit how their voice is being used. Our attorney modeled it on SAG-AFTRA's AI provisions but adapted it for non-union indie game work.

TG
TinaGriffin_VO

@DevOps_Xander At least you're doing it the right way - getting consent and paying people. But I want to push back on the framing. You say traditional VO would cost $2 million like that's unreasonable. That's $2 million going to working artists. When you use AI clones instead, most of that money stays with the studio. The 15 actors you hired are doing the work of what would have been 50-100 actors.

I'm not saying your approach is illegal - it sounds like you've covered your legal bases. But the voice acting community sees this as the beginning of the end. Once those cloned voices exist, what stops you from using them in your next game? Or licensing them to another studio? The "limited to this title" clause is only as strong as your willingness to enforce it.

This is exactly what we warned about during the strike. The technology isn't the problem - it's the economic incentive to replace human performers at scale.

AL
AccessibleLearning_Jo

I want to offer a different perspective here. I work in accessibility technology and we use ElevenLabs to create custom voice profiles for people with degenerative speech conditions like ALS. Before voice cloning, when someone lost the ability to speak, they were stuck with a generic robotic voice that sounded nothing like them. Now we can bank their voice while they still can speak and give them back something that sounds like them.

The consent issue is straightforward in our case - the person themselves is requesting the clone of their own voice. But we've run into legal gray areas around what happens after the person passes away. Can the family continue using the voice clone? Does the estate have rights to it? We had one situation where a patient's ex-spouse tried to claim rights to the voice profile during a divorce proceeding.

I understand the concerns voice actors have, and those are valid. But I hope any regulation that comes out of this carves out clear exceptions for accessibility and medical use. The ElevenLabs terms don't currently distinguish between entertainment and accessibility use cases, and they probably should.

RS
Rachel_MediaLaw Attorney

@AccessibleLearning_Jo raises a really important point about accessibility that often gets lost in these debates. Let me address the estate/deceased voice rights question since several people have asked about this:

Posthumous voice rights - state by state:

  • California: Right of publicity survives death for 70 years. The estate controls use of the deceased's voice, including AI clones
  • Tennessee (ELVIS Act): Explicitly covers posthumous AI voice use. Named after Elvis Presley for a reason - the estate actively protects his voice likeness
  • Indiana: Right of publicity survives for 100 years after death
  • New York: Extended posthumous protections to AI-generated likenesses in 2024

For the accessibility case - if someone creates a voice clone for medical purposes while alive, the question of what happens after death is genuinely unsettled law in most jurisdictions. The voice clone was created with consent for a specific purpose (communication assistance). Whether the estate can repurpose it, or whether ElevenLabs can retain the voice model, depends on the original terms of service agreement and state law.

My recommendation for accessibility providers: include explicit terms about posthumous use in your intake agreements. Cover what happens to the voice data, who controls it, and whether it can be deleted upon request by the estate.

MC
MarketingChris_Agency

We're a mid-size marketing agency and we've been using ElevenLabs for about eight months for client work. Specifically, we use it to produce radio and podcast ad spots in multiple languages using the client's CEO's voice. The CEO records in English and we clone the voice to produce Spanish, French, and German versions.

The legal setup we use: the client signs a voice cloning authorization as part of our services agreement, which explicitly grants us permission to create and use the AI voice clone for the duration of the campaign. We also include a clause stating that all voice data is deleted within 30 days of campaign completion.

One issue we ran into with the FTC: in early 2026, the FTC released updated guidance on AI-generated content in advertising. The key takeaway is that if a "reasonable consumer" would be misled into thinking they're hearing the actual person speak, you need to disclose the AI involvement. We now include a brief disclaimer at the end of all AI-voiced ads. It hasn't hurt conversion rates at all, and it keeps us compliant.

For other agencies considering this - the cost savings are significant. We're producing multilingual ad campaigns for about 40% of what traditional multi-language VO would cost. But you absolutely need proper consent documentation and FTC compliance baked into your workflow.

NB
NewbiePodcaster_Sam

Okay, I'm going to be the person who admits they're confused. I'm just starting a small podcast and I've been using the free tier of ElevenLabs to generate intro/outro music with a voiceover. Am I already breaking the law? I didn't clone anyone specific - I just used one of their pre-made voices from the library.

Also, what's the difference between using a pre-made stock voice and cloning? Do the same laws apply? I didn't sign anything about consent because it's just a stock voice, not a real person. Or is it a real person? I genuinely don't know how these stock voices are created.

Sorry if these are basic questions but this thread has me worried that I might be doing something wrong without realizing it.

VT
VoiceTech_Nina

@NewbiePodcaster_Sam Don't panic! Let me clarify a few things:

Stock/pre-made voices vs. custom clones:

  • Stock voices in ElevenLabs' library are voices they've licensed or created. When you use these, ElevenLabs has already handled the consent/rights issues. You're covered by their TOS
  • Custom clones are when you upload audio of a specific person's voice to create a new voice profile. This is where consent laws apply heavily

About the free tier: The main limitation is that the free tier is for non-commercial use only. If your podcast is monetized (ads, Patreon, sponsorships), you technically need a paid plan. If it's truly a hobby podcast with no revenue, you're fine on the free tier.

The stock voices in the ElevenLabs library were trained on voice data that ElevenLabs has the rights to use. Some are synthetic composites, some are based on licensed voice recordings. Either way, that's ElevenLabs' legal responsibility, not yours. You just need to follow their usage terms for your plan tier.

DR
DocFilm_Renata

Documentary filmmaker here with a specific question that I think adds to this discussion. I'm working on a historical documentary about a civil rights leader who passed away in 1998. I want to use ElevenLabs to recreate their voice reading from their own published writings, using archival recordings as training data.

The estate has given us permission to use archival footage and audio in the documentary. But they didn't specifically address AI voice cloning because it wasn't on anyone's radar when we negotiated the license. Do I need to go back and get specific permission for the voice clone? The estate's attorney says the existing license covers "all audio reproduction" but I'm not sure AI synthesis qualifies as "reproduction."

This feels like it should be a straightforward fair use / licensed use situation, but the AI element complicates everything. Has anyone dealt with something similar? The documentary is for a major streaming platform and they're requiring us to have airtight clearances before they'll distribute it.

KL
KateLegalTech Attorney

@DocFilm_Renata This is a really nuanced situation. Let me give you my analysis, though I'd strongly recommend engaging entertainment counsel who specializes in documentary clearances:

The core issue is whether "audio reproduction" in your existing license encompasses AI-generated synthesis. In my view, it almost certainly does not. Traditional "reproduction" means copying or re-broadcasting existing recordings. AI voice cloning creates new audio that never existed - the deceased person never actually said these specific words in this specific way. That's generation, not reproduction.

You need to go back to the estate and get a specific amendment to your license covering: (1) use of archival audio as AI training data, (2) creation of synthetic voice output, and (3) the specific content the synthetic voice will "speak." Most estates are open to this if approached respectfully, especially when the content is the person's own published writings.

The streaming platform is right to require airtight clearances. One high-profile lawsuit over unauthorized posthumous voice cloning could tank the entire project. Get the amendment, pay the additional licensing fee if required, and document everything. The cost of doing it right is a fraction of the cost of doing it wrong.

SF
ScamFighter_Priya

I'm coming at this from a completely different angle. I'm a victim of voice cloning fraud. Someone used ElevenLabs (or a similar tool - we're still investigating) to clone my voice from my YouTube channel and used it to call my elderly mother pretending to be me in distress, asking her to wire money. She sent $4,500 before my brother figured out what was happening.

We filed a police report and the FBI cyber crimes division is involved, but progress is slow. ElevenLabs' response when we reported the abuse was... underwhelming. They said they'd "investigate" and suspended an account, but couldn't tell us if that account actually created the clone of my voice due to "privacy policies." The irony of citing privacy while someone used their tool to violate mine is not lost on me.

The FTC's 2025 Voice Cloning Challenge was supposed to help develop solutions for this, but I haven't seen any meaningful protections materialize. For everyone discussing the commercial use cases in this thread - please also think about the dark side. Every improvement in voice cloning fidelity makes scams like what happened to my family more convincing.

MJ
ModeratorJanelle Moderator

Thank you @ScamFighter_Priya for sharing your experience. I want to acknowledge that voice cloning fraud is a serious and growing problem, and victim experiences deserve to be centered in these discussions.

A quick note for everyone in this thread: please keep the discussion constructive and focused on legal/policy aspects. We've seen some heated exchanges in other voice cloning threads and I want to keep this one productive. Both the legitimate use cases and the abuse concerns are valid and important.

For anyone who has been a victim of voice cloning fraud, the FTC has a dedicated reporting page and the FBI's IC3 (Internet Crime Complaint Center) accepts reports. Document everything - the cloned audio if you have it, call records, financial transactions. This evidence is crucial both for law enforcement and for any civil action you might pursue.

GT
GlobalTech_Anwar

I want to bring the international perspective into this conversation since most of the discussion has been US-focused. I lead compliance for a voice AI company based in London, and the regulatory landscape outside the US is quite different.

EU AI Act (effective August 2025):

  • Voice cloning falls under "limited risk" AI systems, which means mandatory transparency obligations
  • Any AI-generated audio must be labeled as artificially generated - no exceptions for commercial use
  • Deepfakes (including voice) that could be mistaken for real are subject to additional disclosure requirements
  • GDPR applies to voice data as biometric data - you need explicit consent under Article 9 to process someone's voice for cloning

UK approach: The UK has taken a more "innovation-friendly" stance post-Brexit but still requires compliance with the UK GDPR for voice data processing. The UK's AI Safety Institute has published guidelines on voice synthesis that recommend voluntary labeling, but legislation is expected in late 2026.

For companies using ElevenLabs internationally: be aware that generating voice content in one jurisdiction and distributing it in another creates complex compliance questions. EU citizens' voice data processed through US servers may trigger GDPR enforcement. We've seen the first cross-border voice cloning complaints filed with EU data protection authorities.

CW
ContentWarning_Dave

@ScamFighter_Priya I'm so sorry about what happened to your mother. This is exactly the kind of abuse scenario that keeps getting hand-waved away by the "innovation at all costs" crowd.

I've been tracking voice cloning fraud cases for a cybersecurity blog I contribute to, and the numbers are staggering. The FTC reported a 300% increase in AI voice scam complaints between 2024 and 2025. The median loss per victim is around $3,000, and elderly individuals are disproportionately targeted because the cloned voices of family members are incredibly convincing over a phone connection.

What frustrates me about ElevenLabs specifically is that their verification system for voice cloning is minimal. On Professional Voice Cloning, they require you to read a consent statement, but on Instant Voice Cloning (available on lower tiers), you just need about 60 seconds of audio. There's no meaningful verification that the audio belongs to you or that you have consent. The barrier to misuse is absurdly low.

HV
HollywoodVoice_Craig

25-year voice over industry veteran here. I do national TV commercials, video game characters, and animated series. I want to share something that happened to our industry that isn't getting enough attention in these legal discussions.

A major car manufacturer approached my agency last year wanting to use AI voice cloning for their 2026 ad campaign. They wanted to clone the voice of their longtime spokesperson (a well-known actor) and generate hundreds of localized radio spots without paying session fees for each recording. The actor's contract didn't explicitly address AI cloning because it was signed in 2021.

The actor's team pushed back hard. After months of negotiation, they settled on a deal where the actor recorded new training audio (paid at premium session rates), received a per-use royalty for every AI-generated spot, retained approval rights over all output, and the license was limited to 18 months. The total compensation ended up being more than traditional recording would have cost, which is actually the right outcome - the actor's voice has value precisely because of their fame and skill.

This should be the model. Voice cloning shouldn't be a way to cut talent out of the equation - it should be a new form of licensing that compensates performers fairly for the use of their most personal asset.

EL
eLearning_Raj

E-learning content creator here. We produce corporate training modules for Fortune 500 companies and we've been using ElevenLabs extensively for about a year. I wanted to share our experience because enterprise e-learning is a massive use case that doesn't get discussed much.

We produce roughly 200 hours of training content per quarter across 8 languages. Before ElevenLabs, we were spending $180,000/quarter on voice talent. Now we spend about $35,000/quarter on the Enterprise ElevenLabs plan plus a small amount for quality review. The cost reduction is dramatic but the legal setup required significant upfront investment.

Our enterprise legal framework includes:

  • Custom voice profiles created from consenting voice actors we hired specifically for this purpose
  • Work-for-hire agreements where the actors explicitly assign all rights including AI synthesis rights
  • A data processing agreement with ElevenLabs that complies with our clients' data security requirements
  • Disclosure language in all training modules noting AI-generated narration

The biggest legal headache has actually been client contracts, not voice rights. Many enterprise clients have AI usage policies that restrict or prohibit AI-generated content in their training materials. We've had to renegotiate several MSAs to explicitly address AI voice narration.

TG
TinaGriffin_VO

@eLearning_Raj Thanks for confirming exactly what I was worried about. You just described replacing $180,000/quarter in voice actor income with $35,000/quarter. That's $580,000 per year taken out of the pockets of working voice actors.

And before someone says "but the actors consented and got paid" - yes, a few actors got a one-time fee to record training audio. The rest of the voice actor community who would have been hired for those 200 hours of content per quarter? They just lost that income permanently. "Work for hire with AI synthesis rights" means those actors signed away their ongoing earning potential for a one-time payment.

This is the fundamental labor issue with voice cloning. It's not about whether it's legal - clearly it can be done legally. It's about whether it's ethical to use technology to massively reduce compensation for creative professionals. The legal framework is lagging behind the economic reality.

LP
LegalPod_Derek

I host a legal podcast focused on IP law and I've been covering voice cloning litigation for two years now. I want to highlight some recent celebrity voice cloning lawsuits that are shaping the legal landscape because these cases will set precedents that affect everyone in this thread:

Key cases to watch (2025-2026):

  • Scarlett Johansson vs. AI app developer: The actress sent a cease-and-desist after an app used a voice "inspired by" hers. This highlighted that even voices that are similar to but not directly cloned from a celebrity can trigger right of publicity claims
  • Estate of Robin Williams: Williams' estate has been aggressive about preventing AI use of his voice, citing his specific wishes that his likeness not be exploited after death. They've sent C&Ds to multiple AI voice platforms
  • Drake/The Weeknd AI song: While primarily a music copyright case, it established that AI-generated content mimicking a specific artist's voice can be removed from platforms under existing copyright frameworks
  • Tennessee vs. unnamed defendants: First criminal prosecution under the ELVIS Act for commercial use of an AI-cloned country music singer's voice without consent

The trend is clear: courts are increasingly willing to extend traditional right of publicity protections to AI-generated voice content. Platforms like ElevenLabs are likely to face increased pressure to implement stricter verification systems.

AK
AudioArchivist_Kim

Fascinating thread. I work at a university library managing a historical audio archive. We've been approached by several documentary projects wanting to use our archival recordings as training data for voice cloning of deceased historical figures. This raises questions nobody in our field was prepared for.

Our archive holds recordings of major 20th century figures - politicians, activists, authors - many donated by families with deeds of gift that predate AI entirely. The typical deed says something like "for research, educational, and public use." Does AI voice cloning fall under "educational use" if it's for a documentary? What about a commercial audiobook?

We've essentially put a moratorium on releasing archival audio for AI training purposes until we get clearer legal guidance. We've consulted with three different IP attorneys and gotten three different opinions. The intersection of archival access rights, posthumous publicity rights, and AI training data rights is genuinely uncharted territory.

If any of the attorneys in this thread have guidance on how archives should approach these requests, I'd be very grateful. We're trying to be responsible stewards of these materials while not being so restrictive that we prevent legitimate scholarly and educational use.

RS
Rachel_MediaLaw Attorney

@AudioArchivist_Kim This is genuinely one of the hardest questions in this space right now. Let me offer some framework, though I'll caveat that there isn't settled law here:

Key considerations for archives:

  1. Deed of gift language: "Research and educational use" almost certainly doesn't encompass AI training. Deeds written before 2020 couldn't have contemplated this use. You'd need to interpret the donor's likely intent, and most courts would lean toward a narrow reading
  2. Posthumous publicity rights: Even if the archive has the right to play the recordings, the deceased person's estate may separately hold publicity rights that cover AI synthesis. These are two different legal rights
  3. Transformative use defense: There's an argument that AI voice synthesis from archival audio is transformative enough to qualify as fair use, but this hasn't been tested in court and I wouldn't want an archive to be the test case

My practical recommendation: require requestors to obtain separate clearance from the relevant estate before you release audio for AI training purposes. This protects the archive from liability while still allowing legitimate use. Include language in your release forms specifically excluding AI training unless separately authorized.

BT
BrandTech_Melissa

I run a brand consultancy and I want to flag something about the FTC's evolving position on voice cloning in advertising that I don't think has been fully covered here.

In January 2026, the FTC issued supplementary guidance to their existing endorsement guidelines that specifically addresses AI-generated voices in advertising. The key points that affect anyone using ElevenLabs for commercial ad content:

  • Endorsement attribution: If an AI-cloned voice of a real person is used in an ad, it's treated as an endorsement by that person. The FTC's position is that consumers would reasonably believe the person is actually endorsing the product
  • Disclosure requirements: AI-generated voice content in ads must include "clear and conspicuous" disclosure. A small text disclaimer isn't sufficient - the FTC wants audible or prominently visual disclosure
  • Liability: Both the advertiser AND the platform/tool provider could be liable for deceptive AI voice use in advertising

For @MarketingChris_Agency and others doing agency work - make sure your disclosure practices align with the January 2026 guidance, not just the older 2025 rules. The bar has been raised significantly. We've updated all our client contracts to include explicit AI voice disclosure obligations.

DX
DevOps_Xander

@TinaGriffin_VO I hear your concerns and they're legitimate. But I want to push back on one thing: the idea that AI voice cloning is purely extractive for voice actors. In our case, the 15 actors we hired are earning more than they would have on a traditional project because the revenue share means they get paid every time we ship an update with new dialogue.

The actors who recorded for us in 2025 have collectively earned about $340,000 so far, and the game hasn't even launched yet. That's from the initial recording fees plus the revenue share on pre-orders. When we launch and start generating real revenue, their share increases. Compare that to traditional VO where you do a session, get a buyout, and never hear from the studio again.

I'm not saying every company will do it this way - many won't. But the technology enables new compensation models that can actually be better for performers if the contracts are structured fairly. The problem isn't the technology itself, it's companies that use it to cut costs at the expense of talent. That's a labor negotiation issue, not a technology issue.

SV
SynthVoice_Comparisons

Since this thread is specifically about ElevenLabs, I think it's worth comparing their consent and legal framework with competitors. I've evaluated all the major platforms for a client and here's how they stack up:

  • ElevenLabs: Requires a consent checkbox for voice cloning. Professional Voice Clone requires reading a consent statement aloud. Commercial use on paid plans only. Has suspended accounts for violations. No watermarking of generated audio.
  • Resemble.AI: Requires a consent verification video where the voice owner states their consent on camera. More rigorous than ElevenLabs' checkbox approach. Offers audio watermarking to trace generated content. Generally considered the most compliance-friendly option for enterprise.
  • Play.ht: Similar consent requirements to ElevenLabs. Offers a "Voice Cloning Consent API" for enterprise customers that integrates consent collection into the workflow. Commercial use restricted to paid plans.
  • Microsoft Azure Custom Neural Voice: Most restrictive. Requires a recorded consent statement, limits access to approved use cases, and has a human review process for applications. Best for regulated industries but slowest to deploy.

If legal compliance is your top priority, Microsoft Azure or Resemble.AI offer the strongest verification frameworks. ElevenLabs offers the best voice quality in my testing, but their consent verification is the weakest among major platforms. This is a known criticism and something I expect them to address as regulation tightens.

JW
AudiobookJay

I want to update this thread on the audiobook front since things have changed significantly since my first post. The major publishers have now released formal policies on AI narration, and it's a mixed bag:

Publisher AI narration policies (as of February 2026):

  • Penguin Random House: Allows AI-assisted narration only for backlist titles with author consent. Requires disclosure on the audiobook listing. No AI narration for frontlist titles
  • HarperCollins: Blanket prohibition on fully AI-narrated audiobooks. Allows AI for pronunciation assistance and "voice enhancement" but the core narration must be human
  • Simon & Schuster: Case-by-case review. They've approved a few AI-narrated projects for niche non-fiction titles where finding a narrator with subject expertise is difficult
  • Indie/self-published (via ACX/Findaway): Most permissive. Requires disclosure but allows AI narration. Amazon's Kindle Direct has a separate "AI narrated" category

For anyone using ElevenLabs for audiobook production: check your publishing contract carefully. Many standard audiobook contracts now include clauses specifically addressing AI narration. If your contract was signed before 2025, it probably doesn't address AI at all, which creates ambiguity that could go either way in a dispute.

I've personally shifted my business model - I now position myself as a "human narrator enhanced by AI" rather than someone who produces "AI-narrated audiobooks." The distinction matters both legally and commercially.

WT
WebDev_Tomasz

Slightly tangential question but relevant to the consent/training data discussion: does anyone know what ElevenLabs' own training data practices look like? Specifically, what voice data was used to train their base models (not custom clones, but the underlying speech synthesis engine)?

I ask because there's been a wave of class-action lawsuits against AI companies over training data. If ElevenLabs trained their foundational models on voice data scraped from the internet without consent - YouTube videos, podcasts, public recordings - that could create a chain of liability that extends to users of the platform.

It's similar to the image generation debate: if Midjourney was trained on copyrighted images, does using Midjourney make you liable for the underlying copyright infringement? The same logic could apply to voice synthesis. If ElevenLabs' base model was trained on voice data that wasn't properly licensed, is there downstream liability for commercial users?

I haven't been able to find clear documentation from ElevenLabs about their training data sources. Their privacy policy mentions collecting "voice samples and audio data" but doesn't specify origins for the base model training set. This opacity is concerning for anyone building a commercial product on top of their platform.

KL
KateLegalTech Attorney

@WebDev_Tomasz That's one of the most important questions in AI law right now and unfortunately there's no clear answer yet. Let me share what we know and what's pending:

Training data liability - current state:

  • Multiple class actions have been filed against AI companies (primarily image/text generators) over training data. None have reached final judgment yet
  • The key legal theory is that using copyrighted material to train AI models constitutes infringement, regardless of whether the output is substantially similar to any specific training sample
  • For voice specifically, the argument would be based on right of publicity rather than copyright, since voice performances have both copyright and publicity right protections

Downstream user liability: This is genuinely uncertain. The closest analogy is using a product manufactured from stolen materials - generally, an innocent purchaser has some protection, but it depends on whether they knew or should have known about the provenance issue.

Practically speaking: if you're an enterprise customer using ElevenLabs, your services agreement should include indemnification provisions where ElevenLabs agrees to defend you against claims arising from their platform's training data. Check your agreement - many standard terms do include this, but the scope may be limited. If you're on a consumer plan, the TOS typically shift all risk to the user.

NB
NewbiePodcaster_Sam

@VoiceTech_Nina Thank you for clarifying! So just to make sure I understand: using the stock voices from the library is fine as long as I'm on the right plan tier for my use case. Since my podcast now has a Patreon that makes about $200/month, I should upgrade to a paid plan. Got it.

Follow-up question though: I've been thinking about cloning my own voice so I can have consistent intros even when I have a cold or my voice is tired. If I clone my own voice on a Starter plan, and then later downgrade to free, what happens to the voice clone? Can ElevenLabs keep using my voice data even after I stop paying? Their privacy policy is kind of vague on data retention.

This feels like it should be simple but after reading this whole thread I'm now paranoid about everything related to voice data.

MR
MusicRights_Elena

I work in music rights clearance and I'm seeing voice cloning issues bleed into my field in ways that weren't anticipated. Specifically, artists are now using ElevenLabs and similar tools to generate "vocal features" where an AI clone of Artist A sings on Artist B's track, with consent.

The legal question is: who gets the vocal performance credit? Traditional music contracts assume a human performed the vocals. If an AI generated the vocal track from a clone, is it a "performance" for the purpose of mechanical royalties? What about neighboring rights in jurisdictions that recognize them? The performing rights organizations (ASCAP, BMI, SESAC) haven't published clear guidance on this yet.

We had a case where an artist cloned their own voice using ElevenLabs to generate ad-libs and backing vocals for their album. Their label's contract entitled the label to a share of "all recordings featuring the artist's vocal performance." Does an AI-generated vocal qualify? The artist argued no - they didn't actually perform. The label argued yes - it's their voice regardless of how it was produced.

This was settled privately but it illustrates how existing contracts don't account for AI voice cloning. Anyone in the music industry should be adding explicit AI clauses to all new agreements.

SF
ScamFighter_Priya

@ModeratorJanelle Thank you for the resources. Update on my situation: the FBI identified the individual who cloned my voice. Turns out they had cloned voices of over 40 people using publicly available audio from YouTube, podcasts, and social media. They used ElevenLabs' Instant Voice Clone feature on a paid account registered with a stolen credit card.

What's disturbing is how easy it was. My YouTube channel has hundreds of hours of me speaking clearly in a quiet environment - perfect training data for voice cloning. The scammer didn't need any special technical skills. They literally just uploaded clips of my videos to ElevenLabs and had a usable clone within minutes.

I'm now working with an attorney to explore civil claims against both the scammer and potentially ElevenLabs for insufficient safety measures. Under California law (where I'm based), I may have claims under both the right of publicity statute and the state's consumer protection laws. My attorney is also looking at whether ElevenLabs has any duty of care to implement voice verification that could prevent this kind of abuse.

JN
JapanVoice_Naoko

Adding to @GlobalTech_Anwar's international perspective - the situation in Japan is quite unique and relevant since Japan is the world's largest market for voice acting (seiyuu culture). Japan amended its Copyright Act in 2024 to include specific provisions for AI-generated voice content.

The Japanese approach distinguishes between "voice likeness rights" (which protect specific individuals) and "voice style rights" (which are more limited). A direct clone of a specific person's voice requires consent, similar to US law. However, creating a synthetic voice that merely sounds similar to a category of voices (e.g., "young female anime character voice") is generally permitted because you're not copying any specific individual.

ElevenLabs has a significant user base in Japan, and the company has reportedly hired Japanese-speaking legal staff to handle compliance. The Japanese voice actor union (JARPA) has been in negotiations with AI voice platforms about licensing frameworks. Unlike SAG-AFTRA's more adversarial approach, JARPA is pursuing a collaborative model where voice actors can opt in to licensing their voices for AI synthesis with standardized compensation.

For anyone using ElevenLabs to produce content for the Japanese market: be aware that voice rights protections in Japan are in some ways stronger than in the US, and the cultural sensitivity around voice acting is extremely high.

EL
eLearning_Raj

@TinaGriffin_VO I understand your frustration, and I don't want to dismiss it. But I want to share the other side: before we adopted AI voice synthesis, we weren't spending $180k/quarter on voice actors. We were producing lower quality content with text-to-speech engines that sounded robotic, or we simply weren't producing multilingual content at all. The alternative to AI voice cloning for us wasn't "hire more voice actors" - it was "deliver worse training content or none at all."

That said, your point about fair compensation is valid. We recently revised our voice actor agreements to include annual payments (not just one-time buyouts) as long as we continue using their voice profiles. It's not a perfect system but it means the actors earn ongoing income from the value their voices provide. We arrived at this structure partly because of feedback from voice actor communities like this one.

I think the e-learning and corporate training sector is a good example of where AI voice synthesis creates genuine new value rather than just extracting it from existing voice actors. The market for multilingual corporate training content has expanded significantly because AI makes it affordable. That's new work that didn't exist before, not displaced work.

AL
AccessibleLearning_Jo

I wanted to follow up on my earlier post about accessibility use cases. We've had a significant development. Our organization just completed a pilot program with ElevenLabs where they provided free Enterprise-tier access for verified accessibility providers working with patients who have speech-affecting conditions.

The terms of the pilot are notable from a legal perspective:

  • Voice clones created under the accessibility program are classified as "medical assistive technology" which triggers different regulatory considerations than commercial voice cloning
  • Patient voice data is handled under HIPAA-compliant data processing agreements (at least for US patients)
  • The voice clone belongs to the patient, not to ElevenLabs or the provider. If the patient wants their voice data deleted, ElevenLabs must comply within 30 days
  • Posthumous use is governed by the patient's advance directive or healthcare proxy, not by ElevenLabs' standard TOS

This is a meaningful step forward. I hope ElevenLabs makes this a permanent program, and I hope other voice AI platforms follow suit. Accessibility use cases deserve special treatment in both platform policies and legislation.

PM
PodcastPro_Marcus OP

This thread has grown way beyond my original question and I'm grateful for all the perspectives. Quick update on my situation: I went ahead with the ElevenLabs Pro plan and cloned my voice for ad reads. Been using it for about two months now.

Practical results: it saves me roughly 6-8 hours per week on ad recording. The quality is good enough that most listeners can't tell the difference, though I did get a few emails from sharp-eared fans asking if something sounded "different." I now include a brief note in my show notes that some ad reads use AI-assisted voice technology.

One thing I didn't anticipate: some sponsors actually prefer the AI voice reads because I can turn around scripts same-day instead of scheduling recording sessions. The consistency is also better - no background noise variations, no vocal fatigue on long recording days. From a pure business perspective, it's been a net positive.

Legal documentation I put in place based on this thread's advice: signed self-consent statement, copies of all original training audio with timestamps, disclosure language in sponsor contracts, and a note in my podcast's terms of use about AI-assisted content. Thanks especially to @Rachel_MediaLaw and @KateLegalTech for the guidance.

CW
ContentWarning_Dave

Since @SynthVoice_Comparisons brought up platform comparison, I want to add a security comparison angle. I've been evaluating voice cloning platforms for their abuse prevention measures, and the differences are stark:

Abuse prevention measures by platform:

  • ElevenLabs: Basic consent checkbox, verbal consent statement for Professional Voice Clone, account-level ban for violations. No audio watermarking. No real-time monitoring for known voice profiles
  • Resemble.AI: Video consent verification, audio watermarking in all output, content moderation API that can flag potentially problematic use patterns
  • Microsoft Azure: Application-based access, human review of use cases, audio watermarking, restricted to approved scenarios. Hardest to abuse but also hardest to access
  • Open source tools (Coqui, RVC): No safeguards whatsoever. Anyone can clone any voice with zero verification. This is actually where most fraud occurs

ElevenLabs gets singled out in these discussions partly because of brand recognition, but the reality is that open-source voice cloning tools are far more dangerous from an abuse perspective. At least ElevenLabs can (and does) suspend accounts. With open-source tools, there's no entity to hold accountable.

Any meaningful regulation needs to address the full ecosystem, not just commercial platforms. Otherwise you're just pushing bad actors toward unregulated tools while burdening legitimate users with compliance costs.

FD
ForensicDigital_Mark

Digital forensics consultant here. I work with law firms on AI-related fraud cases, including several involving voice cloning. I want to address something practical that hasn't come up: how do you prove or disprove that audio is AI-generated?

Current detection capabilities are mediocre at best. The major detection tools (including ElevenLabs' own AI Speech Classifier) have accuracy rates around 70-85% depending on the audio quality and the synthesis method used. That means 15-30% of AI-generated audio can pass as human, and some human audio gets flagged as AI. This creates real problems in both legal proceedings and platform enforcement.

For anyone who might need to prove that their voice clone output is legitimately authorized (like @PodcastPro_Marcus), I recommend: maintain a chain of custody for all generated audio, keep API logs from ElevenLabs showing when content was generated and from which voice profile, and store the generation parameters. If you're ever challenged, this documentation is far more useful than any detection tool.
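
To make that concrete, here's a minimal sketch of the kind of provenance log a creator could keep alongside their generated audio. This is purely illustrative Python using only the standard library - it is not ElevenLabs tooling, and all file names, profile names, and field names are hypothetical. The idea is simply to record, per clip, when it was generated, from which voice profile, with what parameters, and a hash of both the script and the output file so the record can later be matched to the audio.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("voice_provenance_log.jsonl")  # hypothetical log location


def sha256_of_file(path: Path) -> str:
    """Hash the audio file so later copies can be matched back to this record."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def log_generation(audio_path: str, voice_profile_id: str,
                   script_text: str, generation_params: dict) -> dict:
    """Append one provenance record per generated clip (field names are illustrative)."""
    record = {
        "generated_at_utc": datetime.now(timezone.utc).isoformat(),
        "voice_profile_id": voice_profile_id,      # which clone produced the audio
        "audio_file": audio_path,
        "audio_sha256": sha256_of_file(Path(audio_path)),
        "script_sha256": hashlib.sha256(script_text.encode("utf-8")).hexdigest(),
        "generation_params": generation_params,    # e.g. the model/voice settings you used
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record


# Example (hypothetical file and profile names):
# log_generation("ad_read_sponsor_a.mp3", "my-own-voice-clone",
#                "Sponsor script text...", {"model": "example-model", "stability": 0.5})
```

Pair records like these with your original training audio and a signed self-consent statement and you have the chain of custody described above.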

Conversely, for victims like @ScamFighter_Priya, the forensic challenge is significant. We can often determine that audio is AI-generated, but linking it to a specific tool or account is difficult without subpoena-level access to the platform's records. This is one reason law enforcement cases move slowly.

HV
HollywoodVoice_Craig

I want to address the SAG-AFTRA angle more specifically since I sit on a committee that's been working on AI voice policies. Here's where the union stands as of February 2026:

SAG-AFTRA AI Voice Policy (current):

  • All SAG-AFTRA contracts now require explicit, informed consent before any AI voice synthesis of a member's performance
  • Consent must be separate from the general performance agreement - it can't be buried in boilerplate
  • Members must receive compensation for AI use that is at minimum equal to what they would have earned performing the content traditionally
  • The union maintains a "No AI Without Consent" registry where members can declare they do not consent to any AI use of their voice
  • Employers who violate AI consent provisions face the same penalties as other contract violations, including arbitration and potential strike action

The union's position isn't anti-technology. It's that voice actors should control and be compensated for the use of their voices, whether the performance is live or synthesized. The car manufacturer deal I mentioned earlier is actually a model the union points to as a positive example - the actor consented, was fairly compensated, and retained control.

If you're hiring SAG-AFTRA voice actors and planning to use AI cloning, you need to negotiate AI provisions separately. Do not assume that a standard session contract covers AI use - it almost certainly doesn't, and proceeding without explicit consent is a contract violation.

DR
DocFilm_Renata

@KateLegalTech Thank you for the detailed response. We took your advice and went back to the estate for a specific AI voice cloning amendment. Good news: they agreed, but the negotiation was more complex than expected.

The estate wanted approval rights over every piece of dialogue the AI voice would "speak," which is reasonable. They also wanted a limitation that the voice clone could only read the subject's own published words - no putting new words in their mouth, even contextual narration like "in this letter, he wrote..." They felt strongly that having their deceased family member "say" things they never said would be disrespectful regardless of the content.

We agreed to all their conditions. The streaming platform's legal team reviewed the amendment and approved it. Total additional cost was about $15,000 in legal fees and a supplemental licensing fee to the estate. Worth every penny for the legal certainty.

One interesting twist: the estate requested that the documentary include an on-screen notice explaining that the voice was AI-recreated from archival recordings. The streaming platform initially pushed back on this (they wanted the AI voice to feel seamless), but the estate made it a non-negotiable condition. I actually think it adds to the documentary's honesty and the audience response has been positive.

RM
RadioMike_Veteran

30-year radio broadcast veteran here. I've been reading this entire thread and I want to offer a reality check from the broadcast industry perspective. Radio stations across the country are already using AI voice cloning at scale, and the legal and ethical conversations are lagging far behind the actual adoption.

I know of at least three major broadcast groups that have cloned the voices of their top DJs and are using AI to generate localized content for small-market stations. The DJ records their main show in one city, and AI clones handle the local weather, traffic, and station IDs for a dozen other markets. The DJ gets paid their regular salary - there's no additional compensation for the AI usage because their contracts don't address it.

The legal exposure here is enormous. These DJs didn't specifically consent to voice cloning. Their employment contracts say things like "your services are exclusive to the station" and "all recordings made during employment are work product" - but voice cloning wasn't contemplated when those contracts were drafted. When one of these DJs leaves or gets fired, what happens to their voice clone? Can the station keep using it?

I've raised these concerns internally and been told "legal is handling it." But from what I can see, nobody is actually handling it. They're just doing it and hoping nobody sues. It's a ticking time bomb.

RS
Rachel_MediaLaw Attorney

@RadioMike_Veteran What you're describing is unfortunately common and you're right - it is a ticking time bomb. Let me give the legal analysis:

"Work product" clauses and voice cloning: A standard employment agreement that makes recordings "work product" of the employer typically covers the actual recordings made during employment. It does not automatically grant the employer the right to create new, synthetic performances using the employee's voice. These are fundamentally different rights.

Think of it this way: if a studio owns the copyright to an actor's movie performance, that doesn't mean they can use AI to create a new movie starring that actor without their consent. The performance rights and the likeness/publicity rights are separate legal concepts.

What happens when the DJ leaves?

  • If no specific AI/voice cloning consent was given, the station should immediately stop using the voice clone
  • Continuing to use a former employee's cloned voice is almost certainly a right of publicity violation
  • It could also constitute false endorsement under the Lanham Act - listeners may believe the DJ still works at the station
  • In states with ELVIS Act-type legislation, it could trigger statutory damages

If I were advising those broadcast groups, I would tell them to immediately get explicit voice cloning consent from every affected DJ, negotiate fair compensation, and implement a policy for clone deactivation upon employment termination. The longer they wait, the more exposed they are.

MC
MarketingChris_Agency

@BrandTech_Melissa Thanks for flagging the January 2026 FTC guidance update. We reviewed it with our compliance team and had to make several changes to our workflow:

What we changed after the FTC January 2026 update:

  1. We moved the audio disclosure from the end of the ad to the beginning - under the FTC's "clear and conspicuous" standard, a disclosure tacked onto the end of a 30-second spot is unlikely to qualify
  2. We added an audible disclosure (not just text in show notes) that says "This message features AI-assisted voice technology"
  3. We created a client-facing compliance checklist that both the client and our team sign off on before any AI-voiced ad goes live
  4. We stopped using AI voice cloning for testimonial-style ads entirely - the FTC specifically called out endorsement scenarios as high-risk

The testimonial point is critical. If you're using an AI clone of a real person's voice to deliver what sounds like a personal endorsement, the FTC considers that deceptive unless: (a) the person actually endorses the product, (b) the AI use is clearly disclosed, and (c) the AI-generated statements accurately reflect the endorser's genuine opinions. Meeting all three criteria is harder than it sounds.
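
For what it's worth, here's a rough sketch of how our internal pre-flight check could be encoded - the field names and logic are just our own shorthand for the checklist above, not anything taken from the FTC guidance or a real compliance tool:

```python
from dataclasses import dataclass

@dataclass
class AiVoicedSpot:
    """Hypothetical metadata for one AI-voiced ad (field names are illustrative)."""
    disclosure_text: str              # e.g. "This message features AI-assisted voice technology"
    disclosure_audible: bool          # spoken in the audio itself, not only in show notes
    disclosure_at_start: bool         # plays at the beginning of the spot
    testimonial_style: bool           # sounds like a personal endorsement
    endorser_actually_endorses: bool  # criterion (a)
    statements_match_opinion: bool    # criterion (c); criterion (b) is the disclosure itself

def preflight_issues(spot: AiVoicedSpot) -> list[str]:
    """Return the checklist items a spot fails; an empty list means it clears our internal review."""
    issues = []
    if not spot.disclosure_text.strip():
        issues.append("missing AI-voice disclosure copy")
    if not spot.disclosure_audible:
        issues.append("disclosure must be audible in the ad, not only in show notes")
    if not spot.disclosure_at_start:
        issues.append("move the disclosure to the start of the spot")
    if spot.testimonial_style and not (
        spot.endorser_actually_endorses
        and spot.disclosure_audible
        and spot.statements_match_opinion
    ):
        issues.append("testimonial-style spot fails at least one endorsement criterion")
    return issues
```

Both the client and our team still sign off manually; the script just keeps the easy-to-forget items from slipping through.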

We've seen competitors who haven't updated their practices. It's only a matter of time before the FTC makes an example of someone in the AI voice advertising space.

GT
GlobalTech_Anwar

Following up on my earlier post about international regulations - there's been a significant development in the EU. The European Data Protection Board (EDPB) issued a formal opinion in February 2026 specifically addressing AI voice cloning under GDPR:

Key points from the EDPB opinion:

  • Voice data is confirmed as biometric data under GDPR Article 9, meaning processing requires explicit consent or another Article 9 exemption
  • Creating a voice clone constitutes "processing of biometric data for the purpose of uniquely identifying a natural person" - the strictest category
  • Consent must be freely given, specific, informed, and unambiguous. A generic checkbox does not meet this standard
  • The right to erasure (Article 17) explicitly applies to voice clone models - if a person requests deletion, the entire model must be deleted, not just the training data
  • Transfer of voice clone data outside the EU requires standard contractual clauses or adequacy decisions, which affects US-based platforms like ElevenLabs

This opinion has immediate practical implications for anyone using ElevenLabs with EU voice data or distributing AI-voiced content to EU audiences. Non-compliance penalties under GDPR can reach 4% of global annual turnover or 20 million euros, whichever is higher. These aren't theoretical risks - EU data protection authorities have been aggressive with enforcement.
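
To make the consent point concrete, here's a rough sketch of the kind of consent record this approach implies - the fields are my own illustration, not language from the EDPB opinion or the GDPR itself:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class VoiceCloneConsent:
    """Hypothetical record of explicit consent to clone one person's voice."""
    data_subject: str            # the person whose voice is being cloned
    purposes: list[str]          # specific, named uses - not "any AI processing"
    notice_reference: str        # where the plain-language explanation was provided
    obtained_at: datetime
    method: str                  # e.g. "signed amendment", "recorded verbal consent"
    withdrawal_contact: str      # how the person can withdraw or request erasure
    withdrawn_at: datetime | None = None

    def permits(self, purpose: str) -> bool:
        """Only purposes named in the record are covered, and withdrawal ends all of them."""
        return self.withdrawn_at is None and purpose in self.purposes

consent = VoiceCloneConsent(
    data_subject="Jane Example",
    purposes=["ad reads for The Example Podcast"],
    notice_reference="voice-clone-notice-v2.pdf",
    obtained_at=datetime(2026, 1, 15, tzinfo=timezone.utc),
    method="signed amendment to the narration agreement",
    withdrawal_contact="privacy@example.com",
)
assert not consent.permits("audiobook narration")  # not a named purpose, so not covered
```

And remember the erasure point: when consent is withdrawn, you delete the clone model itself from the platform, not just your copy of the training audio.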

TG
TinaGriffin_VO

@DevOps_Xander and @eLearning_Raj - I appreciate you both engaging with my concerns constructively rather than dismissively. The revenue share model and annual payments you describe are definitely better than straight buyouts. I'll give credit where it's due.

But I want to share some data from the voice acting community that paints a broader picture. The National Association of Voice Actors conducted a survey in late 2025, and the results are sobering: 67% of full-time voice actors reported a decline in income compared to 2023. 43% said they'd lost at least one recurring client to AI voice alternatives. The median income for full-time VO work dropped from $52,000 in 2023 to $38,000 in 2025.

Not every company is doing it the right way. For every @DevOps_Xander who builds in fair compensation, there are a hundred companies quietly replacing voice actors with AI clones and pocketing the savings. The legal frameworks being discussed in this thread are important, but they primarily protect against unauthorized cloning. They don't address the broader economic displacement that happens even when everything is perfectly legal and consensual.

I'm not asking for a ban on voice cloning. I'm asking for the industry to develop norms where the humans whose voices power these systems are treated as partners, not as raw material to be extracted from and discarded.

CT
ComplianceTech_Omar

Enterprise compliance officer here. I want to address the ElevenLabs Enterprise agreement specifically since several people in this thread are using or considering the Enterprise tier. I've reviewed their Enterprise terms for three different clients and there are some important distinctions from the consumer/pro plans:

ElevenLabs Enterprise agreement - notable provisions:

  • Data isolation: Enterprise customers get dedicated model instances. Your voice data is not commingled with other customers' data and is not used to improve ElevenLabs' general models
  • Indemnification: ElevenLabs provides limited indemnification for IP claims arising from their base technology (not from your voice data inputs)
  • Data deletion: Upon contract termination, ElevenLabs will delete all custom voice data within 90 days (vs. vague retention policies on consumer plans)
  • SLA on compliance: Enterprise agreement includes a commitment to maintain compliance with specified regulations (you choose which apply to your use case)
  • Audit rights: Enterprise customers can audit ElevenLabs' data handling practices annually

For any organization doing commercial voice cloning at scale, the Enterprise agreement is really the only option that provides adequate legal protections. The consumer plans shift nearly all risk to the user. The price jump is significant - Enterprise pricing is custom and typically starts around $500-1,000/month, well above the self-serve paid plans - but the legal protection and data governance provisions are worth it for any serious commercial deployment.

LW
LegalWriter_Hannah

Audiobook narrator and author here. I want to add a wrinkle to the audiobook discussion that @AudiobookJay started. I recently learned that several authors are now including "AI narration prohibition" clauses in their publishing contracts. This is becoming a real negotiation point.

My agent reports that about 30% of new audiobook contracts she's negotiated in early 2026 include language that either: (a) prohibits AI narration entirely, (b) requires author approval for any AI-assisted narration, or (c) specifies that AI narration triggers a higher royalty rate to the author. Authors are starting to view the choice of narrator - human or AI - as a creative decision they should control.

There's also a growing consumer backlash. Goodreads and Audible reviews increasingly mention AI narration negatively, and some authors have seen ratings drop on AI-narrated titles. One author I know pulled their book from an AI narration deal after readers complained, even though the narration quality was technically good.

The legal right to use AI narration is one thing. The market reality is more complex. For producers considering ElevenLabs for audiobooks, factor in the reputational risk and potential reader backlash, not just the legal compliance.

MJ
ModeratorJanelle Moderator

This thread has become an incredibly valuable resource. I'm going to pin it to the top of the IP & Content category. A few housekeeping notes:

For easy reference, here's a summary of key resources mentioned in this thread:

  • FTC Voice Cloning guidance: Updated January 2026, covers advertising disclosure requirements
  • State laws: California Civil Code 3344, New York Civil Rights Law 50-51, Tennessee ELVIS Act
  • EU AI Act: Effective August 2025, mandatory transparency obligations for voice synthesis
  • EDPB opinion on voice cloning under GDPR: February 2026, voice data as biometric data
  • SAG-AFTRA AI voice provisions: No AI Without Consent registry and contract requirements
  • ElevenLabs TOS: Section 5 on voice rights, plan-specific commercial use terms

Remember that this is a discussion forum, not legal advice. The attorneys who contribute here are sharing general information, not forming attorney-client relationships. For specific situations, consult with qualified counsel in your jurisdiction.

VP
VoicePatent_Steven

IP attorney here, specializing in patent and trade secret law rather than entertainment law. I want to raise an angle that hasn't been discussed: can a unique cloned voice be considered a trade secret or proprietary asset?

Consider @PodcastPro_Marcus's scenario scaled up: a media company develops a distinctive AI voice character that becomes closely associated with their brand - think of it like a mascot, but audio-only. That voice was created using ElevenLabs' technology and a custom blend of training audio. Competitors then try to replicate that voice using different tools.

The legal protections here are layered. The underlying voice of a real person is protected by right of publicity. But what about a wholly synthetic voice that doesn't correspond to any real person? It might be protectable as a trade secret (if the training process and parameters are kept confidential), or potentially through trademark law if the voice functions as a brand identifier. The Lanham Act has been successfully used to protect distinctive sounds as trademarks - think of NBC's chimes or the MGM lion's roar.

For companies investing significant resources in developing custom AI voice characters on ElevenLabs or similar platforms, I'd recommend documenting the creation process, keeping the training data and parameters confidential, and considering trademark registration for distinctive voice identifiers. These protections layer on top of whatever rights you get through ElevenLabs' TOS.
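
As a purely illustrative sketch (the manifest format is hypothetical, not an ElevenLabs feature), documenting the creation process can be as lightweight as hashing the training audio and recording the creation parameters in a file you keep access-controlled:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash each training file so the manifest proves exactly which takes were used."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_voice_manifest(training_dir: str, creation_params: dict) -> dict:
    """Assemble a provenance record for a custom voice character."""
    files = sorted(Path(training_dir).glob("*.wav"))
    return {
        "created_at": datetime.now(timezone.utc).isoformat(),
        "training_files": [{"name": p.name, "sha256": sha256_of(p)} for p in files],
        "creation_parameters": creation_params,  # keep these confidential
    }

manifest = build_voice_manifest(
    "training_audio/",
    {"platform": "ElevenLabs", "notes": "custom blend; exact settings withheld from this example"},
)
Path("voice_manifest.json").write_text(json.dumps(manifest, indent=2))
```

Dated, hashed records like this are exactly the kind of evidence that supports a later trade secret or trademark argument.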

NW
NarratorWorks_Denise

Audiobook narrator with a niche specialization in technical and medical texts. I want to respond to @LegalWriter_Hannah's point about author pushback on AI narration, because I'm seeing the opposite trend in my corner of the market.

For technical, scientific, and medical audiobooks, authors are actually requesting AI narration. The reason is practical: finding a human narrator who can correctly pronounce complex medical terminology, chemical compounds, and Latin species names is extremely difficult and expensive. I can do it because I have a biology degree and 10 years of experience, but there aren't many of us.

One publisher I work with now uses a hybrid model: I narrate the first chapter and any complex sections, and my ElevenLabs voice clone handles the more straightforward narrative portions. The author reviews all AI-generated audio for accuracy. It cuts production time by 60% and the author gets a more consistent, technically accurate product.
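
The routing itself is nothing sophisticated. A minimal sketch, assuming ElevenLabs' v1 text-to-speech endpoint (verify the endpoint and request fields against their current API docs before relying on it):

```python
import os
import requests

API_KEY = os.environ["ELEVENLABS_API_KEY"]
CLONE_VOICE_ID = "your-cloned-voice-id"  # placeholder for the cloned voice's ID

def synthesize_with_clone(text: str, out_path: str) -> None:
    """Generate one narration segment with the voice clone (draft audio for author review)."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{CLONE_VOICE_ID}",
        headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
        json={"text": text},
        timeout=120,
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

# Complex sections go to the human narrator's queue; straightforward narrative
# goes to the clone and then to the author for an accuracy pass.
sections = [
    {"id": "ch01_terminology", "text": "(complex medical passage)", "complex": True},
    {"id": "ch02_narrative", "text": "(straightforward narrative passage)", "complex": False},
]
for section in sections:
    if section["complex"]:
        print(f"{section['id']}: record with human narrator")
    else:
        synthesize_with_clone(section["text"], f"{section['id']}_ai_draft.mp3")
        print(f"{section['id']}: AI draft generated, queued for author review")
```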

My point is that the "AI narration is bad" narrative is too simplistic. In some genres and use cases, AI narration is genuinely better for the end product. The key is matching the tool to the task and being transparent about it.

KD
DeepfakeDefense_Kara

I'm a policy researcher focused on deepfake legislation. I want to provide an update on state-level voice cloning laws since this is moving fast and several comments in this thread reference outdated information:

State voice cloning / deepfake laws - status as of February 2026:

  • California (AB 2602, AB 1836): Expanded protections effective January 2026. Now covers AI voice replicas specifically. Requires clear consent for any commercial voice synthesis. Creates a private right of action with statutory damages of $5,000 per violation
  • Tennessee (ELVIS Act): Still the gold standard. First state to explicitly cover AI voice cloning. Being used as a model for other states' legislation
  • New York: Expanded voice protections in 2024. Currently considering additional AI-specific amendments
  • Illinois: Introduced the AI Voice and Likeness Protection Act in 2025. Passed committee, floor vote expected spring 2026
  • Texas: Passed a voice deepfake law focused on fraud and election interference. Narrower than California/Tennessee but includes criminal penalties
  • Washington state: Introduced comprehensive AI transparency legislation including voice cloning disclosure requirements

At the federal level, the NO AI FRAUD Act and the AI Labeling Act are both in committee. Neither is expected to pass before mid-2026 at the earliest. In the meantime, state laws are creating a patchwork of different requirements that companies like ElevenLabs and their users need to navigate.

For anyone doing commercial voice cloning across state lines: you need to comply with the most restrictive state's requirements, not just your home state. If your content reaches California audiences, California law applies.

VT
VoiceTech_Nina

@NewbiePodcaster_Sam To answer your question about what happens to your voice clone data if you downgrade: I actually tested this. Here's what I found:

When you downgrade from a paid plan to free on ElevenLabs, your custom voice clones become inaccessible but are not immediately deleted. According to their current data retention policy, voice profile data is retained for up to 12 months after account downgrade or cancellation, after which it's "scheduled for deletion." However, the actual deletion timeline isn't guaranteed.

If you want your voice data definitively removed, you need to submit a specific data deletion request through their support system. Under GDPR (if you're in the EU) or CCPA (if you're in California), they're legally required to honor deletion requests within specific timeframes. For everyone else, you're relying on their voluntary data practices.

My recommendation: before downgrading, go into your account, delete all custom voice profiles manually, and then submit a data deletion request via support email. Keep a copy of the request and any confirmation you receive. This gives you documentation that you attempted to have your voice data removed, which could be important if any issues arise later.
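
If you'd rather script the cleanup than click through the dashboard, here's a minimal sketch against what I believe are the current v1 endpoints (confirm the paths and the "cloned" category value in ElevenLabs' API docs before running anything):

```python
import os
import requests

API_KEY = os.environ["ELEVENLABS_API_KEY"]
BASE = "https://api.elevenlabs.io/v1"
HEADERS = {"xi-api-key": API_KEY}

# List all voices on the account and delete the cloned ones, printing a record
# you can keep alongside your written deletion request.
voices = requests.get(f"{BASE}/voices", headers=HEADERS, timeout=30).json()["voices"]
for voice in voices:
    if voice.get("category") == "cloned":
        resp = requests.delete(f"{BASE}/voices/{voice['voice_id']}", headers=HEADERS, timeout=30)
        resp.raise_for_status()
        print(f"deleted voice profile {voice['name']} ({voice['voice_id']})")
```

Deleting the profiles this way doesn't replace the formal data deletion request - it just ensures nothing is left active on the account while that request is processed.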

KL
KateLegalTech Attorney

I want to wrap up my contributions to this thread with some forward-looking thoughts on where voice cloning law is headed, since I've been attending the ABA's AI and IP working group meetings:

What to expect in 2026-2027:

  • Federal legislation: Some form of federal voice cloning law is likely by mid-2027. It will probably establish minimum consent standards and preempt some (but not all) state laws. Expect heavy lobbying from both the AI industry and entertainment unions
  • Platform liability: The question of whether platforms like ElevenLabs bear secondary liability for users' voice cloning violations will be tested in court. Section 230 protections may apply but haven't been tested specifically for AI voice synthesis
  • Watermarking mandates: The EU AI Act already requires labeling of AI-generated content. Expect US legislation to follow with mandatory watermarking or provenance tracking for synthetic audio
  • Voice data as property right: Several legal scholars are proposing frameworks that would treat voice data as a form of personal property right, similar to how some states treat genetic information. This would fundamentally change the consent and compensation dynamics

For anyone operating in this space: build your legal and compliance frameworks with flexibility. The law is going to change significantly over the next 18 months. What's compliant today may not be compliant tomorrow. Maintain good records, prioritize consent, disclose AI use, and stay engaged with the evolving regulatory landscape.

This has been an excellent discussion. The diversity of perspectives - voice actors, developers, attorneys, accessibility advocates, fraud victims - reflects the true complexity of this issue. There are no simple answers, but being informed and thoughtful about these questions puts everyone in this thread ahead of the curve.

