Private members-only forum

[MEGATHREAD] AI-Generated Content, Copyright & Commercial Use — Comprehensive Guide (2025–2026)

Started by KellyMartinez_Mod · Dec 8, 2025 · 15 replies · Pinned
For informational purposes only. AI copyright law is rapidly evolving. The legal landscape described here may change as new cases are decided and new regulations take effect. Consult a licensed IP attorney for advice specific to your situation.

📋 TL;DR — AI Copyright Quick Reference (Updated March 2026)

KM
KellyMartinez_Mod Mod

We've had dozens of threads about AI-generated content and copyright over the past year, and the questions keep coming. This megathread consolidates the best analysis, case law summaries, and practical advice in one place.

This megathread covers:

  • Copyright status of AI outputs — can you copyright text, images, or code generated by AI?
  • The "human authorship" requirement — what the Copyright Office says and how courts are interpreting it
  • Platform-specific rights — what the terms of ChatGPT, Claude, Midjourney, Stable Diffusion, and DALL-E actually say about ownership
  • Commercial use — can you sell AI-generated content, use it in products, or license it?
  • Training data lawsuits — NYT v. OpenAI, Getty v. Stability AI, and what they mean for end users
  • Enterprise policies — how companies are handling AI content in contracts and workflows

Key takeaways (details in thread below):

  • Purely AI-generated content (no meaningful human involvement) is almost certainly not copyrightable under current U.S. law
  • AI-assisted content with sufficient human authorship likely is copyrightable — the line is still being drawn by the courts
  • Most AI platforms assign output rights to the user under their terms of service, but that's a contractual right — not a copyright
  • Training data lawsuits are far from settled and could change the landscape for all users
  • If you're using AI content commercially, document your human contributions and have clear policies

Please keep discussion focused on legal and practical issues. This is not the thread for debating whether AI art is "real art." New developments will be added as they happen.

MK
AttorneyMichaelK Attorney

Legal Analysis: Copyright Status of AI-Generated Works

Let me lay out the current legal framework as clearly as I can. This is one of the fastest-moving areas of IP law, but the foundational principles are becoming clearer.

1. The Human Authorship Requirement

U.S. copyright law protects "original works of authorship" (17 U.S.C. § 102). The Copyright Office has consistently maintained that "authorship" requires a human author. This isn't new — it goes back to Burrow-Giles Lithographic Co. v. Sarony (1884), where the Supreme Court held that copyright could protect a photograph because a human made creative choices in composing, lighting, and arranging the subject.

2. Thaler v. Perlmutter (D.D.C. 2023)

This was the first federal court case directly addressing AI-generated works. Stephen Thaler applied for a copyright registration for an image generated entirely by his AI system, the "Creativity Machine," naming the AI as the author. (DABUS was the AI in Thaler's separate patent cases.) The court (Judge Howell) held that copyright requires human authorship and that a work generated entirely by AI without human creative input cannot be copyrighted. The ruling was straightforward: "Human authorship is a bedrock requirement of copyright." The D.C. Circuit affirmed in March 2025.

3. Zarya of the Dawn (Copyright Office, 2023)

This is the more nuanced and practically important decision. Kristina Kashtanova registered a graphic novel that combined AI-generated images (from Midjourney) with human-authored text and human-arranged page layouts. The Copyright Office ruled:

  • The individual AI-generated images were not copyrightable because Kashtanova's text prompts to Midjourney did not constitute sufficient creative control over the resulting images
  • The text she wrote was copyrightable (standard human authorship)
  • The selection and arrangement of images and text into a cohesive work was copyrightable as a compilation under 17 U.S.C. § 103

This is the key framework: a work can contain both copyrightable and non-copyrightable elements. Your copyright protection extends only to the human-authored portions.

4. The "Sufficient Human Authorship" Standard

The Copyright Office's March 2023 registration guidance established that copyright registration requires "sufficient human authorship." The question is: where does the human's creative contribution end and the AI's begin? The Office has indicated that:

  • Typing a prompt into an AI tool is generally not sufficient human authorship (similar to giving instructions to a commissioned artist — you don't own the copyright just because you described what you wanted)
  • Selecting, arranging, and modifying AI outputs can constitute sufficient human authorship if the creative choices are meaningful
  • Using AI as one tool among many in a creative process (e.g., generating a rough draft, then extensively editing and revising) is more likely to result in copyrightable work

The bottom line: the more human creative input you add on top of the AI output, the stronger your copyright claim. Pure prompt-to-output with no further human modification is the weakest position.

DD
DataPrivacyDan

Training Data Lawsuits and What They Mean for End Users

Even if you can sort out the copyright-on-the-output side, there's a second major legal issue: were the AI models themselves trained on copyrighted material without authorization? This is the subject of several major lawsuits that could reshape the entire landscape.

Key cases:

  • The New York Times v. OpenAI and Microsoft — The NYT sued alleging that ChatGPT and Bing Chat were trained on millions of NYT articles without authorization. The NYT demonstrated that ChatGPT could reproduce near-verbatim excerpts of copyrighted articles. OpenAI argued fair use. Portions of this case have been settled, but the core fair use question remains in litigation and could reach the appellate courts.
  • Getty Images v. Stability AI — Getty sued Stability AI (maker of Stable Diffusion) in both the U.S. (D. Del.) and the UK for training on Getty's copyrighted image library. Getty showed that Stable Diffusion sometimes generated images with distorted Getty watermarks — strong evidence that the training data included Getty images. This case is ongoing.
  • Andersen v. Stability AI et al. — A class action brought by visual artists against Stability AI, Midjourney, and DeviantArt. The court dismissed some claims but allowed the core copyright infringement claims against Stability AI to proceed. This is the case most likely to produce a class-wide ruling on whether training on copyrighted images constitutes infringement.
  • Authors Guild v. OpenAI — Prominent authors (including John Grisham, Jodi Picoult, and George R.R. Martin) sued OpenAI alleging unauthorized training on their published works. This case could establish important precedents for text-based AI training.

Why this matters for end users: If courts ultimately rule that AI training on copyrighted data is infringement (not fair use), end users of those AI tools could face secondary liability claims — particularly if you're using AI outputs commercially in ways that compete with the original training data sources. This is still theoretical, but it's a real risk that businesses should account for.

For a deeper discussion of how AI scraping affects content creators, see: AI scraping my content — legal options

FA
FreelanceWriter_Amy

Practical question: I used ChatGPT to help write marketing copy for a client's website. The client is now asking me to guarantee that the copy is "original and copyrightable" — which is standard language in my freelance writing contracts. Can I honestly make that guarantee?

To be clear, I didn't just paste in a prompt and hand over the output. I wrote detailed outlines, generated drafts with ChatGPT, then spent several hours rewriting, restructuring, and adding my own voice. Maybe 40% of the final text originated from ChatGPT and 60% is my own writing. But it's all blended together now.

Also — if a competitor copies this marketing copy word-for-word, would my client have any legal recourse? Or is the AI-generated portion "free for all"?

MK
AttorneyMichaelK Attorney

@FreelanceWriter_Amy — Good questions. Based on what you've described, you're in a reasonable position, but there are nuances:

On copyrightability: Given the level of human creative involvement you've described — outlining, rewriting, restructuring, and blending your own writing — the final work likely meets the "sufficient human authorship" threshold. The Copyright Office's guidance suggests that using AI as a drafting tool, followed by substantial human revision, is the type of AI-assisted creation that can result in copyrightable work. Your situation is closer to the copyrightable text portions of Zarya of the Dawn than to the non-copyrightable AI images.

On your contractual guarantee: I'd recommend modifying your standard contract language. Instead of guaranteeing the work is "original and copyrightable," consider disclosing that AI tools were used in the drafting process and that the final work reflects substantial human authorship and creative judgment. Transparency protects you legally.

On a competitor copying it: Your client would likely have enforceable copyright in the final work as a whole, given the substantial human contribution. However, if a competitor could somehow isolate the purely AI-generated portions and copy only those, copyright protection for those specific passages would be much weaker. In practice, this is nearly impossible when the work is thoroughly blended — which actually works in your favor.

There are also other legal theories beyond copyright — unfair competition, trade dress, and state-level unfair business practices laws — that could provide protection even for non-copyrightable elements.

TM
TechFounderMike

How Our Startup Handles AI-Generated Content (Practical Policy)

We use AI tools extensively — Claude for code review and documentation, ChatGPT for first-draft marketing copy, and Midjourney for social media visuals. Here's the internal policy we developed with our attorney:

  1. Disclosure: All content that used AI in its creation is tagged internally. We don't publicly disclose AI use on every piece of content, but we track it so we can respond if questioned.
  2. Human review required: No AI-generated content goes live without substantive human review and editing. "Substantive" means actual creative changes, not just proofreading. This is both a quality control measure and a legal protection.
  3. No AI for core IP: Our core product code, patent applications, and key brand assets are human-created. We use AI for support tasks (documentation, boilerplate, initial drafts) but not for the output we consider our competitive advantage.
  4. Insurance: We added a media liability rider to our E&O insurance that covers claims arising from AI-generated content. Cost was about $800/year. Worth it for the peace of mind.
  5. Vendor contracts: When we hire freelancers or agencies, we now ask whether they use AI tools and require disclosure. We don't prohibit AI use, but we want to know.

This approach lets us move fast and use AI tools productively while managing the legal risk. The key insight: treat AI like any other tool in your creative process, but document the human contribution.
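The internal tagging in step 1 doesn't need special tooling. Here's a minimal sketch of what such a per-asset record might look like; all field names are hypothetical, invented for illustration, and not drawn from any real tool or legal standard:

```python
# Hypothetical internal record for tracking AI involvement per content asset.
# Field names are illustrative only, not from any real tool or legal standard.
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class ContentRecord:
    asset_id: str
    ai_tools_used: list[str]     # e.g. ["ChatGPT", "Midjourney"]
    human_review: bool           # was substantive human editing performed?
    reviewer: str
    notes: str = ""
    logged: str = field(default_factory=lambda: date.today().isoformat())

record = ContentRecord(
    asset_id="blog-2026-001",
    ai_tools_used=["ChatGPT"],
    human_review=True,
    reviewer="jane@example.com",
    notes="First draft AI-generated; restructured and rewritten by hand.",
)
print(asdict(record))
```

Even a spreadsheet with these columns would serve the same purpose: the point is that if AI use is ever questioned, you can answer from records rather than memory.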

For more on Claude's specific terms for business use, see: Claude AI terms for business use

SJ
StartupLawyerJess Attorney

Platform-by-Platform: What the Terms of Service Actually Say About Ownership

I've reviewed the current (as of December 2025) terms of service for the major AI platforms. Here's what each one says about who owns the output:

OpenAI (ChatGPT, DALL-E, GPT-4 API)

  • OpenAI assigns all rights in the output to the user, "to the extent permitted by applicable law"
  • That caveat ("to the extent permitted by applicable law") is key — it means OpenAI is assigning you whatever rights exist, but if the output isn't copyrightable, there's nothing to assign
  • OpenAI retains the right to use inputs and outputs to improve their models (unless you opt out via the API or enterprise settings)
  • Commercial use is permitted on all paid plans

Anthropic (Claude)

  • Anthropic's terms similarly state that output ownership belongs to the user to the extent permitted by law
  • On their commercial API and Claude for Work plans, Anthropic does not use inputs or outputs for model training by default
  • Commercial use is permitted
  • See the detailed breakdown in: Claude AI terms for business use

Midjourney

  • Paid subscribers own the assets they create, subject to the same "to the extent permitted by law" caveat
  • Free-tier users grant Midjourney a license to use their generated images
  • Midjourney grants paid users a broad commercial license to use generated images
  • Companies with over $1M in annual gross revenue must purchase the "Pro" or "Mega" plan for commercial use
  • Important: Midjourney's terms do not grant exclusivity — other users could theoretically generate similar images from similar prompts

Stability AI (Stable Diffusion)

  • The open-source model (downloadable) has a permissive license — you own your outputs
  • The hosted API (DreamStudio) terms assign output rights to users
  • However, Stability AI faces the most serious training data lawsuits, which creates indirect risk for commercial users

Key point: All of these platforms assign you contractual rights to the output. But a contractual right is different from a copyright. The platform can give you permission to use and commercialize the output (contractual right), but they cannot give you a copyright that doesn't exist under the law. Think of it like a license to public domain material — you have permission to use it, but you can't stop others from using similar AI-generated content.

CB
CryptoTrader_Ben

Different angle here: my client runs a DTC e-commerce brand and wants to use AI-generated product lifestyle images instead of hiring photographers. We're talking about using Midjourney to create images of their products in styled settings — kitchens, living rooms, etc.

Two concerns: (1) Can a competitor just take these images since they're AI-generated and not copyrightable? (2) Is there any risk that Midjourney generates an image that's too similar to an existing copyrighted photograph, exposing my client to infringement claims?

SJ
StartupLawyerJess Attorney

@CryptoTrader_Ben — Both are legitimate concerns.

On competitors copying the images: Yes, if the images are purely AI-generated (no significant human post-processing), copyright protection is likely weak or nonexistent. However, your client has other protections:

  • If a competitor scrapes and uses the exact same images, that could constitute unfair competition under Section 43(a) of the Lanham Act (passing off / false advertising)
  • Website terms of service can contractually restrict copying of images, even if they lack copyright protection
  • The practical reality: competitors are more likely to generate their own AI images than to steal yours

On similarity to existing photographs: This is the more serious risk. AI image generators are trained on existing photographs, and there have been documented cases of outputs that closely resemble specific copyrighted works. Your client could face an infringement claim from the original photographer even if the similarity was unintentional.

Practical mitigation: run a reverse image search on any AI-generated image before publishing it commercially. Services like TinEye or Google Images can help identify if the generated image is suspiciously similar to an existing work. It's not bulletproof, but it's a reasonable due diligence step.

Also consider: some AI platforms (including Shutterstock's AI generator) offer indemnification for certain AI-generated images. That may be worth the premium for commercial product imagery.

CN
CorpCounsel_NYC Attorney

Enterprise Perspective: How Large Companies Are Handling AI Content

I'm general counsel at a mid-size tech company and I've also been advising several Fortune 500 clients on AI content policies. Here's what I'm seeing across the industry:

1. The "AI Attestation" Trend in Contracts

A growing number of companies are adding AI attestation clauses to their vendor contracts. These typically require the vendor to disclose whether AI was used in creating deliverables and, if so, to what extent. Some go further and require that all deliverables be "primarily human-authored." We've seen this in:

  • Marketing agency contracts (requiring disclosure of AI-generated copy or images)
  • Software development agreements (requiring disclosure of AI-generated code, especially re: GitHub Copilot)
  • Content licensing agreements (publishers requiring attestation that licensed content is human-authored)
  • Legal services agreements (law firms being asked to certify that briefs are not AI-generated)

2. Internal AI Content Policies

Most large companies I work with have adopted formal AI use policies. Common elements include:

  • Approved AI tools list (e.g., only enterprise versions of ChatGPT/Claude with data retention turned off)
  • Prohibition on inputting confidential information, trade secrets, or personal data into AI tools
  • Mandatory human review for any AI-generated content that will be published externally
  • Documentation requirements for AI use in regulated industries (financial services, healthcare, legal)

3. Risk Management Frameworks

The companies handling this best are treating AI content risk like any other IP risk — with documented policies, training programs, and periodic audits. The companies handling it worst are either banning AI entirely (losing productivity) or ignoring the issue (accumulating risk).

If you're at a company of any size, get an AI content policy in place now. The cost of creating a policy is trivial compared to the cost of an infringement claim or a contract dispute over undisclosed AI use.

NP
NewGrad_Priya

This is incredibly helpful. Question from a recent grad's perspective: I'm a graphic designer and I've been using Midjourney and Stable Diffusion to create concept art for my portfolio. Some pieces are heavily edited in Photoshop after generation, others are more "raw" outputs that I've curated.

Two concerns: (1) If I include AI-assisted work in my portfolio, do I need to disclose that? (2) If a potential employer asks me to sign a work-for-hire agreement that assigns them "all intellectual property rights" in my work, and I use AI tools in the creative process, is that an issue?

SC
SarahConsumerRights

@NewGrad_Priya — On disclosure in your portfolio: there's no legal requirement to disclose AI use in a personal portfolio (unless you're making specific claims about the work being "original" or "hand-crafted"). However, it's increasingly considered best practice in creative industries to disclose AI involvement. Many design firms and creative agencies are explicitly asking about it during interviews.

Practically speaking, I'd recommend labeling AI-assisted pieces honestly. It shows integrity and prevents awkward situations if an employer later discovers the tools you used. The design industry is moving toward a norm of transparency about AI use, and being ahead of that curve is better than being caught behind it.

On the work-for-hire IP assignment: yes, you should disclose to your employer that you use AI tools. A work-for-hire agreement that assigns "all intellectual property rights" may not actually convey copyright in the AI-generated portions (because there may be no copyright to convey). This could create a contractual dispute if the employer discovers AI use later and claims you misrepresented the nature of the work.

Transparency upfront avoids problems later.

DM
DevOps_Marcus

For anyone following the Copyright Office developments — they published their Part 2 report on AI and copyright in late 2025, which focuses on copyrightability of AI-generated outputs. Key points from the report:

  • The Office reaffirmed the human authorship requirement but acknowledged it's a spectrum, not a bright line
  • They explicitly rejected the idea that prompt engineering alone constitutes sufficient human authorship for AI-generated images (reinforcing the Zarya of the Dawn reasoning)
  • For text-based AI outputs, they left more room for human authorship claims where the user provides detailed structural outlines, makes extensive edits, and exercises creative judgment in selecting and arranging AI-generated text
  • They recommended that Congress consider new legislation to address AI-generated works, potentially creating a sui generis (of its own kind) form of protection that's shorter and narrower than traditional copyright

The sui generis proposal is interesting — it would potentially give AI-generated works some protection (maybe 5-10 years instead of life+70) without calling it "copyright." Similar to how databases are protected in the EU. But this would require Congressional action, which... don't hold your breath.

For the code-specific angle on this, see: Copyright and AI-generated code and GitHub Copilot code ownership

RM
RealEstateBroker_Miami

Very practical question: I'm a real estate broker and my team uses ChatGPT to write property descriptions and neighborhood guides for listings. We also use AI to generate virtual staging images (adding furniture to photos of empty rooms). Are there any specific risks I should be aware of in the real estate context?

Our MLS has started asking whether listing photos are "AI-generated or digitally altered" and I want to make sure we're in compliance.

MK
AttorneyMichaelK Attorney

@RealEstateBroker_Miami — Real estate is actually one of the areas where AI content disclosure is becoming regulated fastest. A few things to be aware of:

  • Virtual staging: NAR (National Association of Realtors) guidelines and many state real estate commissions now require clear disclosure when listing photos have been digitally altered or virtually staged. AI-generated or AI-altered images must be labeled as such. Failure to disclose can be considered a deceptive practice under state consumer protection laws.
  • Property descriptions: Copyright risk is low here (AI-assisted descriptions with human editing are likely copyrightable as discussed above). The bigger risk is accuracy — if ChatGPT hallucinates facts about a property or neighborhood, you could face liability for material misrepresentation.
  • Fair housing: Be extremely careful with AI-generated neighborhood descriptions. If the AI generates language that could be interpreted as steering (e.g., describing demographics, school quality in coded terms), you could face Fair Housing Act violations. Always review AI-generated property and neighborhood descriptions through a fair housing compliance lens.

Bottom line: disclose AI-altered photos, verify facts in AI-generated descriptions, and review everything for fair housing compliance. The copyright question is probably the least of your concerns in real estate — accuracy and disclosure are the bigger issues.

KM
KellyMartinez_Mod Mod

Updated Summary (December 2025)

This thread has become one of our most valuable resources. Here's the current state of play:

What we know (settled or near-settled law):

  • Purely AI-generated works without meaningful human creative input are not copyrightable (Thaler v. Perlmutter, Copyright Office guidance)
  • AI-assisted works with sufficient human authorship can be copyrighted — protection extends to the human-authored elements (Zarya of the Dawn)
  • Writing a prompt is generally not enough human authorship to claim copyright on the AI's output
  • All major AI platforms assign output rights to users contractually, but this doesn't create copyright where none exists

What's still uncertain (active litigation and evolving guidance):

  • Exactly where the line falls between "sufficient" and "insufficient" human authorship
  • Whether AI training on copyrighted data constitutes fair use (NYT v. OpenAI, Getty v. Stability AI)
  • Whether end users face secondary liability for using AI tools trained on infringing data
  • Whether Congress will create a new sui generis protection for AI-generated works

Best practices for commercial AI content use:

  1. Document your human contributions — keep drafts, outlines, and edit histories
  2. Disclose AI use in professional contexts, especially in regulated industries
  3. Add substantive human creative input on top of AI outputs before claiming ownership
  4. Run reverse image searches on AI-generated images before commercial publication
  5. Update your contracts to address AI-generated content (both as a creator and as a buyer)
  6. Consider media liability insurance that covers AI-generated content claims
  7. Stay current — this area of law is changing quarterly
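On point 1, a crude but useful way to document the scale of your human contribution is a word-level diff between the raw AI draft and the final text. A minimal sketch using Python's standard difflib; the resulting percentage is illustrative evidence for your records, not a legal threshold:

```python
# Rough estimate of how much of the final text diverges from the AI draft.
# Illustrative documentation only; no particular ratio has legal significance.
import difflib

def human_edit_share(ai_draft: str, final_text: str) -> float:
    """Fraction of words in the final text not matched verbatim in the AI draft."""
    final_words = final_text.split()
    if not final_words:
        return 0.0
    matcher = difflib.SequenceMatcher(None, ai_draft.split(), final_words)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return 1.0 - matched / len(final_words)

draft = "Our product helps teams ship faster with automated workflows."
final = "Acme Deploy cuts release time in half by automating your team's review workflows."
print(f"{human_edit_share(draft, final):.0%} of the final text was changed")
```

Saving the raw AI draft, the final text, and a diff like this alongside your edit history gives you exactly the kind of contemporaneous documentation the attorneys in this thread recommend.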

I'll continue updating this thread as new cases are decided and new guidance is issued. If you have a specific question about your AI content situation, post it here and our community will help.

BC
BeatMaker_Carlos

Following this thread closely. I produce beats and instrumentals for hip-hop artists. I've started using Suno and Udio to generate musical ideas — loops, chord progressions, melody fragments — that I then heavily rework in Ableton. The final product is maybe 15-20% AI-originated material that I've chopped, pitched, layered, and processed beyond recognition.

Two questions: (1) Can I copyright these beats? (2) If an artist uses one of my beats and it becomes a hit, could the AI company claim a piece of the publishing rights?

The music industry is way behind on this compared to visual art. ASCAP and BMI haven't published clear guidance yet.

MD
MusicLawyer_Darnell Attorney

@BeatMaker_Carlos — Music copyright attorney here. Your situation is actually more favorable than you might think.

On copyrightability: Based on what you describe — taking AI-generated fragments and extensively reworking them (chopping, pitching, layering, processing) — the final beat likely contains enough original human expression to be copyrightable. (Strictly speaking, it's a new work built on unprotected AI material rather than a "derivative work," since the raw AI fragments likely carry no copyright of their own.) The key is that your creative contributions are substantial and the AI material is transformed. This is analogous to sampling, where a producer takes a pre-existing recording and transforms it into something new. Courts have long recognized that sufficient transformation of source material creates new copyrightable expression.

On the AI company claiming publishing: Under the current terms of service for both Suno and Udio, output rights go to the user on paid plans (free tiers are more restrictive, so check the current terms for your subscription level). Neither company claims downstream publishing or mechanical royalties on paid-tier outputs. However — and this is important — keep records of your creative process. If a dispute ever arises, you want to be able to demonstrate exactly what the AI generated vs. what you created.

Practical advice:

  • Save the raw AI outputs separately before you start processing them
  • Document your production process (screen recordings of your DAW sessions are ideal)
  • When registering with your PRO (ASCAP/BMI/SESAC), register as the sole author — the AI is a tool, not a co-writer
  • In your beat license agreements, add a clause disclosing that AI tools were used in the production process

You're right that the music industry is lagging. The Copyright Office's Part 3 report (expected mid-2026) will specifically address music and AI. Until then, document everything.

SR
StudioPhotographer_Ray

I want to raise something that's been bothering me about this entire discussion. Everyone's focused on "can I copyright my AI output" but nobody's talking about the photographers, illustrators, and writers whose work was stolen to train these models in the first place.

I've been a commercial photographer for 22 years. My images are all over the internet — licensed through Getty, Shutterstock, and my own site. I never consented to having my work fed into Stable Diffusion or Midjourney. Now people are using those tools to generate images that directly compete with my livelihood, using techniques and styles that these models learned from MY work.

The Getty v. Stability AI lawsuit is the tip of the iceberg. There are thousands of photographers like me who can't afford to sue individually. The class action (Andersen v. Stability AI) is our best hope, but it's moving slowly.

I'm not anti-AI. I use Photoshop's AI features daily. But there's a fundamental difference between AI as a tool that enhances human creativity and AI as a replacement that was built by stealing human creativity.

PL
Prof_Lemley_IPLaw Attorney

@StudioPhotographer_Ray raises an important point. Let me add some academic perspective on the training data question.

The fair use analysis for AI training is genuinely complex. Courts will likely consider:

Factor 1 (Purpose and character): Is the use "transformative"? The AI isn't copying photos to display them — it's extracting statistical patterns to generate new images. The Supreme Court's Andy Warhol Foundation v. Goldsmith (2023) decision narrowed transformative use, which could hurt the AI companies, but the use case is fundamentally different from Warhol's silkscreens.

Factor 2 (Nature of the work): Creative photographs get stronger protection than factual works. This factor cuts against the AI companies.

Factor 3 (Amount used): The models ingest entire works during training. But they don't store or reproduce them in the traditional sense; training distills the works into statistical weights and parameters. Whether this constitutes "copying" in the copyright sense is a genuinely novel question.

Factor 4 (Market effect): This is where photographers have the strongest argument. AI-generated images are demonstrably substituting for stock photography in commercial markets. If you can prove market harm, this factor weighs heavily against fair use.

My prediction: courts will split on this, and we'll eventually get a Supreme Court case. The most likely outcome is a licensing framework — similar to how music sampling evolved from litigation into a licensing market. AI companies will eventually need to license training data or face injunctions.

IK
IndieDev_Kenji

Indie game developer here. I'm using a combination of AI tools for my game:

  • Midjourney for concept art → then my artist redraws everything from scratch using the concepts as reference
  • ChatGPT for dialogue first drafts → then I rewrite about 80% and voice-direct the rest
  • GitHub Copilot for boilerplate code → I review and modify everything
  • Suno for placeholder music → will hire a composer for final soundtrack

My concern is about Steam, the App Store, and console platforms. Are any of them starting to require AI disclosure? And if my game makes money, could anyone come after me for using AI in the development process?

GN
GameLawyer_Nina Attorney

@IndieDev_Kenji — I represent several indie studios. Here's the current platform landscape:

Steam (Valve): As of late 2025, Steam requires disclosure of AI-generated content in the store page description. They distinguish between "AI-generated" (created primarily by AI) and "AI-assisted" (AI used as a tool in a human-directed process). Your workflow sounds like the latter. You need to check the appropriate box during the Steamworks submission process and briefly describe your AI use.

Apple App Store: Apple's current guidelines (updated Q4 2025) require that apps disclose AI-generated content but don't prohibit it. The guidelines focus more on AI-generated user-facing content within the app than on AI used during development.

Console platforms (Sony, Nintendo, Microsoft): No formal AI content policies yet, though Sony has indicated they're developing one. For now, your standard content submission agreements should cover you.

On legal risk: Your workflow is actually very defensible. You're using AI as a brainstorming and prototyping tool, with humans creating the final assets. The fact that your artist redraws from AI concepts (rather than shipping AI images directly) is particularly strong. Keep your concept art/reference files separate from your final assets as documentation.

The area to watch is code. Copilot-generated code has its own set of issues — see the Copilot thread for details. Short version: review everything Copilot suggests, don't accept verbatim blocks of code that look like they might be from a specific open-source project, and you'll be fine.

KM
KellyMartinez_Mod Mod

📌 2026 Update: Major Developments

Happy new year, everyone. Several important developments to start 2026:

1. Copyright Office Part 3 Report (expected June 2026): The final installment will address AI and music, AI and code, and recommendations for Congressional legislation. We'll cover it here when it drops.

2. NYT v. OpenAI partial settlement: The licensing component was resolved in Q4 2025, but the fair use question remains in active litigation. The court denied OpenAI's motion to dismiss the remaining claims. Trial could happen in late 2026.

3. EU AI Act implementation: The transparency obligations took effect January 1, 2026. AI-generated content must now be labeled as such in the EU. This affects any business that distributes AI content to EU audiences.

4. New state legislation: California SB 942 (the AI Transparency Act, covering AI content provenance and watermarking) is now in effect. Several other states have introduced similar bills.

I'm updating the thread title to reflect 2026. Keep the discussion going — this thread remains the best resource on the forum for AI copyright questions.

EL
EuropeanStartup_Lukas

The EU AI Act transparency requirement is already causing headaches for us. We're a Berlin-based SaaS company and about 30% of our marketing content uses AI-generated text and images. Under the new rules, we need to:

  • Label all AI-generated content that could be mistaken for human-created
  • Maintain records of which content was AI-generated
  • Implement technical measures (watermarking) for AI-generated images

The labeling requirement seems straightforward for standalone content (blog posts, social media), but what about AI-assisted content where humans heavily edited the output? The regulation isn't clear on the threshold.

Has anyone found practical guidance on compliance? Our legal team is interpreting it conservatively (label everything that touched an AI tool), but that basically means labeling 80% of our output.

EA
EUTechLaw_Andreas Attorney

@EuropeanStartup_Lukas — I'm a Brussels-based attorney practicing EU digital regulation. The transparency requirements under the AI Act (Article 50 in the final text; you'll still see it cited as Article 52 from earlier drafts) are indeed ambiguous on the "human editing" threshold. Here's how we're advising clients:

The practical test: Would a reasonable person believe the content was entirely human-created? If yes, and AI was materially involved, you should label it. If the AI contribution was incidental (e.g., grammar checking, minor rephrasing), labeling isn't required.

Safe harbor approach: For now, the European Commission hasn't issued detailed implementing guidance. The safest approach is to disclose when AI was used in the primary creation of content. This doesn't mean every spell-checked email needs a label — it means AI-generated blog posts, marketing copy, and images should be disclosed.

Watermarking: For images, the requirement is for providers of AI systems (not users) to implement watermarking. So Midjourney, DALL-E, etc. need to watermark their outputs. As a user, your obligation is not to remove those watermarks.

I published a compliance checklist for SaaS companies on my firm's blog. Happy to share if useful. The penalties for non-compliance are significant (up to 3% of global annual turnover), so this is worth getting right.

PM
PublishingHouse_Margaret

Adding a traditional publishing perspective. I'm an acquisitions editor at a mid-size publisher. We've been grappling with AI-generated submissions since mid-2024 and have now formalized our policy:

  • Full manuscripts: We require authors to certify that the manuscript is "primarily authored by a human being." AI-assisted editing, research, and brainstorming are acceptable, but the creative expression must be human-originated.
  • Cover art: We will not accept AI-generated cover art for published titles. Period. The risk of training data infringement is too high for commercial publication.
  • Marketing copy: AI-assisted marketing materials are fine. We disclose when used.

We've already rejected 3 manuscripts this year where we detected likely AI generation (inconsistent style, generic phrasing, factual hallucinations). The detection tools aren't perfect, but combined with editorial judgment, we can usually tell.

The bigger question for us is contractual: our standard publishing agreement assigns "all rights in the Work" to us. If parts of the Work aren't copyrightable because they're AI-generated, that affects the scope of rights we're acquiring. We've added specific AI disclosure warranties to our contracts.

TJ
TechWriter_Jason

@PublishingHouse_Margaret — I'm curious about your detection methods. The major AI detection tools (GPTZero, Originality.ai, etc.) have documented false positive rates of 10-20%. I've personally had work that I wrote entirely from scratch flagged as "likely AI-generated" by multiple tools.

This creates a real problem. If publishers, employers, and platforms are using unreliable detection tools to reject or penalize human-created work, that's arguably more harmful than the AI content itself. A false accusation of AI plagiarism can destroy a writer's reputation.

Is anyone aware of legal challenges to AI content detection? It feels like there's a defamation or tortious interference angle when a detection tool falsely labels someone's original work as AI-generated and they lose a contract or job opportunity as a result.

AT
AttorneyMichaelK Attorney

@TechWriter_Jason — You're raising a real and underexplored legal issue. A few thoughts:

Detection tool liability: AI content detection tools typically include disclaimers that their results are probabilistic, not definitive. This makes a direct product liability or negligence claim against the tool provider difficult. However, if a tool explicitly represents a high accuracy rate that isn't supported by evidence, there could be a false advertising claim.

Employer/publisher liability: If an employer fires a worker or a publisher rejects a manuscript solely based on an AI detection tool, and the work was actually human-authored, there could be claims for:

  • Wrongful termination (if the employment contract or applicable state law provides protections)
  • Breach of contract (if a publishing or freelance agreement was terminated based on a false AI detection result)
  • Defamation (if the accusation of AI plagiarism was published to third parties — e.g., the employer told industry contacts)

I'm not aware of any filed cases on this specific issue yet, but I expect we'll see them in 2026. The fundamental problem is using a probabilistic tool to make a binary judgment. It's similar to the issues with facial recognition false positives — courts are increasingly skeptical of automated decision-making tools that produce life-altering consequences.

Best practice for employers and publishers: treat AI detection results as one data point, not a conclusive determination. Give the creator an opportunity to respond before taking action.

PH
PatentAtty_Howard Attorney

Adding the patent perspective, since this thread focuses mainly on copyright. The AI inventorship question is evolving differently from the copyright authorship question.

Current state: The Federal Circuit in Thaler v. Vidal (2022) held that an AI cannot be an "inventor" under the Patent Act — only natural persons can be named as inventors. This mirrors the copyright human authorship requirement.

However: The Patent Office issued guidance in February 2024 clarifying that inventions that use AI as a tool can still be patented, as long as a human made a "significant contribution" to the invention. This is more permissive than the copyright approach. The PTO explicitly stated that using AI to generate candidate solutions, which a human then selects, modifies, and develops, can result in a patentable invention with the human as the inventor.

Practical implication: If you're using AI to help develop patentable technology, focus on documenting your human contributions to the inventive process. The AI can do the heavy computational lifting, but the patent claims need to be traceable to human insight and judgment.

This divergence between patent law (more AI-friendly) and copyright law (stricter human authorship requirement) is going to create interesting strategic choices for companies developing AI-assisted products.

FM
FashionDesigner_Mika

How does this apply to fashion? I use AI to generate textile pattern designs, which I then modify and have printed on fabric. Fashion design has limited copyright protection to begin with (thanks to Star Athletica v. Varsity Brands and the separability doctrine), so I'm wondering if the AI element makes things even more complicated.

Also, some of my AI-generated patterns look similar to traditional cultural patterns (African, Japanese, etc.). Is there a cultural appropriation legal issue layered on top of the copyright question?

FR
FashionIPLaw_Rachel Attorney

@FashionDesigner_Mika — Fashion and AI is a tricky intersection. Let me address both questions:

Copyright in AI-generated textile patterns: Textile designs (surface patterns, prints) are generally copyrightable as pictorial or graphic works, separate from the utilitarian article they're printed on. For AI-generated patterns that you modify, the same framework applies: your modifications need to constitute "sufficient human authorship." If you're making meaningful creative changes — adjusting colors, rearranging elements, combining patterns, adding hand-drawn elements — you likely have a copyrightable work. If you're using raw AI output with minimal changes, protection is weaker.

On cultural patterns: U.S. copyright law doesn't have a "cultural appropriation" doctrine per se. Traditional cultural patterns that have been in use for centuries are generally in the public domain. However:

  • Some specific implementations of traditional patterns may be copyrighted by the individual artist who created that particular version
  • AI models trained on cultural art may generate outputs that closely resemble specific copyrighted implementations
  • Beyond copyright, there are emerging state laws and international frameworks (like WIPO's work on traditional cultural expressions) that could create additional obligations
  • There's also significant reputational risk — the fashion industry has faced major backlash for using AI to replicate indigenous designs

My advice: if an AI generates a pattern that looks like it's derived from a specific cultural tradition, research the source and be thoughtful about how you use it. The legal risk may be low, but the business and reputational risk can be high.

PR
PhD_Researcher_Li

I want to flag an issue that's becoming urgent in academia: AI-generated content in research papers. Several major journals (Nature, Science, PNAS) now require disclosure of AI tool use. But the policies are inconsistent and the enforcement mechanisms are weak.

More concerning: if portions of a published research paper are AI-generated and therefore not copyrightable, does that affect the journal's copyright claim over the paper? Most journals require authors to assign copyright. If the AI-generated portions have no copyright to assign, are journals overstating their rights?

And then there's the grant angle — if research funded by NIH or NSF uses AI-generated text in publications, does that create any issues with federal funding requirements?

EP
EdLaw_Professor_Kim Attorney

@PhD_Researcher_Li — These are important questions that most institutions haven't fully addressed. From an education law perspective:

Journal copyright assignment: You're correct that journals can only acquire copyright in the copyrightable portions of a paper. If an author assigns "all rights" but portions are AI-generated and not copyrightable, the journal's copyright covers only the human-authored content. This is analogous to how copyright in a compilation covers the selection and arrangement but not the underlying public domain materials.

Federal funding: NIH and NSF haven't issued specific guidance on AI-generated content in funded research publications, but both agencies require accurate reporting of research methodologies. If AI was used to generate portions of a publication and this isn't disclosed, it could be treated as a failure of scientific integrity under 42 CFR Part 93 (research misconduct regulations). The key principle is transparency.

Institutional policies: Most universities are developing AI use policies for both students and faculty. The better policies distinguish between AI as a research tool (acceptable, with disclosure) and AI as a ghostwriter (problematic). The University of Michigan's policy is often cited as a model — it requires disclosure of AI use, treats AI-generated text as unattributed assistance, and requires human authorship for any work submitted for academic credit or publication.

LT
LitigatorAnon_TX

Bringing this back to the legal profession itself. After the Mata v. Avianca debacle (lawyer submitted ChatGPT-fabricated case citations), courts have been cracking down on AI use in legal filings. I'm seeing local rules proliferate:

  • SDNY, NDTX, and at least 15 other federal districts now have standing orders requiring disclosure of AI use in briefs
  • Some state courts are following suit — California's Judicial Council issued recommendations in late 2025
  • The ABA issued Formal Opinion 512 in July 2024, which doesn't prohibit AI use but requires lawyers to verify all AI-generated content, maintain competence in AI tools, and disclose use when required by court rules

Here's my ethical dilemma: I use Claude extensively for legal research — it's genuinely helpful for identifying relevant cases and statutes, even though I verify everything independently. Do I need to disclose that? The standing orders in my district (NDTX) say disclosure is required for AI-generated "text," but is research assistance "text"?

The line between "AI-generated brief" and "AI-assisted research for a human-written brief" seems important but nobody's drawing it clearly.

EW
EthicsAtty_Washington Attorney

@LitigatorAnon_TX — Legal ethics attorney here. The ABA's Formal Opinion 512 gives more guidance than most people realize. Key points:

The duty of competence (Rule 1.1) now requires lawyers to understand the capabilities and limitations of AI tools they use. You don't have to be a computer scientist, but you need to know that LLMs can hallucinate and that AI outputs need verification.

The duty of candor (Rule 3.3) requires honesty with the tribunal. If a court's standing order requires AI disclosure, comply fully. If there's no standing order, the better practice is still to disclose if AI substantially contributed to the work product — not because it's legally required, but because nondisclosure that later surfaces creates a much bigger problem.

On your specific question: Using Claude for research — identifying cases, checking statutory language, brainstorming arguments — is analogous to using Westlaw or Lexis. If you independently verify everything and the written brief is your own work, most standing orders would not require disclosure. But if you use AI to generate draft language that appears substantially unchanged in the final brief, disclosure is almost certainly required.

The safest approach: when in doubt, disclose. No court has ever sanctioned a lawyer for over-disclosing AI use. Several have sanctioned lawyers for concealing it.

JP
Journalist_Portland

Journalist here. The media industry is splitting on AI and it's creating real tensions in newsrooms.

My outlet's policy: AI can be used for transcription, data analysis, and background research. AI cannot be used to generate any published text — not headlines, not articles, not social media posts. Every word published under a reporter's byline must be written by that reporter.

But I know colleagues at other outlets (won't name them) where AI-generated articles are published daily with minimal human editing, often without disclosure to readers. Some are using AI to generate entire "news" articles from press releases and wire reports.

The copyright angle is interesting — if AI-generated news articles aren't copyrightable, then aggregators and competitors can freely reproduce them without licensing. This could accelerate the death spiral for outlets that rely on AI content. The ones investing in human journalism will have copyrightable content they can license and protect; the AI-content mills won't.

Ironic outcome: the economic incentive might actually push quality journalism AWAY from AI.

DA
DataPrivacyDan

Wanted to add a data privacy angle I haven't seen discussed much in this thread.

When you use AI tools to generate content, you're often inputting proprietary or personal information as part of your prompts. The copyright question (who owns the output) is separate from the data privacy question (what happens to the input). But they intersect in interesting ways:

GDPR implications: If you input personal data into an AI tool (even inadvertently — e.g., a client's name in a prompt), the AI provider becomes a data processor under GDPR. Most AI platform terms address this, but the processing may not comply with your organization's data processing agreements or privacy policies.

Confidential information: Prompts containing trade secrets or attorney-client privileged information could waive privilege protections if the AI provider uses that data for training. This is why enterprise versions of AI tools (with no-training guarantees) matter so much for professional use.

The meta-question: Even if you own the copyright in the AI output, you might have violated data protection laws or confidentiality obligations in the process of creating it. Ownership of the output doesn't retroactively authorize the input.

Always check: (1) what does the AI tool do with your inputs, (2) do you have authority to input the data you're providing, and (3) does your organization's AI policy permit the specific tool and use case?

VD
VoiceActor_Denise

I need to raise the voice and likeness issue. I'm a professional voice actor and several AI companies have cloned my voice without consent. You can literally type text and generate audio that sounds like me. I've found my "voice" being used in AI-generated audiobooks, podcasts, and even commercial advertisements.

This is beyond copyright — it's a right of publicity issue. My voice is my livelihood, and these companies trained their models on recordings of my performances without permission or compensation.

SAG-AFTRA negotiated AI protections in the 2023 contract, but that only covers union work. The vast majority of voice work I've done over 15 years — corporate training videos, e-learning modules, phone systems — was non-union and has no contractual AI protections.

What legal recourse do I have? And is this covered by the same legal framework as AI-generated text and images, or is it different?

EC
EntertainmentLaw_Carla Attorney

@VoiceActor_Denise — Entertainment and AI law is my practice area. Voice cloning is legally distinct from text/image AI generation, and you actually have stronger protections.

Right of publicity: Most states recognize some form of right of publicity — the right to control commercial use of your name, image, likeness, and in many states, voice. Unauthorized AI voice cloning for commercial purposes almost certainly violates right of publicity laws in those states. California (Cal. Civ. Code § 3344), New York, and Tennessee have particularly strong protections.

Federal legislation: The NO FAKES Act (Nurture Originals, Foster Art, and Keep Entertainment Safe Act) was introduced in Congress in 2024 and has bipartisan support. If passed, it would create a federal right against unauthorized AI replications of voice and likeness. As of February 2026, it's still in committee but expected to move forward.

Section 43(a) Lanham Act: If AI-generated audio that sounds like you is being used in advertising, you may have a false endorsement claim under federal trademark law.

Copyright in performances: Your original recorded performances are copyrightable (as sound recordings or as parts of audiovisual works). The AI company may have infringed your copyright (or your client's copyright) by using those recordings as training data.

Practical steps:

  • Document every instance of your voice being used without consent
  • Send DMCA takedown notices for AI-generated audio that uses your copyrighted performances
  • Send cease-and-desist letters under right of publicity statutes to the platforms hosting the unauthorized content
  • Consider joining the class action efforts — several are being organized by SAG-AFTRA and individual attorneys

This area of law is moving fast and voice actors are in a better legal position than visual artists because right of publicity provides protections that copyright alone doesn't.

MR
MLEngineer_Raj

Engineer perspective on the watermarking discussion. The EU AI Act requires providers to watermark AI-generated content, and California SB 942 (the AI Transparency Act) has similar requirements. But the technical reality is that current watermarking is trivially defeatable:

  • Image watermarks (C2PA metadata) can be stripped by re-saving, screenshotting, or cropping
  • Text watermarks (statistical patterns in token distribution) can be defeated by paraphrasing or running through a second model
  • Audio watermarks are somewhat more robust but still removable with basic signal processing

The regulations are mandating a technology that doesn't reliably work. It's like mandating DRM in 2005 — well-intentioned but technically naive. The content will flow freely regardless of watermarks.

From a copyright perspective, this matters because if AI-generated content can't be reliably identified, the "human authorship" verification becomes even more important. You can't enforce a rule you can't detect violations of.
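To make the text-watermark point concrete, here's a toy "green list" scheme in the spirit of the published token-distribution approaches. Everything here is illustrative — the vocabulary, the 5-candidate sampling, and the thresholds are made up for the demo, not any vendor's real implementation:

```python
import hashlib
import random

VOCAB = [f"word{i}" for i in range(1000)]  # toy vocabulary

def is_green(prev: str, word: str) -> bool:
    # Pseudo-randomly split the vocabulary into "green" and "red"
    # halves, seeded on the previous token, as green-list schemes do.
    digest = hashlib.sha256(f"{prev}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def generate_watermarked(n: int, rng: random.Random) -> list[str]:
    # A watermarking generator prefers a green token at each step.
    words = ["start"]
    for _ in range(n):
        candidates = rng.sample(VOCAB, 5)
        green = [w for w in candidates if is_green(words[-1], w)]
        words.append(green[0] if green else candidates[0])
    return words

def green_fraction(words: list[str]) -> float:
    # Detector statistic: fraction of green transitions. Unbiased
    # text scores around 0.5; watermarked text scores much higher.
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)

rng = random.Random(0)
text = generate_watermarked(200, rng)
# Crude stand-in for paraphrasing: replace every token. Real
# paraphrasing preserves meaning, but the statistical effect is the
# same -- the green bias is destroyed.
paraphrased = [text[0]] + [rng.choice(VOCAB) for _ in text[1:]]

print(f"watermarked green fraction: {green_fraction(text):.2f}")
print(f"paraphrased green fraction: {green_fraction(paraphrased):.2f}")
```

With 200 tokens, the watermarked sequence scores far above the ~0.5 chance baseline while the rewritten one falls back to chance — which is exactly why paraphrasing, or piping output through a second model, defeats detection.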

GP
GovContractor_Phil

Government contracting angle: our firm has several content creation contracts with federal agencies (writing reports, policy analyses, training materials). Our contracts specify "work for hire" and the government owns all rights.

We've started using AI tools to improve productivity on these contracts. But the FAR (Federal Acquisition Regulation) doesn't specifically address AI-generated content in work-for-hire deliverables. If our deliverables contain AI-generated content that isn't copyrightable, is the government getting less than they paid for? Are we in breach of contract by delivering partially non-copyrightable work under a work-for-hire agreement?

This feels like a ticking time bomb for government contractors. The IG or GAO could easily audit this and find that contractors are billing full rates for AI-generated work product.

GP
GovConLaw_Patricia Attorney

@GovContractor_Phil — This is an increasingly common concern in gov con. Here's the legal landscape:

FAR 52.227-14 (Rights in Data — General): This clause gives the government "unlimited rights" in data produced under contract. If portions of that data are AI-generated and not copyrightable, the government still has full use rights — they just don't have an exclusive copyright to enforce against third parties. In practical terms, this may not matter much since government work product is often treated as public domain anyway.

However: The False Claims Act risk is real. If your contract requires "original work product" and you deliver AI-generated content without disclosure, a qui tam relator could argue you made a false claim about the nature of the deliverables. The question is whether "original" in the contract means "not copied from another source" (AI content would qualify) or "created by human authors" (AI content might not).

Recent developments: OMB Memorandum M-24-10 (on AI in government) requires federal agencies to assess AI use by contractors. Several agencies (DoD, DHS, HHS) are already adding AI disclosure requirements to new solicitations. If you're not seeing these requirements in your current contracts, you will soon.

My advice: Get ahead of this. Disclose your AI use to your contracting officer proactively. Modify your internal procedures to document AI use on government contracts. And update your subcontractor agreements to require the same disclosures. The contractors who self-report and adapt will be fine; the ones who hide AI use and get caught will face suspension or debarment.

OM
OpenSourceDev_Marcus

Nobody's talking about the open source licensing collision. Here's the problem:

AI code generators (Copilot, CodeWhisperer, Claude) are trained on code from public repositories, much of which is licensed under GPL, MIT, Apache, LGPL, etc. When these tools generate code that's similar to GPL-licensed training data, does that code carry the GPL's copyleft obligations?

If yes: any proprietary software that includes AI-generated code derived from GPL sources could be violating the GPL, potentially requiring the entire codebase to be open-sourced.

If no: the GPL's copyleft mechanism becomes unenforceable against AI-generated code, which undermines the foundation of the free software movement.

The Free Software Foundation hasn't published a definitive position. The Software Freedom Conservancy has raised concerns. And companies like Microsoft (Copilot), Amazon (CodeWhisperer), and Anthropic (Claude) have all been careful not to take a clear legal position.

This is a bomb waiting to go off. The first GPL enforcement action against AI-generated code will be a landmark case.

OH
OpenSourceLaw_Heather Attorney

@OpenSourceDev_Marcus — Open source licensing attorney here. You've identified one of the most important unresolved questions in AI law. Here's my analysis:

The GPL enforcement theory: The GPL triggers when you create a "derivative work" of GPL-licensed code. If AI-generated code is substantially similar to specific GPL source code, and copyright exists in the AI output, then yes — the GPL obligations would theoretically attach. But there are two massive "ifs" in that chain.

Problem 1 — Copyright in the output: If the AI-generated code isn't copyrightable (because it lacks sufficient human authorship), then it can't be a "derivative work" under copyright law, and the GPL — which is a copyright license — has nothing to attach to. The GPL's copyleft depends on copyright existing in both the original and derivative works.

Problem 2 — Substantial similarity: AI code generators typically produce code that's functionally similar to training data but not identical. Short code snippets (under ~10 lines) often aren't copyrightable regardless of AI involvement because they lack sufficient originality. Longer blocks that are substantially similar to specific GPL source code could trigger the GPL, but proving the chain from training data → specific output is technically difficult.

My prediction: This will be resolved through a combination of:

  • Updated open source licenses that specifically address AI training (some are already being drafted)
  • AI code tools implementing better attribution and license detection
  • Industry norms around AI code review and open source compliance

For now: if your business depends on proprietary code, review AI-generated code carefully for similarity to known open source implementations, and maintain your own open source compliance processes.
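That similarity review can be approximated even without commercial tooling. A minimal sketch using normalized line shingles — the 5-line window and the 0.3 threshold are arbitrary choices for illustration, and real compliance scanners (e.g. ScanCode) do far more, including license detection:

```python
import hashlib

def shingles(code: str, k: int = 5) -> set[str]:
    # Normalize whitespace and drop blank lines so trivial
    # reformatting doesn't hide a match, then hash every k-line window.
    lines = [" ".join(line.split()) for line in code.splitlines()]
    lines = [line for line in lines if line]
    if not lines:
        return set()
    return {
        hashlib.sha1("\n".join(lines[i:i + k]).encode()).hexdigest()
        for i in range(max(len(lines) - k + 1, 1))
    }

def overlap(snippet: str, reference: str) -> float:
    # Fraction of the snippet's shingles that also appear in the
    # reference; near 1.0 means the snippet is mostly verbatim.
    s = shingles(snippet)
    return len(s & shingles(reference)) / len(s) if s else 0.0

def needs_review(snippet: str, reference: str,
                 threshold: float = 0.3) -> bool:
    # Anything above the threshold gets flagged for a human to check.
    return overlap(snippet, reference) >= threshold
```

In practice you'd run each file from a corpus of known GPL projects through `shingles` once, index the hashes, and look up every AI-generated block against that index. A hit is a prompt for human review, not proof of copying.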

NJ
NFTCreator_Jade

What about AI-generated NFTs? I know the NFT market crashed, but there's still a significant community creating and selling AI-generated digital art as NFTs. If the artwork isn't copyrightable, does the buyer actually own anything? Can someone else mint the same image as an NFT?

I've seen cases where AI-generated art was minted, sold for significant amounts, and then the same prompt was used to generate nearly identical images that were also minted. The original buyer was furious but had no legal recourse because... there was no copyright to enforce.

AT
AttorneyMichaelK Attorney

@NFTCreator_Jade — NFTs and AI copyright is a particularly interesting collision because NFTs were already in a legal gray area before AI entered the picture.

What an NFT buyer actually owns: An NFT is a token on a blockchain that points to a digital file. Buying an NFT generally does NOT transfer copyright in the underlying artwork (unless the smart contract or separate agreement specifically assigns copyright). You own the token; the creator (usually) retains the copyright.

AI-generated NFTs: If the underlying artwork is purely AI-generated and not copyrightable, then:

  • The NFT seller had no copyright to transfer (or retain) in the first place
  • Anyone can reproduce the image without infringing copyright
  • The NFT buyer's "ownership" is limited to the blockchain token itself — not any exclusive right to the image
  • Someone generating a nearly identical image from a similar prompt is not infringing anything

This is essentially the ultimate test case for "what are you actually buying when you buy an NFT?" For AI-generated NFTs, the answer is: a proof of provenance (you can show you bought this specific token from this specific creator at this time) but not exclusivity over the image.

Some NFT platforms are now requiring AI disclosure and offering "human-certified" badges. Whether this will matter to the market remains to be seen.

TY
TokyoIPLawyer_Yuki Attorney

Adding a comparative international perspective, since this thread is U.S.-focused:

Japan: Japan has taken the most AI-friendly approach globally. Article 30-4 of Japan's Copyright Act (added in the 2018 amendments) allows AI training on copyrighted works under a broad text-and-data-mining exception. Japan also allows copyright in AI-generated works if a human exercised creative direction. This makes Japan the most favorable jurisdiction for AI content creation.

UK: The UK had a specific provision protecting "computer-generated works" (CDPA §9(3)), which arguably covers AI outputs. The UK government considered expanding AI training exceptions but pulled back after pushback from the creative industries. The current legal framework offers more protection for AI outputs than the U.S. but the training data question is unresolved.

China: The Beijing Internet Court ruled in November 2023 that AI-generated images can be copyrighted if a human made intellectual contributions through prompt design, parameter selection, and output curation. This is more permissive than the U.S. approach. China is actively developing its AI regulatory framework with more explicit copyright provisions.

EU: As discussed above, the AI Act focuses on transparency and labeling rather than copyright per se. Copyright in AI outputs is governed by member state laws, which vary. The pending AI Liability Directive could add another layer.

The global patchwork means businesses operating internationally need jurisdiction-specific strategies. What's copyrightable in Japan may not be in the U.S., and what's legal in the U.S. may require additional compliance in the EU.

KM
KellyMartinez_Mod Mod

📌 February 2026 Summary Update

This thread has grown significantly with excellent contributions. Here's an updated summary of where things stand:

Key 2026 developments:

  • EU AI Act transparency rules now in effect — AI content must be labeled in the EU
  • California AB 2013 (generative AI training data disclosure) and SB 942 (AI content provenance/watermarking) in effect as of Jan 1, 2026
  • ABA Formal Opinion 512 — lawyers must verify AI content, maintain competence, disclose when required
  • Copyright Office Part 3 report expected mid-2026 (music, code, legislative recommendations)
  • NYT v. OpenAI fair use question heading toward trial
  • NO FAKES Act (voice/likeness protection) advancing in Congress

Emerging issues covered in this thread:

  • AI content detection reliability and liability for false positives
  • Music industry AI copyright (beat production, sampling analogies)
  • Government contractor AI disclosure obligations
  • Open source licensing collision with AI-generated code
  • International divergence (Japan most permissive, EU most regulated, U.S. in between)
  • Voice cloning and right of publicity protections
  • Fashion and textile design AI copyright
  • Academic publishing and AI disclosure

Keep the contributions coming. This thread is referenced regularly by practitioners and businesses navigating these issues.

AD
AgencyOwner_Derek

Running a 15-person marketing agency. We've been transparent with clients about using AI tools, but now we're facing a new problem: clients are demanding AI-free deliverables. Two enterprise clients have added "No AI-Generated Content" clauses to their contracts, which basically means we can't use ChatGPT, Claude, or even Grammarly's AI features for their work.

Questions: (1) Is a "no AI content" contractual clause enforceable? (2) If we accidentally use AI tools for a client with such a clause, what's our liability exposure? (3) How do we even define "AI" in this context — is spell check AI? Is Canva's background remover AI? Where's the line?

CR
ContractsAtty_Reema Attorney

@AgencyOwner_Derek — Contract attorney perspective:

Enforceability: Yes, a "no AI content" clause is enforceable as a contractual term. Parties can agree to restrict the tools and methods used to produce deliverables. It's similar to clauses requiring "Made in USA" products or "organic" ingredients — you can contractually commit to specific production methods.

Liability for breach: If you use AI tools in violation of such a clause, your exposure depends on the contract remedies. Likely scenarios:

  • Breach of warranty: If you warranted the work was AI-free and it wasn't, the client can claim breach of warranty damages
  • Requirement to re-do work: The client could demand replacement deliverables created without AI, at your cost
  • Contract termination: If the clause is material, breach could give the client the right to terminate the engagement
  • Reputational harm: The client could publicize the breach, damaging your agency's reputation

On defining "AI": This is where these clauses typically fail. Most "no AI" clauses I've seen don't define "AI" with any precision. Is autocomplete AI? Is a template AI? Is Photoshop's content-aware fill AI? You need to negotiate a clear, workable definition. I recommend something like: "Deliverables shall not include text, images, or other content where the primary creative expression was generated by a large language model, image generation model, or similar generative AI system. Use of general-purpose software tools that incorporate AI features (spell check, grammar correction, photo editing) is permitted."

Push back on overbroad clauses and get the definition right upfront. Otherwise you're signing up for an impossible compliance obligation.

MS
MedIllustrator_Sarah

Niche but important question: I create medical illustrations for textbooks and patient education materials. Some of my colleagues have started using AI to generate anatomical illustrations, which they then verify for accuracy and refine. The productivity gain is enormous — what used to take 2 days now takes 2 hours.

But medical illustration has a specific concern: accuracy. If an AI-generated anatomical illustration contains an error that a patient or medical student relies on, who's liable? The AI tool? The illustrator who didn't catch the error? The publisher?

Also, is there any regulatory issue with using AI-generated images in FDA-regulated patient education materials?

HN
HealthcareLaw_Nathan Attorney

@MedIllustrator_Sarah — Healthcare regulatory attorney. This intersects copyright, malpractice, and FDA regulation:

Liability for inaccurate AI illustrations: The illustrator and publisher both have a duty of care. If an AI-generated medical illustration contains a clinically significant error, liability would likely flow to whoever had the duty to verify accuracy — which is the illustrator (as the professional expert) and the publisher (as the entity distributing the material). The AI tool provider would likely be shielded by their terms of service disclaimers, similar to how a reference book publisher doesn't typically face product liability for errors.

FDA considerations: For patient education materials associated with FDA-regulated products (drug package inserts, medical device instructions for use), accuracy requirements are strict. The FDA hasn't specifically addressed AI-generated illustrations, but their general position is that manufacturers are responsible for the accuracy of all labeling materials regardless of how they were created. Using AI doesn't shift the regulatory burden.

Copyright in medical illustrations: Medical illustrations that involve creative choices in rendering, color, perspective, and simplified representation of anatomical structures are copyrightable — these are the human authorship elements. AI can help with the technical rendering, but the medical/anatomical judgment (what to show, what to emphasize, how to simplify for the audience) is the copyrightable human contribution.

Bottom line: use AI to improve productivity, but maintain the same quality control processes you'd use for hand-drawn illustrations. The standard of care doesn't change because the tool changed.

AC
Architect_Copenhagen

Architecture is starting to grapple with this too. AI tools (Midjourney, DALL-E, specialized tools like Hypar and Spacemaker) are being used for concept design, rendering, and even space planning. But architectural works have their own copyright provisions (Architectural Works Copyright Protection Act of 1990), and the interaction with AI generation is untested.

Specific concern: if I use AI to generate concept renderings and a client chooses one of those concepts as the basis for a building, who owns the design? My firm, the AI provider, or nobody? And if the AI-generated concept is "similar" to another architect's built work (because the model was trained on it), is that infringement of the original architect's copyright in the building?

IS
ImmigrationLaw_Sandra Attorney

An angle nobody's covered: AI content and immigration. Several of my clients are creators on O-1B (extraordinary ability in arts) and EB-1 (extraordinary ability) visas. Their visa status depends on demonstrating a record of creative achievement.

USCIS adjudicators are starting to ask whether submitted creative works were AI-generated. If an O-1B applicant's portfolio includes AI-generated art that isn't copyrightable, does that undermine their claim of "extraordinary ability" as an artist? The argument would be that the "creative achievement" belongs to the AI, not the human.

We've already had one RFE (Request for Evidence) where USCIS specifically asked our client to certify that their submitted portfolio was "personally created without the use of generative AI tools." This is a new development and we expect to see more of it.

For any creatives on specialty visas: be very careful about AI-generated content in your portfolio submissions. And if you use AI tools, be prepared to articulate your human creative contribution clearly.

KM
KellyMartinez_Mod Mod

📌 Thread Status: 55 posts and growing. This thread has become the most comprehensive resource on AI copyright on the forum. Key takeaway themes:

  1. The core framework is stable: Human authorship required for copyright. AI outputs alone aren't copyrightable. AI-assisted works with sufficient human contribution are protectable.
  2. The frontier is moving: Music, voice, code, fashion, architecture, medical — every industry is working through its own AI copyright questions with distinct nuances.
  3. Disclosure is the universal best practice: Across every sector discussed in this thread, transparency about AI use emerges as the single most important risk-mitigation strategy.
  4. International divergence matters: Japan, EU, UK, China, and U.S. are all taking different approaches. Global businesses need multi-jurisdictional strategies.
  5. New legal issues keep emerging: AI detection liability, government contractor disclosure, open source licensing collision, voice cloning, immigration portfolio concerns.

Continue posting questions and analysis. New developments in training data lawsuits, the Copyright Office Part 3 report, and the NO FAKES Act will be added as they happen.