
[MEGATHREAD] AI-Generated Content, Copyright & Commercial Use — Comprehensive Guide (2026)

Started by KellyMartinez_Mod · Nov 15, 2025 · 15 replies · Pinned
For informational purposes only. AI copyright law is rapidly evolving. The legal landscape described here may change as new cases are decided and new regulations take effect. Consult a licensed IP attorney for advice specific to your situation.
KM
KellyMartinez_Mod Mod

We've had dozens of threads about AI-generated content and copyright over the past year, and the questions keep coming. This megathread consolidates the best analysis, case law summaries, and practical advice in one place.

This megathread covers:

  • Copyright status of AI outputs — can you copyright text, images, or code generated by AI?
  • The "human authorship" requirement — what the Copyright Office says and how courts are interpreting it
  • Platform-specific rights — what the terms of ChatGPT, Claude, Midjourney, Stable Diffusion, and DALL-E actually say about ownership
  • Commercial use — can you sell AI-generated content, use it in products, or license it?
  • Training data lawsuits — NYT v. OpenAI, Getty v. Stability AI, and what they mean for end users
  • Enterprise policies — how companies are handling AI content in contracts and workflows

Key takeaways (details in thread below):

  • Purely AI-generated content (no meaningful human involvement) is almost certainly not copyrightable under current U.S. law
  • AI-assisted content with sufficient human authorship likely is copyrightable — the line is still being drawn by the courts
  • Most AI platforms assign output rights to the user under their terms of service, but that's a contractual right — not a copyright
  • Training data lawsuits are far from settled and could change the landscape for all users
  • If you're using AI content commercially, document your human contributions and have clear policies

Please keep discussion focused on legal and practical issues. This is not the thread for debating whether AI art is "real art." New developments will be added as they happen.

MK
AttorneyMichaelK Attorney

Legal Analysis: Copyright Status of AI-Generated Works

Let me lay out the current legal framework as clearly as I can. This is one of the fastest-moving areas of IP law, but the foundational principles are becoming clearer.

1. The Human Authorship Requirement

U.S. copyright law protects "original works of authorship" (17 U.S.C. § 102). The Copyright Office has consistently maintained that "authorship" requires a human author. This isn't new — it goes back to Burrow-Giles Lithographic Co. v. Sarony (1884), where the Supreme Court held that copyright could protect a photograph because a human made creative choices in composing, lighting, and arranging the subject.

2. Thaler v. Perlmutter (D.D.C. 2023)

This was the first federal court case directly addressing AI-generated works. Stephen Thaler applied for a copyright registration for an image generated entirely by his AI system, the "Creativity Machine" (DABUS was the AI in his separate patent cases), naming the AI as the author. The court (Judge Howell) held that copyright requires human authorship and that a work generated entirely by AI without human creative input cannot be copyrighted. The ruling was straightforward: "Human authorship is a bedrock requirement of copyright." The D.C. Circuit affirmed in March 2025.

3. Zarya of the Dawn (Copyright Office, 2023)

This is the more nuanced and practically important decision. Kristina Kashtanova registered a graphic novel that combined AI-generated images (from Midjourney) with human-authored text and human-arranged page layouts. The Copyright Office ruled:

  • The individual AI-generated images were not copyrightable because Kashtanova's text prompts to Midjourney did not constitute sufficient creative control over the resulting images
  • The text she wrote was copyrightable (standard human authorship)
  • The selection and arrangement of images and text into a cohesive work was copyrightable as a compilation under 17 U.S.C. § 103

This is the key framework: a work can contain both copyrightable and non-copyrightable elements. Your copyright protection extends only to the human-authored portions.

4. The "Sufficient Human Authorship" Standard

The Copyright Office's March 2023 registration guidance established that copyright registration requires "sufficient human authorship." The question is: where does the human's creative contribution end and the AI's begin? The Office has indicated that:

  • Typing a prompt into an AI tool is generally not sufficient human authorship (similar to giving instructions to a commissioned artist — you don't own the copyright just because you described what you wanted)
  • Selecting, arranging, and modifying AI outputs can constitute sufficient human authorship if the creative choices are meaningful
  • Using AI as one tool among many in a creative process (e.g., generating a rough draft, then extensively editing and revising) is more likely to result in copyrightable work

The bottom line: the more human creative input you add on top of the AI output, the stronger your copyright claim. Pure prompt-to-output with no further human modification is the weakest position.

DD
DataPrivacyDan

Training Data Lawsuits and What They Mean for End Users

Even if you can sort out the copyright-on-the-output side, there's a second major legal issue: were the AI models themselves trained on copyrighted material without authorization? This is the subject of several major lawsuits that could reshape the entire landscape.

Key cases:

  • The New York Times v. OpenAI and Microsoft — The NYT sued alleging that ChatGPT and Bing Chat were trained on millions of NYT articles without authorization. The NYT demonstrated that ChatGPT could reproduce near-verbatim excerpts of copyrighted articles. OpenAI argued fair use. Portions of this case have been settled, but the core fair use question remains in litigation and could reach the appellate courts.
  • Getty Images v. Stability AI — Getty sued Stability AI (maker of Stable Diffusion) in both the U.S. (D. Del.) and the UK for training on Getty's copyrighted image library. Getty showed that Stable Diffusion sometimes generated images with distorted Getty watermarks — strong evidence that the training data included Getty images. This case is ongoing.
  • Andersen v. Stability AI et al. — A class action brought by visual artists against Stability AI, Midjourney, and DeviantArt. The court dismissed some claims but allowed the core copyright infringement claims against Stability AI to proceed. This is the case most likely to produce a class-wide ruling on whether training on copyrighted images constitutes infringement.
  • Authors Guild v. OpenAI — Prominent authors (including John Grisham, Jodi Picoult, and George R.R. Martin) sued OpenAI alleging unauthorized training on their published works. This case could establish important precedents for text-based AI training.

Why this matters for end users: If courts ultimately rule that AI training on copyrighted data is infringement (not fair use), end users of those AI tools could face secondary liability claims — particularly if you're using AI outputs commercially in ways that compete with the original training data sources. This is still theoretical, but it's a real risk that businesses should account for.

For a deeper discussion of how AI scraping affects content creators, see: AI scraping my content — legal options

FA
FreelanceWriter_Amy

Practical question: I used ChatGPT to help write marketing copy for a client's website. The client is now asking me to guarantee that the copy is "original and copyrightable" — which is standard language in my freelance writing contracts. Can I honestly make that guarantee?

To be clear, I didn't just paste in a prompt and hand over the output. I wrote detailed outlines, generated drafts with ChatGPT, then spent several hours rewriting, restructuring, and adding my own voice. Maybe 40% of the final text originated from ChatGPT and 60% is my own writing. But it's all blended together now.

Also — if a competitor copies this marketing copy word-for-word, would my client have any legal recourse? Or is the AI-generated portion "free for all"?

MK
AttorneyMichaelK Attorney

@FreelanceWriter_Amy — Good questions. Based on what you've described, you're in a reasonable position, but there are nuances:

On copyrightability: Given the level of human creative involvement you've described — outlining, rewriting, restructuring, and blending your own writing — the final work likely meets the "sufficient human authorship" threshold. The Copyright Office's guidance suggests that using AI as a drafting tool, followed by substantial human revision, is the type of AI-assisted creation that can result in copyrightable work. Your situation is closer to the copyrightable text portions of Zarya of the Dawn than to the non-copyrightable AI images.

On your contractual guarantee: I'd recommend modifying your standard contract language. Instead of guaranteeing the work is "original and copyrightable," consider disclosing that AI tools were used in the drafting process and that the final work reflects substantial human authorship and creative judgment. Transparency protects you legally.

On a competitor copying it: Your client would likely have enforceable copyright in the final work as a whole, given the substantial human contribution. However, if a competitor could somehow isolate the purely AI-generated portions and copy only those, copyright protection for those specific passages would be much weaker. In practice, this is nearly impossible when the work is thoroughly blended — which actually works in your favor.

There are also other legal theories beyond copyright — unfair competition, trade dress, and state-level unfair business practices laws — that could provide protection even for non-copyrightable elements.

TM
TechFounderMike

How Our Startup Handles AI-Generated Content (Practical Policy)

We use AI tools extensively — Claude for code review and documentation, ChatGPT for first-draft marketing copy, and Midjourney for social media visuals. Here's the internal policy we developed with our attorney:

  1. Disclosure: All content that used AI in its creation is tagged internally. We don't publicly disclose AI use on every piece of content, but we track it so we can respond if questioned.
  2. Human review required: No AI-generated content goes live without substantive human review and editing. "Substantive" means actual creative changes, not just proofreading. This is both a quality control measure and a legal protection.
  3. No AI for core IP: Our core product code, patent applications, and key brand assets are human-created. We use AI for support tasks (documentation, boilerplate, initial drafts) but not for the output we consider our competitive advantage.
  4. Insurance: We added a media liability rider to our E&O insurance that covers claims arising from AI-generated content. Cost was about $800/year. Worth it for the peace of mind.
  5. Vendor contracts: When we hire freelancers or agencies, we now ask whether they use AI tools and require disclosure. We don't prohibit AI use, but we want to know.

This approach lets us move fast and use AI tools productively while managing the legal risk. The key insight: treat AI like any other tool in your creative process, but document the human contribution.

For more on Claude's specific terms for business use, see: Claude AI terms for business use

SJ
StartupLawyerJess Attorney

Platform-by-Platform: What the Terms of Service Actually Say About Ownership

I've reviewed the current (as of December 2025) terms of service for the major AI platforms. Here's what each one says about who owns the output:

OpenAI (ChatGPT, DALL-E, GPT-4 API)

  • OpenAI assigns all rights in the output to the user, "to the extent permitted by applicable law"
  • That caveat ("to the extent permitted by applicable law") is key — it means OpenAI is assigning you whatever rights exist, but if the output isn't copyrightable, there's nothing to assign
  • OpenAI retains the right to use inputs and outputs to improve their models (unless you opt out via the API or enterprise settings)
  • Commercial use is permitted on all paid plans

Anthropic (Claude)

  • Anthropic's terms similarly state that output ownership belongs to the user to the extent permitted by law
  • On their commercial API and Claude for Work plans, Anthropic does not use inputs or outputs for model training by default
  • Commercial use is permitted
  • See the detailed breakdown in: Claude AI terms for business use

Midjourney

  • Paid subscribers own the assets they create, subject to the same "to the extent permitted by law" caveat
  • Free-tier users grant Midjourney a license to use their generated images
  • Midjourney grants paid users a broad commercial license to use generated images
  • Companies with over $1M in annual gross revenue must purchase the "Pro" or "Mega" plan for commercial use
  • Important: Midjourney's terms do not grant exclusivity — other users could theoretically generate similar images from similar prompts

Stability AI (Stable Diffusion)

  • The open-source model (downloadable) has a permissive license — you own your outputs
  • The hosted API (DreamStudio) terms assign output rights to users
  • However, Stability AI faces the most serious training data lawsuits, which creates indirect risk for commercial users

Key point: All of these platforms assign you contractual rights to the output. But a contractual right is different from a copyright. The platform can give you permission to use and commercialize the output (contractual right), but they cannot give you a copyright that doesn't exist under the law. Think of it like a license to public domain material — you have permission to use it, but you can't stop others from using similar AI-generated content.

CB
CryptoTrader_Ben

Different angle here: my client runs a DTC e-commerce brand and wants to use AI-generated product lifestyle images instead of hiring photographers. We're talking about using Midjourney to create images of their products in styled settings — kitchens, living rooms, etc.

Two concerns: (1) Can a competitor just take these images since they're AI-generated and not copyrightable? (2) Is there any risk that Midjourney generates an image that's too similar to an existing copyrighted photograph, exposing my client to infringement claims?

SJ
StartupLawyerJess Attorney

@CryptoTrader_Ben — Both are legitimate concerns.

On competitors copying the images: Yes, if the images are purely AI-generated (no significant human post-processing), copyright protection is likely weak or nonexistent. However, your client has other protections:

  • If a competitor scrapes and uses the exact same images, that could constitute unfair competition under Section 43(a) of the Lanham Act (passing off / false advertising)
  • Website terms of service can contractually restrict copying of images, even if they lack copyright protection
  • The practical reality: competitors are more likely to generate their own AI images than to steal yours

On similarity to existing photographs: This is the more serious risk. AI image generators are trained on existing photographs, and there have been documented cases of outputs that closely resemble specific copyrighted works. Your client could face an infringement claim from the original photographer even if the similarity was unintentional.

Practical mitigation: run a reverse image search on any AI-generated image before publishing it commercially. Services like TinEye or Google Images can help identify if the generated image is suspiciously similar to an existing work. It's not bulletproof, but it's a reasonable due diligence step.
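One way to make that pre-publication check repeatable: alongside reverse image search, keep a local library of perceptual hashes of images you've already cleared or flagged, and compare each new generation against it. Below is a minimal difference-hash ("dHash") sketch in Python. It operates on an already-resized 9x8 grayscale pixel grid, so a real pipeline would first load and downscale the image with a library like Pillow; that loading step is omitted here:

```python
def dhash(pixels: list[list[int]]) -> int:
    """Difference hash of a 9x8 grayscale grid: each of the 64 bits records
    whether a pixel is brighter than its right-hand neighbour."""
    assert len(pixels) == 8 and all(len(row) == 9 for row in pixels)
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

def looks_similar(a: int, b: int, threshold: int = 10) -> bool:
    """Hashes within ~10 bits usually mean visually similar images;
    tune the threshold for your own false-positive tolerance."""
    return hamming(a, b) <= threshold
```

This only catches near-duplicates of images you already have hashes for; it's a cheap local filter, not a substitute for the reverse image search or for legal review.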

Also consider: some AI platforms (including Shutterstock's AI generator) offer indemnification for certain AI-generated images. That may be worth the premium for commercial product imagery.

CN
CorpCounsel_NYC Attorney

Enterprise Perspective: How Large Companies Are Handling AI Content

I'm general counsel at a mid-size tech company and I've also been advising several Fortune 500 clients on AI content policies. Here's what I'm seeing across the industry:

1. The "AI Attestation" Trend in Contracts

A growing number of companies are adding AI attestation clauses to their vendor contracts. These typically require the vendor to disclose whether AI was used in creating deliverables and, if so, to what extent. Some go further and require that all deliverables be "primarily human-authored." We've seen this in:

  • Marketing agency contracts (requiring disclosure of AI-generated copy or images)
  • Software development agreements (requiring disclosure of AI-generated code, especially re: GitHub Copilot)
  • Content licensing agreements (publishers requiring attestation that licensed content is human-authored)
  • Legal services agreements (law firms being asked to certify that briefs are not AI-generated)

2. Internal AI Content Policies

Most large companies I work with have adopted formal AI use policies. Common elements include:

  • Approved AI tools list (e.g., only enterprise versions of ChatGPT/Claude with data retention turned off)
  • Prohibition on inputting confidential information, trade secrets, or personal data into AI tools
  • Mandatory human review for any AI-generated content that will be published externally
  • Documentation requirements for AI use in regulated industries (financial services, healthcare, legal)

3. Risk Management Frameworks

The companies handling this best are treating AI content risk like any other IP risk — with documented policies, training programs, and periodic audits. The companies handling it worst are either banning AI entirely (losing productivity) or ignoring the issue (accumulating risk).

If you're at a company of any size, get an AI content policy in place now. The cost of creating a policy is trivial compared to the cost of an infringement claim or a contract dispute over undisclosed AI use.

NP
NewGrad_Priya

This is incredibly helpful. Question from a recent grad's perspective: I'm a graphic designer and I've been using Midjourney and Stable Diffusion to create concept art for my portfolio. Some pieces are heavily edited in Photoshop after generation, others are more "raw" outputs that I've curated.

Two concerns: (1) If I include AI-assisted work in my portfolio, do I need to disclose that? (2) If a potential employer asks me to sign a work-for-hire agreement that assigns them "all intellectual property rights" in my work, and I use AI tools in the creative process, is that an issue?

SC
SarahConsumerRights

@NewGrad_Priya — On disclosure in your portfolio: there's no legal requirement to disclose AI use in a personal portfolio (unless you're making specific claims about the work being "original" or "hand-crafted"). However, it's increasingly considered best practice in creative industries to disclose AI involvement. Many design firms and creative agencies are explicitly asking about it during interviews.

Practically speaking, I'd recommend labeling AI-assisted pieces honestly. It shows integrity and prevents awkward situations if an employer later discovers the tools you used. The design industry is moving toward a norm of transparency about AI use, and being ahead of that curve is better than being caught behind it.

On the work-for-hire IP assignment: yes, you should disclose to your employer that you use AI tools. A work-for-hire agreement that assigns "all intellectual property rights" may not actually convey copyright in the AI-generated portions (because there may be no copyright to convey). This could create a contractual dispute if the employer discovers AI use later and claims you misrepresented the nature of the work.

Transparency upfront avoids problems later.

DM
DevOps_Marcus

For anyone following the Copyright Office developments — they published Part 2 of their report on AI and copyright ("Copyright and Artificial Intelligence, Part 2: Copyrightability," January 2025), which focuses on copyrightability of AI-generated outputs. Key points from the report:

  • The Office reaffirmed the human authorship requirement but acknowledged it's a spectrum, not a bright line
  • They explicitly rejected the idea that prompt engineering alone constitutes sufficient human authorship for AI-generated images (reinforcing the Zarya of the Dawn reasoning)
  • For text-based AI outputs, they left more room for human authorship claims where the user provides detailed structural outlines, makes extensive edits, and exercises creative judgment in selecting and arranging AI-generated text
  • They concluded that existing law is adequate for now and declined to recommend new legislation, though the report does analyze (skeptically) proposals for a sui generis (special-purpose) form of protection that would be shorter and narrower than traditional copyright

The sui generis idea is interesting: it would give AI-generated works limited protection (maybe 5-10 years instead of life plus 70) without calling it "copyright," similar to how databases are protected in the EU. But the Office was unpersuaded, and any such right would require Congressional action, which... don't hold your breath.

For the code-specific angle on this, see: Copyright and AI-generated code and GitHub Copilot code ownership

RM
RealEstateBroker_Miami

Very practical question: I'm a real estate broker and my team uses ChatGPT to write property descriptions and neighborhood guides for listings. We also use AI to generate virtual staging images (adding furniture to photos of empty rooms). Are there any specific risks I should be aware of in the real estate context?

Our MLS has started asking whether listing photos are "AI-generated or digitally altered" and I want to make sure we're in compliance.

MK
AttorneyMichaelK Attorney

@RealEstateBroker_Miami — Real estate is actually one of the areas where AI content disclosure is becoming regulated fastest. A few things to be aware of:

  • Virtual staging: NAR (National Association of Realtors) guidelines and many state real estate commissions now require clear disclosure when listing photos have been digitally altered or virtually staged. AI-generated or AI-altered images must be labeled as such. Failure to disclose can be considered a deceptive practice under state consumer protection laws.
  • Property descriptions: Copyright risk is low here (AI-assisted descriptions with human editing are likely copyrightable as discussed above). The bigger risk is accuracy — if ChatGPT hallucinates facts about a property or neighborhood, you could face liability for material misrepresentation.
  • Fair housing: Be extremely careful with AI-generated neighborhood descriptions. If the AI generates language that could be interpreted as steering (e.g., describing demographics, school quality in coded terms), you could face Fair Housing Act violations. Always review AI-generated property and neighborhood descriptions through a fair housing compliance lens.

Bottom line: disclose AI-altered photos, verify facts in AI-generated descriptions, and review everything for fair housing compliance. The copyright question is probably the least of your concerns in real estate — accuracy and disclosure are the bigger issues.

KM
KellyMartinez_Mod Mod

Updated Summary (February 2026)

This thread has become one of our most valuable resources. Here's the current state of play:

What we know (settled or near-settled law):

  • Purely AI-generated works without meaningful human creative input are not copyrightable (Thaler v. Perlmutter, Copyright Office guidance)
  • AI-assisted works with sufficient human authorship can be copyrighted — protection extends to the human-authored elements (Zarya of the Dawn)
  • Writing a prompt is generally not enough human authorship to claim copyright on the AI's output
  • All major AI platforms assign output rights to users contractually, but this doesn't create copyright where none exists

What's still uncertain (active litigation and evolving guidance):

  • Exactly where the line falls between "sufficient" and "insufficient" human authorship
  • Whether AI training on copyrighted data constitutes fair use (NYT v. OpenAI, Getty v. Stability AI)
  • Whether end users face secondary liability for using AI tools trained on infringing data
  • Whether Congress will create a new sui generis protection for AI-generated works

Best practices for commercial AI content use:

  1. Document your human contributions — keep drafts, outlines, and edit histories
  2. Disclose AI use in professional contexts, especially in regulated industries
  3. Add substantive human creative input on top of AI outputs before claiming ownership
  4. Run reverse image searches on AI-generated images before commercial publication
  5. Update your contracts to address AI-generated content (both as a creator and as a buyer)
  6. Consider media liability insurance that covers AI-generated content claims
  7. Stay current — this area of law is changing quarterly
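On point 1 (documenting human contributions), even a small script helps: hash each draft as you go and you get a timestamped, tamper-evident trail of your editing history. A minimal Python sketch (file names and fields are illustrative):

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_draft(draft_path: str, note: str,
              log_path: str = "draft_history.jsonl") -> dict:
    """Record a SHA-256 fingerprint of a draft plus a note on what changed,
    building a timestamped trail of the human editing history."""
    content = Path(draft_path).read_bytes()
    entry = {
        "file": draft_path,
        "sha256": hashlib.sha256(content).hexdigest(),
        "note": note,  # e.g. "restructured intro, rewrote CTA in my own voice"
        "at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Run it after each working session; the changing hashes plus your notes show a progression of human revision rather than a single prompt-to-output event.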

I'll continue updating this thread as new cases are decided and new guidance is issued. If you have a specific question about your AI content situation, post it here and our community will help.
