📰 Recent Platform Updates

  • 2025-2026 Anthropic: $1.5B settlement talks with music publishers over Claude training data usage
  • 2025-2026 OpenAI: Ordered by court to preserve 20M+ user interaction logs for the NYT copyright case
  • 2025-2026 Figma: Class action filed over AI training on user designs without consent
  • 2025-2026 Disney-OpenAI $1B deal: First major studio licensing deal for AI training content
  • 2025-2026 Warner Music: Settled with Suno/Udio over AI music generation copyright claims
  • Nov 2025 Getty v. Stability AI ruling: Stability wins on copyright, limited trademark finding
  • Mar 2025 D.C. Circuit affirms: AI cannot be a copyright author (Thaler appeal)
  • 2025 Copyright Office releases Part 2 report on AI copyrightability

All major AI providers now say "you own your outputs"—but the details differ significantly. Here's what each platform's actual terms say:

| Platform | Output Ownership | Commercial Use | Key Restrictions | Attribution | IP Indemnity |
|----------|------------------|----------------|------------------|-------------|--------------|
| OpenAI (ChatGPT, DALL·E) | You retain input rights; OpenAI assigns output rights to you | Best: permitted across all tiers | No competing models; content policies | None required | None (as-is basis) |
| Anthropic (Claude) | You retain inputs; Anthropic assigns output rights | Best: permitted under commercial terms | No competing models; acceptable use policy | None required | Enterprise only |
| Google (Gemini) | Google doesn't claim ownership; you own outputs | Good: permitted (Workspace, enterprise) | Standard acceptable use policies | No hard requirement | None dedicated |
| Midjourney | You own "to fullest extent possible" | Conditional: paid subscribers only; revenue thresholds | Public gallery by default; content guidelines | None required (voluntary) | None (user bears risk) |
| Stability AI (Stable Diffusion) | You own outputs under model license | Good: open licenses; enterprise deals at scale | OpenRAIL restrictions; no competing models | None required | None (broad disclaimers) |
| Microsoft (Copilot) | You own input and output; Microsoft doesn't claim IP | Best: permitted under business terms | OSS license compliance; safety filters | None required | Full (Copilot Copyright Commitment) |
💡 Key Takeaway: All major platforms assign ownership to you, but only Microsoft and Anthropic (enterprise tiers) offer IP indemnity if outputs infringe third-party rights.

Each platform has unique nuances; our platform-specific guides cover each in detail.

⚠️ Perplexity Gap: Unlike Claude and ChatGPT, Perplexity's terms don't explicitly assign output ownership. They simply don't address it, which may leave ownership unclear under default copyright principles.

Beneath the ownership headline, several recurring contractual themes matter in practical use.

Training Data Policies

OpenAI, Anthropic, Google, and Microsoft now draw clear lines between consumer and business traffic:

  • Business/API data: Not used for training by default
  • Consumer free tiers: May contribute to model improvement unless you opt out
  • Enterprise contracts: Explicit no-training commitments available

"If you are pushing confidential data, proprietary code, or core creative assets through AI, consumer free tiers are the wrong place to do it."

— Practical guidance for business users
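As an operational sketch, the tier distinctions above can be encoded as a simple pre-send policy gate. Everything here is illustrative, not any vendor's actual API: the tier names and training defaults are assumptions you should verify against your own contracts.

```python
from enum import Enum

class Tier(Enum):
    CONSUMER_FREE = "consumer_free"   # may contribute to training unless you opt out
    BUSINESS_API = "business_api"     # not used for training by default
    ENTERPRISE = "enterprise"         # explicit no-training commitment

# Illustrative defaults only; confirm against the vendor's current terms.
TRAINS_ON_DATA_BY_DEFAULT = {
    Tier.CONSUMER_FREE: True,
    Tier.BUSINESS_API: False,
    Tier.ENTERPRISE: False,
}

def may_send(tier: Tier, confidential: bool, opted_out: bool = False) -> bool:
    """Allow confidential data only where training on it is off."""
    if not confidential:
        return True
    return not TRAINS_ON_DATA_BY_DEFAULT[tier] or opted_out
```

A gate like this is most useful wired into internal tooling, so the "wrong tier for this data" decision is made before the request leaves your network rather than after.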

No Competing Model Clauses

Most providers restrict using their outputs to train competing AI models. This includes OpenAI, Anthropic, Midjourney, and Stability AI. Owning an output doesn't give you a free hand to use it for training a rival platform.

Disclosure and Attribution

No platform requires formal attribution, but transparency is trending:

  • OpenAI: Encourages disclosure for heavily AI-assisted publications
  • Google: Warns against passing off AI content as human where deceptive
  • Regulated industries: Finance, healthcare, elections may require disclosure
📋 The contract layer generally lets you do almost anything legitimate with outputs, but it does care about how you use the service and whether you try to bootstrap a competitor on top of someone else's model.

Every generative AI provider tries to push IP risk away from itself. The baseline is "as-is, no warranty"—but some providers now offer protection.

The Default Position: You're On Your Own

Most platforms provide outputs without promises that they're accurate, non-infringing, or fit for purpose. Liability is aggressively capped. If you publish an AI-generated image that echoes a photographer's portfolio and get sued, the default is that you bear the risk.

The Game-Changers: Indemnity Commitments

🛡️ Microsoft Copilot Copyright Commitment

Defends and indemnifies eligible business customers for copyright claims arising from Copilot outputs, provided guardrails are enabled.

🛡️ Anthropic Enterprise IP Protection

Indemnifies customers for IP claims tied to authorized use of Claude, subject to exclusions for misuse or knowing infringement.

Getty v. Stability AI: What It Means

The November 2025 UK ruling largely favored Stability on copyright grounds, but found limited trademark infringement over Getty watermarks in outputs. Key takeaways:

  • Training-data disputes target platforms, not individual users
  • Output risks fall on you if you publish infringing content
  • Vendor indemnity only helps if you stayed within their guardrails
⚠️ Critical Point: If an output is close enough to a protected work to raise infringement questions, the plaintiff will almost certainly name you as a defendant alongside (or instead of) the model provider.

Treating AI as a legal-grade tool rather than a novelty requires deliberate habits. Here's what determines whether you're building on sand or rock:

💼 Use Business-Grade Plans

For confidential data or core creative assets, use Enterprise/API tiers with explicit no-training commitments.

🎨 Add Human Creativity

For logos and brand visuals, treat AI as ideation. Have humans refine the output until the final version reflects clear creative choices.

📁 Document Everything

Keep prompts, raw outputs, and subsequent drafts. This documents your human creative process if you later seek copyright registration.
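One lightweight way to implement this is an append-only JSONL provenance log, one record per generation step. A minimal sketch (the file path and field names are illustrative, not a standard format):

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_generation(log_path: Path, prompt: str, output: str, note: str = "") -> dict:
    """Append one prompt/output pair to a JSONL provenance log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "output": output,
        "human_edit_note": note,  # describe the creative choices you made at this step
    }
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Storing a hash alongside each raw output makes it easy to show later that a particular draft existed, unaltered, at a particular time.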

📝 Update Freelancer Contracts

Clarify AI use conditions, require disclosure, and warrant that deliverables are eligible for expected IP treatment.

🚩 Red Flag Obvious Echoes

If an output contains recognizable characters, logos, or artist styles, treat it as a potentially unauthorized derivative work.

™️ Lean on Trademark Law

For AI logos used as source identifiers, trademark protection may be stronger than arguing about copyright.

Vendor Selection Tip: If your business depends heavily on AI-generated content at scale, factor in indemnity (Microsoft's Copilot Commitment, Anthropic's enterprise protection) even if model quality is similar elsewhere.
Do I "own" the copyright in content created with ChatGPT, Claude, Gemini or similar tools?

In U.S. terms, you generally do not own copyright in purely AI-generated material that contains no human authorship. The Copyright Office's guidance and recent Thaler decisions make clear that a machine cannot be the legal author.

However, the major platforms contractually assign to you whatever rights they have and agree not to assert ownership themselves. This means you're free to use, modify, and commercialize outputs as far as the platform is concerned—but you may not be able to use copyright law to stop others from copying purely AI-generated content.

When you substantially revise or integrate AI output into work reflecting significant human creativity, you can hold copyright in your contribution.

If the platform says I own the output, can it still use my data for training?

Output ownership and data usage are separate clauses. OpenAI, Anthropic, Google, and Microsoft now all take the position that business/API data is not used to train models by default, while consumer chat may contribute unless you opt out.

If you need both ownership and strict confidentiality, use enterprise offerings that explicitly commit to no training on your data.

Can I safely use AI-generated images or text in my commercial products?

Contractually, yes—as long as you're on a plan that grants commercial rights. OpenAI, Google, Stability AI, Midjourney (paid), and Microsoft all permit commercial use.

The bigger risks are: (1) an output may be too close to someone else's protected work, leading to infringement claims, and (2) purely AI-generated content may not be protectable by you, so others might reuse it.

Do I have to disclose that something was AI-generated?

There's no single global rule requiring disclosure. In the U.S., there's no general statutory requirement today. Platform policies and sector regulation fill some of that space—OpenAI encourages disclosure for heavily AI-assisted publications, and Google warns against deceptive presentation.

More specific transparency rules are emerging around political advertising, biometric deepfakes, and consumer-protection contexts. In regulated industries, undisclosed AI use is more likely to be scrutinized.

What if an AI output is too close to someone else's work?

If an AI image or passage is substantially similar to a copyrighted work, you can be sued for infringement even if you never saw the original and the model did the copying. The fact that the model trained on that work is not a complete defense.

Your best protection: don't use obviously derivative outputs, run reverse image searches or plagiarism checks, and correct or remove content promptly if a rights holder objects. For some use cases, choosing a vendor with indemnity can shift the cost of dealing with such claims.
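For text, a crude first-pass screen can be built with Python's standard library. This only flags near-verbatim overlap with reference texts you supply (the 0.8 threshold is an arbitrary illustration) and is no substitute for a proper plagiarism-checking service:

```python
from difflib import SequenceMatcher

def flag_similar(output: str, references: list[str], threshold: float = 0.8) -> list[str]:
    """Return the reference texts whose similarity ratio to the output meets the threshold."""
    return [
        ref for ref in references
        if SequenceMatcher(None, output.lower(), ref.lower()).ratio() >= threshold
    ]
```

A clean result from a check like this proves little, but a hit is a strong signal to pull the content before publication.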

Can I build my own model using outputs from another provider?

Owning outputs doesn't mean you can use them however you like. Most major providers prohibit using their outputs to develop competing models; OpenAI, Anthropic, and others restrict this explicitly.

If you're building an in-house model, base training on data you own outright, license appropriately, or obtain independently. Don't assume scraping your own ChatGPT or Claude transcripts into a training set is contract-compliant.