All major AI providers now say "you own your outputs"—but the details differ significantly. Here's what each platform's actual terms say:
| Platform | Output Ownership | Commercial Use | Key Restrictions | Attribution | IP Indemnity |
|---|---|---|---|---|---|
| OpenAI (ChatGPT, DALL·E) | You retain input rights; OpenAI assigns output rights to you | Permitted across all tiers | No competing models; content policies | None required | None (as-is basis) |
| Anthropic (Claude) | You retain inputs; Anthropic assigns output rights | Permitted under commercial terms | No competing models; acceptable use | None required | Enterprise only |
| Google (Gemini) | Google doesn't claim ownership; you own outputs | Permitted (Workspace, enterprise) | Standard acceptable use policies | No hard requirement | None dedicated |
| Midjourney | You own "to the fullest extent possible" | Paid subscribers only; revenue thresholds | Public gallery by default; content guidelines | None required (voluntary) | None (user bears risk) |
| Stability AI (Stable Diffusion) | You own outputs under model license | Open licenses; enterprise deals at scale | OpenRAIL restrictions; no competing models | None required | None (broad disclaimers) |
| Microsoft (Copilot) | You own input and output; Microsoft doesn't claim IP | Permitted under business terms | OSS license compliance; safety filters | None required | Yes (Copilot Copyright Commitment) |
Each platform has unique nuances beyond what a summary table can capture.
Beneath the ownership headline, several recurring contractual themes matter in practical use.
Training Data Policies
OpenAI, Anthropic, Google, and Microsoft now draw clear lines between consumer and business traffic:
- Business/API data: Not used for training by default
- Consumer free tiers: May contribute to model improvement unless you opt out
- Enterprise contracts: Explicit no-training commitments available
"If you are pushing confidential data, proprietary code, or core creative assets through AI, consumer free tiers are the wrong place to do it."
No Competing Model Clauses
Most providers restrict using their outputs to train competing AI models. This includes OpenAI, Anthropic, Midjourney, and Stability AI. Owning an output doesn't give you a free hand to use it for training a rival platform.
Disclosure and Attribution
No platform requires formal attribution, but transparency expectations are rising:
- OpenAI: Encourages disclosure for heavily AI-assisted publications
- Google: Warns against passing off AI content as human where deceptive
- Regulated industries: Finance, healthcare, elections may require disclosure
Every generative AI provider tries to push IP risk away from itself. The baseline is "as-is, no warranty"—but some providers now offer protection.
The Default Position: You're On Your Own
Most platforms provide outputs without promises that they're accurate, non-infringing, or fit for purpose. Liability is aggressively capped. If you publish an AI-generated image that echoes a photographer's portfolio and get sued, the default is that you bear the risk.
The Game-Changers: Indemnity Commitments
🛡️ Microsoft Copilot Copyright Commitment
Defends and indemnifies eligible business customers for copyright claims arising from Copilot outputs, provided guardrails are enabled.
🛡️ Anthropic Enterprise IP Protection
Indemnifies customers for IP claims tied to authorized use of Claude, subject to exclusions for misuse or knowing infringement.
Getty v. Stability AI: What It Means
The November 2025 UK ruling largely favored Stability on copyright grounds, but found limited trademark infringement over Getty watermarks in outputs. Key takeaways:
- Training-data disputes target platforms, not individual users
- Output risks fall on you if you publish infringing content
- Vendor indemnity only helps if you stayed within their guardrails
The contract landscape can be negotiated. The copyright landscape is much less flexible.
"No human author, no copyright."
The Thaler Decisions
In 2023, a D.C. federal court rejected Stephen Thaler's attempt to register copyright in art generated by his "Creativity Machine." In 2025, the D.C. Circuit affirmed, describing human authorship as a "bedrock requirement."
The Bright Line
The Copyright Office draws a clear distinction:
❌ Purely AI-Generated
Images or text produced from prompts with little human editing. Not protectable.
✓ AI-Assisted
Human makes creative decisions about selection, arrangement, or substantial modification. Protectable to extent of human contribution.
International Landscape
- EU: Requires "author's own intellectual creation"—human mind making creative choices
- UK: Has a "computer-generated works" provision, but its application to modern AI is contested
- Most jurisdictions: Require human authorship
Practical Consequences
- You can use AI outputs freely under platform contracts
- You may not be able to stop others from copying purely AI-generated material
- You can protect AI-assisted works where human contribution is substantial
Treating AI as a legal-grade tool rather than a novelty requires deliberate habits. Here's what determines whether you're building on sand or rock:
💼 Use Business-Grade Plans
For confidential data or core creative assets, use Enterprise/API tiers with explicit no-training commitments.
🎨 Add Human Creativity
For logos and brand visuals, treat AI as ideation. Have humans refine until the final reflects clear creative choices.
📁 Document Everything
Keep prompts, raw outputs, and subsequent drafts. This evidences your human creative process for registration.
📝 Update Freelancer Contracts
Clarify AI use conditions, require disclosure, and warrant that deliverables are eligible for expected IP treatment.
🚩 Red Flag Obvious Echoes
If output contains recognizable characters, logos, or artist styles, treat it as unauthorized derivative work.
™️ Lean on Trademark Law
For AI logos used as source identifiers, trademark protection may be stronger than arguing about copyright.
In U.S. terms, you generally do not own copyright in purely AI-generated material that contains no human authorship. The Copyright Office's guidance and recent Thaler decisions make clear that a machine cannot be the legal author.
However, the major platforms contractually assign to you whatever rights they have and agree not to assert ownership themselves. This means you're free to use, modify, and commercialize outputs as far as the platform is concerned—but you may not be able to use copyright law to stop others from copying purely AI-generated content.
When you substantially revise or integrate AI output into work reflecting significant human creativity, you can hold copyright in your contribution.
Output ownership and data usage are separate clauses. OpenAI, Anthropic, Google, and Microsoft now all take the position that business/API data is not used to train models by default, while consumer chat may contribute unless you opt out.
If you need both ownership and strict confidentiality, use enterprise offerings that explicitly commit to no training on your data.
Contractually, yes—as long as you're on a plan that grants commercial rights. OpenAI, Google, Stability AI, Midjourney (paid), and Microsoft all permit commercial use.
The bigger risks are: (1) an output may be too close to someone else's protected work, leading to infringement claims, and (2) purely AI-generated content may not be protectable by you, so others might reuse it.
There's no single global rule requiring disclosure. In the U.S., there's no general statutory requirement today. Platform policies and sector regulation fill some of that space—OpenAI encourages disclosure for heavily AI-assisted publications, and Google warns against deceptive presentation.
More specific transparency rules are emerging around political advertising, biometric deepfakes, and consumer-protection contexts. In regulated industries, undisclosed AI use is more likely to be scrutinized.
If an AI image or passage is substantially similar to a copyrighted work, you can be sued for infringement even if you never saw the original and the model did the copying. The fact that the model trained on that work is not a complete defense.
Your best protection: don't use obviously derivative outputs, run reverse image searches or plagiarism checks, and correct or remove content promptly if a rights holder objects. For some use cases, choosing a vendor with indemnity can shift the cost of dealing with such claims.
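For text outputs, even a crude overlap check can flag the most blatant echoes before human review. A stdlib-only sketch using word-shingle Jaccard similarity; the 5-word shingle size and 0.5 threshold are arbitrary assumptions, and this is no substitute for a real plagiarism service:

```python
def shingles(text: str, k: int = 5) -> set:
    """Break text into overlapping k-word shingles, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def overlap_score(candidate: str, reference: str, k: int = 5) -> float:
    """Jaccard similarity of the two texts' shingle sets:
    0.0 = no shared k-word runs, 1.0 = identical shingle sets."""
    a, b = shingles(candidate, k), shingles(reference, k)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def looks_derivative(candidate: str, reference: str,
                     threshold: float = 0.5) -> bool:
    """Flag outputs that share too many verbatim word runs with a known source."""
    return overlap_score(candidate, reference) >= threshold
```

Shingle overlap only catches near-verbatim copying; paraphrased or visual similarity still needs dedicated tools and human judgment.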
Owning outputs doesn't mean you can use them however you like. Most major providers prohibit using their outputs to develop competing models; OpenAI, Anthropic, and others restrict this explicitly.
If you're building an in-house model, base training on data you own outright, license appropriately, or obtain independently. Don't assume scraping your own ChatGPT or Claude transcripts into a training set is contract-compliant.