Anthropic's Terms of Service explicitly address output ownership. Here's what they actually say:
"Subject to your compliance with our Terms, we assign to you all of our right, title, and interest-if any-in Outputs."
Anthropic Terms of Service, Section 5.2 (August 2025)
What This Actually Means
The good: Anthropic isn't claiming ownership of your Claude outputs. As between you and Anthropic, you own what Claude creates for you.
The catch: Two critical qualifiers limit this assignment. First, it is "subject to your compliance" with the Terms, so violating them can forfeit it. Second, Anthropic assigns only its interest "if any," which sidesteps the open question of whether purely AI-generated output carries any copyright to assign in the first place.
The August 2025 terms overhaul created a clear hierarchy. Your plan determines your rights:
| Feature | Free | Pro ($20/mo) | Team/Enterprise | API |
|---|---|---|---|---|
| Output Ownership | ✓ Assigned | ✓ Assigned | ✓ Assigned | ✓ Assigned |
| Training Opt-Out | ✗ None | ✓ Available | ✓ Default off | ✓ Never trained |
| Commercial Use | ⚠ Personal only | ⚠ Limited | ✓ Full rights | ✓ Full rights |
| Copyright Indemnity | ✗ None | ✗ None | ✓ Included | ✓ Included |
| Build Products | ✗ Not licensed | ⚠ Gray area | ✓ Permitted | ✓ Permitted |
Anthropic's Usage Policy sets hard limits. Violations can terminate your account AND void your ownership rights.
✗ Train Competing AI
Cannot use outputs to train ML models or build competing services.
✗ Resell Raw Outputs
Cannot sell Claude's content without adding substantial human contribution.
✗ Impersonate Humans
Cannot present AI output as human-written where disclosure is expected.
✗ High-Stakes Automation
Cannot use Claude to make employment, credit, housing, or legal eligibility decisions without human review.
✗ Political Campaigns
Cannot use for lobbying, campaign messaging, or election influence.
✗ Illegal Content
Standard prohibitions on fraud, violence, harassment, malware, etc.
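The high-stakes automation rule above implies a human-in-the-loop design: the model may draft a recommendation, but a person must approve it before it takes effect. A minimal sketch of that gate (all names here are hypothetical, not part of any Anthropic API):

```python
from dataclasses import dataclass

# Domains the usage policy treats as high-stakes (see the rule above).
HIGH_STAKES = {"employment", "credit", "housing", "legal"}

@dataclass
class Recommendation:
    domain: str
    summary: str
    human_approved: bool = False

def finalize(rec: Recommendation) -> str:
    """Release a recommendation only if it is low-stakes or a human signed off."""
    if rec.domain in HIGH_STAKES and not rec.human_approved:
        return "PENDING_HUMAN_REVIEW"
    return f"RELEASED: {rec.summary}"
```

For example, `finalize(Recommendation("credit", "approve loan"))` stays pending until `human_approved` is set, while a marketing recommendation releases immediately.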
The "Substantial Contribution" Requirement
The most misunderstood rule: You cannot sell or publish Claude's raw outputs as standalone products.
✓ Acceptable: Claude drafts; you rewrite 40%, add research, and inject your own voice.
✗ Unacceptable: Claude writes an ebook; you fix typos and publish.
✓ Personal Projects
Brainstorming, research, drafting, learning, internal work.
✓ Create with Your Input
Incorporate outputs with substantial human creativity.
✓ Commercial Use (API)
Build products, serve customers with proper licensing.
✓ Professional Assistance
Drafting, coding help, research with human review.
✓ Generate Code
Use Claude-generated code with review and testing.
✓ Marketing Assistance
Draft copy, social posts, emails with human editing.
Even if Anthropic assigns you rights, the law may not protect AI-generated content. Here's what changed:
Copyright Office Part 2 (January 2025): Writing detailed prompts does NOT make you the "author" of AI outputs. Prompt engineering alone doesn't demonstrate sufficient human creative control.
U.S. Copyright Office, Part 2: Copyrightability
Copyright Office Part 3 (May 2025): AI training on copyrighted works "raises significant questions" and may constitute infringement requiring licensing.
U.S. Copyright Office, Part 3: Training AI
Bartz v. Anthropic (June 2025): Federal court ruled AI training on copyrighted works is likely "transformative" fair use. Good news for AI companies, but the ruling is preliminary and will be appealed.
Bartz v. Anthropic, PBC, N.D. Cal.
| Scenario | Status | Protection |
|---|---|---|
| Raw Claude output, no editing | Not copyrightable | None; anyone can copy it |
| Clever prompts, raw output | Not copyrightable | Prompts don't create authorship |
| Light editing (typos) | Uncertain | Probably insufficient |
| Substantial editing (40%+) | Likely copyrightable | Human portions protected |
| AI as research, you write final | Copyrightable | Your expression protected |
Code Copyright: Special Considerations
- Functional code: Copyright protects expression, not function. If there's only one way to do something, it may not be copyrightable.
- Open source risk: If Claude reproduces GPL or other copyleft code, your codebase could inherit its licensing obligations.
- Trade secret alternative: Keep proprietary code confidential; trade secret protection doesn't require human authorship.
- Best practice: Review and modify Claude-generated code so the final product reflects substantial human authorship.
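One lightweight mitigation for the open-source risk above is to scan generated snippets for copyleft license markers before merging them. A minimal sketch, assuming generated code arrives as text (the marker list and function name are illustrative, not exhaustive; a real pipeline would use a dedicated license-scanning tool):

```python
import re

# Common copyleft markers; incomplete by design, for illustration only.
COPYLEFT_MARKERS = [
    r"GNU General Public License",
    r"\bGPL(v[23])?\b",
    r"GNU Lesser General Public License",
    r"Mozilla Public License",
]

def flag_copyleft(snippet: str) -> list[str]:
    """Return the copyleft markers found in a generated code snippet."""
    return [m for m in COPYLEFT_MARKERS if re.search(m, snippet, re.IGNORECASE)]
```

A hit doesn't prove the code was copied, but it's a cheap signal that the snippet deserves a closer look before it ships.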
Real-World Scenarios: Safe vs. Risky Use
✓ Software development: Ask Claude for code snippets or architectural suggestions. Review, test, and modify them for your codebase. You're the engineer responsible for the final product.
✗ Risky: Generate entire modules, make minor modifications, and ship to production. Technically compliant, but you lack copyright protection and may ship hidden bugs.
✗ Prohibited: Use Claude-generated code to build an AI service that competes with Anthropic. This violates the non-compete clause.
✓ Legal work: Use Claude to research issues, draft initial clauses, or outline briefs. Review for accuracy, add your own analysis, and verify all citations.
✗ Risky: Generate a brief with Claude and file it with minimal review. You risk malpractice liability for hallucinated citations and ethical violations.
✓ Writing: Use Claude for brainstorming, outlines, or first drafts. Substantially rewrite in your own voice, add original research, and fact-check claims.
⚠ Gray area: Generate marketing emails with Claude, make light edits, and publish. Probably compliant, but your copyright claim is weak if someone copies them.
✗ Prohibited: Have Claude write ebooks, make minimal changes, and sell them on Amazon. This violates the "no raw output" rule and may constitute fraud.
✓ Business use: Use the Claude API for customer support (with human escalation), content generation, and employee assistance, with proper licensing, disclosure, and human oversight.
⚠ Gray area: Using Free or Pro plans to serve business customers. It technically works, but you lack commercial licensing and indemnification.
✗ Prohibited: Building a Claude wrapper that resells access without value-add, or using Claude for automated hiring or loan decisions without human review.
✓ Academic use: Use Claude to explain concepts, find research angles, or proofread drafts. Write papers yourself and cite sources properly. Claude is a tutor, not a ghostwriter.
✗ Prohibited: Have Claude write your essay and submit it as your own. This violates the terms and virtually every honor code, and AI detection is improving.
Who owns Claude's outputs?
You do, as long as you comply with Anthropic's Terms. The terms explicitly assign "all right, title, and interest" in outputs to you. However, purely AI-generated content may have no copyright protection under current U.S. law.
Can I copyright Claude's outputs?
Probably not for raw outputs. The Copyright Office requires human authorship. If you substantially edit or add your own creativity, those human contributions may be copyrightable. The more you contribute, the stronger your claim.
Can I sell content Claude creates?
Yes, with conditions. You must add substantial original contribution; you can't sell raw output as a standalone product. For commercial use at scale, use the API. Claude assists YOUR creation rather than being the product itself.
Does Anthropic train on my conversations?
It depends on your plan. Since August 2025: Free tier may be used for training (no opt-out). Pro can opt out. Team/Enterprise and API are never used for training.
Does my plan affect my rights?
Ownership is similar across all plans, but commercial rights differ. Free is personal use only. Pro allows limited commercial use. API/Enterprise provides full commercial rights, indemnification, and no training on your data.
Can I build a commercial product on Claude?
Yes, if you use the API. The API terms license commercial products serving end users. You cannot use Free/Pro for multi-user products. Also prohibited: competing with Anthropic or "thin wrapper" products.
Do I have to disclose that I used AI?
In certain contexts, yes. Anthropic requires disclosure when users might mistake AI for human work (chatbots, customer service). Academic institutions typically require it. Many jurisdictions are passing AI disclosure laws.
What happens if I violate the terms?
Anthropic can suspend or terminate access, and you may forfeit the ownership assignment. For serious violations, Anthropic could pursue breach of contract. Some violations could expose you to criminal liability.
What about Claude Code?
Same ownership rules apply. Claude Code outputs are covered by whichever terms govern your usage (API, Pro, etc.). Code you generate is assigned to you, but purely AI-generated portions may lack copyright protection.
Need a Lawyer's Opinion?
Get personalized guidance on your Claude commercial use, IP ownership questions, or contract review.