On January 22, 2026, Anthropic released "The Anthropic Model Spec," a 23,000-word document (roughly three times the length of the U.S. Constitution) that defines how Claude should behave. It is the most comprehensive public AI behavior framework released to date.
The Four-Priority Hierarchy
Claude's Constitution establishes a strict priority order that governs how Claude responds, and what outputs it will create:
1. Safety & Human Oversight (Highest)
Claude will refuse to produce outputs that could cause catastrophic harm, regardless of user instructions
2. Ethical Behavior
Avoids deception, illegal activity, and actions that violate trust even if technically allowed
3. Anthropic's Guidelines
Follows company policies on content, data use, and acceptable behavior
4. Helpfulness (Lowest)
Being genuinely useful to users, but only after the above constraints are satisfied
Hardcoded vs. Softcoded Behaviors
The Constitution distinguishes between behaviors Claude will never change and those that can be adjusted:
| Category | Examples | Impact on Outputs |
|---|---|---|
| Hardcoded OFF | Weapons of mass destruction, CSAM, undermining AI oversight | Claude will never generate this content |
| Hardcoded ON | Acknowledging being an AI, referring users to emergency services | Always present in relevant contexts |
| Softcoded (Default OFF) | Explicit content, detailed security vulnerability info | Operators can enable for specific contexts |
| Softcoded (Default ON) | Following suicide/self-harm safe messaging, adding safety caveats | Operators can disable for appropriate contexts |
What This Means for Your Outputs
Key Takeaways for Users
- Transparency: You now know exactly why Claude refuses certain requests
- Consistency: The four-priority hierarchy ensures predictable behavior across all tiers
- Operator Control: API/Enterprise users can toggle softcoded behaviors for legitimate use cases
- Industry Influence: Releasing the document under CC0 (public domain) may influence how other AI providers structure their own guidelines
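For API operators, the "softcoded" adjustments above aren't exposed as a dedicated on/off switch; in practice, context-specific behavior is conveyed through operator-level system instructions in the Messages API. Below is a minimal sketch of what such a request body might look like. The request shape (model, max_tokens, system, messages) follows Anthropic's public Messages API, but the model ID and system-prompt wording are purely illustrative, not an official toggle syntax:

```python
import json

# Sketch: an operator-configured Messages API request body.
# The "system" field carries operator-level instructions, which is how
# deployment-specific (softcoded) behavior adjustments are typically conveyed.
# The system prompt text below is illustrative only.
request_body = {
    "model": "claude-sonnet-4-5",  # illustrative model ID
    "max_tokens": 1024,
    "system": (
        "You are a security-research assistant for a licensed red team. "
        "Detailed vulnerability discussion is in scope for this deployment."
    ),
    "messages": [
        {"role": "user", "content": "Explain how a SQL injection works."}
    ],
}

print(json.dumps(request_body, indent=2))
```

Whether a given adjustment is honored still depends on the hierarchy above: operator instructions can shift softcoded defaults, but they cannot override hardcoded behaviors.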
Anthropic's Terms of Service explicitly address output ownership. Here's what they actually say:
"Subject to your compliance with our Terms, we assign to you all of our right, title, and interest, if any, in Outputs."
Anthropic Terms of Service, Section 5.2 (August 2025)
What This Actually Means
The good: Anthropic isn't claiming ownership of your Claude outputs. As between you and Anthropic, you own what Claude creates for you.
The catch: Two critical qualifiers limit this assignment. First, it applies only while you comply with the Terms, so a violation can undo it. Second, the phrase "if any" acknowledges that Anthropic may have no rights to assign in the first place, since purely AI-generated content may not be protectable at all.
The August 2025 terms overhaul created a clear hierarchy. Your plan determines your rights:
| Feature | Free | Pro ($20/mo) | Team/Enterprise | API |
|---|---|---|---|---|
| Output Ownership | ✓ Assigned | ✓ Assigned | ✓ Assigned | ✓ Assigned |
| Training Opt-Out | ✗ None | ✓ Available | ✓ Default off | ✓ Never trained |
| Commercial Use | ⚠ Personal only | ⚠ Limited | ✓ Full rights | ✓ Full rights |
| Copyright Indemnity | ✗ None | ✗ None | ✓ Included | ✓ Included |
| Build Products | ✗ Not licensed | ⚠ Gray area | ✓ Permitted | ✓ Permitted |
Anthropic's Usage Policy sets hard limits. Violations can lead to account termination AND void your ownership rights, since the assignment is conditioned on compliance.
✗ Train Competing AI
Cannot use outputs to train ML models or build competing services.
✗ Resell Raw Outputs
Cannot sell Claude's content without adding substantial human contribution.
✗ Impersonate Humans
Cannot present AI output as human-written where disclosure is expected.
✗ High-Stakes Automation
Cannot use Claude to make automated decisions about employment, credit, housing, or legal eligibility.
✗ Political Campaigns
Cannot use for lobbying, campaign messaging, or election influence.
✗ Illegal Content
Standard prohibitions on fraud, violence, harassment, malware, etc.
The "Substantial Contribution" Requirement
The most misunderstood rule: You cannot sell or publish Claude's raw outputs as standalone products.
Acceptable: Claude drafts, you rewrite 40%, add research, inject your voice
Unacceptable: Claude writes ebook, you fix typos and publish
✓ Personal Projects
Brainstorming, research, drafting, learning, internal work.
✓ Create with Your Input
Incorporate outputs with substantial human creativity.
✓ Commercial Use (API)
Build products, serve customers with proper licensing.
✓ Professional Assistance
Drafting, coding help, research with human review.
✓ Generate Code
Use Claude-generated code with review and testing.
✓ Marketing Assistance
Draft copy, social posts, emails with human editing.
Even if Anthropic assigns you rights, the law may not protect AI-generated content. Here's what changed:
Copyright Office Part 2 (January 2025): Writing detailed prompts does NOT make you the "author" of AI outputs. Prompt engineering alone doesn't demonstrate sufficient human creative control.
U.S. Copyright Office, Part 2: Copyrightability
Copyright Office Part 3 (May 2025): AI training on copyrighted works "raises significant questions" and may constitute infringement requiring licensing.
U.S. Copyright Office, Part 3: Training AI
Bartz v. Anthropic – $1.5B Settlement (September 2025): The largest copyright settlement in U.S. history. Anthropic agreed to pay authors $1.5 billion and to establish a paid licensing framework for training data, mooting the June 2025 "transformative fair use" ruling. The settlement signals that AI companies will increasingly license rather than litigate.
Bartz v. Anthropic, PBC, N.D. Cal. (settled Sept. 2025)
Thaler v. Perlmutter – SCOTUS Cert Petition (October 2025): Stephen Thaler asked the Supreme Court to decide whether an AI can be a legal "author" under the Copyright Act. The D.C. Circuit affirmed in March 2025 that only humans can hold copyrights. The DOJ filed its opposition in January 2026, arguing that existing law is clear. If SCOTUS declines cert (as is likely), the "humans only" rule remains settled law.
Thaler v. Perlmutter, No. 25-XXX (cert. petition Oct. 2025)
| Scenario | Status | Protection |
|---|---|---|
| Raw Claude output, no editing | Not copyrightable | None; anyone can copy it |
| Clever prompts, raw output | Not copyrightable | Prompts don't create authorship |
| Light editing (typos) | Uncertain | Probably insufficient |
| Substantial editing (40%+) | Likely copyrightable | Human portions protected |
| AI as research, you write final | Copyrightable | Your expression protected |
Code Copyright: Special Considerations
- Functional code: Copyright protects expression, not function. If there's only one way to do something, it may not be copyrightable.
- Open source risk: If Claude reproduces GPL/copyleft code, your code could have licensing obligations.
- Trade secret alternative: Keep proprietary code confidential; trade secret protection doesn't require human authorship.
- Best practice: Review and substantially modify Claude-generated code so your rights don't hinge on copyright in raw AI output.
Enterprise Indemnification
Team/Enterprise and API plans include copyright indemnification (see the plan table above); Free and Pro users bear infringement risk themselves.
Real-World Scenarios: Safe, Gray, and Risky

Developers
✓ Safe: Ask Claude for code snippets or architectural suggestions. Review, test, and modify for your codebase. You're the engineer responsible for the final product.
⚠ Gray area: Generate entire modules, make minor modifications, ship to production. Technically compliant, but you lack copyright protection and may have hidden bugs.
✗ Risky: Use Claude-generated code to build an AI service that competes with Anthropic. Violates the non-compete clause.

Lawyers
✓ Safe: Use Claude to research issues, draft initial clauses, or outline briefs. Review for accuracy, add your analysis, verify all citations.
✗ Risky: Generate a brief with Claude and file it with minimal review. You risk malpractice for hallucinations and ethical violations.

Writers
✓ Safe: Use Claude for brainstorming, outlines, or first drafts. Substantially rewrite in your voice, add original research, fact-check claims.
⚠ Gray area: Generate marketing emails with Claude, make light edits, publish. Probably compliant, but your copyright claims are weak if the content is copied.
✗ Risky: Have Claude write ebooks, make minimal changes, sell them on Amazon. Violates the "no raw output" rule and may constitute fraud.

Businesses
✓ Safe: Use the Claude API for customer support (with human escalation), content generation, and employee assistance, with proper licensing, disclosure, and human oversight.
⚠ Gray area: Using Free or Pro to serve business customers. Technically works, but you lack commercial licensing and indemnification.
✗ Risky: Build a Claude wrapper that resells access without value-add, or use Claude for automated hiring or loan decisions without human review.

Students
✓ Safe: Use Claude to explain concepts, find research angles, proofread drafts. Write papers yourself and cite sources properly. Claude is a tutor, not a ghostwriter.
✗ Risky: Have Claude write your essay and submit it as your own. Violates the terms AND virtually every honor code, and AI detection is improving.
Frequently Asked Questions

Who owns what Claude creates for me?
You do, as long as you comply with Anthropic's Terms. The terms explicitly assign "all right, title, and interest" in outputs to you. However, purely AI-generated content may have no copyright protection under current U.S. law.

Can I copyright Claude's outputs?
Probably not for raw outputs. The Copyright Office requires human authorship. If you substantially edit or add your own creativity, those human contributions may be copyrightable. The more you contribute, the stronger your claim.

Can I sell content I made with Claude?
Yes, with conditions. You must add substantial original contribution; you can't sell raw output as a standalone product. For commercial use at scale, use the API. Claude should assist YOUR creation rather than being the product itself.

Does Anthropic train on my conversations?
It depends on your plan. Since August 2025: the Free tier may be used for training (no opt-out), Pro can opt out, and Team/Enterprise and API data are never used for training.

Do my rights differ by plan?
Ownership is similar across all plans, but commercial rights differ. Free is personal use only. Pro allows limited commercial use. API/Enterprise provides full commercial rights, indemnification, and no training on your data.

Can I build a commercial product on Claude?
Yes, if you use the API. The API terms license commercial products that serve end users. You cannot use Free/Pro for multi-user products. Also prohibited: competing with Anthropic or "thin wrapper" products.

Do I have to disclose that content is AI-generated?
In certain contexts, yes. Anthropic requires disclosure when users might mistake AI for human work (chatbots, customer service). Academic institutions typically require it. Many jurisdictions are passing AI disclosure laws.

What happens if I violate the terms?
Anthropic can suspend or terminate access, and you may forfeit the ownership assignment. For serious violations, Anthropic could pursue breach-of-contract claims. Some violations could expose you to criminal liability.

Does Claude Code have different ownership rules?
No; the same ownership rules apply. Claude Code outputs are covered by whichever terms govern your usage (API, Pro, etc.). Code you generate is assigned to you, but purely AI-generated portions may lack copyright protection.
Need a Lawyer's Opinion?
Get personalized guidance on your Claude commercial use, IP ownership questions, or contract review.
Schedule a consultation at a time that works for you: a video call to discuss your specific situation.