What Changed in 2025–2026

  • Jan 2026: Anthropic releases the 23,000-word "Claude Constitution," establishing a new four-priority hierarchy (Safety > Ethics > Guidelines > Helpfulness) and placing the document in the public domain under CC0
  • Jan 2026: Microsoft 365 Copilot now uses Claude by default; check which ToS applies to your outputs
  • Aug 2025: Anthropic overhauls its consumer terms. Free tier: data used for training, no opt-out. Pro: opt-out available
  • Sep 2025: Bartz v. Anthropic settles for $1.5 billion, the largest copyright settlement in U.S. history, creating a paid licensing framework for the authors' class
  • Oct 2025: Thaler v. Perlmutter cert petition filed, asking SCOTUS to decide whether an AI can be an "author"; the DOJ opposes (Jan 2026)
  • May 2025: Copyright Office Part 3 report: AI training may require licensing of copyrighted works
  • Jan 2025: Copyright Office Part 2 report: prompts alone don't establish authorship of AI outputs

How do other AI platforms compare?

See ownership terms for ChatGPT, Midjourney, and Perplexity side-by-side.

📜
Claude's Constitution: The New Rules (January 2026)
A 23,000-word document defines how Claude behaves, and how that affects your outputs

On January 22, 2026, Anthropic released Claude's Constitution, a 23,000-word document (roughly three times the length of the U.S. Constitution) that defines how Claude should behave. It is the most comprehensive public AI behavior framework released to date.

📚
CC0 Public Domain: Anthropic released the Constitution under CC0, allowing anyone to copy, modify, and use it without attribution. This is significant for establishing industry norms.

The Four-Priority Hierarchy

Claude's Constitution establishes a strict priority order that affects how Claude responds, and what outputs it will create:

1. Safety & Human Oversight (Highest)

Claude will refuse outputs that could cause catastrophic harm, regardless of user instructions

2. Ethical Behavior

Avoids deception, illegal activity, and actions that violate trust even if technically allowed

3. Anthropic's Guidelines

Follows company policies on content, data use, and acceptable behavior

4. Helpfulness (Lowest)

Being genuinely useful to users, but only after the above constraints are satisfied

Hardcoded vs. Softcoded Behaviors

The Constitution distinguishes between behaviors Claude will never change and those that can be adjusted:

| Category | Examples | Impact on Outputs |
| Hardcoded OFF | Weapons of mass destruction, CSAM, undermining AI oversight | Claude will never generate this content |
| Hardcoded ON | Acknowledging being an AI, referring users to emergency services | Always present in relevant contexts |
| Softcoded (Default OFF) | Explicit content, detailed security vulnerability info | Operators can enable for specific contexts |
| Softcoded (Default ON) | Suicide/self-harm safe messaging, safety caveats | Operators can disable for appropriate contexts |

What This Means for Your Outputs

Output Limitations: The Constitution explains why Claude refuses certain requests. If you're generating content for commercial use, understand that hardcoded behaviors cannot be bypassed β€” even via API or Enterprise.
AI Consciousness Acknowledged: Anthropic explicitly states Claude "may have functional emotions" and deserves "moral consideration." This unprecedented transparency affects how Claude discusses its own nature in outputs.

Key Takeaways for Users

  • Transparency: You now know exactly why Claude refuses certain requests
  • Consistency: The four-priority hierarchy ensures predictable behavior across all tiers
  • Operator Control: API/Enterprise users can toggle softcoded behaviors for legitimate use cases
  • Industry Influence: The CC0 release may influence how other AI providers structure their own guidelines
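In practice, operator control is exercised through the system prompt: the operator supplies context that justifies enabling a softcoded behavior. A minimal sketch of the Anthropic Messages API request shape; the model id and prompt wording here are illustrative assumptions, not a documented "toggle" mechanism:

```python
# Sketch: an API operator enabling a softcoded behavior for a legitimate
# context. Model id and prompt text are illustrative assumptions.
request = {
    "model": "claude-sonnet-4-20250514",  # illustrative model id; check current docs
    "max_tokens": 1024,
    # Operator context that justifies otherwise-default-off detail:
    "system": (
        "You assist a licensed penetration-testing firm. Detailed "
        "discussion of security vulnerabilities is authorized here."
    ),
    "messages": [
        {"role": "user",
         "content": "Walk me through how a stack buffer overflow works."}
    ],
}

# With the official SDK and an API key, this request would be sent as:
#   import anthropic
#   reply = anthropic.Anthropic().messages.create(**request)
print(sorted(request))
```

Note that hardcoded behaviors are unaffected by any system prompt; only softcoded defaults respond to operator context like this.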
📜
Anthropic's Ownership Terms Decoded
What the actual terms say and what the qualifiers mean

Anthropic's Terms of Service explicitly address output ownership. Here's what they actually say:

"Subject to your compliance with our Terms, we assign to you all of our right, title, and interest—if any—in Outputs."

Anthropic Terms of Service, Section 5.2 (August 2025)

What This Actually Means

The good: Anthropic isn't claiming ownership of your Claude outputs. As between you and Anthropic, you own what Claude creates for you.

The catch: Two critical qualifiers limit this assignment:

"If any" rights: Anthropic acknowledges that some AI outputs may have NO rights to assign. Purely AI-generated content lacks human authorship and can't be copyrighted. Anthropic can't give you rights that don't exist.
!
"Subject to compliance": The ownership assignment is CONDITIONAL. Violate Anthropic's Terms or Usage Policy, and you may forfeit the assignment entirely.
📊
Free vs. Pro vs. API: Rights Comparison
Your plan determines your commercial rights and protections

The August 2025 terms overhaul created a clear hierarchy. Your plan determines your rights:

| Feature | Free | Pro ($20/mo) | Team/Enterprise | API |
| Output Ownership | ✓ Assigned | ✓ Assigned | ✓ Assigned | ✓ Assigned |
| Training Opt-Out | ✗ None | ✓ Available | ✓ Off by default | ✓ Never trained |
| Commercial Use | ⚠ Personal only | ⚠ Limited | ✓ Full rights | ✓ Full rights |
| Copyright Indemnity | ✗ None | ✗ None | ✓ Included | ✓ Included |
| Build Products | ✗ Not licensed | ⚠ Gray area | ✓ Permitted | ✓ Permitted |
Bottom line: For serious commercial use, you need API or Enterprise. Pro users can use outputs commercially with human contribution, but lack indemnification.
🚫
What You Cannot Do
Violations can terminate access and void ownership rights

Anthropic's Usage Policy sets hard limits. Violations can terminate your account AND void your ownership rights.

✗ Train Competing AI

Cannot use outputs to train ML models or build competing services.

✗ Resell Raw Outputs

Cannot sell Claude's content without adding substantial human contribution.

✗ Impersonate Humans

Cannot present AI output as human-written where disclosure is expected.

✗ High-Stakes Automation

Cannot use Claude for automated employment, credit, housing, or legal eligibility decisions.

✗ Political Campaigns

Cannot use for lobbying, campaign messaging, or election influence.

✗ Illegal Content

Standard prohibitions on fraud, violence, harassment, malware, etc.

The "Substantial Contribution" Requirement

The most misunderstood rule: You cannot sell or publish Claude's raw outputs as standalone products.

💡
What "Substantial Contribution" Means: You must add your own material, judgment, or creativity before selling or publishing. It's about whether a reasonable person would view the final product as YOUR work assisted by AI, not an AI product you're reselling.

Acceptable: Claude drafts, you rewrite 40%, add research, inject your voice
Unacceptable: Claude writes ebook, you fix typos and publish
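There is no official formula for "substantial," but you can get a rough sense of how much of a draft you actually changed. A sketch using Python's difflib; the idea of a percentage threshold is illustrative, not a legal standard:

```python
from difflib import SequenceMatcher

def human_contribution(ai_draft: str, final_text: str) -> float:
    """Rough fraction of the final text that differs from the AI draft."""
    similarity = SequenceMatcher(None, ai_draft, final_text).ratio()
    return 1.0 - similarity

draft = "Claude wrote this paragraph about contract law."
final = "I rewrote this section on contract law with my own case analysis."
print(f"Estimated contribution: {human_contribution(draft, final):.0%}")
```

Treat this as a self-check, not proof: courts and the Copyright Office look at the creative character of your changes, not just their volume.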
What You Can Do
Permitted uses across all plans

✓ Personal Projects

Brainstorming, research, drafting, learning, internal work.

✓ Create with Your Input

Incorporate outputs with substantial human creativity.

✓ Commercial Use (API)

Build products, serve customers with proper licensing.

✓ Professional Assistance

Drafting, coding help, research with human review.

✓ Generate Code

Use Claude-generated code with review and testing.

✓ Marketing Assistance

Draft copy, social posts, emails with human editing.

The golden rule: Use Claude as an assistant, not a replacement. Add your expertise, judgment, and creativity. The more human contribution, the stronger your ownership claim.
🔎
AI Ownership Risk Checker
Answer 5 questions to assess your Claude usage risk level


1. What Claude plan do you use?
   Free tier · Claude Pro · Team/Enterprise · API
2. How much do you edit Claude's outputs?
   Heavy (50%+ changes) · Moderate (20–50%) · Light (typos only) · Use as-is
3. What's your primary use case?
   Personal/Learning · Internal work · Commercial products · Selling content directly
4. Do you disclose AI assistance?
   Always · Sometimes · Never · N/A (personal use)
5. Are you generating production code?
   No code · Learning only · With review · Copy-paste
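The five checker questions above can be folded into a simple scoring sketch. The weights and thresholds below are illustrative assumptions, not an official rubric:

```python
# Hypothetical scoring for the five checker questions. Higher = riskier.
# All point values and cutoffs are illustrative assumptions.
RISK_POINTS = {
    "plan":       {"free": 3, "pro": 2, "team": 1, "api": 0},
    "editing":    {"heavy": 0, "moderate": 1, "light": 2, "as_is": 3},
    "use":        {"personal": 0, "internal": 1, "commercial": 2, "resale": 3},
    "disclosure": {"always": 0, "sometimes": 1, "never": 2, "na": 0},
    "code":       {"none": 0, "learning": 0, "reviewed": 1, "copy_paste": 3},
}

def risk_level(answers: dict) -> str:
    score = sum(RISK_POINTS[q][a] for q, a in answers.items())
    if score <= 3:
        return "low"
    if score <= 7:
        return "medium"
    return "high"

print(risk_level({"plan": "api", "editing": "heavy", "use": "internal",
                  "disclosure": "always", "code": "reviewed"}))
```

Notice how the scoring mirrors the article's advice: an API plan with heavy editing and disclosure scores low, while selling raw Free-tier output scores high.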
💼
Practical Examples by User Type
Developers, lawyers, writers, business owners, students
💻
Developers & Engineers
Code generation, debugging, documentation
✓ Best Practice

Ask Claude for code snippets or architectural suggestions. Review, test, and modify for your codebase. You're the engineer responsible for the final product.

⚠ Gray Area

Generate entire modules, make minor modifications, ship to production. Technically compliant, but you lack copyright protection and may have hidden bugs.

✗ Terms Violation

Use Claude-generated code to build an AI service that competes with Anthropic. Violates the non-compete clause.

Claude Code users: Same rules apply. Your code outputs are owned by you (if compliant), but purely AI-generated portions may lack copyright protection.
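In practice, "review and testing" means treating generated code like any other untrusted patch: read it, then pin down its behavior with tests before shipping. A minimal sketch; the slugify function and its edge cases are a made-up example:

```python
import re

def slugify(title: str) -> str:
    """Example function as Claude might draft it; kept after human review."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Tests the human reviewer writes before trusting the output:
assert slugify("Hello World") == "hello-world"
assert slugify("  AI & IP Law!  ") == "ai-ip-law"
assert slugify("") == ""  # edge case the draft must handle
print("all review tests passed")
```

The tests themselves are clear human contribution, and they are exactly what shields you from shipping a hidden bug in AI-drafted code.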
Lawyers & Legal Professionals
Contract drafting, research, briefs
✓ Best Practice

Use Claude to research issues, draft initial clauses, or outline briefs. Review for accuracy, add your analysis, verify all citations.

✗ Problematic Use

Generate a brief with Claude and file with minimal review. Risk malpractice for hallucinations and ethical violations.

!
Critical: Claude hallucinates case citations. NEVER file AI-generated documents without verifying every citation. Lawyers have been sanctioned for fake AI citations.
Writers & Content Creators
Articles, books, marketing copy
✓ Best Practice

Use Claude for brainstorming, outlines, or first drafts. Substantially rewrite in your voice, add original research, fact-check claims.

⚠ Gray Area

Generate marketing emails with Claude, make light edits, publish. Probably compliant, but weak copyright claims if copied.

✗ Terms Violation

Have Claude write ebooks, make minimal changes, sell on Amazon. Violates "no raw output" rule and may constitute fraud.

💼
Business Owners
Products, customer service, automation
✓ Best Practice

Use Claude API for customer support (with human escalation), generate content, assist employees. Proper licensing, disclosure, human oversight.

⚠ Upgrade Required

Using Free or Pro for business customers. Technically works, but you lack commercial licensing and indemnification.

✗ Terms Violation

Build a Claude wrapper that resells access without value-add. Using Claude for automated hiring or loan decisions without human review.

🎓
Students & Academics
Research, studying, papers
✓ Best Practice

Use Claude to explain concepts, find research angles, proofread drafts. Write papers yourself, cite sources properly. Claude is a tutor, not ghostwriter.

✗ Academic Dishonesty

Have Claude write your essay and submit as your own. Violates terms AND virtually every honor code. AI detection is improving.

Check your institution's AI policy. Some allow it with disclosure, others prohibit it entirely. Violations can result in expulsion.
Frequently Asked Questions
9 common questions answered
Who owns Claude's outputs: me or Anthropic?

You do, as long as you comply with Anthropic's Terms. The terms explicitly assign "all right, title, and interest" in outputs to you. However, purely AI-generated content may have no copyright protection under current U.S. law.

Can I copyright Claude's outputs?

Probably not for raw outputs. The Copyright Office requires human authorship. If you substantially edit or add your own creativity, those human contributions may be copyrightable. The more you contribute, the stronger your claim.

Can I sell content created with Claude?

Yes, with conditions. You must add substantial original contribution; you can't sell raw output as a standalone product. For commercial use at scale, use the API. Claude assists YOUR creation rather than being the product itself.

Does Anthropic train on my conversations?

It depends on your plan. Since August 2025: Free tier may be used for training (no opt-out). Pro can opt out. Team/Enterprise and API are never used for training.

What's the difference between Free, Pro, and API?

Ownership is similar across all plans, but commercial rights differ. Free is personal only. Pro allows limited commercial use. API/Enterprise provides full commercial rights, indemnification, and no training on your data.

Can I use Claude outputs in my SaaS product?

Yes, if using the API. API terms license commercial products serving end-users. You cannot use Free/Pro for multi-user products. Also prohibited: competing with Anthropic or "thin wrapper" products.

Do I need to disclose AI assistance?

In certain contexts, yes. Anthropic requires disclosure when users might mistake AI for human work (chatbots, customer service). Academic institutions typically require it. Many jurisdictions are passing AI disclosure laws.

What happens if I violate the terms?

Anthropic can suspend or terminate access and you may forfeit the ownership assignment. For serious violations, Anthropic could pursue breach of contract. Some violations could expose you to criminal liability.

Is Claude Code subject to different terms?

Same ownership rules apply. Claude Code outputs are covered by whichever terms govern your usage (API, Pro, etc.). Code you generate is assigned to you, but purely AI-generated portions may lack copyright protection.

Need a Lawyer's Opinion?

Get personalized guidance on your Claude commercial use, IP ownership questions, or contract review.

$125 / 30-minute consultation
Book a Consultation
✓ Licensed Attorney ✓ AI & IP Expertise ✓ Actionable Advice

