| Platform | Ownership & License | Commercial Use | Key Restrictions | Attribution | Indemnity / Liability |
|---|---|---|---|---|---|
| OpenAI | Inputs/outputs are yours; OpenAI assigns its interest “where allowed by law.” ([OpenAI Terms]) | Yes across ChatGPT, DALL·E, API, Team, Enterprise; business data is exempt from training by default. ([OpenAI Privacy]) | Safety/AUP compliance; no use of outputs or feedback to build competing models; rate-limit compliance. ([OpenAI Services Agreement]) | Not mandatory; transparency encouraged for heavily AI-assisted publications. ([OpenAI Guidelines]) | “As-is” service; no IP indemnity. Users indemnify OpenAI for misuse. |
| Anthropic (Claude) | You retain rights in inputs and outputs; Anthropic assigns its interest. ([Anthropic Terms]) | Allowed for commercial deployments; the consumer tier may be limited to personal use. | Acceptable use plus a “no competing model” clause. ([Anthropic Bedrock TOS]) | Not required. | Enterprise customers receive IP indemnity for authorized use. ([Reddit/Anthropic thread]) |
| Google Gemini | Google does not claim ownership of outputs; it retains an operational license only. ([Gemini Terms]) | Yes for Workspace/enterprise; consumer terms emphasize evaluation use. | Standard prohibited uses; warnings against presenting AI output deceptively. ([Gemini API AUP]) | No requirement, but policies discourage deception. | SaaS-style disclaimers; no specific indemnity for outputs. |
| Midjourney | “You own” outputs to the extent the law allows; companies above revenue thresholds must use higher-tier plans. ([Midjourney TOS]) | Commercial use allowed on paid tiers; stealth/private mode costs extra. | Content moderation; images appear in the public gallery by default; no scraping to train rival models. | Not required but often practiced. | No indemnity; users remain liable for infringement. |
| Stability AI / SD | Model licenses give you control of outputs, subject to law. ([Stability License]) | Open/commercial use; the community license requires an enterprise upgrade above revenue thresholds. | OpenRAIL terms ban illegal content, biometric abuse, and competing-model training. ([Stable Diffusion License]) | Not required. | Broad disclaimers; no indemnity. The Getty UK case highlighted trademark exposure. ([AP News]) |
| Microsoft Copilot / Azure OpenAI | You own inputs/outputs; Microsoft retains only a minimal operational license. ([Aurum Law]) | Yes for text/code, with governance around open-source snippets. | SaaS AUP plus duplication filters against long verbatim code matches. | No requirement. | The Copilot Copyright Commitment defends eligible business users if outputs trigger copyright claims. |
- Most major terms bar using outputs or usage data to train rival models; owning the text does not make it free training data.
- Business and API traffic is excluded from training by default; consumer/free usage may feed model improvement unless you opt out.
- There is no blanket legal duty to disclose AI use, but platforms warn against deceptive nondisclosure, and regulated sectors increasingly require AI labeling.
- Only select vendors (Microsoft, Anthropic enterprise tiers) backstop IP risk; the others disclaim liability entirely.
- Getty v. Stability shows that platforms will fight training-data suits, but you remain liable for publishing derivative outputs.
- U.S. Copyright Office: purely machine-generated output is not copyrightable, as the Thaler decisions affirm; disclose AI-generated portions when registering. ([USCO AI Guidance])
- Copyright can protect human contributions (selection, arrangement, edits); raw AI-generated portions remain unprotected.
- The EU requires human intellectual creation; the UK has a unique “computer-generated works” provision that is under debate.
- Contractual control is not legal exclusivity; add human creativity to secure enforceable rights.