Tap each clause that appears in your AI vendor agreement (OpenAI, Anthropic, Google, AWS Bedrock, Azure OpenAI, Mistral, Cohere, etc.). Output: red / yellow / green flags across the 10 categories that decide whether the contract is enterprise-safe.
Vendor terms change frequently. This is a triage checklist, not a live database of current vendor terms. Last reviewed: 2026-05-10.
If a clause is in your contract or in the vendor's standard terms, tap it. The output flags the categories that need negotiation, addendum, or escalation.
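The tap-to-flag behavior described above can be sketched as a worst-flag-wins rollup: each tapped clause maps to one of the 10 categories with a severity, and each category reports its worst flag. This is a minimal illustrative sketch; the clause names, category labels, and mapping are hypothetical placeholders, not the scanner's actual taxonomy.

```python
# Hedged sketch of the triage rollup. Clause names and categories below
# are illustrative placeholders, not the scanner's real clause list.

SEVERITY = {"green": 0, "yellow": 1, "red": 2}

# Hypothetical mapping: tappable clause -> (category, flag)
CLAUSES = {
    "training_on_customer_data": ("Data Use", "red"),
    "broad_unilateral_changes": ("Term Changes", "yellow"),
    "zero_data_retention_option": ("Data Retention", "green"),
}

def triage(tapped):
    """Return the worst flag per category across the tapped clauses."""
    flags = {}
    for clause in tapped:
        category, flag = CLAUSES[clause]
        # Keep whichever flag is more severe for this category.
        if SEVERITY[flag] > SEVERITY[flags.get(category, "green")]:
            flags[category] = flag
        else:
            flags.setdefault(category, flag)
    return flags
```

Under this sketch, tapping a red clause and a yellow clause in different categories yields one red flag and one yellow flag, matching the per-category output the scanner describes.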
This is a triage tool. The actual contract review ($1,500 flat fee) reads the agreement, identifies the negotiable terms, drafts redline language, and prepares the negotiation memo.
The scanner is built around the 10 categories that most often determine whether an AI vendor contract is acceptable for downstream enterprise customers and regulated workloads. Most major AI vendors (OpenAI, Anthropic, Google, AWS Bedrock, Azure OpenAI Service, Mistral, Cohere) have evolved their enterprise tiers to address most of these, but the standard / self-serve tiers often retain default terms that fail enterprise scrutiny.
If multiple red flags fire, the contract should be (a) negotiated, (b) replaced with a different tier or vendor, or (c) addressed through an AI Use Addendum that flows corrected terms downstream to your customers. The $1,500 AI Vendor Contract Review handles option (a). The $2,000 AI Use Addendum + DPA Update handles option (c).
If your vendor is on a standard / consumer / self-serve tier, the enterprise tier is usually the negotiation target, rather than a redline of the standard terms.
Vendor reference table last reviewed 2026-05-10. AI vendor terms change frequently; confirm against the current vendor agreement before relying on this summary.