2026 Risk Assessment · Attorney-Built
How risky is your AI stack?
If you're commercializing AI outputs — selling images, code, copy, audio, or video your tools generated — the contracts upstream and downstream of you have specific holes. This 90-second calculator scores them.
ToS exposure
Ownership clarity
Customer disclosure
Cross-jurisdiction
$ exposure estimate
How the score is built
The composite score (0-100) is a weighted average of four sub-scores. Higher numbers mean more risk. The weights and thresholds reflect the patterns I see most often when actually reading AI provider contracts and downstream agency / SaaS deals.
1. ToS Violation Exposure (30% weight)
Built from the average risk of the tools you selected and whether you've actually read the contracts. Image and audio generators (Midjourney, Suno) carry higher base risk because their terms include non-obvious gotchas: free-tier CC-BY-NC defaults, training opt-ins, ownership splits between paid tiers, and revenue-threshold "Enterprise" requirements (Stable Diffusion 3.x triggers indemnity changes above $1M in revenue).
2. Output Ownership Clarity (25% weight)
Where contractual ownership is clean (most LLM providers assign output to the user) the score is low. Where ownership is conditional or split — image, audio, music, and some agent-builder tools — the score climbs. Note: contractual ownership is one question; copyrightability is a separate one. Purely AI-generated work generally cannot be copyrighted in the U.S. without substantial human authorship per the Copyright Office's 2025 reports and the Thaler line of cases.
3. Customer Disclosure Adequacy (25% weight)
Whether your end customers know AI was involved. The FTC's "Operation AI Comply" actions through 2024-2025 made clear that undisclosed AI deliverables and "AI-washed" claims are enforcement priorities, especially for legal, medical, financial, and educational use cases. ToS-only disclosure is rarely enough for regulated-industry work.
4. Cross-Jurisdiction Risk (20% weight)
EU customers trigger the EU AI Act (high-risk system classification depending on use case, transparency obligations, GPAI rules) plus GDPR processing rules for prompt logs and any personal data the model touches. UK customers add UK GDPR plus the ICO's evolving AI guidance. Multi-region operation amplifies friction even when individual jurisdictions are manageable.
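The weighting above can be sketched in a few lines. This is an illustrative reconstruction of the math, not the calculator's actual source; the key names are hypothetical, and each sub-score is assumed to be a 0-100 value where higher means more risk.

```javascript
// Composite score = weighted average of four 0-100 sub-scores.
// Weights mirror the percentages described above; key names are illustrative.
const WEIGHTS = {
  tosExposure: 0.30,        // 1. ToS violation exposure
  ownershipClarity: 0.25,   // 2. output ownership clarity
  customerDisclosure: 0.25, // 3. customer disclosure adequacy
  crossJurisdiction: 0.20,  // 4. cross-jurisdiction risk
};

function compositeScore(subScores) {
  let total = 0;
  for (const [key, weight] of Object.entries(WEIGHTS)) {
    total += (subScores[key] ?? 0) * weight; // missing sub-score counts as 0
  }
  return Math.round(total);
}

// Example: 85*0.30 + 60*0.25 + 50*0.25 + 40*0.20 = 61
const score = compositeScore({
  tosExposure: 85,
  ownershipClarity: 60,
  customerDisclosure: 50,
  crossJurisdiction: 40,
}); // 61
```

Because the weights sum to 1.0, the composite stays on the same 0-100 scale as the sub-scores.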
The cases I weigh when scoring
Bartz v. Anthropic (2025)
$1.5B settlement. Reset what training-data exposure can look like for downstream commercial users of a model trained on disputed sources.
NYT v. OpenAI (ongoing)
Drives indemnification analysis. Whether you're covered for a copyright claim depends on which tier of which provider you're using.
Disney/Universal v. Midjourney (2025)
Image generators face direct infringement claims tied to recognizable trained content. Affects ownership-clarity score for image tools.
Concord Music Group v. Anthropic (2023-25)
Music and lyric outputs — a specific risk vector for content businesses using LLMs to draft adjacent material.
UMG, Sony & Warner v. Suno / Udio
Why Suno carries the highest base risk in the calculator. Audio generation is the most contested space right now.
Thomson Reuters v. Ross Intelligence
Affirmed that fair-use defenses for AI training are fact-specific and narrow. Boosts ToS-violation weight when training opt-ins are at issue.
U.S. Copyright Office Reports (2023-2025)
Three-part series confirming the human-authorship requirement. Drives the ownership-clarity sub-score even when contracts assign output cleanly.
EU AI Act & GPAI Code of Practice
Cross-jurisdiction multiplier. High-risk system classification can apply even to a US-headquartered SaaS with EU customers.
Why I built this calculator
I'm Sergei Tokmakov, a California attorney (CA Bar #279869, licensed since 2011). The questions I get most often from founders and agencies in 2026 are the same: am I covered if a customer asks where the work came from? Does my Midjourney commercial license actually survive when I resell? What happens when an upstream provider quietly changes its training defaults?
This calculator is the assessment I do at the front of every AI Output Rights Audit. I run the same factor weights, look at the same case docket, and end up at the same tier recommendation. Free version — no email gate to see the score — because the people who need the audit usually figure it out themselves once they see their own breakdown.
Sergei Tokmakov · California State Bar
#279869 (licensed 2011). Sole attorney behind Terms.Law. AI, contracts, demand letters. I write these tools myself, read every term I cite, and update the case-law list when the docket moves. Email: owner@terms.law.
Frequently asked questions
Is the dollar exposure number something I'd actually owe?
No. It's a planning number, not a litigation number. The multiplier reflects what I've seen in negotiated settlements, contract claw-backs, and downstream-license breaches across the matters I've handled. Real exposure depends on which provider, which clause, which insurance, and how aggressive the counterparty is. The figure is here to right-size the conversation, not predict an outcome.
I scored 80. Do I have to start with the $1,500 audit?
No. The audit is what fits a high score, but if budget is the constraint, the $349 single-tool review is a real option — pick the riskiest tool in your stack first. Email me with your score and stack, and I'll tell you honestly which tier fits without trying to upsell.
Why isn't my data sent anywhere?
The score is computed entirely in your browser. No revenue figure, no tool list, no email leaves the page unless you submit the email-capture form. If you do submit it, the email plus your score and inputs come to owner@terms.law for me to review. That's the entire data flow.
Does this work for solo creators or only agencies?
Both. Solo creators selling AI-assisted images on Etsy, freelance copywriters using ChatGPT for client work, and 30-person creative agencies all get the same scoring. The dollar exposure scales with revenue, so a solo creator with $40k of AI-tied revenue won't see a number that distorts the recommendation.
What's the difference between this and just reading the ToS myself?
Reading one ToS is a starting point. The risk lives at the boundaries: the upstream provider's terms versus the downstream license you grant your customer, the insurance you carry, the indemnification gap between provider tiers, and the cross-jurisdiction overlay. The audit aligns the whole stack — the calculator is the rough cut.
Is this attorney-client work?
No. This calculator is an informational tool. It does not create an attorney-client relationship and is not legal advice for your specific situation. If you want privileged advice, the next step is the audit or a paid consultation — I open an engagement letter at that point.
What does the audit actually look like as a deliverable?
A written memo, typically 12-18 pages, with: a ToS map of every primary tool, drafted license-clause language for what you ship downstream, a risk register flagging active litigation that could change your terms, a tier-by-tier indemnification analysis, and two strategy calls. Turnaround is 7-10 business days.
What if I add a new AI tool next quarter?
Re-run the calculator. If the new tool tips you into a higher band, that's the trigger to either upgrade scope (audit) or do a $349 single-tool review on the new addition. Quarterly check-ins are a $240/hr engagement — usually one or two hours per quarter for a stable stack.
Disclaimer. This calculator is an informational tool authored by Sergei Tokmakov, a California-licensed attorney (CA Bar #279869). It is not legal advice and does not create an attorney-client relationship. The composite score, sub-scores, and dollar exposure figure are heuristic estimates based on a fixed set of factors. They do not substitute for review of your actual contracts, insurance, and use case. Rules vary by jurisdiction and change over time. For advice on a specific matter, email owner@terms.law.