Live case study

AI implementation, by an attorney who runs it on his own site

Every AI tool below is live on Terms.Law. I built them, I supervise them, and they convert visitors into paying clients. I'll build similar systems for your firm.

Sergei Tokmakov, Esq. · CA State Bar #279869 · BU Law J.D. · Daily AI user since GPT-3 private beta

Why this page exists

Most AI consultants advising law firms do not run client-facing AI systems on their own site. They sell decks, run trainings, and write policies that gather dust. I do all of that, and I also operate the production AI stack you see here. The same Anthropic Messages API tool-use pattern I would build for your firm is already shipping eight live tools that handle traffic, capture leads, and route prospects into my paid services. I am not theorizing about what an Opus 4.7 system prompt does for client intake; I run one downstairs. I am my own first client.

The eight live AI tools on Terms.Law

Use them. Free, no email gate, no demo restriction. Each block below names the legal-practice problem, links to the live tool, breaks down the AI architecture, shows where I stay in the loop, lists risk controls, and tells you which package replicates the pattern for your firm.

Demo 1 · Single-shot Opus

Free-text case-strength scoring

"Prospects describe their situation in 200 different ways. A static intake form cannot classify them."

Try it live · Plaintiff intake · Replicates to $575 DL

How the AI works

The frontend POSTs free-text to my Cloudflare Worker at /tool-proxy, which calls the Anthropic Messages API with claude-opus-4-7 and a system prompt that tells Claude to reason like a CA plaintiff attorney. The prompt forces structured JSON: legalTheory, statuteCitation, strengthScore (1-100), and a sampleParagraph the prospect could drop into a demand letter. About five cents per call.
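The JSON contract the prompt forces can be checked server-side before anything renders. A minimal sketch using the field names above; `validateIntakeResult` is an illustrative helper, not the production Worker code:

```javascript
// Validate the structured JSON the system prompt forces out of Claude.
// Field names (legalTheory, statuteCitation, strengthScore, sampleParagraph)
// come from the tool description; everything else here is illustrative.
function validateIntakeResult(raw) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return { ok: false, error: "model returned non-JSON output" };
  }
  const { legalTheory, statuteCitation, strengthScore, sampleParagraph } = parsed;
  if (
    typeof legalTheory !== "string" ||
    typeof statuteCitation !== "string" ||
    typeof sampleParagraph !== "string"
  ) {
    return { ok: false, error: "missing required string field" };
  }
  if (!Number.isInteger(strengthScore) || strengthScore < 1 || strengthScore > 100) {
    return { ok: false, error: "strengthScore must be an integer 1-100" };
  }
  return { ok: true, result: parsed };
}
```

Rejecting malformed output at the Worker keeps a parse failure from ever reaching the prospect; the handler can retry or fall back to a contact CTA.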

Where I stay in the loop

Tool output is a starting point, never a delivered work product. I review every demand letter that goes out under my name and draft accepted-proposal letters from scratch.

Risk controls

Thirty requests per sixty seconds per IP via Cloudflare KV. URL sanitizer strips any link not in my content graph. No PII persisted; only an anonymous tool-completion event hits GA4.
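The KV-backed limit above can be sketched as a fixed-window counter. The `get`/`put` interface mirrors Workers KV; the function and the in-memory stand-in are illustrative, not the production code:

```javascript
// Fixed-window rate limiter: thirty requests per sixty seconds per IP,
// keyed in a KV-like store. Illustrative sketch, not production code.
async function checkRateLimit(store, ip, { limit = 30, windowSecs = 60, now = Date.now() } = {}) {
  const windowId = Math.floor(now / 1000 / windowSecs);
  const key = `rl:${ip}:${windowId}`;
  const count = parseInt((await store.get(key)) ?? "0", 10);
  if (count >= limit) return false; // over the cap: reject
  await store.put(key, String(count + 1), { expirationTtl: windowSecs * 2 });
  return true; // under the cap: allow
}

// Minimal in-memory stand-in for Workers KV, for local testing.
function memoryStore() {
  const m = new Map();
  return {
    get: async (k) => m.get(k) ?? null,
    put: async (k, v) => { m.set(k, v); },
  };
}
```

Workers KV is eventually consistent, so a counter like this is a soft limit; a hard per-IP guarantee would need something like Durable Objects.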

Replication for your firm: $2,500 audit plus $3,500-$5,000 build delivers a custom tool tuned to your practice area, case-evaluation criteria, and engagement-letter language.

Want this for your firm? Email me with your AI workflow.

Demo 2 · Single-shot Opus

California causes of action identifier

"A client describes a dispute in plain English. Which CA causes of action apply, with what elements, and what is the leading case?"

Try it live · Pre-litigation · Replicates to $1,200

How the AI works

Same Worker proxy as Demo 1, different system prompt. Claude enumerates every plausible CA civil cause of action, lists the elements as a checklist, attaches a leading decision per cause, and flags anything triggering a notice-of-claim or short SOL.

Where I stay in the loop

Anti-SLAPP territory, fraud, and punitive damages get a manual reread before I quote a fee. The tool enumerates; I pick which two causes belong in the final complaint.

Risk controls

Same rate limit and URL sanitizer as Demo 1. The prompt also tells Claude to avoid decisions newer than its training cutoff and flag any uncertain citation.

Replication for your firm: practice-area-specific build for employment, tort, IP, consumer, or commercial. Most fit inside the $3,500-$5,000 implementation tier.

Want this for your firm? Email me with your AI workflow.

Demo 3 · Single-shot Opus

AI compliance audit for law firms

"My firm is using AI. I have no written policy. Am I already exposed under CRPC 1.1, 1.6, or 5.3?"

Try it live · AI ethics · Replicates to $2,500

How the AI works

The user describes the firm's current AI use. Claude maps that against CRPC 1.1, 1.6, 1.4, 1.5, 5.3, and ABA Op. 512 and returns a gap analysis with priority ratings. Each rule pairs with a specific firm fact and a specific remediation step.

Where I stay in the loop

Tool flags risks. Only an attorney builds a defensible policy. Every paid audit is drafted by me against the firm's matter mix, billing practices, and engagement-letter language.

Risk controls

Rate limit, no persistence. Prompt tells the user the tool is informational and does not establish an attorney-client relationship.

Replication for your firm: firm-specific compliance program with quarterly audits, vendor matrix, redaction protocol, and engagement-letter addendum. $2,500 flat fee, two revision rounds.

Want this for your firm? Email me with your AI workflow.

Demo 4 · Single-shot Opus

Cease and desist response strategy

"A client received a C&D and is panicking. We need a quick read on whether to ignore, reply, or escalate."

Try it live · Defense intake · Replicates to $575 DL

How the AI works

Claude identifies the claim type (defamation, trademark, copyright, breach, anti-SLAPP-triggering speech, harassment), assesses the credibility of the threat, lists the strongest defenses, and drafts a neutral sample response. The prompt blocks inflammatory drafting so a copy-paste worst case is non-escalatory, not Twitter-bait.

Where I stay in the loop

Anti-SLAPP territory is where amateur replies cause real damage, so I require fact-pattern review before any letter ships on my letterhead.

Risk controls

Rate limit, no persistence. Prompt blocks language that would constitute publication of a defamatory statement, and flags matters that smell like federal litigation.

Replication for your firm: defamation defense, IP, or commercial litigation can get a custom intake assistant tuned to the claim types the firm handles most. $3,500-$5,000 build.

Want this for your firm? Email me with your AI workflow.

Demo 5 · Single-shot Opus

Contract clause risk scanner

"A founder pastes a clause from a vendor agreement. I want a fast risk read and a redline suggestion before they sign."

Try it live · Transactional · Replicates to $349 / $599+

How the AI works

Claude returns structured analysis: clause category (indemnity, limitation of liability, arbitration, IP assignment, non-compete, exclusivity, audit, change-of-control), risk score, plain-English read, worst case, and a suggested redline. Short bounded input makes this the cheapest tool to run and the highest-converting.

Where I stay in the loop

The redline is a draft. I rewrite every accepted contract-review engagement against the full agreement, deal context, and client leverage.

Risk controls

Rate limit plus a prompt rule that the tool is not a substitute for review of the full agreement. Any clause triggering a fiduciary, M&A, or securities-law flag returns a stop-and-hire-counsel message instead of a redline.
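The stop-and-hire-counsel gate can live in the Worker as a post-processing step on the model's structured output. The flag names and the `gateClauseAnalysis` helper are illustrative assumptions, not the production code:

```javascript
// If the model's structured analysis carries a fiduciary / M&A /
// securities flag, return an escalation message instead of the redline.
const ESCALATION_FLAGS = new Set(["fiduciary", "mna", "securities"]);

function gateClauseAnalysis(analysis) {
  const flagged = (analysis.flags ?? []).some((f) => ESCALATION_FLAGS.has(f));
  if (flagged) {
    return {
      kind: "escalate",
      message:
        "This clause raises issues that call for full-agreement review by counsel. No redline is provided.",
    };
  }
  return { kind: "redline", redline: analysis.suggestedRedline };
}
```

Enforcing the rule in code rather than only in the prompt means a jailbroken or drifting model still cannot ship a redline past the gate.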

Replication for your firm: transactional or contract-review practice can deploy a branded clause scanner for your deal types. $3,500-$5,000 implementation tier.

Want this for your firm? Email me with your AI workflow.

Demo 6 · Single-shot Opus

Pro se filing roadmap

"A limited-scope client wants to file in California Superior Court themselves. What forms, what order, what court?"

Try it live · Limited-scope · Replicates to $1,250 pro se

How the AI works

The user describes the case (parties, claim, dollar amount, county). Claude returns a step-by-step roadmap: which CA Superior Court forms to file, filing fees, proof-of-service requirement, deadlines, common dismissal traps, and links to Judicial Council forms on courts.ca.gov.

Where I stay in the loop

For the paid pro se filing service I prepare the actual documents; the user files them. The roadmap is informational and never drafts anything that would constitute UPL on the user's behalf.

Risk controls

Rate limit plus an explicit prompt rule against generating fact-specific legal arguments. Procedural guidance, not litigation strategy.

Replication for your firm: any limited-scope practice or pro-se-support line. Same pattern works for any state's civil court system. $3,500-$5,000 implementation tier.

Want this for your firm? Email me with your AI workflow.

Demo 7 · Multi-step tool use · Flagship

Multi-step research agent

"A prospect has a complex California legal question. I want the AI to actually research it across my site, not just guess."

Try it live · Tool-use loop · Replicates to $3,500–$5,000

How the AI works

The most compute-intensive tool on the site. The Worker runs a multi-step Anthropic Messages API loop of up to five iterations, with five server-side tools: search_content_graph, get_statute_text, recommend_tier, compute_calculator, flag_urgency. Claude picks which tools to call, the Worker executes them, and the loop continues until Claude can draft a structured memo: practice area, statutes, urgency banner, resource grid of Terms.Law pages, recommended tier, next steps. Twenty to forty cents per session.
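The loop above can be sketched with the model call and the tool registry injected so the control flow is visible. The message and content-block shapes follow the Anthropic Messages API tool-use format; the loop body itself is an illustrative sketch, not the production Worker:

```javascript
// Multi-step tool-use loop: call the model, execute any tool_use blocks
// server-side, feed back tool_result blocks, repeat up to maxIters times.
async function runAgentLoop(callModel, tools, userMessage, maxIters = 5) {
  const messages = [{ role: "user", content: userMessage }];
  for (let i = 0; i < maxIters; i++) {
    const response = await callModel(messages); // Anthropic Messages API call
    messages.push({ role: "assistant", content: response.content });
    if (response.stop_reason !== "tool_use") {
      return response; // Claude drafted the final memo
    }
    // Execute every tool_use block and return matching tool_result blocks.
    const results = [];
    for (const block of response.content) {
      if (block.type !== "tool_use") continue;
      const output = await tools[block.name](block.input);
      results.push({
        type: "tool_result",
        tool_use_id: block.id,
        content: JSON.stringify(output),
      });
    }
    messages.push({ role: "user", content: results });
  }
  return null; // iteration cap hit; caller shows the fallback panel
}
```

Injecting `callModel` also makes the loop testable with a stubbed model, which is how a five-tool agent gets exercised without burning API spend.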

Where I stay in the loop

Research, not advice. Anyone accepting a paid proposal goes through me. The recommended tier opens fee discussion; it is never the final quote.

Risk controls

Rate limit tightened to ten per sixty seconds per IP. URL sanitizer strips any URL not in the content graph, killing fabricated-citation risk. Graceful fallback panel with owner@terms.law CTA on loop failure.
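The sanitizer described above reduces to a pure function over the model's text and the live content graph. The regex and the `[link removed]` placeholder are illustrative choices, not the production implementation:

```javascript
// Strip any absolute URL in the model output that is not in the
// allowlisted content graph, so a fabricated link never reaches the user.
function sanitizeUrls(text, allowedUrls) {
  const allowed = new Set(allowedUrls);
  return text.replace(/https?:\/\/[^\s)\]"']+/g, (url) =>
    allowed.has(url) ? url : "[link removed]"
  );
}
```

Because the check is an exact-match against the graph, the failure mode is conservative: a slightly mangled real URL gets stripped rather than a fabricated one getting through.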

Replication for your firm: a custom knowledge agent backed by your firm's content, brief bank, or research database. $3,500-$5,000 implementation, scoped per integration.

Want this for your firm? Email me with your AI workflow.

Demo 8 · Persistent AI assistant

Unified AI Legal Analyst chatbox

"Most prospects bounce before they ever find the right service page. We need a sitewide assistant that converts."

Open any page, look bottom-right · Sitewide · Replicates at $5,000+

How the AI works

The AI Legal Analyst chatbox lives bottom-right on every Terms.Law page. Multi-tool architecture: search the content graph, recommend a demand letter / calculator / tool, suggest a service tier with matching PayPal accept-proposal link, summarize into a structured intake, grade the lead, persist memory across sessions. Opus 4.7 vs. legacy GPT-5.4 runs as an A/B. Soft-CTA rule: no service push in the first one or two turns.

Where I stay in the loop

Every captured email triggers an inbox notification plus a Telegram backup so I can reply personally within hours. The chatbox is never the closer; I am.

Risk controls

Thirty requests per sixty seconds per IP. Brand-locked voice ("AI Legal Analyst," never "AI Lawyer"). No first-message hard sell. Only email-and-summary pairs persist, never raw transcript bodies.
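The persistence rule above, made concrete: from a full session, only the email and a structured summary survive. `buildLeadRecord` and its field names are illustrative, not the production code:

```javascript
// Build the record that persists from a chatbox session. The raw
// transcript is intentionally never referenced, so it cannot leak
// into storage or notifications.
function buildLeadRecord(session) {
  if (!session.email) return null; // no opt-in, nothing persists
  return {
    email: session.email,
    summary: session.summary, // structured intake, not transcript
    grade: session.leadGrade,
    capturedAt: new Date().toISOString(),
  };
}
```

Keeping the allowlist of persisted fields in one constructor makes the "never raw transcript bodies" claim auditable in a single place.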

Replication for your firm: custom client-intake chatbox for any practice area, integrated with your CRM, engagement-letter automation, or calendaring. Builds start at $5,000 and scale with integration count.

Want this for your firm? Email me with your AI workflow.

Want the same on your firm's site?

Email me with your firm's current AI workflow and the one task you most want to automate. I write back inside two business days with a fixed-fee scope.

What I keep doing as the attorney

The tools are the assistive layer. They do not replace the bar-licensed parts of legal practice. Here is what stays with me.

Final review of every document

Every demand letter, draft lawsuit, contract, and engagement-letter addendum gets a full attorney read before it leaves my desk. AI is a drafting layer, not a publishing layer.

Strategy and judgment calls

Sue, settle, or walk? Arbitration or court? Which causes survive a demurrer? The AI surfaces options; I pick.

Anti-SLAPP and sensitive litigation

California's anti-SLAPP statute and defamation doctrine are landmine territory. Every C&D matter, public-figure dispute, and protected-speech case gets human-only strategy work.

Personal client communication

Every accepted-proposal reply is written by me. The chatbox routes and captures intake; the relationship runs through my inbox, in writing.

Bar-licensed work product

The final filing, the executed contract, the signed letter on my letterhead. Those carry my license number. No AI ships work product under my name without me reading every word.

Fee disclosures and engagement letters

If AI cuts a four-hour task to forty minutes, the client gets the savings. Every engagement letter discloses where AI assists and how the time-savings split.

Why CRPC matters, and how I keep these tools compliant

Every architectural choice on this page maps to a specific California Rule of Professional Conduct. Pairings in plain English.

  1. Competence · CRPC 1.1

    Rule 1.1 requires the attorney to understand the technology used. ABA Op. 512 (July 2024) applies this to generative AI.

    How I comply: I read the Anthropic API docs, model card, changelog, and CA State Bar Practical Guidance before I deploy. Every system prompt reflects my own competence, not vendor boilerplate.

  2. Confidentiality · CRPC 1.6

    Rule 1.6 prohibits revealing client information without informed consent. Free-tier AI that trains on inputs is unsafe for confidential material.

    How I comply: Tool proxy does not persist input or output. Anthropic processes prompts under non-training API terms. Prompts treat free-text as fact patterns, not identities.

  3. Communication · CRPC 1.4

    Rule 1.4 requires reasonable client communication about means used. AI use is the kind of thing a client would want to know.

    How I comply: Engagement letters include an AI disclosure addendum naming tools, data flow, and review process. The chatbox identifies itself as "AI Legal Analyst," never as me.

  4. Fees · CRPC 1.5

    Rule 1.5 prohibits unreasonable fees. AI compresses many tasks; the savings belong to the client.

    How I comply: Flat-fee pricing wherever possible. Hourly bills only human time spent reviewing and finalizing.

  5. Supervision · CRPC 5.3

    Rule 5.3 makes attorneys responsible for supervising non-lawyer assistants. CA State Bar guidance treats AI tools as falling within that regime.

    How I comply: Every tool output is reviewed by me before any deliverable ships. Prompts include explicit "do not draft X" rules. Supervision is engineered in, not bolted on.

  6. Candor and verification · ABA Op. 512 · CRPC 3.3

    Fabricated citations remain the attorney's responsibility under Rule 3.3. Several sanctioned attorneys in 2023 and 2024 learned this the hard way.

    How I comply: URL sanitizer at the Worker layer strips any link Claude returned that is not in my live content graph. Every cite in a paid deliverable is verified by hand.

Pricing

Three engagement paths. Most firms start with the $2,500 audit and extend into a full implementation if useful.

One-off

Hourly Advisory

$240 / hr
  • AI vendor contract review
  • Specific AI ethics question (written opinion)
  • Single-document review
  • Written response within two business days
  • No retainer
Flat fee

Audit & Policy Package

$2,500
  • Firm-specific AI Use Policy drafted against your matter mix
  • Named-vendor matrix and redaction protocol
  • Engagement-letter disclosure addendum
  • One-hour recorded team training
  • Two revision rounds
Custom quote

AI Implementation Package

$3,500–$5,000 · custom
  • Everything in the Audit & Policy Package
  • Custom workflow design (intake, drafting, generation)
  • Tool build modeled after one of the eight demos above
  • Team training (up to 10 attorneys/staff)
  • 30-day post-deployment support
  • Scope and fee fixed in writing before work starts
Email me for a quote

Frequently asked questions

The questions every firm asks me before signing on. Short answers, written by me, in plain English.

Are you a coder or a lawyer?

Both. I'm a California attorney (Bar #279869) licensed since 2011 with a J.D. from Boston University School of Law, and I've been a daily user of generative AI since the GPT-3 API was a private beta. I wrote and ship every line of the tool code on Terms.Law myself, I draft the system prompts, and I review every paid work product. The legal judgment is mine. The AI helps me move faster on the parts that are mechanical.

Do you store our prompts?

No. The tool proxy on Terms.Law does not persist input or output payloads. Anthropic processes the prompt through the Messages API and returns a response, and the response is rendered into the browser. When a user opts in by capturing an email for a lead notification, only the email plus a short results summary go to my inbox and a Telegram backup, never the raw prompt body. Free-text inputs are kept ephemeral by design.

What models do you use?

Claude Opus 4.7 via the Anthropic Messages API is the workhorse for every interactive tool on the site, plus the calculator narrative layer. The chatbox runs an A/B between Opus and a legacy GPT-5.4 deployment so I can measure conversion lift directly. I picked Opus because it stays grounded on multi-step California-specific reasoning where cheaper tiers hallucinate citations. That difference matters when the output ends up in a demand letter.

Can you build this for our practice area?

Yes. The eight tools on this page cover plaintiff litigation, contract review, AI ethics, and pro se filing. Replicating the pattern for employment, IP, tax, healthcare, family, real estate, or transactional work is mostly a matter of swapping the system prompt and the tier-routing logic. Flat-fee audits start at $2,500. Full custom builds run $3,500 to $5,000 depending on scope. I quote a fixed fee in writing before any work begins.

How do you handle confidentiality (CRPC 1.6)?

Three layers. First, vendor diligence: I only route confidential work through providers with contractual non-training commitments. Second, redaction: every system prompt instructs the model to treat input as patterns and facts rather than identities, and the audit package includes a redaction checklist for staff. Third, workflow design: the most sensitive material never touches a third-party API at all, and I document which task category gets which tool. CRPC 1.6 compliance is engineered in, not bolted on.

Do you train clients?

Yes. Every $2,500 audit package includes a one-hour live training session, recorded so new hires can watch later. The session covers the firm's specific AI Use Policy, the named-vendor matrix, the redaction protocol, the supervision and review checklist, and the engagement-letter disclosure language. For the $3,500 to $5,000 implementation package, I add a second training that walks the team through the custom workflow. Scoping for both packages happens in writing only, no calls.

What if our firm uses different vendors?

I work with any combination of Claude, ChatGPT, Gemini, CoCounsel, Harvey, Lexis+ AI, Westlaw AI, and Microsoft Copilot. The audit produces a vendor matrix that maps each vendor to a data classification and a permitted task list, so the policy survives a vendor swap. I have no vendor referral fees, no white-label resale, and no incentive to push any specific tool. The right stack is whatever fits your data, budget, and existing licenses.

How long does implementation take?

The $2,500 audit runs one to two weeks from kickoff email to delivered policy. The $3,500 to $5,000 implementation runs four to twelve weeks depending on workflow complexity, number of tools to deploy, and team-training scope. Both timelines assume responsive email turnaround from your side. I write a scope and timeline document at the start of the engagement and bill against it, so there are no surprise overruns.

What's your typical client?

Solos through small firms in the two-to-fifty-attorney range, plus in-house GCs running lean legal departments, plus accountants and professional-services firms whose confidentiality calculus mirrors law firm ethics. The connecting thread is written work product where the underlying data is sensitive and the cost of an AI mistake is real. I do not work with consumer-facing chatbot startups or with anyone building unsupervised legal-advice products.

Why do you give the tools away free?

Conversion math. The tools attract informational traffic that would otherwise bounce off a static service page, and a meaningful share of users who finish a tool session end up emailing me to hire me for the actual work. A single $575 demand letter conversion pays for roughly 11,500 Opus tool calls at current API pricing. The break-even per acquired client is tiny. The tools also prove competence on a live page, which is harder to fake than a testimonials carousel.

Email me with your firm's AI workflow

Tell me your firm size, practice area, current AI tools, and the one task you most want to automate. I reply inside two business days with a fixed-fee scope.

Informational only, not legal advice. This page describes the AI infrastructure I run on Terms.Law and the engagement paths I offer to firms that want similar systems. Reviewing this page does not create an attorney-client relationship. Sergei Tokmakov, Esq., California State Bar #279869, Boston University School of Law J.D., owner@terms.law.