My AI stack - what I use, how I keep it ethical

If I'm going to write your AI Use Policy, you should know what's in mine first. This page is the same transparency I'd ask of any firm I audit.

Foundation models - when I use which

All three of these are good. I pick based on the task, the context window I need, and which model handles my tone best for a given client. I rotate intentionally so I never become dependent on a single vendor.

Claude (Opus, Sonnet) - Primary drafting

Anthropic's Claude.ai Team plan, with training contractually disabled on inputs. Claude Opus is my workhorse for long contract drafts, multi-document analysis, and anything where the model needs to hold a lot of context without drifting. Sonnet handles faster turnaround work and tool-call workflows.

Tier: Claude.ai Team
Training: Off (contractual)
Used for: Long-form drafting, contract analysis, policy writing

ChatGPT (Team) - Second opinion

OpenAI Team plan, training disabled. I use GPT-5.4 for second opinions on drafts, ideation, and fast research summaries. I rarely use it as the only model on a piece of work; the most reliable workflow is "draft in Claude, sanity-check in ChatGPT, then ship."

Tier: ChatGPT Team
Training: Off (contractual)
Used for: Second opinions, fast research, the AI chatbox on Terms.Law

Gemini - Read-only

I read Gemini's output to understand the competitive landscape and to test prompts against a third model. I do not paste client work into Gemini. Of the three major vendors, Google's consumer-tier terms are the most permissive about using inputs, and I haven't seen enough difference at the Workspace tier to justify moving client work there.

Tier: Free / public
Training: Assume yes
Used for: Competitive testing only

Coding & research tools

Terms.Law is a static-HTML site I run end-to-end. These are the AI tools I use to build it and to build client tools. Each one has a posture choice baked in.

Claude Code - Codebase work

Anthropic's CLI coding agent, running against the Anthropic API. I use it for Terms.Law itself - building generators, refactoring HTML, writing the AI Use Policy templates I sell to clients. Same data posture as Claude.ai Team. No client-confidential material passes through it.

Used for: Site builds, generator scaffolding, refactors
Client data: Never

Cursor - IDE integration

VS Code-derived IDE with AI inline. I use it for the same codebase work as Claude Code, depending on which workflow is faster for the task. Privacy mode stays on, I choose which model runs, and I keep the data-handling settings at the enterprise-tier posture.

Used for: Inline coding, light refactoring
Client data: Never

Document automation

Most of what I build for clients lives here. These are not consumer tools; they are workflows I assemble for a specific firm.

For each client, I typically build a small purpose-built tool that takes structured input (a form on their domain) and produces a structured output (an engagement letter, an NDA red-flag report, a vendor-diligence questionnaire). The AI sits behind a form so staff don't have to write prompts. The model runs through an API key tied to the firm, not a shared consumer login. Outputs are reviewed by an attorney before they leave the firm.

The thinking is simple: the firm gets the speed of AI without the chaos of "everyone paste whatever you want into ChatGPT." Supervision is built into the tool, not bolted on after.
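In code, that pattern reduces to a small skeleton: structured form fields go in, a fixed prompt is assembled (so staff never write prompts), the model is called through a firm-scoped client, and nothing is released until an attorney flips the review flag. This is an illustrative sketch only; the field names, the `call_model` stand-in, and the review gate are my assumptions, not the actual client tooling.

```python
from dataclasses import dataclass

@dataclass
class DraftRequest:
    """Structured input collected from the firm's intake form."""
    document_type: str   # e.g. "engagement_letter" (hypothetical value)
    client_label: str    # already redacted upstream, e.g. "Client A"
    key_terms: dict      # form fields, e.g. {"fee": "$2,500 flat"}

@dataclass
class Draft:
    body: str
    reviewed_by_attorney: bool = False  # supervision gate, off by default

def build_prompt(req: DraftRequest) -> str:
    """Assemble a fixed prompt from form fields; staff never write prompts."""
    terms = "; ".join(f"{k}: {v}" for k, v in req.key_terms.items())
    return (f"Draft a {req.document_type} for {req.client_label}. "
            f"Terms: {terms}.")

def generate_draft(req: DraftRequest, call_model) -> Draft:
    """`call_model` stands in for the firm-keyed API client."""
    return Draft(body=call_model(build_prompt(req)))

def release(draft: Draft) -> str:
    """Nothing leaves the firm until an attorney has signed off."""
    if not draft.reviewed_by_attorney:
        raise PermissionError("attorney review required before release")
    return draft.body
```

The point of the `release` gate is that supervision lives in the tool itself: skipping attorney review is an error, not an option.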

What I won't use

A short list of things I refuse, with the reason.

Free-tier ChatGPT for client work

The consumer tier reserves rights to use inputs for training and model improvement. That posture is incompatible with CRPC 1.6. If a client says "we already use it," step one is moving them off.

Any vendor without a contractual non-training commitment

If the TOS lets the vendor train on your prompts, your privileged work is fueling a model someone else can prompt. There are now enough alternatives that there's no reason to accept this.

AI for unsupervised legal advice

An AI chatbot cannot form an attorney-client relationship or give legal advice. Everywhere I deploy AI in a legal-adjacent context, it's branded "AI Legal Analyst," not "AI Lawyer," and a human attorney sits behind it.

Citation insertion without verification

Every cite in every brief gets checked against the primary source before filing. No exceptions. The Mata v. Avianca sanctions case made the rule expensive to ignore; I don't.

"Just trust the output"

Hallucinations happen on every model I use. Output review is not optional. Anything that leaves my desk has been read by me, word by word.

How I keep client data out of the wrong places

Five rules I apply to every engagement.

1. No identifying client data in prompts.

Names, addresses, account numbers, case numbers, employer names - these get redacted to "Client A" and "Defendant B" before anything enters a model. Patterns and legal questions only.
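A minimal sketch of that redaction step, under assumptions: the firm keeps a mapping of real identifiers to neutral labels, and every prompt passes through it before leaving the machine. The label scheme ("Client A", "Defendant B") comes from the rule above; the function itself is illustrative, not my actual tooling.

```python
import re

def redact(text: str, identifiers: dict[str, str]) -> str:
    """Replace each identifying string with its neutral label.

    `identifiers` maps real names to labels, e.g.
    {"Acme Corp": "Client A", "John Doe": "Defendant B"}.
    Longer names are replaced first so a short name that is a
    substring of a longer one doesn't clobber it.
    """
    for real, label in sorted(identifiers.items(),
                              key=lambda kv: -len(kv[0])):
        text = re.sub(re.escape(real), label, text, flags=re.IGNORECASE)
    return text
```

The case-insensitive match is a deliberate choice: a name typed in lowercase in an email still has to come out as "Client A", not slip through.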

2. Contractual non-training, not policy hope.

I do not rely on a vendor's blog post promising they "don't usually train on inputs." The non-training commitment has to be in the TOS or a side letter, and I read both.

3. Local-first when the data is sensitive enough.

For the most sensitive matters, I do the analysis on my machine with redaction tools and don't send the raw material to any cloud model. The tradeoff is speed; sometimes that's the right tradeoff.

4. Human review is the last step, every time.

Output review isn't a "should." It's the difference between supervised AI and a malpractice risk. I read what the model wrote, I check what it cited, and I edit before anything ships.

5. Disclosed in writing.

My engagement letter explains, in plain English, that AI tools assist with drafting, that I supervise outputs, and that no identifying client information is sent to a non-compliant vendor. Clients sign that. There is no surprise about how the work gets done.

Want me to build the same posture for your firm?

The $2,500 Audit & Policy Package translates everything on this page into a written policy and vendor matrix customized to your practice.


Disclaimer. This page is informational content authored by Sergei Tokmakov, a California-licensed attorney (CA Bar #279869). It describes the AI tools and posture I currently use in my own practice. It is not legal advice and does not create an attorney-client relationship. Vendor terms and ethics guidance change frequently; the postures described here reflect my reading as of publication. For advice on your firm's specific situation, email owner@terms.law.