If I'm going to write your AI Use Policy, you should know what's in mine first. This page gives you the same transparency I'd ask of any firm I audit.
All three of these are good. I pick based on the task, the context window I need, and which model handles my tone best for a given client. I rotate intentionally so I never become dependent on a single vendor.
Anthropic's Claude.ai Team plan, with training contractually disabled on inputs. Claude Opus is my workhorse for long contract drafts, multi-document analysis, and anything where the model needs to hold a lot of context without drifting. Sonnet handles faster turnaround work and tool-call workflows.
OpenAI Team plan, training disabled. I use GPT-5.4 for second opinions on drafts, ideation, and fast research summaries. I rarely use it as the only model on a piece of work; the most reliable workflow is "draft in Claude, sanity-check in ChatGPT, then ship."
I read Gemini's output to understand the competitive landscape and to test prompts against a third model. I do not paste client work into Gemini. Google's consumer-tier privacy posture is the most permissive of the three major vendors when it comes to using inputs, and I haven't found enough Workspace-tier delta to move client work there.
Terms.Law is a static-HTML site I run end-to-end. These are the AI tools I use to build it and to build client tools. Each one has a posture choice baked in.
Anthropic's CLI coding agent, running against the Anthropic API. I use it for Terms.Law itself: building generators, refactoring HTML, and writing the AI Use Policy templates I sell to clients. Same data posture as Claude.ai Team. No client-confidential material passes through it.
A VS Code-derived IDE with inline AI. I use it for the same codebase work as Claude Code, depending on which workflow is faster for the task. Privacy mode stays on; I choose the model myself and hold the account to enterprise-tier postures.
Most of what I build for clients lives here. These are not consumer tools; they are workflows I assemble for a specific firm.
For each client, I typically build a small purpose-built tool that takes structured input (a form on their domain) and produces a structured output (an engagement letter, an NDA red-flag report, a vendor-diligence questionnaire). The AI sits behind a form so staff don't have to write prompts. The model runs through an API key tied to the firm, not a shared consumer login. Outputs are reviewed by an attorney before they leave the firm.
The thinking is simple: the firm gets the speed of AI without the chaos of "everyone paste whatever you want into ChatGPT." Supervision is built into the tool, not bolted on after.
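To make that concrete, here's a minimal sketch of the pattern in TypeScript. It is illustrative, not any specific client build: the form fields, prompt template, and model string are stand-ins, and the firm-scoped key lives server-side in an environment variable.

```typescript
// Minimal sketch of the form-behind-API pattern. Illustrative only: the field
// names, prompt template, and model string are stand-ins, not a real client build.

interface IntakeForm {
  agreementType: string;   // e.g. "mutual NDA", chosen from a dropdown
  clientRole: string;      // e.g. "disclosing party"
  counterparty: string;    // already-redacted label, e.g. "Counterparty B"
  keyTerms: string;        // structured notes from the intake form
}

export async function draftFromIntake(form: IntakeForm): Promise<string> {
  // The prompt is fixed in code, so staff never write prompts and every
  // request follows the same reviewed template.
  const prompt =
    `Draft a ${form.agreementType} where our client is the ${form.clientRole} ` +
    `and the other side is ${form.counterparty}. Key terms: ${form.keyTerms}. ` +
    `Flag any unusual provisions for attorney review.`;

  // Firm-scoped API key, set server-side; it never ships to the browser.
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.FIRM_ANTHROPIC_KEY ?? "",
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-sonnet-latest", // placeholder model name
      max_tokens: 4096,
      messages: [{ role: "user", content: prompt }],
    }),
  });

  if (!res.ok) throw new Error(`Model API returned ${res.status}`);
  const data = await res.json();
  // The draft goes to an attorney review queue, not straight to the client.
  return data.content[0].text;
}
```

The design choice worth noting: the prompt lives in code, not in staff hands, so supervision is a property of the tool rather than a training memo.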
A short list of things I refuse, with the reason for each.
The consumer tier reserves the right to use inputs for training and model improvement. That posture is incompatible with the duty of confidentiality under CRPC 1.6. If a client says "we already use it," step one is moving them off.
If the TOS lets the vendor train on your prompts, your privileged work is fueling a model someone else can prompt. There are now enough alternatives that there's no reason to accept this.
An AI chatbot cannot form an attorney-client relationship or give legal advice. Everywhere I deploy AI in a legal-adjacent context, it's branded "AI Legal Analyst," not "AI Lawyer," and a human attorney sits behind it.
Every cite in every brief gets checked against the primary source before filing. No exceptions. The Mata v. Avianca sanctions case made the rule expensive to ignore; I don't.
Hallucinations happen on every model I use. Output review is not optional. Anything that leaves my desk has been read by me, word by word.
Five rules I apply to every engagement.
Names, addresses, account numbers, case numbers, employer names: these get redacted to "Client A" and "Defendant B" before anything enters a model. Patterns and legal questions only.
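In practice the redaction step can be as simple as a per-matter lookup table applied before anything is sent out. A hypothetical sketch, not my actual tooling; the name map and numeric patterns below are illustrative:

```typescript
// Hypothetical sketch of the redaction step, not my actual tooling. A per-matter
// lookup table maps real identifiers to neutral labels before text goes anywhere;
// a human still reviews the result, because no pattern list catches everything.

function redact(text: string, identifiers: Map<string, string>): string {
  let out = text;
  for (const [real, label] of identifiers) {
    // Escape regex metacharacters so names like "Smith Co., Inc." match literally.
    const escaped = real.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    out = out.replace(new RegExp(escaped, "gi"), label);
  }
  // Catch common numeric identifiers regardless of the lookup table.
  out = out.replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]");
  out = out.replace(/\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b/g, "[ACCOUNT NO.]");
  return out;
}

// Usage: the map is built per matter, so the same name gets the same label everywhere.
const labels = new Map([
  ["Acme Corp", "Client A"],
  ["Jane Smith", "Defendant B"],
]);
console.log(redact("Acme Corp terminated Jane Smith on ...", labels));
```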
I do not rely on a vendor's blog post promising they "don't usually train on inputs." The non-training commitment has to be in the TOS or a side letter, and I read both.
For the most sensitive matters, I do the analysis on my machine with redaction tools and don't send the raw material to any cloud model. The cost is speed; sometimes that's the right tradeoff.
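Purely as an illustration, one shape that "on my machine" can take, assuming a local runtime such as Ollama (named here only for the example; this page doesn't depend on any particular tool):

```typescript
// Purely illustrative: a locally served model takes already-redacted text, and
// nothing crosses the network boundary. The runtime (Ollama on localhost) and
// model name are stand-ins, not a statement of my actual local tooling.

async function analyzeLocally(redactedText: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      model: "llama3",  // whatever model has been pulled locally
      prompt: `List the legal issues raised by this passage:\n\n${redactedText}`,
      stream: false,    // return one JSON object instead of a token stream
    }),
  });
  const data = await res.json();
  return data.response; // Ollama puts the completion in the "response" field
}
```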
Output review isn't a "should." It's the difference between supervised AI and a malpractice risk. I read what the model wrote, I check what it cited, and I edit before anything ships.
My engagement letter explains, in plain English, that AI tools assist with drafting, that I supervise outputs, and that no identifying client information is sent to a non-compliant vendor. Clients sign that. There is no surprise about how the work gets done.
The $2,500 Audit & Policy Package translates everything on this page into a written policy and vendor matrix customized to your practice.