I've been a daily AI user since GPT-3 and a California attorney since 2011. I run ethics-compliant AI rollouts for solo practitioners, small firms, and in-house legal teams, so the upside reaches the work product before a bar complaint reaches your inbox.
Sergei Tokmakov, Esq. · California State Bar #279869 · BU Law J.D.
If you produce written work product for clients and your data is sensitive, you're in the right place. AI without supervision is malpractice insurance bait. AI without strategy is shelfware.
You're already pasting things into ChatGPT. I help you do it safely, document it, and bill for the right amount of time.
You need one written policy that covers every associate, a vendor matrix, and a training rhythm that doesn't pull partners off billable work.
Procurement wants an AI Use Policy. Outside counsel wants a vendor diligence template. I build both, plus a board-level summary.
Engagement letters, tax memos, client intake. The confidentiality calculus mirrors legal practice: the AI ROI is real, but supervision is still mandatory.
Architects, consultants, financial advisors, healthcare admins - anyone whose client files are confidential and whose deliverables are written.
One in-house attorney, twenty open projects, no time. AI workflows that one person can actually supervise are my specialty.
I've watched all of these happen in real firms. Most were avoidable with a thirty-page policy and an hour of training.
An associate pastes the deposition transcript into the free tier of ChatGPT to summarize it. By default, that data is now training fodder. Rule 1.6 prohibits revealing client information without informed consent, and "I didn't realize the box was checked" is not a defense. The fix is contractual: enterprise plans that disable training, paired with a policy that names which tools handle which data class.
Rule 1.1 now requires understanding the benefits and risks of relevant technology. ABA Formal Opinion 512 (July 2024) operationalized this for generative AI. A partner who doesn't know what their tools do can't supervise; a partner who refuses to learn is increasingly out of step with the standard of care.
Rule 5.3 makes you responsible for non-lawyer assistants. The State Bar guidance treats AI tools as non-lawyer assistants. That means every output that leaves your firm needs a human review trail, and every staffer needs to know which tools are approved for which task. Without a written policy, the supervisory record doesn't exist.
Rule 1.5 prohibits unreasonable fees. If AI compresses a four-hour task to forty minutes, the client gets the savings - not a four-hour invoice. Some matters are flat-fee, some are hourly, and AI changes the math for both. I help firms write engagement letters that say plainly what's billed for AI-assisted work.
There's no universal duty to tell clients you use AI, but Rule 1.4 (communication) plus the State Bar guidance both lean toward disclosure when AI materially affects deliverables or fees. A one-page engagement-letter addendum solves this. Failing to disclose, when the client would reasonably want to know, is the kind of thing that surfaces in a bar complaint years later.
If I'm going to advise on your AI use, you should know exactly what I use in my own practice and how I keep client data out of the wrong places.
Three engagement paths. Most firms start with the $2,500 audit and decide whether to extend into a full implementation. The $240/hr option is for one-off questions and vendor reviews.
The $2,500 Audit & Policy Package is the fastest way to go from "we use AI sometimes" to a documented, defensible firm-wide practice.
There are AI consultants who don't know legal ethics, and there are legal ethics teachers who don't use AI. I'm both.
What you're actually buying: direct attorney work - not a junior associate, not an offshore team, not a generic AI consultant. I've been a daily user of generative AI since the GPT-3 API was a private beta. I've drafted 1,500+ contracts, run a law practice from a static-HTML site at terms.law, and spent the last three years writing AI Use Policies for clients in California, New York, Texas, and Illinois. I read every California State Bar opinion, every ABA opinion, and every relevant vendor TOS before I write a single line of your policy.
Anonymized snapshots from the last twelve months. Full case studies live on the case-studies page.
Drafted a 12-page AI Use Policy with vendor matrix, client-disclosure addendum, and a one-hour training. $2,500 flat.
In-house legal · SaaS: Custom Claude workflow plus a written policy that procurement would actually accept. $4,500 implementation.
Accounting firm · TX: Replaced a 20-minute manual process with a 2-minute generator, plus a confidentiality policy clients sign. $3,500 build.
I've published a lot on AI in legal practice. These pages explain how I think - read them before you hire me.
The questions every firm asks me before signing on. Short answers, written by me, in plain English.
No, not by itself. California Rule 1.1 (competence) requires that I understand the technology I use, including its benefits and risks. The State Bar's 2023 Practical Guidance on Generative AI is clear: I can use AI, but I must supervise outputs, verify citations, and not rely on AI for legal judgment. Refusing to use AI when it would benefit a client may itself raise competence concerns.
Rule 1.6 prohibits revealing client information without informed consent. That means free-tier ChatGPT, which trains on inputs by default, is unsafe for client-confidential material. I use enterprise plans that contractually disable training, route through APIs that don't retain prompts, or redact identifying facts before pasting. A written AI Use Policy locks this in across the firm.
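The redaction option above can be sketched as a simple pre-processing pass. This is an illustrative sketch only, not any vendor's API or my actual tooling: the pattern names and placeholders are hypothetical, and regexes alone will miss client names and context clues, so a real policy would pair this with human review before anything is pasted.

```python
import re

# Hypothetical redaction pass: strip obvious identifiers before a prompt
# leaves the firm. Pattern set is illustrative, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Client reachable at jroe@example.com or 555-867-5309, SSN 123-45-6789."
safe_prompt = redact(prompt)  # placeholders instead of identifiers
```

The point of the sketch is the workflow, not the patterns: the redaction step sits between the matter file and the model, and the firm's policy names which data classes must pass through it.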
If you have any staff, yes. CRPC 5.3 makes you responsible for supervising non-lawyer assistants, which includes AI tools. A written policy documents your supervision system: which tools are approved, what data is allowed, who reviews outputs, and how you handle client disclosure. Without it, a future bar complaint has no record of your guardrails.
Claude (Anthropic) enterprise plans and ChatGPT Team/Enterprise both contractually disable training on your inputs. Microsoft Copilot for M365 inherits your tenant's compliance posture. Tools I avoid for confidential work: free-tier ChatGPT, Google Gemini free tier, and any vendor whose TOS lets them use your prompts for model training. The right tool depends on your data classification, not on which model is "smartest."
Then you can't put privileged material into it. Period. Many free or consumer tiers reserve broad rights to use inputs for training and improvement. The fix is to upgrade to a contractually compliant tier, switch vendors, or restrict that tool to redacted hypotheticals and public material. Vendor diligence is the first thing I check in every audit.
There's no universal duty to disclose, but Rule 1.4 (communication) and the State Bar guidance both point toward disclosure when AI materially affects the work product or fees. I draft engagement-letter addenda that explain in plain English what tools I use, how outputs are reviewed, and what data is and isn't sent to a vendor. Most clients appreciate the transparency.
Yes, but only for the human time spent. Rule 1.5 prohibits unreasonable fees. If AI cuts a four-hour task to forty minutes, I bill forty minutes - not four hours. The efficiency gain belongs to the client. Where I do bill is for prompt engineering, output review, and the judgment work AI can't do.
ABA Formal Opinion 512 (July 2024) is the bar's first comprehensive guidance on generative AI. It addresses competence (Rule 1.1), confidentiality (Rule 1.6), supervision (Rules 5.1, 5.3), communication (Rule 1.4), fees (Rule 1.5), and candor to tribunals (Rule 3.3). California's State Bar adopted similar Practical Guidance in November 2023. Both are foundational reading for any firm rolling out AI.
Yes, with two cautions. First, an AI intake bot can't form an attorney-client relationship or give legal advice - it has to be branded as informational, not "AI lawyer." Second, prospective-client information triggers Rule 1.18 duties around confidentiality and conflicts. I build intake chatbots that route to attorney review, capture only what's needed, and don't pretend to be a lawyer.
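The design constraints above can be sketched in a few lines. This is a minimal sketch under stated assumptions, not my production bot: the field names, disclaimer text, and review queue are all hypothetical, and the point is the shape - label the bot as informational, capture only what Rule 1.18 diligence needs, and route every record to a human.

```python
from dataclasses import dataclass

# Illustrative intake handler: labels itself as informational, collects
# minimal fields, and queues everything for attorney review instead of
# answering legal questions. All names here are hypothetical.
DISCLAIMER = ("This assistant collects intake information only. "
              "It is not a lawyer and does not give legal advice.")

@dataclass
class IntakeRecord:
    name: str
    matter_type: str           # e.g. "contract review"
    other_parties: list[str]   # captured for a Rule 1.18 conflicts check
    needs_attorney_review: bool = True  # every record routes to a human

review_queue: list[IntakeRecord] = []

def handle_intake(name: str, matter_type: str, other_parties: list[str]) -> str:
    review_queue.append(IntakeRecord(name, matter_type, other_parties))
    return DISCLAIMER + " An attorney will review your submission."

reply = handle_intake("Jane Roe", "contract review", ["Acme LLC"])
```

Note what the sketch never does: it stores no narrative facts beyond what a conflicts check needs, and there is no code path that returns legal advice.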
It happens, and Rule 3.3 (candor to the tribunal) makes you responsible for what you file. The fix is verification: every cite gets checked against the primary source before it leaves the firm. My AI Use Policy templates make this an explicit checklist step. Several sanctioned attorneys in 2023-2024 learned this rule the hard way.
Flat fee. Written deliverables in 14 business days. If after the audit you want a custom implementation, I credit the audit fee against the implementation quote.