AI Implementation for Law Firms & Professional Services

I've been a daily AI user since GPT-3 and a California attorney since 2011. I run ethics-compliant AI rollouts for solo practitioners, small firms, and in-house legal teams, so the upside reaches the work product before a bar complaint reaches your inbox.

Sergei Tokmakov, Esq. · California State Bar #279869 · BU Law J.D.

Who I help

If you produce written work product for clients and your data is sensitive, you're in the right place. AI without supervision is malpractice insurance bait. AI without strategy is shelfware.

Solo attorneys

You're already pasting things into ChatGPT. I help you do it safely, document it, and bill for the right amount of time.

Small firms (2-50 lawyers)

You need one written policy that covers every associate, a vendor matrix, and a training rhythm that doesn't pull partners off billable work.

In-house GCs & legal ops

Procurement wants an AI Use Policy. Outside counsel wants a vendor diligence template. I build both, plus a board-level summary.

Accountants & CPAs

Engagement letters, tax memos, client intake. The confidentiality calculus is similar to law: the AI ROI is real, and supervision is still mandatory.

Professional services

Architects, consultants, financial advisors, healthcare admins - anyone whose client files are confidential and whose deliverables are written.

Founder-led legal departments

One in-house attorney, twenty open projects, no time. AI workflows that one person can actually supervise are my specialty.

What goes wrong without expert review

I've watched all of these happen in real firms. Most were avoidable with a thirty-page policy and an hour of training.

  1. Confidentiality leaks · CRPC 1.6

    An associate pastes the deposition transcript into the free tier of ChatGPT to summarize it. By default, that data is now training fodder. Rule 1.6 prohibits revealing client information without informed consent, and "I didn't realize the box was checked" is not a defense. The fix is contractual: enterprise plans that disable training, paired with a policy that names which tools handle which data class.

  2. Competence gaps · CRPC 1.1 · ABA Op. 512

    Rule 1.1 now requires understanding the benefits and risks of relevant technology. ABA Formal Opinion 512 (July 2024) operationalized this for generative AI. A partner who doesn't know what their tools do can't supervise; a partner who refuses to learn is increasingly out of step with the standard of care.

  3. Unsupervised AI · CRPC 5.3

    Rule 5.3 makes you responsible for non-lawyer assistants. The State Bar guidance treats AI tools as non-lawyer assistants. That means every output that leaves your firm needs a human review trail, and every staffer needs to know which tools are approved for which task. Without a written policy, the supervisory record doesn't exist.

  4. Billing disclosure · CRPC 1.5

    Rule 1.5 prohibits unreasonable fees. If AI compresses a four-hour task to forty minutes, the client gets the savings - not a four-hour invoice. Some matters are flat-fee, some are hourly, and AI changes the math for both. I help firms write engagement letters that say plainly what's billed for AI-assisted work.

  5. Client disclosure · CRPC 1.4

    There's no universal duty to tell clients you use AI, but Rule 1.4 (communication) plus the State Bar guidance both lean toward disclosure when AI materially affects deliverables or fees. A one-page engagement-letter addendum solves this. Failing to disclose, when the client would reasonably want to know, is the kind of thing that surfaces in a bar complaint years later.

My AI stack - radical transparency

If I'm going to advise on your AI use, you should know exactly what I use in my own practice and how I keep client data out of the wrong places.

What I use

  • Claude (Opus, Sonnet): Primary drafting and analysis. Anthropic Claude.ai Team plan; training disabled by contract.
  • Claude Code: Codebase work for Terms.Law and client tools. Runs against the Anthropic API; same data posture as Claude.
  • ChatGPT (Team): Second-opinion drafting, ideation, fast research. Team-tier; training off.
  • Cursor: IDE-integrated AI for engineering work on this site and document generators.
  • Custom workflows: Document generators, intake bots, redline tools - purpose-built per client.
  • Gemini (read-only): I read Gemini outputs to understand the competitive landscape; I do not paste client work into Gemini.

How I keep it ethical

  • No identifying client data in prompts: I redact names, addresses, account numbers, and case numbers before pasting. Facts and patterns only.
  • Contractual non-training only: Every tool I use for client work has a written commitment that prompts are not used for training. Free tiers are excluded from client work.
  • Human review is mandatory: No AI output leaves my desk without me reading every word. Citations get checked against the primary source. Always.
  • Disclosed in engagement letters: Every engagement letter explains, in plain English, that AI assists with drafting and that I supervise outputs.
  • Local-first where possible: For the most sensitive work, I use redaction tools and keep the analysis on my machine.
  • I document my own use: Full breakdown here - same transparency I'd ask of any firm I audit.
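The redaction step above can be sketched as a simple pre-processing pass. This is an illustrative sketch, not my production tooling: the patterns, tokens, and `redact` helper are hypothetical, and regexes alone will never catch everything - a matter-specific name list and a human read remain mandatory.

```python
import re

# Illustrative redaction pass: swap identifying strings for neutral tokens
# before a prompt leaves the machine. Patterns here are examples only.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),             # SSN-shaped numbers
    (re.compile(r"\b\d{2}-[A-Z]{2,4}-\d{3,6}\b"), "[CASE-NO]"),  # e.g. 24-CV-01234
    (re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[ACCOUNT-NO]"),     # long digit runs
]

def redact(text: str, client_names: list[str]) -> str:
    """Replace known client names, then common identifier patterns."""
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

prompt = redact(
    "Summarize Jane Doe's deposition in case 24-CV-01234; reach her at jane@doe.com.",
    client_names=["Jane Doe"],
)
# The redacted prompt carries [CLIENT], [CASE-NO], and [EMAIL] tokens
# instead of the identifying strings.
```

The point of the sketch is the order of operations: known names first, generic patterns second, and the output still gets read by a human before it goes anywhere.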

What you get

Three engagement paths. Most firms start with the $2,500 audit and decide whether to extend into a full implementation. The $240/hr option is for one-off questions and vendor reviews.

One-off

Hourly Advisory

$240 / hr
  • AI vendor contract review
  • Specific AI ethics question (written opinion)
  • Single-document review (engagement letter, policy addendum)
  • Written response within two business days
  • No retainer
Custom quote

AI Implementation Package

$3,500–$5,000 · custom
  • Everything in the Audit & Policy Package
  • Custom workflow design (intake, drafting, document generation)
  • Document-generator setup on your domain or mine
  • Team training (up to 10 attorneys/staff)
  • 30-day post-deployment support
  • Scope and fee fixed in writing before work starts
Email me for a quote

Ready to put real guardrails around your AI use?

The $2,500 Audit & Policy Package is the fastest way to go from "we use AI sometimes" to a documented, defensible firm-wide practice.

Why me, specifically

There are AI consultants who don't know legal ethics, and there are legal ethics teachers who don't use AI. I'm both.

What you're actually buying: direct attorney work - not a junior associate, not an offshore team, not a generic AI consultant. I've been a daily user of generative AI since the GPT-3 API was a private beta. I've drafted 1,500+ contracts, run my practice through a static-HTML site at terms.law, and spent the last three years writing AI Use Policies for clients in California, New York, Texas, and Illinois. I read every California State Bar opinion, every ABA opinion, and every relevant vendor TOS before I write a single line of your policy.

Recent engagements

Anonymized snapshots from the last twelve months. Full case studies live on the case-studies page.

Related resources on Terms.Law

I've published a lot on AI in legal practice. These pages explain how I think - read them before you hire me.

Frequently asked questions

The questions every firm asks me before signing on. Short answers, written by me, in plain English.

Does using AI in legal practice violate CA RPC 1.1?

No, not by itself. California Rule 1.1 (competence) requires that I understand the technology I use, including its benefits and risks. The State Bar's 2023 Practical Guidance on Generative AI is clear: I can use AI, but I must supervise outputs, verify citations, and not rely on AI for legal judgment. Refusing to use AI when it would benefit a client may itself raise competence concerns.

What about confidentiality under CRPC 1.6?

Rule 1.6 prohibits revealing client information without informed consent. That means free-tier ChatGPT, which trains on inputs by default, is unsafe for client-confidential material. I use enterprise plans that contractually disable training, route through APIs that don't retain prompts, or redact identifying facts before pasting. A written AI Use Policy locks this in across the firm.

Do I need a written AI Use Policy?

If you have any staff, yes. CRPC 5.3 makes you responsible for supervising non-lawyer assistants, which includes AI tools. A written policy documents your supervision system: which tools are approved, what data is allowed, who reviews outputs, and how you handle client disclosure. Without it, a future bar complaint has no record of your guardrails.

Which AI tools are safest for legal work?

Claude (Anthropic) enterprise plans and ChatGPT Team/Enterprise both contractually disable training on your inputs. Microsoft Copilot for M365 inherits your tenant's compliance posture. Tools I avoid for confidential work: free-tier ChatGPT, Google Gemini free tier, and any vendor whose TOS lets them use your prompts for model training. The right tool depends on your data classification, not on which model is "smartest."

What if my AI tool's TOS doesn't allow legal-confidential data?

Then you can't put privileged material into it. Period. Many free or consumer tiers reserve broad rights to use inputs for training and improvement. The fix is either to upgrade to a contractually compliant tier, switch vendors, or restrict that tool to redacted hypotheticals and public material. Vendor diligence is the first thing I check in every audit.

How do I disclose AI use to clients?

There's no universal duty to disclose, but Rule 1.4 (communication) and the State Bar guidance both point toward disclosure when AI materially affects the work product or fees. I draft engagement-letter addenda that explain in plain English what tools I use, how outputs are reviewed, and what data is and isn't sent to a vendor. Most clients appreciate the transparency.

Is AI-assisted work billable?

Yes, but only for the human time spent. Rule 1.5 prohibits unreasonable fees. If AI cuts a four-hour task to forty minutes, I bill forty minutes - not four hours. The efficiency gain belongs to the client. What I do bill for is prompt engineering, output review, and the judgment work AI can't do.

What's ABA Formal Opinion 512?

ABA Formal Opinion 512 (July 2024) is the bar's first comprehensive guidance on generative AI. It addresses competence (Rule 1.1), confidentiality (Rule 1.6), supervision (Rules 5.1, 5.3), communication (Rule 1.4), fees (Rule 1.5), and candor to tribunals (Rule 3.3). California's State Bar adopted similar Practical Guidance in November 2023. Both are foundational reading for any firm rolling out AI.

Can I use AI for client intake?

Yes, with two cautions. First, an AI intake bot can't form an attorney-client relationship or give legal advice - it has to be branded as informational, not "AI lawyer." Second, prospective-client information triggers Rule 1.18 duties around confidentiality and conflicts. I build intake chatbots that route to attorney review, capture only what's needed, and don't pretend to be a lawyer.
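The three guardrails in that answer - capture only what's needed, flag everything for attorney review, never pose as a lawyer - can be sketched in a few lines. Everything below is hypothetical (the field whitelist, the `take_intake` helper, the disclaimer wording); it shows the shape of the pattern, not a deployed system.

```python
from dataclasses import dataclass

# Hypothetical sketch of the three intake guardrails:
# fixed field whitelist, mandatory attorney-review flag,
# and a disclaimer on every automated reply.
ALLOWED_FIELDS = {"name", "email", "matter_type", "short_description"}
DISCLAIMER = ("This is an informational intake assistant, not a lawyer. "
              "No attorney-client relationship is formed by this chat.")

@dataclass
class IntakeRecord:
    fields: dict
    needs_attorney_review: bool = True  # never auto-cleared by the bot

def take_intake(submission: dict) -> tuple[IntakeRecord, str]:
    # Drop anything outside the whitelist before it is stored or prompted.
    minimized = {k: v for k, v in submission.items() if k in ALLOWED_FIELDS}
    reply = (f"{DISCLAIMER}\n\nThanks, {minimized.get('name', 'there')} - "
             "an attorney will review your submission and follow up.")
    return IntakeRecord(fields=minimized), reply

record, reply = take_intake({
    "name": "A. Client",
    "email": "a@example.com",
    "matter_type": "contract review",
    "ssn": "123-45-6789",  # not on the whitelist -> discarded, never stored
})
```

The design choice worth copying is that the review flag defaults to true and nothing in the bot's path can clear it - the only way a submission exits the queue is through a human.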

What if AI hallucinates a citation?

It happens, and Rule 3.3 (candor to the tribunal) makes you responsible for what you file. The fix is verification: every cite gets checked against the primary source before it leaves the firm. My AI Use Policy templates make this an explicit checklist step. Several sanctioned attorneys in 2023-2024 learned this rule the hard way.

Start with the $2,500 audit

Flat fee. Written deliverables in 14 business days. If after the audit you want a custom implementation, I credit the audit fee against the implementation quote.

Disclaimer. This page is informational content authored by Sergei Tokmakov, a California-licensed attorney (CA Bar #279869). It does not constitute legal advice and does not create an attorney-client relationship. AI ethics rules vary by jurisdiction and change frequently; the rules cited here are accurate as of publication but may have evolved. For advice on your specific situation, email owner@terms.law.