Practice Area

AI compliance attorney for law firms

I'm Sergei Tokmakov, a California attorney and a daily AI user since the GPT-3 beta. I run ethics-compliant AI rollouts for solo and small-firm attorneys, audited against CA RPC 1.1, 1.6, and 5.3 plus ABA Formal Opinion 512, with cross-border coverage for the EU AI Act, California ADMT, and the Colorado AI Act. Flat-fee work, written deliverables, no offshore handoffs.

Sergei Tokmakov, Esq. · CA Bar #279869 · BU Law J.D. · Daily AI user since GPT-3
Quick answer

California Rules of Professional Conduct 1.1 (competence), 1.6 (confidentiality), and 5.3 (non-lawyer supervision), combined with ABA Formal Opinion 512 (July 2024), require lawyers to understand their AI tools, supervise outputs, protect client data, and disclose AI use where it materially affects the work. The California State Bar's November 2023 Practical Guidance is the state-specific companion. Cross-border firms also navigate the EU AI Act (Regulation (EU) 2024/1689), California ADMT regulations under CPRA, and the Colorado AI Act (effective February 2026). The fix is a written AI Use Policy plus vendor diligence plus a documented supervision record.

What I do for AI implementation in law firms

For $2,500, I run a firm-wide AI Use Audit and deliver a written package. I interview the attorneys and staff to map current AI use (which tools, which data, which workflows), audit the existing tools against CA RPC 1.1, 1.6, 1.4, 1.5, and 5.3 plus ABA Op. 512, run vendor diligence on the actual contracts you have signed with each AI vendor, draft a written AI Use Policy tailored to your practice areas, draft a client AI disclosure addendum for engagement letters, deliver a one-hour live training (recorded), and provide two rounds of revisions over 30 days. This is the bread-and-butter package.

For $3,500 to $5,000 (custom-quoted), I do the audit plus implementation. That includes custom workflow design (intake bots, document generators, redline tools, citation-verification checklists), document-generator setup on your domain or mine, training for up to 10 attorneys and staff, and 30 days of post-deployment support. The scope and fee are fixed in writing before work starts; I do not bill against retainers for this work.

For $240/hour, I take one-off questions: AI vendor contract review (single agreement), a written ethics opinion on a specific scenario, or a single-document policy review or revision. Written response within two business days. This is the right tier for a discrete question rather than a firm-wide review.

Why this calls for an attorney, not a consultant

There are AI consultants who do not know legal ethics, and there are legal ethics teachers who do not use AI. I am both. That matters because every meaningful AI compliance question for a law firm requires translating between a tool's technical architecture (where the data goes, who can see it, what the TOS reserves) and the bar's rules (which information is confidential, what counts as supervision, when client disclosure is required). A non-lawyer consultant can advise on the tool but cannot opine on the rule. A non-AI-user lawyer can opine on the rule but cannot evaluate the tool.

The other reason: an AI Use Policy is the supervisory record. CA RPC 5.3 makes you responsible for non-lawyer assistants, and the State Bar guidance treats AI tools as such. If a future bar complaint surfaces (a client claims AI mishandled their matter, a court asks about an AI-generated citation), the written policy is the evidence that you had a supervision system in place. A consultant deliverable is not that evidence; an attorney-drafted policy on the attorney's letterhead, citing the specific rules, is. The policy is also engagement-letter armor: it gives you a confident answer to the client question "do you use AI?" and the procurement question "what is your AI governance?"

I have also drafted 1,500+ contracts and reviewed countless AI vendor TOS. The vendor-diligence part of the audit catches the contracts that say "we don't train on your data" in the marketing copy but reserve broad training rights in the actual TOS. That gap is where most AI confidentiality leaks come from.

The controlling law and guidance

California Rule of Professional Conduct 1.1 (Competence) requires lawyers to understand the technology they use, including its benefits and risks. The California State Bar's Practical Guidance on Generative AI (November 2023) makes clear that this includes AI: the obligation is neither to use AI nor to refuse it, but to understand what each tool does, supervise outputs, and verify citations. Refusing to use AI when it would benefit the client can itself implicate competence concerns.

CA RPC 1.6 (Confidentiality) prohibits revealing client information without informed consent. Free-tier ChatGPT and other consumer AI services that train on inputs by default are not safe for client-confidential material. The fix is contractual: enterprise plans that disable training, APIs that do not retain prompts, or aggressive redaction before pasting.
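The "aggressive redaction" route can be made mechanical: a scrub pass that strips client identifiers before anything is pasted into a model. A minimal sketch, assuming the firm keeps a per-matter list of client terms; the regex patterns are illustrative, not a complete PII scrubber:

```python
import re

# Illustrative PII patterns only -- a real scrub list is built per matter
# (party names, account numbers, matter numbers, addresses, and so on).
PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),        # email addresses
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),  # US phone numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                # social security numbers
]

def redact(text: str, client_terms: list[str]) -> str:
    """Replace known client identifiers and common PII with placeholders."""
    for term in client_terms:
        text = re.sub(re.escape(term), "[CLIENT]", text, flags=re.IGNORECASE)
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Summarize the dispute between Acme LLC and Jane Doe (jane@doe.com, 415-555-0199)."
print(redact(prompt, client_terms=["Acme LLC", "Jane Doe"]))
# -> Summarize the dispute between [CLIENT] and [CLIENT] ([EMAIL], [PHONE]).
```

Redaction is a floor, not a substitute for a no-training contract: a scrub pass misses context that identifies the client, which is why the enterprise-plan or API route comes first.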

CA RPC 5.3 (Responsibilities Regarding Non-Lawyer Assistants) makes you responsible for supervising non-lawyer assistants, and the State Bar guidance treats AI tools as non-lawyer assistants for this purpose. Without a written policy, there is no supervisory record to point to.

CA RPC 1.4 (Communication) and 1.5 (Fees) together govern AI disclosure and billing. There is no universal duty to disclose AI use, but both rules and the State Bar guidance lean toward disclosure when AI materially affects deliverables or fees. Rule 1.5 prohibits unreasonable fees, so AI efficiency gains belong to the client where the matter is billed hourly.

ABA Formal Opinion 512 (July 2024) is the bar's first comprehensive guidance on generative AI. It operationalizes competence, confidentiality, supervision, communication, fees, and candor to tribunals for AI use. The California State Bar's Practical Guidance is the state-specific companion.

EU AI Act (Regulation (EU) 2024/1689) is extraterritorial: it applies to any AI system whose output is used in the EU, regardless of where the provider is located. For US firms with EU clients, EU attorneys, or EU end-users, the Act creates risk-classification, transparency, and (for high-risk systems) conformity-assessment obligations.

California ADMT (CPRA) regulates Automated Decision-Making Technology that makes significant decisions about consumers. Law firms that use AI for client intake, fee structuring, or matter routing may be in scope. The California Privacy Protection Agency finalized regulations on pre-use notice, opt-out rights, and risk assessments.

Colorado AI Act (Colo. Rev. Stat. § 6-1-1701 et seq., effective February 1, 2026) regulates high-risk AI systems making consequential decisions in legal services and other regulated sectors. Firms serving Colorado clients with AI-assisted intake or scoping may have notice and impact-assessment obligations.

Why proof matters

My own practice runs on AI daily. The Terms.Law chatbox is Claude Opus 4.7 via the Anthropic API. The Opus tools on /tools/* are the same. Document generators across the site use Claude Code. I write AI Use Policies for clients while running a public AI-implementation lab on my own domain. That is the proof point: I do the same work I am proposing to do for your firm, with the same ethics guardrails, and the practice is documented.

What clients send me

Before the audit kicks off, I ask for a short intake so the deliverable is grounded in your actual practice, not a generic template:

  • The AI tools currently in use, and which data and workflows touch them
  • The signed agreements or TOS for each AI vendor
  • Your engagement-letter template
  • Any existing AI, confidentiality, or data-handling policies

If you do not have all of the above, send what you have. The audit interview is partly about filling in the gaps.

What I send back

For the $2,500 Audit and Policy Package:

  • A written audit report mapping tools, data, and workflows against CA RPC 1.1, 1.4, 1.5, 1.6, and 5.3 plus ABA Op. 512
  • Vendor-diligence findings on each signed AI contract
  • A written AI Use Policy tailored to your practice areas
  • A client AI disclosure addendum for engagement letters
  • A one-hour live training (recorded)
  • Two rounds of revisions over 30 days

For the $3,500-$5,000 Implementation Package, everything in the audit package plus the custom workflow build, training for up to 10 staff, and 30 days of post-deployment support. Scope and fee fixed in writing before work starts.

Pricing

One-off

Hourly Advisory

$240 / hr
  • AI vendor contract review
  • Specific AI ethics question (written opinion)
  • Single-document review (engagement letter, policy addendum)
  • Written response within two business days
  • No retainer
Flat fee

Audit + Policy Package

$2,500
  • Firm-wide AI Use Audit (tools, data, workflows)
  • Vendor diligence on signed AI contracts
  • Written AI Use Policy tailored to your practice areas
  • Client AI disclosure addendum for engagement letters
  • One-hour live training (recorded)
  • Two rounds of revisions over 30 days
Custom quote

Full Implementation

$3,500-$5,000 custom
  • Everything in the Audit + Policy package
  • Custom workflow design (intake, drafting, generation)
  • Document-generator setup on your domain or mine
  • Team training (up to 10 attorneys/staff)
  • 30-day post-deployment support
  • Scope and fee fixed in writing before work starts
Email me for a quote

Frequently asked questions

Does using AI in legal practice violate CA RPC 1.1?

No, not by itself. California Rule of Professional Conduct 1.1 (competence) requires that you understand the technology you use, including its benefits and risks. The State Bar's November 2023 Practical Guidance on Generative AI is clear: you can use AI, but you must supervise outputs, verify citations, and not rely on AI for legal judgment. Refusing to use AI when it would benefit a client can itself raise competence concerns. ABA Formal Opinion 512 (July 2024) operationalized this standard nationally.

What about confidentiality under CA RPC 1.6?

Rule 1.6 prohibits revealing client information without informed consent. That means free-tier ChatGPT, which trains on inputs by default, is unsafe for client-confidential material. The fix is contractual: enterprise plans that disable training, APIs that do not retain prompts, or aggressive redaction before pasting. A written AI Use Policy locks this in across the firm and creates the supervision record. I have audited firms that thought they were "just using ChatGPT" and were quietly leaking privileged material into training data for months.

Do I need a written AI Use Policy?

If you have any non-lawyer staff (paralegals, assistants, contractors), yes. CA RPC 5.3 makes you responsible for supervising non-lawyer assistants, and the State Bar's guidance treats AI tools as such. A written policy documents your supervision system: which tools are approved, what data is allowed, who reviews outputs, and how you handle client disclosure. Without it, a future bar complaint has no record of your guardrails. Even a solo with no staff benefits from a policy for engagement-letter consistency.

What's ABA Formal Opinion 512?

ABA Formal Opinion 512 (July 2024) is the bar's first comprehensive guidance on generative AI. It addresses competence (Model Rule 1.1), confidentiality (1.6), supervision (5.1, 5.3), communication (1.4), fees (1.5), and candor to tribunals (3.3). California's State Bar Practical Guidance (November 2023) is the state-specific companion. Both are foundational reading for any firm rolling out AI; my audits include a compliance walkthrough against both documents.

What is California ADMT?

California's Automated Decision-Making Technology (ADMT) rules under the California Privacy Rights Act (CPRA) regulate businesses that use AI for significant decisions about consumers. The California Privacy Protection Agency (CPPA) finalized ADMT regulations governing pre-use notice, opt-out rights, and risk assessments. Law firms that use AI for client intake decisions, fee structures, or matter routing may be in scope. I run an ADMT applicability check for any firm with consumer-facing AI features and document the result in writing.

What about the EU AI Act?

The EU AI Act (Regulation (EU) 2024/1689) is extraterritorial: it applies to any AI system whose output is used in the EU, regardless of where the provider is located. For US law firms with EU clients, EU-based attorneys, or EU end-users, the Act creates compliance obligations around risk classification, transparency, and (for high-risk systems) registration and conformity assessments. I assess EU AI Act exposure as part of the audit for any firm with cross-border activity.

What about the Colorado AI Act?

Colorado's AI Act (Colo. Rev. Stat. § 6-1-1701 et seq., effective February 1, 2026) regulates "high-risk artificial intelligence systems" that make consequential decisions affecting consumers in employment, education, financial services, healthcare, housing, insurance, and legal services. Developers and deployers have notice, risk-management, and impact-assessment obligations. Firms serving Colorado clients with AI-assisted intake, pricing, or scoping decisions may be in scope. The audit includes a Colorado AI Act applicability check.

Which AI tools do you recommend for legal work?

I do not recommend tools generically; I match tools to the data classification and the use case. Claude (Anthropic) Team and Enterprise plans, ChatGPT Team and Enterprise, and Microsoft Copilot for M365 are the three I most often clear for client-confidential work because they contractually disable training. I avoid free-tier ChatGPT, Google Gemini free tier, and any vendor whose TOS reserves broad rights to train on inputs. The right tool depends on your practice areas and data, not on which model is "smartest" at the moment.
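One way to picture how that matching operates in a policy: data classification in, allow/deny out. The tool names follow the answer above; the deny list and the single "client-confidential" tier are illustrative assumptions, not a real firm's policy:

```python
# Illustrative policy table. Tool names from the answer above; the tiers
# and deny list are assumptions a real AI Use Policy would define itself.
APPROVED_FOR_CONFIDENTIAL = {
    "Claude Team", "Claude Enterprise",
    "ChatGPT Team", "ChatGPT Enterprise",
    "Microsoft Copilot for M365",
}
DENY = {"ChatGPT Free", "Gemini Free"}  # TOS reserves training rights on inputs

def may_use(tool: str, contains_client_data: bool) -> bool:
    """Gate a tool request against the policy table."""
    if tool in DENY:
        return False  # never approved, regardless of data classification
    if contains_client_data:
        return tool in APPROVED_FOR_CONFIDENTIAL  # training must be disabled by contract
    return True  # public or anonymized material: any vetted tool

print(may_use("ChatGPT Free", contains_client_data=False))  # -> False
print(may_use("Claude Team", contains_client_data=True))    # -> True
```

The point of writing it this way is that the rule is checkable before the prompt is sent, not argued about after.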

How do I bill for AI-assisted work?

CA RPC 1.5 prohibits unreasonable fees. If AI compresses a four-hour task to forty minutes, you bill forty minutes, not four hours. The efficiency gain belongs to the client. What you can bill for: prompt engineering, output review, the judgment work AI cannot do, and the time you spend tailoring AI output to the matter. Flat-fee engagements largely solve this; hourly engagements need engagement-letter language that explains what is and is not billable. I draft that language as part of the audit.

What if AI hallucinates a citation in my filing?

Rule 3.3 (candor to the tribunal) makes you responsible for what you file. Several attorneys have been sanctioned for filing briefs with fabricated AI-generated citations. The fix is verification: every cite gets checked against the primary source before it leaves the firm. My AI Use Policy templates include an explicit citation-verification checklist as a mandatory step. The doctrinal principle is simple: the lawyer is the supervising authority over the AI, just as the lawyer is over a junior associate.
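The mechanical half of that checklist, pulling every citation out of a draft so a human can verify each against the primary source, is easy to script. A sketch with a deliberately narrow, illustrative reporter pattern (a real citation grammar is far richer):

```python
import re

# Deliberately narrow pattern: volume, a few federal reporters, page.
# A production checklist would cover state reporters, pin cites, etc.
CITE = re.compile(r"\b\d{1,4}\s+(?:U\.S\.|F\.2d|F\.3d|F\.4th|F\. Supp\. 3d)\s+\d{1,5}\b")

def cite_checklist(draft: str) -> list[str]:
    """Return each unique citation once, in order of first appearance."""
    seen: set[str] = set()
    out: list[str] = []
    for match in CITE.finditer(draft):
        cite = match.group()
        if cite not in seen:
            seen.add(cite)
            out.append(cite)
    return out

draft = "See 598 U.S. 471; Mata v. Avianca, 678 F. Supp. 3d 443 (S.D.N.Y. 2023); 598 U.S. 471 again."
print(cite_checklist(draft))
# -> ['598 U.S. 471', '678 F. Supp. 3d 443']
```

The script only lists the cites; the verification itself, pulling each primary source and reading it, stays with the lawyer, which is the point of Rule 3.3.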

Ready to put real guardrails around your firm's AI use?

Email me the firm size, primary practice areas, and current AI tools. I'll respond same day with a scoped proposal.

Email owner@terms.law