AI for Solo Lawyers: The 2026 Ethics-First Guide

A practical guide for solo and small-firm attorneys who already use AI (or want to start) and would rather do it without inviting a bar complaint. Inside: the five ethics rules, the three categories of AI use, tool picks, a starter one-page policy, and when to hire an attorney for help.

Sergei Tokmakov, Esq. · CA Bar #279869 · Daily AI user since GPT-3

Why solos need this more than big firms do

Big firms have general counsel, risk committees, and a procurement function that vets every new tool. As a solo, you are all three of those people. You also have less margin for error: a single bar complaint or a single hallucinated citation in a filed brief can end a career that a forty-person firm would absorb without flinching.

And yet solos have more to gain from AI than anyone. Drafting time, research time, summarization time, intake - these are the bottlenecks that limit how many matters you can handle. AI compresses them. The catch is that the rules don't change because you're a solo. The State Bar of California's November 2023 Practical Guidance on Generative AI applies to you the same as it applies to Latham & Watkins. ABA Formal Opinion 512 (July 2024) is the same standard. You get the upside of AI; you also get the supervision burden.

I've been a daily AI user since the GPT-3 API was a private beta, and I run my own solo practice. This guide is what I tell solo attorneys who ask me how I do it.

The five ethics rules that apply

You don't need to read all of the State Bar guidance and ABA Opinion 512 to start. You need to know which rules are in play and what each one means for daily practice. Here are the five.

CRPC 1.1 - Competence.

You must understand the technology you use, including its benefits and risks. That doesn't mean you need a CS degree; it means you can't say "I don't know how it works" if it goes wrong. Practically: read the TOS of every tool you use, know whether it trains on your inputs, and know its known failure modes (hallucinations, confidently wrong cites).

CRPC 1.6 - Confidentiality.

You may not reveal client information without informed consent. Free-tier consumer AI tools that train on inputs are inherently incompatible with this rule unless you redact every identifying detail. The clean fix is a paid enterprise tier with a contractual non-training commitment, or an API call that doesn't retain prompts.

CRPC 1.4 - Communication.

You must keep clients reasonably informed about significant developments. The State Bar guidance suggests disclosure when AI materially affects the work product or fees. In practice this means a one-paragraph addendum to your engagement letter that explains how you use AI.

CRPC 1.5 - Fees.

Fees must be reasonable. If AI compresses a four-hour task into forty minutes, you bill forty minutes. The efficiency belongs to the client. You can still bill for prompt engineering, output review, and the judgment work AI can't do.

CRPC 5.3 - Supervision.

You are responsible for non-lawyer assistants. The State Bar treats AI tools as non-lawyer assistants for this purpose. That means every AI output that leaves your firm needs a documented human review step, and every staffer needs to know which tools are approved and what data is allowed.

Two more rules deserve mention even though they're not on the headline list. Rule 3.3 (candor to the tribunal) makes you responsible for every citation in every brief you file; verify each one against the primary source. Rule 1.18 (duties to prospective clients) applies to AI intake chatbots that capture information from people who haven't yet hired you.
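Building the verification checklist, at least, can be automated even though the verification itself can't be. A minimal sketch - the regex is illustrative, catches only simple "volume reporter page" cites for a handful of reporters, and will miss many real-world formats:

```python
import re

# Illustrative pattern: "volume reporter page", e.g. "595 U.S. 411"
# or "123 Cal.App.4th 456". Not exhaustive -- real citation formats
# vary far more than this.
CITE_PATTERN = re.compile(
    r"\b\d{1,4}\s+"                                              # volume
    r"(?:U\.S\.|S\.\s?Ct\.|F\.(?:2d|3d|4th)|"
    r"Cal\.(?:App\.)?(?:2d|3d|4th|5th)?)"                        # reporter
    r"\s+\d{1,4}\b"                                              # page
)

def citation_checklist(draft: str) -> list[str]:
    """Return a deduplicated, in-order list of cites to verify by hand."""
    seen: set[str] = set()
    checklist: list[str] = []
    for match in CITE_PATTERN.finditer(draft):
        cite = match.group(0)
        if cite not in seen:
            seen.add(cite)
            checklist.append(cite)
    return checklist
```

Run it over the final draft, then check every item on the list against the primary source; an empty list means the regex missed your cites, not that there's nothing to verify.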

The three categories of AI use

I find it useful to think of AI in legal practice as falling into three distinct categories, each with a different risk profile.

1. Research

Using AI to find cases, summarize statutes, surface counterarguments, or explain a doctrine you haven't worked with in a while. Risk profile: moderate. Hallucinated citations are the headline risk. Don't paste in identifying client facts; do verify every cite against the primary source before relying on it.

2. Drafting

First-draft contracts, demand letters, briefs, memos, client comms. Risk profile: moderate to high. Confidentiality matters more here because real facts often have to enter the prompt. Use enterprise tiers with non-training commitments. Human review is mandatory - every word that ships has to be yours.

3. Automation

Document generators, intake bots, batch redlining, classification tasks. Risk profile: high but bounded. The tool runs without you watching, so the guardrails have to be in the design: structured inputs, attorney review at a defined checkpoint, no autonomous client-facing legal advice. This is where most solos go wrong by building too much, too fast.

Tool recommendations by use case

This list is current as of publication. Vendor TOS and pricing change frequently. Re-verify before you commit budget.

| Use case | What I recommend | Tier |
| --- | --- | --- |
| Long-form drafting (contracts, memos) | Claude (Anthropic) - best context window and reasoning for legal prose | Claude.ai Team or Pro |
| Second-opinion drafting / research | ChatGPT (OpenAI) - fast, broad capability | ChatGPT Team |
| Legal research with primary-source citations | Lexis+ AI or Westlaw Precision (better cite verification than general-purpose AI) | Paid subscription |
| Document automation on your domain | Custom workflow (form → Claude API → DOCX/PDF); I build these | API + thin web app |
| Microsoft 365 environments | Copilot for M365 (inherits your tenant's compliance posture) | Add-on subscription |
| Avoid for client-confidential work | Free-tier ChatGPT, free-tier Gemini, any tool whose TOS lets it train on inputs | N/A |
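For the custom-workflow row, here's the shape I mean, in a hypothetical sketch (every name in it is a placeholder, and the model call is stubbed out - in production it would wrap the vendor's API client): structured input, one isolated model call, and a hard attorney-review gate before anything leaves the firm.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class IntakeForm:
    client_placeholder: str   # e.g. "Client A" -- already redacted
    document_type: str        # e.g. "demand letter"
    facts: str

def build_prompt(form: IntakeForm) -> str:
    # Structured input: the form fields, not free text, drive the prompt.
    return (f"Draft a {form.document_type} for {form.client_placeholder}.\n"
            f"Facts: {form.facts}\n")

def generate_draft(form: IntakeForm, call_model: Callable[[str], str]) -> str:
    # call_model is injected so the API client lives at one edge of the
    # system; in production it would wrap the vendor's messages endpoint.
    return call_model(build_prompt(form))

def release(draft: str, attorney_approved: bool) -> str:
    # The defined checkpoint: nothing ships without attorney sign-off.
    if not attorney_approved:
        raise PermissionError("Attorney review required before release.")
    return draft
```

The design choice that matters is `release`: the review step is a function call that fails loudly, not a note in a policy document.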
Common mistake:

"I'll just use ChatGPT for everything." Free-tier ChatGPT trains on your inputs by default. If you paste a client's deposition into it, that material becomes potential training data. Move to ChatGPT Team or Claude Team before doing client work. The cost is roughly $25-$30 per user per month.

A starter one-page AI Use Policy you can paste

This is a working draft. It's not a substitute for a custom policy tailored to your firm, but it's a defensible starting point that takes you from "no written policy" to "documented written policy" in about ten minutes. Edit the bracketed bits, sign it, file it.

[Firm Name] - AI Use Policy

Effective: [Date] · Last reviewed: [Date] · Owner: [Attorney Name, Bar #]

1. Scope

This policy applies to every attorney, paralegal, contractor, and staff member of [Firm Name] who uses any generative-AI tool in connection with firm work. It covers research, drafting, automation, and any AI-assisted communication with clients or third parties.

2. Approved tools

The only AI tools approved for client-confidential work are:

  1. [Tool 1, e.g., Claude.ai Team] - for [drafting, summarization]
  2. [Tool 2, e.g., ChatGPT Team] - for [research, second opinions]
  3. [Tool 3, e.g., Lexis+ AI] - for [legal research with citation verification]

All approved tools must be on a paid tier with a contractual commitment that prompts and outputs are not used for model training. Before adding a new tool to this list, the firm owner reviews the vendor's TOS and confirms the non-training commitment in writing.

3. Prohibited tools and uses

  1. Free-tier consumer AI (ChatGPT free, Gemini free, etc.) is prohibited for any client-confidential material.
  2. No AI tool may be used to give a client a legal opinion without attorney review and sign-off.
  3. No AI-generated citation may be filed or sent to a client without verification against the primary source.

4. Data handling

Identifying client information (names, addresses, account numbers, case numbers, employer names) is redacted before being entered into any AI tool. Replace identifying detail with placeholders ("Client A," "Defendant B"). Non-identifying facts and fact patterns may be used in full.
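The substitution step can be scripted. This is a sketch, not a complete scrubber: it only replaces strings you list, so the identifier map has to be maintained per matter, and the output still needs a human look before it goes into a prompt.

```python
import re

def redact(text: str, identifiers: dict[str, str]) -> str:
    """Replace each identifying string with its placeholder: whole-word,
    case-insensitive, longest strings first so a short identifier can't
    clobber part of a longer one."""
    for needle, placeholder in sorted(identifiers.items(),
                                      key=lambda kv: -len(kv[0])):
        text = re.sub(rf"\b{re.escape(needle)}\b", placeholder,
                      text, flags=re.IGNORECASE)
    return text
```

Usage: `redact("Jane Smith sued Acme Corp.", {"Jane Smith": "Client A", "Acme Corp": "Defendant B"})` returns the sentence with both names swapped for placeholders; anything not in the map (nicknames, misspellings, a bare surname) passes through untouched, which is exactly why the output gets reviewed.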

5. Supervision and review

Every AI output that leaves the firm - to a client, a tribunal, a regulator, or any third party - is reviewed and approved by a licensed attorney. The reviewing attorney is responsible for accuracy, citation verification, and compliance with applicable rules of professional conduct.

6. Client disclosure

Every engagement letter includes a paragraph disclosing the firm's use of AI tools for drafting and analysis, including that all outputs are reviewed by a licensed attorney before delivery.

7. Billing

AI-assisted work is billed for the human time actually spent, including prompt engineering, output review, and revision. The firm does not bill clients for AI processing time as if it were attorney time.

8. Incidents

Any suspected confidentiality breach, hallucinated citation that escaped review, or other AI-related incident is reported to the firm owner within 24 hours. The firm owner determines whether client notification or bar reporting is required.

Signed: ____________________   Date: ____________
[Attorney Name], [Title], [Firm Name]

If you copy this verbatim and use it, you have a written policy. That puts you ahead of most solos. If you want one customized to your practice areas, your specific tool stack, and your malpractice insurer's expectations, that's what the $2,500 audit is for.

When to call an attorney for help

You don't need outside counsel to use AI safely. You do need outside counsel when a generic template stops fitting - when your practice areas, your specific tool stack, or your malpractice insurer's expectations call for a policy and workflow tailored to your firm.

Need help getting it right the first time?

The $2,500 AI Use Audit & Policy Package gives you a customized written policy, a vendor matrix, a client disclosure addendum, and a one-hour training session - typically in 14 business days, flat fee.


Disclaimer. This guide is informational content authored by Sergei Tokmakov, a California-licensed attorney (CA Bar #279869). It is not legal advice and does not create an attorney-client relationship. The starter AI Use Policy on this page is a generic template; it is not a substitute for a policy tailored to your firm and jurisdiction. Rules of professional conduct vary by state and change frequently; the rules cited here are accurate as of publication. For advice on your specific situation, email owner@terms.law.