Why solos need this more than big firms do
Big firms have general counsel, risk committees, and a procurement function that vets every new tool. As a solo, you are all three. You also have less margin for error: a single bar complaint or a single hallucinated citation in a filed brief can end a solo career, where a forty-person firm would absorb the same mistake without flinching.
And yet solos have more to gain from AI than anyone. Drafting time, research time, summarization time, intake - these are the bottlenecks that limit how many matters you can handle. AI compresses them. The catch is that the rules don't change because you're a solo. The State Bar of California's November 2023 Practical Guidance on Generative AI applies to you the same as it applies to Latham & Watkins. ABA Formal Opinion 512 (July 2024) is the same standard. You get the upside of AI; you also get the supervision burden.
I've been a daily AI user since the GPT-3 API was a private beta, and I run my own solo practice. This guide is what I tell solo attorneys who ask me how I do it.
The five ethics rules that apply
You don't need to read all of the State Bar guidance and ABA Opinion 512 to start. You need to know which rules are in play and what each one means for daily practice. Here are the five.
Competence (Rule 1.1). You must understand the technology you use, including its benefits and risks. That doesn't mean you need a CS degree; it means you can't say "I don't know how it works" if it goes wrong. Practically: read the TOS of every tool you use, know whether it trains on your inputs, and know its failure modes (hallucinations, confidently wrong cites).
Confidentiality (Rule 1.6). You may not reveal client information without informed consent. Free-tier consumer AI tools that train on inputs are inherently incompatible with this rule unless you redact every identifying detail (a short redaction sketch appears at the end of this section). The clean fix is a paid enterprise tier with a contractual non-training commitment, or an API call that doesn't retain prompts.
Communication (Rule 1.4). You must keep clients reasonably informed about significant developments. The State Bar guidance suggests disclosure when AI materially affects the work product or fees. In practice this means a one-paragraph addendum to your engagement letter that explains how you use AI.
Fees (Rule 1.5). Fees must be reasonable. If AI compresses a four-hour task into forty minutes, you bill forty minutes. The efficiency belongs to the client. You can still bill for prompt engineering, output review, and the judgment work AI can't do.
Supervision (Rules 5.1-5.3). You are responsible for non-lawyer assistants. The State Bar treats AI tools as non-lawyer assistants for this purpose. That means every AI output that leaves your firm needs a documented human review step, and every staffer needs to know which tools are approved and what data is allowed.
Two more rules deserve mention even though they're not on the headline list. Rule 3.3 (candor to the tribunal) makes you responsible for every citation in every brief you file - verify each one against the primary source. Rule 1.18 (prospective clients) applies to AI intake chatbots that capture information from people who haven't yet hired you.
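To make the confidentiality fix concrete, here is a minimal sketch of the redact-then-prompt habit in Python. The names, matter details, and redaction map are invented for illustration - build yours per matter - and the pattern works in front of any approved, non-training tool or API, not one vendor in particular.

```python
# Minimal sketch: swap identifying details for placeholders before anything
# reaches an AI tool, and swap them back only after attorney review.
# All names and matter details below are invented for illustration.
import re

REDACTIONS = {
    "Jane Roe": "Client A",
    "Acme Logistics, Inc.": "Defendant B",
    "Case No. 24-CV-01234": "Case No. [REDACTED]",
}

def redact(text: str) -> str:
    """Replace each identifying string with its neutral placeholder."""
    for original, placeholder in REDACTIONS.items():
        text = re.sub(re.escape(original), placeholder, text)
    return text

def restore(text: str) -> str:
    """Put the real names back into the reviewed output before it leaves the firm."""
    for original, placeholder in REDACTIONS.items():
        text = text.replace(placeholder, original)
    return text

prompt = redact(
    "Summarize the key deadlines in Jane Roe's dispute with Acme Logistics, Inc. "
    "(Case No. 24-CV-01234) based on the attached scheduling order."
)
# `prompt` now reads "...Client A's dispute with Defendant B (Case No. [REDACTED])..."
# and can go to an approved, non-training tier. Restore the real names only after
# you have reviewed the output.
```

The restore step is the point: placeholders never leave the firm, and real names never enter the tool.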
The three categories of AI use
I find it useful to think of AI in legal practice as falling into three distinct categories, each with a different risk profile.
1. Research
Using AI to find cases, summarize statutes, surface counterarguments, or explain a doctrine you haven't worked with in a while. Risk profile: moderate. Hallucinated citations are the headline risk. Don't paste in identifying client facts; do verify every cite against the primary source before relying on it.
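One habit that helps: before you rely on anything, pull every citation out of the AI output into a checklist and tick each one off against the primary source. Here is a minimal sketch of the extraction step - the regular expression is my own rough assumption covering a few common reporter formats, not a real citation parser, and the example citations are placeholders, not real authorities.

```python
# Minimal sketch: extract citations from AI-generated research text so each one
# can be checked against the primary source. The regex is a rough assumption
# covering common "Volume Reporter Page" formats, not a complete parser.
import re

CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+"                                         # volume
    r"(?:U\.S\.|S\.\s?Ct\.|F\.(?:2d|3d|4th)?|F\.\s?Supp\.(?:\s?[23]d)?|"
    r"Cal\.(?:\s?App\.)?(?:\s?[2345]th)?|P\.(?:2d|3d)?)"    # a few common reporters
    r"\s+\d{1,5}\b"                                         # first page
)

def citation_checklist(ai_output: str) -> list[str]:
    """Return a de-duplicated list of citations for manual verification."""
    seen: list[str] = []
    for match in CITATION_PATTERN.finditer(ai_output):
        cite = match.group(0)
        if cite not in seen:
            seen.append(cite)
    return seen

# Placeholder citations for illustration only - not real authorities.
draft = (
    "The duty of technology competence is discussed in 582 U.S. 123 and "
    "17 Cal. App. 5th 245; see also 910 F.3d 1032."
)
for cite in citation_checklist(draft):
    print(f"[ ] verify against primary source: {cite}")
```

The checklist doesn't verify anything for you - it just makes it harder to skip a cite.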
2. Drafting
First-draft contracts, demand letters, briefs, memos, client comms. Risk profile: moderate to high. Confidentiality matters more here because real facts often have to enter the prompt. Use enterprise tiers with non-training commitments. Human review is mandatory - every word that ships has to be yours.
3. Automation
Document generators, intake bots, batch redlining, classification tasks. Risk profile: high but bounded. The tool runs without you watching, so the guardrails have to be in the design: structured inputs, attorney review at a defined checkpoint, no autonomous client-facing legal advice. This is where most solos go wrong by building too much, too fast.
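Here is a minimal sketch of what "attorney review at a defined checkpoint" can look like in code. Everything in it - the function names, the fields, the stubbed drafting step - is an invented illustration of the pattern, not a product design: the automated step only ever writes into a review queue, and the client-facing step refuses to run without a recorded attorney sign-off.

```python
# Minimal sketch of the automation guardrail: structured input in, draft out,
# and a mandatory attorney-review checkpoint before anything reaches a client.
# Function and field names are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    matter_id: str
    document: str
    approved_by: Optional[str] = None  # stays None until an attorney signs off

review_queue: list[Draft] = []

def generate_draft(matter_id: str, structured_facts: dict) -> Draft:
    """Automated step: turn structured intake fields into a first draft.
    (Stubbed here; in practice this is where the AI call would go.)"""
    document = f"Demand letter draft for matter {matter_id}: {structured_facts}"
    draft = Draft(matter_id=matter_id, document=document)
    review_queue.append(draft)  # the checkpoint: nothing skips this queue
    return draft

def approve(draft: Draft, attorney: str) -> None:
    """The defined checkpoint: a licensed attorney reviews and signs off."""
    draft.approved_by = attorney

def send_to_client(draft: Draft) -> None:
    """Client-facing step refuses to run without an approval on record."""
    if draft.approved_by is None:
        raise RuntimeError("Blocked: no attorney approval recorded for this draft.")
    print(f"Sending draft for matter {draft.matter_id}, reviewed by {draft.approved_by}.")

d = generate_draft("2025-014", {"amount": "$12,400", "deadline": "30 days"})
# send_to_client(d)  # would raise RuntimeError: no approval yet
approve(d, "A. Attorney")
send_to_client(d)
```

The specific implementation matters less than the shape: the AI never touches a client directly, and the approval is recorded, which is exactly the documentation you want if a carrier or the bar ever asks.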
Tool recommendations by use case
These recommendations are current as of publication. Vendor TOS and pricing change frequently. Re-verify before you commit budget.
"I'll just use ChatGPT for everything." Free-tier ChatGPT trains on your inputs by default. If you paste a client's deposition into it, that material becomes potential training data. Move to ChatGPT Team or Claude Team before doing client work. The cost is roughly $25-$30 per user per month.
A starter one-page AI Use Policy you can paste
This is a working draft. It's not a substitute for a custom policy tailored to your firm, but it's a defensible starting point that takes you from "no written policy" to "documented written policy" in about ten minutes. Edit the bracketed bits, sign it, file it.
[Firm Name] - AI Use Policy
1. Scope
This policy applies to every attorney, paralegal, contractor, and staff member of [Firm Name] who uses any generative-AI tool in connection with firm work. It covers research, drafting, automation, and any AI-assisted communication with clients or third parties.
2. Approved tools
The only AI tools approved for client-confidential work are:
- [Tool 1, e.g., Claude.ai Team] - for [drafting, summarization]
- [Tool 2, e.g., ChatGPT Team] - for [research, second opinions]
- [Tool 3, e.g., Lexis+ AI] - for [legal research with citation verification]
All approved tools must be on a paid tier with a contractual commitment that prompts and outputs are not used for model training. Before adding a new tool to this list, the firm owner reviews the vendor's TOS and confirms the non-training commitment in writing.
3. Prohibited tools and uses
- Free-tier consumer AI (ChatGPT free, Gemini free, etc.) is prohibited for any client-confidential material.
- No AI tool may be used to give a client a legal opinion without attorney review and sign-off.
- No AI-generated citation may be filed or sent to a client without verification against the primary source.
4. Data handling
Identifying client information (names, addresses, account numbers, case numbers, employer names) is redacted before being entered into any AI tool. Replace identifying detail with placeholders ("Client A," "Defendant B"). Non-identifying facts and legal issues may be described in full.
5. Supervision and review
Every AI output that leaves the firm - to a client, a tribunal, a regulator, or any third party - is reviewed and approved by a licensed attorney. The reviewing attorney is responsible for accuracy, citation verification, and compliance with applicable rules of professional conduct.
6. Client disclosure
Every engagement letter includes a paragraph disclosing the firm's use of AI tools for drafting and analysis, including that all outputs are reviewed by a licensed attorney before delivery.
7. Billing
AI-assisted work is billed for the human time actually spent, including prompt engineering, output review, and revision. The firm does not bill clients for AI processing time as if it were attorney time.
8. Incidents
Any suspected confidentiality breach, hallucinated citation that escaped review, or other AI-related incident is reported to the firm owner within 24 hours. The firm owner determines whether client notification or bar reporting is required.
Adopted: [Date]
[Attorney Name], [Title], [Firm Name]
If you copy this verbatim and use it, you have a written policy. That puts you ahead of most solos. If you want one customized to your practice areas, your specific tool stack, and your malpractice insurer's expectations, that's what the $2,500 audit is for.
When to call an attorney for help
You don't need outside counsel to use AI safely. You do need outside counsel when:
- You're rolling out AI across multiple staff for the first time. The policy and training matter more than the tool.
- You're being asked by a malpractice carrier, a referral source, or a client about your AI posture. A written policy and vendor matrix protect you in those conversations.
- You're signing a contract with an AI vendor and the input-use or indemnity clauses worry you. Vendor-contract review is exactly the kind of thing solos punt on and then regret.
- You're building a client-facing AI tool (intake bot, document generator). Rule 1.18, advertising rules, and unauthorized-practice questions all surface here.
- You've had an incident - a leaked prompt, a hallucinated cite that almost shipped, a paralegal who used a tool you didn't approve. The first step is to document what happened and how you responded; the second is to fix the policy gap.
Need help getting it right the first time?
The $2,500 AI Use Audit & Policy Package gives you a customized written policy, a vendor matrix, a client disclosure addendum, and a one-hour training session - typically in 14 business days, flat fee.