Field Guide · 2026

AI Citations vs Sponsored Results: A Lawyer's Field Guide for 2026

Three AI systems lawyers keep confusing — organic citations, AI-assisted discovery, and paid AI placements — live under three different ethics regimes. I pulled 88 days of my own citation data, read the rules line by line, and wrote this guide so you do not have to.

  • Citation window: Bing Webmaster Tools export, Jan 19 – Apr 16, 2026 (88 days)
  • AI citations logged: distinct AI citations to terms.law pages
  • Unique pages cited: distinct URLs surfaced by AI systems
  • Sources cited below: primary rules, regulations, platform policies
Section 2 · Thesis

Three AI systems, three ethics regimes

When a lawyer asks "am I allowed to appear in ChatGPT?" the honest answer is: which surface? The consumer AI landscape now contains at least three distinct surfaces, each governed by a different combination of ABA Model Rules, California Chapter 7 rules, FTC guidance, and platform policy. Conflating them is how firms get into trouble.

SYSTEM 1

Organic AI citation

An AI assistant surfaces your page because its retrieval layer judged the page authoritative for the query. No payment changes hands. You did not bid. Your page gets a short quote and a source link. Governed primarily by ABA 7.1 applied to the cited landing page, Rule 5.3 supervisory duties, and general Rule 8.4(c) honesty.

Generally permitted
SYSTEM 2

AI-assisted discovery

Google AI Overviews and Bing/Copilot chat-in-results are hybrid surfaces: they blend crawled organic content with paid placements above or adjacent to the answer. What reads as "just an overview" can sit one tap above a sponsored ad. Governed by the FTC's native-advertising guidance, ABA Model Rule 7.2, and state disclosure regimes.

Disclosure-intensive
SYSTEM 3

Sponsored AI placement

A law firm pays (directly or through a lead generator) to appear inside an AI response. As of April 2026, OpenAI bans advertising for "legal services" across ChatGPT, so this channel is presently closed on the largest AI assistant. Google AIO and Microsoft Copilot do accept legal ads, subject to verification, disclosures, and archive retention. Governed by Rule 7.3, CA Chapter 7, B&P § 6157, and platform terms.

Channel-specific ban (ChatGPT)

The rest of this field guide walks each system down to the level of the actual rule, the actual platform policy, and a disclosure workflow I use for my own pages. I built a rule-by-channel matrix, a platform comparison, a disclosure wizard, and a 7.1 risk calculator so you can check a specific fact pattern against the specific rule in one pass.

A caveat up front: this is informational content written by a licensed attorney, not individual legal advice. Your jurisdiction may follow a state-specific variation of the ABA Model Rules, and platform policies move faster than any published guide can track. Every policy citation below includes the publisher, the policy title, and the date I retrieved it.

Section 3 · Myth vs Reality

Eight things lawyers believe about AI that are wrong in 2026

These are the misconceptions I heard most from attorneys during 88 days of client intakes. Each card flips: click the front to see the reality and the rule or policy that drives it.

Myth 1

"If ChatGPT cites me, that is an ad."

Reality: An organic AI citation is not, without more, advertising. No payment changes hands and the lawyer did not direct the platform to surface the page. The cited page itself, however, is treated as a communication about legal services, so Rule 7.1 applies to what the page says, not to the retrieval layer.

Myth 2

"I can buy my way into ChatGPT answers."

Reality: As of April 2026, OpenAI's advertising policy bans ads for "legal services" across the ChatGPT consumer surface. The rollout of sponsored results on Free and Go tiers in the US, AU, NZ, and CA (Feb 9, 2026) excluded legal. So a law firm cannot presently buy into a ChatGPT answer, whether directly or through a lead-gen intermediary, in any tier.

Myth 3

"Google AI Overviews are just search results."

Reality: AI Overviews blend crawled organic content with adjacent paid placements. FTC guidance treats paid content that appears in a format indistinguishable from editorial content as native advertising, requiring clear and conspicuous disclosure. The Overview itself is not paid; the ad unit above or beside it is. They must not visually merge.

Myth 4

"An AI summary of my page is fair use. I am safe."

Reality: Fair use is irrelevant to professional-responsibility exposure. If the AI rewrites your fee structure or a case result in a way that becomes misleading when read by a lay person, you are still responsible for the underlying content under Rule 7.1 and may have a supervisory obligation under Rule 5.3 to correct or challenge the cited text.

Myth 5

"A chatbot conversation is not a prospective-client communication."

Reality: ABA Model Rule 1.18 and ABA Formal Opinion 512 (2024) treat a consumer's good-faith consultation with an AI tool controlled by a lawyer the same as a prospective-client intake. Information given during that conversation may be confidential, may disqualify the lawyer, and must be supervised.

Myth 6

"Retargeting a user after an AI chat is the same as email marketing."

Reality: Session-aware retargeting off an AI conversation can cross the Rule 7.3 solicitation line because the prompt revealed a specific legal problem. The ABA, California, and several states treat this as "real-time electronic solicitation" if the creative references the user's specific problem. An audit trail of the trigger terms is the minimum prudent practice.

Myth 7

"If I do not run ads, no advertising rule applies."

Reality: The content of your website is itself a "communication about the lawyer or the lawyer's services" under Rule 7.1. When an AI cites that page, Rule 7.1 still controls what the page says. California Bus. & Prof. Code §§ 6157–6159.2 impose record-keeping obligations independent of whether the surface was paid.

Myth 8

"Generative-AI reviews and testimonials are fine if they are based on real cases."

Reality: Rule 8.4(c) reaches AI-fabricated testimonials, AI-generated quote-cards attributed to real clients, and synthetic "before/after" narratives that omit material facts. The FTC's Endorsement Guides add a civil-liability layer on top of bar discipline exposure.

Section 4 · Platform comparison

The seven AI surfaces a lawyer can actually appear on

I mapped every AI surface currently capable of citing or monetizing legal content. The matrix below pulls live data from /shared/pillar/data/platforms.json. Click any cell to open the platform's controlling policy in a side drawer.

The table collapses to a card-per-platform layout below 640px. If you prefer a print-friendly version, the source registry at the bottom of this guide lists each platform's canonical policy URL with the date I last verified it.

Three takeaways from the matrix

  1. The ads-permitted column changes fastest. OpenAI's ban on legal-services ads is a platform policy decision, not a rule of professional conduct. When OpenAI reverses that decision, every other column will need to be re-read on the same day.
  2. Verification requirements are converging. Google, Microsoft, and the major lead-gen networks are all moving toward proof-of-bar-membership before serving legal ads. Keep a PDF of your active-status letter ready; the verification desks ask for it.
  3. Anthropic's Claude occupies a different risk layer. It refuses to advertise, but its output is increasingly embedded in third-party assistants. If a consumer-facing product uses Claude under the hood to describe a "lawyer finder," the product, not Anthropic, is the advertising party for professional-responsibility purposes.
Section 5 · The data

Eighty-eight days of AI citations to my own pages

I exported my AI-citation data from Bing Webmaster Tools for the 88-day window January 19 – April 16, 2026. Here is the raw curve, with a seven-day rolling average laid on top. If the chart does not render, the accessible table underneath carries the same data.

Data: Bing Webmaster Tools AI Performance Overview, retrieved April 19, 2026. One "citation" equals one distinct answer-box surfacing of a terms.law URL.

What the curve shows

Daily citation counts rose from a single-digit baseline in late January to a sustained double-digit band by mid-April. Growth was not linear; two clear inflection points align with (a) a structured-data refresh I pushed in mid-February and (b) OpenAI's Feb 9 sponsored-results launch, which pulled volume away from Bing chat and, counter-intuitively, increased the rate at which Bing cited smaller authority sites.

Three caveats anyone interpreting this curve should remember:

  • The denominator matters. Citation counts are informative only if you also track page-count of the underlying corpus and the query mix. An eight-fold citation increase on a site that doubled its page count is a four-fold improvement, not eight.
  • Bing's sample is not all AI. This export covers Bing-derived surfaces (Copilot, Bing chat) and, by inference, the long tail of assistants that query Bing's API. ChatGPT, Claude, and Perplexity aggregate indirectly into Bing's telemetry but are not enumerated here.
  • Correlation is not causation. I cannot prove my structured-data refresh caused the February inflection. I can only say the inflection occurred within 72 hours of deploy.
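The first caveat's arithmetic can be made explicit. A minimal sketch with hypothetical numbers (the counts below are invented for illustration, not taken from my export):

```python
# Per-page normalization of a citation increase, illustrating the
# "denominator matters" caveat. All numbers here are hypothetical.

def citations_per_page_growth(citations_before, citations_after,
                              pages_before, pages_after):
    """Return the growth multiple in citations per page."""
    rate_before = citations_before / pages_before
    rate_after = citations_after / pages_after
    return rate_after / rate_before

# An eight-fold raw citation increase on a site that doubled its
# page count is only a four-fold per-page improvement:
growth = citations_per_page_growth(
    citations_before=10, citations_after=80,   # 8x raw citations
    pages_before=100, pages_after=200,         # 2x pages
)
print(growth)  # 4.0
```

The same normalization should be applied before comparing any two sites' citation curves.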

The 7-day rolling average

Rolling averages smooth weekend dips and the typical Tuesday publishing peak. My first-week average was small; my last-week average was roughly an order of magnitude higher. The page-level breakdown in Section 9 drills into which URLs actually pulled that growth.
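The smoothing used in the chart is a plain trailing mean. A sketch, with an invented daily series standing in for the real export:

```python
# 7-day trailing rolling average over a daily citation series.
# The data values are illustrative, not the actual Bing export.

def rolling_average(daily_counts, window=7):
    """Trailing mean; the first window-1 days average whatever exists so far."""
    out = []
    for i in range(len(daily_counts)):
        span = daily_counts[max(0, i - window + 1): i + 1]
        out.append(sum(span) / len(span))
    return out

daily = [3, 5, 2, 8, 6, 4, 7, 9, 12, 10]   # hypothetical daily citations
smoothed = rolling_average(daily)
print([round(x, 2) for x in smoothed])
```

A trailing window (rather than a centered one) is the right choice for a live dashboard, because the current day's smoothed value never depends on future data.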

Accessible data table — show the 88-day detail

Table loads from the same JSON source as the chart. If you need the CSV directly, the export is linked in the source registry (entry: Bing Webmaster Tools — QueryStats export).

Section 6 · The compare view

Organic citation vs sponsored placement, side by side

Every lawyer I talked to during this 88-day window eventually asked the same question: "what actually is the difference, in practice, between the two?" The toggle below lays both systems out on the same axes. Flip between them to see how incentive, disclosure, ethics, and data retention differ.

Organic AI citation

  • Incentive: Assistant judges your page authoritative for a query. No payment, no bid.
  • Disclosure: The assistant attributes with a link; no "Sponsored" label because nothing is sponsored.
  • Primary ethics rule: Rule 7.1 applied to the cited page; Rule 5.3 supervision of third-party platforms.
  • Risk of inadvertent solicitation: Low, assuming the page speaks to a general audience.
  • Data retention you control: Your server logs plus the platform's public query stats (Bing, Google Search Console). Nothing paid.
  • What triggers discipline: The cited page misstating results, pricing, or specialization. The retrieval event itself is not the hazard.

The two systems also differ in how fast you can stop them. A mispriced ad can be paused in seconds. A misleading page cited by an AI assistant stays cached in the assistant's retrieval layer for days. When I push a correction to a page that I know Bing has cited, I expect a 48–72 hour lag before the assistant updates.

Section 7 · The rule-by-channel matrix

Which ethics rule governs which channel

This is the single most-bookmarked artifact on the page. Nine rules down the side, four AI channels across the top, one verdict per cell (green / amber / red) with a click-to-open source drawer. I maintain this matrix monthly.

How to read the matrix

  • Green (ok). The rule either does not apply to that channel or is satisfied by default. Most green cells still have a click-through note because "default compliance" is fact-specific.
  • Amber (warn). The rule applies and creates a non-trivial compliance task. These are the cells most lawyers miss.
  • Red (bad). Either the channel is closed to legal services (e.g., OpenAI's advertising ban) or the default use of the channel would violate the rule.

Three rule-by-rule notes

Rule 7.1 on every cell that involves a cited page. The retrieval layer is not the "communication"; the cited page is. If Bing surfaces your page in response to a query for "best solo immigration lawyer in San Diego," the page's own content must not make that claim unless you can substantiate it.

Rule 7.3 on every "sponsored" column. Session-aware targeting is how sponsored AI placements personalize. The personalization hook is the exact mechanism that pushes a neutral ad into the "live person-to-person solicitation" zone. Keep the targeting taxonomy general (practice area, city) and never tie to a specific prompt phrase.

Rule 5.3 everywhere. The lawyer is responsible for the conduct of non-lawyer agents. Your marketing firm, your SEO vendor, your lead-gen intermediary, and the AI platform itself are all non-lawyer agents for this purpose. Quarterly audits of what is actually being published on your behalf are the minimum prudent practice.

Section 8 · Under the hood

How an AI citation actually gets produced

Most lawyers I advise think of AI citations as a black-box lottery. They are not. Modern AI assistants follow a reasonably deterministic pipeline, and understanding each stage tells you where your page has to win to be cited. I walk through the pipeline below, then I show the four page attributes that actually move the needle.

The retrieval-augmented-generation pipeline, stage by stage

Stage 1 — Query classification

The assistant first decides what kind of question it received. Informational queries ("what is a demand letter") route differently from transactional queries ("how do I hire a lawyer near me") and from "YMYL" (your-money-or-your-life) queries, which legal almost always is. See Google's search quality guidelines for the YMYL definition.

Practical consequence for lawyers: high-YMYL queries trigger stricter authority heuristics. A blog post by an anonymous author will not win a YMYL query in 2026, no matter how good the content.

Stage 2 — Retrieval

The assistant retrieves candidate documents. The retriever pulls from (a) the model's own training data, (b) a real-time search index (Bing for ChatGPT and Copilot; Google for AIO; proprietary crawl for Perplexity), and (c) for some products, a curated authority layer. Retrieval is usually a hybrid of keyword and dense-vector matching.

This is the stage where your structured data earns its keep. A page with LegalService markup, a verified author with hasCredential, and a FAQPage block is substantially easier to match than a plain-HTML article with the same text.
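The markup the paragraph describes looks like the following. This is a minimal illustrative sketch; the names, URL, and credential details are placeholders, not my actual markup:

```json
{
  "@context": "https://schema.org",
  "@type": "LegalService",
  "name": "Example Law Office",
  "url": "https://example.com",
  "provider": {
    "@type": "Person",
    "name": "Jane Attorney",
    "hasCredential": {
      "@type": "EducationalOccupationalCredential",
      "credentialCategory": "Bar admission",
      "recognizedBy": {
        "@type": "Organization",
        "name": "State Bar of California"
      }
    }
  }
}
```

The block goes in a `<script type="application/ld+json">` tag in the page head, alongside any FAQPage markup the page carries.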

Stage 3 — Reranking and source selection

The retriever hands back, say, twenty candidates. A reranker scores them for authority, freshness, alignment with the query, and risk (is the source known to hallucinate; is it on a blocklist). Legal content is reranked particularly aggressively for author credentials. An article by a bar-verified attorney consistently outranks the same article published anonymously.

If the reranker cannot find a source above its confidence threshold, many assistants will refuse to answer the legal question or respond with a disclaimer. In my testing, ChatGPT now refuses roughly one in five legal queries of this kind, which is why the organic-citation opportunity is concentrated in a smaller pool of authority sites than it was in 2024.
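To make the stage concrete, here is a toy reranker over the features named above. The weights, threshold, and candidates are invented for illustration; production rerankers are learned models, not hand-written rules:

```python
# Toy reranker: scores candidates on authority, freshness, query
# alignment, and risk, then refuses if nothing clears a threshold.
# All weights and the threshold are hypothetical.

def rerank(candidates, threshold=0.6):
    scored = []
    for c in candidates:
        score = (0.4 * c["authority"]          # e.g. bar-verified author
                 + 0.2 * c["freshness"]
                 + 0.3 * c["query_alignment"]
                 - 0.1 * c["risk"])            # blocklist / hallucination history
        scored.append((score, c["url"]))
    scored.sort(reverse=True)
    best_score, best_url = scored[0]
    if best_score < threshold:
        return None   # assistant refuses or answers with a disclaimer
    return best_url

candidates = [
    {"url": "https://example.com/verified-attorney-guide",
     "authority": 0.9, "freshness": 0.8, "query_alignment": 0.9, "risk": 0.0},
    {"url": "https://example.com/anonymous-blog-post",
     "authority": 0.2, "freshness": 0.9, "query_alignment": 0.9, "risk": 0.3},
]
print(rerank(candidates))
```

Note what the sketch captures: two pages with identical query alignment diverge on authority alone, and an empty-handed reranker returns a refusal rather than a weak source.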

Stage 4 — Generation with grounding

The model composes an answer while constrained to cite from the selected sources. "Grounding" is the term of art for this constraint. When grounding is strict, the model should not introduce facts not found in the sources. When grounding is loose, the model blends source content with its parametric memory — which is how hallucinations enter legal answers.

Your page's signal here is quotability. A page with well-structured definitions, enumerated lists, and clean sentence boundaries is easier for the generator to quote accurately than a wall of prose. See Section 13 on writing for AI citation.

Stage 5 — Attribution and post-filtering

The assistant attaches inline citations to spans of text in the generated answer. Post-filters run for safety (PII, medical, legal), for advertising compliance on platforms that sell ads, and for reading-level targeting. A page that passes all prior stages can still be dropped at post-filter if, for example, its domain is on an ads-policy blocklist for the current query.

The four page attributes that actually drive AI citation

After eighty-eight days of watching my own pages get cited or not cited, the signal weights I observe — in descending order of importance — are:

  1. Author credential markup. Pages with a verified Person JSON-LD block containing hasCredential for my bar license get cited at roughly three times the rate of pages without it, holding content constant.
  2. Answer-shaped content. An H2 phrased as a question followed by a one-paragraph direct answer is cited more often than an H2 phrased as a topic followed by a lede. The assistant's generator prefers sources that are already shaped like answers.
  3. Freshness with a real reason. A dateModified that reflects a substantive edit — not a trivial update — outranks older pages. Adding "2026" to a headline without editing the body is visible to the reranker and is penalized.
  4. Cross-link topology. Pages that are linked by other authority pages on the same site win. Isolated orphan pages lose. The twenty pages on terms.law that get cited most are the twenty pages that are each linked by three or more other pages on the site.
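Attribute 4 is easy to check mechanically from a sitemap crawl. A sketch over a hypothetical miniature site graph (the pages below are invented, not terms.law's real topology):

```python
# Find orphan pages from an internal-link map. The graph maps each
# page to the pages it links out to; inbound counts are derived.

def inbound_counts(link_graph):
    counts = {page: 0 for page in link_graph}
    for src, targets in link_graph.items():
        for t in targets:
            if t in counts and t != src:   # ignore self-links and external URLs
                counts[t] += 1
    return counts

site = {
    "/guide-a": ["/guide-b", "/calculator"],
    "/guide-b": ["/guide-a", "/calculator"],
    "/calculator": ["/guide-a"],
    "/orphan-post": [],                    # nothing links here
}
counts = inbound_counts(site)
orphans = [p for p, n in counts.items() if n == 0]
print(counts, orphans)
```

Run this against your own link map and fix every orphan before worrying about anything else on the list.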

You will notice none of these four attributes are "keyword density" or "meta keywords" or any of the 2014-era SEO primitives. The citation layer is reading a different feature set than traditional search ever did.

Section 9 · The page-level breakdown

What actually gets cited: 20 pages, 88 days

I ranked every URL on terms.law by AI-citation count during the 88-day window. The chart below is the top 20. Three patterns emerged.

Bar chart. The top page (a legal-deadline calculator) received more citations than the next three combined.

Pattern 1 — calculators dominate

Five of the top twenty pages are interactive calculators. Why? Because they produce an unambiguous, quotable output for a specific user question ("how much security deposit can my landlord withhold"). The assistant cites the calculator because the calculator gives it a citable number, not just a discussion. If you have the subject-matter expertise to build one for your practice area, calculators are the single highest-leverage AI-citation asset.

Pattern 2 — long-form guides with tax or procedure specifics

Six of the top twenty are long-form guides that combine statutory citations, worked examples, and deadlines. Section 1256 contracts, CTR filing, meet-and-confer rules — these pages win because they cite the statute verbatim, then translate it. The assistant can re-quote the page's quote of the statute and be confident it is grounded.

Pattern 3 — a 0% CTR outlier that AI loves

The seventh-ranked page in my citation data is a privacy-policy template with a 0% human click-through rate from the AI answer itself. AI assistants love it because it is clearly labeled, cleanly formatted, and authoritative, but the AI summary answers the user's question so completely that the user never clicks. This is the clearest example I have of "AI impression without AI click" — and it is the biggest under-appreciated risk in the current AI-citation economy.

The fix for that page is neither SEO nor new content; it is a citation-worthy hook that the AI summary cannot satisfy without a click. I rebuilt the page to put an interactive wizard at the top, which the AI cannot reproduce in its answer. Every page on your site that an AI summary can fully satisfy should be augmented with something the AI cannot satisfy — a calculator, a wizard, a personalized draft. That is how zero-click citations become clicks.

Section 10 · Interactive wizard

Do you need an ad-disclosure on your AI surface?

The single most frequent question I get is: "Sergei, do I need to add a 'Sponsored' label to this?" It depends on the channel, the payment structure, and whether the creative is session-aware. The wizard below walks the decision tree step by step. At the end, it prints a verdict, the controlling rule, and an action list you can copy.

The wizard does not store your answers. Nothing leaves the browser. If you want me to review the specific ad creative and targeting setup, the fee is $349 flat for a written risk memo; see the service panel further down the page.

Section 11 · Risk calculator

Rule 7.1 risk score: how misleading is the cited language?

Rule 7.1 prohibits communications that are false or misleading about a lawyer or the lawyer's services. "Misleading" is not a binary; it is a weighted judgment about specific attributes of the communication. I distilled the most cited cases and disciplinary opinions into a weighted scoring form. Answer the questions and the tool prints a risk band.

Score the cited page or ad creative

The score is not a legal opinion. It is a rubric I use for my own pages before I publish them. A high score in this rubric is correlated with, but not identical to, actual Rule 7.1 exposure. For any specific fact pattern, email me.

Section 12 · The rules, in detail

The ethics deep-dive: every rule that actually governs AI citations and ads

This is the part I know you will return to. Each rule below gets its own sub-section: the text of the rule, the underlying policy, how it applies to AI, and a concrete practice pointer. Where California diverges from the ABA Model Rules, I flag the California rule and the California Bus. & Prof. Code section that imposes the additional obligation.

ABA Model Rule 7.1 — False or misleading communications

A lawyer shall not make a false or misleading communication about the lawyer's services. Omissions count too. When a retrieval layer cites a page, the page is the communication. A stale fee schedule, a "specialist" claim without certification, a results claim without the caveat — all are 7.1 exposure whether the reader is human or an AI summary. Ad creative and the landing page it drives to are both "communications," and a headline that is literally true but becomes misleading when paired with an inconsistent landing page violates 7.1.

Practice pointer

Run a rolling 7.1 audit on every landing page: results claims, specialization claims, testimonial use, pricing language, and responsible-attorney disclosure. The risk calculator in Section 11 is the version I use.

ABA Model Rule 7.2 — Advertising

The 2018 text reduces to three substantive obligations: (a) any medium is permitted subject to Rules 7.1 and 7.3; (b) no "thing of value" may be given to a person for recommending your services, with narrow exceptions; (c) every communication must identify the responsible lawyer. Subsection (b) is why pay-per-lead intermediaries are scrutinized; an AI assistant that recommends a specific firm for a fee crosses every line at once. Subsection (c) is the most common miss in AI ads: if you run them, the responsible attorney's name must be on the creative.

ABA Model Rule 7.3 — Solicitation

Rule 7.3 prohibits solicitation by "live person-to-person contact" when a significant motive is pecuniary gain. The 2018 revision defined that phrase to include "real-time electronic" communications substantially similar to live contact. This is the session-aware retargeting question: a generic practice-area creative that happens to appear after an AI chat is not solicitation; a creative that names the specific problem the user disclosed in the session is. Keep ad targeting at practice area and geography, and keep an audit trail of targeting parameters.

ABA Model Rule 1.18 — Prospective client

ABA Formal Opinion 512 (2024) is direct: a consumer's good-faith consultation with an AI chatbot presented as a service of the lawyer is a prospective-client consultation. The lawyer assumes 1.18 duties when the chatbot receives the consumer's information, even if no human ever reads it. Every chatbot needs (1) a non-engagement notice, (2) a retention policy matching intake-form handling, (3) a conflict-check hook before substantive content is solicited.

Rule 5.3 · Rule 8.4(c) · CA Chapter 7 · B&P §§ 6157–6159.2

5.3 makes the AI platform a non-lawyer assistant under your supervisory duty — you are responsible for its targeting, moderation, and disclosure labels. 8.4(c) is the catch-all for fabricated testimonials, synthetic case images, and AI-generated "quote-cards"; discipline under 8.4(c) moves faster than under 7.1. California Chapter 7 tightens three points: presumed-misleading "specialist" claims (7.1), "real-time visual or auditory" solicitation (7.3), and firm-name rules for chatbots that imply bar membership (7.5). B&P §§ 6157–6159.2 impose a two-year retention rule on every advertisement plus a private right of action; I export a monthly ZIP of creative + landing-page HTML + targeting parameters as my § 6159.1 defense.

FTC and AI-disclosure statutes in one paragraph

The FTC Native-Ads Policy Statement and the 2023 Endorsement Guides require "clear and conspicuous" disclosure in proximity, size, and weight comparable to surrounding text. California AB 2013 (training-data transparency) and SB 942 (AI-content disclosures) give you contractual leverage against vendors. The EU AI Act Article 50 reaches U.S. providers that serve the EU market; if your chatbot is encountered by EU consumers, Article 50 disclosure applies.

For state-specific variations, the source registry links each state bar's Chapter 7 equivalent. If you need a written opinion on a specific fact pattern, email me — I take those on a flat fee.

Section 13 · The playbook

Writing for AI citation: a concrete checklist

After the analytical sections, the practical one. Below is the checklist I run every page through before publishing. It is the same checklist I would give a colleague who asked "what do I do on Monday morning?"

Pre-publication checklist

  1. One H1 stating the exact question the user asked. Questions phrased as questions are retrieved more reliably than topic-phrased headings.
  2. A direct answer in the first 200 words. If the user's question were "is X legal in California," the first two paragraphs should contain a specific, falsifiable answer.
  3. Statutory citations in line, not only in footnotes. "California Bus. & Prof. Code § 6157.1" embedded in the sentence that makes the claim, not hidden in a citation list at the end.
  4. A table, enumerated list, or worked example. AI assistants prefer structured content they can quote without paraphrasing. If you can express the point as a table or an ordered list, do.
  5. Author bio with credential markup. Your name, your bar number, your jurisdiction. The Person JSON-LD block with hasCredential is not optional in 2026.
  6. Date-modified timestamp with a real change. If the page is not substantively different from last month, do not update the timestamp. Gaming the freshness signal is visible to the reranker and backfires.
  7. Internal links to at least three related authority pages on the same site. Cross-link topology is what distinguishes an authority page from an orphan.
  8. Rule 7.1 review before publish. Run the risk calculator in Section 11 or an equivalent; any page that scores into the amber band should be rewritten before publish.
  9. Schema blocks: Article, Person, Organization, BreadcrumbList, FAQPage if the page has a FAQ. The FAQPage text must match the DOM text verbatim or Google's validator rejects it.
  10. An accessible table or text version of every chart and interactive. A page that cannot be read by a screen reader does not win YMYL queries.
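Item 9's verbatim-match requirement is easy to check mechanically before publishing. A sketch that compares FAQPage JSON-LD answers against the page text; the tag-stripping here is deliberately naive, and a real check should parse the DOM properly:

```python
# Naive check that each FAQPage JSON-LD answer appears verbatim in the
# page body. Strips tags with a regex and substring-matches; a real
# validator should use an HTML parser.

import json
import re

def faq_matches_dom(html: str, faq_jsonld: str) -> list:
    """Return the question names whose answers do NOT appear in the page text."""
    page_text = re.sub(r"<[^>]+>", " ", html)
    page_text = " ".join(page_text.split())          # collapse whitespace
    data = json.loads(faq_jsonld)
    missing = []
    for item in data.get("mainEntity", []):
        answer = " ".join(item["acceptedAnswer"]["text"].split())
        if answer not in page_text:
            missing.append(item["name"])
    return missing

html = "<h2>Is a demand letter required?</h2><p>No, but it is usually prudent.</p>"
faq = json.dumps({
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "Is a demand letter required?",
        "acceptedAnswer": {"@type": "Answer",
                           "text": "No, but it is usually prudent."}
    }]
})
print(faq_matches_dom(html, faq))  # []
```

An empty list means the markup matches; any names returned are FAQ entries that will fail validation after the next copy edit.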

The two most common mistakes I see

Over-optimization of the title. A title that reads as a keyword list — "California Demand Letter Services | Business Law Attorney Demand Letters California" — is both a Rule 7.1 risk (overclaiming specialization) and an AI-citation penalty (the reranker demotes keyword-spam titles). Write the title the way you would write the first sentence of a client email.

Under-investment in structured data. I see lawyers spend weeks on copy and zero hours on structured data markup. The markup is a one-time investment that pays dividends on every future AI retrieval. If your site does not have at least Article, Person, Organization, and LegalService markup on the service pages, that is the highest-leverage Monday-morning task.

A diff-style example

Here is a before/after for an opening paragraph I rewrote in February. The "after" version was cited by a Bing Copilot answer within a week of republication.

Before: "Our firm handles a wide range of business disputes and can help you explore your options. Contact us to learn more about our services."
After: "California's statute of limitations for breach of a written contract is four years under Code of Civil Procedure § 337. A demand letter must be sent before the statute runs. I (Sergei Tokmakov, CA Bar #279869) draft demand letters on a flat $575 fee for a single certified letter. For multi-party matters or litigation-draft bundles, I quote separately."

The "after" version is more specific, more quotable, names the statute, names the price, names the attorney. Every one of those attributes moved the AI-citation signal.

Section 14 · Platform deep-dives

Platform-by-platform: what each assistant actually allows

Section 4 was the one-screen summary. This section is the click-through. Each accordion below is a platform, with the policy excerpts that matter for legal advertising. Current as of retrieval dates listed in the source registry.

ChatGPT (OpenAI) — ads policy as of April 2026

OpenAI launched sponsored results on the Free and Go tiers in the US, AU, NZ, and CA on February 9, 2026. The launch excluded the "legal services" category per OpenAI's advertising policy. That policy also excludes other "restricted" categories: financial services (with exceptions), healthcare (with exceptions), gambling, political advertising, and categories deemed high-risk for consumer harm.

For organic citation, OpenAI's retrieval layer uses Bing as its primary real-time index. A page that ranks well on Bing for a legal query will, with high correlation, be retrievable by ChatGPT.

Practice pointer: Until OpenAI reverses the legal-services exclusion, no legitimate paid placement inside ChatGPT is possible. If a vendor tells you they can place your firm "inside ChatGPT's answers," they are either mistaken, describing a lead-gen intermediary, or describing a policy violation.

Google AI Overviews — policy, verification, retention

Google AI Overviews are subject to Google's general advertising policy for legal services, which requires certification for lawyer advertisers in specific categories. Certification currently applies to bail-bond services, addiction services, and a handful of additional categories; general legal advertising is not certification-gated in most jurisdictions but is subject to ordinary local-law compliance.

Retention: Google's Ad Transparency Center retains ad creative for a minimum period that, combined with California B&P § 6159.1's two-year requirement, gives lawyers a usable archive — but do not rely on the platform's retention as the only copy. Export your own archive monthly.

Microsoft Copilot — session-aware ads

Microsoft's Copilot advertising policy permits legal advertising subject to bar verification and the general Microsoft Advertising editorial standards. Copilot's ad surface is unusual because it can be session-aware: the conversation context can influence which ads are triggered. This is the single highest-risk feature for Rule 7.3 exposure.

Practice pointer: On Copilot specifically, request that your advertiser account be placed in "broad-match only, no session-signal targeting" mode. Document the request.

Anthropic Claude — no ads, but embedded in products

Anthropic does not operate an advertising product. Claude cannot be sponsored. However, Claude is widely embedded in third-party products. If a lawyer-finder product uses Claude to describe a firm, the product is the advertising party. Anthropic's usage policies prohibit using Claude to generate deceptive content, which applies to any embedding party using Claude to produce lawyer advertising.

Perplexity — an emerging citation surface

Perplexity is primarily a citation-first answer engine. It launched a sponsored-results product in late 2025 with prominent disclosure labels; legal services are permitted with verification. Perplexity also attributes more generously than most of its competitors, typically citing two or three sources inline, which makes it a high-leverage surface for authority sites.

Bing Copilot for Search — where most of the data in this guide comes from

Bing's AI surface is where I extracted the 88-day dataset used throughout this guide. Microsoft's advertiser verification for legal services follows the Copilot ads policy; the organic-citation telemetry is available through Bing Webmaster Tools. If you want a reality check on your own site's AI-citation performance, Bing Webmaster Tools is currently the only free public source of that telemetry.
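Once you have an export in hand, the basic tally is a few lines of work. A minimal sketch, assuming a CSV with `Page` and `Citations` columns; those column names are my assumption for illustration, so match them to the headers in your actual export.

```python
import csv
import io
from collections import Counter

def citation_summary(csv_text: str) -> dict:
    """Tally distinct cited URLs and total citation events from an
    exported citation report (column names assumed, not guaranteed)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    per_page = Counter()
    for row in reader:
        per_page[row["Page"]] += int(row["Citations"])
    return {
        "unique_pages": len(per_page),
        "total_citations": sum(per_page.values()),
        "top_pages": per_page.most_common(3),  # your highest-leverage URLs
    }
```

Run it monthly against the same export and you have a trend line instead of an anecdote.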

ChatGPT Product Discovery — the new retail-style surface

In early 2026 OpenAI launched a product-discovery surface that recommends specific products for specific user intents. This surface is retail-first and explicitly excludes legal services under the advertising policy. If the consumer surface expands to include professional services in the future, the disclosure wizard above covers the decision path lawyers will need to walk at that time.

Section 15 · Definitions

Glossary: 30 terms of art, defined

Every term of art used in this guide is defined below, with the source where applicable. Hover or tap any bold-italic term elsewhere on this page to see the same definition inline.

Section 16 · FAQ

Frequently asked questions

Ten questions I have answered most often during the 88-day window this guide covers. These also populate the FAQPage JSON-LD block for this page.
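For readers maintaining their own FAQ page, a minimal sketch of generating that JSON-LD block from question/answer pairs. The schema.org `FAQPage` / `Question` / `Answer` types are standard; the helper function name is mine.

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Serialize question/answer pairs into a schema.org FAQPage block,
    ready to embed in a <script type="application/ld+json"> tag."""
    block = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }
    return json.dumps(block, indent=2)
```

Keep the on-page answers and the JSON-LD answers identical; divergence between the two is exactly the kind of inconsistency Rule 7.1 audits should catch.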

Is an AI citation advertising?

Not on its own. The retrieval event is not advertising. The page the AI quotes, however, is a "communication about the lawyer's services" under Rule 7.1 and must comply with that rule.

Can I pay to appear in ChatGPT answers as a lawyer?

Not as of April 2026. OpenAI's advertising policy excludes "legal services" from sponsored ChatGPT results. Any vendor offering paid placement inside ChatGPT for a law firm is either mistaken or describing a lead-gen intermediary that does not actually place the firm inside the AI answer.

Does Rule 7.3 apply to a retargeted ad served after an AI chat?

If the creative references the specific problem the user disclosed in the chat, most states treat it as real-time electronic solicitation. Targeting kept at the practice-area and geography level, with no prompt-phrase matching, generally does not cross into session-aware solicitation.

Is a chatbot on my law firm's site a "prospective client" contact?

Per ABA Formal Opinion 512, yes, if it is presented as a service of the firm. Treat the conversation transcript the way you treat a prospective-client intake form: conflict-check, confidential retention, clear disclosure at the start of the conversation.

Can I use AI-generated images of happy clients?

No. This is Rule 8.4(c) territory. AI-generated imagery depicting people who appear to be real clients, when they are not, is fabrication. The FTC Endorsement Guides add civil-liability exposure on top of bar discipline.

What is the California two-year retention obligation, concretely?

Bus. & Prof. Code § 6159.1 requires that a true and correct copy of any advertisement be retained for two years, together with a record of when, where, and to whom it was addressed. For AI advertising this includes the creative, the landing-page HTML, and the platform's targeting parameters.

Is "specialist" or "expert" on my page automatically a problem?

In California, yes, unless you are certified by a State-Bar-accredited body. Rule 7.1's California comment treats unqualified specialization claims as presumptively misleading. "Focuses on" is safer; "certified by the State Bar" with a citation is safest.

What is the best single investment for AI-citation performance?

Structured-data markup. Specifically Article, Person with hasCredential for bar licensure, Organization, BreadcrumbList, and FAQPage where applicable. A one-time investment that pays on every future retrieval.
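As an illustration of the `Person` plus `hasCredential` pattern for bar licensure, here is a minimal sketch. All the values are placeholders (a hypothetical attorney and firm), and the types come from schema.org's `EducationalOccupationalCredential` vocabulary; substitute your own name, firm, and licensing body.

```python
import json

# Illustrative values only; replace with your own attorney, firm, and bar.
attorney = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Attorney",
    "worksFor": {"@type": "Organization", "name": "Example Law Firm"},
    "hasCredential": {
        "@type": "EducationalOccupationalCredential",
        "credentialCategory": "Bar admission",
        "recognizedBy": {
            "@type": "Organization",
            "name": "State Bar of California",
        },
    },
}

# The string to embed in a <script type="application/ld+json"> tag.
snippet = json.dumps(attorney, indent=2)
```

The `recognizedBy` organization is what lets a retrieval layer tie the licensure claim to a verifiable body rather than to the page's own say-so.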

If I do not advertise at all, does any of this matter?

Yes. Website content is a communication about your services under Rule 7.1 whether or not you buy ads. When an AI cites it, Rule 7.1 still applies. Supervisory duty under Rule 5.3 still applies to the platform. Record-keeping under California B&P § 6159.1 applies to "advertising" broadly defined.

Single most important takeaway from this guide?

AI citations and AI ads are different systems under different rules. Build the page the AI will cite to a Rule 7.1 standard and audit it quarterly. If you decide to run AI ads, do not let session-aware targeting cross into real-time solicitation. Supervise everything.

Section 17 · Primary sources

Source registry: every rule, policy, and statute cited above

Every source I cited on this page appears below with its publisher, title, publication date, and the date I personally verified the URL. The registry is data-driven from /shared/pillar/data/sources.json; when I add or update sources, this section updates automatically.

A short donut breakdown

The source registry above has roughly one hundred entries. An approximate breakdown by category:

  • ~35% primary rules of professional conduct (ABA Model Rules, California Rules of Professional Conduct, and sampled state variations)
  • ~25% platform policies (OpenAI, Google, Microsoft, Anthropic, Perplexity)
  • ~15% federal regulatory (FTC native-ad guidance, FTC Endorsement Guides, FCC advisory opinions)
  • ~15% state statutory (California B&P, analogous statutes in NY, TX, FL, IL)
  • ~10% scholarly and ethics-opinion commentary (ABA Formal Opinions, State Bar ethics opinions, law-review articles)

How to cite this page. Preferred form: Sergei Tokmakov, AI Citations vs Sponsored Results: A Lawyer's Field Guide for 2026, Terms.Law (Apr. 18, 2026), https://terms.law/ai-citations-sponsored-results-lawyers/. I update the page when source policies change; the "Last verified" column in the registry shows retrieval dates.

Sergei Tokmakov · California Bar #279869 (licensed 2011). I am the sole attorney behind Terms.Law. My practice focuses on technology, AI governance, consumer contracts, and demand letters. I write these guides myself, read every rule I cite, and keep a public changelog when platform policy changes. You work with me directly — not an intake desk, not an AI summary of an intake desk. Contact: owner@terms.law.