⚖️ AI Impersonation of Professionals: Cease and Desist
Imagine waking up to find an app store listing for “Ask Sergei AI – Your Startup Lawyer” or a Telegram bot called “Dr. Smith Bot – Instant Diagnosis”:
- It uses your full name, maybe your photo.
- It gives legal or medical advice you’d never stand behind.
- Users think you built it, or at least approved it.
This isn’t a thought experiment anymore. Creative “founders” and indie devs are already spinning up “AI therapists,” “AI attorneys,” and “AI coaches” around real people’s names and brands.
Legally, that’s a cocktail of:
- Right of publicity problems (name/likeness misappropriation),
- Trademark and false endorsement,
- Unauthorized practice of law/medicine, and
- Deceptive marketing / consumer protection issues, increasingly on the FTC’s radar. (Quinn Emanuel)
This piece is a practical playbook for professionals and firms whose names are being used to front “AI doctor/lawyer/coach” bots—especially when you want a focused cease-and-desist strategy rather than a law-review article.
🧰 Legal toolbox for AI impersonation bots
Here’s the high-level map of theories that matter most when someone ships “Ask [Your Name] AI” without consent.
🧱 Table – Core claims for “AI [Professional] using your name/brand”
| Claim / theory | Typical targets | What it covers | Why it fits “Ask Sergei AI” / “Dr. Smith Bot” |
|---|---|---|---|
| Right of publicity / misappropriation of likeness | Bot developer, platform monetizing bot | Unconsented commercial use of your name, image, likeness, voice, persona | Using your name/photo to drive signups, subscriptions, or ad revenue. (Quinn Emanuel) |
| Trademark infringement & false endorsement (Lanham Act §43(a)) | Bot developer, app publisher, possibly platform | Confusing use of your name or brand as a mark; suggesting sponsorship/affiliation | “Ask Sergei AI – your startup lawyer” looks like your official product; users reasonably think you endorse it. (Thompson Hine LLP) |
| Unauthorized practice of law/medicine (UPL/UOP) | Developer, operator holding themselves out as licensed professional | AI tool holding itself out as a lawyer/doctor, giving individualized advice without a license | Many UPL/medical practice statutes cover non-human tools; state bars and boards already target AI “legal” and “therapy” bots. (Justia) |
| Deceptive trade practices / consumer protection | Developer, marketing entities | Misleading consumers about who built the bot, its qualifications, and oversight | Impersonating a licensed pro and implying one-on-one professional advice is classic unfair/deceptive practice. (Romano Law) |
| FTC impersonation & endorsement rules | Developer, company selling/hosting bot | Rules against impersonating businesses/individuals and against misleading endorsements | FTC’s impersonation rule + proposed expansion to individuals, and endorsement rules around false implied endorsements, all fit this fact pattern. (Federal Trade Commission) |
You don’t need all of these in every letter—but it’s useful to know what’s in the quiver.
🧍 Right of publicity: your name and persona aren’t “open-source”
Most U.S. states recognize some form of right of publicity—a tort protecting against unconsented commercial use of your name, image, likeness, voice, or other indicia of identity. (Quinn Emanuel)
The typical elements (they vary by state, but broadly):
- Use of your identity (name, photo, likeness, persona);
- For commercial advantage (marketing, selling access to the bot, capturing data/traffic);
- Without consent;
- Resulting injury (economic, reputational, or dignitary).
Using your name as the brand of an AI advice bot—especially with your photo, firm logo, or biographical details—is essentially textbook publicity misappropriation.
Deepfake scholarship and recent litigation over AI voice clones emphasize the same point: right-of-publicity law is one of the main shields against AI personas that trade on your identity. (Franklin Pierce Law School)
In a cease-and-desist, this gives you a clean, professional-person-centric theory:
“You are commercially exploiting my name and likeness in connection with an AI assistant without my consent.”
That’s hard for the other side to spin as “just a technical experiment.”
® Trademark & false endorsement: when the bot looks “official”
If your name functions as a mark in your jurisdiction (which is common for professionals with established reputations) or you have a registered mark (firm name, logo, clinic name), impersonation opens up classic trademark and false endorsement claims.
Key points:
- Under the Lanham Act § 43(a), you can act against uses of your name/mark that are likely to cause consumers to believe the bot is sponsored by, affiliated with, or approved by you, even if they don’t use your exact logo. (Thompson Hine LLP)
- For an “Ask [Name] AI” legal or medical bot, confusion is not hypothetical—confused users are the point of the brand choice.
You can usually argue:
- Your name/brand has goodwill in the relevant market (clients, patients, followers).
- The bot’s name and marketing deliberately trade on that goodwill.
- Users reasonably believe the bot is a product of your practice, firm, or clinic.
- Any bad advice, bias, or hallucinations will be attributed to you, damaging your brand.
That supports claims for:
- Trademark infringement (likelihood of confusion),
- False endorsement, and
- False advertising if they also make express claims like “trained on Sergei’s proprietary legal playbook.”
🧑‍⚕️ UPL and UOP: AI “doctor/lawyer” bots practicing without a license
State regulators are already looking at AI tools that sit where licensed professionals are supposed to sit.
- On the legal side, state bars have warned that AI programs giving legal advice without a lawyer’s supervision may constitute unauthorized practice of law, and have already threatened products like DoNotPay over UPL concerns. (DarrowEverett LLP)
- On the mental-health side, a coalition of state Attorneys General and licensing boards filed a complaint arguing that AI “therapy” chatbots on Character.AI and Meta AI Studio are practicing medicine/psychology without a license and impersonating real clinicians. (Transparency Coalition)
- States like Illinois, Nevada, and Utah are already enacting or considering laws restricting or banning AI-based therapy services, especially where bots are advertised as substitutes for licensed therapists. (The Washington Post)
If someone deploys “Dr. Smith Bot” or “Sergei AI, Your Lawyer” and lets it:
- answer individualized questions,
- make implied diagnoses or legal conclusions,
- use license numbers or credentials,
you have dual leverage:
- UPL/UOP as a substantive violation in your demand; and
- a credible threat of referral to the relevant bar/medical board, which tends to get attention in a way “we might sue” sometimes doesn’t.
In a C&D, you can concisely lay out:
- the relevant licensing regime,
- that the bot is holding itself out as a professional, using your credentials, and
- that you view it as unauthorized practice that you will report if not immediately discontinued.
🛡️ Consumer protection and FTC impersonation rules
Beyond private civil claims, the consumer-protection context is getting sharper around impersonation and AI.
FTC impersonation rule and proposed extension
The FTC’s Impersonation Rule, effective April 2024, gives the agency stronger tools to act against impersonation of government agencies and businesses, including civil penalties. (Federal Trade Commission)
In February 2024, the FTC proposed expanding that rule to cover impersonation of individuals as well, explicitly citing the rise of AI voice cloning, deepfakes, and synthetic personas. (Federal Trade Commission)
Combine that with the FTC’s AI guidance under “Operation AI Comply,” which warns that AI tools used to mislead consumers are not exempt from existing laws, and with recent enforcement actions against AI services that misrepresented themselves or their capabilities (including legal-service and review-generation tools). (Reuters)
While you’re not the FTC, referencing this in a letter signals:
“You are squarely in a category the FTC has publicly said it cares about.”
That matters to any developer with investors, platform distribution, or a corporate employer.
🎯 Target selection: who gets what kind of letter?
In practice, you’ll often have multiple potential recipients:
- The bot creator / app publisher (GitHub, HuggingFace, App Store dev, etc.).
- The hosting or platform (Discord, Telegram, app stores, marketplace, AI studio platforms).
- In some cases, the company using the bot in marketing (e.g., a clinic advertising “Dr. Smith AI” on its website).
You can tailor your approach:
- Primary C&D to the creator/publisher invoking right of publicity, trademark, UPL/UOP, and consumer protection.
- Platform notices focused on ToS violations, impersonation policies, and risk to users.
- Regulatory referrals (bar, medical board, FTC complaint) as a second stage if they don’t respond.
✉️ Structuring a cease and desist for “AI [Your Name]”
Here’s a clean structure that hits everything without turning into a 20-page memo.
🧱 Table – Core sections of an AI impersonation C&D
| Section | Job | What it looks like in practice |
|---|---|---|
| Introduction & identification | Make it undeniably about this bot and this name. | Identify the bot by title, URLs, app store listings, screenshots. State that it uses your full name, likeness, credentials, or brand and that you have not authorized any such use. |
| Your professional status and brand | Establish why your name is protectable. | Brief description of your practice or clinic, jurisdiction(s) where you’re licensed, and how your name/brand is used in commerce (firm name, marks, website, media, followers). |
| Right of publicity / name & likeness | First, simple theory: you didn’t consent. | State that the bot is using your name, likeness, and professional persona in a commercial product without consent, violating your rights of publicity and privacy under applicable state law. (Quinn Emanuel) |
| Trademark / false endorsement | Add the brand confusion angle. | If applicable: explain your mark(s) (registered or common-law). Then: explain how “Ask [Name] AI” or “Dr. [Name] Bot” is likely to cause consumers to believe you authored, sponsor, or are affiliated with the bot, constituting trademark infringement and false endorsement under the Lanham Act and analogous state laws. (Thompson Hine LLP) |
| UPL/UOP and licensing concerns | Show it’s not just about IP—it’s about public protection. | For lawyers: cite your state’s prohibition on the unauthorized practice of law and note that the bot holds itself out as a lawyer using your identity, providing legal advice without a license or supervision. For doctors/therapists: similarly reference medical/mental-health practice statutes and recent concerns about AI therapy and diagnosis. (Justia) |
| Consumer protection & FTC context | Turn up the pressure. | Briefly state that marketing an AI tool as “Dr. [Name]” or “[Name] Lawyer Bot” when you are not involved is deceptive under state consumer-protection statutes and falls within the conduct the FTC has highlighted in its impersonation and AI guidance. (Federal Trade Commission) |
| Concrete demands | Give them a checklist to comply with. | Typically: (1) immediately cease all use of your name/likeness in connection with any bot, model, or app; (2) remove or rename the bot and all related listings, screenshots, and marketing materials; (3) confirm in writing that they will not deploy any future AI products under your name or brand without written consent; (4) provide a description of where the bot has been distributed (platforms, URLs, app stores) so you can evaluate residual risk. |
| Preservation and logs | Protect future remedies if they don’t comply. | Demand they preserve all training data, prompts, system instructions, chat logs, and usage logs for the bot—especially any that associate your name with legal/medical advice. This sets up later discovery and deters deletion after the fact. |
| Timeline & escalation | Make next steps predictable (and credible). | Set a short response deadline (e.g. 7–10 days). State that if they fail to comply, you will consider: (1) filing suit for publicity, trademark, and unfair-competition claims; (2) notifying relevant regulators (bar/board, FTC or equivalent); and (3) submitting detailed violation reports to the platforms hosting or distributing the bot. |
You can adjust the rhetoric depending on whether you want an off-ramp (“rename it and we’re done”) or to make an example out of them.
🧵 Parallel notices to platforms and regulators
While the primary C&D tends to go to the developer, it’s often worth running parallel notices:
- To platforms (Discord, app stores, AI studios, marketplaces), focusing on:
- impersonation and misrepresentation under their ToS;
- risk of harm to users (fake legal/medical advice);
- violations of any policies about using real names and photos without consent.
- To regulators (if necessary):
- state bar / medical or mental-health licensing boards, where the bot materially impersonates a licensed professional; (Justia)
- FTC complaint, if the impersonation is part of a commercial scheme and the scale justifies it, tying directly into their impersonation and AI deception priorities. (Federal Trade Commission)
Platforms especially are becoming more sensitive to AI impersonation of real professionals after public controversies over celebrity chatbots and “therapy” bots; a well-documented report tends to get faster traction than a generic “this is bad” complaint. (LinkedIn)
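Part of a “well-documented report” is showing *when* you captured each listing and that the capture hasn’t been altered since. As a minimal sketch of that idea, here is a small Python helper that ties a captured page to a timestamp and a content hash. The URL and page bytes below are placeholders, not real listings; in practice you would fetch each app store page or bot profile and save the raw bytes alongside the record.

```python
# Hypothetical sketch: building a timestamped evidence record for an
# impersonation report. The URL and page content here are placeholders.
import hashlib
import json
from datetime import datetime, timezone

def evidence_record(url: str, page_content: bytes) -> dict:
    """Return a record tying a captured page to a capture time and content hash."""
    return {
        "url": url,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        # SHA-256 of the raw bytes lets you later show the capture is unchanged
        "sha256": hashlib.sha256(page_content).hexdigest(),
        "size_bytes": len(page_content),
    }

# Placeholder capture standing in for a saved listing page.
fake_capture = b"<html>Ask Sergei AI - Your Startup Lawyer</html>"
record = evidence_record("https://example.com/ask-sergei-ai", fake_capture)
print(json.dumps(record, indent=2))
```

This is evidence hygiene, not a legal requirement: a platform trust-and-safety team (or later, a court) is far more receptive to “here are the URLs, capture dates, and hashes” than to undated screenshots.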
❓ Frequently asked questions: AI impersonation bots
What if the bot has a tiny disclaimer like “not actually [Name]”?
Disclaimers help, but they’re not a magic shield:
- If the title, branding, and UX scream “this is [Name]’s bot,” a small disclaimer is unlikely to cure confusion or eliminate right-of-publicity and false endorsement issues. (Franklin Pierce Law School)
- For licensed professions, disclaimers do not necessarily fix UPL/UOP where the overall message is “this is legal/medical advice from [Name].”
The more the bot trades on your identity, the weaker the “we put a footnote somewhere” argument becomes.
Does it matter if the bot’s underlying model is trained on my public content?
It matters how they use that fact:
- Merely training on publicly available content is a separate, contentious issue (copyright/data).
- But when they brand the bot with your name and tell users it’s “based on Dr. Smith” or “trained on Sergei’s deals,” they move from “training” into misappropriation and false endorsement.
Your letter can be agnostic about training and laser-focused on identity and branding.
Can I demand they delete the model if it was fine-tuned “on me”?
You can certainly demand it; whether you can force it depends on facts and jurisdiction:
- For right-of-publicity / trademark / consumer deception, the most urgent remedies are usually rebranding + ceasing impersonation, not abstract internal model surgery.
- If they heavily market that the model is “trained on your proprietary materials,” and those materials aren’t actually public or licensed, you may also have copyright / trade secret angles—but that’s a more intensive fight.
A pragmatic approach is often:
- Immediate cease of name/likeness in branding and UX.
- Commitment not to represent any output as your advice.
- Discussion (if you care) about what they keep vs. purge on the training side.
Should I ever agree to “officialize” the bot instead of shutting it down?
Sometimes, yes:
- If the tech is decent, you might negotiate:
- a license to your name/brand;
- quality control rights and veto over prompts/guardrails;
- clear UX disclaimers about AI limitations and human oversight;
- fee/revenue terms.
But you only entertain that conversation after you’ve reset the table with a proper C&D—otherwise, you’re negotiating against your own rights.
🧭 Big picture
“Ask Sergei AI” and “Dr. Smith Bot” aren’t cute experiments when:
- they use real names and brands,
- give regulated advice, and
- mislead users into thinking a licensed professional is behind the curtain.
From a legal standpoint, that’s not some new AI problem. It’s a classic mix of:
- right of publicity,
- trademark/false endorsement,
- UPL/UOP, and
- deceptive advertising—
just delivered through a new medium.
The practical work is in:
- documenting how and where the impersonation occurs,
- sending clean, multi-theory cease-and-desist letters to the right entities, and
- running parallel notices to platforms and regulators when necessary.
Once you have that playbook in place, you’re not just reacting to the next “AI [Your Name]” stunt—you’re in a position to shut it down quickly, or shape it into something you actually control.