This dashboard is your interactive guide to the **7-Step Human Review Protocol**. Start by reviewing the "Typical AI Risk Profile" to see where models commonly fail. Then, use the checklist below to audit your own letter, step by step, for the most critical legal flaws.
Typical AI-Drafted Letter Risk Profile
This chart visualizes the common failure points of AI-generated legal letters. AI excels at "Specificity" (it can list facts you give it) but fails catastrophically on nuanced legal strategy.
- High Risk (Extortion/Tone): AI's aggressive language often crosses the legal line into extortion.
- High Risk (Fact vs. Opinion): AI cannot distinguish protected opinion from actionable false fact.
- High Risk (Sec. 230): AI will happily draft a letter to the wrong party (the platform, which Section 230 immunizes, instead of the individual poster).
The 7-Step Human Review Protocol
Understanding *why* AI fails is key. An AI is a language tool, not a legal strategist: it cannot comprehend nuance, intent, or jurisdictional defenses, all of which are central to defamation law. Below are the four core legal concepts your AI doesn't understand.
The Anti-SLAPP Hammer: How You Get Sued
Anti-SLAPP laws exist to stop SLAPPs (Strategic Lawsuits Against Public Participation): meritless lawsuits intended to silence free speech. A bad C&D letter is the *perfect* trigger for one.
Here is the 2-step process that happens in court after you send your bad letter and the recipient files an Anti-SLAPP motion:
**Step 1:** The person you sent the letter to (the defendant) only has to show that your claim (the C&D) arises from their **protected free speech** (e.g., an online review or a blog post on a public issue).
This is a very easy bar to clear.
**Step 2:** The burden of proof *shifts to you* (the C&D sender). You must now prove to the court that you have a **"probability of prevailing"** on your claim. This means showing real evidence that:
- The statement was a **provably false fact**.
- You can meet the required fault standard: negligence if you are a private figure, or "actual malice" if you are a public figure.
- The statement was not privileged.
If you fail this step, your case is dismissed AND you **must pay the defendant's legal fees**.
These hypothetical scenarios, based on real cases, show how these AI flaws combine to create devastating (and expensive) failures for the sender.
Case 1: The AI and the Vengeful Reviewer
A restaurant owner, furious about a 1-star Yelp review stating, "The owner is an unethical crook who sells week-old fish," uses an AI to demand a retraction and $50,000.
The AI Flaw:
- Ignored the Fact vs. Opinion line (demanded retraction of "unethical crook," which is protected opinion).
- Failed to check for truth (the owner had no evidence to prove the "week-old fish" claim was false).
The SLAPP Outcome:
The Yelper files an Anti-SLAPP motion. The owner fails Step 2: he can't prove the factual claim ("week-old fish") was false, and the letter improperly targeted protected opinion ("unethical crook"). The motion is **granted**. The restaurant owner must now pay **$25,000 in the Yelper's legal fees**.
Case 2: The AI and the Public Figure
A senator (a public figure) is angry about a blog post stating, "The senator is corrupt and clearly took bribes." She uses an AI to draft a C&D.
The AI Flaw:
- Completely missed the "Actual Malice" standard required for public figures.
- The AI drafted the letter as if the politician were a private citizen, only needing to prove negligence.
The SLAPP Outcome:
The blogger files an Anti-SLAPP motion. The politician fails Step 2 because she cannot *possibly* prove "actual malice" (that the blogger *knew* the statement was false or recklessly disregarded the truth). The C&D is deemed a frivolous attempt to silence political commentary. The politician's case is dismissed, and she is ordered to **pay the blogger's legal fees**.