February 27-28, 2026 — LATEST
IT HAPPENED: Anthropic Designated "Supply Chain Risk" — First US Company Ever Blacklisted Under 10 USC § 3252
After Anthropic refused to comply with the Friday deadline, Hegseth followed through. What was a threat on Feb 24 became official action on Feb 27-28:
1. Supply chain risk designation: Hegseth declared Anthropic a "supply chain risk" under 10 USC § 3252 — a statute historically reserved for foreign adversaries like Huawei. This is the first time it has ever been used against an American company.
2. Federal ban on Claude: Trump ordered all federal agencies to immediately stop using Claude, with a 6-month wind-down period for the Department of Defense to transition away from classified systems currently running on Anthropic's models.
3. Anthropic defied the order: In a public statement, Anthropic declared: "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court."
4. OpenAI swooped in: Hours after the designation, OpenAI announced a Pentagon contract deal. Sam Altman stated OpenAI maintains the same two guardrails Anthropic demanded — no mass surveillance, no autonomous weapons — yet was accepted where Anthropic was blacklisted.
5. Tech employee revolt: Hundreds of Google and OpenAI employees signed a public petition calling on their companies to mirror Anthropic's position and refuse unrestricted military AI use.
6. Musk sides with Trump: Elon Musk called Anthropic a company that "hates Western Civilization." Grok is being positioned for classified government systems.
7. Streisand effect — Claude hits #1: In the wake of the blacklisting, Claude surged to #1 on the Apple App Store, surpassing ChatGPT. Consumer backlash is driving massive adoption.
8. Enterprise fallout: The supply chain risk designation means any company doing business with the US military cannot have any commercial activity with Anthropic. Defense contractors must sever all Anthropic relationships.
9. Legal battle brewing: Anthropic argues the designation under 10 USC § 3252 can only extend to use of Claude in Department of War contracts — it cannot legally affect how defense contractors use Claude for non-government customers. This will be tested in court.
February 24-25, 2026 — THE THREAT
How We Got Here: Pentagon Threatened Anthropic With Defense Production Act
On Feb 24, Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a Friday deadline to open Claude AI for unrestricted military use or face invocation of the Defense Production Act and designation as a supply chain security risk — a label typically reserved for foreign adversaries like Huawei. Anthropic refused. On Feb 27, Hegseth followed through.
At stake was a $200 million Pentagon contract. Anthropic's Claude was the only AI model used for the military's most sensitive classified work. Hegseth demanded "all lawful use" with zero guardrails. Anthropic refused to allow mass domestic surveillance of American citizens or fully autonomous weapons without human oversight — and held that position even after the blacklisting.
The same day as the original threat, Anthropic published RSP 3.0 — dropping its hard safety commitments in favor of nonbinding "Frontier Safety Roadmaps." The company says the timing was coincidental. OpenAI, Google DeepMind, and xAI had all agreed to the Pentagon's "any lawful use" terms.
10 USC § 3252
Supply Chain Risk
What the Designation Actually Does
Hegseth invoked 10 USC § 3252 to designate Anthropic a "supply chain risk" — a statute historically used against foreign adversaries like Huawei, never before against an American company. The DPA (50 USC § 4501) remains available for further escalation.
The designation now requires every DoD vendor and contractor to certify they do not use Anthropic's models — effectively blacklisting the company from the entire defense ecosystem. Any company with military contracts must sever commercial ties with Anthropic.
Legal battle: Anthropic argues the designation can only reach Claude's use in DOW contracts and cannot legally dictate how contractors use Claude for non-government customers. If model training decisions are editorial choices protected by the First Amendment, forcing Anthropic to strip guardrails may constitute compelled speech. Both arguments will be tested in court.
RSP 3.0
Safety Policy
Anthropic Drops Hard Safety Limits
Effective February 24, 2026, Anthropic replaced its Responsible Scaling Policy with RSP Version 3.0. The key change: the categorical commitment to pause training if safety measures weren't proven adequate is gone.
New framework: nonbinding "Frontier Safety Roadmaps" with public progress reports every 3-6 months. The pause trigger now requires both AI race leadership and material catastrophic risk — a much higher bar.
Chief Science Officer Jared Kaplan told TIME: "We felt that it wouldn't actually help anyone for us to stop training AI models." The decision took nearly a year of internal debate.
Industry impact: OpenAI and Google DeepMind adopted similar frameworks after Anthropic's original RSP. A rollback by the originator may reshape what "responsible scaling" means industry-wide.
India Summit
Global Policy
US Opposes Global AI Guardrails
At the AI Impact Summit in New Delhi (February 20, 2026), the Trump administration stood alone opposing centralized regulation of generative AI, blocking a multinational statement on AI governance.
US position: guardrails slow innovation and give adversaries like China an edge. Critics warn a poorly governed race toward AGI poses existential risk. India, the EU, and dozens of other nations favor caution.
Competitors
Compliance
Where Other AI Companies Stand
OpenAI: Announced Pentagon deal hours after Anthropic was blacklisted. Sam Altman says OpenAI has the same two guardrails (no mass surveillance, no autonomous weapons) — yet was accepted.
Google DeepMind: Agreed to unrestricted lawful military use. Hundreds of employees signed petition supporting Anthropic's position.
xAI (Musk): Approved for classified settings. Musk called Anthropic a company that "hates Western Civilization." Grok being positioned for classified systems.
Anthropic: Blacklisted but defiant. Vows court challenge. Claude hit #1 on App Store in consumer backlash.
Federal Ban
All Agencies
Trump Orders All Federal Agencies Off Claude
The executive action goes beyond the Pentagon: all federal agencies are ordered to immediately cease using Claude. DOD gets a 6-month wind-down period given the sensitivity of classified systems currently running on Anthropic's models.
This creates immediate operational chaos for agencies that integrated Claude into workflows. No transition plan has been published. Other AI providers are scrambling to pitch replacements.
Employee Revolt
Industry
Hundreds of Tech Employees Rally Behind Anthropic
Employees from Google and OpenAI signed a public petition calling on their own companies to mirror Anthropic's position — refusing to provide AI for mass domestic surveillance or fully autonomous weapons without human oversight.
The petition echoes the 2018 Google "Project Maven" revolt that led Google to withdraw from Pentagon drone contracts. This time, the stakes are higher: an entire company has been blacklisted.
Streisand Effect
Consumer
Claude Surges to #1 on Apple App Store
In a textbook Streisand effect, the government's attempt to punish Anthropic drove massive consumer adoption. Claude surpassed ChatGPT to become the #1 free app on the Apple App Store.
The irony: while Anthropic loses government revenue, consumer and enterprise demand is surging. The blacklisting effectively served as the largest marketing event in AI history.
AI Policy Timeline — 2025-2026
Jan 23, 2025
Trump signs EO 14179 revoking Biden AI safety executive order. Directs removal of all "barriers to American AI leadership."
Dec 11, 2025
Trump signs "Ensuring a National Policy Framework for AI" EO. Establishes DOJ AI Litigation Task Force to challenge state AI laws. $42B BEAD broadband funding conditioned on state AI law repeal.
Jan 1, 2026
New state AI laws take effect in 38 states. Colorado AI Act delayed to June 30, 2026 under preemption pressure.
Jan 10, 2026
DOJ AI Litigation Task Force becomes operational. Begins identifying state laws to challenge in federal court.
Feb 20, 2026
AI Impact Summit, New Delhi. US opposes global AI guardrails statement. "Rate Payer Protection Pledge" announced — tech companies must power their own AI data centers.
Feb 24, 2026
Hegseth gives Anthropic Friday deadline. DPA + supply chain blacklist threatened. Same day: Anthropic publishes RSP 3.0, dropping hard safety commitments.
Feb 27, 2026
BLACKLISTED: Hegseth designates Anthropic a "supply chain risk" under 10 USC § 3252 — first time ever used against an American company. Trump orders all federal agencies to stop using Claude (6-month DOD wind-down).
Feb 27, 2026
DEFIANCE: Anthropic issues public statement refusing to comply: "No amount of intimidation or punishment from the Department of War will change our position." Announces court challenge to the designation.
Feb 27, 2026
OpenAI announces Pentagon contract hours after Anthropic blacklisted. Sam Altman claims same two guardrails (no mass surveillance, no autonomous weapons).
Feb 28, 2026
BACKLASH: Claude hits #1 on Apple App Store, surpassing ChatGPT. Hundreds of Google/OpenAI employees sign petition supporting Anthropic's position. Musk calls Anthropic a company that "hates Western Civilization," positions Grok for classified systems.
Mar 11, 2026
Secretary of Commerce must publish comprehensive review of state AI laws, identifying those "overly burdensome."
Jun 30, 2026
Colorado AI Act rescheduled effective date. State preemption enforcement expected to begin.