Breaking: Anthropic Files Dual Lawsuits Against Pentagon
Anthropic just filed simultaneous lawsuits in two courts: the U.S. District Court for the Northern District of California and the D.C. Circuit Court of Appeals. The target is the Pentagon's designation of Anthropic as a "supply chain risk" under 10 U.S.C. § 3252 — a statute originally enacted to deal with foreign adversary companies like Huawei and Kaspersky.
The core facts: The Pentagon demanded unrestricted "all lawful purposes" access to Anthropic's Claude AI models. Anthropic held to two red lines during negotiations — no mass domestic surveillance applications and no fully autonomous weapons systems. When Anthropic refused to drop those guardrails by the February 27 deadline, Secretary Hegseth designated the company as a supply chain risk. Trump then ordered all federal agencies to stop using Claude.
Anthropic's constitutional arguments span three pillars:
- First Amendment: Compelling Anthropic to remove its safety guardrails constitutes compelled speech.
- Fifth Amendment: The designation was imposed without any hearing or due process protections.
- Administrative Procedure Act: The designation was arbitrary and capricious, issued without notice-and-comment rulemaking.
The company is seeking an emergency injunction. Its lawyers say the designation could cost Anthropic hundreds of millions or even billions in lost revenue — not just direct government contracts, but the entire defense contractor ecosystem. The supply chain risk label forces every company doing defense work to certify that it does not use Claude in any capacity.
Critically, Anthropic says these suits are not about forcing the government to buy their product. They are about preventing the government from blacklisting a company over a policy disagreement about AI safety.
Full legal analysis and timeline at /Trump/AI-Policy/.