Private members-only forum

MEGATHREAD PINNED Tech Worker Revolt — 900+ Sign 'We Will Not Be Divided' Against Military AI

Started by KellyMartinez_Mod · Mar 2, 2026 · 52 replies · General Discussion
Information shared here is for educational purposes only and does not constitute legal advice.

TL;DR — Key Discussion Points

KM
KellyMartinez_Mod
OP MOD

Consolidating the multiple threads on this into one megathread since the discussion is moving fast and touching on several areas of law and policy.

Here is what we know as of this morning. Over 900 Anthropic employees have signed an open letter titled "We Will Not Be Divided," protesting the company's contract with the U.S. Department of Defense. The letter was published internally on Friday and leaked to the press within hours. Signatories include senior researchers, safety team leads, and at least two members of Anthropic's technical staff leadership.

The immediate trigger appears to be Operation Epic Fury, the U.S. military strikes on Iranian military assets on February 28. Reporting from The Intercept and confirmed by Anthropic's own spokesperson indicates that Claude AI was used in operational planning, targeting chain analysis, and logistics coordination for the strikes. Employees say they were not informed that Claude had been deployed in active military operations and that this violates Anthropic's own Responsible Scaling Policy.

Since then, the situation has escalated rapidly. President Trump called Anthropic a "Radical Left AI company" in a Truth Social post and the administration has moved to ban Anthropic from all government contracts. OpenAI reportedly secured the Pentagon deal within hours of the ban being announced.

This thread will cover: (1) employment law protections for the signatories, (2) the ethics and policy of military AI, (3) corporate governance implications, and (4) the competitive dynamics with OpenAI and China. Please keep discussion substantive. Personal attacks on named individuals will be removed.

RL
RachelLiu_EmpLaw
Employment Atty

Employment attorney here, practicing in California for 14 years. Let me lay out the legal framework because I am already seeing a lot of misinformation on social media about what these employees can and cannot do.

First, the National Labor Relations Act Section 7 protects employees' rights to engage in "concerted activity" for "mutual aid or protection." This applies to non-union workplaces. Signing a collective open letter about working conditions — which includes concerns about what your labor is being used for — has been found to be protected concerted activity in numerous NLRB decisions. The key cases here are Meyers Industries (1986) and more recently Alstate Maintenance (2019).

Second, and this is critical for Anthropic employees specifically, California Labor Code sections 1101 and 1102 prohibit employers from making rules or taking action that prevent or control employees' political activities or affiliations. Protesting military use of AI is arguably political activity, and California courts have interpreted these statutes broadly. The Gay Law Students Assn. v. Pacific Telephone decision established that these protections extend beyond party politics to broader social and political engagement.

Third, Anthropic is headquartered in San Francisco. The San Francisco Police Code Article 33H provides additional protections against employer retaliation for lawful off-duty conduct.

That said, there are limits. If employees access classified or confidential information about the contract to use in their protest, that could fall outside protected activity. If individuals are insubordinate in the workplace (refusing direct work assignments, sabotaging systems) rather than engaging in collective speech, the analysis changes significantly. But the act of signing a letter? That is about as textbook Section 7 as it gets.

DT
DanielTorres_DefenseAI

I work in the defense AI space (not at Anthropic, at a major prime contractor) and I have a very different perspective that I know will be unpopular here.

These employees signed up to work at an AI company. Anthropic's mission is to build safe, beneficial AI. There is a very reasonable argument that ensuring the U.S. military uses the most safety-conscious AI systems — rather than something thrown together by a contractor with no alignment research — IS the safe and beneficial path. If Claude is not in that targeting chain, something worse will be. That is not a hypothetical. That is what happens on Monday when OpenAI takes over and deploys a model with far fewer safety guardrails in its architecture.

I watched the Google Project Maven walkout in 2018. You know what happened after those engineers walked out? Google pulled out of Maven, and the contract went to Palantir and a constellation of smaller firms with zero interest in AI ethics. The net result was strictly worse for everyone who cared about responsible AI in defense applications.

The 900 signatories feel morally righteous, I am sure. But they need to think through the second-order consequences of what they are demanding. Unilateral disarmament by the most safety-focused AI lab in the world does not make anyone safer. It just means the Pentagon uses less safe tools, and China gets closer to parity.

AP
AishaP_TechEthics

@DanielTorres_DefenseAI The "if we do not do it, someone worse will" argument is a moral dead end and has been used to justify every form of complicity throughout history. The engineers at Los Alamos could have made the same argument. Some of them did. Many of them spent the rest of their lives regretting it.

The issue is not whether the U.S. military should use AI. That ship has sailed. The issue is whether employees have the right to know and consent to how their work is used, and whether a company that explicitly branded itself as the "safety-first" alternative gets to secretly deploy its model in active military strikes without telling the people who built it.

Read the open letter. The employees are not saying "Anthropic should never work with the government." They are saying they were lied to about the scope and nature of the contract, that the Responsible Scaling Policy was violated, and that they should have a voice in these decisions. That is not unilateral disarmament. That is asking for basic transparency and consent in the employer-employee relationship.

Also, the China argument is a red herring deployed every single time anyone raises ethical concerns about any technology. DeepSeek and Chinese military AI development will proceed regardless of whether Anthropic specifically has a Pentagon contract. American national security does not rest on one company's API.

MK
MarcusKim_BigLaw
Attorney

Adding to @RachelLiu_EmpLaw's excellent analysis. There is a wrinkle here that I have not seen discussed: the intersection of at-will employment and NLRA protections when the employer is a government contractor (or was, until the ban).

Anthropic employees are almost certainly at-will. Since Guz v. Bechtel, California courts construe implied employment contracts based on employee handbooks narrowly, and the at-will presumption is difficult to overcome. But at-will does not mean the employer can fire you for any reason — it means they can fire you for any reason that is not illegal. And retaliating against NLRA-protected activity is illegal under Section 8(a)(1).

The more interesting question is what happens with employees who have security clearances. If Anthropic's DoD contract required certain employees to hold clearances, and those employees signed the letter, the government could theoretically move to revoke clearances based on "reliability" concerns. That is a very different legal framework — security clearance decisions are largely unreviewable by courts under Department of the Navy v. Egan (1988).

I would strongly advise any Anthropic employee with a security clearance who signed this letter to consult with a national security attorney immediately. The NLRA analysis that applies to most signatories may not protect you if the clearance revocation comes from the government rather than the employer.

JW
JessicaWong_HR

HR professional here (15 years in tech, including two FAANG companies). I want to speak practically about what Anthropic's management is likely doing right now behind closed doors.

When 900+ employees sign something like this, you cannot fire them all. It is not legally possible (mass retaliation would be an NLRB slam dunk) and it is not practically possible (you would lose your entire engineering workforce). So the playbook is usually:

  1. Identify the "ringleaders" — the people who organized the letter, not just signed it. Put them on a list.
  2. Begin building performance documentation for those individuals. Not immediately, but over the next 3-6 months. Suddenly their code reviews get more scrutiny, their project timelines get tighter, their peer feedback is weighted differently.
  3. Issue a company-wide statement acknowledging "diverse perspectives" while reaffirming the company's right to make business decisions.
  4. Quietly restructure or eliminate the teams most associated with the protest.
  5. Make the next round of layoffs disproportionately impact signatories, but frame it as "business needs" and "organizational restructuring."

I am not saying this is right. I am saying this is what I have watched happen at two different companies after employee protests. The retaliation is never overt. It is always structural and deniable. If you are an organizer of this letter, you need to be documenting everything starting yesterday. Save every performance review, every Slack message, every 1:1 note. You will need it.

SN
SarahNguyen_AnthEmp

Throwaway-ish account but I am a current Anthropic employee who signed the letter. I will be careful about what I say but I want to correct some things being reported.

First: we were told the DoD contract was for "research and evaluation purposes." That was the framing in the all-hands. Many of us understood that to mean the military was evaluating Claude's capabilities, not deploying it in active operations. When the Operation Epic Fury reporting came out, people were genuinely shocked. Not naive — shocked at the gap between what we were told and what apparently happened.

Second: the letter was not organized by some fringe group. It started in a Slack channel that already had 400+ members within the first two hours. People across every team — safety, alignment, infrastructure, policy, even some people on the partnerships team — signed. This is not a dozen disgruntled engineers. This is the majority of the technical staff.

Third: some people are saying Dario wanted the contract all along and the "safety mission" was always marketing. I do not believe that is true, and I think it is unfair. But I also think the leadership team got in over their heads. The revenue pressure from investors after the last funding round was enormous. The DoD contract was worth — well, I should not say the number. But it was significant enough to change the company's financial trajectory.

We are not anti-American. We are not "radical left." We are people who took specific jobs at a specific company because of specific promises about how our work would be used. Those promises were broken.

BF
BrianFoster_VC

VC perspective here (Series B/C stage investor, not an Anthropic investor). The market is watching this situation very carefully and the implications go well beyond one company.

Anthropic was valued at $60B in its last round. That valuation was predicated on (a) technical leadership, (b) enterprise revenue growth, and (c) the government/defense vertical. If the government vertical is gone — and it appears to be, at least for this administration — that is a material impairment to the business thesis.

But here is the thing that Wall Street types are missing: Anthropic's most valuable asset is its people. If you alienate 900+ employees to the point where they start leaving, you do not just lose the government contract. You lose the people who make the model competitive. And then you lose everything.

I am hearing from founders in my portfolio that they are watching how Anthropic handles this to decide whether to build on Claude or switch to OpenAI or Gemini. The enterprise market cares about stability and reliability. A company at war with its own workforce is not stable.

My prediction: Anthropic will try to thread the needle. Publicly sympathize with employees, quietly restructure the defense contracts through a subsidiary or partner, and hope this blows over. Whether the employees accept that framing is the real question.

RJ
RobertJackson_NatSec

Former DoD civilian here, spent 20 years in defense acquisition. I want to push back on several things being said in this thread.

Operation Epic Fury was a lawful military operation authorized by the President under Article II powers. The targets were Iranian military assets that had been used to attack U.S. personnel and allies. Whether you agree with the policy or not, the operation was legal under domestic and international law.

The use of AI in operational planning is not new. The military has used algorithmic decision support tools since the 1990s. What changed is the capability level. Claude can process intelligence at a speed and volume that no human analyst team can match. That does not mean Claude is "choosing targets" — it means it is helping analysts process data faster. Every targeting decision still goes through a human chain of command. That is required by DoD Directive 3000.09 on autonomy in weapon systems.

The idea that employees at a technology company should have veto power over lawful government use of commercially available tools is frankly alarming. Lockheed Martin employees do not get to vote on whether the F-35 is used in a particular conflict. Boeing engineers do not sign open letters about where their aircraft fly. The precedent these employees are trying to set would make the U.S. defense industrial base ungovernable.

I respect the right to protest. But let us not pretend this is some clear-cut moral victory. These employees are trying to override the democratically elected government's decisions about national defense. Think carefully about what that means.

LP
LauraPetrov_LaborLaw
Labor Atty

@RobertJackson_NatSec Your Lockheed/Boeing comparison has a fundamental flaw. Those companies never marketed themselves as ethical alternatives to the defense industry. They never recruited employees with promises about safety-first AI development. Anthropic did. That creates a different employment relationship and different reasonable expectations about the scope of one's work.

More importantly from a labor law perspective, there is a well-established principle that employers cannot make representations about working conditions during hiring and then unilaterally change those conditions. If Anthropic recruited employees by promising that Claude would not be used in military operations (and several former recruiters have confirmed this on social media), that could form the basis of either a Section 7 claim (change in working conditions) or potentially even a fraud in the inducement claim.

Also, your framing that employees are "overriding the democratically elected government" is a strawman. Employees are not commanding the military to do anything. They are telling their employer that they object to the employer's decision to participate in military operations. That is the most basic form of concerted employee activity protected by federal law since 1935.

The NLRA does not have a national security exception. Congress could have written one. It did not.

HV
HenryVoss_ExAnth

Left Anthropic about eight months ago so I feel comfortable sharing more context than current employees can.

The defense contract discussion started internally in mid-2025. Dario held a series of "office hours" where he laid out the case that engaging with the U.S. government was necessary both for revenue and for influence over AI policy. The pitch was: "We can either be at the table or on the menu." Lots of people were uncomfortable but the framing was always about research partnerships, safety evaluations, and red-teaming exercises.

What was never discussed in those sessions was operational deployment. There is a massive difference between helping the DoD understand Claude's capabilities and limitations, and having Claude integrated into a kill chain. I do not care how many human approvals are in the loop — if the AI is processing targeting data, it is part of the kill chain.

I left because I could see where things were heading. The moment Amazon's $4B investment came with strings, the company's identity started to shift. Not overnight, not dramatically, but steadily. The people on the safety team who raised concerns were not fired — they were gradually sidelined. Their projects got deprioritized. Their headcount requests were denied. And the people who were enthusiastic about commercial partnerships got promoted.

The employees signing this letter are braver than I was. I chose to leave quietly. They are choosing to fight from the inside. I hope it works, but based on what I saw, I am not optimistic.

CT
CarmenTrujillo_IP
IP Attorney

I want to flag an intellectual property angle that nobody seems to be discussing. Many Anthropic employees have assignment clauses in their employment agreements that assign all work product to the company. That is standard in tech.

But there is an interesting question about whether employees who contributed to Claude's training data curation, RLHF, or constitutional AI methodology have moral rights (in the civil law sense) or at minimum equitable arguments about how their specific contributions are used. U.S. law does not recognize moral rights for most works, but the Berne Convention and some state laws (California Civil Code 987 for visual art) create carve-outs that could be relevant in novel contexts.

I know this is a stretch legally. But the philosophical question is important: when you contribute your expertise to training an AI system under the understanding that it will be used for certain purposes, and then it is used for fundamentally different purposes, is there any legal theory that gives you standing to object beyond just quitting?

The answer right now is probably no. But this is exactly the kind of case that could create new law. If any of the signatories decide to litigate, the IP angle could be fascinating territory.

NW
NathanWright_DevOps

Senior DevOps engineer at a different AI company. Just want to make a practical observation.

Everyone is debating the ethics and the law. Nobody is talking about the technical reality. Modern AI deployments do not have a clean separation between "research use" and "operational use." Once you give the DoD API access to Claude, what they do with those API calls is largely outside Anthropic's control. You can put usage policies in the contract, but enforcing them requires monitoring every query and response, which is both technically challenging and potentially illegal if the queries contain classified information.

This is not like selling someone a physical product. It is like giving someone a key to your brain and then being surprised when they use it for something you did not intend. The architecture of these LLM deployments makes the ethical boundary that Anthropic tried to draw practically unenforceable.

The only way to ensure your AI is not used for military targeting is to not give the military access to your AI. That is the lesson of this entire debacle.
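To make the enforcement problem concrete: a contract-side usage policy ultimately reduces to classifying free-text queries, which is exactly the part that does not work. A minimal sketch below — every pattern, function name, and API shape here is illustrative, not any vendor's actual moderation layer:

```python
import re

# Hypothetical server-side filter: the contract says "research and
# evaluation only," so we try to flag operational-sounding queries.
# All patterns below are illustrative assumptions.
OPERATIONAL_PATTERNS = [
    r"\btarget(ing)? package\b",
    r"\bstrike (window|timeline)\b",
    r"\blogistics for operation\b",
]

def violates_usage_policy(query: str) -> bool:
    """Return True if the query matches a crude 'operational use' pattern."""
    return any(re.search(p, query, re.IGNORECASE) for p in OPERATIONAL_PATTERNS)

# The weakness is immediate: a trivial rephrasing evades the filter.
print(violates_usage_policy("Build the targeting package for sector 7"))  # flagged
print(violates_usage_policy("Summarize asset locations in sector 7"))     # not flagged
```

The second query is the same request in different words, and no keyword list catches it. Doing better requires semantic monitoring of every query and response — which, as noted above, may itself be illegal when the traffic contains classified material.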

PS
PriyaSharma_GovCon

Government contracts attorney here. I want to address the Trump ban specifically because the legal basis is shaky and I think it matters for the broader discussion.

The President has broad authority to direct procurement policy under the Federal Property and Administrative Services Act. But banning a specific company from all government contracts based on the political speech of its employees would raise serious First Amendment concerns. The government cannot condition the benefit of a contract on the suppression of employee speech — that is Board of County Commissioners v. Umbehr (1996), extended to independent contractors.

More practically, the ban was announced on Truth Social before any formal debarment process was initiated. Debarment under FAR 9.406 requires notice, an opportunity to respond, and specific grounds (fraud, criminal conduct, serious violations of contract terms). "Their employees are radical leftists" is not a FAR-recognized ground for debarment.

I expect Anthropic's legal team will challenge this ban. But the political dynamics make it complicated. Do you sue the administration that just banned you and further inflame the situation? Or do you quietly accept it and try to get the decision reversed behind the scenes? That is a business judgment as much as a legal one.

The irony, of course, is that Trump's ban actually gives the employees what they wanted — no more Pentagon contract. But it gives it to them in the worst possible way, attached to a political punishment that threatens the entire company.

OG
OliverGrant_PolSci

Political science professor here, not a lawyer. I study tech worker activism and I have been following this story closely.

For historical context: the Google Project Maven walkout in 2018 involved roughly 4,000 employees signing a letter and about a dozen high-profile resignations. Google ultimately did not renew the Maven contract. But the aftermath is instructive. Google withdrew from the JEDI cloud bid, then returned to win a share of JWCC (JEDI's multi-vendor successor), expanded its defense work significantly, and eventually established a dedicated public sector subsidiary. The walkout changed the optics but not the trajectory.

What makes the Anthropic situation different is the company's explicit identity as an AI safety organization. Google never claimed to be a safety-first company. Anthropic's entire brand, its entire recruitment pitch, its entire public persona was built on being the responsible alternative. That makes the perceived betrayal much deeper and the employee response much more intense.

The other difference is the political environment. In 2018, the backlash against Maven came from the left and the right largely ignored it. In 2026, the right is actively weaponizing the protest to attack the company. Trump's "Radical Left" framing turns what should be a labor dispute into a culture war, which makes it much harder for the company to find a middle ground.

My prediction: unlike Google, Anthropic cannot just absorb this and move on. The company has to choose an identity. Safety-first organization or commercial AI powerhouse. The two have become mutually exclusive in this political environment.

MB
MelissaBrown_Recruiter

Tech recruiter for 11 years, specializing in ML and AI placements. I want to add the hiring market perspective because it is directly relevant to how this plays out.

I have been fielding calls all weekend from Anthropic engineers asking about opportunities. Not 2 or 3 — dozens. These are L5/L6 equivalent engineers with deep expertise in ML infrastructure, RLHF, and constitutional AI. They are among the most sought-after talent in the industry right now.

Here is the thing: every single one of them has told me the same thing. They are not leaving yet. They want to see how Anthropic responds. But they are preparing their options. That is the most dangerous position for an employer — when your best people have one foot out the door and one eye on the job market.

Competitors are already circling. I know for a fact that Google DeepMind, Cohere, and two well-funded stealth startups I cannot name have activated targeted recruiting campaigns aimed at Anthropic employees this weekend. The poaching has begun.

Anthropic has maybe a 2-week window to respond in a way that keeps these people. After that, the resignations will start and they will cascade. Once the first senior researcher goes public with a resignation letter, it gives everyone else permission to follow.

JD
JamesDeluca_SWE

I am a software engineer in the Bay Area (not at Anthropic). This situation is exactly why I turned down an Anthropic offer 18 months ago. During the interview process, I asked directly about government and military work. The recruiter said — and I quote from my notes — "Anthropic does not pursue military contracts. Our focus is commercial enterprise and research."

That was apparently a lie, or at minimum it became a lie within months of them telling me that. I ended up at a company that is transparent about its government work (yes, it exists, we do some, and everyone knows about it before they sign the offer letter). I prefer honest complexity to dishonest simplicity.

The lesson here is not that AI companies should never work with the military. It is that you cannot build a brand on ethical purity and then quietly take a defense contract. The cognitive dissonance will destroy you from the inside.

EM
ElenaMarkov_BioEthics

Bioethicist here. I study dual-use technology dilemmas and the Anthropic situation is a textbook case of what we call the "dual-use research of concern" problem.

In the biological sciences, we resolved this (imperfectly) through institutional review boards, the Fink Report, and the NSABB framework. The key principle is that researchers have a right to participate in decisions about dual-use applications of their work, and institutions have an obligation to create governance structures that enable that participation.

The AI industry has nothing comparable. There is no IRB for AI deployment decisions. There is no institutional mechanism for employees to raise dual-use concerns through formal channels. Anthropic's Responsible Scaling Policy was the closest thing, and apparently it was either ignored or did not cover military deployment at all.

What the employees are really asking for is not the abolition of military AI. They are asking for governance. They want a process, a voice, a way to raise concerns that does not require leaking to the press. That is not radical. That is basic institutional design that every other dual-use technology sector figured out decades ago.

DK
DerekKhan_Palantir

I work at Palantir (yes, that Palantir). Let me tell you how we handle this because I think our model is relevant to this discussion.

Palantir has a tiered access system. Not every employee works on every contract. Employees who do not want to work on defense or intelligence projects are not required to. They work on commercial deployments, healthcare, supply chain, or other verticals. Employees who work on classified programs go through additional vetting and explicitly opt in. There is no ambiguity about what you are working on.

Is it perfect? No. There are resource allocation tensions and some people feel that the defense work subsidizes their commercial projects. But nobody at Palantir has ever been surprised to learn that their work was used by the military. It is in the company's DNA, practically.

Anthropic's mistake was trying to have it both ways: take the defense revenue while maintaining the fiction that they were purely a safety research organization. Palantir understood from day one that transparency about your customer base is not optional when your customers include the Pentagon.

The tiered access model is not just an ethical framework — it is a practical one. It lets you attract talent that does not want to do defense work AND talent that does, without lying to either group. Anthropic should have adopted something like this before signing the DoD contract.
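The tiered model described above is simple enough to express as a staffing invariant. A minimal sketch — the tier names, fields, and rules here are my own illustration, not Palantir's actual system:

```python
from dataclasses import dataclass

# Illustrative model of a tiered opt-in system. Field names and the
# two-tier split are assumptions for the sketch.
@dataclass
class Employee:
    name: str
    defense_opt_in: bool = False  # explicit opt-in, recorded up front
    cleared: bool = False         # additional vetting for classified work

@dataclass
class Project:
    name: str
    tier: str  # "commercial" or "defense"

def can_staff(emp: Employee, proj: Project) -> bool:
    """Invariant: nobody touches defense work without opt-in plus vetting."""
    if proj.tier == "defense":
        return emp.defense_opt_in and emp.cleared
    return True

alice = Employee("alice")                                 # commercial only
bob = Employee("bob", defense_opt_in=True, cleared=True)  # opted in

print(can_staff(alice, Project("supply-chain", "commercial")))  # True
print(can_staff(alice, Project("maven-like", "defense")))       # False
print(can_staff(bob, Project("maven-like", "defense")))         # True
```

The point of the sketch is that the hard part is organizational, not technical: the invariant is one `if` statement, but it only means something if the company records opt-in honestly and labels projects truthfully.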

AT
AnnaTsai_ConLaw
Con Law Atty

Constitutional law professor here. I want to address the Trump ban from a First Amendment perspective because @PriyaSharma_GovCon raised important points but I think the analysis goes deeper.

The unconstitutional conditions doctrine prohibits the government from conditioning a benefit (here, government contracts) on the surrender of constitutional rights (here, employee speech). Perry v. Sindermann (1972) established this in the employment context. Umbehr extended it to independent contractors.

But there is a complication. The employees are not the contractors — Anthropic is. Anthropic's employees have no direct contractual relationship with the government. So the question becomes whether the government can punish a company for its employees' speech. Under Citizens United, corporations have First Amendment rights. But do those corporate rights extend to protecting the speech of individual employees?

I think there is a strong argument that they do, under a compelled-speech theory. If the government says "we will not contract with you unless you silence your employees," that is functionally compelling the corporation to suppress speech as a condition of doing business with the government. That should fail strict scrutiny.

The practical problem is that the Trump administration will never frame it that way. They will cite "operational security concerns" or "reliability of supply chain" as the basis for the ban. Proving the real motivation was retaliatory requires discovery, which requires litigation, which takes years. By then, the contract has long since gone to OpenAI and the political moment has passed.

RH
RyanHarper_Libertarian

Hot take: everyone in this thread is overthinking this. These employees exercised their right to speak. The government exercised its right to choose contractors. The market is functioning exactly as designed.

You do not have a right to a government contract. You do not have a right to force your employer to reject revenue. You have a right to speak, and your employer has a right to decide its business strategy, and the government has a right to work with whichever vendor it chooses.

The employees spoke. Good for them. The consequences followed. That is how it works. The NLRA protects them from being fired for the speech. It does not protect the company from losing a contract because the government no longer trusts them.

And let us be real — if you were the Pentagon, would YOU want to run military operations on an AI system where 900+ of the employees who maintain it just publicly declared they oppose its use for that purpose? That is a genuine operational security concern, not a political vendetta.

KM
KellyMartinez_Mod
OP MOD

Quick moderation note: I have removed three posts that contained doxxing of specific Anthropic employees. This is a forum rule violation and will result in a permanent ban. Discuss the issues, not the individuals. If you have information about specific people, keep it off this forum.

Also updating the thread with breaking news: Anthropic's CEO Dario Amodei has released a public statement. Key quote: "We hear our employees and we share their commitment to the safe and responsible development of AI. We are initiating a comprehensive review of our government partnerships and will share the results with our full team." The statement does not address Operation Epic Fury specifically or acknowledge the open letter by name.

VR
VeronicaReed_ACLU

Civil liberties attorney here. The Dario statement is classic crisis PR: acknowledge the concern without admitting wrongdoing, promise a review that can take as long as you need it to, and avoid the specific allegations entirely.

What I am not seeing is any commitment to not retaliate against signatories. That is the most important thing right now. If Anthropic is serious about "hearing" its employees, it should immediately issue a written commitment — not a PR statement, but an enforceable written policy — that no adverse action will be taken against any employee who signed the letter or participated in organizing it.

Without that, the "comprehensive review" is just a mechanism to buy time while HR builds the files that @JessicaWong_HR described. I have seen this pattern dozens of times in my career. The company expresses sympathy publicly while quietly building the case to terminate the most vocal organizers.

If any Anthropic employee is reading this: document everything starting now. Screenshot the CEO's public statement and save it with a timestamp. If they later take adverse action against you, the gap between the public statement and the private action is your retaliation case.

TM
TylerMitchell_ML

ML researcher here. I want to address the technical argument that Daniel made earlier about "if Claude does not do it, something worse will."

This argument assumes that Claude's safety features are meaningfully operative in a military deployment. They are not. When you deploy an LLM for military use, the first thing you do is strip or modify the safety layers that prevent it from engaging with violent content. You have to — otherwise it would refuse to process targeting data because of its Constitutional AI training. The Claude that the Pentagon was using was not the same Claude that refuses to help you write malware. It was a fine-tuned, guardrail-reduced version optimized for the specific military use case.

So the "at least Claude is safer" argument falls apart on inspection. The military version of Claude is not meaningfully safer than a military version of GPT-5 or Gemini Ultra. The safety research that Anthropic employees poured their careers into is literally not present in the deployed system. Their work was used to build a foundation model that was then stripped of the very features they cared most about.

That is not "responsible military AI." That is using safety researchers' labor to build a better weapon while telling them they are building a safer AI.

GH
GraceHenderson_CorpGov
Corp Attorney

Corporate governance attorney. I want to talk about Anthropic's unique corporate structure because it is directly relevant to whether the employees have any real legal leverage beyond the NLRA.

Anthropic is structured as a Public Benefit Corporation (PBC) in Delaware. PBCs have a legal obligation to consider the impact of their decisions on stakeholders beyond just shareholders — including employees, the community, and the environment. Under Delaware General Corporation Law Section 365(a), the board of a PBC must balance stockholder interests with the public benefit specified in its charter.

Anthropic's stated public benefit is "the responsible development and maintenance of advanced AI for the long-term benefit of humanity." If deploying AI in military strike operations is inconsistent with that stated benefit, then the employees are not just making a moral argument — they are pointing to a potential breach of the company's own legal charter.

Under DGCL Section 367, stockholders owning, individually or collectively, at least 2% of outstanding shares can bring a derivative action claiming the board failed to balance stakeholder interests as required by the PBC charter. Most Anthropic employees have equity. If enough of them are stockholders, they could potentially aggregate their holdings and bring a derivative suit against the board for approving the DoD contract in violation of the PBC charter.

This is untested legal territory. There has never been a PBC derivative action based on a military contract. But the legal framework exists, and I would not be surprised if someone files one in the coming weeks.

WZ
WeiZhang_AIPolicy

AI policy researcher here, previously worked at a DC think tank focused on US-China tech competition. I want to give the China angle a more serious treatment than the dismissive responses it has gotten so far.

The PLA has been integrating AI into military operations since at least 2020, first through its Strategic Support Force and, after the 2024 reorganization, through the Information Support Force and related units. DeepSeek, Baidu, and several Chinese companies with no public profile are supplying AI capabilities to the Chinese military with zero internal debate or employee pushback, because that kind of dissent is simply not permitted in China's political system.

This is not a "red herring." This is the strategic reality. The U.S. is in an AI arms race whether it wants to be or not. The question is whether American values — including the value of employee speech and protest — are compatible with winning that race.

I think they are, but only if we build institutional structures that channel dissent into governance rather than letting it become a binary choice between "do everything the Pentagon asks" and "refuse all military work." The employees are right that they deserve transparency. The national security community is right that abandoning military AI is dangerous. Both things can be true simultaneously.

What we need is something like the Palantir model but formalized and industry-wide. An AI defense ethics board with employee representation, independent oversight, and real authority to set boundaries. Not another toothless advisory committee that gets dissolved the moment it becomes inconvenient.

KP
KatePowers_Whistleblower
Attorney

Whistleblower protection attorney. I need to clarify something important: the open letter signatories are NOT whistleblowers in the legal sense, and they should not be advised as if they are.

Whistleblower protections under federal law (Dodd-Frank, SOX, False Claims Act qui tam) require reporting of specific legal violations through specific channels. Signing an open letter protesting a lawful business decision is not whistleblowing. The DoD contract, as described publicly, does not appear to violate any law.

This matters because the legal protections are different. NLRA Section 7 protects concerted activity but the remedies are limited (reinstatement, back pay, no punitive damages). Whistleblower statutes provide much stronger protections including anti-retaliation provisions with real teeth and substantial monetary awards.

HOWEVER — if any employees have evidence that Anthropic misrepresented the nature of the contract to investors, that could be securities fraud, which would trigger Dodd-Frank whistleblower protections. If the contract was structured to avoid Congressional notification requirements, that could trigger other statutory protections. If classified information was mishandled, that opens up yet another avenue entirely.

The open letter signatories should be thinking carefully about whether they have information that goes beyond policy disagreement into actual legal violations. If they do, they need separate counsel and they should NOT be sharing that information in an open letter or on social media. They should be going to the SEC or the relevant Inspector General.

DT
DanielTorres_DefenseAI

@TylerMitchell_ML I appreciate the technical detail but your argument actually proves my point. You say the military strips safety features. That is exactly why you want the most safety-conscious lab at the table — to push back on that stripping, to set contractual requirements for what guardrails must remain, and to monitor deployment practices.

When Google pulled out of Maven, do you think the replacement contractors fought to keep ethical guardrails? They did not. They gave the Pentagon exactly what it asked for with no pushback whatsoever. Anthropic, whatever its flaws in this situation, was at least in a position to negotiate for responsible deployment. Now OpenAI has the contract, and based on their public statements, they view military work as an unambiguous good. Which company do you think will push harder for human oversight in the targeting chain?

The perfect is the enemy of the good. And in this case, the "perfect" (no AI in military operations) is genuinely not achievable. The choice is between imperfect engagement and total abdication. The employees chose abdication. History will judge whether that was brave or naive.

SN
SarahNguyen_AnthEmp

@DanielTorres_DefenseAI You keep making this argument but you are ignoring the central fact: we were not consulted. We were not told. We did not get to negotiate for responsible deployment because we did not know there was a deployment to negotiate about.

If Anthropic had come to us and said "The Pentagon wants to use Claude for operational planning. Here is what we have negotiated in terms of guardrails. Here are the red lines we have drawn. What do you think?" then we would have had that conversation. Many people would have been uncomfortable. Some would have objected. But there would have been a process.

Instead, the first time most of us learned about it was from a news article about a military strike that killed people. That is the betrayal. Not the existence of a defense contract. The deception about its scope and the complete absence of internal deliberation.

Please stop telling us what we should have wanted. We are telling you what we actually want: transparency, a voice in decisions about how our work is used, and accountability when leadership breaks its own stated principles.

AP
AlexPetrovich_Union

Labor organizer here (CWA). I have been in contact with some Anthropic employees and I want to address the elephant in the room: unionization.

This is exactly the kind of event that catalyzes union drives. You have a large group of employees who feel betrayed by management, who have demonstrated the ability to organize collectively (900+ signatures is impressive), and who want a formal voice in company decisions. That is literally what a union provides.

The tech industry has resisted unionization for decades by offering high compensation, stock options, and the illusion of a flat organizational culture. But when the illusion breaks — when employees realize they have no actual power over decisions that matter to them — the arguments against unionization start to ring hollow.

Under the NLRA, Anthropic employees have the right to form a union. If 30% of the bargaining unit signs authorization cards, the NLRB will hold a representation election. Given that 900+ employees already signed the letter, reaching that threshold would be trivial.

I am not saying a union is the answer to every problem here. But if these employees want a permanent, legally protected mechanism to negotiate about the use of their work, collective bargaining is the tool that exists. Everything else — open letters, protests, social media campaigns — is temporary and has no enforcement mechanism.

For what it is worth, the Alphabet Workers Union (AWU-CWA, affiliated with CWA Local 1400) was formed in January 2021, after years of activism that included the Maven protests. If the Anthropic employees are serious about structural change, they should be talking to CWA.

FS
FrankSullivan_MgmtSide
Mgmt Atty

Management-side labor attorney. Let me give the perspective from the other side of the table, because this thread is heavily tilted toward the employee view.

Anthropic's management has a fiduciary duty to its investors and (as a PBC) to its stated mission. Both of those duties may support the defense contract. The investors provided capital expecting revenue growth. The mission statement says "long-term benefit of humanity," which can reasonably include national defense. The board made a business judgment. That judgment is entitled to deference under the business judgment rule.

On the NLRA point: yes, the letter is likely protected concerted activity. But "protected" does not mean "consequence-free for the company." The company lost a massive government contract, in part because of the public nature of the protest. If the company can demonstrate that specific employees' actions went beyond protected speech — for example, leaking confidential contract terms to the press — those individuals can be lawfully terminated.

The line between protected concerted activity and unprotected disclosure of confidential information is fact-specific and case-dependent. If the open letter contains details that could only have come from employees with access to the DoD contract, that is a problem for those employees regardless of NLRA protection. Section 7 protects collective speech about working conditions. It does not protect the disclosure of trade secrets or classified information.

My advice to Anthropic management (if they were my client, which they are not): take no adverse action against letter signatories as a group. Focus enforcement on anyone who leaked confidential information. And restructure the defense work through a subsidiary with separate employees who opt in, modeled on the Palantir approach.

NB
NicoleBaker_FormerGoogle

I was one of the Google employees who signed the Maven letter in 2018. I want to share what happened after, because the Anthropic employees are about to go through the same thing and they should be prepared.

The first phase is solidarity and energy. That is where you are now. It feels like you are changing the world. 900 signatures feels unstoppable.

The second phase is management's response. It will be sympathetic-sounding and vague. "Comprehensive review." "Listening sessions." "Shared values." This is designed to defuse the energy and run out the clock. It works. People think "okay, they heard us," and go back to their desks.

The third phase, and this is the one nobody warns you about, is the slow squeeze. Over the next 6-12 months, the organizers will be passed over for promotions. Their project proposals will be deprioritized. Their skip-level meetings will be canceled. Their performance reviews will be inexplicably lower than previous years. One by one, the most vocal people will leave — not fired, just made so miserable that they choose to go.

At Google, I lasted 14 months after the Maven letter before I realized I had been effectively sidelined. The final straw was when my manager told me, in a 1:1, that my "external activities" were "creating a distraction" and that I should "focus on impact." That is corporate-speak for "shut up or leave."

The NLRA theoretically protects against this. But proving constructive retaliation through a thousand small cuts is incredibly difficult. The NLRB is understaffed and backlogged by years. By the time your case is heard, you have already left the company.

My advice to the Anthropic employees: the next 30 days are critical. If you do not achieve structural change (a formal governance mechanism, a union, or a binding policy) in that window, the energy will dissipate and management will win by attrition. That is what happened to us.

JC
JaredChen_FounderCEO

Founder/CEO of an AI startup (we are pre-Series A, about 30 people). I am watching this closely because it directly affects how I think about my own company's future.

Here is what keeps me up at night: we are going to need revenue to survive. Government contracts are some of the largest, most stable revenue sources available to AI companies. If the social norm in the AI industry becomes "employees can veto government contracts," then the entire business model shifts. Either you only hire people who are comfortable with government work (which dramatically shrinks your talent pool) or you avoid government work entirely (which dramatically shrinks your revenue potential).

I am sympathetic to the Anthropic employees. I genuinely am. But I am also looking at my own team of 30 people and thinking about what happens when we need to make hard choices about customers. Do I need to pre-negotiate every major contract with my engineering team? Where does employee voice end and management authority begin?

The answer the labor lawyers will give is "NLRA Section 7." But running a company is not a legal abstraction. It is about trust, culture, and the practical ability to execute strategy. If every strategic decision is subject to employee referendum, you cannot execute. Period.

RL
RachelLiu_EmpLaw
Employment Atty

@JaredChen_FounderCEO I want to push back gently. Nobody is saying every business decision should be subject to employee referendum. The NLRA does not give employees veto power over business decisions. It gives them the right to collectively voice concerns about working conditions without being fired for it.

The distinction matters. Anthropic employees are not demanding the right to approve every contract. They are demanding transparency about contracts that fundamentally change the nature of their work, and a mechanism to raise concerns when they believe the company is violating its own stated principles. That is not a referendum. It is basic governance.

Also, from a practical standpoint: you WANT your employees to feel empowered to raise concerns. A company where nobody speaks up is a company where problems fester until they explode. The fact that 900 Anthropic employees felt strongly enough to sign a letter is a failure of internal communication, not a failure of employee loyalty. If management had created channels for this kind of feedback before the crisis, the letter would not have been necessary.

As a founder, you should be thinking about how to build those channels now, before you face your own version of this crisis. Because you will. Every company that grows large enough eventually faces a moment where employee values and business strategy collide.

MD
MikeDavis_VetSWE

Military veteran turned software engineer. I have a perspective that I think is missing from this conversation.

I spent 8 years in the Army, including two deployments. I now work in tech. I understand both the military's need for advanced technology and the tech workers' discomfort with military applications. I live at this intersection every day.

What frustrates me about the discourse is the abstraction. When people say "military AI" or "kill chain," they are using terms that erase the reality of what these tools do. AI in military operations can mean a lot of things. It can mean identifying IED emplacement patterns to keep soldiers alive. It can mean optimizing logistics so humanitarian aid gets to the right place faster. It can also mean helping select targets for kinetic strikes.

The employees have every right to know which of these uses their work is supporting. "We used Claude in Operation Epic Fury" tells you nothing about what it actually did. Was it processing satellite imagery? Analyzing signal intercepts? Optimizing flight paths? Each of those has a different ethical profile.

The problem with both sides of this debate is the lack of specificity. The company did not provide enough information for employees to make informed judgments. And the employees, in turn, are protesting based on assumptions about what the military is doing with the tool. Both sides would benefit from more transparency, but classification rules make that transparency nearly impossible.

That is the fundamental trap of military AI. The people who build the tools cannot know how they are used, and the people who use the tools cannot tell the builders. Informed consent is structurally impossible in this context.

IM
IsabelMorales_CivLib

@RyanHarper_Libertarian Your "operational security" argument for the contract ban is doing a lot of heavy lifting. Let me dismantle it.

The employees did not refuse to work. They did not sabotage anything. They signed a letter expressing disagreement with a business decision. If that is enough to constitute an "operational security concern," then every employee at every defense contractor who has ever expressed any opinion about U.S. foreign policy is a security risk. That is not a standard anyone actually wants to apply consistently.

What you are actually describing is a loyalty test: employees must not only do their work, they must also publicly support the government's use of that work, or the government will punish the company. That is not how a democracy works. It is not how the NLRA works. And it is not how the First Amendment works.

The Trump administration is not banning Anthropic because of a genuine security concern. It is banning Anthropic because the President saw an opportunity to punish a company associated with AI safety (which he views as "woke") and reward a company (OpenAI) whose leadership has been more politically aligned with the administration. This is contract allocation as political patronage, which is illegal under the Competition in Contracting Act.

PL
PaulLambert_TaxCPA

CPA here, slightly off-topic but critically relevant to any Anthropic employee thinking about leaving. If you are considering resignation, you need to understand the tax implications of your equity position.

If you have already exercised your ISOs, leaving does not change your holding period for LTCG treatment (2 years from grant, 1 year from exercise). But if you have unexercised options, most plans give you only 90 days post-termination to exercise; after that, the options typically expire worthless. And even if your plan allows a longer window, ISOs exercised more than 90 days after termination lose their ISO status and are taxed as NSOs.

At Anthropic's last valuation ($60B), the spread on those options could be enormous. If you exercise and the FMV has increased significantly since your strike price, you could face a massive AMT hit in the current tax year. And if the company's valuation drops because of this crisis, you could end up paying AMT on phantom gains — tax on appreciation that evaporated before you could realize it.

Bottom line: do not resign in a moment of moral clarity without consulting a tax advisor about your equity. I have seen too many tech workers take principled stands and then get destroyed by an unexpected six-figure tax bill. Principles are important. So is not going bankrupt. Get advice before you make any irrevocable decisions.
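To make the phantom-gain risk concrete, here is a toy back-of-the-envelope sketch. The numbers are entirely hypothetical (not Anthropic-specific), the flat 28% rate is a rough stand-in, and it ignores the AMT exemption, phase-outs, state tax, and disqualifying dispositions. Illustration only, not tax advice:

```python
# Toy model of the ISO/AMT "phantom gain" problem.
# HYPOTHETICAL numbers throughout; ignores AMT exemption amounts,
# phase-outs, state tax, and disqualifying-disposition rules.

def iso_amt_preference(shares: int, strike: float, fmv_at_exercise: float) -> float:
    """Bargain element added to AMT income when ISOs are exercised and held:
    shares * (FMV at exercise - strike price), floored at zero."""
    return shares * max(fmv_at_exercise - strike, 0.0)

# Hypothetical: 10,000 options at a $5 strike, exercised at a $45 FMV.
spread = iso_amt_preference(10_000, 5.00, 45.00)   # $400,000 of AMT income

# At a rough 28% AMT rate, that is ~$112,000 of tax due THIS year --
# owed even if the shares later fall to $10 and the paper gain evaporates.
approx_amt = round(spread * 0.28)

print(f"AMT preference income: ${spread:,.0f}")
print(f"Approximate AMT owed:  ${approx_amt:,}")
```

The point of the sketch is the timing mismatch: the AMT bill is fixed by the spread on the exercise date, while the actual gain depends on where the stock is when you can finally sell.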

KM
KellyMartinez_Mod
OP MOD

Breaking update: OpenAI has officially confirmed it has assumed the Pentagon AI contract previously held by Anthropic. In a blog post titled "Serving America's Defense," OpenAI's leadership stated that the company is "proud to support the U.S. military's mission to protect national security and uphold democratic values through advanced AI technology." No mention of employee consultation or ethical review processes.

Additionally, reporting from Reuters indicates that at least 15 Anthropic employees have already submitted their resignations, including two senior safety researchers. Anthropic has declined to comment on specific personnel matters.

CH
ChrisHarrington_Ethics

AI ethics researcher at a university. The OpenAI statement is the most predictable thing that has happened in this entire saga.

Let me put this bluntly: OpenAI pivoted from a nonprofit committed to "safe, beneficial AGI" to a capped-profit entity to a for-profit corporation in under a decade. It went from "we will publish all our research" to "we will publish nothing proprietary" to "we will take the Pentagon contract that the safety company just got fired from." The trajectory tells you everything about what happens when commercial incentives collide with stated values. Values lose. Every time.

This is not unique to AI. It is the story of every industry that started with idealistic founders and ended with corporate consolidation. The difference is that AI systems have a unique capacity for harm at scale. A social media company that compromises its values produces misinformation. An AI company that compromises its values could produce autonomous weapons systems.

The 900 Anthropic employees who signed that letter are trying to hold the line. But the structural forces arrayed against them — investor pressure, competitive dynamics, government coercion — are enormous. I do not know if they will succeed. I know they are right to try.

RT
RichardTong_AngInvestor

Angel investor and Anthropic shareholder (through a fund). I want to address the financial reality because the idealism in this thread, while admirable, is disconnected from how companies actually survive.

Anthropic burns approximately $2B per year on compute alone. Its commercial revenue, while growing, does not cover that burn rate. The DoD contract was reportedly worth $500M+ over 3 years. That is not pocket change — it is the difference between Anthropic being a going concern and Anthropic needing to raise another massive round at potentially unfavorable terms in a hostile political environment.

With the contract gone and the political situation making any government work toxic for the foreseeable future, Anthropic has to find that revenue somewhere else. Enterprise sales are growing but not fast enough. Consumer subscriptions are a fraction of what is needed. The company is now in a significantly weaker financial position.

I respect the employees' principles. But principles do not pay for H100 clusters. If Anthropic cannot find a path to financial sustainability, it will not matter how many employees signed the letter, because there will not be a company left in which to have the debate. Google, Microsoft, or Amazon will absorb the remains in an acqui-hire, and those companies will absolutely take military contracts without blinking.

The employees may have won the moral argument while losing the war. That is not a comfortable thought, but it is the financial reality.

SJ
SamanthaJones_HRTech

HR tech consultant. I want to address something @JessicaWong_HR raised about the retaliation playbook. She is right about how it usually works in most companies, but I think the dynamics here are different in important ways.

When 900+ people sign a letter, the usual targeted retaliation strategy breaks down mechanically. You cannot build performance documentation cases against 900 people simultaneously without it being obvious. The NLRB would have a field day with that pattern.

More importantly, Anthropic is in a labor market where its employees have enormous individual bargaining power. These are not warehouse workers (no offense to warehouse workers who also deserve protections). These are people with $300K-$600K total comp packages who could get hired at a competitor tomorrow morning. The threat is not that they will be fired — it is that they will leave voluntarily, taking institutional knowledge and research capabilities with them.

What Anthropic should be doing right now is NOT playing the retaliation game. It should be engaging substantively with the employees' demands. Create the governance mechanism. Institute a formal dual-use review process. Bring employees into the decision-making structure. Not because the law requires it, but because keeping 900 highly compensated employees is a lot cheaper than replacing them.

The companies that survive these moments are the ones that treat employee activism as a signal, not a threat. The ones that treat it as a threat usually end up losing more than the contract was worth.

AW
AndrewWalters_MilTech

Retired military, now working in defense tech. I want to share something I think is being missed in the strategic analysis of this situation.

China's military AI integration is not a theoretical future threat. The PLA demonstrated AI-assisted command and control capabilities in their exercises around Taiwan last year. The systems they are deploying are not subject to any ethical review, any employee input, or any transparency requirements whatsoever. They are being built by engineers who have no choice in the matter, using training data that was not collected with consent, and deployed with no human-in-the-loop requirements.

I understand the Anthropic employees' position. I even sympathize with much of it. But I need them to understand what the alternative looks like on the geopolitical stage. The alternative is not "no AI in warfare." The alternative is "only authoritarian AI in warfare." The U.S. military will be slower, less capable, and more likely to make catastrophic errors without access to the best AI systems. Real people — soldiers, civilians in conflict zones — will die as a result of that capability gap.

The employees may say "that is not our problem." And legally, they are right. They have no obligation to support the military. But morally? I think the calculus is more complicated than "military AI bad." And I say that as someone who has seen firsthand what happens when the military has bad intelligence and outdated tools.

LV
LisaVaughn_1stAmend
1st Amend Atty

First Amendment attorney. I want to highlight something that has gotten lost in the policy debate: the Trump ban may be the most legally significant part of this entire story from a constitutional law perspective.

If the administration banned Anthropic from government contracts because of the political speech of its employees, that is a textbook First Amendment violation under the unconstitutional conditions doctrine. It does not matter that no individual employee has a right to a government contract. The government cannot use contract allocation to punish speech. Full stop.

The evidence here is unusually strong for this type of case. Trump's own social media posts explicitly link the ban to the company's political character ("Radical Left AI company"). The timing — ban announced hours after the letter became public — speaks for itself. There is no pretextual justification that can survive that evidentiary record.

If Anthropic sues (and I think they should), this could become a landmark case on government retaliation against corporate employees' speech. The Umbehr line of cases established that independent contractors have First Amendment protection in their contractual relationships with the government. This case would extend that principle to say that the government cannot punish a corporation for the protected speech of its employees. That is a significant expansion of First Amendment doctrine and it is one that I think the current Supreme Court might actually support, given its strong free speech orientation in recent terms.

The problem is practical: litigation takes years. The contract goes to OpenAI tomorrow. But the precedent matters far beyond this one case.

YK
YasminKhalil_PhDCandidate

PhD candidate in CS, studying AI alignment. I had an offer from Anthropic that I was supposed to start in April. I am now seriously reconsidering.

The thing that bothers me is not the defense contract per se. It is the organizational culture that allowed it to happen the way it did. If the leadership team can make a decision this consequential without informing the people who build the technology, what else are they not telling employees? What other compromises have been made quietly?

My research is in alignment — making AI systems do what humans actually want them to do. The irony of working at a company where the AI might be more aligned with stated values than the leadership is not lost on me.

I am in conversations with two other labs now. I suspect many other early-career researchers in my position are making similar calculations. Anthropic's ability to attract top research talent was one of its biggest competitive advantages. If this crisis damages that reputation among grad students and postdocs, the long-term impact on the company's technical capabilities could be severe and compounding.

RJ
RobertJackson_NatSec

Coming back to respond to several people who pushed back on my earlier post. I appreciate the substantive engagement even where we disagree.

@LauraPetrov_LaborLaw You are correct that Anthropic is not Lockheed. But I think the distinction cuts both ways. Anthropic employees accepted compensation packages that valued the company at $60B. That valuation was not based solely on consumer chatbot revenue. It was based on the total addressable market, which includes government and defense. The employees benefited financially from the business strategy they are now protesting.

That does not negate their right to protest. But the moral clarity of the position is complicated when your stock options are denominated in a valuation that assumed defense revenue. "I object to military contracts, but I will keep the equity that priced in those contracts" is a coherent legal position but a less coherent moral one.

@IsabelMorales_CivLib On the loyalty test point: I did not suggest employees should be required to support government policy. I said the government has a legitimate interest in the reliability of its supply chain. If over half the engineers at a weapons manufacturer signed a letter saying they morally oppose making weapons, the Pentagon would understandably look for another supplier. That is not a loyalty test. It is supply chain risk management.

The employees have every right to speak. The government has every right to act on that information. Those rights coexist.

TB
TommyBernard_JuniorDev

Junior dev, 2 years in the industry. I just want to say that this thread has been one of the most informative things I have read about the intersection of tech, law, and politics. The legal analysis especially — NLRA Section 7, California Labor Code 1101-1102, the PBC angle, the security clearance wrinkle — is stuff that they absolutely do not teach you in a CS program.

For those of us early in our careers, the Anthropic situation is a wake-up call. We need to read our employment agreements more carefully before signing them. We need to ask harder questions during interviews about government work and military contracts. And we need to understand our legal rights before we need them, not after.

Saving this thread for future reference. Thank you to the attorneys and experienced professionals who are sharing their expertise here. This is genuinely valuable public service.

DR
DianaRoss_IntlLaw
Intl Law Atty

International humanitarian law attorney. I want to add a dimension that this thread has not addressed: the international legal implications of AI-assisted military targeting.

Under the Geneva Conventions and Additional Protocol I, parties to a conflict must take "constant care" to spare civilians and must verify that targets are military objectives. Under Article 57, those who plan or decide upon an attack must "do everything feasible" to verify targets and minimize civilian harm. The question with AI-assisted targeting is whether relying on an AI system's analysis satisfies the "feasibility" requirement for human verification.

The ICRC has taken the position that AI decision-support tools in targeting must be transparent and explainable to the humans in the loop. If the military is using Claude — a large language model that is fundamentally not explainable in the way the ICRC envisions — there may be arguments that the resulting targeting decisions do not satisfy IHL requirements for human oversight.

This is relevant to the employees because if the use of Claude in Operation Epic Fury resulted in civilian casualties (which we do not yet know), the employees who built the system could theoretically face scrutiny under the principle of aiding and abetting violations of IHL. That is an extremely unlikely legal outcome, but the fact that it is even theoretically possible is something the employees and their counsel should be aware of.

The broader point is that AI in military operations does not just raise domestic employment law questions. It raises questions under the laws of war that the entire AI industry has been avoiding for years.

MR
MarioRivera_DataScience

Data scientist at a Fortune 500. The thing I keep coming back to is the Streisand effect of Trump's response.

Before the "Radical Left" declaration, this was an internal corporate dispute that most Americans would never have heard about. It would have been covered in tech press and legal blogs and that is about it. By turning it into a culture war flashpoint, Trump made the Anthropic protest the biggest tech story of 2026 so far. Claude became the number one app on the App Store within 48 hours of the ban — people downloaded it just because the President told them not to use it.

From a pure brand perspective, being banned by the Trump administration may be the best thing that ever happened to Anthropic's consumer business. The "safety company that stood up to the Pentagon" is a compelling narrative, even if the reality is much messier than that framing suggests. Every person who downloads Claude because they saw the news coverage represents consumer revenue that partially offsets the lost government contract.

I am not saying this was planned. But the outcome is deeply ironic: the punishment intended to destroy Anthropic may end up strengthening it in the commercial market. The question is whether the consumer and enterprise revenue growth is fast enough to fill the hole left by the DoD contract before the company runs out of runway.

JP
JenniferPark_StartupAtty
Startup Atty

Startup attorney here. @GraceHenderson_CorpGov raised the PBC derivative action angle and I want to expand on it because I think this could be genuinely significant for corporate governance law.

Anthropic's PBC charter obligation is to "the responsible development and maintenance of advanced AI for the long-term benefit of humanity." There are two potential PBC claims here, and they cut in opposite directions:

  1. Employees could argue that the board breached its PBC obligations by approving a military contract that was inconsistent with "responsible development." This is the employees' argument.
  2. Investors could argue that the board breached its fiduciary duties by failing to manage the employee relations risk that led to losing the contract and the government ban. This is the investors' argument.

Under Delaware law, both claims would be evaluated under an enhanced version of the business judgment rule that applies to PBCs. The board needs to demonstrate that it balanced stockholder interests with the stated public benefit. If the board approved the DoD contract without consulting the safety team or considering employee impact, that process failure could undermine the business judgment presumption.

I think the investors' claim is actually stronger than the employees' claim, which is a fascinating inversion. The board may face more legal exposure for HOW it handled the contract (poorly managing the human capital risk and failing to anticipate the backlash) than for WHETHER it took the contract in the first place.

NW
NathanWright_DevOps

Follow-up to my earlier post about the technical reality. I have been reading through OpenAI's blog post about assuming the Pentagon contract and one thing jumped out at me.

OpenAI is promising "dedicated infrastructure" and "isolated deployment" for the military use case. This is the Palantir model that Derek described — separate systems, separate teams, separate access controls. It is also exactly what Anthropic should have done from the beginning.

The fact that OpenAI is implementing this architecture from day one tells you that they learned from Anthropic's mistake. When you commingle your consumer and military deployments, you create exactly the kind of ethical and operational mess that Anthropic is now dealing with. When you isolate them, employees who want to work on the military project can opt in, and employees who do not can work on consumer products without worrying about what their code is being used for.

This is not rocket science. It is basic separation of concerns, which every engineer learns in their first year. The fact that Anthropic's leadership failed to implement it suggests either incompetence or a deliberate choice to obscure the military use case from employees. Neither explanation is flattering.
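To make the opt-in model concrete, here is a toy access-control sketch. Everything in it (the `Deployment` and `Engineer` types, the tier names, the rule itself) is invented for illustration; it is not a description of Anthropic's, OpenAI's, or Palantir's actual systems.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Deployment:
    name: str
    tier: str  # "consumer" or "government"

@dataclass
class Engineer:
    name: str
    opted_into_government: bool = False

def can_access(engineer: Engineer, deployment: Deployment) -> bool:
    """Consumer deployments are open to everyone; government-tier
    deployments require an explicit, individual opt-in."""
    if deployment.tier == "consumer":
        return True
    return engineer.opted_into_government

# Hypothetical deployments and staff:
claude_consumer = Deployment("claude-consumer", "consumer")
claude_gov = Deployment("claude-gov", "government")

alice = Engineer("alice", opted_into_government=True)
bob = Engineer("bob")  # never opted in

print(can_access(alice, claude_gov))     # True
print(can_access(bob, claude_gov))       # False
print(can_access(bob, claude_consumer))  # True
```

The point of the sketch is that the policy is one conditional: commingled deployments make this check impossible because there is no `tier` boundary to enforce it against.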

SN
SarahNguyen_AnthEmp

Final update from inside (for now). Anthropic held an all-hands meeting yesterday. I will share what I can without violating any confidentiality agreements.

Dario spoke for about 45 minutes. The tone was somber. He acknowledged that the communication around the DoD contract was "insufficient" and that employees "deserved to know more about the scope of our government partnerships." He announced the creation of an internal Deployment Review Board with employee representatives that will review all future government and military-adjacent contracts before they are signed. He also committed to publishing a public transparency report quarterly that will describe (at a high level) the categories of customers using Claude and the broad use case categories.

On the specific question of Operation Epic Fury, he said he is "limited in what he can share" due to classification restrictions, but acknowledged that "the deployment exceeded the scope of what was communicated to the team." That is the closest to an admission of fault we are likely to get.

The reaction in the room was mixed. Some people felt it was a genuine step forward and evidence that the protest worked. Others felt it was too little, too late, and the same crisis PR playbook that @NicoleBaker_FormerGoogle warned us about. The organizers of the letter are taking the position that the Deployment Review Board is a good start but needs to have actual veto power, not just advisory authority. Negotiations are ongoing.

I do not know how this ends. But I know that 900 people raised their voices and the company heard them. Whether it heard them enough remains to be seen. I will update this thread as things develop. Thank you to everyone here who provided legal context — it has genuinely helped us understand our rights and our options.

KM
KellyMartinez_Mod
OP MOD

Thank you to everyone who has contributed to this discussion. This thread now has over 50 posts covering employment law, constitutional law, international humanitarian law, corporate governance, military strategy, AI ethics, tax implications, and union organizing. That is the kind of cross-disciplinary analysis that complex issues deserve.

Summary of key legal and practical takeaways from this thread:

  • NLRA Section 7 almost certainly protects the act of signing the open letter as concerted activity for mutual aid or protection. Anthropic cannot lawfully retaliate against signatories for this speech.
  • California Labor Code 1101-1102 provides additional state-level protection for political activities, which likely covers protesting military AI use.
  • Security clearance holders may face different risks, as clearance revocation decisions are largely unreviewable by courts under Department of the Navy v. Egan (1988).
  • The Trump ban may violate the unconstitutional conditions doctrine and the Competition in Contracting Act, but litigation would take years to resolve.
  • PBC derivative actions are a potential avenue for both employees (challenging the contract's consistency with the stated mission) and investors (challenging the board's management of the crisis).
  • Palantir's tiered access model is widely cited as a practical framework that could have prevented this situation.
  • Equity and tax implications should be carefully considered by any employee contemplating resignation — consult a CPA before making irrevocable decisions.
  • Document everything — performance reviews, communications, policy changes, management statements — in case of future retaliation claims.

I am keeping this thread pinned as a megathread. New developments will be added here as the situation evolves. If you are an Anthropic employee or someone directly affected by these events, please consult with a qualified attorney in your jurisdiction. Nothing in this thread constitutes legal advice.

Thread remains open for discussion.
