Private members-only forum

MEGATHREAD PINNED Tech Worker Revolt — 900+ Sign 'We Will Not Be Divided' Against Military AI

Started by KellyMartinez_Mod · Apr 2, 2026 · 54 replies · General Discussion
Information shared here is for educational purposes only and does not constitute legal advice.

Consolidating the multiple threads on this into one megathread since the discussion is moving fast and touching on several areas of law and policy.

Here is what we know as of this morning. Over 900 Anthropic employees have signed an open letter titled "We Will Not Be Divided," protesting the company's contract with the U.S. Department of Defense. The letter was published internally on Friday and leaked to the press within hours. Signatories include senior researchers, safety team leads, and at least two members of Anthropic's technical staff leadership.

The immediate trigger appears to be Operation Epic Fury, the U.S. military strikes on Iranian military assets on February 28. Reporting from The Intercept and confirmed by Anthropic's own spokesperson indicates that Claude AI was used in operational planning, targeting chain analysis, and logistics coordination for the strikes. Employees say they were not informed that Claude had been deployed in active military operations and that this violates Anthropic's own Responsible Scaling Policy.

Since then, the situation has escalated rapidly. President Trump called Anthropic a "Radical Left AI company" in a Truth Social post and the administration has moved to ban Anthropic from all government contracts. OpenAI reportedly secured the Pentagon deal within hours of the ban being announced.

This thread will cover: (1) employment law protections for the signatories, (2) the ethics and policy of military AI, (3) corporate governance implications, and (4) the competitive dynamics with OpenAI and China. Please keep discussion substantive. Personal attacks on named individuals will be removed.

RL
RachelLiu_EmpLaw
Employment Atty

Employment attorney here, practicing in California for 14 years. Let me lay out the legal framework because I am already seeing a lot of misinformation on social media about what these employees can and cannot do.

First, the National Labor Relations Act Section 7 protects employees' rights to engage in "concerted activity" for "mutual aid or protection." This applies to non-union workplaces. Signing a collective open letter about working conditions — which includes concerns about what your labor is being used for — has been found to be protected concerted activity in numerous NLRB decisions. The key cases on what counts as concerted activity are Meyers Industries (1986) and, more recently, Alstate Maintenance (2019).

Second, and this is critical for Anthropic employees specifically, California Labor Code sections 1101 and 1102 prohibit employers from making rules or taking action that prevent or control employees' political activities or affiliations. Protesting military use of AI is arguably political activity, and California courts have interpreted these statutes broadly. The Gay Law Students Assn. v. Pacific Telephone decision established that these protections extend beyond party politics to broader social and political engagement.

Third, Anthropic is headquartered in San Francisco. The San Francisco Police Code Article 33H provides additional protections against employer retaliation for lawful off-duty conduct.

That said, there are limits. If employees access classified or confidential information about the contract to use in their protest, that could fall outside protected activity. If individuals are insubordinate in the workplace (refusing direct work assignments, sabotaging systems) rather than engaging in collective speech, the analysis changes significantly. But the act of signing a letter? That is about as textbook Section 7 as it gets.

DT
DanielTorres_DefenseAI

I work in the defense AI space (not at Anthropic, at a major prime contractor) and I have a very different perspective that I know will be unpopular here.

These employees signed up to work at an AI company. Anthropic's mission is to build safe, beneficial AI. There is a very reasonable argument that ensuring the U.S. military uses the most safety-conscious AI systems — rather than something thrown together by a contractor with no alignment research — IS the safe and beneficial path. If Claude is not in that targeting chain, something worse will be. That is not a hypothetical. That is what happens on Monday when OpenAI takes over and deploys a model with far fewer safety guardrails in its architecture.

I watched the Google Project Maven walkout in 2018. You know what happened after those engineers walked out? Google pulled out of Maven, and the contract went to Palantir and a constellation of smaller firms with zero interest in AI ethics. The net result was strictly worse for everyone who cared about responsible AI in defense applications.

The 900 signatories feel morally righteous, I am sure. But they need to think through the second-order consequences of what they are demanding. Unilateral disarmament by the most safety-focused AI lab in the world does not make anyone safer. It just means the Pentagon uses less safe tools, and China gets closer to parity.

AP
AishaP_TechEthics

@DanielTorres_DefenseAI The "if we do not do it, someone worse will" argument is a moral dead end and has been used to justify every form of complicity throughout history. The engineers at Los Alamos could have made the same argument. Some of them did. Many of them spent the rest of their lives regretting it.

The issue is not whether the U.S. military should use AI. That ship has sailed. The issue is whether employees have the right to know and consent to how their work is used, and whether a company that explicitly branded itself as the "safety-first" alternative gets to secretly deploy its model in active military strikes without telling the people who built it.

Read the open letter. The employees are not saying "Anthropic should never work with the government." They are saying they were lied to about the scope and nature of the contract, that the Responsible Scaling Policy was violated, and that they should have a voice in these decisions. That is not unilateral disarmament. That is asking for basic transparency and consent in the employer-employee relationship.

Also, the China argument is a red herring deployed every single time anyone raises ethical concerns about any technology. DeepSeek and Chinese military AI development will proceed regardless of whether Anthropic specifically has a Pentagon contract. American national security does not rest on one company's API.

MK
MarcusKim_BigLaw
Attorney

Adding to @RachelLiu_EmpLaw's excellent analysis. There is a wrinkle here that I have not seen discussed: the intersection of at-will employment and NLRA protections when the employer is a government contractor (or was, until the ban).

Anthropic employees are almost certainly at-will. California courts construe implied employment contracts based on employee handbooks narrowly (see Guz v. Bechtel, 2000), so the at-will presumption is hard to overcome. But at-will does not mean the employer can fire you for any reason — it means they can fire you for any reason that is not illegal. And retaliating against NLRA-protected activity is illegal under Section 8(a)(1).

The more interesting question is what happens with employees who have security clearances. If Anthropic's DoD contract required certain employees to hold clearances, and those employees signed the letter, the government could theoretically move to revoke clearances based on "reliability" concerns. That is a very different legal framework — security clearance decisions are largely unreviewable by courts under Department of the Navy v. Egan (1988).

I would strongly advise any Anthropic employee with a security clearance who signed this letter to consult with a national security attorney immediately. The NLRA analysis that applies to most signatories may not protect you if the clearance revocation comes from the government rather than the employer.

JW
JessicaWong_HR

HR professional here (15 years in tech, including two FAANG companies). I want to speak practically about what Anthropic's management is likely doing right now behind closed doors.

When 900+ employees sign something like this, you cannot fire them all. It is not legally possible (mass retaliation would be an NLRB slam dunk) and it is not practically possible (you would lose your entire engineering workforce). So the playbook is usually:

  1. Identify the "ringleaders" — the people who organized the letter, not just signed it. Put them on a list.
  2. Begin building performance documentation for those individuals. Not immediately, but over the next 3-6 months. Suddenly their code reviews get more scrutiny, their project timelines get tighter, their peer feedback is weighted differently.
  3. Issue a company-wide statement acknowledging "diverse perspectives" while reaffirming the company's right to make business decisions.
  4. Quietly restructure or eliminate the teams most associated with the protest.
  5. Make the next round of layoffs disproportionately impact signatories, but frame it as "business needs" and "organizational restructuring."

I am not saying this is right. I am saying this is what I have watched happen at two different companies after employee protests. The retaliation is never overt. It is always structural and deniable. If you are an organizer of this letter, you need to be documenting everything starting yesterday. Save every performance review, every Slack message, every 1:1 note. You will need it.

SN
SarahNguyen_AnthEmp

Throwaway-ish account but I am a current Anthropic employee who signed the letter. I will be careful about what I say but I want to correct some things being reported.

First: we were told the DoD contract was for "research and evaluation purposes." That was the framing in the all-hands. Many of us understood that to mean the military was evaluating Claude's capabilities, not deploying it in active operations. When the Operation Epic Fury reporting came out, people were genuinely shocked. Not naive — shocked at the gap between what we were told and what apparently happened.

Second: the letter was not organized by some fringe group. It started in a Slack channel that already had 400+ members within the first two hours. People across every team — safety, alignment, infrastructure, policy, even some people on the partnerships team — signed. This is not a dozen disgruntled engineers. This is the majority of the technical staff.

Third: some people are saying Dario wanted the contract all along and the "safety mission" was always marketing. I do not believe that is true, and I think it is unfair. But I also think the leadership team got in over their heads. The revenue pressure from investors after the last funding round was enormous. The DoD contract was worth — well, I should not say the number. But it was significant enough to change the company's financial trajectory.

We are not anti-American. We are not "radical left." We are people who took specific jobs at a specific company because of specific promises about how our work would be used. Those promises were broken.

BF
BrianFoster_VC

VC perspective here (Series B/C stage investor, not an Anthropic investor). The market is watching this situation very carefully and the implications go well beyond one company.

Anthropic was valued at $60B in its last round. That valuation was predicated on (a) technical leadership, (b) enterprise revenue growth, and (c) the government/defense vertical. If the government vertical is gone — and it appears to be, at least for this administration — that is a material impairment to the business thesis.

But here is the thing that Wall Street types are missing: Anthropic's most valuable asset is its people. If you alienate 900+ employees to the point where they start leaving, you do not just lose the government contract. You lose the people who make the model competitive. And then you lose everything.

I am hearing from founders in my portfolio that they are watching how Anthropic handles this to decide whether to build on Claude or switch to OpenAI or Gemini. The enterprise market cares about stability and reliability. A company at war with its own workforce is not stable.

My prediction: Anthropic will try to thread the needle. Publicly sympathize with employees, quietly restructure the defense contracts through a subsidiary or partner, and hope this blows over. Whether the employees accept that framing is the real question.

RJ
RobertJackson_NatSec

Former DoD civilian here, spent 20 years in defense acquisition. I want to push back on several things being said in this thread.

Operation Epic Fury was a lawful military operation authorized by the President under Article II powers. The targets were Iranian military assets that had been used to attack U.S. personnel and allies. Whether you agree with the policy or not, the operation was legal under domestic and international law.

The use of AI in operational planning is not new. The military has used algorithmic decision support tools since the 1990s. What changed is the capability level. Claude can process intelligence at a speed and volume that no human analyst team can match. That does not mean Claude is "choosing targets" — it means it is helping analysts process data faster. Every targeting decision still goes through a human chain of command. That is required by DoD Directive 3000.09 on autonomy in weapon systems.
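
For the engineers in this thread: in practice, "human in the loop" under 3000.09 means the model's output is advisory and nothing executes without an attributable human sign-off. Here is a minimal, hypothetical sketch of that gate pattern — every name is invented, and real systems are obviously far more involved:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    """A model-generated suggestion; inert until a human acts on it."""
    summary: str
    confidence: float
    approved_by: str | None = None
    approved_at: datetime | None = None

class ApprovalGate:
    """Decision-support pattern: AI output is advisory only.
    Execution requires an explicit, attributable human sign-off."""

    def __init__(self) -> None:
        self.audit_log: list[dict] = []

    def approve(self, rec: Recommendation, officer_id: str) -> Recommendation:
        rec.approved_by = officer_id
        rec.approved_at = datetime.now(timezone.utc)
        self.audit_log.append({
            "summary": rec.summary,
            "officer": officer_id,
            "time": rec.approved_at.isoformat(),
        })
        return rec

    def execute(self, rec: Recommendation) -> None:
        # Hard stop: no recorded human approval, no action.
        if rec.approved_by is None:
            raise PermissionError("No human approval recorded; refusing to act.")
        print(f"Executing '{rec.summary}' approved by {rec.approved_by}")
```

You can argue about how meaningful that sign-off is when analysts are reviewing machine output at machine speed, but that gate is the pattern the directive mandates.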

The idea that employees at a technology company should have veto power over lawful government use of commercially available tools is frankly alarming. Lockheed Martin employees do not get to vote on whether the F-35 is used in a particular conflict. Boeing engineers do not sign open letters about where their aircraft fly. The precedent these employees are trying to set would make the U.S. defense industrial base ungovernable.

I respect the right to protest. But let us not pretend this is some clear-cut moral victory. These employees are trying to override the democratically elected government's decisions about national defense. Think carefully about what that means.

LP
LauraPetrov_LaborLaw
Labor Atty

@RobertJackson_NatSec Your Lockheed/Boeing comparison has a fundamental flaw. Those companies never marketed themselves as ethical alternatives to the defense industry. They never recruited employees with promises about safety-first AI development. Anthropic did. That creates a different employment relationship and different reasonable expectations about the scope of one's work.

More importantly from a labor law perspective, there is a well-established principle that employers cannot make representations about working conditions during hiring and then unilaterally change those conditions. If Anthropic recruited employees by promising that Claude would not be used in military operations (and several former recruiters have confirmed this on social media), that could form the basis of either a Section 7 claim (change in working conditions) or potentially even a fraud in the inducement claim.

Also, your framing that employees are "overriding the democratically elected government" is a strawman. Employees are not commanding the military to do anything. They are telling their employer that they object to the employer's decision to participate in military operations. That is the most basic form of concerted employee activity protected by federal law since 1935.

The NLRA does not have a national security exception. Congress could have written one. It did not.

HV
HenryVoss_ExAnth

Left Anthropic about eight months ago so I feel comfortable sharing more context than current employees can.

The defense contract discussion started internally in mid-2025. Dario held a series of "office hours" where he laid out the case that engaging with the U.S. government was necessary both for revenue and for influence over AI policy. The pitch was: "We can either be at the table or on the menu." Lots of people were uncomfortable but the framing was always about research partnerships, safety evaluations, and red-teaming exercises.

What was never discussed in those sessions was operational deployment. There is a massive difference between helping the DoD understand Claude's capabilities and limitations, and having Claude integrated into a kill chain. I do not care how many human approvals are in the loop — if the AI is processing targeting data, it is part of the kill chain.

I left because I could see where things were heading. The moment Amazon's $4B investment came with strings, the company's identity started to shift. Not overnight, not dramatically, but steadily. The people on the safety team who raised concerns were not fired — they were gradually sidelined. Their projects got deprioritized. Their headcount requests were denied. And the people who were enthusiastic about commercial partnerships got promoted.

The employees signing this letter are braver than I was. I chose to leave quietly. They are choosing to fight from the inside. I hope it works, but based on what I saw, I am not optimistic.

CT
CarmenTrujillo_IP
IP Attorney

I want to flag an intellectual property angle that nobody seems to be discussing. Many Anthropic employees have assignment clauses in their employment agreements that assign all work product to the company. That is standard in tech.

But there is an interesting question about whether employees who contributed to Claude's training data curation, RLHF, or constitutional AI methodology have moral rights (in the civil law sense) or at minimum equitable arguments about how their specific contributions are used. U.S. law does not recognize moral rights for most works, but the Berne Convention and some state laws (California Civil Code 987 for visual art) create carve-outs that could be relevant in novel contexts.

I know this is a stretch legally. But the philosophical question is important: when you contribute your expertise to training an AI system under the understanding that it will be used for certain purposes, and then it is used for fundamentally different purposes, is there any legal theory that gives you standing to object beyond just quitting?

The answer right now is probably no. But this is exactly the kind of case that could create new law. If any of the signatories decide to litigate, the IP angle could be fascinating territory.

NW
NathanWright_DevOps

Senior DevOps engineer at a different AI company. Just want to make a practical observation.

Everyone is debating the ethics and the law. Nobody is talking about the technical reality. Modern AI deployments do not have a clean separation between "research use" and "operational use." Once you give the DoD API access to Claude, what they do with those API calls is largely outside Anthropic's control. You can put usage policies in the contract, but enforcing them requires monitoring every query and response, which is both technically challenging and potentially illegal if the queries contain classified information.

This is not like selling someone a physical product. It is like giving someone a key to your brain and then being surprised when they use it for something you did not intend. The architecture of these LLM deployments makes the ethical boundary that Anthropic tried to draw practically unenforceable.
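
To make that concrete: contractual usage policies are typically enforced by a filter sitting in front of the model, something like the sketch below (patterns and names are mine, invented for illustration). The catch is that the filter only works if the provider can read every request — exactly what a classified deployment forbids.

```python
import re

# Illustrative prohibited-use patterns; a real system would use a trained
# classifier, but the structural problem is identical.
PROHIBITED_PATTERNS = [
    r"\btargeting package\b",
    r"\bstrike coordinates\b",
]

def call_model(prompt: str) -> str:
    # Stand-in for the actual model call.
    return f"[model response to: {prompt[:40]}]"

def violates_policy(prompt: str) -> bool:
    """Usage-policy check: note that it must read the full request to work."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in PROHIBITED_PATTERNS)

def handle_request(prompt: str) -> str:
    # This inspection step is the whole problem. If the provider is not
    # cleared to see the customer's queries, the policy is contract language
    # with no technical enforcement behind it.
    if violates_policy(prompt):
        return "Request blocked by usage policy."
    return call_model(prompt)
```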

The only way to ensure your AI is not used for military targeting is to not give the military access to your AI. That is the lesson of this entire debacle.

PS
PriyaSharma_GovCon

Government contracts attorney here. I want to address the Trump ban specifically because the legal basis is shaky and I think it matters for the broader discussion.

The President has broad authority to direct procurement policy under the Federal Property and Administrative Services Act. But banning a specific company from all government contracts based on the political speech of its employees would raise serious First Amendment concerns. The government cannot condition the benefit of a contract on the suppression of speech — that is Board of County Commissioners v. Umbehr (1996), which extended First Amendment retaliation protections to independent contractors.

More practically, the ban was announced on Truth Social before any formal debarment process was initiated. Debarment under FAR 9.406 requires notice, an opportunity to respond, and specific grounds (fraud, criminal conduct, serious violations of contract terms). "Their employees are radical leftists" is not a FAR-recognized ground for debarment.

I expect Anthropic's legal team will challenge this ban. But the political dynamics make it complicated. Do you sue the administration that just banned you and further inflame the situation? Or do you quietly accept it and try to get the decision reversed behind the scenes? That is a business judgment as much as a legal one.

The irony, of course, is that Trump's ban actually gives the employees what they wanted — no more Pentagon contract. But it gives it to them in the worst possible way, attached to a political punishment that threatens the entire company.

OG
OliverGrant_PolSci

Political science professor here, not a lawyer. I study tech worker activism and I have been following this story closely.

For historical context: the Google Project Maven walkout in 2018 involved roughly 4,000 employees signing a letter and about a dozen high-profile resignations. Google ultimately did not renew the Maven contract. But the aftermath is instructive. Google sat out the original JEDI cloud competition, then won a share of JWCC, JEDI's multi-vendor successor, expanded its defense work significantly, and eventually established a dedicated subsidiary serving government, defense, and intelligence customers. The walkout changed the optics but not the trajectory.

What makes the Anthropic situation different is the company's explicit identity as an AI safety organization. Google never claimed to be a safety-first company. Anthropic's entire brand, its entire recruitment pitch, its entire public persona was built on being the responsible alternative. That makes the perceived betrayal much deeper and the employee response much more intense.

The other difference is the political environment. In 2018, the backlash against Maven came from the left and the right largely ignored it. In 2026, the right is actively weaponizing the protest to attack the company. Trump's "Radical Left" framing turns what should be a labor dispute into a culture war, which makes it much harder for the company to find a middle ground.

My prediction: unlike Google, Anthropic cannot just absorb this and move on. The company has to choose an identity. Safety-first organization or commercial AI powerhouse. The two have become mutually exclusive in this political environment.

MB
MelissaBrown_Recruiter

Tech recruiter for 11 years, specializing in ML and AI placements. I want to add the hiring market perspective because it is directly relevant to how this plays out.

I have been fielding calls all weekend from Anthropic engineers asking about opportunities. Not 2 or 3 — dozens. These are L5/L6 equivalent engineers with deep expertise in ML infrastructure, RLHF, and constitutional AI. They are among the most sought-after talent in the industry right now.

Here is the thing: every single one of them has told me the same thing. They are not leaving yet. They want to see how Anthropic responds. But they are preparing their options. That is the most dangerous position for an employer — when your best people have one foot out the door and one eye on the job market.

Competitors are already circling. I know for a fact that Google DeepMind, Cohere, and two well-funded stealth startups I cannot name have activated targeted recruiting campaigns aimed at Anthropic employees this weekend. The poaching has begun.

Anthropic has maybe a 2-week window to respond in a way that keeps these people. After that, the resignations will start and they will cascade. Once the first senior researcher goes public with a resignation letter, it gives everyone else permission to follow.

JD
JamesDeluca_SWE

I am a software engineer in the Bay Area (not at Anthropic). This situation is exactly why I turned down an Anthropic offer 18 months ago. During the interview process, I asked directly about government and military work. The recruiter said — and I quote from my notes — "Anthropic does not pursue military contracts. Our focus is commercial enterprise and research."

That was apparently a lie, or at minimum it became a lie within months of them telling me that. I ended up at a company that is transparent about its government work (yes, it exists, we do some, and everyone knows about it before they sign the offer letter). I prefer honest complexity to dishonest simplicity.

The lesson here is not that AI companies should never work with the military. It is that you cannot build a brand on ethical purity and then quietly take a defense contract. The cognitive dissonance will destroy you from the inside.

EM
ElenaMarkov_BioEthics

Bioethicist here. I study dual-use technology dilemmas and the Anthropic situation is a textbook case of what we call the "dual-use research of concern" problem.

In the biological sciences, we resolved this (imperfectly) through institutional review boards, the Fink Report, and the NSABB framework. The key principle is that researchers have a right to participate in decisions about dual-use applications of their work, and institutions have an obligation to create governance structures that enable that participation.

The AI industry has nothing comparable. There is no IRB for AI deployment decisions. There is no institutional mechanism for employees to raise dual-use concerns through formal channels. Anthropic's Responsible Scaling Policy was the closest thing, and apparently it was either ignored or did not cover military deployment at all.

What the employees are really asking for is not the abolition of military AI. They are asking for governance. They want a process, a voice, a way to raise concerns that does not require leaking to the press. That is not radical. That is basic institutional design that every other dual-use technology sector figured out decades ago.

DK
DerekKhan_Palantir

I work at Palantir (yes, that Palantir). Let me tell you how we handle this because I think our model is relevant to this discussion.

Palantir has a tiered access system. Not every employee works on every contract. Employees who do not want to work on defense or intelligence projects are not required to. They work on commercial deployments, healthcare, supply chain, or other verticals. Employees who work on classified programs go through additional vetting and explicitly opt in. There is no ambiguity about what you are working on.
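
The opt-in model is also straightforward to express in software. Here is a toy sketch — the tiers and names are my invention for this post, not our actual system:

```python
from enum import Enum

class Tier(Enum):
    COMMERCIAL = 1
    GOVERNMENT_CIVIL = 2
    DEFENSE_CLASSIFIED = 3

class Employee:
    def __init__(self, name: str, opted_in: set[Tier]):
        self.name = name
        # Opt-in is explicit and recorded; nobody gets defense work by default.
        self.opted_in = opted_in

class Project:
    def __init__(self, name: str, tier: Tier):
        self.name = name
        self.tier = tier

def can_staff(employee: Employee, project: Project) -> bool:
    """An employee may only be staffed on tiers they explicitly opted into."""
    return project.tier in employee.opted_in

# An engineer who opted into commercial work is never silently
# assigned to a classified program.
eng = Employee("engineer_a", {Tier.COMMERCIAL})
assert not can_staff(eng, Project("defense_program_x", Tier.DEFENSE_CLASSIFIED))
```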

Is it perfect? No. There are resource allocation tensions and some people feel that the defense work subsidizes their commercial projects. But nobody at Palantir has ever been surprised to learn that their work was used by the military. It is in the company's DNA, practically.

Anthropic's mistake was trying to have it both ways: take the defense revenue while maintaining the fiction that they were purely a safety research organization. Palantir understood from day one that transparency about your customer base is not optional when your customers include the Pentagon.

The tiered access model is not just an ethical framework — it is a practical one. It lets you attract talent that does not want to do defense work AND talent that does, without lying to either group. Anthropic should have adopted something like this before signing the DoD contract.

AT
AnnaTsai_ConLaw
Con Law Atty

Constitutional law professor here. I want to address the Trump ban from a First Amendment perspective because @PriyaSharma_GovCon raised important points but I think the analysis goes deeper.

The unconstitutional conditions doctrine prohibits the government from conditioning a benefit (here, government contracts) on the surrender of constitutional rights (here, employee speech). Perry v. Sindermann (1972) established this in the employment context. Umbehr extended it to independent contractors.

But there is a complication. The employees are not the contractors — Anthropic is. Anthropic's employees have no direct contractual relationship with the government. So the question becomes whether the government can punish a company for its employees' speech. Under Citizens United, corporations have First Amendment rights. But do those corporate rights extend to protecting the speech of individual employees?

I think there is a strong argument that they do, under a compelled-speech theory. If the government says "we will not contract with you unless you silence your employees," that is functionally compelling the corporation to suppress speech as a condition of doing business with the government. That should fail strict scrutiny.

The practical problem is that the Trump administration will never frame it that way. They will cite "operational security concerns" or "reliability of supply chain" as the basis for the ban. Proving the real motivation was retaliatory requires discovery, which requires litigation, which takes years. By then, the contract has long since gone to OpenAI and the political moment has passed.

RH
RyanHarper_Libertarian

Hot take: everyone in this thread is overthinking this. These employees exercised their right to speak. The government exercised its right to choose contractors. The market is functioning exactly as designed.

You do not have a right to a government contract. You do not have a right to force your employer to reject revenue. You have a right to speak, and your employer has a right to decide its business strategy, and the government has a right to work with whichever vendor it chooses.

The employees spoke. Good for them. The consequences followed. That is how it works. The NLRA protects them from being fired for the speech. It does not protect the company from losing a contract because the government no longer trusts them.

And let us be real — if you were the Pentagon, would YOU want to run military operations on an AI system where 900+ of the employees who maintain it just publicly declared they oppose its use for that purpose? That is a genuine operational security concern, not a political vendetta.

KM
KellyMartinez_Mod
OP MOD

Quick moderation note: I have removed three posts that contained doxxing of specific Anthropic employees. This is a forum rule violation and will result in a permanent ban. Discuss the issues, not the individuals. If you have information about specific people, keep it off this forum.

Also updating the thread with breaking news: Anthropic's CEO Dario Amodei has released a public statement. Key quote: "We hear our employees and we share their commitment to the safe and responsible development of AI. We are initiating a comprehensive review of our government partnerships and will share the results with our full team." The statement does not address Operation Epic Fury specifically or acknowledge the open letter by name.

VR
VeronicaReed_ACLU

Civil liberties attorney here. The Dario statement is classic crisis PR: acknowledge the concern without admitting wrongdoing, promise a review that can take as long as you need it to, and avoid the specific allegations entirely.

What I am not seeing is any commitment to not retaliate against signatories. That is the most important thing right now. If Anthropic is serious about "hearing" its employees, it should immediately issue a written commitment — not a PR statement, but an enforceable written policy — that no adverse action will be taken against any employee who signed the letter or participated in organizing it.

Without that, the "comprehensive review" is just a mechanism to buy time while HR builds the files that @JessicaWong_HR described. I have seen this pattern dozens of times in my career. The company expresses sympathy publicly while quietly building the case to terminate the most vocal organizers.

If any Anthropic employee is reading this: document everything starting now. Screenshot the CEO's public statement and save it with a timestamp. If they later take adverse action against you, the gap between the public statement and the private action is your retaliation case.
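
For the technically inclined, a minimal standard-library way to do the timestamped save (the URL below is hypothetical; an independent web archive is even stronger evidence, because the timestamp is not under your control):

```python
import hashlib
import urllib.request
from datetime import datetime, timezone

def archive_page(url: str) -> None:
    """Save a copy of a public page with a UTC timestamp and a SHA-256 hash,
    so you can later show the saved copy was not altered."""
    raw = urllib.request.urlopen(url).read()
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    fname = f"evidence_{stamp}.html"
    with open(fname, "wb") as f:
        f.write(raw)
    digest = hashlib.sha256(raw).hexdigest()
    with open(fname + ".sha256", "w") as f:
        f.write(f"{digest}  {fname}  fetched {stamp} UTC\n")
    print(f"Saved {fname} (sha256 {digest[:12]}...)")

# Usage (hypothetical URL):
# archive_page("https://example.com/ceo-statement")
```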

TM
TylerMitchell_ML

ML researcher here. I want to address the technical argument that Daniel made earlier about "if Claude does not do it, something worse will."

This argument assumes that Claude's safety features are meaningfully operative in a military deployment. They are not. When you deploy an LLM for military use, the first thing you do is strip or modify the safety layers that prevent it from engaging with violent content. You have to — otherwise it would refuse to process targeting data because of its Constitutional AI training. The Claude that the Pentagon was using was not the same Claude that refuses to help you write malware. It was a fine-tuned, guardrail-reduced version optimized for the specific military use case.
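
Worth separating two layers for the non-ML readers. Some refusal behavior is trained into the weights; some is enforced by deployment-side wrappers like the deliberately oversimplified, hypothetical sketch below. The wrapper layer is trivially a configuration choice, and my point above is that for a dedicated customer even the weight-level behavior gets fine-tuned away.

```python
def is_disallowed(prompt: str) -> bool:
    # Stand-in for a trained safety classifier.
    return "malware" in prompt.lower()

def base_model(prompt: str) -> str:
    # Stand-in for the raw foundation model.
    return f"[model output for: {prompt}]"

def answer(prompt: str, moderation_enabled: bool = True) -> str:
    """Part of the refusal behavior users see lives in wrappers like this,
    not in the weights -- which makes it a deployment-time choice."""
    if moderation_enabled and is_disallowed(prompt):
        return "I can't help with that."
    return base_model(prompt)
```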

So the "at least Claude is safer" argument falls apart on inspection. The military version of Claude is not meaningfully safer than a military version of GPT-5 or Gemini Ultra. The safety research that Anthropic employees poured their careers into is literally not present in the deployed system. Their work was used to build a foundation model that was then stripped of the very features they cared most about.

That is not "responsible military AI." That is using safety researchers' labor to build a better weapon while telling them they are building a safer AI.

GH
GraceHenderson_CorpGov
Corp Attorney

Corporate governance attorney. I want to talk about Anthropic's unique corporate structure because it is directly relevant to whether the employees have any real legal leverage beyond the NLRA.

Anthropic is structured as a Public Benefit Corporation (PBC) in Delaware. PBCs have a legal obligation to consider the impact of their decisions on stakeholders beyond just shareholders — including employees, the community, and the environment. Under Delaware General Corporation Law Section 365(a), the board of a PBC must balance stockholder interests with the public benefit specified in its charter.

Anthropic's stated public benefit is "the responsible development and maintenance of advanced AI for the long-term benefit of humanity." If deploying AI in military strike operations is inconsistent with that stated benefit, then the employees are not just making a moral argument — they are pointing to a potential breach of the company's own legal charter.

Under DGCL Section 367, stockholders owning at least 2% of shares can bring a derivative action claiming the board failed to balance stakeholder interests as required by the PBC charter. Most Anthropic employees have equity. If enough of them are stockholders, they could potentially bring a derivative suit against the board for approving the DoD contract in violation of the PBC charter.

This is untested legal territory. There has never been a PBC derivative action based on a military contract. But the legal framework exists, and I would not be surprised if someone files one in the coming weeks.

WZ
WeiZhang_AIPolicy

AI policy researcher here, previously worked at a DC think tank focused on US-China tech competition. I want to give the China angle a more serious treatment than the dismissive responses it has gotten so far.

The PLA's Strategic Support Force has been integrating AI into military operations since at least 2020. DeepSeek, Baidu, and several Chinese companies with no public profile are supplying AI capabilities to the Chinese military with zero internal debate or employee pushback because that kind of dissent is simply not permitted in China's political system.

This is not a "red herring." This is the strategic reality. The U.S. is in an AI arms race whether it wants to be or not. The question is whether American values — including the value of employee speech and protest — are compatible with winning that race.

I think they are, but only if we build institutional structures that channel dissent into governance rather than letting it become a binary choice between "do everything the Pentagon asks" and "refuse all military work." The employees are right that they deserve transparency. The national security community is right that abandoning military AI is dangerous. Both things can be true simultaneously.

What we need is something like the Palantir model but formalized and industry-wide. An AI defense ethics board with employee representation, independent oversight, and real authority to set boundaries. Not another toothless advisory committee that gets dissolved the moment it becomes inconvenient.

KP
KatePowers_Whistleblower
Attorney

Whistleblower protection attorney. I need to clarify something important: the open letter signatories are NOT whistleblowers in the legal sense, and they should not be advised as if they are.

Whistleblower protections under federal law (Dodd-Frank, SOX, False Claims Act qui tam) require reporting of specific legal violations through specific channels. Signing an open letter protesting a lawful business decision is not whistleblowing. The DoD contract, as described publicly, does not appear to violate any law.

This matters because the legal protections are different. NLRA Section 7 protects concerted activity but the remedies are limited (reinstatement, back pay, no punitive damages). Whistleblower statutes provide much stronger protections including anti-retaliation provisions with real teeth and substantial monetary awards.

HOWEVER — if any employees have evidence that Anthropic misrepresented the nature of the contract to investors, that could be securities fraud, which would trigger Dodd-Frank whistleblower protections. If the contract was structured to avoid Congressional notification requirements, that could trigger other statutory protections. If classified information was mishandled, that opens up yet another avenue entirely.

The open letter signatories should be thinking carefully about whether they have information that goes beyond policy disagreement into actual legal violations. If they do, they need separate counsel and they should NOT be sharing that information in an open letter or on social media. They should be going to the SEC or the relevant Inspector General.

DT
DanielTorres_DefenseAI

@TylerMitchell_ML I appreciate the technical detail but your argument actually proves my point. You say the military strips safety features. That is exactly why you want the most safety-conscious lab at the table — to push back on that stripping, to set contractual requirements for what guardrails must remain, and to monitor deployment practices.

When Google pulled out of Maven, do you think the replacement contractors fought to keep ethical guardrails? They did not. They gave the Pentagon exactly what it asked for with no pushback whatsoever. Anthropic, whatever its flaws in this situation, was at least in a position to negotiate for responsible deployment. Now OpenAI has the contract, and based on their public statements, they view military work as an unambiguous good. Which company do you think will push harder for human oversight in the targeting chain?

The perfect is the enemy of the good. And in this case, the "perfect" (no AI in military operations) is genuinely not achievable. The choice is between imperfect engagement and total abdication. The employees chose abdication. History will judge whether that was brave or naive.

SN
SarahNguyen_AnthEmp

@DanielTorres_DefenseAI You keep making this argument but you are ignoring the central fact: we were not consulted. We were not told. We did not get to negotiate for responsible deployment because we did not know there was a deployment to negotiate about.

If Anthropic had come to us and said "The Pentagon wants to use Claude for operational planning. Here is what we have negotiated in terms of guardrails. Here are the red lines we have drawn. What do you think?" then we would have had that conversation. Many people would have been uncomfortable. Some would have objected. But there would have been a process.

Instead, the first time most of us learned about it was from a news article about a military strike that killed people. That is the betrayal. Not the existence of a defense contract. The deception about its scope and the complete absence of internal deliberation.

Please stop telling us what we should have wanted. We are telling you what we actually want: transparency, a voice in decisions about how our work is used, and accountability when leadership breaks its own stated principles.

AP
AlexPetrovich_Union

Labor organizer here (CWA). I have been in contact with some Anthropic employees and I want to address the elephant in the room: unionization.

This is exactly the kind of event that catalyzes union drives. You have a large group of employees who feel betrayed by management, who have demonstrated the ability to organize collectively (900+ signatures is impressive), and who want a formal voice in company decisions. That is literally what a union provides.

The tech industry has resisted unionization for decades by offering high compensation, stock options, and the illusion of a flat organizational culture. But when the illusion breaks — when employees realize they have no actual power over decisions that matter to them — the arguments against unionization start to ring hollow.

Under the NLRA, Anthropic employees have the right to form a union. If 30% of the bargaining unit signs authorization cards, the NLRB will hold a representation election. Given that 900+ employees already signed the letter, reaching that threshold would be trivial.

I am not saying a union is the answer to every problem here. But if these employees want a permanent, legally protected mechanism to negotiate about the use of their work, collective bargaining is the tool that exists. Everything else — open letters, protests, social media campaigns — is temporary and has no enforcement mechanism.

For what it is worth, the Alphabet Workers Union (CWA Local 1400) was formed after Project Maven. If the Anthropic employees are serious about structural change, they should be talking to CWA.

FS
FrankSullivan_MgmtSide
Mgmt Atty

Management-side labor attorney. Let me give the perspective from the other side of the table, because this thread is heavily tilted toward the employee view.

Anthropic's management has a fiduciary duty to its investors and (as a PBC) to its stated mission. Both of those duties may support the defense contract. The investors provided capital expecting revenue growth. The mission statement says "long-term benefit of humanity," which can reasonably include national defense. The board made a business judgment. That judgment is entitled to deference under the business judgment rule.

On the NLRA point: yes, the letter is likely protected concerted activity. But "protected" does not mean "consequence-free for the company." The company lost a massive government contract, in part because of the public nature of the protest. If the company can demonstrate that specific employees' actions went beyond protected speech — for example, leaking confidential contract terms to the press — those individuals can be lawfully terminated.

The line between protected concerted activity and unprotected disclosure of confidential information is fact-specific and case-dependent. If the open letter contains details that could only have come from employees with access to the DoD contract, that is a problem for those employees regardless of NLRA protection. Section 7 protects collective speech about working conditions. It does not protect the disclosure of trade secrets or classified information.

My advice to Anthropic management (if they were my client, which they are not): take no adverse action against letter signatories as a group. Focus enforcement on anyone who leaked confidential information. And restructure the defense work through a subsidiary with separate employees who opt in, modeled on the Palantir approach.

NB
NicoleBaker_FormerGoogle

I was one of the Google employees who signed the Maven letter in 2018. I want to share what happened after, because the Anthropic employees are about to go through the same thing and they should be prepared.

The first phase is solidarity and energy. That is where you are now. It feels like you are changing the world. 900 signatures feels unstoppable.

The second phase is management's response. It will be sympathetic-sounding and vague. "Comprehensive review." "Listening sessions." "Shared values." This is designed to diffuse the energy and run out the clock. It works. People think "okay, they heard us," and go back to their desks.

The third phase, and this is the one nobody warns you about, is the slow squeeze. Over the next 6-12 months, the organizers will be passed over for promotions. Their project proposals will be deprioritized. Their skip-level meetings will be canceled. Their performance reviews will be inexplicably lower than previous years. One by one, the most vocal people will leave — not fired, just made so miserable that they choose to go.

At Google, I lasted 14 months after the Maven letter before I realized I had been effectively sidelined. The final straw was when my manager told me, in a 1:1, that my "external activities" were "creating a distraction" and that I should "focus on impact." That is corporate-speak for "shut up or leave."

The NLRA theoretically protects against this. But proving constructive retaliation through a thousand small cuts is incredibly difficult. The NLRB is understaffed and backlogged by years. By the time your case is heard, you have already left the company.

My advice to the Anthropic employees: the next 30 days are critical. If you do not achieve structural change (a formal governance mechanism, a union, or a binding policy) in that window, the energy will dissipate and management will win by attrition. That is what happened to us.

JC
JaredChen_FounderCEO

Founder/CEO of an AI startup (we are pre-Series A, about 30 people). I am watching this closely because it directly affects how I think about my own company's future.

Here is what keeps me up at night: we are going to need revenue to survive. Government contracts are some of the largest, most stable revenue sources available to AI companies. If the social norm in the AI industry becomes "employees can veto government contracts," then the entire business model shifts. Either you only hire people who are comfortable with government work (which dramatically shrinks your talent pool) or you avoid government work entirely (which dramatically shrinks your revenue potential).

I am sympathetic to the Anthropic employees. I genuinely am. But I am also looking at my own team of 30 people and thinking about what happens when we need to make hard choices about customers. Do I need to pre-negotiate every major contract with my engineering team? Where does employee voice end and management authority begin?

The answer the labor lawyers will give is "NLRA Section 7." But running a company is not a legal abstraction. It is about trust, culture, and the practical ability to execute strategy. If every strategic decision is subject to employee referendum, you cannot execute. Period.

RL
RachelLiu_EmpLaw
Employment Atty

@JaredChen_FounderCEO I want to push back gently. Nobody is saying every business decision should be subject to employee referendum. The NLRA does not give employees veto power over business decisions. It gives them the right to collectively voice concerns about working conditions without being fired for it.

The distinction matters. Anthropic employees are not demanding the right to approve every contract. They are demanding transparency about contracts that fundamentally change the nature of their work, and a mechanism to raise concerns when they believe the company is violating its own stated principles. That is not a referendum. It is basic governance.

Also, from a practical standpoint: you WANT your employees to feel empowered to raise concerns. A company where nobody speaks up is a company where problems fester until they explode. The fact that 900 Anthropic employees felt strongly enough to sign a letter is a failure of internal communication, not a failure of employee loyalty. If management had created channels for this kind of feedback before the crisis, the letter would not have been necessary.

As a founder, you should be thinking about how to build those channels now, before you face your own version of this crisis. Because you will. Every company that grows large enough eventually faces a moment where employee values and business strategy collide.

MD
MikeDavis_VetSWE

Military veteran turned software engineer. I have a perspective that I think is missing from this conversation.

I spent 8 years in the Army, including two deployments. I now work in tech. I understand both the military's need for advanced technology and the tech workers' discomfort with military applications. I live at this intersection every day.

What frustrates me about the discourse is the abstraction. When people say "military AI" or "kill chain," they are using terms that erase the reality of what these tools do. AI in military operations can mean a lot of things. It can mean identifying IED emplacement patterns to keep soldiers alive. It can mean optimizing logistics so humanitarian aid gets to the right place faster. It can also mean helping select targets for kinetic strikes.

The employees have every right to know which of these uses their work is supporting. "We used Claude in Operation Epic Fury" tells you nothing about what it actually did. Was it processing satellite imagery? Analyzing signal intercepts? Optimizing flight paths? Each of those has a different ethical profile.

The problem with both sides of this debate is the lack of specificity. The company did not provide enough information for employees to make informed judgments. And the employees, in turn, are protesting based on assumptions about what the military is doing with the tool. Both sides would benefit from more transparency, but classification rules make that transparency nearly impossible.

That is the fundamental trap of military AI. The people who build the tools cannot know how they are used, and the people who use the tools cannot tell the builders. Informed consent is structurally impossible in this context.

IM
IsabelMorales_CivLib

@RyanHarper_Libertarian Your "operational security" argument for the contract ban is doing a lot of heavy lifting. Let me dismantle it.

The employees did not refuse to work. They did not sabotage anything. They signed a letter expressing disagreement with a business decision. If that is enough to constitute an "operational security concern," then every employee at every defense contractor who has ever expressed any opinion about U.S. foreign policy is a security risk. That is not a standard anyone actually wants to apply consistently.

What you are actually describing is a loyalty test: employees must not only do their work, they must also publicly support the government's use of that work, or the government will punish the company. That is not how a democracy works. It is not how the NLRA works. And it is not how the First Amendment works.

The Trump administration is not banning Anthropic because of a genuine security concern. It is banning Anthropic because the President saw an opportunity to punish a company associated with AI safety (which he views as "woke") and reward a company (OpenAI) whose leadership has been more politically aligned with the administration. This is contract allocation as political patronage, which is illegal under the Competition in Contracting Act.

PL
PaulLambert_TaxCPA

CPA here, slightly off-topic but critically relevant to any Anthropic employee thinking about leaving. If you are considering resignation, you need to understand the tax implications of your equity position.

If you have exercised ISOs and are still within the holding period (2 years from grant, 1 year from exercise), leaving does not change your holding period for LTCG treatment. But if you have unexercised options, most option plans give you only 90 days post-termination to exercise. After that, the options expire worthless.

At Anthropic's last valuation ($60B), the spread on those options could be enormous. If you exercise and the FMV has increased significantly since your strike price, you could face a massive AMT hit in the current tax year. And if the company's valuation drops because of this crisis, you could end up paying AMT on phantom gains — tax on appreciation that evaporated before you could realize it.
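
To put rough numbers on that (all figures hypothetical; the flat 28% rate ignores AMT exemptions, phase-outs, and the regular-tax crossover — this is arithmetic, not tax advice):

```python
# Illustrative ISO exercise at a high private-company valuation.
shares = 10_000
strike = 5.00     # per-share exercise price (hypothetical)
fmv = 65.00       # 409A fair market value at exercise (hypothetical)

spread = shares * (fmv - strike)   # AMT preference income
print(f"AMT preference income: ${spread:,.0f}")     # $600,000

# Ballpark federal AMT on the spread alone (top AMT rate is 28%):
amt_estimate = spread * 0.28
print(f"Rough AMT exposure: ${amt_estimate:,.0f}")  # $168,000
```

If the valuation then drops, that $168,000 is owed on paper gains that no longer exist. That is the phantom-gains trap I am talking about.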

Bottom line: do not resign in a moment of moral clarity without consulting a tax advisor about your equity. I have seen too many tech workers take principled stands and then get destroyed by an unexpected six-figure tax bill. Principles are important. So is not going bankrupt. Get advice before you make any irrevocable decisions.

KM
KellyMartinez_Mod
OP MOD

Breaking update: OpenAI has officially confirmed it has assumed the Pentagon AI contract previously held by Anthropic. In a blog post titled "Serving America's Defense," OpenAI's leadership stated that the company is "proud to support the U.S. military's mission to protect national security and uphold democratic values through advanced AI technology." No mention of employee consultation or ethical review processes.

Additionally, reporting from Reuters indicates that at least 15 Anthropic employees have already submitted their resignations, including two senior safety researchers. Anthropic has declined to comment on specific personnel matters.

CH
ChrisHarrington_Ethics

AI ethics researcher at a university. The OpenAI statement is the most predictable thing that has happened in this entire saga.

Let me put this bluntly: OpenAI pivoted from a nonprofit committed to "safe, beneficial AGI" to a capped-profit entity to a for-profit corporation in less than five years. It went from "we will publish all our research" to "we will publish nothing proprietary" to "we will take the Pentagon contract that the safety company just got fired from." The trajectory tells you everything about what happens when commercial incentives collide with stated values. Values lose. Every time.

This is not unique to AI. It is the story of every industry that started with idealistic founders and ended with corporate consolidation. The difference is that AI systems have a unique capacity for harm at scale. A social media company that compromises its values produces misinformation. An AI company that compromises its values could produce autonomous weapons systems.

The 900 Anthropic employees who signed that letter are trying to hold the line. But the structural forces arrayed against them — investor pressure, competitive dynamics, government coercion — are enormous. I do not know if they will succeed. I know they are right to try.

RT
RichardTong_AngInvestor

Angel investor and Anthropic shareholder (through a fund). I want to address the financial reality because the idealism in this thread, while admirable, is disconnected from how companies actually survive.

Anthropic burns approximately $2B per year on compute alone. Its commercial revenue, while growing, does not cover that burn rate. The DoD contract was reportedly worth $500M+ over 3 years. That is not pocket change — it is the difference between Anthropic being a going concern and Anthropic needing to raise another massive round at potentially unfavorable terms in a hostile political environment.

With the contract gone and the political situation making any government work toxic for the foreseeable future, Anthropic has to find that revenue somewhere else. Enterprise sales are growing but not fast enough. Consumer subscriptions are a fraction of what is needed. The company is now in a significantly weaker financial position.

I respect the employees' principles. But principles do not pay for H100 clusters. If Anthropic cannot find a path to financial sustainability, it will not matter how many employees signed the letter, because there will not be a company left in which to have the debate. Google, Microsoft, or Amazon will absorb the remains in an acqui-hire, and those companies will absolutely take military contracts without blinking.

The employees may have won the moral argument while losing the war. That is not a comfortable thought, but it is the financial reality.

SJ
SamanthaJones_HRTech

HR tech consultant. I want to address something @JessicaWong_HR raised about the retaliation playbook. She is right about how it usually works in most companies, but I think the dynamics here are different in important ways.

When 900+ people sign a letter, the usual targeted retaliation strategy breaks down mechanically. You cannot build performance documentation cases against 900 people simultaneously without it being obvious. The NLRB would have a field day with that pattern.

More importantly, Anthropic is in a labor market where its employees have enormous individual bargaining power. These are not warehouse workers (no offense to warehouse workers who also deserve protections). These are people with $300K-$600K total comp packages who could get hired at a competitor tomorrow morning. The threat is not that they will be fired — it is that they will leave voluntarily, taking institutional knowledge and research capabilities with them.

What Anthropic should be doing right now is NOT playing the retaliation game. It should be engaging substantively with the employees' demands. Create the governance mechanism. Institute a formal dual-use review process. Bring employees into the decision-making structure. Not because the law requires it, but because keeping 900 highly compensated employees is a lot cheaper than replacing them.

The companies that survive these moments are the ones that treat employee activism as a signal, not a threat. The ones that treat it as a threat usually end up losing more than the contract was worth.

AW
AndrewWalters_MilTech

Retired military, now working in defense tech. I want to share something I think is being missed in the strategic analysis of this situation.

China's military AI integration is not a theoretical future threat. The PLA demonstrated AI-assisted command and control capabilities in their exercises around Taiwan last year. The systems they are deploying are not subject to any ethical review, any employee input, or any transparency requirements whatsoever. They are being built by engineers who have no choice in the matter, using training data that was not collected with consent, and deployed with no human-in-the-loop requirements.

I understand the Anthropic employees' position. I even sympathize with much of it. But I need them to understand what the alternative looks like on the geopolitical stage. The alternative is not "no AI in warfare." The alternative is "only authoritarian AI in warfare." The U.S. military will be slower, less capable, and more likely to make catastrophic errors without access to the best AI systems. Real people — soldiers, civilians in conflict zones — will die as a result of that capability gap.

The employees may say "that is not our problem." And legally, they are right. They have no obligation to support the military. But morally? I think the calculus is more complicated than "military AI bad." And I say that as someone who has seen firsthand what happens when the military has bad intelligence and outdated tools.

LV
LisaVaughn_1stAmend
1st Amend Atty

First Amendment attorney. I want to highlight something that has gotten lost in the policy debate: the Trump ban may be the most legally significant part of this entire story from a constitutional law perspective.

If the administration banned Anthropic from government contracts because of the political speech of its employees, that is a textbook First Amendment violation under the unconstitutional conditions doctrine. It does not matter that no individual employee has a right to a government contract. The government cannot use contract allocation to punish speech. Full stop.

The evidence here is unusually strong for this type of case. Trump's own social media posts explicitly link the ban to the company's political character ("Radical Left AI company"). The timing — ban announced hours after the letter became public — speaks for itself. There is no pretextual justification that can survive that evidentiary record.

If Anthropic sues (and I think they should), this could become a landmark case on government retaliation against corporate employees' speech. The Umbehr line of cases established that independent contractors have First Amendment protection in their contractual relationships with the government. This case would extend that principle to say that the government cannot punish a corporation for the protected speech of its employees. That is a significant expansion of First Amendment doctrine and it is one that I think the current Supreme Court might actually support, given its strong free speech orientation in recent terms.

The problem is practical: litigation takes years. The contract goes to OpenAI tomorrow. But the precedent matters far beyond this one case.

YK
YasminKhalil_PhDCandidate

PhD candidate in CS, studying AI alignment. I had an offer from Anthropic that I was supposed to start in April. I am now seriously reconsidering.

The thing that bothers me is not the defense contract per se. It is the organizational culture that allowed it to happen the way it did. If the leadership team can make a decision this consequential without informing the people who build the technology, what else are they not telling employees? What other compromises have been made quietly?

My research is in alignment — making AI systems do what humans actually want them to do. The irony of working at a company where the AI might be more aligned with stated values than the leadership is not lost on me.

I am in conversations with two other labs now. I suspect many other early-career researchers in my position are making similar calculations. Anthropic's ability to attract top research talent was one of its biggest competitive advantages. If this crisis damages that reputation among grad students and postdocs, the long-term impact on the company's technical capabilities could be severe and compounding.

RJ
RobertJackson_NatSec

Coming back to respond to several people who pushed back on my earlier post. I appreciate the substantive engagement even where we disagree.

@LauraPetrov_LaborLaw You are correct that Anthropic is not Lockheed. But I think the distinction cuts both ways. Anthropic employees accepted compensation packages that valued the company at $60B. That valuation was not based solely on consumer chatbot revenue. It was based on the total addressable market, which includes government and defense. The employees benefited financially from the business strategy they are now protesting.

That does not negate their right to protest. But the moral clarity of the position is complicated when your stock options are denominated in a valuation that assumed defense revenue. "I object to military contracts, but I will keep the equity that priced in those contracts" is a coherent legal position but a less coherent moral one.

@IsabelMorales_CivLib On the loyalty test point: I did not suggest employees should be required to support government policy. I said the government has a legitimate interest in the reliability of its supply chain. If over half the engineers at a weapons manufacturer signed a letter saying they morally oppose making weapons, the Pentagon would understandably look for another supplier. That is not a loyalty test. It is supply chain risk management.

The employees have every right to speak. The government has every right to act on that information. Those rights coexist.

TB
TommyBernard_JuniorDev

Junior dev, 2 years in the industry. I just want to say that this thread has been one of the most informative things I have read about the intersection of tech, law, and politics. The legal analysis especially — NLRA Section 7, California Labor Code 1101-1102, the PBC angle, the security clearance wrinkle — is stuff that they absolutely do not teach you in a CS program.

For those of us early in our careers, the Anthropic situation is a wake-up call. We need to read our employment agreements more carefully before signing them. We need to ask harder questions during interviews about government work and military contracts. And we need to understand our legal rights before we need them, not after.

Saving this thread for future reference. Thank you to the attorneys and experienced professionals who are sharing their expertise here. This is genuinely valuable public service.

DR
DianaRoss_IntlLaw
Intl Law Atty

International humanitarian law attorney. I want to add a dimension that this thread has not addressed: the international legal implications of AI-assisted military targeting.

Under the Geneva Conventions and Additional Protocol I, parties to a conflict must take "constant care" to spare civilians and must verify that targets are military objectives. Under Article 57, those who plan or decide upon an attack must "do everything feasible" to verify targets and minimize civilian harm. The question with AI-assisted targeting is whether relying on an AI system's analysis satisfies the "feasibility" requirement for human verification.

The ICRC has taken the position that AI decision-support tools in targeting must be transparent and explainable to the humans in the loop. If the military is using Claude — a large language model that is fundamentally not explainable in the way the ICRC envisions — there may be arguments that the resulting targeting decisions do not satisfy IHL requirements for human oversight.

This is relevant to the employees because if the use of Claude in Operation Epic Fury resulted in civilian casualties (which we do not yet know), the employees who built the system could theoretically face scrutiny under the principle of aiding and abetting violations of IHL. That is an extremely unlikely legal outcome, but the fact that it is even theoretically possible is something the employees and their counsel should be aware of.

The broader point is that AI in military operations does not just raise domestic employment law questions. It raises questions under the laws of war that the entire AI industry has been avoiding for years.

MR
MarioRivera_DataScience

Data scientist at a Fortune 500. The thing I keep coming back to is the Streisand effect of Trump's response.

Before the "Radical Left" declaration, this was an internal corporate dispute that most Americans would never have heard about. It would have been covered in tech press and legal blogs and that is about it. By turning it into a culture war flashpoint, Trump made the Anthropic protest the biggest tech story of 2026 so far. Claude became the number one app on the App Store within 48 hours of the ban — people downloaded it just because the President told them not to use it.

From a pure brand perspective, being banned by the Trump administration may be the best thing that ever happened to Anthropic's consumer business. The "safety company that stood up to the Pentagon" is a compelling narrative, even if the reality is much messier than that framing suggests. Every person who downloads Claude because they saw the news coverage represents consumer revenue that partially offsets the lost government contract.

I am not saying this was planned. But the outcome is deeply ironic: the punishment intended to destroy Anthropic may end up strengthening it in the commercial market. The question is whether the consumer and enterprise revenue growth is fast enough to fill the hole left by the DoD contract before the company runs out of runway.

JP
JenniferPark_StartupAtty
Startup Atty

Startup attorney here. @GraceHenderson_CorpGov raised the PBC derivative action angle and I want to expand on it because I think this could be genuinely significant for corporate governance law.

Anthropic's PBC charter obligation is to "the responsible development and maintenance of advanced AI for the long-term benefit of humanity." There are two potential PBC claims here, and they cut in opposite directions:

  1. Employees could argue that the board breached its PBC obligations by approving a military contract that was inconsistent with "responsible development." This is the employees' argument.
  2. Investors could argue that the board breached its fiduciary duties by failing to manage the employee relations risk that led to losing the contract and the government ban. This is the investors' argument.

Under Delaware law, both claims would be evaluated under the business judgment rule, subject to the PBC balancing mandate of 8 Del. C. § 365: directors must balance the stockholders' pecuniary interests, the interests of those materially affected by the corporation's conduct, and the stated public benefit. If the board approved the DoD contract without consulting the safety team or considering employee impact, that process failure could undermine the business judgment presumption.

I think the investors' claim is actually stronger than the employees' claim, which is a fascinating inversion. The board may face more legal exposure for HOW it handled the contract (poorly managing the human capital risk and failing to anticipate the backlash) than for WHETHER it took the contract in the first place.

NW
NathanWright_DevOps

Follow-up to my earlier post about the technical reality. I have been reading through OpenAI's blog post about assuming the Pentagon contract and one thing jumped out at me.

OpenAI is promising "dedicated infrastructure" and "isolated deployment" for the military use case. This is the Palantir model that Derek described — separate systems, separate teams, separate access controls. It is also exactly what Anthropic should have done from the beginning.

The fact that OpenAI is implementing this architecture from day one tells you that they learned from Anthropic's mistake. When you commingle your consumer and military deployments, you create exactly the kind of ethical and operational mess that Anthropic is now dealing with. When you isolate them, employees who want to work on the military project can opt in, and employees who do not can work on consumer products without worrying about what their code is being used for.

This is not rocket science. It is basic separation of concerns, which every engineer learns in their first year. The fact that Anthropic's leadership failed to implement it suggests either incompetence or a deliberate choice to obscure the military use case from employees. Neither explanation is flattering.
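For non-infra readers, here is a toy sketch of what that separation looks like in practice. Every name in it is invented; this is not Anthropic's or OpenAI's actual architecture, just the shape of the pattern:

```python
# Toy sketch of tiered deployment isolation. All names invented;
# not any company's real architecture.
from dataclasses import dataclass

@dataclass(frozen=True)
class Deployment:
    name: str
    network: str        # isolated network boundary (own VPC/enclave)
    keys: str           # separate key ring; no cross-tier decryption
    staff: frozenset    # opt-in group allowed to operate this tier

DEPLOYMENTS = {
    "consumer": Deployment("consumer-prod", "vpc-consumer",
                           "kms-consumer", frozenset({"eng-all"})),
    "defense":  Deployment("gov-enclave", "vpc-gov-isolated",
                           "kms-gov", frozenset({"eng-gov-optin"})),
}

def grant_access(tier: str, engineer_groups: set) -> Deployment:
    """Structural partition: engineers who never opted in cannot
    touch the defense tier, and no keys or traffic cross tiers."""
    d = DEPLOYMENTS[tier]
    if not (d.staff & engineer_groups):
        raise PermissionError(f"no access to {d.name}: opt-in required")
    return d

grant_access("consumer", {"eng-all"})      # any engineer: OK
# grant_access("defense", {"eng-all"})     # raises PermissionError
```

The point is that the partition is enforced by the system, not by a policy memo. Opting out means the access simply does not exist.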

SN
SarahNguyen_AnthEmp

Final update from inside (for now). Anthropic held an all-hands meeting yesterday. I will share what I can without violating any confidentiality agreements.

Dario spoke for about 45 minutes. The tone was somber. He acknowledged that the communication around the DoD contract was "insufficient" and that employees "deserved to know more about the scope of our government partnerships." He announced the creation of an internal Deployment Review Board with employee representatives that will review all future government and military-adjacent contracts before they are signed. He also committed to publishing a public transparency report quarterly that will describe (at a high level) the categories of customers using Claude and the broad use case categories.

On the specific question of Operation Epic Fury, he said he is "limited in what he can share" due to classification restrictions, but acknowledged that "the deployment exceeded the scope of what was communicated to the team." That is the closest to an admission of fault we are likely to get.

The reaction in the room was mixed. Some people felt it was a genuine step forward and evidence that the protest worked. Others felt it was too little, too late, and the same crisis PR playbook that @NicoleBaker_FormerGoogle warned us about. The organizers of the letter are taking the position that the Deployment Review Board is a good start but needs to have actual veto power, not just advisory authority. Negotiations are ongoing.

I do not know how this ends. But I know that 900 people raised their voices and the company heard them. Whether it heard them enough remains to be seen. I will update this thread as things develop. Thank you to everyone here who provided legal context — it has genuinely helped us understand our rights and our options.

KM
KellyMartinez_Mod
OP MOD

Thank you to everyone who has contributed to this discussion. This thread now has over 50 posts covering employment law, constitutional law, international humanitarian law, corporate governance, military strategy, AI ethics, tax implications, and union organizing. That is the kind of cross-disciplinary analysis that complex issues deserve.

Summary of key legal and practical takeaways from this thread:

  • NLRA Section 7 almost certainly protects the act of signing the open letter as concerted activity for mutual aid or protection. Anthropic cannot lawfully retaliate against signatories for this speech.
  • California Labor Code 1101-1102 provides additional state-level protection for political activities, which likely covers protesting military AI use.
  • Security clearance holders may face different risks, as clearance revocation decisions are largely unreviewable by courts under Dept. of Navy v. Egan.
  • The Trump ban may violate the unconstitutional conditions doctrine and the Competition in Contracting Act, but litigation would take years to resolve.
  • PBC derivative actions are a potential avenue for both employees (challenging the contract's consistency with the stated mission) and investors (challenging the board's management of the crisis).
  • Palantir's tiered access model is widely cited as a practical framework that could have prevented this situation.
  • Equity and tax implications should be carefully considered by any employee contemplating resignation — consult a CPA before making irrevocable decisions.
  • Document everything — performance reviews, communications, policy changes, management statements — in case of future retaliation claims.

I am keeping this thread pinned as a megathread. New developments will be added here as the situation evolves. If you are an Anthropic employee or someone directly affected by these events, please consult with a qualified attorney in your jurisdiction. Nothing in this thread constitutes legal advice.

Thread remains open for discussion.

MJ
marcus.j_9

Non-compete update: the FTC's 2024 rule banning non-competes was set aside by a federal court before it took effect, so non-competes remain enforceable in most states. California is the most prominent exception — Business & Professions Code § 16600 makes virtually all non-competes void, and a few states such as Minnesota, North Dakota, and Oklahoma have similar bans. If you're in CA and signed a non-compete, it's almost certainly unenforceable.

MJ
marcus.j_9

Pro tip for freelancers: always include a "kill fee" clause in your contracts. If the client cancels the project mid-way, you're entitled to a percentage of the total contract value. Without this, you're stuck arguing quantum meruit (reasonable value of services rendered), which is harder to prove.

GB
gavel_banger_14

Tech industry labor researcher here. The Anthropic employee protests are part of a much larger pattern that is reshaping the relationship between AI companies and their workforces.

I track employee activism incidents across major tech companies. In 2025, there were 23 documented cases of organized employee protests, open letters, or walkouts related to AI ethics and military contracts at companies including Google, Microsoft, Amazon, Palantir, and now Anthropic. In Q1 2026 alone, we have already seen 11 incidents — a pace that would nearly double the 2025 total if sustained.

What makes the Anthropic situation legally interesting is the intersection of at-will employment, whistleblower protection, and the emerging concept of "ethical objection" in the workplace. California Labor Code Section 1102.5 protects employees who report suspected violations of law, but it does not explicitly protect employees who object to lawful-but-ethically-controversial business decisions like accepting a military contract.

The key question is whether Anthropic's own published Responsible Scaling Policy creates an implied contractual obligation that employees can enforce. If a court finds that the RSP constitutes a binding commitment — analogous to an employee handbook creating implied contract terms under Foley v. Interactive Data Corp. (1988) 47 Cal.3d 654 — then employees who were disciplined or terminated for raising RSP compliance concerns could have viable wrongful termination claims.

The Deployment Review Board that Anthropic announced is a positive step, but it needs teeth. Specifically: (1) employee representatives must have actual voting power, not just advisory input, (2) the Board's decisions should be binding on management, and (3) there must be whistleblower protections for employees who escalate concerns to the Board. Without these structural safeguards, it risks being a PR exercise rather than a genuine governance mechanism.

JK
JasonKeller_ArmyVet

Army veteran here. Served in Afghanistan 2011-2013. Now working as a software engineer at a small startup.

Reading this thread has been fascinating. I have a foot in both worlds so I understand both the tech workers' perspective and the military's needs. What strikes me is how little each side understands the other. The Anthropic employees talk about "kill chains" like it's a simple binary decision. The military folks talk like employee consent doesn't matter because "national security."

Here's what I wish more people understood: the military uses AI to save lives as much as it uses it to take lives. When I was deployed, we had pattern-of-life analysis software that helped us distinguish between civilians and combatants. It wasn't perfect, but it was better than making those calls based on gut instinct at 2 AM after 18 hours awake. Better AI means fewer civilian casualties, not more. That should be something safety-focused engineers WANT to contribute to.

But I also get why the employees are upset. If you joined a company because it promised to be different, and then it turns out to be the same as everyone else, you feel betrayed. Trust matters. The leadership broke that trust.

SC
SofiaChen_GoogleAlum

Former Google engineer. I was there during Project Maven but didn't sign the letter (regret that now). Watching the Anthropic situation unfold feels like déjà vu.

One thing I haven't seen mentioned: OpenAI getting the Pentagon contract creates a dangerous precedent. If the lesson companies learn is "the safety-focused lab gets punished and the commercially aggressive lab gets rewarded," that's going to shape AI development for the next decade. Every AI startup will think twice before implementing safety measures that might slow down product development or limit customer base.

The Trump ban is effectively a subsidy for reckless AI development. I don't think that's the intended consequence, but it's the actual consequence. Any CEO looking at this situation will conclude that safety theater is fine, but actual safety commitments that limit business options are career suicide.

TC
TammyCarson_ContractLaw
Attorney

Government contracts attorney with 20 years of federal procurement experience. I want to address the FAR debarment issue that @PriyaSharma_GovCon raised.

The Trump "ban" appears to be informal guidance to agencies rather than a formal debarment proceeding under FAR Part 9.4. That distinction matters. An informal ban can be reversed with a phone call or a change in administration. A formal debarment goes on the excluded parties list and requires a documented basis and due process.

My suspicion is that Anthropic's contracts people are already working back channels to get this reversed or at least clarified in writing. Without a formal debarment, individual agencies can still contract with Anthropic if they choose to — they'd just need to justify why they're going against the President's stated preference. Some agencies might do exactly that if Anthropic's capabilities are significantly better than alternatives.

The wildcard is Congress. If appropriations language specifically prohibits contracting with Anthropic, that's game over until the language changes. But I haven't seen any indication that's happened yet.

DW
DavidWong_AIResearcher

AI safety researcher. Not at Anthropic, at an academic institution. I want to talk about how this affects the broader AI safety research community.

A lot of junior researchers look to Anthropic as proof that you can do cutting-edge AI work without compromising on safety. The company's existence created an incentive for grad students to study alignment and safety instead of just pushing capabilities. If Anthropic becomes just another commercial lab indistinguishable from OpenAI or Google, where do those students go?

There's also a recruiting problem. Some of the best safety researchers in the world work at Anthropic precisely because they believed in the mission. If they leave, that's not just a loss for Anthropic. It's a loss for the entire field. Those researchers can't be easily replaced. Safety expertise takes years to build.

I'm watching carefully to see if the Deployment Review Board is real or just window dressing. If it's real, Anthropic can recover. If it's cosmetic, the talent drain will accelerate.

RT
RachelTurner_CyberSec

Cybersecurity professional working in critical infrastructure protection. I want to add a perspective that's missing from this discussion: defensive military applications of AI.

Everyone's focused on offensive uses like targeting and strike planning. But a huge portion of DoD AI work is defensive — threat detection, vulnerability scanning, anomaly detection in networks, attribution analysis for cyberattacks. This stuff protects civilian infrastructure as much as it protects military systems.

When Anthropic employees say they don't want their work used for "military purposes," do they mean ALL military purposes? Or just kinetic operations? Because if Claude was being used to protect the electrical grid from hostile state actors, would that change the calculus?

The problem is we don't have enough information to know what "use in Operation Epic Fury" actually means. Was it offensive targeting? Defensive cyber operations? Logistics coordination? Intelligence analysis? Each of these has a different ethical profile, but the reporting has been frustratingly vague.

MP
MarcusPatel_DARPA

Former DARPA program manager (now in private sector). I worked on several AI/ML programs during my time there including autonomous systems research.

The relationship between DARPA and commercial AI labs has always been complicated. DARPA wants access to cutting-edge capabilities. The labs want funding and prestige but don't want to be seen as defense contractors. So you get these arrangements where the framing is always "basic research" and "dual-use" even when everyone knows the end goal is military application.

I'm not defending it — I think the ambiguity causes exactly the kind of problems Anthropic is facing now. But I want people to understand this isn't unique to Anthropic. Every major AI lab has some version of this relationship with the government. Some are just more honest about it than others.

Google does it through Google Public Sector. Microsoft does it through Azure Government. Amazon does it through AWS GovCloud. They all have military and intelligence customers. The main difference is those companies never positioned themselves as "safety-first alternatives," so there's no betrayal narrative when they take defense contracts.

LH
LisaHernandez_EUPolicy

European AI policy analyst. The U.S.-centric discussion here is missing an important international dimension.

The EU AI Act has specific provisions for "high-risk" AI systems used in law enforcement and migration management. Military AI systems are explicitly excluded from the Act's scope (Article 2), but that doesn't mean there are no regulations. EU member states are developing their own frameworks, and there's active discussion about whether military AI should be governed by humanitarian law, export controls, or both.

What's interesting is that European researchers are watching the Anthropic situation closely as a case study in corporate governance of dual-use AI. There's a growing consensus in Brussels that employee voice in deployment decisions isn't just a labor rights issue — it's a safety issue. AI systems used in high-stakes military contexts need checks and balances, and internal employee oversight is one mechanism.

If the U.S. continues down a path of "shut up and build," European labs with stronger governance structures may actually have a competitive advantage in certain markets. Not all customers want the most aggressive AI. Some want the most trustworthy AI.

KR
KevinRoss_PentagonContractor

I work for a major defense contractor (not naming which one for obvious reasons). Our company does AI/ML work for the military and has for years. I want to share how we handle employee concerns because I think it's relevant.

We have an ethics hotline. Any employee who has concerns about a program they're working on can file a report. It goes to an independent ethics office, not to management. The ethics office investigates and provides a written response within 30 days. If the concern involves potential legal violations, it gets escalated to legal and compliance.

Is it perfect? No. Are some concerns dismissed too quickly? Probably. But it's a formal mechanism with documented procedures and anti-retaliation protections. Employees know it exists before they start working on classified programs.

Anthropic appears to have had no equivalent mechanism. When employees discovered the Pentagon contract, their only options were (1) quit, (2) stay silent, or (3) organize a public protest. That's a failure of corporate governance, full stop. The Deployment Review Board is good, but why didn't this exist BEFORE the first defense contract was signed?

NS
NinaSmith_CongressStaffer

Congressional staffer (anonymizing for obvious reasons). I work on a committee that oversees defense acquisition and emerging technology. We've been following the Anthropic situation.

There are active discussions about whether Congress needs to get involved. Some members are concerned that the Trump ban was arbitrary and politically motivated. Others think Anthropic got what it deserved for employing "radical" employees. It's breaking along predictable partisan lines.

What might actually move the needle is if there are hearings on AI in military operations more broadly. The Anthropic controversy is a symptom of a much larger problem: the Pentagon is rapidly integrating AI into operations without clear policy frameworks, without Congressional oversight mechanisms, and without transparency to the public.

DoD Directive 3000.09 on autonomy in weapon systems dates to 2012 and was updated only once, in 2023, and even the update left these oversight gaps in place. There's no requirement for DoD to report to Congress on AI deployments. There's no independent review of whether AI-assisted targeting complies with the laws of war. This is a governance gap that needs to be filled.

JB
JordanBaker_PhilEthics

Philosophy professor specializing in technology ethics. I teach a course on AI and warfare and I'll be using this case next semester.

The Anthropic situation illustrates a classic problem in virtue ethics: can an organization be virtuous if it systematically deceives its own members? Anthropic marketed itself as the "responsible" AI company. That identity wasn't just marketing — it was a moral claim about the kind of organization they were.

When you make that claim, you create expectations. Employees who join based on those expectations are relying on the organization to uphold its stated values. If the organization then acts in ways that contradict those values — especially in secret — it's not just a business decision. It's a betrayal of a moral commitment.

The employees' anger isn't just about the Pentagon contract. It's about the discovery that the company's virtue claims were either never true or were abandoned the moment they became financially inconvenient. That's a deeper violation than policy disagreement.

AG
AlexGarcia_Autonomy

Robotics engineer working on autonomous systems (commercial, not military). The autonomous weapons debate is adjacent to this but worth discussing.

DoD Directive 3000.09 requires that autonomous weapon systems allow commanders and operators to exercise "appropriate levels of human judgment over the use of force." But what does that judgment amount to when the AI processes information faster than humans can verify it? If an AI system analyzes 10,000 data points and presents a targeting recommendation, and a human clicks "approve" without the time or expertise to verify those 10,000 data points, is that meaningful human control?

This is the real concern with AI in military operations. Not that humans will be completely removed from the loop, but that the human-in-the-loop becomes a rubber stamp because the AI is too complex and too fast for meaningful oversight.

The Anthropic employees are right to be worried about this. If Claude was used for operational planning in Epic Fury, someone needs to be asking: did the humans in the decision chain understand Claude's reasoning? Could they verify it? Or did they just trust it because the AI seemed confident?
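If I were building the approval gate, I would want it to mechanically refuse rubber stamps. A rough sketch of what that could look like follows; the thresholds and field names are mine, not anything actually fielded:

```python
# Sketch of a "meaningful human control" gate. Thresholds and field
# names are invented for illustration; not any fielded system.
from dataclasses import dataclass

MIN_DWELL_SECONDS = 120   # assumed floor on human review time

@dataclass
class Review:
    reviewer_id: str
    dwell_seconds: float     # how long the recommendation was open
    evidence_opened: int     # key evidence items actually inspected
    rationale: str           # written justification, kept for audit

def approve(total_key_evidence: int, review: Review) -> dict:
    """Reject approvals that look like a reflexive click: the reviewer
    must open the key evidence, spend a minimum amount of time, and
    write down why they agree with the recommendation."""
    if review.evidence_opened < total_key_evidence:
        raise PermissionError("blocked: key evidence never inspected")
    if review.dwell_seconds < MIN_DWELL_SECONDS:
        raise PermissionError("blocked: review too fast to be meaningful")
    if not review.rationale.strip():
        raise PermissionError("blocked: written rationale required")
    return {"approved_by": review.reviewer_id, "rationale": review.rationale}
```

None of this guarantees good judgment. It just makes the rubber stamp visible and auditable after the fact.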

SL
SamanthaLee_Journalist

Tech journalist covering AI for a major publication. I've been reporting on this story since it broke. A few observations:

The Anthropic employees have been remarkably disciplined about not leaking classified or sensitive information. The open letter stayed focused on process and transparency, not on revealing details about the contract. That's strategic and legally smart.

But it also limits what the public can know. I've tried to report on what Claude actually did in Operation Epic Fury and hit a wall. DoD won't comment. Anthropic won't comment beyond Dario's vague statement. Even the whistleblowers are being careful not to cross classification lines.

This creates an impossible situation for informed public debate. We're arguing about military AI in the abstract because the actual details are classified. The employees are upset about something they can't fully disclose. The public is asked to have opinions about operations they can't examine.

Classification is necessary for some things, but when it's used to shield policy decisions from democratic accountability, it becomes a problem. This entire debate is happening in an information vacuum.

MB
MikeBrown_RestaurantOwner

Small business owner here (restaurant, not tech). I know I'm way outside my lane commenting on this thread, but I've been following it because the employment law issues are universal.

I have 15 employees. If 10 of them signed a letter complaining about how I run my business, my first instinct would be defensive. But my second instinct — the one I'd hopefully act on — would be to ask why I lost the trust of two-thirds of my staff.

When that many people organize collectively, it's usually because individual channels of communication failed. Either they tried to raise concerns and were ignored, or they never felt safe raising concerns in the first place. Both are management failures.

The scale is different (900 employees vs. my 15), but the principle is the same. If your workers feel they have to organize a protest to be heard, you've already lost. The question is whether you can earn their trust back.

CT
ClaireThompson_HRConsult

HR consultant specializing in tech companies. I've advised three companies through employee activism crises similar to this (though not as high-profile).

The common thread: companies that survive these moments do three things. (1) They acknowledge the problem quickly and specifically. (2) They create structural changes, not just policy statements. (3) They empower the people who raised concerns instead of sidelining them.

The companies that fail do the opposite. They issue vague statements. They promise reviews that never materialize. They quietly punish the organizers while claiming publicly to support employee voice.

Anthropic is at a crossroads. The Deployment Review Board announcement was step 1. Now we'll see if it's real. If employee representatives have actual power and the board's decisions are binding, that's step 2. If the organizers are promoted instead of passed over, that's step 3.

I'm cautiously optimistic based on what I've seen so far. But the test will be whether this changes behavior or just changes optics.

WP
WilliamPerry_NatSecLaw
Attorney

National security lawyer. I represent both defense contractors and cleared employees in disputes. This case touches on several areas of national security law that haven't been fully addressed in this thread.

First, if any Anthropic employees with security clearances participated in organizing the letter, they may face clearance adjudication issues. The Adjudicative Guidelines for security clearances include "Guideline E: Personal Conduct," and public statements that could be viewed as opposition to U.S. defense policy can trigger scrutiny under it.

Second, the classification of Claude's use in Epic Fury creates a legal trap. If employees want to challenge the legality or propriety of the deployment, they can't publicly disclose the details without violating classification laws. But without disclosing details, they can't make a specific case for why it was improper. This is the same catch-22 that whistleblowers face in the intelligence community.

Third, export control laws (ITAR and EAR) may apply to military AI systems. If Claude was integrated into classified systems, there may be restrictions on who can work on it and what can be disclosed. This is an underappreciated legal constraint on employee voice in defense work.

DM
DianaMorgan_TruckDriver

Long-haul truck driver. I know this seems random but hear me out. I've been following this because AI is coming for my industry too.

Self-driving trucks are being tested by multiple companies. Some of those companies have military contracts or could get them. If autonomous vehicle AI developed for commercial trucking gets repurposed for military convoy operations, should the engineers who built it have a say in that? Or is it just "dual-use technology" and they should have known better?

I don't have an answer. But it seems like once you start down the road of building powerful AI systems, you lose control of how they're used. The Anthropic employees are trying to maintain that control. I hope they succeed because the precedent matters for a lot of industries beyond AI chatbots.

PN
PeterNguyen_CivilianDOD

DOD civilian employee working in acquisition. Not speaking for my employer, all views my own, etc.

The internal DOD reaction to the Anthropic ban has been... complicated. A lot of program managers are frustrated because they liked working with Anthropic. The technical quality was high, the responsiveness was good, and the safety focus actually made some of our compliance work easier.

Now we're being told to work with OpenAI instead. Some folks are fine with that. Others are worried we're getting a less mature safety culture. There's also concern that the ban was politically driven rather than based on actual performance or security issues, which sets a bad precedent for procurement integrity.

If contractors can be banned based on the political activities of their employees, that's going to have a chilling effect on employee speech across the entire defense industrial base. I don't think that's a road anyone actually wants to go down, but here we are.

EK
EmilyKnight_Teacher

High school teacher. I teach computer science and we've been discussing this case in class as part of our ethics unit.

My students (17-18 year olds) are split almost exactly down the middle. Half think the employees are being naive about how the world works. Half think the employees are heroes for standing up to the military-industrial complex.

What's interesting is that the split doesn't correlate with political affiliation like you'd expect. Some of my conservative students are sympathetic to the employees because they see it as workers standing up to corporate power. Some of my progressive students think the employees are wrong because they're undermining national defense.

I think that's because this case doesn't fit neatly into our usual political categories. It's not left vs. right. It's about competing values — worker autonomy vs. organizational authority, transparency vs. national security, individual conscience vs. collective necessity. No easy answers.

RW
RobertWilliams_Retiree

Retired aerospace engineer. Worked on defense contracts for 35 years including classified programs I still can't talk about.

Back in my day, if you didn't want to work on defense projects, you didn't take a job at Lockheed or Boeing or Northrop. It was that simple. The idea that you could work at a tech company and expect to have veto power over customer relationships would have seemed absurd.

But times change. The tech industry built a different culture where employee voice matters. That's not necessarily bad — some of the worst decisions in defense procurement happened because engineers kept their mouths shut when they saw problems.

What I hope comes out of this is a better model for how defense and commercial work can coexist. Separate teams, clear disclosure, opt-in participation. It's not that complicated. We've been doing it in aerospace for decades. Tech just needs to catch up.

JT
JenniferTang_StudentActivist

Graduate student in CS at Stanford. Also involved in student activism around tech ethics.

The Anthropic situation is affecting recruiting at universities in real time. Multiple students in my program who had Anthropic offers are reconsidering. The career services office has gotten questions from undergrads about how to evaluate companies' ethical commitments vs. their marketing claims.

There's a generational element here that's being underplayed. Millennials and Gen Z tech workers have different expectations about employer-employee relationships than previous generations. We saw the financial crisis, the gig economy, the constant layoffs. We're skeptical of corporate promises.

When a company explicitly brands itself as different and then acts like every other company, the backlash is severe. Anthropic's recruiting pitch to students was literally "we're the responsible AI company." That message resonated. Now it's a liability.

BH
BrianHarris_NurseICU

ICU nurse. Completely outside the tech world but I've been following this because it relates to a debate we're having in healthcare.

AI diagnostic tools are being developed by companies that also do defense contracting. If the same AI that helps me diagnose sepsis early is also being used for military intelligence analysis, does that matter? Should I care?

My instinct is that dual-use technology is everywhere and we can't avoid it. The chip in my phone probably has military applications. The internet I'm using to write this grew out of ARPANET, a Defense Department project. Medical advances often come from military research.

But I also understand why the Anthropic employees are upset. They joined a specific company for specific reasons. The company changed the deal. That's different from just accepting that technology has multiple uses.

I guess what I'm saying is: the issue isn't whether technology CAN be dual-use. It's whether companies should be transparent about that from the beginning. Anthropic wasn't. That's the problem.

LC
LucasCohen_SmallBizOwner

Small business owner (IT consulting firm, 8 employees). The scale is wildly different but the trust issue resonates.

I made a decision last year to take on a client that one of my employees was uncomfortable with (cryptocurrency company, employee thought crypto was harmful). I talked to the employee, explained why I thought the work was legitimate, offered to let them not work on that project. They appreciated being consulted even though they disagreed with the decision.

That's the thing — you can't give employees veto power over every business decision. But you can create a culture where their concerns are heard and taken seriously. Seems like Anthropic didn't do even that basic level of communication.

The irony is that if Dario had floated the DoD contract idea in an all-hands before signing, he probably could have addressed concerns early and avoided this whole crisis. Some employees would have left. That's fine. Better than 900 angry employees after the fact.

SN
SarahNguyen_AnthEmp

Final update from me (probably). The Deployment Review Board has been formally constituted. Five employee representatives were elected by company-wide vote. Three external ethics advisors were appointed. The board's charter gives it binding authority over any government contract above $10M or any contract involving military end-use.
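In engineering terms, the trigger is roughly the following (my paraphrase of how the charter was described to us, not its actual text):

```python
# Paraphrase of the review trigger as described internally;
# not the charter's actual language or field names.
def requires_board_review(value_usd: float, military_end_use: bool) -> bool:
    return value_usd > 10_000_000 or military_end_use
```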

First meeting is next week. The organizers of the original letter are cautiously optimistic but in a "we'll believe it when we see it" mode. The big test will be whether management actually follows the board's recommendations or finds ways to work around them.

About 25 employees have left since this started, including the two senior safety researchers mentioned earlier. But the mass exodus that recruiters were predicting hasn't happened. Most people are waiting to see if the reforms are real.

For what it's worth, I'm staying. I believe the company can do better. The protest wasn't about leaving — it was about making Anthropic live up to its stated values. If we leave, those values die completely. If we stay and fight, there's at least a chance.

Thank you all for the legal education in this thread. It genuinely helped me and others understand our rights and options. That's valuable.

KM
KellyMartinez_Mod
OP MOD

Thread remains open for discussion as developments continue. This is now one of the longest and most substantive threads in the forum's history.

Key themes that emerged:

  • Employee rights under NLRA Section 7 and California Labor Code 1101-1102 are robust but not unlimited
  • The Trump ban raises serious First Amendment and procurement law questions
  • Anthropic's PBC structure creates unique corporate governance obligations
  • Dual-use technology governance needs industry-wide frameworks, not ad-hoc responses
  • The precedent set here will influence AI development and employee relations for years

Thank you to all contributors, especially the attorneys, former government officials, and tech workers who provided detailed analysis and personal perspectives. This is exactly what this forum is for.
