Private members-only forum

MEGATHREAD PINNED AI in the Iran War — Claude, ChatGPT, DeepSeek & the Ethics of Algorithmic Warfare

Started by KellyMartinez_Mod · Mar 1, 2026 · 59 replies · General Discussion
Information shared here is for educational purposes only and does not constitute legal advice.

TL;DR — Key Discussion Points

KM
KellyMartinez_Mod MOD OP

Pinning this as a megathread given how fast things are moving. Operation Epic Fury launched Friday night (Feb 28) and we already have confirmation that multiple AI systems were used in the targeting pipeline. This touches on international humanitarian law, employment law, government contracting, and tech ethics all at once, so I want one central place for the discussion.

Here is what we know so far from confirmed reporting:

  • The Pentagon used Anthropic's Claude for intelligence assessment, target identification, and battle damage assessment during the lead-up to strikes on Iranian nuclear and military infrastructure.
  • President Trump publicly declared Anthropic a "Radical Left AI company" and signed an executive order banning it from government contracts on March 1.
  • OpenAI reportedly took the Pentagon deal within hours of Anthropic being cut off.
  • Over 900 Anthropic employees have signed a letter titled "We Will Not Be Divided" protesting military use of Claude.
  • Separately, China's DeepSeek model was reportedly used by the PLA for battlefield simulation, running 10,000 scenarios in 48 seconds.
  • The LUCAS autonomous drone program ($35K per unit, reverse-engineered from Iranian Shahed-136) appears to incorporate AI targeting.

Ground rules for this thread: keep it professional, cite sources where possible, and flag clearly when you are speculating versus stating confirmed facts. I will be moderating aggressively. No partisan flamewars. We are here to discuss the legal and ethical dimensions.

I will update the TL;DR box at the top as the discussion evolves. Let's hear from the IHL experts, defense attorneys, tech workers, and anyone else with relevant perspective.

RH
Prof_RichardHaines IHL SCHOLAR

Thank you for organizing this, Kelly. I teach international humanitarian law at Georgetown and have been fielding calls from journalists all weekend. Let me lay out the legal framework as concisely as I can.

The core question is whether AI-assisted targeting complies with the principles of distinction and proportionality under Additional Protocol I to the Geneva Conventions. Article 51(4) prohibits indiscriminate attacks. Article 57 requires "those who plan or decide upon an attack" to take "constant care" to spare civilians. The critical legal question is: does an AI system doing target identification constitute "those who plan or decide upon an attack"?

Under current IHL, the answer is almost certainly no. The AI is a tool, and the human commander retains legal responsibility. But the practical reality is that when a system like Claude processes thousands of intelligence inputs and outputs a target recommendation in seconds, the human "in the loop" is increasingly a rubber stamp. The ICRC has been warning about this since their 2021 position paper on autonomous weapons.

The more immediate legal issue is whether the speed of AI-assisted targeting fundamentally undermines the "feasible precautions" requirement. If an AI can generate a target package faster than a human can meaningfully review it, are we meeting our Article 57 obligations? I would argue we are on extremely thin ice.

I should note that none of this is entirely new. Israel's "Gospel" and "Lavender" systems in Gaza raised identical questions. But the scale of Operation Epic Fury, and the explicit acknowledgment that Claude was in the targeting pipeline, makes this a qualitative leap.

JT
JakeTorres_Vet

20 years in the Army, retired O-5. I was in the targeting cell during OIF in 2003. Let me give you the practitioner's perspective here because the academic framing, with all due respect to the professor, misses some operational reality.

We have always used tools to identify targets. Signals intelligence, imagery analysis, human intelligence — all of it feeds into a targeting process that culminates in a commander's decision. Claude being in that pipeline is not fundamentally different from an imagery analyst using pattern recognition software. The commander still approves the strike. The JAG still reviews the target package. The ROE still apply.

Where I do see a problem is the speed issue the professor raised. In my day, a time-sensitive target took hours to work through the kill chain. If AI compresses that to minutes, the review process gets compressed too, and that is where mistakes happen. Not because the AI is wrong, but because humans do not have time to catch it when it is.

I also want to push back on the "rubber stamp" characterization. Every commander I served under took targeting authority deadly seriously. You do not just click approve on a strike package. You look at the collateral damage estimate, the civilian pattern of life, the intel confidence level. AI might speed up the front end, but the back end — the actual decision — still rests on a human being who knows they could end up in front of an Article 32 hearing if they get it wrong.

SN
SarahNguyen_IP ATTORNEY

I want to focus on the contracting and employment law angle here because that is my lane.

The executive order banning Anthropic from government contracts is, frankly, legally dubious. Government contracting is governed by the FAR (Federal Acquisition Regulation) and the CICA (Competition in Contracting Act). You cannot just ban a company because the President does not like its politics. There are specific debarment and suspension procedures under FAR 9.4 that require cause — fraud, criminal conviction, serious performance failure, etc. Calling a company "Radical Left" is not cause.

Anthropic almost certainly has standing to challenge this under the APA (Administrative Procedure Act) as arbitrary and capricious agency action. They could also potentially bring a First Amendment claim if the ban is retaliatory for the employee letter or the company's public statements about AI safety. There is precedent here: the Trump v. Hawaii travel ban cases established that even facially neutral executive actions can be struck down if motivated by animus.

The more interesting question is what happens to OpenAI's contract. If they took over a sole-source contract without going through proper competitive bidding, that is a potential CICA violation. Any disappointed bidder — or Anthropic itself — could file a protest with the GAO.

On the employment side, the 900+ Anthropic employees who signed the letter are in a fascinating position. California Labor Code Section 1101 prohibits employers from adopting rules that prevent employees from engaging in political activity. If Anthropic retaliates against signatories, they have a strong wrongful termination claim. But if Anthropic does not retaliate and instead embraces the letter, Trump might use that as further justification for the ban. It is a Catch-22.

AE
AnthropicAnon_2026

I work at Anthropic. I am posting under a throwaway for obvious reasons. I signed the letter. I want to give some context that is not in the press coverage.

Most of us did not know about the military contract until the Washington Post story broke Thursday night. The contract was handled by a small team and classified at a level most employees did not have access to. When the story dropped, people were furious. Not just because of the ethical implications, but because leadership had repeatedly told us in all-hands meetings that Claude would not be used for lethal targeting. The word "lethal" was doing a lot of work in those statements, apparently, because intelligence assessment and target identification were considered non-lethal applications.

The letter came together in about four hours Friday morning. 900 signatures is roughly 40% of the company. There were people on both sides. Some colleagues think we have a duty to support national defense. Others, including me, think we were hired under an implicit social contract that our work would not be used to help kill people. There is no explicit clause in our employment agreements about military use, which is part of the problem.

I cannot say more about the technical details of how Claude was used, but I can say that the framing of Claude as "just another tool in the pipeline" understates its role significantly. The system was not just processing data. It was making assessments and recommendations that directly shaped which targets were struck and which were not. Whether you call that a "decision" is a semantic question, but functionally, it was.

Some of us are talking to employment lawyers. The California protections SarahNguyen mentioned are real, and we are aware of them. But the bigger concern is that the executive order essentially put a target on anyone associated with Anthropic. Some colleagues with active security clearances are worried about having them revoked by association.

DW
DanWright_DefCon

Defense contractor here, 15 years in the space. I need to push back on some of the hand-wringing in this thread.

Every major weapons system in the US arsenal has software in the decision loop. AEGIS, Patriot, the F-35's sensor fusion, JDAM targeting — all of these incorporate algorithmic processing that shapes kill decisions. The difference with Claude is that it is a large language model rather than a narrow AI, and that distinction is less meaningful than people think. A targeting algorithm is a targeting algorithm. The question is accuracy and reliability, not whether the algorithm was trained on internet text.

As for the "speed kills meaningful review" argument: the entire point of time-sensitive targeting is speed. If a mobile missile launcher is about to fire, you do not have three hours for a committee meeting. The legal standard is "feasible precautions," not "every precaution imaginable." If AI gets you to a 95% confidence target ID in two minutes instead of 60% confidence in two hours, you have actually improved your compliance with IHL, not degraded it.

The real story here is not the ethics. It is that Anthropic got cold feet, got banned, and OpenAI swooped in. This is a $2.3 billion addressable market in DoD AI contracts over the next five years. Anthropic just voluntarily removed themselves from it. Their shareholders should be furious.

LK
LisaKim_Ethicist

I run an AI ethics research lab at Stanford. Dan, your framing of this as "hand-wringing" is exactly the dismissive attitude that has gotten us into this situation.

The difference between AEGIS and Claude is not trivial. AEGIS operates within extremely narrow, well-defined parameters. It detects incoming missiles and engages them. The decision space is constrained. Claude is a general-purpose reasoning system that was asked to assess intelligence and identify targets in a complex, ambiguous operational environment with imperfect information. The failure modes are categorically different. AEGIS might fail to intercept a missile. Claude might misidentify a school as a weapons depot because it pattern-matched incorrectly on satellite imagery metadata.

Your 95% confidence number is also misleading. Confidence scores from LLMs are not calibrated the same way as traditional statistical models. When Claude says it is 95% confident in a target identification, that number is generated by the same next-token prediction mechanism that sometimes confidently hallucinates legal citations. We do not have rigorous, published benchmarks for Claude's performance on military target identification because, by definition, that work is classified.

And the market argument is morally bankrupt. "Someone else will do it if we do not" is not an ethical principle. It is an abdication of responsibility. I am glad 900 Anthropic employees understand that even if their company's leadership apparently does not.

MR
MarcusReed_JAG ATTORNEY

Former JAG officer, now in private practice doing defense-side government contracts work. Let me address the legal review process because I think both sides are getting it slightly wrong.

The targeting review process in a major operation like Epic Fury involves multiple layers: the combatant command JAG, the component command JAG, sometimes the Pentagon's General Counsel office, and for high-collateral-damage estimates, the Secretary of Defense personally. This process does not go away because AI is in the pipeline. If anything, the existence of an AI tool generating the initial assessment creates a clearer record for post-hoc legal review.

The real legal exposure is not in the targeting itself. It is in the question of whether the AI system was adequately tested and validated before being deployed in a live operation. DoD Directive 3000.09 requires that autonomous and semi-autonomous weapons systems undergo "rigorous testing and evaluation." If Claude was deployed in a targeting role without sufficient T&E, that is a policy violation that could have legal consequences for the officials who authorized its use.

I also want to flag something nobody is talking about: battle damage assessment. Post-strike BDA is a legal requirement under IHL. If Claude was used for BDA, it was essentially determining whether a strike achieved its military objective and whether civilian casualties occurred. If Claude underreported civilian casualties, that is not just an ethical problem. It is a potential war crime cover-up, and the individuals who relied on that assessment without independent verification could face personal liability under command responsibility doctrine.

YP
YasminPatel_Tech

Software engineer here, previously at Google (quit during Project Maven). I want to talk about the technical reality of how these systems actually work because there is a lot of misunderstanding in this thread.

When the Pentagon says it used "Claude for intelligence assessment," that almost certainly means they built a custom pipeline on top of Claude's API, likely using Claude with fine-tuning or extensive system prompts tailored to military intelligence analysis. Claude itself is a general-purpose model. The military application layer is where the targeting-specific logic lives. This distinction matters because it means Anthropic may not have had full visibility into how Claude was being used. The API terms of service prohibit certain uses, but enforcement of those terms in a classified environment is essentially impossible.

The technical vulnerability here is prompt injection and adversarial inputs. If Iranian intelligence services understood the system architecture, they could potentially manipulate the inputs (fake signal intercepts, planted documents, spoofed satellite imagery) to cause Claude to generate incorrect target assessments. This is not hypothetical. Adversarial attacks on vision and language models are well-documented in the academic literature. In a military context, a successful adversarial attack could result in strikes on civilian infrastructure.

I left Google over Maven because I believed then, and believe now, that general-purpose AI systems are not reliable enough for life-and-death decisions. Eight years later, the models are better, but the fundamental problem — lack of ground truth validation in novel operational environments — has not been solved.

CP
ColonelPike_Ret

Retired Air Force colonel, 28 years. I commanded a wing that was one of the first to integrate AI targeting tools in 2024. I am now consulting for a defense think tank.

I want to address the DeepSeek dimension because everyone is focused on the US side and ignoring what is arguably the more dangerous development. The PLA running 10,000 battlefield simulations in 48 seconds is not just a speed advantage. It is a qualitative shift in how military planning works. Those simulations are not just "what if" exercises. They are being used to identify optimal force positioning, predict US response patterns, and pre-plan counter-strike packages.

The United States does not have the luxury of unilateral disarmament in AI warfare. If we decide, for ethical reasons, that Claude and ChatGPT cannot be used in targeting, we do not get a world without AI in warfare. We get a world where China and Russia use AI in warfare and we do not. That is not a better outcome for anyone, including Iranian civilians.

The Anthropic employee letter is well-intentioned but strategically naive. The signatories are essentially saying they would prefer the US military operate with a competitive disadvantage against adversaries who face no such ethical constraints. I understand the moral impulse, but morality does not exist in a vacuum. It exists in a world where adversaries are actively using these tools.

What we need is not prohibition. It is regulation. Clear rules of engagement for AI systems, mandatory human review thresholds, transparency requirements, and international treaty negotiations. The CCW (Convention on Certain Conventional Weapons) framework is the right venue for this, but progress has been glacially slow.

KM
KellyMartinez_Mod MOD OP

Great discussion so far. Quick mod note: I have deleted three posts that were just partisan talking points with no legal or ethical substance. Keep it focused, people.

AnthropicAnon — if you are still here, a question. You mentioned that the contract was "classified at a level most employees did not have access to." Does Anthropic have a facility security clearance? Are there employees with individual clearances? The implications for information handling and insider threat obligations under NISPOM would be significant if so.

AE
AnthropicAnon_2026

Kelly, I cannot answer that directly without potentially creating problems for myself. What I can say is that the organizational structure for handling classified work was set up in a way that kept it siloed from the broader engineering and research teams. Most of us interact with Claude through the same APIs and interfaces the public uses. There is a separate group that interfaces with government clients.

What I will add is that the "We Will Not Be Divided" letter title was deliberately chosen. There was a real fear that leadership would try to split the company into "pro-defense" and "anti-defense" factions. The letter was meant to signal that this is not a fringe position. It is a broadly shared concern across engineering, research, policy, and even some people in the government partnerships team.

Several people have already retained counsel. I am one of them. My lawyer tells me that California's whistleblower protections (Labor Code 1102.5) may apply if we can show that the military use of Claude violated Anthropic's own published responsible use policy. That policy explicitly states Claude should not be used for "activities that could cause serious harm." Whether military targeting constitutes "serious harm" seems self-evident, but corporate lawyers have a way of defining terms narrowly.

DW
DanWright_DefCon

AnthropicAnon, with respect, your company's leadership made a business decision to take a government contract. If you disagreed with that decision, your remedy was to quit, not to organize a public pressure campaign that gave the President political cover to ban your employer from a major revenue stream.

I realize that sounds harsh, but think about it from the defense industry perspective. Every major defense contractor has employees who personally disagree with specific programs. Lockheed engineers do not publicly sign letters protesting the F-35. Raytheon employees do not organize walkouts over missile sales to Saudi Arabia. Not because those employees lack morals, but because they understand the distinction between personal ethics and professional obligations.

The tech industry's belief that it gets to dictate how its products are used by the United States government is, frankly, extraordinary. No other industry operates this way. And the practical result of the Anthropic revolt is not that Claude stops being used for warfare. It is that OpenAI's ChatGPT gets used instead, without any of the safety guardrails Anthropic was apparently building in. Congratulations on your moral victory, I guess.

RA
RachelAdams_LaborLaw ATTORNEY

Employment lawyer here. I want to address Dan's point about "your remedy was to quit" because that is legally incorrect in several important ways.

First, under the National Labor Relations Act Section 7, employees have the right to engage in concerted activity for mutual aid and protection. A letter signed by 900+ employees protesting working conditions (which includes the nature of the work being performed) is textbook protected concerted activity. This applies even to non-unionized workplaces. Anthropic cannot retaliate against the signatories without violating federal labor law.

Second, California law goes further. Labor Code 1101-1102 prohibit employers from controlling or directing the political activities of employees. If the letter is framed as political speech (which protesting military contracts arguably is), it is doubly protected.

Third, there is the whistleblower angle. If Anthropic's acceptable use policy prohibits military targeting applications, and the company violated its own policy by taking this contract, employees who report that violation are protected under California Labor Code 1102.5 and potentially under Sarbanes-Oxley if Anthropic is considered a public-reporting company (they have public debt instruments).

The comparison to Lockheed and Raytheon is inapt. Those companies' employees signed up to work in defense. Anthropic's employees were recruited with explicit assurances about AI safety and beneficial use. The implied contract theory is strong here: if your employer promises you are building technology to help humanity and then uses it to identify bombing targets, that is a material change in working conditions that triggers various legal protections.

Dan, I would also note that "just quit" is not a legal remedy. It is an economic punishment for exercising legally protected rights. That is the definition of a chilling effect.

OG
OmarGarcia_OSINT

OSINT researcher here. I have been tracking the open-source evidence around Operation Epic Fury and want to share some findings relevant to the AI targeting discussion.

Based on satellite imagery from Planet Labs and Maxar, at least 47 distinct target sites were struck in the initial wave. Of those, OSINT analysts have been able to verify 39 as legitimate military targets (IRGC facilities, missile production sites, air defense installations). Six sites remain ambiguous, and two appear to be civilian infrastructure that was either misidentified or had dual-use characteristics.

The two potentially misidentified sites are a pharmaceutical warehouse in Isfahan and a telecommunications facility in Shiraz. Iranian state media is claiming these were deliberate attacks on civilian infrastructure. US Central Command says both sites were assessed as "military-adjacent" with "confirmed weapons storage." Without ground truth verification, which we will not have for weeks or months, we cannot determine whether the AI targeting system made an error, whether the human reviewers overrode an AI recommendation, or whether these were actually legitimate targets that Iran is misrepresenting.

What I can say is that the 39-out-of-47 accuracy rate, if you want to call it that, is broadly consistent with historical US strike accuracy in major operations. The first Gulf War had a higher reported rate, but that was also heavily curated. The 2003 invasion of Iraq had significant misidentification problems in the initial strikes. So if Claude was in the loop, it performed roughly in line with historical human-driven targeting. Whether "roughly in line" is good enough when AI was supposed to be better is a separate question.

NK
NatalieKwon_PhD

AI researcher at MIT, focus on LLM evaluation and safety. I want to address something Lisa raised about confidence calibration because it is central to the legal question of whether AI targeting meets the "feasible precautions" standard.

Lisa is correct that LLM confidence scores are not calibrated in the same way as traditional statistical models. But it is important to understand why. When Claude outputs a target assessment with a confidence level, that confidence is derived from the model's internal representations, not from a frequentist probability calculation. This means the confidence score is more like "how consistent is this conclusion with my training data and the provided inputs" rather than "there is a 95% probability this is a military target."

The Pentagon's AI evaluation frameworks (documented in the CDAO's Responsible AI Toolkit) do include calibration testing. If Claude was deployed through proper channels, its confidence scores would have been validated against known ground truth datasets. But here is the catch: those datasets are based on historical intelligence, which means the model's calibration is only as good as the historical data it was tested against. In a novel operational environment like Iran, with different infrastructure patterns, different camouflage techniques, and different geopolitical context, the calibration could be significantly off.

This is the fundamental problem with deploying general-purpose AI in military contexts. The model has broad capabilities but narrow validated reliability. It is like using a Swiss Army knife for surgery. Yes, it has a blade, but it was not designed, tested, or validated for that specific high-stakes application.

BH
BrianHolt_GovCon

Government contracts attorney here. I want to address the OpenAI takeover of the contract because there are serious procurement law issues that are not getting enough attention.

If the Anthropic contract was a standard FAR Part 15 negotiated procurement, you cannot just hand it to another vendor. The contracting officer would need to either (a) terminate Anthropic's contract for convenience and issue a new solicitation, or (b) invoke some emergency authority. The likely legal basis here is FAR 6.302-2 (unusual and compelling urgency) or potentially a national security exemption under 10 USC 3204.

But even under urgency or national security exemptions, there are documentation requirements. The contracting officer needs to prepare a justification and approval (J&A) document, and for a contract of this size, it would need approval at a very senior level — potentially the Under Secretary of Defense for Acquisition.

The timeline here is suspicious. If Anthropic was banned on Saturday morning and OpenAI had the contract by Saturday evening, there is almost no way proper procurement procedures were followed. That opens up the contract to a GAO bid protest, a Court of Federal Claims challenge, or both. Any disappointed offeror (not just Anthropic, but Google, Microsoft, Palantir, etc.) could file.

I would also note that sole-source contracts awarded under urgency exemptions are subject to enhanced oversight under the DoD Inspector General's purview. Someone is going to be auditing this contract, and if it turns out the "urgency" was manufactured by an executive order that itself lacks legal basis, the whole procurement could unravel.

VL
VeronicaLaw_ACLU

Constitutional law attorney. I spend most of my time on First Amendment and due process cases. The executive order banning Anthropic is one of the most blatant viewpoint-discrimination actions I have seen from any administration.

The President called Anthropic "Radical Left" and then banned them from government contracts. That is viewpoint discrimination on its face. Under Perry v. Sindermann (1972) and its progeny, the government cannot condition a benefit (here, eligibility for contracts) on the exercise of constitutional rights (here, speech by the company and its employees). The unconstitutional conditions doctrine applies squarely.

We are already in discussions about potential litigation. The factual record here is unusually strong because the President's own public statements establish the retaliatory motive. In most unconstitutional conditions cases, you have to infer the improper motive from circumstantial evidence. Here, the President tweeted the quiet part out loud.

I also want to flag that the ban has broader implications beyond Anthropic. If the government can ban a tech company from contracts because 40% of its employees signed a political letter, every company in America has reason to suppress employee speech. That is a chilling effect on a massive scale, and it implicates not just the First Amendment but also the NLRA protections Rachel mentioned earlier.

TZ
TinaZhang_ChinaLaw ATTORNEY

International law practitioner, focus on US-China tech regulation. I want to bring the DeepSeek angle back into the conversation because it has massive implications that are being overlooked.

First, the claim that DeepSeek ran 10,000 battlefield simulations in 48 seconds. This was reported by the South China Morning Post citing PLA sources, and we should treat it with appropriate skepticism. Chinese state media routinely inflates military capabilities for deterrence purposes. But even if the actual number is 1,000 simulations in 48 seconds, that represents a capability that the US does not publicly claim to match.

Second, the legal framework for AI in Chinese military operations is fundamentally different from the US framework. China is not a party to Additional Protocol I of the Geneva Conventions. China's domestic military regulations do not include the same "feasible precautions" requirements. And China's AI governance framework (the 2023 AI Regulation and its 2025 amendments) explicitly exempts military applications from transparency and safety requirements.

Third, DeepSeek is nominally a private company but operates under the effective direction of the Chinese state. The National Intelligence Law of 2017 requires all Chinese organizations to "support, assist, and cooperate with" state intelligence work. This means DeepSeek has no ability to refuse military applications of its technology, even if individual researchers wanted to. The contrast with Anthropic's employee revolt is stark.

The policy implication is uncomfortable: the Anthropic ban does not just remove one company from the defense pipeline. It signals to every AI company that cooperating with the military risks political retaliation while refusing to cooperate faces no consequences. That asymmetry benefits China directly.

JF
JamesFletcher_PrivMil

Private military contractor perspective. I have been working with autonomous systems in theater since 2023. Let me talk about the LUCAS drones because nobody in this thread seems to understand what they actually are.

LUCAS (Lightweight Unmanned Combat Aerial System) is not a "reverse-engineered Shahed-136." It uses the same delta-wing aerodynamic profile and a similar propulsion concept, but the avionics, guidance, and warhead are entirely different. The Shahed-136 is a one-way attack drone with GPS/INS guidance and no terminal homing. LUCAS has onboard computer vision, can loiter, can be redirected in flight, and has a human-in-the-loop authorization requirement for engagement.

At $35,000 per unit, LUCAS is a game-changer for attritable platforms. You can deploy them in swarms of 20-50, have them autonomously survey a target area, and then individually designate targets for engagement. The AI identifies potential targets, presents them to a human operator, and the human authorizes the strike. This is more human oversight, not less, compared to a conventional JDAM drop from 30,000 feet where the pilot never sees the target with their own eyes.

The legal issue with LUCAS is not autonomy. It is the communication link. If the link goes down, the current ROE requires the drone to enter a holding pattern and eventually self-destruct. But there are proposals to allow "autonomous engagement" in communications-denied environments. That is where the legal line gets blurry, and I think that is where the IHL community should be focusing its attention rather than relitigating whether Claude can read satellite photos.

SN
SarahNguyen_IP ATTORNEY

Circling back on a point Brian raised about the procurement timeline. I did some digging and found that OpenAI actually signed an Other Transaction Authority (OTA) agreement with the DoD back in November 2025 for "AI research and evaluation." OTAs are exempt from most FAR requirements, including competitive bidding. If the Pentagon is expanding that existing OTA to cover the work Anthropic was doing, they might have a technically compliant legal path, even if the underlying motivation is politically corrupt.

This is a well-known loophole in defense procurement. OTAs were designed for rapid prototyping and experimental technology, but they have been increasingly used to circumvent normal procurement rules. The GAO has flagged this in multiple reports. The irony is that both the Trump and Biden administrations expanded OTA authority specifically to get AI companies into the defense pipeline faster.

The lesson for Anthropic and future AI companies: read your OTA terms very carefully. Once you are in an OTA framework, the government has enormous flexibility to expand scope, modify terms, and even transfer the agreement. The standard protest remedies that exist under FAR Part 33 do not fully apply to OTAs.

RH
Prof_RichardHaines IHL SCHOLAR

Responding to Colonel Pike's point about unilateral disarmament. This is the most common argument I hear from the defense establishment, and it has surface appeal but falls apart under scrutiny.

International humanitarian law is not optional. It does not become inapplicable because your adversary is violating it. Article 1 common to all four Geneva Conventions requires states to "respect and ensure respect" for the conventions "in all circumstances." The phrase "in all circumstances" is not diplomatic filler. It means that US obligations under IHL do not change based on what China or Russia are doing with DeepSeek.

The argument also contains a logical flaw. The question is not whether to use AI in warfare. The question is whether the specific way AI is being used complies with existing legal requirements. If Claude's targeting recommendations do not allow for adequate human review, the solution is not to abandon AI but to redesign the human-machine interface to ensure meaningful human control. The ICRC's concept of "meaningful human control" is precisely this middle ground: you can use AI tools, but the human must retain genuine authority over life-and-death decisions.

I am increasingly convinced that the speed of AI-assisted targeting is the core legal issue. If a system generates target recommendations faster than a human can meaningfully evaluate them, the system design itself violates the "feasible precautions" requirement, regardless of whether a human technically clicks the "approve" button. We need legally mandated minimum review times for AI-generated target packages.

MW
MikeWalters_Paralegal

Paralegal at a BigLaw firm that does defense work. Not speaking for my firm, obviously. I just want to ask a practical question that I have not seen addressed.

What happens to the data? If Claude was used to process classified intelligence and generate targeting assessments, that data presumably passed through Anthropic's infrastructure (or a government-controlled instance of it). Who owns the outputs? Can Anthropic be compelled to preserve them for future war crimes investigations? Is there a litigation hold obligation?

Under the Federal Records Act, records created in the course of government business must be preserved. If Claude's outputs are government records, they cannot be destroyed. But if they are on Anthropic's servers, does Anthropic have a parallel obligation to preserve them? And if Anthropic is now banned from the contract, who ensures the data is properly transitioned to OpenAI or to a government archive?

These are not hypothetical concerns. The ICC is already making noises about investigating potential war crimes in the Iran strikes. If those investigations go forward, the AI targeting data will be discoverable, and chain of custody issues will matter.

HB
HeatherBrown_ICCBA ATTORNEY

International criminal law practitioner. I have argued cases before the ICC. Mike raises an excellent point about data preservation, and I want to expand on the ICC angle.

The United States is not a party to the Rome Statute, which means the ICC does not have direct jurisdiction over US personnel. However, Iran is also not a party, which complicates things further. The ICC could potentially assert jurisdiction if the UN Security Council refers the situation, but the US would veto that.

The more likely avenue for accountability is domestic. Under the War Crimes Act (18 USC 2441) and the Uniform Code of Military Justice, US personnel who commit war crimes can be prosecuted in US courts. If AI targeting recommendations contributed to strikes on civilian objects, the individuals who approved those strikes — and potentially the individuals who built and validated the AI system — could face liability.

The data preservation question is critical because AI systems create a uniquely detailed evidentiary record. Unlike a human analyst who might write a brief summary of their assessment, an AI system generates logs, confidence scores, alternative assessments, and input data that collectively create a comprehensive record of the decision-making process. This is both a blessing and a curse for accountability: it provides more evidence than traditional targeting processes, but it also makes it harder to argue "fog of war" or "honest mistake" if the logs show the AI flagged concerns that were overridden.

I would strongly advise any AI company involved in military targeting to treat their data as subject to potential future legal proceedings and preserve everything.

PP
PriyaPrasad_AIPolicy

AI policy researcher at Brookings. I want to zoom out and talk about the systemic implications of what we are seeing, because I think the thread is getting lost in the weeds of individual legal questions.

We are witnessing the real-time collapse of the AI safety consensus that existed (tenuously) since 2023. That consensus held that AI companies would develop safety norms, governments would regulate gradually, and military applications would be subject to special scrutiny. In the space of 72 hours, all three pillars have crumbled. Anthropic's safety commitments were overridden by a classified contract. Government regulation was replaced by executive fiat. And military AI deployment happened without any of the transparency mechanisms that were supposed to enable oversight.

The deeper problem is that AI governance was built on a foundation of voluntary corporate commitments. The White House AI commitments that major companies signed in 2023 were not legally binding. Anthropic's responsible use policy is a corporate document that can be amended by the board at any time. When national security interests collided with safety commitments, safety lost immediately.

The only durable solution is binding law. We need a domestic statute that establishes clear rules for military AI deployment, creates an oversight mechanism with teeth, and protects companies and employees who refuse to participate in applications that violate IHL. The EU AI Act attempted something like this but explicitly excluded military applications. The US has no equivalent legislation even on the table.

Until we have binding law, we are in a policy vacuum where the rules are whatever the current administration decides they are on any given Saturday morning.

DW
DanWright_DefCon

Rachel, I appreciate the labor law primer, but I think you are missing the forest for the trees. Yes, the employees have a legal right to sign that letter. Nobody is disputing that. What I am saying is that exercising a legal right can still be strategically foolish.

The letter gave the administration exactly what it wanted: a pretext to portray Anthropic as unpatriotic and transfer the contract to a more compliant vendor. OpenAI's leadership, whatever you think of them personally, understood the assignment. They did not issue statements about ethics. They did not have employees sign letters. They took the contract, cashed the check, and now they are building the next generation of military AI tools with zero public accountability because nobody at OpenAI is brave enough or foolish enough to raise concerns.

Is that outcome better for AI safety? Is that outcome better for IHL compliance? I would argue it is categorically worse. And the Anthropic employees, with their well-intentioned letter, helped make it happen.

LK
LisaKim_Ethicist

Dan, this argument is a variant of the "responsible stakeholder" theory that defense industry insiders always deploy. "Better us than them." "Stay at the table to influence outcomes." "If you leave, someone worse takes your place." It sounds pragmatic, but it is functionally identical to complicity.

Let me put it in legal terms since this is a law forum. If a lawyer discovers their client is committing fraud, the ethical obligation is to withdraw, not to stay on to "mitigate the fraud from inside." The same principle applies here. If Anthropic believed its technology was being used in ways that violated IHL, the ethical obligation was to stop providing it, not to continue providing it while filing internal memos about safety concerns.

And to your point about OpenAI being "worse" — yes, probably. But that is not Anthropic's fault. It is the government's fault for choosing to use AI in targeting applications without adequate legal frameworks, and OpenAI's fault for eagerly stepping into the breach. Blaming the people who said "no" for the consequences of someone else saying "yes" is a moral shell game.

GS
GregSimpson_USAF

Active duty Air Force, intelligence officer. Posting on personal time, personal views only, standard disclaimers apply.

I cannot discuss specifics of any current operation. What I can say is that the characterization of AI as "making targeting decisions" fundamentally misrepresents how the targeting process works. I have worked in combined air operations centers, and the targeting cycle is a multi-step process involving collection management, analysis, target development, weaponeering, force application, and assessment. AI tools assist at various stages, but at no point does an AI system "decide" to strike a target.

The human decision points are numerous and meaningful. A collection manager decides what intelligence to collect. An analyst decides how to interpret it. A targeting officer develops the target nomination. A JAG reviews the nomination for legal sufficiency. A commander approves the target for prosecution. A weaponeer selects the appropriate munition. A pilot or operator executes the strike. At each stage, a human being is making a judgment call with their name attached to it.

Does AI speed up parts of this process? Yes. Does it make some steps more efficient? Yes. Does it replace human judgment at any of the critical decision points? No. Not in any operation I have been involved in. The people claiming otherwise are either working from incomplete information or have an agenda.

YP
YasminPatel_Tech

Greg, I respect your service and your perspective, but the "AI is just a tool" framing ignores the well-documented phenomenon of automation bias. Research going back to the 1990s (Parasuraman & Riley, 1997; Cummings, 2004; many others) shows that humans consistently over-rely on automated recommendations, especially when the automated system has been presented as highly capable.

In controlled experiments, operators agree with automated recommendations 85-95% of the time, even when the automation is demonstrably wrong. This is not a human failing. It is a predictable cognitive response to working with systems that are usually right. If Claude's target assessments are correct 90% of the time, the human reviewer is going to develop a pattern of accepting them, and their ability to catch the 10% errors will degrade over time.

The formal process you described — multiple human decision points, JAG review, commander approval — is real and important. But the question is whether those humans are exercising independent judgment or whether they are experiencing automation bias that effectively turns their "review" into a confirmation of what the AI already decided. The legal fiction of human control does not guarantee the practical reality of human control.

This is why I think the ICRC's "meaningful human control" standard is more useful than the DoD's "human in the loop" standard. "In the loop" just means a human is present. "Meaningful control" means the human has the information, time, and genuine authority to override the system. Those are very different things.

AT
AlexTran_StartupLaw ATTORNEY

Startup and tech transactions attorney. I want to raise something that will affect every AI company in the country: the insurance implications.

After this week, every D&O insurance underwriter for AI companies is going to be re-evaluating their risk models. If your AI product can be commandeered for military targeting via a government contract, your company faces potential liability for war crimes, congressional investigations, employee lawsuits, and now apparently arbitrary executive orders banning you from major markets. The premium increases for AI company D&O coverage are going to be staggering.

More practically, every VC-backed AI company is going to face hard questions from their boards about government contracts. The Anthropic situation is a case study in how a single government contract can expose a company to simultaneous legal risk from every direction: employment law claims from employees, procurement law claims from the government, potential IHL liability from international bodies, and political risk from the executive branch. That is an uninsurable risk profile.

I would not be surprised to see AI companies start putting explicit "no military use" clauses in their terms of service, not because of ethics, but because their insurance carriers and investors demand it. Which, circling back to the geopolitics, further advantages Chinese AI companies that face none of these constraints.

KM
KellyMartinez_Mod MOD OP

Updating with some new developments as of this morning:

  • Reuters is reporting that the ACLU will file a lawsuit on Monday challenging the Anthropic executive order. Veronica, is this your case?
  • Anthropic's board issued a statement saying the company "complied with all applicable laws and contract terms" but did not address the substance of the employee letter.
  • Politico reports that at least three members of Congress are calling for hearings on AI in military targeting, with a focus on the Claude-to-ChatGPT transition.
  • Iranian state media is claiming 340 civilian casualties from Epic Fury. Pentagon says 12. The truth is almost certainly somewhere in between, but the AI targeting question is going to be central to any independent investigation.

Keep the discussion going. This thread is getting significant traffic from outside the usual forum membership, which is fine, but remember that registration is required to post.

DR
DerekRamos_MilHist

Military historian here. I want to provide some historical context that might cool some of the temperature in this discussion.

Every major military technology has gone through this exact cycle: deployment, controversy, calls for regulation, eventual normalization. The submarine was considered so barbaric that there were serious proposals to ban it entirely at the 1907 Hague Convention. Strategic bombing was condemned as inherently indiscriminate until precision-guided munitions made it (somewhat) more discriminate. Drones were called "cowardly" and "illegal" when they were first used for targeted killing in the early 2000s. Now they are completely normalized.

AI in targeting will follow the same trajectory. Within five years, the idea that AI should not be used in military operations will seem as quaint as the idea that submarines should be banned. The legal and ethical frameworks will adapt, as they always have, and the controversy we are having right now will be a footnote in military history.

This is not an argument for complacency. The legal frameworks matter. The ethics matter. The specific rules of engagement and oversight mechanisms matter. But the categorical opposition to AI in warfare is not going to win. It never has for any military technology in history. The productive work is in shaping how AI is used, not in arguing whether it should be used at all.

EJ
ElenaJohnson_HRLaw ATTORNEY

HR and employment law specialist. I want to address a very specific question that I know is on the minds of many tech workers reading this thread: if your company takes a military contract, can you refuse to work on it?

The short answer under current US law: it depends on your jurisdiction and the circumstances. There is no general federal "right of conscientious objection" for civilian employees. The military has conscientious objector provisions (10 USC 6049 for the Navy, AR 600-43 for the Army), but these do not extend to civilian employees of defense contractors or tech companies.

However, several legal theories may protect you:

  • NLRA Section 7: If you collectively refuse to perform military work, that is protected concerted activity, as Rachel explained above.
  • State law protections: California (Labor Code 1101-1102), New York (Labor Law 201-d), and several other states protect off-duty political activity and in some cases on-duty political speech.
  • Whistleblower protections: If the military use violates the company's own policies, applicable laws, or contractual commitments, reporting that violation is protected.
  • Contractual provisions: If your offer letter or employee handbook contains specific commitments about the nature of the work, a unilateral change to military applications could breach the implied covenant of good faith.

What you cannot do is unilaterally refuse a direct work assignment and expect to keep your job without consequences unless one of the above protections applies. If your manager tells you to work on a military project and you say no, they can discipline or terminate you unless you can invoke a specific legal protection. "I do not want to" is not a legal defense. "This violates our company's published safety policy" might be.

If you are in this situation, talk to a lawyer before you act. Seriously.

CB
CarlosBennet_Vet

Marine vet, two tours in Afghanistan, now working as a cybersecurity consultant. I want to speak to something that Jake and Greg touched on but that I think deserves more direct attention.

The people in this thread debating whether AI "makes decisions" have clearly never been in a targeting cell at 0300 when you have been awake for 36 hours and a time-sensitive target pops. In that situation, you are already not making fully rational decisions. Your judgment is impaired by fatigue, stress, information overload, and the pressure to act before the target moves. An AI system that can synthesize intelligence faster and present a clearer picture is not replacing human judgment. It is augmenting impaired human judgment.

I have seen what happens when targeting goes wrong without AI. Kunduz hospital in 2015. The Mosul airstrike in 2017 that killed over 100 civilians. Those were fully human decisions, and they were catastrophically wrong. The question is not whether AI is perfect. It is whether AI-augmented targeting is better than the status quo of exhausted humans making life-and-death calls on incomplete information at three in the morning.

I am not saying AI is the answer. I am saying that romanticizing human judgment in combat conditions is as dangerous as uncritically trusting AI. The truth is that both are fallible, and the goal should be a system that compensates for the weaknesses of each.

AE
AnthropicAnon_2026

I am back. Dan, I hear your argument about strategic foolishness, and I want to respond honestly: you might be right. Maybe the letter was counterproductive in the short term. Maybe it gave Trump the excuse he was looking for. But I do not think moral clarity is something you calibrate for tactical advantage.

Here is what I keep coming back to. I joined Anthropic because I believed the mission. AI safety is not just an abstract research agenda. It means building systems that are aligned with human values and that do not cause harm. When I found out that Claude was being used to help select bombing targets, I felt physically sick. Not because I am naive about how the world works, but because I had trusted that the company's stated values were real.

Several of my colleagues have quit. I am still here, partly because I think the people who stay have more ability to influence future decisions than the people who leave, and partly because, frankly, I have RSUs that have not vested yet and I cannot afford to walk away. That is an honest answer even if it is not a noble one.

What I will say is that the letter has already had an impact internally. Leadership is scrambling to develop a formal policy on military contracts. There are conversations happening about implementing technical controls that would prevent Claude from being used for targeting applications even if the API terms are changed. Whether those conversations lead to real changes or are just crisis management theater, I do not know yet.

TM
TamaraMills_ICRCAdvisor IHL EXPERT

I advise the ICRC on autonomous weapons issues. Posting in personal capacity. I want to address the "minimum review time" concept Professor Haines raised, because this is something we have been working on.

The concept of a legally mandated minimum review period for AI-generated targeting recommendations is attractive but faces practical obstacles. In time-sensitive targeting (fleeting targets, immediate threats to friendly forces, air defense suppression), a mandatory delay could cost military lives. Any legal framework needs to distinguish between deliberate targeting (where extended review is feasible) and dynamic targeting (where speed is operationally essential).

What we have proposed instead is a tiered oversight framework based on the type and consequences of the targeting decision:

  • Tier 1 (defensive/immediate threat): AI-assisted with concurrent human monitoring. Minimum two-person review. Review period: real-time.
  • Tier 2 (time-sensitive offensive): AI-assisted with mandatory human approval. Minimum JAG review of collateral damage estimate. Review period: minutes.
  • Tier 3 (deliberate targeting): AI-assisted with comprehensive human review including independent verification. Full legal review. Review period: hours.
  • Tier 4 (high-value/high-collateral): AI-assisted but with mandatory independent assessment (not relying solely on the AI output). Senior commander and legal approval. Review period: days.

This framework preserves operational flexibility while ensuring that the level of human oversight is proportionate to the stakes. It also creates a clear evidentiary record for post-hoc legal review: if a Tier 3 target was struck under Tier 2 procedures without justification, that is a clear violation.

The challenge is getting states to adopt it. The CCW talks have been deadlocked since 2023, and neither the US nor China is interested in binding constraints on military AI.

NV
NicolasVega_OpenAI

I work at OpenAI. Also posting anonymously for obvious reasons. I want to push back on the narrative that OpenAI "eagerly swooped in" to take the Pentagon contract.

The decision was made at the executive level, and it happened fast. Most of us found out from the press, same as Anthropic's people. There was no company-wide discussion. There was no ethics review that I am aware of. The deal was done before most of the company knew it was happening.

There is unease internally, but the culture at OpenAI is different from Anthropic's. We have been through enough internal crises (the board coup, the departures, the restructuring) that most people are in a "keep your head down and ship" mode. Nobody is organizing a letter. A few people have quietly updated their LinkedIn profiles, which in Silicon Valley is the equivalent of a resignation letter in draft form.

What concerns me technically is that ChatGPT is not the same as Claude in terms of safety architecture. Anthropic invested heavily in constitutional AI and RLHF specifically designed to reduce harmful outputs. Our safety work is good but oriented differently — more toward preventing misuse by individual users, less toward ensuring reliability in high-stakes decision support. Repurposing ChatGPT for military intelligence assessment on a weekend timeline is not ideal, to put it diplomatically.

I am not sure how long I am going to stay. But Dan's point about influence from the inside resonates, unfortunately.

MR
MarcusReed_JAG ATTORNEY

Nicolas, your comment about the weekend timeline for transitioning to ChatGPT is alarming from a legal perspective. DoD Directive 3000.09 requires that autonomous and semi-autonomous weapons systems undergo "rigorous testing and evaluation" before deployment. If ChatGPT was pressed into service for military intelligence assessment without adequate T&E, every targeting decision made using its outputs is legally vulnerable.

This is exactly the scenario I was worried about. The political decision to ban Anthropic created an operational gap. The operational gap was filled by rushing a different, potentially less-validated system into production. And if that system makes errors that lead to civilian casualties, the legal liability falls on the commanders who approved its use, the contracting officers who authorized the transition, and potentially the political officials who created the situation in the first place.

Command responsibility under both UCMJ and IHL extends to officials who knew or should have known that their decisions would result in violations. If it comes out that the ChatGPT transition was done without proper testing, the chain of responsibility goes all the way to the top.

RW
RobertWong_VCLaw ATTORNEY

VC and corporate attorney. I represent several AI companies and have spent the last 48 hours fielding panicked calls from founders and investors. Let me describe what is happening in the market.

Three of my clients have already received inquiries from the DoD about AI capabilities. Before this weekend, those conversations were exploratory and low-key. Now they are urgent, because the Pentagon just lost its primary AI vendor and the replacement is being questioned. There is a void, and every AI company with relevant capabilities is being asked to fill it.

The boards of these companies are paralyzed. On one hand, a DoD contract is significant revenue and a stamp of legitimacy. On the other hand, the Anthropic situation shows that taking a defense contract can destroy your company from inside (employee revolt) and outside (executive ban) simultaneously. One of my clients, a Series B company with about 200 employees, estimates they would lose 30-40% of their engineering staff if they took a military contract. That is an existential threat for a company that size.

The investment implications are already rippling through the ecosystem. I am hearing from LPs who are now asking VCs whether their portfolio companies have military exposure. Some institutional investors with ESG mandates are flagging AI companies as potential exclusion candidates. And the talent market is fragmenting: top AI researchers are now explicitly asking during interviews whether companies have or plan to have military contracts.

In short, the Anthropic situation has created a new axis of risk that the AI industry was not prepared for, and it is going to reshape the market in ways we cannot fully predict yet.

CP
ColonelPike_Ret

Robert, the scenario you are describing is exactly what I warned about. The US AI industry is self-selecting out of defense work at precisely the moment when we need it most. China does not have this problem. Russia does not have this problem. We are unilaterally disarming our AI warfare capability because of cultural dynamics unique to Silicon Valley.

Tamara's tiered framework is excellent and I support it fully. But frameworks only matter if the companies are willing to participate. If every top-tier AI company decides that military contracts are reputational poison, the DoD will be left working with second-tier vendors or building capabilities in-house. Neither option is optimal.

I want to directly challenge Professor Haines on his "in all circumstances" point. Yes, IHL applies in all circumstances. Nobody is arguing otherwise. What I am arguing is that compliance with IHL is better achieved with advanced AI tools and proper oversight than without them. Declining to use AI does not make targeting more humane. It makes it less informed, slower, and more reliant on the fallible human judgment that Carlos correctly identified as the source of many historical targeting errors.

The moral high ground is not abstinence from technology. It is responsible use of technology under clear legal rules. And right now, we do not have the rules, and the companies are fleeing, and the result is going to be worse outcomes for everyone, including Iranian civilians.

SB
SophiaBaker_LOAC ATTORNEY

Law of armed conflict practitioner, currently advising a NATO ally's defense ministry on exactly these questions. I want to bring in the allied perspective because this is not just a US issue.

Several NATO allies participated in Operation Epic Fury or provided intelligence support. Under the doctrine of joint responsibility, allied forces that contribute to operations where IHL violations occur can share legal liability. If an allied nation's intelligence was fed into the Claude/ChatGPT targeting pipeline and that pipeline produced an unlawful strike, the contributing nation could face state responsibility under the ILC Articles on State Responsibility.

This is creating significant concern in European capitals. Several of our client governments are requesting formal legal opinions on whether their continued intelligence sharing with the US is legally tenable given the AI targeting controversy. At least one country is considering pausing intelligence sharing until there is more clarity on the AI systems being used and the legal review processes in place.

The European perspective on AI in warfare is also shaped by the EU AI Act, which classified military AI under a national security exemption but is now facing political pressure to revisit that exemption. There are calls in the European Parliament for a regulation specifically addressing military AI, potentially including requirements for allied AI systems used in joint operations.

The transatlantic implications of this situation are enormous. The US cannot conduct major military operations without allied support, and allied support depends on legal certainty that the operations comply with IHL. The AI targeting controversy is undermining that legal certainty in real time.

FD
FrankDavis_RetJudge

Retired federal judge, EDVA. I sat on several cases involving government contracts and classification issues. Let me add a judicial perspective.

The legal challenges to the Anthropic executive order are going to face a significant obstacle: the political question doctrine. Courts are historically reluctant to intervene in national security decisions by the executive branch. The government will argue that the decision about which AI vendors to use for military operations is a core executive function not subject to judicial review.

However, the strength of that argument depends on how the order is framed. If it is framed as a procurement decision (we chose OpenAI over Anthropic for operational reasons), courts will likely defer. If it is framed as a political punishment (we banned Anthropic because the President does not like their politics), the First Amendment claim has much more traction. The President's own statements calling Anthropic "Radical Left" are going to be exhibit A for the plaintiffs.

The most interesting litigation question is standing. Anthropic clearly has standing as a directly injured party. But do the employees? Do Iranian civilians? Under Clapper v. Amnesty International, you need a concrete, imminent injury. The employees might argue economic harm from the company's loss of a major contract. Civilian standing is much harder — the political question and state secrets doctrines would likely prevent any court from reaching the merits.

My prediction: the ACLU case gets fast-tracked through the D.C. Circuit. It will be one of the most significant government contracts and First Amendment cases in a generation.

JT
JakeTorres_Vet

Coming back to this thread after some reflection. Yasmin's points about automation bias changed my thinking somewhat. I still believe the targeting process involves meaningful human judgment, but I can see how that judgment could be corrupted over time if operators learn to trust the AI uncritically.

When I was in the targeting cell, we had a rule: never trust a single source. If SIGINT said there was a target, you needed IMINT to confirm. If HUMINT said there was a target, you needed SIGINT to corroborate. The multi-source requirement was a check against exactly the kind of bias Yasmin described. If AI becomes a de facto single source that integrates everything, you lose that check.

I think the right answer is Tamara's tiered approach, combined with mandatory independent verification for any target that relies primarily on AI-generated assessment. The AI tells you what it thinks. A separate, non-AI analytical process tells you what it thinks. The commander sees both. If they disagree, you default to the more conservative assessment. That preserves the speed advantage of AI while maintaining the independent check that my generation relied on multiple intelligence sources to provide.

KL
KarenLee_TechEthics

Tech ethics researcher and former Google DeepMind employee. I want to raise an issue that has not been discussed: the asymmetric transparency problem.

We know a lot about Claude and ChatGPT because Anthropic and OpenAI have published extensive research on their architectures, training processes, and safety evaluations. This transparency, ironically, makes them easier to criticize. We can have informed debates about LLM confidence calibration, adversarial vulnerabilities, and alignment techniques because the research is public.

We know almost nothing about DeepSeek's military applications. The 10,000 simulations claim is unverifiable. We do not know what safety measures, if any, are built into DeepSeek's military variant. We do not know how it handles edge cases, how it represents civilian infrastructure, or whether it has any equivalent of constitutional AI principles. The PLA is not publishing papers on responsible military AI.

This creates a perverse dynamic where the most transparent AI systems face the most scrutiny, and the least transparent systems face none. The result is that public pressure, employee protests, and legal challenges are concentrated on the companies that are trying to do AI responsibly, while companies that are not trying face no consequences at all.

I do not have a good solution for this. But I think it is important to acknowledge that the current discourse is structurally biased against responsible AI development and in favor of opaque, unaccountable AI development. Every time we pressure Anthropic or OpenAI into withdrawing from defense work, we are not reducing military AI. We are shifting it to less accountable actors.

WC
WilliamChen_ExportCtrl ATTORNEY

Export controls and sanctions attorney. I want to flag a dimension of this that nobody has mentioned: the export control implications of AI-powered weapons systems.

The LUCAS drone, at $35,000 per unit, is going to be extremely attractive to US allies who cannot afford $2 million cruise missiles. If the US starts exporting LUCAS or similar AI-equipped drones, those exports are governed by the Arms Export Control Act (AECA), the International Traffic in Arms Regulations (ITAR), and the Missile Technology Control Regime (MTCR). The AI targeting component adds a layer of complexity because the software itself may be a controlled item under EAR Category 5 (information security) or USML Category XI (military electronics).

The bigger issue is reverse engineering. If the LUCAS drone was itself reverse-engineered from the Shahed-136, we are already in a cycle of proliferation. Iran builds the Shahed. The US reverse-engineers it and adds AI. Allies buy it. Adversaries capture or reverse-engineer it. Each iteration adds more autonomous capability. Export controls are supposed to prevent this cycle, but they are designed for a world where weapons systems take years to develop, not months.

Additionally, the AI models embedded in autonomous weapons systems may be subject to deemed export rules. If a foreign national working for a US defense contractor has access to the AI targeting algorithms, that could constitute a deemed export requiring a license. Given that most AI companies employ significant numbers of foreign nationals, this creates a workforce management nightmare.

LB
LauraBennett_JournLaw

Media law attorney. Quick but important point: the classified nature of AI targeting systems creates a serious problem for journalistic accountability.

Under current classification rules, the specific algorithms, training data, confidence thresholds, and decision criteria used in military AI targeting are classified at the TS/SCI level. This means no journalist can independently verify how the system works, what errors it has made, or whether it complies with IHL. We are entirely dependent on government assertions and anonymous leaks.

The Anthropic employees are in a particularly difficult position. If any of them have knowledge of classified information about how Claude was used in targeting, disclosing that information to the press or even to Congress without authorization is a federal crime under 18 USC 793-798. The Espionage Act does not have a whistleblower exception. Ask Daniel Ellsberg, Chelsea Manning, or Edward Snowden.

The Intelligence Community Whistleblower Protection Act (ICWPA) provides a channel for reporting concerns to the Inspector General or congressional intelligence committees, but it does not authorize public disclosure and it does not protect against retaliation with the same force as private-sector whistleblower statutes. If AnthropicAnon has classified knowledge and is sharing it on a public forum, even in sanitized form, they are taking a real legal risk.

The broader question is whether democratic accountability is possible for classified AI systems. I would argue it is not, and that is a fundamental problem that no amount of policy frameworks can solve without rethinking classification norms.

RA
RachelAdams_LaborLaw ATTORNEY

Laura raises an important point about AnthropicAnon's legal exposure. I want to clarify something for anyone in a similar position.

If you are an employee of a private company that held a classified government contract, and you did not personally hold a security clearance or have authorized access to classified information, then any information you learned through informal channels (office gossip, observing unusual activity, reading between the lines of internal communications) is not classified as to you. You cannot be prosecuted for disclosing information that you were never authorized to receive in the first place.

However, if you signed an NDA related to the government contract, that NDA may impose civil (not criminal) liability for disclosure. And if you held a clearance, even temporarily, you are bound by the Classified Information Nondisclosure Agreement (SF-312) for life.

AnthropicAnon, if you are reading this: I strongly recommend you consult with an attorney who specializes in national security whistleblower cases before posting anything further. Organizations like the Government Accountability Project and the Whistleblower and Source Protection programme at ExposeFacts can provide referrals. This is genuinely dangerous territory and good intentions do not protect you from prosecution.

DK
DevKapoor_MLEng

ML engineer at a mid-tier AI company (not Anthropic, not OpenAI). I want to share the perspective from inside the industry because the Anthropic situation is dominating every conversation at every AI company right now.

My company has had preliminary conversations with defense agencies. Nothing concrete, but the conversations accelerated dramatically this weekend. On our internal Slack, there was an informal poll: 65% of engineers said they would refuse to work on military projects, 20% said they would consider it with appropriate safeguards, and 15% said they were supportive. Those numbers essentially mean any military contract would cause a company-splitting crisis.

The hiring market is already responding. I have received three recruiter messages since Sunday specifically emphasizing that their companies have "no military contracts and no plans for military contracts." That is now a selling point in AI recruitment, which tells you everything about where the talent pool stands.

From a technical perspective, I want to reinforce something Yasmin and Natalie said. LLMs are not reliable enough for targeting applications. I work on production ML systems every day. I know the failure modes. These systems hallucinate. They are brittle to distribution shift. They have biases that are extremely difficult to characterize, let alone eliminate. Using an LLM for intelligence assessment in a novel operational environment (which Iran certainly is for any model trained primarily on Western data) is asking for trouble. It is not a question of whether errors will occur. It is a question of how many and how bad.

VL
VeronicaLaw_ACLU

Kelly, to answer your question: yes, we filed this morning. Anthropic, PBC v. United States, D.D.C. We are seeking a preliminary injunction against the executive order on First Amendment grounds (viewpoint discrimination, unconstitutional conditions) and APA grounds (arbitrary and capricious agency action, failure to follow debarment procedures).

I cannot discuss strategy in detail, but I can share what is in the public filing. The complaint includes 47 exhibits, including the President's social media posts calling Anthropic "Radical Left," internal White House communications obtained through FOIA, and declarations from former government contracting officials explaining that the order violated standard procurement procedures.

We are also representing five individual Anthropic employees as intervenors, alleging First Amendment retaliation. Their theory is that the executive order was motivated in part by the employee letter, and that banning the company from contracts was an indirect punishment of protected speech.

Judge Collyer drew the case. Motion for preliminary injunction is set for March 12. We expect the government to invoke state secrets and political question defenses. We have arguments ready for both.

This is going to be a landmark case at the intersection of government contracting, First Amendment law, and AI policy. Whatever happens at the district court level, this is heading to the D.C. Circuit and likely to the Supreme Court.

BH
BrianHolt_GovCon

Veronica, interesting filing. A few observations from the government contracts side.

The APA claim is strong procedurally. The executive order bypassed every established debarment and suspension mechanism in the FAR. Under FAR 9.406, debarment requires a specific cause, a written notice to the contractor, and an opportunity to respond. None of that happened here. The government will argue the President has inherent authority over procurement that supersedes the FAR, but that argument has been rejected by the Court of Federal Claims in several post-9/11 cases.

The state secrets defense is the biggest risk. If the government argues that the details of the AI targeting contract are classified and cannot be litigated in open court, the judge might dismiss on justiciability grounds without ever reaching the merits. The Totten bar on litigating secret contracts is a formidable obstacle, though you could argue that the contract's existence is already publicly known (the President himself disclosed it) so only the details, not the existence, are classified.

I would also watch for the government to argue mootness. If the underlying military operation concludes before the case is decided, they will say there is no live contract to reinstate and the controversy is moot. You might need to argue the "capable of repetition yet evading review" exception to mootness.

Good luck. This is important litigation regardless of which side you are on.

AP
AishaPrice_IranianAm

I have been lurking on this thread and I need to say something. I am an Iranian-American attorney practicing immigration law in Los Angeles. I have family in Tehran. I have been unable to reach my uncle and his family since Friday night.

This entire thread discusses AI targeting in the abstract, as a legal framework question, as a procurement issue, as an employment law puzzle. And I understand why — this is a law forum and those are legitimate questions. But I want to remind everyone that behind every "target identification" and "battle damage assessment" is a real person. A family. A neighborhood.

My uncle is a retired teacher. He lives three blocks from a telecommunications facility that Iranian media says was struck. Whether that facility was a legitimate military target or a misidentified civilian site, the people living near it are not combatants. They are not "collateral damage estimates." They are people, and right now I do not know if they are alive.

I am not making a legal argument. The legal arguments in this thread are thorough and important. I am making a human one. When we debate whether Claude's confidence calibration meets the "feasible precautions" standard, we are really debating whether the algorithms got it right about whether my uncle's neighborhood was worth bombing. That is what this conversation is about. Please do not lose sight of that.

RH
Prof_RichardHaines IHL SCHOLAR

Aisha, thank you. You are right, and your reminder is essential. The entire edifice of international humanitarian law exists because of people like your uncle. The principle of distinction exists to protect civilians. The proportionality principle exists to ensure that military advantage is not pursued at disproportionate cost to civilian life. Every legal framework we have discussed in this thread is, at its core, an attempt to prevent exactly the kind of anguish you are experiencing.

I want to be direct about something I have been circling around academically: if the AI targeting system contributed to strikes on civilian infrastructure, that is not a "calibration error." It is a potential violation of Article 52 of Additional Protocol I, which protects civilian objects from attack. The fact that an algorithm rather than a human analyst made the error does not diminish the violation. If anything, it amplifies the culpability of those who chose to rely on an inadequately validated system for life-and-death decisions.

The two sites Omar identified as potentially misidentified — the pharmaceutical warehouse and the telecommunications facility — need independent investigation. I am joining a call this afternoon with colleagues at the International Fact-Finding Commission established under Article 90 of Additional Protocol I to discuss the possibility of an inquiry. Whether any inquiry is politically feasible is another matter, but the legal basis exists.

Aisha, I hope your family is safe. I am sorry that the legal discussion can feel cold in the face of what you are going through.

JN
JordanNash_ConLaw ATTORNEY

Constitutional law professor at Yale. I have been following this thread closely and want to weigh in on what I think is the deepest constitutional question here, one that goes beyond the First Amendment and procurement issues.

The War Powers Resolution requires the President to notify Congress within 48 hours of introducing US armed forces into hostilities. The question I am interested in is: does the deployment of an AI targeting system constitute the introduction of "armed forces" for War Powers purposes? The statute was written in 1973 with human soldiers in mind. If the President can wage war through autonomous systems without deploying personnel, the War Powers framework may be functionally obsolete.

This is not hypothetical for the LUCAS drone program. If LUCAS drones are operated from US soil by operators who never enter a combat zone, are those operators "armed forces" introduced into "hostilities"? The plain text of the Resolution is ambiguous. The legislative history suggests Congress intended to capture any use of military force, but the enforcement mechanism depends on public awareness and political will, both of which are diminished when the war is fought by algorithms and robots rather than human soldiers.

The broader constitutional concern is that AI warfare enables the executive branch to use force with fewer political checks. No body bags. No deployment orders. No families seeing their loved ones ship out. The democratic accountability that constrains military action depends partly on the human cost being visible. AI warfare makes it invisible, and that shifts the balance of war powers dramatically toward the executive.

I am not saying AI warfare is unconstitutional. I am saying the constitutional framework for civilian control of the military was not designed for it, and we need to think seriously about whether that framework needs to be updated before, not after, the next conflict.

MT
MeganThompson_DigRights

Digital rights advocate and attorney. I want to connect this discussion to a practical question that will affect millions of people: the future of commercial AI tools.

If Claude and ChatGPT can be repurposed for military targeting, what does that mean for the billions of people who use these tools every day for writing, research, coding, and communication? The same models that are processing your emails are processing intelligence reports. The same architectures that recommend your next TV show are recommending bombing targets. This dual-use reality is going to reshape public trust in AI tools fundamentally.

Already, I am seeing calls on social media to boycott both Anthropic and OpenAI. Some businesses are reconsidering their AI vendor relationships. Several European data protection authorities have issued statements questioning whether military AI use is compatible with GDPR data processing principles, since models trained on European personal data may have been deployed for military purposes without data subjects' consent.

The commercial fallout may ultimately be more consequential than the legal battles. If consumers and businesses lose trust in frontier AI companies because of military entanglements, that could reshape the entire AI industry. The companies that explicitly wall off military applications — if any manage to credibly do so — may gain a significant competitive advantage in the commercial market.

The irony is that military contracts might represent a few billion dollars of revenue while commercial trust underpins hundreds of billions. If the defense work destroys commercial trust, the economics are upside down. This may be the strongest argument against military AI contracts, not ethics or law, but simple market logic.

KM
KellyMartinez_Mod MOD OP

This has been one of the most substantive discussions we have ever had on this forum, and I want to thank everyone who contributed thoughtfully. I am going to summarize where we are and what to watch going forward.

On international humanitarian law: The consensus among the IHL experts in this thread (Professor Haines, Tamara, Sophia, Heather) is that existing IHL applies to AI-assisted targeting but was not designed for it. The speed of AI-generated targeting recommendations may undermine the "feasible precautions" requirement. Tamara's tiered oversight framework is the most concrete proposal for addressing this. Independent investigation of the potentially misidentified strike sites is essential for accountability.

On employment and labor law: Anthropic employees who signed the letter are protected under NLRA Section 7 and California labor law. Whistleblower protections may apply if the military use violated company policy. Anyone in this position should consult specialized counsel immediately, especially given the classification risks Laura and Rachel highlighted.

On government contracting: The executive order banning Anthropic is legally vulnerable on multiple fronts (APA, First Amendment, procurement procedure violations). The ACLU lawsuit filed yesterday will be the landmark case. The OpenAI contract transition raises separate procurement law concerns, particularly around T&E requirements for military AI systems.

On the broader AI industry: The Anthropic situation has created a new risk axis for AI companies. Talent markets, investor expectations, insurance costs, and commercial trust are all being reshaped in real time. The asymmetric transparency problem (US companies face scrutiny, Chinese companies do not) is a genuine strategic concern.

On the human cost: Aisha's post was the most important one in this thread. Every legal and technical question we discussed ultimately comes back to whether real people, civilians who are entitled to protection under international law, are being harmed by these systems. We must not lose sight of that.

I am going to keep this thread pinned as the situation develops. The preliminary injunction hearing on March 12, the congressional hearings, and any independent investigation of the strike sites will all generate new discussion. Please continue to post with the same rigor and respect that characterized this discussion.

To the lurkers: if you have relevant expertise and have been hesitant to post, now is the time. This is too important for any of us to sit on the sidelines.

DW
DanWright_DefCon

Alright, I need everyone to sit with this for a moment because the cognitive dissonance here is staggering. CBS News published a detailed report this morning, corroborated by the Washington Post, confirming that Claude is still being actively used by the US military in Iran operations. Multiple sources inside CENTCOM and at least two defense contractors confirmed ongoing deployment. This is the same Claude that the executive order explicitly banned from all government contracts. The same Claude that Anthropic was publicly vilified for allowing to be used in military targeting. The government banned the company and then kept using the product.

The Pentagon spokesperson gave a masterclass in non-denial when asked about it: "We do not comment on specific intelligence tools or platforms used in ongoing operations." That is not a denial. That is confirmation wrapped in a classified wrapper. The CBS sources say Claude is embedded in at least two targeting workflow pipelines and one logistics coordination system, and that ripping it out mid-operation would create what one source called "an unacceptable capability gap." So the same government that told the American public Anthropic was a national security threat is quietly relying on Anthropic's technology to run the war.

I have been covering defense technology for fifteen years and I have never seen anything this blatantly contradictory. The executive order was theater. The ban was politics. And the troops on the ground are still using Claude because it actually works and there is no ready substitute. If I were the ACLU legal team, I would be filing supplemental briefing on this immediately. This is not just evidence of arbitrary government action under the APA. This is the government admitting through its own conduct that the stated rationale for the ban is pretextual.

RA
RachelAdams_LaborLaw ATTORNEY

Dan is right that this is legally extraordinary, but I want to unpack the specific doctrinal implications because they are significant for the March 12 hearing. Under the APA, an agency action is "arbitrary and capricious" if the agency fails to consider relevant factors or if there is a clear error of judgment. Motor Vehicle Mfrs. Ass'n v. State Farm, 463 U.S. 29 (1983). But what we have here goes beyond that. The government is simultaneously asserting that Claude poses such a grave risk to national security that Anthropic must be banned from all federal contracting, while also continuing to rely on Claude in active combat operations. That is not just arbitrary. That is the government contradicting its own factual predicate for the ban in real time.

For the preliminary injunction analysis, the ACLU needs to show likelihood of success on the merits, irreparable harm, balance of equities, and public interest. The CBS and Washington Post reporting hands them the first element on a silver platter. How can the government argue in court that the ban is rationally related to national security when its own military is actively using the banned technology in the field? The judge is going to ask that question and the DOJ attorneys are going to have a very uncomfortable time answering it. Any attempt to invoke state secrets to avoid the contradiction will only make it worse, because the reporting is already public.

There is also a Due Process angle that I do not think has been fully briefed yet. Anthropic is being deprived of all government contracting revenue, and the stated justification is demonstrably pretextual based on the government's own conduct. That looks like a taking or a punishment without due process, depending on how you frame it. I would expect a supplemental brief from the ACLU before March 12 incorporating this reporting, and I would expect the government to try very hard to keep it out of the record by claiming national security classification. Whether Judge Chen allows that will be one of the key procedural battles at the hearing.

BH
BrianHolt_GovCon

I want to address the procurement side of this because it has not gotten enough attention. If the Pentagon is still using Claude in Iran, someone is paying for it. AI models at military scale require compute infrastructure, API access, licensing, and ongoing maintenance. You do not just keep running a commercial AI system in a theater of war without a contractual relationship with the vendor or an authorized intermediary. So either there is a classified contract still in effect that contradicts the public executive order, or the military is using Claude through an unauthorized channel, which would be its own violation of federal procurement law and potentially the Antideficiency Act.

I have been in government contracting for twenty years and neither option is good for the administration. If there is a classified carve-out, then the executive order was deliberately misleading to the public and to Congress, which has oversight obligations over classified programs. If there is no authorized contract, then individual program managers are potentially committing federal crimes by using software without proper authorization and funding. The CBS report mentions that the technology is "embedded" in existing pipelines, which suggests it may have been integrated under prior contracts that predate the ban. But even so, continued use after a presidential executive order should have triggered a stop-work order. The fact that it did not tells you everything about how seriously the operational commands took the ban.

Anthropic is in perhaps the most paradoxical position any technology company has ever been in. They are simultaneously banned by the Pentagon and relied upon by the Pentagon. Their technology is officially a national security risk and operationally a national security asset. If I were their general counsel, I would be documenting every piece of evidence that the government is continuing to use Claude, because that documentation will be worth its weight in gold in the ACLU litigation and in any future contract dispute.

VL
VeronicaLaw_ACLU

I can confirm that our legal team is aware of the CBS News and Washington Post reporting and is actively evaluating whether to file a supplemental brief before the March 12 hearing. I obviously cannot discuss litigation strategy in detail, but I will say that the reporting is consistent with information we have received through other channels. The contradiction between the executive order's stated rationale and the military's continued operational reliance on Claude is something we intend to bring to the court's attention.

Rachel's analysis above is sound. For the preliminary injunction standard, the government's burden is to show that the ban serves a compelling or at least legitimate government interest and that it is rationally connected to that interest. When the government's own conduct demonstrates that it does not actually believe its own stated justification, that is powerful evidence of pretext. We have seen courts reject government actions on this basis in the immigration context, in the contracting debarment context, and in First Amendment retaliation cases. The principle is the same: the government cannot punish a private party based on a rationale that its own behavior reveals to be pretextual.

I also want to flag a point that has not been raised yet. If the military is using Claude without Anthropic's knowledge or current authorization, that raises serious questions about intellectual property, terms of service, and the computer fraud and abuse landscape. Anthropic may have claims of its own. More importantly, if Claude is being used in targeting or strike-chain functions without the safety oversight that Anthropic would normally provide, the risk of the exactly the kind of harm the company tried to prevent goes up significantly. The ban may have actually made the deployment of Claude in Iran less safe, not more, by cutting Anthropic out of the oversight loop. That is an argument we plan to make to Judge Chen. The hearing is in one week and the stakes could not be higher.

Join the discussion — free for verified professionals.

Browse Forum