Pinning this as a megathread given how fast things are moving. Operation Epic Fury launched Friday night (Feb 28), and we already have confirmation that multiple AI systems were used in the targeting pipeline. This touches on international humanitarian law, employment law, government contracting, and tech ethics all at once, so I want one central place for the discussion.
Here is what we know so far from confirmed reporting:
- The Pentagon used Anthropic's Claude for intelligence assessment, target identification, and battle damage assessment during the lead-up to strikes on Iranian nuclear and military infrastructure.
- President Trump publicly declared Anthropic a "Radical Left AI company" and signed an executive order banning it from government contracts on March 1.
- OpenAI reportedly took the Pentagon deal within hours of Anthropic being cut off.
- Over 900 Anthropic employees have signed a letter titled "We Will Not Be Divided" protesting military use of Claude.
- Separately, China's DeepSeek model was reportedly used by the PLA for battlefield simulation, running 10,000 scenarios in 48 seconds.
- The LUCAS autonomous drone program ($35K per unit, reverse-engineered from Iranian Shahed-136) appears to incorporate AI targeting.
Ground rules for this thread: keep it professional, cite sources where possible, and clearly flag what is speculation versus confirmed fact. I will be moderating aggressively. No partisan flamewars. We are here to discuss the legal and ethical dimensions.
I will update the TL;DR box at the top as the discussion evolves. Let's hear from the IHL experts, defense attorneys, tech workers, and anyone else with relevant perspective.