As the rumblings of war began in the Middle East last week, a different kind of conflict erupted on social media — between two AI giants, OpenAI and Anthropic, over a potential deal with the Department of War (DOW) to deploy their models in active conflict.
Hours after Anthropic CEO Dario Amodei said the company could not in "good conscience" accept the DOW's terms, which included mass surveillance and fully autonomous weapons, OpenAI announced it had signed the deal. The announcement was followed, almost simultaneously, by a $110 billion funding round.
Overnight, Anthropic became a symbol of resistance against the Military Industrial Complex. Droves of ChatGPT users began switching platforms: U.S. uninstalls of ChatGPT spiked by 295 percent, and #CancelChatGPT and #QuitGPT trended on both X and Reddit. Anthropic, apparently unprepared for the surge, suffered global Claude outages as its servers buckled under the sudden influx.
The reaction to OpenAI's deal was swift and harsh. "I've just canceled my OpenAI subscription and turned down a collaboration with OpenAI," wrote Paweł Huryn on X. "Some say Anthropic has lost. To me, they just earned something no contract can buy — trust."
Anthropic, meanwhile, faced fire from a different direction. President Trump publicly called the company a "Radical Left, woke" outfit run by "leftwing nut jobs who have no idea what the real world is all about," and ordered all federal agencies to immediately halt use of Claude. Secretary Hegseth, representing the newly renamed Department of War, went further by designating Claude a Supply Chain Risk to National Security, effectively severing Anthropic from its federal contracts.
The shakeup rattled OpenAI internally as well. In a New York Times op-ed titled "OpenAI Is Making the Mistakes Facebook Made," senior researcher Zoë Hitzig announced her resignation, citing the DOW deal and the rollout of ads in ChatGPT as evidence that the company had abandoned its founding mission to benefit humanity.
The most consequential blow came on March 4th, when Max Schwarzer announced she would step down as VP of Research and join Anthropic. "Many of the people I most trust and respect have joined Anthropic over the last couple of years, and I'm excited to work with them again," she wrote on X. "I have been very impressed with Anthropic's talent, research taste, and values."
OpenAI CEO Sam Altman attempted to address the criticism in an AMA on X. He acknowledged that the timing of the DOW contract looked "opportunistic and sloppy," but argued that OpenAI had signed it to de-escalate a volatile situation and ensure ethical safeguards remained part of the government's classified AI transition. He also said the contract was being amended to explicitly prohibit domestic surveillance, and condemned the administration's "supply chain risk" designation of Anthropic as a dangerous and unwarranted precedent for the tech industry.
His critics were quick to point out the contradiction. According to CNBC and the WSJ, Altman told employees in a closed-door all-hands meeting on Tuesday that OpenAI doesn't get to choose how the military uses its technology: a statement that directly undercut the reassurances he had offered publicly, including in the AMA, just days before.
The public remained unconvinced. "It's over. You've lost the public trust. You're done," wrote activist Amy Siskind on X.
Others were baffled by a more specific contradiction: if the DOW had agreed to the same conditions with OpenAI that Anthropic had originally proposed, why was Anthropic the one designated a national security risk? "Ok but if the Pentagon agrees to this then why is Anthropic a supply chain risk and OpenAI isn't?!" wrote Alan Rozenshtein, a professor at the University of Minnesota Law School.
"The fundamental point is that OpenAI has agreed to all lawful use, whereas Anthropic sought constraints that went beyond lawful use, on the basis that current law and policy was not an adequate safeguard."
- Shashank Joshi, Defence editor, The Economist
Shashank Joshi, Defence editor at The Economist, offered one explanation: "The fundamental point is that OpenAI has agreed to all lawful use, whereas Anthropic sought constraints that went beyond lawful use, on the basis that current law and policy was not an adequate safeguard." In other words, the administration's objection was less about what Anthropic refused to do, and more about the fact that it had dared to set its own limits at all.
The week-long feud has since spilled into the political arena. High-profile Democrats including Elizabeth Warren and Bernie Sanders have raised concerns that the DOW's blacklisting of Anthropic amounts to political favoritism — the current administration rewarding companies willing to comply without question, and punishing those that refuse to abandon their ethical guardrails.
Beneath all the noise, a more uncomfortable truth was taking shape: that in an age when an AI model can tip the scales of armed conflict, the old dream of building technology that belongs to everyone, that serves no flag and fights no war, may have already quietly expired.
