Anthropic co-founder and CEO Dario Amodei is not happy — perhaps predictably so — with OpenAI chief Sam Altman. In a memo to staff, reported by The Information, Amodei referred to OpenAI’s dealings with the Department of Defense as “safety theater.”
“The main reason [OpenAI] accepted [the DoD’s deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses,” Amodei wrote.
Last week, Anthropic and the U.S. Department of Defense (DoD) failed to come to an agreement over the military’s request for unrestricted access to the AI company’s technology. Anthropic, which already had a $200 million contract with the military, insisted the DoD affirm that it would not use the company’s AI to enable domestic mass surveillance or autonomous weaponry.
Instead, the DoD — known under the Trump administration as the Department of War — struck a deal with OpenAI. Altman stated that his company’s new defense contract would include protections covering the same red lines Anthropic had drawn.
In a letter to staff, Amodei refers to OpenAI’s messaging as “straight up lies,” stating that Altman is falsely “presenting himself as a peacemaker and dealmaker.”
Amodei might not be speaking solely from a position of bitterness here. Anthropic specifically took issue with the DoD’s insistence that the company’s AI be available for “any lawful use.” OpenAI, by contrast, said in a blog post that its contract allows use of its AI systems for “all lawful purposes.”
“It was clear in our interaction that the DoW considers mass domestic surveillance illegal and was not planning to use it for this purpose,” OpenAI’s blog post stated. “We ensured that the fact that it is not covered under lawful use was made explicit in our contract.”
Critics have pointed out that the law is subject to change, and what is considered illegal now might be permitted in the future.
And the public seems to be siding with Anthropic. ChatGPT uninstalls jumped 295% after OpenAI made its deal with the DoD.
“I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI’s deal with the DoW as sketchy or suspicious, and see us as the heroes (we’re #2 in the App Store now!),” Amodei wrote to his staff. “It is working on some Twitter morons, which doesn’t matter, but my main worry is how to make sure it doesn’t work on OpenAI employees.”