Dario Amodei said Thursday that Anthropic plans to challenge the Defense Department’s decision to label the AI firm a supply chain risk in court, a designation he has called “legally unsound.”
The statement comes a few hours after the Department officially designated Anthropic a supply chain risk following a weeks-long dispute over how much control the military should have over AI systems. A supply chain risk designation can bar a company from working with the Pentagon and its contractors. Amodei drew a firm line that Anthropic’s AI should not be used for mass surveillance of Americans or for fully autonomous weapons, but the Pentagon believed it should have unrestricted access for “all lawful purposes.”
In his statement, Amodei said the vast majority of Anthropic’s customers are unaffected by the supply chain risk designation.
“With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts,” he said.
As a preview of what Anthropic will likely argue in court, Amodei said the Department’s letter labeling the firm a supply chain risk is narrow in scope.
“It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain,” Amodei said. “Even for Department of War contractors, the supply chain risk designation doesn’t (and can’t) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts.”
Amodei reiterated that Anthropic had been having productive conversations with the Department over the last several days, conversations that some suspect got derailed when an internal memo he sent to staff was leaked. In it, Amodei characterized rival OpenAI’s dealings with the Department of Defense as “safety theater.”
OpenAI has signed a deal to work with the Defense Department in Anthropic’s place, a move that has sparked backlash among OpenAI staff.
Amodei apologized for the leak in his Thursday statement, claiming that the company did not intentionally share the memo or direct anyone else to do so. “It is not in our interest to escalate the situation,” he said.
Amodei said the memo was written within “a few hours” of a series of announcements, including a presidential Truth Social post saying Anthropic would be removed from federal systems, then Defense Secretary Hegseth’s supply chain risk designation, and finally the Pentagon’s deal announcement with OpenAI. He apologized for the tone, calling it “a difficult day for the company” and said the memo didn’t reflect his “careful or considered views.” Written six days ago, he added, it’s now an “out-of-date assessment.”
He finished by saying Anthropic’s top priority is to ensure American soldiers and national security experts maintain access to important tools in the middle of ongoing major combat operations. Anthropic is currently supporting some of the U.S.’s operations in Iran, and Amodei said the company would continue to provide its models to the Defense Department at “nominal cost” for “as long as necessary to make that transition.”
Anthropic could challenge the designation in federal court, likely in Washington, but the law behind the decision makes it harder to contest: it limits the usual avenues companies have to challenge government procurement decisions and gives the Pentagon broad discretion on national security matters.
Or as Dean Ball — a former Trump-era White House advisor on AI who has spoken out against Hegseth’s treatment of Anthropic — put it: “Courts are pretty reluctant to second-guess the government on what is and is not a national security issue…There’s a very high bar that one needs to clear in order to do that. But it’s not impossible.”