More than 30 OpenAI and Google DeepMind employees filed a statement Monday supporting Anthropic’s lawsuit against the U.S. Defense Department after the federal agency labeled the AI firm a supply chain risk, according to court filings.
“The government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry,” reads the brief, whose signatories include Google DeepMind chief scientist Jeff Dean.
Late last week, the Pentagon labeled Anthropic a supply chain risk, a designation usually reserved for foreign adversaries, after the AI firm refused to allow the Department of Defense to use its technology for mass surveillance of Americans or autonomously firing weapons. The DOD had argued that it should be able to use AI for any "lawful" purpose and not be constrained by a private contractor.
The amicus brief in support of Anthropic showed up on the docket a few hours after the Claude maker filed two lawsuits against the DOD and other federal agencies. Wired was first to report the news.
In the court filing, the Google and OpenAI employees make the point that if the Pentagon was “no longer satisfied with the agreed-upon terms of its contract with Anthropic,” the agency could have “simply canceled the contract and purchased the services of another leading AI company.”
The DOD did, in fact, sign a deal with OpenAI within moments of designating Anthropic a supply chain risk — a move many of the ChatGPT maker’s employees protested.
“If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond,” the brief reads. “And it will chill open deliberation in our field about the risks and benefits of today’s AI systems.”
The filing also affirms that Anthropic’s stated red lines are legitimate concerns warranting strong guardrails. Without public law to govern AI use, it argues, the contractual and technical restrictions developers impose on their systems are a critical safeguard against catastrophic misuse.
Many of the employees who signed the statement also signed open letters over the last couple of weeks urging the DOD to withdraw the label and calling on the leaders of their companies to support Anthropic and refuse unilateral use of their AI systems.