A senior Pentagon official said on Tuesday that commercial AI contracts signed under the Biden administration contained sweeping operational restrictions that threatened to paralyze U.S. military missions in real time, including the ability to plan and execute combat operations.
Emil Michael, under secretary of defense for research and engineering, described a moment of alarm when he reviewed the terms governing AI models already embedded in some of the military's most sensitive commands. He did not name the AI provider whose contracts he was reviewing. His comments came at the American Dynamism Summit in Washington, a gathering of technology companies focused on space and national security work. The summit came just days after a disagreement over how the Pentagon could use Anthropic's powerful and widely used AI tools, a dispute that led President Donald Trump to ban the startup from government business and label it a national security risk.
"I had a 'holy, holy cow' moment," Michael said at â the American Dynamism âSummit in Washington. "There were things ... you couldn't plan an operation ... if it would potentially lead to kinetics" or explosions. He described dozens of restrictions baked in to â agreements covering commands responsible for air operations over Iran, China and South America.
Michael said the contracts were structured in a way that, if an operator violated the terms of service, the model could theoretically "just stop in the middle of an operation." Anthropic's Claude had been the only AI model available to the Defense Department on its classified systems at the time Michael conducted his review. His concerns sharpened after a senior executive at an unnamed AI company raised questions about whether its software had been used in what Michael called one of the most successful military operations in recent memory. Anthropic's Claude was reported to have been used to help plan the U.S. government raid that captured former Venezuelan President Nicolas Maduro in January.
"What we're not going to do is let any one company dictate a new set of policies above and beyond what Congress has passed," Michael said.
The disclosures may help explain the dispute between Anthropic and the Department of Defense. Defense Secretary Pete Hegseth declared the company a "supply-chain risk" over its refusal to back down in negotiations on restrictions covering autonomous weapons and mass surveillance. Hours later, rival OpenAI struck its own deal with the Pentagon, and a statement by OpenAI CEO Sam Altman suggested the department had accepted similar restrictions on OpenAI's models.