A senior Pentagon official revealed that commercial AI contracts contain sweeping operational restrictions that could paralyze U.S. military missions, including the planning of combat operations. The official warned that these terms could cause an AI model to stop mid-operation if the provider's conditions were violated, specifically citing restrictions affecting commands overseeing air operations in sensitive regions.
The disclosures help explain a recent dispute between the Pentagon and AI company Anthropic, whose tool Claude was reportedly used to help plan a military operation. Following the disagreement, Anthropic was banned from government business and labeled a national security risk, while rival OpenAI struck a new deal with the Department of Defense.
The main topics covered are the restrictive terms in Pentagon AI contracts, the specific operational risks those terms pose, and the resulting conflict between the U.S. military and AI provider Anthropic.