The U.S. government has designated AI company Anthropic a "supply chain risk," which could block federal contractors from working with it and has halted its defense deployment talks with the Pentagon.
The conflict centers on the Pentagon demanding AI be available for "any lawful use," while Anthropic refuses to cross its ethical red lines against fully autonomous weapons and mass domestic surveillance.
Anthropic will legally challenge the designation, calling it unprecedented and unsound, while the Pentagon prepares to transition to other AI providers within six months.
Despite the dispute, Microsoft says it will continue using Anthropic's models, and Anthropic's CEO described the situation as a deeply disorienting crisis.
Main Topics: U.S. government designation of Anthropic as a supply chain risk; ethical standoff over military AI use; legal and business repercussions.
Anthropic's standoff with the US Department of Defense has drawn headlines and pushed the artificial intelligence (AI) company into an increasingly difficult position.
The US government has designated Anthropic a "supply chain risk," a move that could restrict federal contractors from working with the firm. The designation appears to have halted recent discussions between Anthropic and the Pentagon on how its AI models could be deployed in defence settings.
Anthropic's flagship model, Claude, is used across US national security agencies for applications such as intelligence analysis, modelling and simulation, operational planning, and cyber operations.
The Department of War is demanding that AI models be available for "any lawful use," while Anthropic refuses to cross its strict ethical red lines: no fully autonomous weapons and no mass domestic surveillance.
March 6, 2026: The US government has formally designated AI company Anthropic a "supply chain risk," a move that could prevent federal contractors from working with the firm.
Microsoft, however, said it would continue deploying Anthropic's AI models in its products despite the company's dispute with the Pentagon, as did Anthropic's investors, such as Amazon and Nvidia.
In response, Anthropic said it would challenge the "supply chain risk" designation, calling it a legally unsound action never before applied to an American company.
The Pentagon, meanwhile, has begun preparing to transition its AI services to other providers within six months.
Anthropic CEO Dario Amodei later told The Economist that the episode had been one of the most "disorienting" periods in the company's history, and apologised for the way he handled the crisis.