The US Defense Department considers Anthropic's Claude AI a national security concern due to "policy preferences" built into its training that conflict with military requirements. Officials state this "supply-chain risk" designation aims to prevent these embedded policies from affecting defense contractors' work, not to punish the company, and they have ruled out further negotiations.
Anthropic has sued the administration over the "unprecedented and unlawful" designation, which stems from a dispute over using AI in autonomous weapons and domestic surveillance. The company refused Pentagon requests to remove its strict ethical limits on these uses, arguing the technology is not safe or rights-compliant for such applications.
The Pentagon defends its position, asserting that decisions on AI in warfare must be governed by US law, not private company policies, to ensure the military retains freedom for any lawful use. It warns that Anthropic's restrictions could limit critical capabilities and risk American lives.
The US Defense Department's chief technology officer, Emil Michael, has explained for the first time why the government considers AI models from Anthropic, particularly its Claude system, to be a national security concern.
Speaking on CNBC's Squawk Box, Michael said the issue lies in the "different policy preferences" built into the model during training. According to him, these could conflict with the requirements of the US military.
"We can't have a company that has a different policy preference that is baked into the model through its constitution, its soul … pollute the supply chain so our fighters are getting ineffective weapons, ineffective body armour, ineffective protection," Michael said.
Not meant as punishment
Michael stressed that the move was not designed to punish Anthropic. He pointed out that most of the company's revenue comes from its commercial operations rather than US government contracts.
He also rejected reports suggesting that the Pentagon had warned companies against using Anthropic's technology in general. According to him, such claims are simply "rumours", and the restrictions apply specifically to defence supply chains.
At the same time, Michael ruled out the possibility of renewed discussions with the company. "There's no chance," he said. "The [Anthropic] leadership has proven, through the leaking and through sort of bad faith negotiations, that they don't want to reach an agreement."
"Supply-chain risk" designation
Recently, Anthropic was formally classified as a "supply-chain risk", a status usually applied only to organisations linked to foreign adversaries. As a result, defence contractors and suppliers must confirm that they are not using Claude in any work connected to the Pentagon.
Anthropic responded by filing a lawsuit against the administration of President Donald Trump. The company described the designation as "unprecedented and unlawful" and warned that it could put hundreds of millions of dollars in contracts at risk.
Dispute over AI use in military systems
The dispute stems from disagreements over how AI should be used in defence programmes. The Pentagon had asked Anthropic to remove strict limits preventing its technology from being used in fully autonomous weapons and in domestic surveillance of American citizens.
Anthropic refused, arguing that current AI technology is not dependable enough to control autonomous weapons safely. The company also said that using such systems for domestic surveillance would violate fundamental rights.
After the negotiations collapsed, US Defense Secretary Pete Hegseth officially labelled Anthropic a national security "supply-chain risk". Trump later instructed federal agencies to stop working with the company, with a six-month transition period set for existing agreements.
Pentagon stands firm
The Defense Department has defended its position, saying decisions about how AI can be used in warfare should be determined by US law rather than the policies of private companies.
Officials argue the military must retain complete freedom to apply AI for "any lawful use." They also warned that restrictions imposed by Anthropic could limit critical capabilities and potentially put American lives at risk.