The Pentagon formally labels Anthropic a supply-chain risk
Pete Hegseth had been threatening to punish the AI company for not loosening its acceptable use policy. Now, it’s official.
After weeks of failed negotiations, public ultimatums, and lawsuit threats, the Defense Department has formally labeled Anthropic a “supply-chain risk,” escalating its fight with the AI company over Claude’s acceptable use policy and potentially pushing the dispute into court.
The decision, first reported by The Wall Street Journal on Thursday, which cited a source familiar with the matter, will bar defense contractors from working with the government if they use Claude, Anthropic’s AI program, in their products. Though the designation is typically applied to foreign companies with ties to adversarial governments, this is the first time an American company has publicly received the label.
At the heart of the conflict is Anthropic’s refusal to allow the Pentagon to use Claude for two purposes: autonomous lethal weapons operating without human oversight, and mass surveillance. The Pentagon has argued that Anthropic’s demands for control over government usage would place too much power in the hands of a private company, while Anthropic was not reassured that the government would respect its red lines. The negotiations grew ugly as the Pentagon increasingly threatened to apply the supply-chain risk designation should Anthropic refuse to comply with its demands. After Anthropic announced last Thursday that it would not, the Pentagon made good on that threat. (The Pentagon did not comment on the record. Anthropic did not immediately return a request for comment.)
It is unclear how broadly the Pentagon will attempt to enforce the designation. On Friday, when he announced his intent to label Anthropic a risk, Defense Secretary Pete Hegseth stated that any company performing “any commercial activity” with Anthropic, even outside its work for the Pentagon, would have its defense contracts cancelled. At the time, Anthropic responded that such a broad application of the law would be illegal.
Hegseth and President Donald Trump set a six-month deadline for Anthropic to remove Claude from government systems, but extricating it won’t be easy, especially from the military. After the U.S. attacked Iran over the weekend, killing Supreme Leader Ayatollah Ali Khamenei in a targeted missile strike, reports indicated that Claude-powered intelligence tools played a major role in the mission’s success.