A standoff erupted between the U.S. Department of Defense and Anthropic in January after the AI lab refused to loosen safety guardrails on its systems, prompting the Pentagon to label it a 'supply-chain risk,' and putting the company's government contracts in jeopardy.
The Claude maker's executives warned that the designation, which they view as retaliation for opposing the use of their technology in autonomous weapons and domestic surveillance, could slash their 2026 revenue by billions of dollars.
Legal experts suggest the government's case may be undermined by a mismatch between the law invoked and Anthropic's conduct, internal contradictions in the Pentagon's behavior, and evidence that its decision may have been driven by animus rather than security.
Here is a timeline of the ongoing conflict:
January 29: The Pentagon and Anthropic clash over eliminating safeguards that could allow the government to use its technology to target weapons autonomously and conduct US domestic surveillance.
February 11: The Pentagon pushes AI companies, including Anthropic, to make their AI tools available in classified settings without many of the standard restrictions that the companies apply to other users.
February 14: The Pentagon considers ending its ties with Anthropic over the AI lab's insistence on keeping some limits on how the US military uses its models
February 23: US Defense Secretary Pete Hegseth summons Anthropic CEO Dario Amodei to the Pentagon for talks on the military use of Claude
February 24: The Pentagon asks Anthropic to get on board, or risk consequences, including being labeled a supply-chain risk
February 25: The Pentagon asks defence contractors, including Boeing and Lockheed Martin, to assess their reliance on Anthropic
February 26: Pentagon spokesperson Sean Parnell asks Anthropic to allow the Pentagon to use its technology for all lawful purposes, giving the company until 5:01 p.m. ET on February 27 to decide
February 26: Anthropic says it will not accede to the Pentagon's request to eliminate safeguards from its AI systems
February 27: US President Donald Trump directs every federal agency to immediately cease all use of Anthropic's technology
February 27: Hegseth directs the US DoD to designate Anthropic a "supply-chain risk to national security"
February 27: Anthropic says it will challenge the Pentagon's decision in court
February 27: OpenAI announces deal to deploy technology in the DoD's classified network
February 28: OpenAI says its latest agreement with the Pentagon includes three red lines: its technology cannot be used for mass domestic surveillance, to direct autonomous weapons systems or for any high-stakes automated decisions.
March 2: The US Departments of State, Treasury and Health and Human Services move to cease using Anthropic's Claude
March 3: Lockheed Martin pledges to follow the DoD's direction, signalling a likely exodus of defence contractors removing Anthropic's tools from their supply chains to protect their federal contracts, legal experts say
March 4: U.S. Treasury Secretary Scott Bessent tells CNBC that the agency will remove Anthropic from its government systems within days
March 4: Big tech industry group pushes to de-escalate the clash, saying a supply-chain risk designation creates uncertainty for companies and could threaten the military's access to the best products and services
March 5: The U.S. DoD formally designates Anthropic as a supply-chain risk
March 6: Amazon says it is helping customers transition DoD workloads to alternative models on its cloud, while customers and partners could continue using Claude for all non-Pentagon workloads
March 6: The U.S. General Services Administration draws up strict rules for civilian artificial-intelligence contracts and terminates Anthropic's OneGov deal, which made Claude available to the federal government
March 9: Anthropic sues to block the Pentagon from placing it on a national security blacklist, saying the designation is unlawful and violates its free speech and due process rights
March 9: Anthropic executives say the U.S. government's blacklisting of the AI firm could cut its 2026 revenue by multiple billions of dollars and cause reputational harm
March 10: Microsoft files a brief backing Anthropic's lawsuit, saying the DoD designation directly affects it and that a temporary restraining order is needed to avoid costly supplier disruptions and rushed rebuilding of products that depend on Anthropic.