The U.S. Department of Defense has designated AI company Anthropic a "supply chain risk," halting discussions and potentially restricting federal contractors from working with the firm. This follows a contract dispute where the Pentagon demands AI models be available for "any lawful use," while Anthropic maintains ethical red lines against fully autonomous weapons and mass surveillance.
The conflict escalated after reports suggested Anthropic's technology may have been used in a military operation in Venezuela, which the company states did not violate its policies. In response to the Pentagon's "any lawful use" mandate for procurement, Anthropic has revised its internal safety policy, removing a prior commitment to guarantee risk mitigations before releasing models.
Anthropic has drawn headlines over its recent standoff with the US Department of Defense, pushing the artificial intelligence (AI) company into an increasingly difficult position.
The US government has designated Anthropic a "supply chain risk," a move that could restrict federal contractors from working with the firm. The designation appears to have halted recent discussions between Anthropic and the Pentagon on how its AI models could be deployed in defence settings.
Anthropic's flagship model, Claude, is used across US national security agencies for applications such as intelligence analysis, modelling and simulation, operational planning, and cyber operations.
The Department of War is demanding that AI models be available for "any lawful use," while Anthropic refuses to cross its strict ethical red lines: no fully autonomous weapons and no mass domestic surveillance.
ET maps out the timeline of the dispute and how it escalated.
July 11, 2024: US data analytics firm Palantir announced a partnership with Anthropic to bring its Claude AI models into US government intelligence and defence operations.
November 7, 2024: Anthropic and Palantir expanded their collaboration with Amazon Web Services (AWS) to provide US intelligence and defence agencies access to the Claude 3 and Claude 3.5 models through AWS's secure government cloud infrastructure.
July 14, 2025: The Pentagon awarded contracts worth up to $200 million each to Anthropic, Google, OpenAI, and xAI as part of its push to deploy advanced AI systems for defence applications.
Anthropic, the developer of the Claude chatbot, was among the first companies cleared for classified government use, with officials citing the modelâs security and reliability.
July 14, 2025: Anthropic signed a $200 million contract with the Department of Defense, incorporating restrictions from its acceptable use policy, including limits on certain military applications.
October 25, 2025: David Sacks, Donald Trump's AI and crypto adviser, accused Anthropic of "running a sophisticated regulatory capture strategy based on fear-mongering," having repeatedly criticised the company for what he described as "woke" AI policies.
November 2025: Anthropic deepened its partnership with Palantir, integrating Claude as the reasoning engine inside a military decision-support platform used by defence agencies.
Early January 2026: Anthropic submitted a $100 million proposal to the Pentagon to develop voice-controlled autonomous drone swarming technology, according to Bloomberg News.
The proposal envisioned Claude translating a commander's intent into digital instructions that could coordinate fleets of drones.
January 2026: Tensions escalated after media reports suggested Anthropic's technology may have been used during a US military operation in Venezuela on January 3, which reportedly resulted in the capture of President Nicolás Maduro and his wife Cilia Flores.
Anthropic later said it did not identify any violation of its usage policies linked to the operation.
January 9, 2026: US Defense Secretary Pete Hegseth issued a memorandum stating that AI systems used by the military must allow "any lawful use" by defence agencies.
"Diversity, Equity and Inclusion and social ideology have no place in the Department of War," the memo stated, arguing that AI models should not include ideological tuning that could affect responses.
The memo directed the Pentagon to incorporate the phrase "any lawful use" into future AI procurement contracts within 180 days.
February 2026: Anthropic updated its Responsible Scaling Policy, its internal framework for mitigating catastrophic risks from advanced AI systems.
According to Time magazine, the revisions included removing a commitment not to release models unless risk mitigations were guaranteed in advance.
February 2026: Anthropic signed a $19,000 contract with the US State Department to deploy Claude.
February 24, 2026: Defense Secretary Pete Hegseth planned to meet with the CEO of Anthropic. Hegseth said his vision was for military AI systems to operate "without ideological constraints that limit lawful military applications," adding that the Pentagon's "AI will not be woke."
February 25, 2026: Defense Secretary Pete Hegseth gave Anthropic chief executive Dario Amodei a deadline to remove two restrictions on military use of its AI models:
a ban on mass surveillance of American citizens, and a prohibition on fully autonomous weapons without a human in the loop.
February 28, 2026: President Donald Trump publicly criticised the company, warning it must cooperate with the Pentagon.
"Anthropic better get their act together… or I will use the full power of the presidency to make them comply," Trump said.
Anthropic CEO Dario Amodei called the move "retaliatory and punitive," stressing that the company would challenge any such designation in court if the government took formal steps.
March 3, 2026: US Treasury Secretary Scott Bessent announced the department would terminate all use of Anthropic products, including the Claude platform.
"Under President Trump no private company will dictate the terms of our national security," Bessent wrote on X.
March 3, 2026: Within hours of Anthropic declining to remove safeguards on Claude, OpenAI disclosed its own agreement to supply AI models for classified settings.
OpenAI said it would modify its Pentagon contract to ensure its AI models are not used for domestic surveillance after facing criticism over oversight concerns.
March 5, 2026: Anthropic CEO Dario Amodei said the company's relationship with the US government had deteriorated because it refused to offer what he described as "dictator-style praise" for President Trump.
However, Amodei also confirmed that discussions with the Pentagon had resumed, raising hopes that the dispute could be resolved.
March 6, 2026: The US government formally designated AI company Anthropic a "supply chain risk," a move that could prevent federal contractors from working with the firm.
Microsoft, however, said it would continue deploying Anthropic's AI models in its products despite the company's dispute with the Pentagon, as did Anthropic's investors, such as Amazon and Nvidia.
Anthropic in response said it would challenge the "supply chain risk" designation, calling it a legally unsound action never before applied to an American company.
The Pentagon, meanwhile, has begun preparing to transition its AI services to other providers within six months.
Amodei later told The Economist that the episode had been one of the most "disorienting" periods in the company's history, while also apologising for the way he handled the crisis.