Three U.S. cabinet agencies—State, Treasury, and Health and Human Services—have directed staff to cease using Anthropic's AI products, including the Claude chatbot, following an order from President Donald Trump. This aligns with a similar move by the U.S. military, stemming from a failed agreement on limits for the military's use of Anthropic's technology.
The actions represent a significant rebuke of Anthropic, a leading AI firm, and threaten to isolate it within U.S. government contracting. The dispute centers on safeguards to prevent the military and intelligence agencies from using AI for autonomous weapons targeting and domestic surveillance.
Concurrently, rival OpenAI has announced a deal with the Defense Department, explicitly amending it to prohibit the intentional use of its technology for domestic surveillance of U.S. persons.
Three U.S. cabinet agencies on Monday moved to cease use of Anthropic's AI products, joining the military in directing staff to use models from rivals such as OpenAI and Google.
Leaders at the Departments of State, Treasury and Health and Human Services directed their employees to abandon Anthropic's language-trained chatbot platform Claude on orders from President Donald Trump. They joined the U.S. military in dropping use of the platform, after Anthropic and the Trump administration failed to agree on limits to the military's use of the company's products.
The actions mark an extraordinary rebuke by the U.S. against one of the premier companies that has kept it in the lead on national-security-critical AI, threatening to give Anthropic a pariah status that Washington until now had reserved for enemy suppliers.
Treasury Secretary Scott Bessent said in a post on X his department was terminating all use of Anthropic products, including Claude. Separately, HHS notified its employees about the ban in a message obtained by Reuters, and directed them to use other AI platforms instead, such as OpenAI's ChatGPT and Google's Gemini. A spokeswoman for HHS confirmed the decision.
The U.S. State Department likewise said it was switching the model powering its in-house chatbot, StateChat, to OpenAI from Anthropic, according to a memo seen by Reuters.
"For now, StateChat will use GPT4.1 from OpenAI," it said, adding that further information would come later.
"In line with the president's direction to cancel Anthropic contracts, we are taking immediate steps to implement the directive and bring our programs into full compliance," State Department spokesperson Tommy Pigott told Reuters in an email. Also on Monday, William Pulte, director of the Federal Housing Finance Agency, said in a post on X his bureau and mortgage agencies Fannie Mae and Freddie Mac were terminating all use of Anthropic products.
On Friday, Trump ordered a six-month phase-out for the Defense Department and other agencies using products from Anthropic, whose financial backers include Alphabet's Google and Amazon.com.
The moves dealt a major blow to the San Francisco-based artificial intelligence startup following a standoff in contract talks with the Pentagon over technology guardrails, and whether the government or industry decides how AI is deployed.
The Trump administration has been at odds with Anthropic over safeguards to prevent the U.S. military and intelligence agencies from using its AI technology to target weapons autonomously and conduct U.S. domestic surveillance, according to sources familiar with the negotiations.
Late on Friday, rival OpenAI, which is backed by Microsoft, Amazon and others, announced its own deal to deploy technology in the Defense Department's classified network.
In a post to X on Monday, chief executive Sam Altman said OpenAI would "amend" its DOD deal to make clear that its AI system would not be "intentionally used for domestic surveillance of U.S. persons and nationals."
He added that the department understood the limitation to "prohibit deliberate tracking, surveillance or monitoring of U.S. persons or nationals, including through procurement or use of commercially acquired personal or identifiable information."