The Trump administration has officially designated AI company Anthropic as a supply chain risk, an unprecedented move against a domestic firm. This decision, based on national security concerns over potential misuse for surveillance or autonomous weapons, could force government contractors to stop using Anthropic's Claude chatbot.
The action has drawn significant bipartisan criticism as a misuse of authority intended for foreign adversaries, with critics warning it sets a dangerous precedent and harms U.S. innovation. While major defense contractors like Lockheed Martin are cutting ties, Anthropic has seen a surge in consumer downloads as some public sentiment sides with the company.
The Trump administration is following through with its threat to designate artificial intelligence company Anthropic as a supply chain risk in an unprecedented move that could force other government contractors to stop using the AI chatbot, Claude.
The Pentagon said in a statement Thursday that it has “officially informed Anthropic leadership that the company and its products are deemed a supply chain risk, effective immediately.”
The decision appeared to shut down the opportunity for further negotiation with Anthropic, nearly a week after President Donald Trump and Defence Secretary Pete Hegseth accused the company of endangering national security.
Trump and Hegseth announced a series of threatened punishments last Friday, on the eve of the Iran war, after Anthropic CEO Dario Amodei refused to back down over concerns the company's products could be used for mass surveillance of Americans or autonomous weapons.
The San Francisco-based company didn't immediately respond to a request for comment Thursday. It has previously vowed to sue if the Pentagon pursued what the company described as a “legally unsound” action “never before publicly applied to an American company.”
The Pentagon didn't reply to questions in time for publication.
Some military contractors were already cutting ties with Anthropic, a rising star in the tech industry that sells Claude to a variety of businesses and government agencies. Lockheed Martin said it will “follow the President's and the Department of War's direction” and look to other providers of large language models.
“We expect minimal impacts as Lockheed Martin is not dependent on any single LLM vendor for any portion of our work,” the company said. It's not yet clear if the designation aims to block Anthropic's use by all federal government contractors or just those that partner with the military.
The Pentagon's decision to apply a rule designed to address supply threats posed by foreign adversaries was quickly met with criticism from both opponents and some supporters of Trump's Republican administration. Federal codes have defined supply chain risk as a “risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert” a system in order to disrupt, degrade or spy on it.
U.S. Sen. Kirsten Gillibrand, a New York Democrat and member of the Senate Armed Services Committee and Senate Intelligence Committee, called it “a dangerous misuse of a tool meant to address adversary-controlled technology.”
“This reckless action is shortsighted, self-destructive, and a gift to our adversaries,” she said in a written statement on Thursday.
Neil Chilson, a Republican former chief technologist for the Federal Trade Commission who now leads AI policy at the Abundance Institute, said the decision looks like “massive overreach that would hurt both the US AI sector and the military's ability to acquire the best technology for the US warfighter.”
Earlier in the day, a group of former defence and national security officials sent a letter to US lawmakers expressing “serious concern” about the designation.
“The use of this authority against a domestic American company is a profound departure from its intended purpose and sets a dangerous precedent,” said the letter from former officials and policy experts, including former CIA director Michael Hayden and retired Air Force, Army and Navy leaders.
They added that such a designation is meant to “protect the United States from infiltration by foreign adversaries – from companies beholden to Beijing or Moscow, not from American innovators operating transparently under the rule of law. Applying this tool to penalise a U.S. firm for declining to remove safeguards against mass domestic surveillance and fully autonomous weapons is a category error with consequences that extend far beyond this dispute.”
Even as it loses major partnerships with defence contractors, Anthropic has experienced a surge of consumer downloads over the past week as some of the public sides with its stance. Anthropic has boasted of more than a million people signing up for Claude each day this week, lifting it past OpenAI's ChatGPT and Google's Gemini as the top AI app in more than 20 countries in Apple's app store.
The dispute with the Pentagon has also further deepened Anthropic's bitter rivalry with OpenAI, which announced a deal with the Pentagon on Friday to effectively replace Anthropic's Claude with ChatGPT in classified environments.
OpenAI CEO Sam Altman later said he shouldn't have rushed a deal that “looked opportunistic and sloppy.”