Palantir Technologies CEO Alex Karp has said the company still relies on AI models from Anthropic, despite the US Department of War (DoW) recently classifying the firm as a "supply chain risk", according to a report by CNBC.
"The Department of War is planning to phase out Anthropic; currently, it's not phased out. Our products are integrated with Anthropic, and in the future, will probably be integrated with other large language models," Karp told CNBC.
Anthropic had partnered with Amazon Web Services and Palantir in 2024 to support the military with AI capabilities.
The Pentagon last week formally categorised Anthropic as a "supply chain risk", a label typically applied to companies connected to foreign adversaries. The designation means contractors and vendors working with the Pentagon must confirm they are not using Anthropic's Claude AI models in projects tied to the US military.
However, the transition is not immediate. As reported by CNBC, Claude models are still being used in systems supporting US operations in Iran.
Anthropic has challenged the move. The company filed a lawsuit against the administration of President Donald Trump, arguing that the designation is "unprecedented and unlawful". It is seeking a court order to pause the Pentagon's action, warning that hundreds of millions of dollars in government contracts could be affected.
According to the Pentagon's chief technology officer Emil Michael, replacing Claude will take time because it is deeply integrated with military infrastructure.
"You can't just rip out a system that's deeply embedded overnight," Michael said.
Trump has said that federal agencies will be given six months to remove Anthropic's products from government systems. However, an internal Pentagon memo suggests exceptions could be granted where the technology is essential to operations and no suitable alternatives are available.
Meanwhile, on Friday, Michael explained the government's concerns during an appearance on CNBC's Squawk Box. He said the issue relates to the "different policy preferences" embedded in the model during training, which could conflict with US military needs.