The Pentagon has designated AI company Anthropic as a supply chain risk, cutting off its defense work after a dispute over ethical restrictions on its technology. The conflict centers on Anthropic's refusal to allow its AI to be used in fully autonomous weapons, which a top defense official views as an obstacle to military projects like the Golden Dome missile defense program.
Anthropic plans to sue over the designation, which disrupts its partnerships with military contractors. The company says it sought only to bar its technology from mass surveillance and fully autonomous weapons, maintaining that military decisions belong to the Department of Defense, not private firms.
The same defense official criticized Anthropic's stance, arguing that AI autonomy is crucial for future defense scenarios, such as rapid-response space-based missile defense. The dispute highlights a broader tension between military ambitions for autonomous systems and AI companies' ethical governance policies.
Main Topics: the Pentagon-Anthropic dispute, autonomous weapons and AI ethics, the supply chain risk designation, and military AI integration in future defense scenarios.