The Pentagon has officially designated Anthropic a supply-chain risk after the two failed to agree on how much control the military should have over the company's AI models, including their use in autonomous weapons and mass domestic surveillance. As Anthropic's $200 million contract fell apart, the DoD turned to OpenAI instead, which accepted and then watched ChatGPT uninstalls surge 295%. As the stakes keep rising, the question remains: how much unrestricted access should the military have to an AI model?
Watch as Equity hosts Kirsten Korosec, Anthony Ha, and Sean O’Kane break down what startups should know about chasing federal AI contracts, plus the week’s biggest tech stories, from Paramount’s Warner Bros. deal and MyFitnessPal’s Cal AI acquisition to Pinterest’s $1B AI push, Anduril’s $60B valuation, and whether the “SaaSpocalypse” is real.
Subscribe to Equity on YouTube, Apple Podcasts, Overcast, Spotify, and all the casts. You can also follow Equity on X and Threads, at @EquityPod.