OpenAI CEO Sam Altman told employees the company cannot make operational decisions for the Defense Department, which will listen to OpenAI's technical expertise but not its opinions on the morality of specific military actions. This followed OpenAI's recent agreement allowing the Pentagon to deploy its AI models on classified networks, a deal Altman acknowledged was initially "opportunistic and sloppy" and is being revised to explicitly prohibit uses like domestic mass surveillance and ensure human responsibility for force.
The agreement came after a standoff with rival Anthropic, which had demanded its technology not be used for mass surveillance or autonomous weapons. Altman also stated he is advocating for the Pentagon to remove its "supply-chain risk" designation of Anthropic, a label typically applied to foreign adversaries.
OpenAI chief executive officer Sam Altman told employees that the company doesn't get to make the call about what the Defense Department does with its artificial intelligence software, and suggested the desire to do so may have been part of the tensions between the Pentagon and rival Anthropic PBC.
During an all-hands meeting on Tuesday, Altman said the Defense Department made clear it will listen to OpenAI's expertise about the technology's applications, but the federal agency does not want the company to express opinions about whether certain military actions were good or bad ideas, according to a person familiar with the matter. "You do not get to make operational decisions," Altman said, according to the person, who asked not to be named since the details are private.
OpenAI declined to comment.
The meeting marked Altman's first chance to field questions from employees after OpenAI reached an agreement late Friday to let the Pentagon deploy the company's artificial intelligence models in its classified network. That happened after a showdown with rival Anthropic, which had demanded its technology not be used for mass surveillance of Americans or the deployment of fully autonomous weapons.
Anthropic also reportedly asked questions about how its technology was used in the raid to capture Venezuelan President Nicolas Maduro. (Anthropic has denied discussing specific operations with the Defense Department.)
Altman previously said he'd reached an agreement with the department that reflects OpenAI's principles that prohibit domestic mass surveillance and require "human responsibility for the use of force, including for autonomous weapon systems." He later said that OpenAI's hasty deal looked "opportunistic and sloppy," and that the company was working with the department to "make some additions in our agreement to make our principles very clear." That includes ensuring that AI isn't used for domestic surveillance of Americans and that intelligence agencies like the National Security Agency can't rely on OpenAI services.
During the all-hands meeting, Altman also said he's continuing to push for the Defense Department to abandon its designation of Anthropic as a supply-chain risk, a label that has not previously been given to a US company and is typically applied to adversaries of the United States. Altman has previously said he wants to help de-escalate the standoff between the Pentagon and Anthropic.