The Trump administration has drawn up strict rules for civilian artificial intelligence contracts that would require AI companies to allow "any lawful" use of their models amid a stand-off between the Pentagon and Anthropic, the Financial Times reported on Friday.
The report comes a day after the Pentagon formally designated Anthropic a "supply-chain risk" and barred government contractors from using the AI firm's technology in work for the U.S. military. That move followed a months-long dispute over the company's insistence on safeguards that the Defense Department says went too far.
A draft of the guidelines reviewed by the FT says AI groups seeking business with the government must grant the U.S. an irrevocable license to use their systems for all legal purposes.
The guidance from the U.S. General Services Administration (GSA) would apply to civilian contracts and is part of a broader government-wide effort to strengthen AI services procurement, the FT reported, adding that it mirrors measures the Pentagon is considering for military contracts.
The White House and the GSA did not immediately respond to requests for comment.
The draft from the GSA also mandates that contractors "must not intentionally encode partisan or ideological judgments into the AI systems data outputs," the FT reported.
It also requires companies to disclose whether their models have been "modified or configured to comply with any non-U.S. federal government or commercial compliance or regulatory framework," the FT reported.