OpenAI Secures Military Contract as Anthropic’s Stand Against Autonomous Weapons Draws Industry Support

by admin477351

Anthropic’s stand against the use of its AI in autonomous weapons and mass surveillance has drawn significant support from hundreds of workers across the AI industry, even as the company pays the commercial price: the loss of its federal contracts. OpenAI, meanwhile, has secured the Pentagon business Anthropic vacated — while claiming, remarkably, to hold the same principled position that cost its competitor so dearly.

The stand itself was clear and consistently maintained. Anthropic had negotiated with the Pentagon for months over the terms of Claude’s military deployment, holding firm on two conditions the company considered foundational: no use in weapons systems that can kill without human oversight, and no use in programs that conduct mass surveillance of civilians. Pentagon officials wanted those conditions removed. Anthropic refused, and the Trump administration acted accordingly.

President Trump’s ban on all federal use of Anthropic technology was the administration’s most forceful statement yet about its expectations for AI companies in the government market. His public framing of Anthropic’s ethics policy as ideological defiance rather than responsible governance was designed to delegitimize the company’s position and make future resistance from other companies less likely by making the cost clear.

The industry’s response was not the uniformity the administration may have hoped for. Hundreds of workers at OpenAI and Google signed a public letter backing Anthropic, explicitly warning that the Pentagon was trying to divide AI companies against one another to extract concessions that none might grant if they stood together. The letter framed the conflict not as a bilateral dispute but as an industry-wide challenge.

Sam Altman’s Pentagon deal, announced the same night as the ban, claimed to thread the needle — engaging the government while maintaining OpenAI’s own ethical conditions against mass surveillance and autonomous weapons. Whether that needle can actually be threaded in practice will determine not just OpenAI’s fate but the shape of AI development in military contexts for years to come. Anthropic, for its part, said only that its stand is permanent and that its restrictions have never once prevented a lawful government operation.
