AI, Pentagon and Anthropic
The Pentagon escalated its ongoing dispute with Anthropic PBC on Thursday, making public a threat to effectively ban the artificial intelligence startup from the US military’s vast supply chain.
The Pentagon had previously asked Anthropic, OpenAI, Google, and xAI to allow the use of their AI models for “all lawful purposes.” Anthropic put up the most resistance, citing fears that its models could be used for autonomous weapons systems and mass domestic surveillance.
The escalation comes ahead of a Friday deadline for the two sides to reach an agreement or face tough government measures.
Anthropic CEO Dario Amodei said on Thursday that the company “cannot in good conscience accede” to the military’s terms over the use of Claude.
Anthropic said Thursday that “virtually no progress” had been made in the company’s talks with the Pentagon over the terms of use for its AI models ahead of a Friday afternoon deadline.
Debates have long swirled around the use of AI in weapons targeting; the prospect of removing human involvement entirely remains an uncomfortable one.
The Defense Department has been feuding with Anthropic over military uses of its artificial intelligence tools. At stake are hundreds of millions of dollars in contracts and access to some of the most advanced AI on the planet.
Until this week, Anthropic was the only AI company cleared to deploy its models on classified networks. Elon Musk's xAI is now the second.
Anthropic has reached a familiar crossroads for a growing tech company: how to scale without compromising the principles that set it apart.
The company's Claude chatbot is one of the few AI systems cleared for use in classified settings. But a standoff between Anthropic and the Trump administration is putting its government work at risk.