Pentagon, Anthropic
Anthropic said it sought narrow assurances from the Pentagon that Claude won't be used for mass surveillance of Americans or in fully autonomous weapons.
The Pentagon previously asked Anthropic, OpenAI, Google, and xAI to allow the use of their AI models for "all lawful purposes." Anthropic put up the most resistance, fearing its models could be used for autonomous weapons systems and mass domestic surveillance.
Anthropic said it can't "in good conscience" comply with a Pentagon ultimatum to remove guardrails on its AI.
Anthropic drew a red line with the DOD over the use of its model, Claude. This is what smart people are saying about that.
A senior official in the Department of Defense accused Anthropic of "lying" about how the U.S. military intends to use the private tech firm's Claude AI system.
Defense chief Pete Hegseth has threatened to force the company to lift guardrails against greater military use of AI.
Start-up Anthropic and the U.S. military are careening toward a clash over government use of artificial intelligence — and whether it should be allowed to kill.
The Pentagon has reportedly asked Boeing and Lockheed Martin to detail their reliance on Anthropic’s Claude chatbot ahead of a Friday deadline for the AI firm to either relax its safeguards or face blacklisting.