Pentagon, Anthropic
Anthropic said it sought narrow assurances from the Pentagon that Claude won’t be used for mass surveillance of Americans or in fully autonomous weapons.
A senior official in the Department of Defense accused Anthropic of "lying" about how the U.S. military intends to use the private tech firm's Claude AI system.
The Pentagon may decide to officially designate Anthropic as a "supply chain risk" to push it out of government, sources say.
The Pentagon previously requested that Anthropic, OpenAI, Google, and xAI allow the use of their AI models for “all lawful purposes”; Anthropic put up the most resistance, citing fears its models could be used for autonomous weapons systems and mass domestic surveillance.
In January, Anthropic “retired” Claude 3 Opus, which at one time was the company’s most powerful AI model. Today, it’s back — and writing on Substack.
Defense Secretary Pete Hegseth gave Anthropic until Friday at 5 p.m. to grant the military unrestricted use of its AI technology.
Amodei’s statement makes clear that Anthropic is not against the use of its AI by the U.S. military, explaining that “Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications.”
The startup is trying to transform its Claude AI model from a coding tool into a much broader workplace technology platform.
Claude Code gains new Remote Control features for long-running tasks; start a session with /remote-control and open the session URL on mobile.
Anthropic has pressed for assurances that its AI won’t be used for mass surveillance of Americans or in autonomous weapons that operate without human oversight.