Anthropic Refuses Pentagon Request to Remove AI Safeguards
The Pentagon may decide to officially designate Anthropic as a "supply chain risk" to push the company out of government work, sources say.
The decision comes ahead of a Friday deadline to reach an agreement or face tough government measures.
In January, Anthropic “retired” Claude 3 Opus, which at one time was the company’s most powerful AI model. Today, it’s back — and writing on Substack.
Anthropic said Thursday that “virtually no progress” had been made in the company’s talks with the Pentagon over the terms of use for its AI models ahead of a Friday afternoon deadline.
DeepSeek, Moonshot and MiniMax created more than 16 million interactions with Claude using roughly 24,000 fake accounts, the U.S. company said in a blog post.
A hacker exploited Anthropic PBC’s artificial intelligence chatbot to carry out a series of attacks against Mexican government agencies, resulting in the theft of a huge trove of sensitive tax and voter information.
Defense Secretary Pete Hegseth has set a Friday deadline for the company to grant the military full lawful access to its AI or risk losing its $200 million contract and being labeled a supply chain risk.
Anthropic CEO Dario Amodei said on Thursday the company "cannot in good conscience accede" to the military's terms over the use of Claude.