A public showdown between the Trump administration and Anthropic is hitting an impasse as military officials demand the artificial intelligence company bend its ethical policies by Friday or risk ...
San Francisco-based Anthropic, the buzzy artificial intelligence lab and OpenAI rival, has a critical decision to make. It's built up a reputation for prizing safety in AI - but a major customer, the ...
Anthropic drew a red line with the DOD over the use of its model, Claude. This is what smart people are saying about that.
Artificial intelligence company Anthropic rejected the Pentagon's final offer to allow unrestricted military use of its ...
The company behind Claude says it would rather lose defence work than weaken its ban on autonomous weapons and domestic surveillance.
Anthropic said that it can't "in good conscience" comply with a Pentagon edict to remove guardrails on its AI, despite a ...
AI safety and research company Anthropic has told the Pentagon it will not agree to its demands to drop critical safety precautions and grant the U.S. military full access to its AI capabilities.
Anthropic, the AI company behind the Claude chatbot that was founded with a focus on safe technology, appears to be scaling back its safety commitments in order to keep the company competitive, after ...
Anthropic refused Pentagon contract terms allowing broad military AI use, citing bans on autonomous weapons and mass surveillance.