Anthropic is pushing back against the U.S. Department of Defense after military officials demanded the company allow unrestricted use of its Claude AI model. CEO Dario Amodei said Thursday that the company “cannot in good conscience accede to their request,” escalating a dispute over how the technology is deployed in national security operations.

The Pentagon has set a Friday deadline for Anthropic to comply, warning that failure could result in contract cancellation, designation as a “supply chain risk,” or invocation of the Defense Production Act to gain broader authority over the AI. Amodei called the threats “inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”

Safeguards protect democratic values

Anthropic outlined two safeguards it refuses to remove. The first prevents Claude from being used for mass domestic surveillance. Amodei wrote,

“AI-driven mass surveillance presents serious, novel risks to our fundamental liberties.”

The second blocks deployment in fully autonomous weapons; today’s AI systems, Amodei said, “are simply not reliable enough to power fully autonomous weapons.”

Claude is already extensively used across the Department of War and other national security agencies, including for intelligence analysis, modeling, operational planning, and cyber operations. Amodei stressed that Anthropic has never objected to particular military operations or attempted to limit the Pentagon’s use of its AI outside the safeguards.

The company has also acted to protect U.S. strategic interests. It has cut off access for firms linked to the Chinese Communist Party and blocked CCP-sponsored cyberattacks, forgoing potential revenue in the process.

Public and industry reactions

The dispute has drawn attention from Silicon Valley, lawmakers, and former military officials. A group of employees from Anthropic’s competitors, including OpenAI and Google, expressed support for Amodei’s stand in an open letter. Retired Air Force Gen. Jack Shanahan called Anthropic’s safeguards “reasonable” and said AI models such as Claude “are not ready for prime time in national security settings.”

The Pentagon has maintained that it does not intend to use AI for mass domestic surveillance or fully autonomous weapons. Spokesman Sean Parnell stated that the department wants to use Anthropic’s model for “all lawful purposes” to avoid jeopardizing operations.

Emil Michael, the Defense undersecretary for research and engineering, criticized Amodei on social media, accusing him of having a “God-complex.” The remark has done little to sway the tech community, much of which supports Anthropic’s ethical stance.

Anthropic’s approach to national security

Amodei framed the company’s stance as aligned with national security interests.

“I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries,” he said.

Despite the standoff, Anthropic has signaled it will cooperate if the Pentagon decides to drop its AI. Amodei said,

“Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions.”

At the same time, the company recently revised its Responsible Scaling Policy, dropping a pledge that barred training advanced AI systems without guaranteed safeguards. Critics, including Robert Weissman of Public Citizen, said the Pentagon’s pressure “signals broader pressure on the tech industry” and could discourage other AI companies from imposing similar safety limits.

Broader implications for AI and defense

The standoff highlights tensions between rapid AI deployment and ethical safeguards in national security. King’s College London research found that leading AI models, including Claude, could escalate simulated crises to nuclear conflict in 95 percent of scenarios. Amodei noted that fully autonomous weapons require proper guardrails that do not yet exist.

Anthropic continues to provide its AI to government agencies while maintaining safeguards designed to prevent misuse. The company’s stance underscores the challenge of balancing AI innovation, military needs, and democratic principles.

