Pentagon Threatens Anthropic with Defense Production Act Amid AI Policy Dispute

Published on 25 February 2026

Tensions have escalated between the US Department of Defense and artificial intelligence company Anthropic, with the Pentagon threatening to invoke the Defense Production Act (DPA) if a cooperation agreement is not reached by Friday.
According to a senior defense official, the department intends to designate Anthropic a "supply chain risk" if the 5:01 PM deadline passes without an agreement. The move creates a complicated legal scenario: such a designation typically requires government contractors to sever ties with the designated entity, which would seem to contradict the simultaneous effort to compel cooperation.
The dispute centers on the usage policies governing Anthropic's AI models, specifically Claude. Recent reports that the model was used during a January military operation in Venezuela have brought the company's relationship with the Pentagon under scrutiny. CEO Dario Amodei recently met with Secretary Pete Hegseth to discuss the policy terms.
Ethical Red Lines
While Anthropic says it supports national security missions, the company has drawn firm ethical boundaries. Spokesperson Maya Humes stated that the firm is continuing good-faith conversations to ensure its government work aligns with responsible use of its models. Amodei reportedly reiterated two primary prohibitions: use of the company's products for mass domestic surveillance of US citizens, and deployment in physical attacks where AI makes targeting decisions without human input.
A Pentagon official claimed the conflict is unrelated to surveillance or autonomous weaponry, asserting that the department follows the law. The DPA grants the government broad authority over private companies during national emergencies, a power previously invoked during the COVID-19 pandemic to secure medical supplies and vaccine production.
