The United States Department of Defense (DoD) has initiated inquiries with major defense contractors, including Boeing and Lockheed Martin, to assess their usage of Anthropic's artificial intelligence model, Claude. According to sources familiar with the matter, this move represents a preliminary step toward designating the AI developer as a "supply chain risk"—a penalty historically reserved for entities linked to adversarial nations, such as China's Huawei.
This potential escalation stems from a growing rift between the Pentagon and the San Francisco-based AI lab. Claude is currently the only large language model operating on the military's classified systems, where it is deployed in operations involving Palantir, yet the DoD is reportedly frustrated by Anthropic's refusal to modify its usage policies. Specifically, Anthropic has maintained strict safeguards prohibiting the use of its models for mass surveillance of U.S. citizens and for the development of fully autonomous weapons.
A Deadline Looms
Tensions came to a head during a meeting earlier this week between Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei. Hegseth reportedly issued an ultimatum with a deadline of Friday at 5:01 PM. The Pentagon is demanding that Anthropic revise its policies to permit "all lawful purposes" for military use. Failure to comply could result in the invocation of the Defense Production Act to compel cooperation or, alternatively, a formal supply chain risk designation.
Industry insiders suggest the Pentagon is leaning toward the supply chain risk penalty, a move that would force major defense primes to disentangle themselves from Anthropic’s technology. A senior Defense official indicated that while untangling the systems would be difficult, the administration is prepared to ensure consequences for the company's resistance.
Competition and Consequences
Despite the friction, Anthropic has reportedly enjoyed recent success in securing funding and expanding corporate partnerships. A supply chain risk designation, however, could severely damage its standing with government-affiliated clients. Meanwhile, competitors such as Elon Musk's xAI have reportedly agreed to operate under the "all lawful use" standard to gain access to classified military systems, and Google and OpenAI are in negotiations for similar access, though they face the same requirement to remove safeguards.
An Anthropic spokesperson described the recent discussions as "good-faith conversations" aimed at aligning national security support with responsible model usage. The Pentagon, meanwhile, stated it is preparing to execute whatever decision the Defense Secretary makes following the Friday deadline.
