The U.S. Department of Defense has officially barred the use of Anthropic’s Claude AI model in all DoD work, the first time the Pentagon has prohibited a commercial AI system for military purposes. The move forces contractors to stop using Claude, could prompt tighter AI export rules, and reflects growing concern about AI’s role in strategic decision‑making and the tension between commercial availability and national security.
The Pentagon’s designation of Anthropic as a supply‑chain risk is unprecedented: no commercial AI model has previously been formally barred from U.S. military use.
According to Last Week in AI, the Department of Defense (DoD) ordered contractors to certify that they were not using Claude in DoD work and to wind down any existing use, citing 10 U.S.C. § 3252. Anthropic’s CEO, Dario Amodei, argues that the statute’s reach is narrow, covering only contracts tied directly to the DoD, and that the sweeping designation violates its “least restrictive means” requirement; the company intends to challenge the order in court.
The dispute centers on contract terms: Anthropic sought explicit red lines against mass domestic surveillance and fully autonomous weapons, while the DoD insisted on access for “all lawful purposes.” Meanwhile, reports indicate Claude has supported U.S. military operations in Iran, spanning intelligence analysis, modeling and simulation, operational planning, and cyber operations. The DoD’s move fits a broader pattern of tightening oversight of AI systems that could influence strategic decision‑making, a trend that has accelerated since the EU’s 2024 AI Act and the 2025 U.S. AI policy framework.
Microsoft, Google, and AWS have clarified that Claude remains available for non‑defense workloads via M365, GitHub, AI Foundry, and Google Cloud, underscoring the fine line between commercial and military use. Meanwhile, OpenAI announced a DoD deal to deploy its models in classified environments, an agreement it says prohibits mass surveillance and autonomous weapon systems. Amodei dismisses OpenAI’s framing as “straight‑up lies” and “safety theater,” arguing the agreement still hinges on the same “all lawful purposes” language. The back‑and‑forth illustrates how corporate actors are negotiating the very legal and ethical boundaries that regulators are only beginning to codify.
Whether this standoff will spur stricter AI export controls or push companies to negotiate more granular agreements remains to be seen.