The Pentagon has designated one of America's top AI companies a national security risk, a label typically reserved for foreign adversaries, and a federal appeals court on Wednesday refused to block that decision, keeping Anthropic's technology out of military systems.
The ruling locks in the government's restrictions for now. It bars defense contractors from using Anthropic's Claude AI system in Pentagon work. Contractors must also certify that the technology plays no role in that work. The court acknowledged Anthropic is likely to suffer harm, but said that harm does not outweigh the government's interest in controlling military access to AI during an active conflict.
“On one side is a relatively contained risk of financial harm to a single private company,” the court wrote. “On the other side is judicial management of how, and through whom, the Department of Defense secures vital AI technology during an active military conflict.”
Anthropic first sued in March after the Trump administration moved to block federal use of its systems. The company argues the government crossed a constitutional line on two fronts: that the designation violates the First Amendment by retaliating against its publicly stated views on AI safety, and that it violates the Fifth Amendment by denying Anthropic any opportunity to challenge the decision before it took effect.
In short, Anthropic says the government punished the company for trying to set limits on how its own technology could be used. The company says the designation could cost billions in lost business and seriously damage its reputation with customers and partners.
The dispute began when Anthropic and the Pentagon failed to agree on how Claude would be used in national security systems. Anthropic sought to prevent its technology from being used for mass surveillance of Americans or in fully autonomous weapons. The Pentagon pushed for broader, less restricted access. Negotiations collapsed.
After talks failed, the administration designated Anthropic as a supply chain risk, a label historically applied to foreign-linked firms, not domestic companies. The move was striking given that Anthropic had already secured a $200 million Pentagon contract before the dispute began.
Anthropic is shut out of Defense Department work for now, though it can still work with other parts of the federal government while the case continues.
AI Is Already Embedded in Military Systems - Now the Fight Is Over How Far It Can Go
Justice Department lawyers argue the designation has nothing to do with Anthropic's views on AI safety. It stems, they say, from the company's refusal to accept standard contract terms. They warn that unresolved uncertainty over how Claude can be used could disrupt active military operations. Deputy Attorney General Todd Blanche called Wednesday's decision a victory for military readiness.
The dispute is now playing out across multiple courts. A federal judge in California recently blocked part of the Pentagon's action in a separate case, but the D.C. Circuit's ruling leaves the core restriction in place.
Artificial intelligence is already embedded in intelligence analysis, cyber defense, and military planning. For now, the Pentagon keeps that authority. But a final ruling could come within months. What happens next will decide who sets the rules for how AI is used in war.