In Claude We Trust?

Joel Thayer March 22, 2026

The lawsuit over the Department of War’s (DOW) contractual dispute with Anthropic has percolated through internet forums and policy discussions at a rate that would make even Claude blush. Despite the volume of commentary, we may be missing the point: the inherent danger of the DOW signing away its authority to dictate the terms of war to a multi-billion-dollar company that claims its technology could end the world.

To be clear, no company should be able to dictate the terms of war, let alone a rich company with a clear ideological agenda that is developing technology as powerful as AI. That is a job for our federal government, including Congress.

Let’s talk about what happened.

Even though most of the negotiations occurred in the twilight of the Biden Administration, the DOW awarded Anthropic a $200 million contract last summer to incorporate Claude into classified U.S. military networks. This was a monumental win for the pro-AI movement, as it indicated the government’s trust in the technology to handle its most sensitive operations. And until recently, the DOW and Anthropic seemed to be in a never-ending honeymoon phase. Claude may even have assisted the DOW in planning the capture of former Venezuelan president and indicted narcoterrorist Nicolás Maduro.

So how did good love go bad? According to reports, Anthropic was unaware of Claude’s use in the Venezuelan operation and believed that the DOW’s use of Claude in that case may have violated its terms of service.

This raised serious concerns at the DOW, which had previously sent out a memorandum clarifying its intent to use frontier AI systems, like Claude, for “any lawful use.” Anthropic pushed back, asking for certain prohibitions on mass surveillance and greater control over how its systems are used.

Negotiations turned hostile and broke down, which led Secretary Pete Hegseth to call for the removal of Claude from proprietary DOW systems and to designate it a “supply chain” risk, ensuring that neither the department nor its contractors could use Anthropic’s AI platform.

Legal mechanisms aside, the Pentagon’s principal concern is legitimate.

Anthropic’s prohibitions are extremely vague, which could give this private company outsized authority over an essential agency whose preeminent ambit is to ensure our nation’s security. Worse, given Anthropic’s willingness to throw its weight around with the Pentagon, what’s stopping the company from exercising that same leverage over other contractors, like Palantir? This has grave implications not just for the Pentagon but for how we will conduct military operations altogether.

Anthropic’s pearl-clutching is histrionic, because this company was not created yesterday. It’s a sophisticated, experienced seller that chose to contract with the Pentagon. It’s no secret that a primary function of the Pentagon is to engage in military operations. It’s hard to imagine that the company didn’t see this coming, especially as President Trump has made it clear since his first term that he would champion “peace through strength.”

What’s more, Anthropic is not a regular software provider. It claims that its AI systems are on track to displace all human labor. Dario Amodei himself has publicly worried that his company could create “the single most serious national security threat we’ve faced in a century, possibly ever.” He believes that the systems his company develops could end up “taking over the world.”

Amodei says he has moral qualms about how his technology might be used by the DOW. But given that he believes there is a 25% chance the technology he is racing to develop destroys humanity, he should perhaps consider whether to continue building it in the first place. He will never do that, likely for financial reasons, so what moral high ground does he claim to be standing on?

As far as the dispute between Anthropic and the DOW is concerned, the likely end to all of this is simple: the government uses another AI system. There is precedent. Google’s employees, much like Anthropic’s, protested the company’s work with the Pentagon to “use its AI technologies for deploying and monitoring unmanned aerial vehicles” (i.e., Project Maven). Google backed out, and Palantir was right in line to take its place. Similarly, hundreds of companies that don’t use Claude are vying for the same contract Anthropic finds morally offensive.

But the broader problem remains. These AI companies currently have primary control over the most important technology of our time. If Anthropic is truly concerned about the government’s legal ability to conduct mass surveillance, then that conversation is better had in Congress, not in a contractual dispute with a government agency.
