When the Department of Defense Tried to Strongarm Anthropic (And Why It Should Worry You)

AI
Published: February 28, 2026

Today, the President ordered every federal agency to immediately cease using Anthropic's AI technology, and the Defense Secretary designated the company a "supply chain risk to national security," a label that until recently had typically been reserved for foreign companies like Huawei and Hikvision.

Anthropic's only apparent crime was refusing to give the Pentagon unrestricted permission to use its AI for mass domestic surveillance and fully autonomous weapons systems.

 

Let me be clear about what I am and what Evenstar is: I'm a small business owner running a managed service provider in Sarasota. Evenstar depends on Anthropic's API for a significant portion of our work, all of which helps keep our clients more secure and their employees safer. Do we have a direct business interest in their survival? Absolutely. But we also have a far deeper interest in not watching the federal government coerce private companies into building surveillance infrastructure to spy on American citizens.

The Free-Market Argument Nobody's Making

Here's what should bother everyone about this situation regardless of political affiliation: the federal government is threatening criminal prosecution against a private company for setting terms of service on its own product.

 

Anthropic didn't refuse to work with the military; the company has held a $200 million Pentagon contract, and Claude has been the only AI model approved for use in classified systems. Anthropic was willing to continue that work with two specific (and reasonable) restrictions:

- no fully autonomous lethal weapons

- no mass surveillance of Americans

 

The Pentagon's response was clear: Accept "all lawful purposes" with no restrictions or face economic destruction.

 

This is not free market capitalism. This is the government demanding that private enterprise surrender all control over how its products are used, under threat of regulatory annihilation. The "supply chain risk" designation means any company doing business with the federal government (from defense contractors to cloud providers to enterprise software companies) must now certify that it doesn't use Claude or Anthropic's API at all, or risk losing its federal contracts.

 

That is not a negotiation; that is a protection racket.

 

What They Actually Demanded

According to Axios, Pentagon Undersecretary Emil Michael was on the phone offering Anthropic a "deal" even as Defense Secretary Pete Hegseth was tweeting the supply chain designation. That deal would have required allowing the Pentagon to collect and analyze Americans' geolocation data, web browsing history, and personal financial information purchased from data brokers.

Read that again if you need to: not "help us with intelligence analysis" or "assist with logistics planning." The explicit demand was for the capability to continuously monitor and profile American citizens at scale using AI.

 

The Pentagon claims it needs AI available for "all lawful purposes" and that having to negotiate individual use cases with a private company is unworkable. But "lawful" is doing a lot of work in that sentence. Plenty of surveillance activities might be technically legal while being catastrophically dangerous to civil liberties. The laws governing domestic surveillance were written before AI could analyze every American's public social media posts, cross-reference them against voter registration records, concealed carry permits, and protest attendance, and automatically flag people who fit certain profiles.

 

As for autonomous weapons, Anthropic CEO Dario Amodei pointed out something technically obvious but politically inconvenient: "Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons." This isn't radical ideology; it's an engineering reality. These systems hallucinate, misinterpret context, and fail in unpredictable ways. You don't need to dig very deep on Reddit or Twitter to see that OpenClaw agents left to their own devices go rogue with almost comical regularity. Putting them in charge of lethal force with no human oversight is reckless and negligent.


Amodei Got It Right

Amodei's response to the Pentagon's ultimatum was a masterclass in principled leadership. He didn't rage or catastrophize. He calmly stated the technical reality, the democratic principles at stake, and the company's continued willingness to support national security within reasonable bounds. From his Thursday night letter: "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad-hoc manner."

 

Any rational reader would interpret this not as obstructionism but as an offer of partnership with guardrails. He continued: "Using these systems for mass domestic surveillance is incompatible with democratic values... We cannot in good conscience accede to their request." In an environment where most tech CEOs fold immediately when the government applies pressure, Amodei looked at a $200 million contract and the threat of regulatory destruction and said no anyway. Not because it was profitable, but because it was right.


The Desperation Is Showing

The President's Truth Social response, however, showed the administration's hand: "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution."

 

Anthropic asked for two restrictions: no mass surveillance of Americans, no fully autonomous weapons. Calling that "strong-arming" is absurd. Threatening "major civil and criminal consequences" if they don't cooperate during the six-month phase-out does not come from a position of confidence; at best, it's flailing. If the Pentagon's position were strong and had any legal basis whatsoever, they wouldn't need the President threatening prosecution. They wouldn't need the Defense Secretary calling this "arrogance and betrayal." They'd just switch to OpenAI or Grok (lol) and move on.

 

On that note, the Pentagon has a problem: OpenAI CEO Sam Altman has said publicly that his company shares Anthropic's red lines on mass surveillance and autonomous weapons. Hundreds of Google and OpenAI employees have signed petitions supporting Anthropic's position. The Pentagon can't afford to blacklist every major AI company, and the industry is starting to realize it.

What This Means for You and Me

I don't know if Anthropic survives this. The economic pressure is real and immediate. But I know that if they get destroyed or forced into compliance, we all lose something important. We lose the precedent that companies can set ethical boundaries on their own products. We lose the example that standing up to government overreach is possible. And we potentially gain a domestic surveillance infrastructure built on AI systems that even their creators warn aren't reliable enough for the applications being demanded.

 

This isn't about being anti-military or anti-national security. I believe in strong defense and I believe AI has legitimate defense applications. But I also believe in checks on government power, especially when that power involves the capability to monitor and profile every American citizen at scale. The irony is that this administration has spent years railing against tech companies for having too much power and being too "woke." Now they're threatening to destroy one for not being compliant enough with surveillance demands.

If you care about free markets, you should be alarmed that the government is using regulatory threats to force compliance. If you care about limited government, you should be alarmed at the surveillance capabilities being demanded. If you care about civil liberties, you should be alarmed at what "all lawful purposes" actually means in practice.

 

If you depend on Anthropic's technology like I do, you should be preparing for service degradation while hoping that Dario Amodei's bet (that principles matter more than short-term compliance) pays off in the long run. Don't stay silent about issues like these. Share this, talk about it, and let your representatives know that mass surveillance infrastructure isn't acceptable just because it's dubiously "lawful." And if you use Claude or Anthropic's API, keep using it. The best way to support companies that do the right thing is to make sure their principles don't bankrupt them.

 

I've built my business on tools I trust from companies I respect, some of which are run by NSA and other state-intelligence alumni. Watching one of them get threatened with destruction for refusing to enable mass surveillance of Americans isn't just bad for my bottom line; it's wholly un-American.

Silent support doesn't pay the bills. Speak up if this matters to you.
Ryan McKee
Chief Ranting Officer
Ryan McKee is the owner of Evenstar MSP, a managed service provider based in Sarasota, Florida. Ryan specializes in Microsoft 365 environments and enterprise-grade security for small and mid-sized businesses. As a daily user of Anthropic's Claude for technical work and client services, he has a direct business interest in the company's survival - and a deeper interest in ensuring private companies can set ethical boundaries on their own products.
