This story on HackerNoon has a decentralized backup on Sia.
Transaction ID: yJqTNSqdatataRC2iKp0S7Y2mvGMMT91OZVXd6eHkQ8

London Is Coming for Anthropic

Written by @unusualwriter | Published on 2026/4/8

TL;DR
After Anthropic refused Pentagon demands to enable autonomous weapons and mass surveillance, the U.S. government did something unprecedented: it branded an American AI company a national security threat using laws built for foreign adversaries like Huawei. A federal judge temporarily blocked it, but the Pentagon immediately claimed the ban still stood under a separate statute. With two lawsuits pending and billions in revenue at risk, Britain moved in, pitching Anthropic on a London office expansion and dual stock listing ahead of a late May visit by CEO Dario Amodei. The irony: Claude remains the only AI approved for Pentagon classified networks, meaning Washington is fighting to exile a tool it cannot replace.

I. The Push

America did not misplace Anthropic. It pushed it.

For years the relationship looked like a success story. Anthropic was the first major AI lab cleared to handle classified material. Pentagon contracts, intelligence community access, armed services integration. Claude was, and remains, the only AI model approved for use on Pentagon classified networks. That is not a small thing.

Then it refused two things. No autonomous weapons. No mass domestic surveillance. Prior administrations had disagreed and kept working anyway. Pete Hegseth did not keep working.

Trump ordered a government-wide stop on Anthropic products. The Pentagon, now calling itself the Department of War, slapped Anthropic with a supply-chain risk designation, a legal label previously reserved for foreign adversaries. One contracts lawyer called it "the contractual equivalent of nuclear war." The comparison the government was making, by its own actions, was to Huawei.

II. The Fight

Anthropic sued. The California lawsuit argued Hegseth had exceeded his authority and that the designation was not a security decision but retaliation for public dissent.

The most damaging detail came from inside the government's own filings. A court submission included a one-paragraph email from Emil Michael, the Pentagon's own negotiator, sent the day after the designation was finalized, saying the two sides were "very close here" on the exact issues now cited as national security threats. The man who blacklisted Anthropic told its CEO the next morning they were nearly aligned. That is not a security decision. That is a bargaining chip.

Judge Rita Lin agreed something had gone wrong. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," she wrote, blocking the designation temporarily.

The government's response was immediate. Hours later, Michael posted on X that the designation was "in full force and effect" under a separate statute outside the judge's jurisdiction. A second case in the DC Circuit remained pending. The fight had not ended. It had relocated.

III. The Opening

While Washington and Anthropic traded court filings, Keir Starmer's government got to work.

British proposals include an expanded London office and a dual stock listing, with the pitch going directly to Dario Amodei during a late May visit. OpenAI had already committed to making London its largest research hub outside the U.S. Google DeepMind has been based there since 2014. Britain is not offering Anthropic a quiet refuge. It is offering it a seat in a race already running.

Insiders say Anthropic's AI is vastly better suited to warfare than any competitor's, and that ChatGPT, Gemini, or Grok could take months to close the gap. Neither side can fully walk away. But a company with that kind of leverage choosing London over Washington is a different story from a desperate company grabbing a lifeline.

Washington created this opening. London just noticed it first.

[story continues]



Topics and tags
ai-geopolitics|uk-tech-policy|claude-ai|ai-regulation|supply-chain-risk|london-tech|anthropic|hackernoon-top-story