Hey Hackers!

Welcome back to 3 Tech Polls, HackerNoon's weekly newsletter that curates results from our Poll of the Week plus 2 related polls from around the web. Thank you for voting in our past polls.

This week, we're talking about the biggest controversy in AI: Anthropic's ban from all U.S. federal use after refusing to strip its safety guardrails.

Anthropic was banned from all U.S. federal contracts after refusing Pentagon demands to remove AI safeguards on mass surveillance and autonomous weapons. The decision cost them a $14 billion contract that OpenAI immediately snagged. The Pentagon designated Anthropic a national security supply chain risk, the first time this label has been applied to a U.S. tech company rather than foreign adversaries like China or Russia.

The question at the center of this firestorm: Should AI companies compromise their safety guardrails to secure government contracts?

HackerNoon Poll: Did Anthropic make a hero move or a fatal mistake?

315 voters weighed in:

When you combine the first two options (Hero Move + Necessary Friction), 61% of voters support Anthropic's decision. The other 39% split among distrust, business concerns, and national security worries.

HackerNoon community members were vocal:

"100% super chad hero move that will earn them the respect of the entire world for generations to come!" - @projenix

"Technology should be used for peace and human benefit, not for war!" - @hacker27397875

That's the HackerNoon community's stance. But what does the broader prediction market think about how this plays out?

Want to say your piece? Share your thoughts on the poll results here.

🌐 From Around the Web: Polymarket Pick

Will Pete Hegseth ban Claude by March 31?

When the market launched on Polymarket on February 16, 2026, traders gave the ban only a 15-27% chance. Most believed the Pentagon needed Claude too much to actually follow through on its threats.

Then Anthropic CEO Dario Amodei released his public letter on February 26, refusing Pentagon demands. Within hours, the odds spiked from 27% to 49%, a 22-point jump. Traders saw the refusal and immediately priced in higher ban risk.

The spike didn't last. Within 24 hours, odds corrected to 34%, then crashed to 13% as traders reassessed. The market bet on pragmatism. Banning Claude meant losing access to one of the best AI systems. Threats are cheap, but actually banning is costly.

On February 27, Defense Secretary Pete Hegseth officially designated Anthropic a supply chain risk. President Trump directed all federal agencies to phase out Claude within six months. The market currently shows 99% YES. The ban happened.

🌐 From Around the Web: Kalshi Pick

Will the Pentagon designate Anthropic a supply chain risk?

Kalshi ran a parallel market asking whether the Pentagon would formally designate Anthropic as a supply chain risk.

Like Polymarket, Kalshi traders initially bet against the ban. Early odds suggested traders viewed Pentagon threats as negotiating tactics rather than genuine policy.

The market resolved YES when the Pentagon sent official notification to Anthropic leadership.

Defense Secretary Pete Hegseth said in a statement:

This has been about one fundamental principle: the military being able to use technology for all lawful purposes. The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk.

The designation requires defense vendors and contractors to certify they don't use Anthropic's models in Pentagon work. The label is typically reserved for entities controlled by foreign adversaries when national security or espionage concerns arise.

Senator Kirsten Gillibrand (D-NY), a member of the Senate Armed Services Committee and Senate Intelligence Committee, called it "a dangerous misuse of a tool meant to address adversary-controlled technology."

The ban sends a message to every AI company: safety concerns don't outweigh compliance demands.

For now, Anthropic is fighting the designation in court while OpenAI runs Pentagon AI. Whether Anthropic's stance earns them "the respect of the entire world for generations" or becomes a cautionary tale about the cost of principles depends on whether the courts uphold the unprecedented designation.

That's it for this week.

Until next time, Hackers!