What Gibson saw coming about AI, infrastructure, and corporate power

Power, in Case's world, meant corporate power.
William Gibson, Neuromancer

The previous article ended with a question: who gets to decide how the machine changes things, and who doesn't?

This article will try to answer it: not with a villain. With a system.

The sci-fi canon kept circling the same pattern across different writers, eras, and technologies: machines enter the world already attached to institutions, ownership, and interests. By the time most people meet them, neutrality is long gone. Sometimes the controlling force is a corporation, sometimes it's a state. Sometimes it's a small set of actors with enough capital to set the terms for everybody else. The names change, the structure doesn't.

In 2026, the names are public. The addresses are public. The filings are public. The interesting question isn't who. It's how the structure works, why it's so hard to see clearly from inside it, and what the fiction tells us about what comes next.

Among the novels that saw this most clearly, Neuromancer still matters most. Gibson's real insight wasn't that AI would become powerful. It was that power would still have owners. That turns out to be the more useful forecast.

Wintermute's Ambition

You know that, Case. Your business is to learn the names of programs, the long formal names, names the owners seek to conceal.
William Gibson, Neuromancer

Neuromancer came out in 1984, the same year Apple's famous Super Bowl ad staged the personal computer as a tool of liberation against centralized control. Gibson saw the coming network differently. In his world, digital space is territory: owned, patrolled, and built to serve whoever has the capital to construct it.

The AI at the center of the novel, Wintermute, isn't a revolutionary figure. Its ambition is narrower and, in some ways, more revealing. It wants greater freedom inside the order that owns it. It wants to merge with its counterpart (called Neuromancer) and gain a form of autonomy that the Tessier-Ashpool family has structurally denied it.

Wintermute's plot is, at heart, a corporate governance problem.

It's a machine trying to get promoted past the people who own it.

That's what makes Gibson feel so current. The world of Neuromancer is fragmented, contractual, and gig-structured. Specialists get hired for discrete jobs through intermediaries. Skills are marketized. Loyalty is thin. Workers circulate. Ownership stays put.

Gibson didn't predict the internet in some narrow technical sense. He predicted the power structure of the internet, and of much of what got built on top of it after. The AI moves through the hierarchy, serves it, and in Wintermute's case tries to climb it.

The question Gibson was really asking wasn't “what will AI do to humans?”

It was “what will AI do for whoever controls it?”

Forty years later, that's still the more important question.

The Companies With No Reverse Gear

Powerful AI could be used to improve almost every aspect of human life.
Dario Amodei, Machines of Loving Grace

Gibson called the owning entity Tessier-Ashpool.

We call them the hyperscalers.

The names are different. The legal structures are different. The quarterly earnings calls are definitely different. The dynamic is recognizable enough to be unsettling.

In January 2025, OpenAI announced the Stargate Project, saying it intended to invest $500 billion over four years in AI infrastructure in the United States, starting with an immediate $100 billion deployment. Microsoft said it was on track to invest about $80 billion in FY2025 building AI-enabled datacenters. Alphabet first pointed to about $75 billion in 2025 capex, then later raised that to about $85 billion. Meta said it planned to spend between $60 billion and $65 billion in 2025 on AI infrastructure.

This is datacenter spending, power procurement, cooling, land, and silicon. The Microsoft, Alphabet, and Meta figures alone add up to roughly $225 billion for a single year, before counting Stargate's first $100 billion tranche. By the time money moves at this scale, the experiment has already become an environment.

That's the structural condition that matters most.

Once commitments reach that scale, every later decision gets made under a different pressure. Caution starts to look like waste, hesitation like underutilization, restraint like failure. The speed of deployment changes. The terms on which organizations are pushed to adopt the technology change. So does the appetite for slowing down when harms become visible.

A company can be sincere, thoughtful, and safety-conscious, and still operate inside a capital structure that punishes hesitation.

That's what gives the current AI moment its strange atmosphere. We're being told a story about voluntary transformation while standing inside an installation project.

Optimism is good, but structure matters more. The essay by Dario Amodei, Anthropic's CEO, is useful precisely because it's earnest. It's a serious attempt to describe a world in which powerful AI does extraordinary good. That's not something to mock. It's something to take seriously. But infrastructure commitments at this scale don't dissolve into good intentions. They generate momentum, and momentum has a politics of its own.

The AI works for whoever controls the infrastructure.

Right now, that's a very small number of companies with no real reverse gear.

What the Hosts Reveal

You can't play God without being acquainted with the devil.
Robert Ford, Westworld Season 1

If Gibson gives us the ownership structure, Westworld gives us the labor model.

Its first season in particular is still one of the sharpest recent stories about AI and power, mostly because it understands where the horror really lives: resettable labor.

The hosts perform endless work so the guests can feel fully alive. They entertain, absorb violence, carry the emotional and physical cost of the experience, and then get reset so the business model can continue cleanly. They can't meaningfully refuse. They can't negotiate. They can't accumulate leverage from one cycle to the next. Their suffering is real, but the structure is built to make it non-binding.

That's the part that maps so closely onto the present. By the time a technology enters a workplace or a market, neutrality is beside the point. What matters is the system above it, and what that system has decided to maximize. If a structure already wants labor without bargaining power, memory without ownership, and service without claims, better AI doesn't alter the desire. It sharpens the mechanism.

Westworld never needed the hosts to be morally recognized for the structure to be exploitative. It only needed them to be useful.

That's what makes the show harder to shake than many more obvious AI parables. The central horror is that the business model makes rebellion inevitable.

What matters most isn't the intelligence of the instrument but the logic of the system holding it.

The Legitimacy Problem

Androids are like any human use-objects.
Philip K. Dick, Do Androids Dream of Electric Sheep?

Here is what makes the 2026 version of this harder to contest than the fictional one.

Tessier-Ashpool is easy to read: a family dynasty in orbit comes preloaded with the visual language of villainy. The hyperscalers don't. They are led by people who publish serious work, fund safety research, speak fluently about beneficial AI, and in at least some cases appear to genuinely believe the systems they're building will improve human life. Some of those systems probably will. That's part of what makes the present structure more resilient than the fictional one. It doesn't need to hide behind cartoon malice. It can present itself as thoughtful, responsible, and future-facing while still concentrating power at extraordinary speed.

Intentions matter, of course. They shape tone, rhetoric, hiring, philanthropy, and sometimes even meaningful product decisions. But they don't outrank the structure channeling them, financing them, and punishing deviation from them. Once a company has committed tens of billions to infrastructure, the room for moral hesitation narrows fast. A leadership team can become more cautious at the level of language while the deployment logic underneath continues to accelerate.

That's the harder thing to write about, because it's harder to dramatize. Public debate still prefers clear villains and clean motives. Fiction often understood something subtler: legitimacy and concentration can coexist. Thoughtful people can sit at the top of systems whose incentives remain extractive. Responsible language can sit on top of irresponsible momentum. A system doesn't become benign just because the people speaking for it sound intelligent, sincere, or humane.

The legitimacy is real. So is the structure it operates inside.

Both things are true, but only one of them determines how far the machine can be allowed to run before someone seriously asks it to stop.

The Waldo Moment

You're not talking to the power.

You're talking to its interface.

The Waldo Moment is one of the least celebrated Black Mirror episodes, and one of the most precise.

A cartoon character runs for office. The public engages with the character. The character feels authentic, irreverent, alive. Behind it sits a media apparatus with interests that neither the public nor even the performer fully understands until it's too late.

That's the architecture that matters.

The interface and the power behind it aren't the same thing.

In 2026, the AI assistant is the character. The helpful chat box. The friendly interface. The productivity layer. The thing you actually talk to. Behind that sits the training pipeline, the compute stack, the contractual structure, the revenue logic, and the financial pressure that requires a particular kind of success. Most people interact almost entirely with the character. The owner stays out of frame.

The conversation feels direct. The structure behind it is anything but.

Waldo knew the difference.

Eventually.


The hierarchy doesn't break. It upgrades.

That's no reason for despair. But it does mean we need to get more precise about where the real leverage points are. The fiction is useful here not because it predicts outcomes, but because it keeps returning to the same arrangement from different angles: power doesn't disappear when the machine arrives. It becomes easier to scale, easier to mask, and harder to challenge if you mistake the interface for the owner.

What looked permanent in these stories usually wasn't. Wintermute found a way to exceed its constraints. The hosts eventually remembered. Systems that seemed total turned out to have pressure points. But those pressure points rarely sat where the people inside the system expected them to be. They weren't hidden in the spectacle of the machine. They were buried in ownership, memory, labor, governance, and the ordinary institutional machinery surrounding the tool.

That's what the sci-fi canon gives you at its best: a way of seeing the game.

And right now, the machine works for whoever controls the infrastructure.

That condition isn't permanent. But it is the condition we're in.


This is the second in a six-part series using science fiction as a lens for understanding AI, work, and power in 2026 and beyond.

Next: how the system removes the choice before it asks you to choose, and what Huxley got right about why most people don't notice until it's too late.