In 2000, exuberance pooled around the pipes. Telecom firms laid oceans of fiber on the assumption that traffic would soon arrive in tidal waves. Some of it did—eventually. But in the near term, the mismatch between capacity and paying demand helped turn telecom into the epicenter of the dot-com bust. The lesson wasn’t that fiber was a bad idea; it was that timing, utilization, and financing discipline matter more than raw ambition.

From fiber to GPUs: The new infrastructure gamble

Fast-forward a quarter century. The “pipes” are now GPU clusters, substations, and cooling loops. And while foundation models draw the headlines, the more consequential mispricing may be forming a layer below them: AI power and compute infrastructure, particularly the speculative build-out of data centers and GPU fleets by newcomers racing into an engineering problem they don’t fully grasp.

The basic narrative is seductive. AI is compute-hungry; compute needs power; therefore, any newly built megawatt attached to racks of accelerators must be a license to print money. But that syllogism flattens the messy physics, supply chains and cash flows that make real-world infrastructure work—or not. It ignores the grid interconnection queues that stretch for years; the transformer and switchgear lead times measured in dozens of months; the water and heat constraints; and the unforgiving depreciation curve of accelerated silicon that can be leapfrogged by the next architecture cycle.

Power, supply chains, and physical limits

Credible analysts expect data-center electricity consumption to more than double by 2030, largely due to AI. That is a macro truth—and it is precisely why so much capital is stampeding into “AI-ready” power and real estate. But macro truths don’t redeem micro mistakes. If you overbuild capacity in the wrong place, on the wrong timeline, with the wrong financing and customers, the demand line can surge while your P&L sinks. (International Energy Agency)

You can see the pressure building in the grid itself. Utilities and equipment suppliers report multi-year lead times for large power transformers and generator step-up units—hardware without which even the best site plan is just a PDF. If your pro-forma assumes energization in 12 months but the transformer ships in 28, you’re not late—you’re upside down. Recent industry surveys placed average lead times around 2.5 years for critical power transformers in mid-2025, a hangover unlikely to clear quickly. (EEPower, T&DWorld)
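The arithmetic of that slip is easy to sketch. A toy model (all figures hypothetical) of what a 16-month transformer delay does to a project that borrowed against a 12-month energization date:

```python
# Toy model: cost of a transformer slipping past the pro-forma energization date.
# All figures are hypothetical, for illustration only.

def idle_carry_cost(capex_deployed: float,
                    annual_interest: float,
                    planned_months: int,
                    actual_months: int) -> float:
    """Interest accrued on deployed capital while the site produces no revenue."""
    idle_months = max(actual_months - planned_months, 0)
    monthly_rate = annual_interest / 12
    return capex_deployed * monthly_rate * idle_months

# $300M deployed, 9% debt, energization modeled at 12 months, transformer ships at 28.
cost = idle_carry_cost(300e6, 0.09, 12, 28)
print(f"Idle carry cost: ${cost / 1e6:.0f}M")  # 16 months of interest, zero revenue
```

On those assumed numbers the delay alone burns tens of millions in interest before the first token is served, which is what "upside down" means in practice.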

Meanwhile, the hyperscalers are buying every clean megawatt they can find. Amazon, Microsoft, Meta, and Google have collectively contracted tens of gigawatts of renewable power purchase agreements (PPAs), altering regional power markets and crowding out smaller entrants who imagined they could “just buy green electrons.” The sophistication here is nontrivial: 24/7 clean energy matching, storage-backed firming, and location-specific hedges. If you’re setting up an SPV to stand up your first 50-MW “AI park,” you’re not competing on a level PPA playing field with teams that have been structuring these instruments for a decade. (S&P Global, McKinsey)

Even if you can energize, operating envelopes are tightening. Water is not the new oil, but it is a real constraint in arid regions where many “cheap-land” sites sit. Depending on weather and system design, data centers can evaporate roughly 0.26–2.4 gallons of water per kWh for cooling. Translate that across months of high-load AI training and you’re not just negotiating with utilities—you’re in dialogue with local communities, regulators and environmental advocates who have learned (from crypto) to ask hard questions early. (UIUC, Stanford)
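The scale of that constraint is easy to underestimate. A back-of-envelope sketch, using the evaporation range above and a hypothetical 50-MW site at high utilization:

```python
# Back-of-envelope water use for an AI training campus.
# Hypothetical 50 MW IT load; evaporation range from the text: 0.26-2.4 gal/kWh.

def monthly_water_gallons(it_load_mw: float, utilization: float,
                          gal_per_kwh: float, hours: float = 730) -> float:
    """Evaporative water consumed in one month at the given cooling intensity."""
    kwh = it_load_mw * 1000 * utilization * hours
    return kwh * gal_per_kwh

low = monthly_water_gallons(50, 0.9, 0.26)
high = monthly_water_gallons(50, 0.9, 2.4)
print(f"{low / 1e6:.1f}M to {high / 1e6:.1f}M gallons per month")
```

Even the low end is millions of gallons a month, which is why siting in a water-stressed county is a community-relations problem before it is an engineering one.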

The silicon treadmill and the risk of stranded assets

Then there’s the silicon treadmill. NVIDIA’s Blackwell launch signals another large step function in throughput per rack and inference performance per watt—excellent for customers, hazardous for owners of newly minted but suddenly second-tier fleets. If your payback model assumed three peak years on H100/H200 economics, a dense Blackwell (or its successor) landing six quarters earlier than you modeled can turn your hot aisle into a cold storage problem. Infrastructure builders must plan for modularity, retrofit paths, and capital structures that tolerate rapid repricing of capacity. (NVIDIA Newsroom)
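The sensitivity to the silicon cycle can be made concrete. A hypothetical sketch: a fleet underwritten on 36 months of current-generation rental rates, with a 40% rate cut once a denser generation lands—on schedule versus six quarters early:

```python
# Toy payback model for a GPU fleet facing an early next-generation launch.
# All prices, dates, and the 40% rate cut are hypothetical.

def cumulative_revenue(monthly_rev: float, months_at_full_price: int,
                       price_cut: float, total_months: int) -> float:
    """Revenue over total_months; rates drop by price_cut once new silicon lands."""
    full = monthly_rev * min(months_at_full_price, total_months)
    discounted_months = max(total_months - months_at_full_price, 0)
    return full + monthly_rev * (1 - price_cut) * discounted_months

capex = 100e6
monthly = capex / 36                                   # underwritten: 36-month payback
on_plan = cumulative_revenue(monthly, 36, 0.40, 36)    # successor arrives as modeled
early = cumulative_revenue(monthly, 18, 0.40, 36)      # successor lands 6 quarters early
print(f"On plan: recover {on_plan / capex:.0%} of capex in 36 months")
print(f"Early launch: recover {early / capex:.0%}")
```

Under these assumptions the early launch leaves a fifth of the capex unrecovered at the modeled payback date—before accounting for interest, power, or staffing.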

If this sounds alarmist, look at how fast regional load forecasts are being rewritten. A single U.S. utility recently outlined a 50% increase in peak load by 2031—with AI-driven data centers as a pivotal driver—and proposed a corresponding surge in capex. The reflex is to cheer “build it all.” The sober read is: if a lot of players build a lot of it all at once, some fraction will be stranded by siting errors, delayed interconnects, or shifts in demand mix toward more efficient models and on-prem inference. (Reuters)

Efficiency will puncture the hype

Critically, none of this requires dunking on foundation-model labs. In fact, the labs’ steady progress is a source of risk for indiscriminate infrastructure: better compression, sparsity, quantization, distillation and retrieval strategies all reduce the steady-state compute needed per unit of business value. The more efficient the stack becomes end-to-end, the more the market will reward high-utilization, low-TCO capacity in the right places—and punish speculative capacity in the wrong ones. That is not a failure of AI; it is a ruthless re-pricing of infrastructure.

So where is the epicenter of a potential correction? Not “AI,” and not even “models.” It’s the build-anything-anywhere wave of GPU data centers and megawatt projects pushed by lightly experienced entrants who equate scarcity pricing with guaranteed returns. The contours look familiar from prior cycles: real estate developers pivoting to “AI campuses,” funds rolling crypto facilities into “AI compute,” and municipal pitches that treat a substation upgrade like a ribbon-cutting rather than a multi-year engineering program. If the last era’s error was too much fiber for too few packets, this era’s risk is too many accelerators for too few monetizable tokens—in the wrong zip codes, on the wrong timelines, financed the wrong way.

Enterprise buyers and builders can avoid becoming the story. Treat compute like the project-financed asset it is. Underwrite utilization, not nameplate. Secure interconnects and long-lead equipment early—and verify delivery windows with supplier CFOs, not just sales decks. Align PPAs with actual load shapes, not aspirational ones. Architect for modular upgrades so tomorrow’s silicon can slide into today’s shell. And above all, sign demand that can survive price compression as models and serving stacks get more efficient.
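“Underwrite utilization, not nameplate” is a one-line spreadsheet check. A minimal sketch, with a hypothetical 50-MW site and an illustrative price per delivered MWh:

```python
# Underwrite utilization, not nameplate: revenue at assumed vs. contracted load.
# Hypothetical 50 MW site; the $200/MWh rate is illustrative only.

def annual_revenue(nameplate_mw: float, utilization: float,
                   price_per_mwh: float) -> float:
    """Revenue from actually-utilized capacity over a year (8,760 hours)."""
    return nameplate_mw * utilization * 8760 * price_per_mwh

pitch_deck = annual_revenue(50, 1.00, 200)   # full nameplate, every hour
realistic = annual_revenue(50, 0.55, 200)    # ramped, contracted demand
print(f"Pitch deck: ${pitch_deck / 1e6:.0f}M/yr")
print(f"Underwritable: ${realistic / 1e6:.0f}M/yr")
```

The gap between the two lines is the difference between a financeable asset and a story.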

The normalization phase

For investors, sharpen the diligence:

- Who is the actual offtaker?
- What is the interconnect status?
- Which transformers are on order, from whom, and with what liquidated damages for delays?
- What is the retrofit plan when cooling envelopes tighten or rack density doubles?
- Where is water risk priced in?
- What is the exit if Blackwell-class gear resets pricing before payback?

The internet did not die in 2001; it became normal. The AI infrastructure boom will go the same way: today’s exuberance will be tomorrow’s baseline. But normalization always arrives with a bill. If there’s a bubble to deflate, expect the bill to be paid most painfully by those who confused a megawatt with a business.