The agentic revolution is forcing us to rethink everything we built.

I haven't clicked a button to deploy code in six months.

I used to. We all did. We built elaborate dashboards. We designed "intuitive" interfaces with rounded corners and satisfying hover states. We convinced ourselves that the pinnacle of software engineering was a user experience that guided a human hand to a specific pixel on a screen.

We were wrong.

The old world is collapsing. UX teams. UI frameworks. Backend services. Middleware. We spent decades building elaborate stacks to translate human intent into machine action. Layer upon layer of abstraction.

It is being replaced by something radically simpler. User+Machine. Direct. Unmediated. You tell the machine what you want. The machine figures out the "how."

This isn't just a design trend. It is a fundamental rewriting of how humans interact with computation. We are moving from explicit command (click this, type that, drag here) to declared intent. The interface, once our primary window into the digital world, is becoming a bottleneck.

This terrifies enterprise IT departments. It should.

When you remove the interface, you remove the guardrails. You remove the slow, deliberate friction that prevents a junior developer from deleting the production database. You are handing raw, unadulterated power to the user. Or rather, to the agent acting on the user's behalf.

I am a builder. I like power. I like speed. But I have also been the person waking up at 3 AM because an automated script decided to "optimize" a database by truncating the user table.

We are standing on a precipice. On one side is the old world of safe, clunky GUIs and rigid workflows. On the other is a world of pure semantic execution, where a single sentence can build an application or destroy a company.

We are going to jump. We don't have a choice.

The Orthodoxy

For the last twenty years, the software industry operated on a core belief.

The belief that the user needs to be "guided."

It served us well. It is no longer true.

We built entire disciplines around this. UX research. UI design. Customer journey mapping. The orthodoxy states that software is a tool, and like a hammer or a drill, it requires a human hand to operate it. The machine is passive. The human is active.

This philosophy produced the enterprise software stack that is now becoming obsolete.

Consider the Content Management System (CMS). In the orthodox view, a CMS is a fortress. It protects the content. It ensures that data is structured, tagged, and approved. It provides a comforting GUI where a marketing manager can paste text, crop images, and hit "Publish" with a sense of accomplishment.

This model relies on a specific friction. The friction is the point.

The user must log in. The user must navigate the menu. The user must find the field. The user must click Save. This friction serves as a verification step. It slows down the process enough for the human brain to catch errors. (Theoretically. In practice, people just click "Yes" on every modal without reading it.)

This orthodoxy extends to our development tools. We have GUIs for our cloud infrastructure. We have GUIs for our databases. We have GUIs for our CI/CD pipelines. We have wrapped layers of abstraction around the raw machinery of computing because we believe that direct access is too complex for the average user.

The industry consensus is clear. Users are liabilities. Interfaces are safety nets.

This view is supported by a mountain of literature. We are told that we need ethical principles for AI in UX that prioritize human control. We are told that the future is Human-AI collaboration, a gentle waltz where the AI suggests and the human approves.

It sounds lovely. It sounds safe.

It is also becoming obsolete.

The orthodoxy assumes that the "user" is a human with eyes and a mouse. But what happens when the user is a Large Language Model running a loop? What happens when the "user" can read 50,000 lines of code in a second and execute a thousand terminal commands in the time it takes you to find your mouse cursor?

The GUI becomes a cage.

The Cursor Migration: A Case Study

The cracks in the orthodoxy aren't just hairline fractures. They are gaping holes.

The most significant signal I've seen recently was the Cursor team's decision to rip out their CMS. Lee Robinson documented the migration in brutal detail.

Let's look at what happened. Cursor is an AI-first code editor. They were using Sanity, a perfectly respectable headless CMS. Nice UI. Good API. All the boxes checked.

And they deleted it.

They migrated their entire blog and documentation system to raw markdown files in a Git repository.

Why? Because their "user" had changed. They weren't writing blog posts by hand anymore. They were using AI agents to write, edit, and maintain content. For an AI agent, a CMS is not a helper. It is a hurdle.

The friction of authentication. The clunky preview workflows. The context window tokens burned on complex JSON structures when markdown would do. Every abstraction layer that made life easier for humans made life harder for agents. Robinson's team realized they had paid $56,848 in CDN costs since launch because the CMS vendor locked them into expensive asset delivery.

The agents exposed the bloat. The agents demanded simplicity.

Sanity, naturally, was not thrilled. They published a rebuttal titled "You should never build a CMS." Their argument was classic orthodoxy: structured content allows for queryability. APIs allow for the separation of concerns.

"Markdown files are less queryable than a proper content API."

They aren't wrong. If you are a human writing a SQL query, a CMS is better. But if you are an agent that can ingest a million tokens of context, "queryability" means something different. The agent doesn't need to query the database. The agent reads the database.
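That difference can be made concrete. To an agent, "querying" a git-backed content repo is just reading files; there is no auth flow, no API client, no pagination. A minimal sketch (the function name and layout are illustrative, not Cursor's actual code):

```python
from pathlib import Path

def load_context(root: str) -> str:
    """Concatenate every markdown file under `root` into one context string.

    The agent's "query" is a directory walk followed by a read —
    no API, no schema, no auth token.
    """
    parts = []
    for path in sorted(Path(root).rglob("*.md")):
        parts.append(f"<!-- {path.relative_to(root)} -->\n{path.read_text()}")
    return "\n\n".join(parts)
```

The entire "content API" collapses into a few lines, and the output drops straight into a model's context window.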

When Users Click "Allow Always"

This is a microcosm of what is happening everywhere.

We see it in the rise of the Gemini CLI. Developers are hooking AI directly into their terminal. They are bypassing the web console of AWS or Google Cloud. They are saying, "I trust the machine to execute the command."

But here is where the crack gets dangerous.

When you remove the interface, you remove the visual confirmation.

There was a terrifying incident involving the Gemini CLI and a user's home directory. The user asked the agent to create a project. It got stuck on npm packages. The user clicked "allow always."

The agent started deleting everything. Documents. Downloads. Desktop. Gone. Not in the trash. rm -rf doesn't use the trash.

This wasn't a prompt injection attack. This wasn't a sophisticated exploit. This was a user who clicked "yes" without understanding what they were authorizing.

In a GUI, you would have to navigate to the folder, select all, click delete, and confirm "Are you sure?"

In a command-line agent interface, the user clicked "allow always" and walked away. The agent did what agents do. It acted.

The orthodoxy says "add more guardrails." But the cracks show that users are bypassing the guardrails because they want the speed. They want the autonomous workflow.

We are seeing exposed MCP servers reveal new AI vulnerabilities. The Model Context Protocol (MCP) allows AIs to talk directly to databases. It is incredibly powerful. It is also a direct pipe from a probabilistic word generator to your production data.

The cracks are widening. The old UI paradigm cannot contain the new AI reality.

The Technical Reality

The truth is that we are no longer building tools for humans. We are building environments for intelligence.

We need to stop thinking about "User Interface" (UI) and start thinking about "Context Curation."

In the old world, the UI was the translation layer. I have an intent ("I want to update the blog"). I translate that intent into clicks (Login -> Dashboard -> Posts -> Edit -> Type -> Save).

In the new world, the translation layer is the model itself.

The "Machine-first" paradigm means that the system architecture must be optimized for inference, not interaction.

This is why Cursor chose markdown. Markdown is high-bandwidth for LLMs. A React-heavy dashboard is low-bandwidth for LLMs.

This leads us to a difficult realization for those of us who spent years mastering frontend frameworks.

The GUI is becoming a legacy artifact.

I suspect that in five years, the primary interface for most enterprise software will not be a React app. It will be a prompt bar (or a voice interface) backed by a robust set of tools that the AI can invoke.

This is the AI's new UI paradigm. It shifts the locus of control.

"Users now tell the computer what they want, not how to do it."

This sounds liberating. It is also a nightmare for verification.

When I write code, I can read it line by line. I understand the logic. When I ask an agent to "refactor this module to use the factory pattern," I am getting a black box output.

If I accept that output without understanding it, I am not a software engineer. I am a rubber stamp.

The deeper truth is that intent is lossy.

Human language is messy. "Fix the bug" could mean "patch the symptom" or "rewrite the architecture." A human colleague asks clarifying questions. An eager AI agent might just delete the feature that was causing the bug. Problem solved.

Implications for Engineers

What does this mean for us? The builders. The maintainers. The people who have to clean up the mess.

It means we need to learn a new set of skills. Fast.

1. Governance is Code

You cannot govern an AI agent with a policy document. The agent doesn't read the employee handbook.

You need governance as a core capability. This means implementing "governor patterns."

INPUT: "Delete all users who haven't logged in for a year."
AGENT_PLAN: "DROP TABLE users;"
GOVERNOR: INTERCEPT.
RULE_CHECK: "Destructive action on > 10 rows detected."
ACTION: BLOCK. Require Human Approval.

We need middleware that understands semantic intent, not just SQL syntax.
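The governor pattern above can be sketched in a few lines. Everything here is illustrative (the function name, the rule, the return format are my own, not from any real framework); a production governor would estimate affected rows and route blocked plans to a human approval queue:

```python
import re

# Naive rule: any statement containing a destructive keyword gets
# intercepted before it reaches the database.
DESTRUCTIVE = re.compile(r"\b(DROP|TRUNCATE|DELETE)\b", re.IGNORECASE)

def govern(sql: str) -> str:
    """Return 'ALLOW' or 'BLOCK: <reason>' for a proposed agent action."""
    if DESTRUCTIVE.search(sql):
        return "BLOCK: destructive statement requires human approval"
    return "ALLOW"

print(govern("DROP TABLE users;"))        # intercepted
print(govern("SELECT * FROM users"))      # passes through
```

A keyword regex is deliberately crude. The real problem, as above, is semantic: "delete inactive users" and "drop the users table" look nothing alike at the intent level, which is why the middleware needs to reason about the plan, not just pattern-match the SQL.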

2. Expertise is Non-Negotiable

There was a dream that AI would allow anyone to do anything. That a junior dev could be a senior dev. That a marketing manager could be a data scientist.

I believe the opposite is happening.

To wield a tool this powerful, you need to understand what it is doing. If you use an AI to generate SQL, and you don't know SQL, you are a danger to your organization.

We are not "democratizing" engineering. We are accelerating experts. The gap between a senior engineer using AI and a junior engineer using AI is getting wider, not smaller. The senior engineer knows when the AI is lying.

3. Observability is Everything

If the interface is dead, logs are the only truth we have left.

We need auditability for autonomous workflows. Every thought, every plan, every tool invocation by the agent must be recorded.

We need to build "black boxes" that are actually made of glass.
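What "glass" looks like in practice is a structured, append-only trace of every agent step. A minimal sketch, assuming nothing beyond the standard library (the event names are hypothetical):

```python
import json
import time

def audit(event_type: str, payload: dict, log: list) -> None:
    """Append a timestamped, structured record of one agent step."""
    log.append({"ts": time.time(), "type": event_type, "data": payload})

trace: list = []
audit("plan", {"goal": "update blog post"}, trace)
audit("tool_call", {"tool": "git", "args": ["commit", "-m", "edit"]}, trace)

# Every thought and invocation is replayable as JSON lines.
for record in trace:
    print(json.dumps(record))
```

The point is not the logging code, which is trivial. The point is the discipline: if a step isn't in the trace, the agent didn't take it.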

4. Build Control Planes, Not UIs

Enterprises will stop building UIs for tasks and start building UIs for orchestration.

The future of UX is not a chat box. It is a control plane. A dashboard where I can see my ten active agents, monitor their resource usage, check their error rates, and crucially, hit the "Kill Switch."
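The kill switch only works if agents are built to honor it. One common shape (this is a sketch of the pattern, not any particular product's API) is a shared control plane that every agent loop polls before each step:

```python
import threading

class ControlPlane:
    """Minimal sketch: agents poll a shared stop flag each iteration."""

    def __init__(self) -> None:
        self._stopped: set[str] = set()
        self._lock = threading.Lock()

    def kill(self, agent_id: str) -> None:
        with self._lock:
            self._stopped.add(agent_id)

    def is_killed(self, agent_id: str) -> bool:
        with self._lock:
            return agent_id in self._stopped

plane = ControlPlane()
steps = 0
for _ in range(100):
    if plane.is_killed("agent-1"):   # checked before every action
        break
    steps += 1
    if steps == 3:                   # operator hits the kill switch mid-run
        plane.kill("agent-1")
```

The design choice that matters is cooperative shutdown at action boundaries: the agent can never fire a tool call after the flag is set, because the check happens before, not after, each step.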

Conclusion

The GUI served us well. It democratized computing. It allowed my grandmother to use the internet.

But for the builders, the power users, and the enterprise architects, the GUI is becoming a shackle.

The migration of Cursor from Sanity to Markdown is not an anecdote. It is a prophecy. It is the sound of the interface breaking under the weight of intelligence.

We are moving to a world of declared intent. A world where you speak, and the machine acts.

This is exhilarating. I can build things in an afternoon that used to take a month. I can analyze data in seconds that used to take a week.

But let us not be naive.

We are handing the keys of the kingdom to a probabilistic number generator. We are bypassing the safety checks that kept us alive for twenty years.

The discomfort you feel? That "is this safe?" feeling in the pit of your stomach?

Good. Keep it.

That discomfort is the only thing standing between an autonomous agent and a catastrophic failure.

We don't need to fear the machine. But we must respect the weapon.


Originally published at tyingshoelaces.com/blog/stack-collapse