We are building the most advanced AI systems in history, yet the best way to control them relies on paradigms from the 1970s. I see developers wrestling with this friction daily, trying to bridge the gap between bleeding-edge models and legacy toolchains.

As of 2026, a simpler and clearer trend is emerging: the Command-Line Interface (CLI) is becoming the most practical foundation for building agents. Tools like OpenClaw and the Google Workspace CLI demonstrate this well. Handing an agent a raw shell and a file system is often faster and more reliable than wrapping it in a complex protocol. As Eric Holmes noted in “MCP is dead. Long live the CLI”, modern LLMs already excel at using standard command-line utilities. These tools are lightweight, trivial to debug, and compose naturally.

To understand why this approach works, we need to look at where agent protocols struggled, and why the path forward means returning to the Linux philosophy.

The Problem with MCP (Model Context Protocol)

Model Context Protocol (MCP) was one of the first major agent protocols to emerge. Heavily inspired by the Language Server Protocol that powers VS Code, its goal was noble: create a standardized way to expose all available resources and tools to an LLM.

In practice, however, MCP suffers from severe architectural flaws for everyday agentic workflows:

🛠️ MCP: Experience from the Field

A2A (Agent-to-Agent) Is Not a Tooling Protocol

For a long time, I believed Agent-to-Agent (A2A) communication would emerge as the compelling alternative to tool-binding protocols like MCP. While A2A remains the best way to orchestrate agents across different domains, it is not a protocol meant for low- to medium-level tool execution:

🛠️ A2A: Experience from the Field

Agent Skills Are Too Abstract

I previously thought Agent Skills were going to be the golden hammer — a concept I explored in MLOps Coding Skills: Bridging the Gap Between Specs and Agents. I was wrong.

🛠️ Agent Skills: Experience from the Field

Rediscovering the Linux Philosophy

If bloated protocols and rigid abstractions are slowing us down, what is the alternative? We need an approach to agent tooling that is dynamic, composable, and lightweight. Unsurprisingly, the industry is circling back to the CLI.

For developers, treating the CLI as the primary interface for agents has undeniable benefits.

What we are doing is rediscovering the Linux Philosophy:

  1. Everything is a file (or a text stream).
  2. Write small tools that do one thing well.
  3. Combine them to solve complex problems.
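To make the philosophy concrete, here is a minimal sketch (in Python, with hypothetical log content) of how an agent can solve a task by chaining three small, single-purpose utilities — `grep`, `sort`, and `uniq` — exactly as a shell pipeline would:

```python
import subprocess

# Hypothetical log text an agent might be asked to summarize.
log = (
    "INFO start\n"
    "ERROR disk full\n"
    "INFO ok\n"
    "ERROR disk full\n"
    "ERROR timeout\n"
)

# Equivalent of the shell pipeline: grep ERROR | sort | uniq -c
grep = subprocess.run(["grep", "ERROR"], input=log, capture_output=True, text=True)
sort = subprocess.run(["sort"], input=grep.stdout, capture_output=True, text=True)
uniq = subprocess.run(["uniq", "-c"], input=sort.stdout, capture_output=True, text=True)

# Each ERROR line, deduplicated and counted.
print(uniq.stdout)
```

Each stage is a text stream in and a text stream out, so the agent can inspect, log, or swap any step independently — no schema, no protocol handshake.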

🛠️ Experience from the Field

Side Note: I finally feel less guilty about the 1283+ commits on my dotfiles.

What’s Next? From Linux to Kubernetes for Agents

Right now, the raw CLI is the 80/20 solution for agent development: maximum leverage with minimum setup, covering 80% of what we need and making developers extremely productive locally. But local development is not the final destination.

While giving an agent CLI access is like giving it a personal UNIX terminal, security and scale are the ultimate blockers for the CLI in production.

You cannot simply hand an autonomous LLM unconstrained bash access to your AWS account or your production database. Real-world deployment requires a rigorous security layer: strict authentication, Role-Based Access Control (RBAC), sandboxing, and immutable audit logs. This is a highly complex problem that raw CLI execution cannot solve safely at an enterprise scale.
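To show the shape of that security layer, here is a minimal sketch of one of its simplest building blocks: a guarded executor that only runs allowlisted, read-only commands and records every invocation. The `ALLOWED` set and `run_guarded` function are hypothetical illustrations, not a production design — real deployments would add sandboxing, RBAC, and an immutable audit store.

```python
import os
import shlex
import subprocess
import tempfile

# Hypothetical policy: only these read-only programs are permitted.
ALLOWED = {"ls", "cat", "grep", "head"}

def run_guarded(command: str) -> str:
    """Run an agent-issued command only if its program is on the allowlist."""
    argv = shlex.split(command)  # no shell=True, so no shell metacharacters
    if not argv or argv[0] not in ALLOWED:
        raise PermissionError(f"command not permitted: {command!r}")
    print(f"AUDIT: {argv}")  # stand-in for an immutable audit log
    return subprocess.run(argv, capture_output=True, text=True, timeout=10).stdout

# Demo: the agent may read a file, but not delete it.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("hello agent\n")
    path = f.name

print(run_guarded(f"cat {path}"))  # allowed
try:
    run_guarded(f"rm -f {path}")   # blocked: rm is not on the allowlist
except PermissionError as err:
    print(err)
os.remove(path)
```

Note that the guard deliberately avoids `shell=True`: the command is split into an argument vector, so the agent cannot smuggle in pipes or redirections that bypass the allowlist.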

In traditional software, we didn’t abandon the Linux philosophy to build the cloud; we containerized and orchestrated it. We bridged the gap between a single bash instance and a globally distributed system.

We need the exact same evolution for AI. To bridge the gap between heavy, bloated agent protocols and the lightweight-but-insecure raw CLI, we need a new paradigm. We need something that adheres strictly to the composability of Linux, but is built for autonomous systems at scale.

My bet is that we don’t just need another protocol; we need an Operating System for Agents.

Just as the industry needed Kubernetes to safely orchestrate Linux containers across vast server networks, our agents will need an orchestration layer built specifically for AI workloads.

If the raw CLI is the Linux of agents, the next frontier is building the Kubernetes to run it.