By: Meredith Bradford
Photo Courtesy of Tethral
Your AI agent can turn on a light. It can check your calendar. It can set a timer. Each of those is a solved problem. But when you say "leaving in five minutes" and the house needs to arm security, step down lighting, adjust climate to away mode, kill the media, and queue a morning state for when you get back, no single agent call handles that. You need coordinated execution across five or six protocols simultaneously, and that is where the current stack falls short.
Where the Protocols Run Out
MCP gives agents structured access to APIs. A2A lets agents delegate to each other. OpenClaw is building open standards for translating agent skills into real-world activity. Each of these solves a real slice of the problem. But when the target is a Zigbee light, a Matter lock, a proprietary HVAC controller, and a calendar API that all need to respond as one coherent action from a single natural language intent, you need something that sits across all of them. The individual connections exist. The orchestration layer does not, or did not.
Tethral is building one. It sits between user intent and a device landscape that includes major smart home ecosystems, common IoT radios, and web-based services, interpreting natural language and decomposing it into coordinated actions across whatever protocols exist in a given environment.
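To make the "sits across all of them" idea concrete, here is a minimal sketch of what such a layer might look like: a single dispatch surface over incompatible protocols, with one adapter per protocol. All names here (ZigbeeAdapter, MatterAdapter, Orchestrator) are illustrative assumptions, not Tethral's actual API.

```python
# Hypothetical sketch of a cross-protocol orchestration surface.
# The orchestrator never speaks Zigbee or Matter directly; it only
# sees the DeviceAdapter interface.
from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class Action:
    device_id: str
    command: str          # e.g. "set_level", "lock", "set_mode"
    params: dict = field(default_factory=dict)


class DeviceAdapter(Protocol):
    """One adapter per protocol; each hides its own transport details."""
    def execute(self, action: Action) -> str: ...


class ZigbeeAdapter:
    def execute(self, action: Action) -> str:
        # Real code would speak the Zigbee radio here.
        return f"zigbee:{action.device_id}:{action.command}"


class MatterAdapter:
    def execute(self, action: Action) -> str:
        # Real code would use a Matter controller here.
        return f"matter:{action.device_id}:{action.command}"


class Orchestrator:
    """Routes each action to the adapter that owns its device."""
    def __init__(self, routes: dict[str, DeviceAdapter]):
        self.routes = routes

    def run(self, actions: list[Action]) -> list[str]:
        return [self.routes[a.device_id].execute(a) for a in actions]


orch = Orchestrator({
    "lamp": ZigbeeAdapter(),
    "lock": MatterAdapter(),
})
results = orch.run([
    Action("lamp", "set_level", {"level": 20}),
    Action("lock", "lock"),
])
print(results)  # ['zigbee:lamp:set_level', 'matter:lock:lock']
```

The point of the interface is that adding a fifth or sixth protocol means adding one adapter, not touching the intent layer.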
How It Actually Works
Execution is local-first. Automation logic runs on the home network rather than round-tripping through the cloud, which matters when coordination needs to happen in milliseconds across protocols with different latency profiles. Cloud round-trips add enough variability that tightly timed sequences break down. Running locally keeps the timing deterministic.
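The timing argument can be illustrated with a toy simulation: dispatching to protocols with different latency profiles concurrently bounds the whole sequence by the slowest hop rather than the sum of all hops. The latency numbers below are invented for illustration.

```python
# Toy model of concurrent dispatch across protocols with different
# latency profiles. asyncio.sleep stands in for each round-trip.
import asyncio
import time


async def send(protocol: str, latency_s: float) -> str:
    await asyncio.sleep(latency_s)  # simulated protocol round-trip
    return f"{protocol}:ok"


async def main() -> tuple[list[str], float]:
    hops = [("zigbee", 0.02), ("matter", 0.05), ("http", 0.03)]
    start = time.monotonic()
    # gather() runs all sends concurrently and preserves input order
    results = await asyncio.gather(*(send(p, s) for p, s in hops))
    return list(results), time.monotonic() - start


results, elapsed = asyncio.run(main())
print(results, round(elapsed, 3))
```

Sequential execution would take roughly the sum (0.10 s here); concurrent execution takes roughly the maximum (0.05 s). A cloud hop adds tens to hundreds of milliseconds of jitter on top of every one of these, which is what breaks tightly timed sequences.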
Intent decomposition is the interesting architectural choice. A command like "hosting tonight" is not a single API call. It is a coordination graph: lighting scenes across rooms, climate adjustments, audio routing, notification suppression, possibly doorbell behavior changes. That graph is not predefined. It gets generated at runtime based on available devices, user history, and current environment state. Tethral does not run static automations. It composes them on the fly, which is a different architecture from scene-based systems like Home Assistant automations or IFTTT chains. Those are preconfigured sequences. This is runtime composition.
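A hedged sketch of the distinction: instead of reading a stored scene, the step graph is derived at runtime from whatever devices are actually present, then executed in dependency order. The device names, the "hosting" rule set, and the dependency between steps are all invented for illustration; this is a sketch of the pattern, not Tethral's implementation.

```python
# Runtime graph composition: build a dependency graph (step -> its
# prerequisites) from the current device inventory, then order it
# topologically. Contrast with a static scene, which is a fixed list.
from graphlib import TopologicalSorter


def compose(intent: str, devices: set[str]) -> dict[str, set[str]]:
    """Generate a coordination graph for this intent and this inventory."""
    graph: dict[str, set[str]] = {}
    if intent == "hosting":
        if "lights" in devices:
            graph["lighting_scene"] = set()
        if "thermostat" in devices:
            graph["climate_adjust"] = set()
        if "speakers" in devices:
            # audio waits for lighting so the transition lands as one action
            graph["audio_route"] = {"lighting_scene"} & graph.keys()
        if "phone" in devices:
            graph["suppress_notifications"] = set()
    return graph


# A home with no thermostat: the climate step simply never appears.
graph = compose("hosting", {"lights", "speakers", "phone"})
order = list(TopologicalSorter(graph).static_order())
print(order)
```

Run the same intent against a different inventory and you get a different graph, which is the practical difference from a preconfigured scene: the scene fails or no-ops on missing devices, while the composed graph never contains them.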
Who Built This
John Lunsford, Tethral's founder, came at this from the operations side before the research side. He worked as a security engineer with the Department of Justice, then moved into a senior AI and safety research role at a major technology company where he shipped consumer products and co-led the enterprise design partnership with OpenAI. He holds a PhD from Cornell with fellowships at MIT and Oxford, focused on autonomous system-to-society adoption. Beyond the platform itself, he designed Tethral's own transformer architecture and coordination protocol built specifically for multi-agent, multi-device orchestration, a new control plane rather than an adaptation of existing ones. He also writes about the orchestration problem in more depth in his own published work.
His take is that the home is a convenient test environment but the architecture is not home-specific. Orchestrating intent across five incompatible IoT protocols with local-first execution and runtime graph composition is a general coordination problem. The home is just where you can build it with real users and real devices.
Tethral has a working product and a partnership with the Connectivity Standards Alliance. Early stage, actively building. If you work on the boundary between AI reasoning and physical execution, it is worth a look.
This story was distributed as a release by Jon Stojan under HackerNoon’s Business Blogging Program.