There’s a growing realization across the AI ecosystem that models are no longer the bottleneck. The real constraint is everything around them: data access, governance, orchestration, and the ability to turn outputs into actions. That’s where most enterprise AI projects stall, not because the model isn’t capable, but because it isn’t meaningfully connected to the business.

Domo’s latest set of releases tackles that problem head-on. Beneath the surface, what looks like an AI Agent Builder announcement is actually a much deeper architectural shift. Domo is positioning itself as the coordination layer between AI models and enterprise systems, with a focus on making agents usable, governed, and deployable at scale.

At the center of that shift is a tightly integrated stack: AI Agent Builder, AI Toolkits, a centralized AI Library, and the Domo MCP Server. Together, they form something closer to an agentic runtime than a feature set.

From prompts to programmable agents

Most teams experimenting with AI today are still operating in a prompt-centric world. Even with advanced tooling, the core interaction model is still a user asking a question and receiving a response. That model breaks down quickly when you try to operationalize it across a business.

Domo’s approach is to move away from prompts as the primary interface and toward agents as programmable units of work. The AI Agent Builder allows users to define agents not just by what they say, but by what they can do. That distinction is critical because agents are configured with access to governed datasets, predefined workflows, and specific operational permissions. They are not just generating text; they are executing tasks within a controlled environment.

This is where AI Toolkits become the underlying abstraction layer. Toolkits package together capabilities such as data access, transformation logic, API calls, and workflow triggers into reusable components. Instead of rebuilding logic for every use case, teams can define a toolkit once and assign it to multiple agents.

The result is a composable system where agents inherit both capability and context. A financial analysis agent, for example, is not just trained on financial concepts. It is connected to live datasets, governed by access controls, and equipped with the specific tools required to query, calculate, and act. That packaging of logic starts to resemble software engineering patterns more than traditional analytics workflows.
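The composition pattern described above can be sketched in a few lines. To be clear, Domo configures agents through its platform rather than a public API like this; every name below (`Toolkit`, `Agent`, the `variance` tool) is hypothetical and only illustrates how a toolkit defined once can be assigned to multiple agents, each with its own datasets and permissions.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Toolkit:
    """A reusable bundle of capabilities: data access, logic, and actions."""
    name: str
    tools: dict[str, Callable] = field(default_factory=dict)

    def register(self, tool_name: str, fn: Callable) -> None:
        self.tools[tool_name] = fn

@dataclass
class Agent:
    """An agent inherits capability (tools) and context (data, permissions)."""
    name: str
    toolkits: list[Toolkit]
    datasets: list[str]      # governed datasets the agent may query
    permissions: set[str]    # operational permissions, e.g. {"read"}

    def available_tools(self) -> dict[str, Callable]:
        merged: dict[str, Callable] = {}
        for tk in self.toolkits:
            merged.update(tk.tools)
        return merged

# Define the toolkit once...
finance_tk = Toolkit("finance")
finance_tk.register("variance", lambda actual, budget: actual - budget)

# ...and assign it to multiple agents with different context.
analyst = Agent("fp&a-analyst", [finance_tk], ["revenue_actuals"], {"read"})
closer = Agent("month-end-closer", [finance_tk], ["gl_entries"],
               {"read", "trigger_workflow"})

print(analyst.available_tools()["variance"](120, 100))  # -> 20
```

The point of the sketch is the separation of concerns: capability lives in the toolkit, while context (datasets, permissions) lives on each agent.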

The AI Library as a control plane

One of the more underappreciated pieces of the stack is the AI Library. While it may sound like a simple repository, it functions more like a control plane for agentic systems.

The Library centralizes how agents, toolkits, and AI-driven workflows are managed, versioned, and deployed. This becomes increasingly important as organizations move beyond isolated use cases and start operating multiple agents across departments.

Without a central management layer, agent sprawl becomes inevitable. Different teams build slightly different versions of the same logic, governance becomes inconsistent, and debugging becomes nearly impossible. Domo’s approach is to treat agents as managed assets within a governed system, allowing organizations to curate, reuse, and evolve AI capabilities in a structured way rather than starting from scratch each time.

This mirrors how modern software platforms handle services and APIs and suggests that AI development is beginning to follow similar patterns.

The MCP Server: connecting external models to enterprise systems

The most technically significant component of the announcement is the Domo MCP Server, which implements the Model Context Protocol (MCP) as a bridge between external AI models and internal enterprise systems.

This solves a problem that has been quietly limiting the usefulness of AI in production environments. Most large language models operate in isolation from the systems that actually matter, including data warehouses, operational tools, and business workflows. Integrating them typically requires custom connectors, duplicated logic, and careful handling of security risks.

Through the MCP Server, external models can securely query datasets, trigger workflows, create dashboards, and interact with applications inside Domo without bypassing governance controls. This effectively turns Domo into an execution layer that sits between AI models and enterprise infrastructure.
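MCP is built on JSON-RPC 2.0, so at the wire level a model-side client invokes server capabilities with `tools/call` requests like the one constructed below. The message shape follows the MCP specification; the tool name `query_dataset` and its arguments are hypothetical stand-ins, since the actual tools are whatever the Domo MCP Server chooses to expose.

```python
import json

def tools_call_request(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request as defined by MCP."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# e.g. a model asking the server to run a governed dataset query
req = tools_call_request(1, "query_dataset",
                         {"dataset": "revenue_actuals", "limit": 10})
print(req)
```

Because every capability is exposed through this one protocol, swapping the model on the other end does not change how the enterprise side is invoked, which is what makes the multi-model strategy practical.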

The implications are significant. Instead of choosing a single AI vendor and building around it, organizations can adopt a multi-model strategy while keeping their data and workflows centralized. The model becomes interchangeable, while the system that connects it to the business remains consistent.

Governance built in, not bolted on

One of the consistent failure modes in enterprise AI is treating governance as an afterthought. Access controls, data policies, and auditability are often layered on after systems are already in use, which introduces risk and limits adoption.

Domo’s platform takes a different approach by embedding governance directly into the data and agent lifecycle. Updates across the platform reinforce this shift, giving administrators more precise control over how data is accessed, delivered, and experienced across the organization.

This matters for agentic systems because agents inherit the permissions of the data and tools they are connected to. If governance is inconsistent, agents become a liability. If governance is embedded, agents can operate safely within defined boundaries. The result is a system where execution does not come at the expense of control.
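The inheritance rule above can be stated precisely: an agent's effective access is the intersection of what the agent is granted and what the resource's policy allows, so an agent can never exceed the governance of the data it touches. This is a minimal sketch of that rule, with illustrative permission names rather than Domo's actual policy model.

```python
def effective_permissions(agent_grants: set[str],
                          resource_policy: set[str]) -> set[str]:
    """An agent's access is capped by the policy of the resource it uses."""
    return agent_grants & resource_policy

agent_grants = {"read", "write", "trigger_workflow"}
dataset_policy = {"read"}  # the governed dataset only permits reads

print(effective_permissions(agent_grants, dataset_policy))  # -> {'read'}
```

Under this rule, tightening a dataset's policy immediately constrains every agent connected to it, which is why embedded governance scales where bolted-on governance does not.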

Preparing data for agentic workflows

Agentic systems are only as reliable as the data they operate on, which makes data integration and preparation a critical part of the stack.

Domo’s updates to Magic ETL and its broader integration layer focus on reducing friction in how data is connected, transformed, and maintained. AI-guided connectivity simplifies how teams integrate new data sources, while enhancements to observability and pipeline management improve reliability over time.

The introduction of support for unstructured data expands the scope of what agents can access. Documents can be ingested, processed, and made searchable alongside structured datasets, enabling use cases that combine traditional analytics with document-based context.

This is an important step because many real-world workflows depend on both structured and unstructured data. Bringing those together within a governed environment increases the range of tasks agents can perform.
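A toy example makes the combination concrete. Nothing below is Domo's API: the naive keyword search stands in for real document indexing, and the data is invented. The point is that one question can require both a governed metric and the documents that explain it.

```python
# Structured: a metric from a governed dataset.
revenue = {"2024-Q4": 1_200_000}

# Unstructured: documents ingested and made searchable.
documents = {
    "board-memo.txt": "Q4 revenue beat plan on strong renewals.",
    "ops-review.txt": "Churn improved; support backlog grew.",
}

def search(term: str) -> list[str]:
    """Naive keyword match standing in for real document search."""
    return [name for name, text in documents.items()
            if term.lower() in text.lower()]

# An agent answering "how did Q4 revenue do, and why?" needs both halves.
answer = {"metric": revenue["2024-Q4"], "context": search("revenue")}
print(answer)
```

The number alone says what happened; the documents say why, and an agent with access to only one half gives an incomplete answer.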

Taken together, these updates suggest that Domo is evolving beyond its origins as a business intelligence platform.

The combination of agent tooling, a centralized management layer, governed data pipelines, and MCP-based integrations creates a system where AI is not just layered on top of the business, but embedded within it. Instead of building isolated features, Domo is building an environment where agents, data, and workflows are coordinated in one place.

That shift is what turns AI from something interesting into something operational. And if this model holds, the companies that win in AI won’t just have better models. They’ll have better systems for putting those models to work.