Search systems have historically been optimized for retrieval: given a query, return the most relevant documents. That model breaks down the moment user intent shifts from finding information to solving problems.

Consider a query like:

“How will tomorrow’s weather in Seattle affect flight prices to JFK?”

This isn’t a search problem. It’s a reasoning problem — one that requires decomposition, orchestration across multiple systems, and synthesis into a coherent answer.

This is where agentic search comes in.

In this article, I’ll walk through how we designed and productionized an agentic search framework in Go — not as a demo, but as a real system operating under production constraints like latency, cost, concurrency, and failure modes.

Keyword and vector search systems excel at matching queries to documents. What they don’t handle well is:

Agentic search treats the LLM not as a text generator, but as a planner — a component that decides what actions to take to answer a question.

At a high level, an agentic system must be able to:

  1. Understand user intent
  2. Decide which tools to call
  3. Execute those tools safely
  4. Iterate when necessary
  5. Synthesize a final response

The hard part isn’t wiring an LLM to tools. The hard part is doing this predictably and economically in production.

High-Level Architecture

We structured the system around three core concerns:

Here’s the end-to-end flow:

Each stage is deliberately isolated. Reasoning does not leak into execution, and execution does not influence planning decisions directly.

Flow Orchestrator: The Control Plane

The Flow Orchestrator manages the full lifecycle of a request. Its responsibilities include:

Instead of a linear pipeline, the orchestrator supports parallel execution using Go’s goroutines. This becomes essential once multiple independent tools are involved.
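
To make the shape of this control plane concrete, here is a minimal sketch with hypothetical Planner, Executor, and Responder interfaces. The real component names, signatures, and error handling differ; the point is the isolation between stages and the single request-scoped deadline.

package orchestrator

import (
  "context"
  "time"
)

// Planner, Executor, and Responder are illustrative; the production
// components carry more state (tracing, caching hooks, request metadata).
type Planner interface {
  Plan(ctx context.Context, query string) ([]ToolCall, error)
}

type Executor interface {
  Execute(ctx context.Context, calls []ToolCall) ([]ToolResult, error)
}

type Responder interface {
  Respond(ctx context.Context, query string, results []ToolResult) (string, error)
}

type ToolCall struct {
  Name string
  Args map[string]any
}

type ToolResult struct {
  Name   string
  Output any
  Err    error
}

type Orchestrator struct {
  planner   Planner
  executor  Executor // fans independent tool calls out to goroutines internally
  responder Responder
  timeout   time.Duration
}

// Handle runs one request through plan -> execute -> respond under a single
// request-scoped deadline. Planning never sees execution internals, and
// execution never rewrites the plan.
func (o *Orchestrator) Handle(ctx context.Context, query string) (string, error) {
  ctx, cancel := context.WithTimeout(ctx, o.timeout)
  defer cancel()

  plan, err := o.planner.Plan(ctx, query)
  if err != nil {
    return "", err
  }
  results, err := o.executor.Execute(ctx, plan)
  if err != nil {
    return "", err
  }
  return o.responder.Respond(ctx, query, results)
}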

Query Planner: Mandatory First Pass, Conditional Iteration

The Query Planner is always invoked at least once.

First Planner Call (Always)

On the first invocation, the planner:

Even trivial queries go through this step to maintain uniform behavior and observability.
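
What the first pass produces is a structured tool plan, not free-form text. The exact schema is internal; a hypothetical shape looks roughly like this, with illustrative field names:

package planner

import "encoding/json"

// PlannerOutput is a hypothetical shape for the first planner pass; the
// production schema carries more metadata.
type PlannerOutput struct {
  Intent    string     `json:"intent"`     // normalized user intent
  ToolCalls []ToolCall `json:"tool_calls"` // tools the planner wants executed
  Rationale string     `json:"rationale"`  // short trace kept for observability
}

type ToolCall struct {
  Name string          `json:"name"`
  Args json.RawMessage `json:"args"` // validated later by the Tool Registry
}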

Lightweight Classifier Gate

Before invoking the planner a second time, we run a lightweight classifier model to determine whether the query is:

This classifier is intentionally cheap and fast.

Second Planner Call (Only for Multi-Step Queries)

If the query is classified as multi-step:

This prevents uncontrolled planner loops — one of the most common failure modes in agentic systems.
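
A sketch of the gate and the bounded re-planning loop, with hypothetical plannerClient and classifier interfaces. The Refine call stands in for the second planner invocation, and the budget value is illustrative.

package planner

import "context"

// Plan is whatever structured output the planner returns; the gating logic
// below does not need to inspect it.
type Plan any

// QueryClass is the label produced by the lightweight classifier.
type QueryClass int

const (
  SingleStep QueryClass = iota
  MultiStep
)

// replanBudget caps planner invocations per request; the value is illustrative.
const replanBudget = 1

type plannerClient interface {
  Plan(ctx context.Context, query string) (Plan, error)
  Refine(ctx context.Context, query string, prev Plan) (Plan, error)
}

type classifier interface {
  Classify(ctx context.Context, query string) (QueryClass, error)
}

// buildPlan runs the mandatory first planner pass, gates on the cheap
// classifier, and only then allows a bounded number of refinement passes.
func buildPlan(ctx context.Context, p plannerClient, c classifier, query string) (Plan, error) {
  plan, err := p.Plan(ctx, query)
  if err != nil {
    return nil, err
  }
  class, err := c.Classify(ctx, query)
  if err != nil || class == SingleStep {
    // If classification fails, fall back to the single-step path rather
    // than pay for another planner call.
    return plan, nil
  }
  for i := 0; i < replanBudget; i++ {
    refined, err := p.Refine(ctx, query, plan)
    if err != nil {
      break // keep the last good plan
    }
    plan = refined
  }
  return plan, nil
}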

Tool Registry: Where Reasoning Meets Reality

Every tool implements a strict Go interface:

// ToolInterface is the interface each tool implements. It uses generics so
// tool inputs and outputs are strongly typed.
type ToolInterface[Input any, Output any] interface {
  // Execute initiates the execution of a tool.
  //
  // Parameters:
  // - input: strongly typed tool request input.
  //
  // Returns:
  // - output: strongly typed tool response output.
  // - toolContext: additional output data that is not used by the agent model.
  // - err: structured error from the tool; in some cases the error is passed
  //   to the LLM (e.g. no_response from the tool).
  Execute(ctx context.Context, requestContext *RequestContext, input Input) (output Output, toolContext ToolResponseContext, err error)

  // GetDefinition returns the tool definition sent to the Large Language Model.
  GetDefinition() ToolDefinition
}

This design gives us:

The Tool Registry acts as a trust boundary. Planner outputs are treated as intent — not instructions.
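
To make the interface concrete, here is a hypothetical weather tool. The framework types (RequestContext, ToolResponseContext, ToolDefinition) are stubbed with placeholders so the sketch stands alone; the real definitions carry more fields, and the tool name and schema are illustrative.

package tools

import (
  "context"
  "fmt"
)

// Placeholder declarations so this sketch stands alone; in the real system
// RequestContext, ToolResponseContext, and ToolDefinition come from the
// framework and carry more fields than shown here.
type (
  RequestContext      struct{}
  ToolResponseContext struct{ Debug map[string]any }
  ToolDefinition      struct{ Name, Description string }
)

// WeatherInput and WeatherOutput are hypothetical tool-specific types.
type WeatherInput struct {
  City string `json:"city"`
  Date string `json:"date"` // ISO 8601 date, e.g. "2025-01-15"
}

type WeatherOutput struct {
  ForecastSummary string `json:"forecast_summary"`
}

// WeatherTool is meant to satisfy ToolInterface[WeatherInput, WeatherOutput].
type WeatherTool struct{}

func (t *WeatherTool) Execute(
  ctx context.Context,
  requestContext *RequestContext,
  input WeatherInput,
) (WeatherOutput, ToolResponseContext, error) {
  if input.City == "" {
    // Structured errors can be surfaced back to the LLM, e.g. a
    // no_response condition from the underlying backend.
    return WeatherOutput{}, ToolResponseContext{}, fmt.Errorf("weather: city is required")
  }
  // A real tool would call a forecast backend here.
  out := WeatherOutput{
    ForecastSummary: fmt.Sprintf("Forecast for %s on %s", input.City, input.Date),
  }
  return out, ToolResponseContext{Debug: map[string]any{"source": "stub"}}, nil
}

func (t *WeatherTool) GetDefinition() ToolDefinition {
  // This is what the planner LLM sees when deciding whether to call the tool.
  return ToolDefinition{
    Name:        "get_weather_forecast",
    Description: "Returns the weather forecast for a city on a given date.",
  }
}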

Parallel Tool Execution

Planner-generated tool calls are executed concurrently whenever possible.

Go’s concurrency model makes this practical:

This is one of the reasons Go scales better than Python when agentic systems move beyond prototypes.
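
A sketch of the fan-out, using errgroup as one way to join the goroutines. The ToolCall and ToolResult shapes are illustrative, and the dispatch function stands in for the Tool Registry.

package executor

import (
  "context"

  "golang.org/x/sync/errgroup"
)

type ToolCall struct {
  Name string
  Args map[string]any
}

type ToolResult struct {
  Name   string
  Output any
  Err    error
}

// runTool is a stand-in for dispatching a call through the Tool Registry.
type runTool func(ctx context.Context, call ToolCall) (any, error)

// executeAll fans independent tool calls out to goroutines and collects the
// results in their original order.
func executeAll(ctx context.Context, calls []ToolCall, run runTool) []ToolResult {
  results := make([]ToolResult, len(calls))
  g, ctx := errgroup.WithContext(ctx)

  for i, call := range calls {
    i, call := i, call // capture loop variables (pre Go 1.22)
    g.Go(func() error {
      out, err := run(ctx, call)
      results[i] = ToolResult{Name: call.Name, Output: out, Err: err}
      return nil // keep sibling tools running even if this one failed
    })
  }
  _ = g.Wait() // the closures never return errors; Wait just joins the goroutines
  return results
}

In this sketch, a per-tool failure is recorded as data rather than cancelling its siblings, which matches treating some tool errors as something the LLM can reason about during synthesis.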

Response Generation and Streaming

Once tools complete, responses flow into the Response Generator.

Responses are streamed via Server-Sent Events (SSE) so users see partial results early, improving perceived latency.
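
A minimal SSE handler sketch using only the standard library; the chunks channel stands in for the response generator's token stream, and the event names are illustrative.

package responder

import (
  "fmt"
  "net/http"
)

// streamSSE writes chunks from the response generator to the client as
// Server-Sent Events, flushing after each event so partial results appear
// as soon as they are available.
func streamSSE(w http.ResponseWriter, r *http.Request, chunks <-chan string) {
  flusher, ok := w.(http.Flusher)
  if !ok {
    http.Error(w, "streaming unsupported", http.StatusInternalServerError)
    return
  }

  w.Header().Set("Content-Type", "text/event-stream")
  w.Header().Set("Cache-Control", "no-cache")
  w.Header().Set("Connection", "keep-alive")

  for {
    select {
    case <-r.Context().Done():
      return // client went away; stop streaming
    case chunk, open := <-chunks:
      if !open {
        fmt.Fprint(w, "event: done\ndata: {}\n\n")
        flusher.Flush()
        return
      }
      fmt.Fprintf(w, "data: %s\n\n", chunk)
      flusher.Flush()
    }
  }
}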

Caching Strategy: Making Agentic Search Economical

One production reality became clear almost immediately:

LLM calls have real cost — in both latency and dollars.

Once we began serving beta traffic, caching became mandatory.

Our guiding principle was simple:

Avoid LLM calls whenever possible.

Layer 1: Semantic Cache (Full Response)

We first check a semantic cache keyed on the user query.

This delivers the biggest latency and cost win.
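
A sketch of the lookup, assuming a hypothetical Embedder and VectorStore and an illustrative similarity threshold; the production implementation and tuning differ.

package cache

import "context"

// Embedder and VectorStore are hypothetical interfaces standing in for the
// embedding model and the nearest-neighbor index used in production.
type Embedder interface {
  Embed(ctx context.Context, text string) ([]float32, error)
}

type VectorStore interface {
  // Nearest returns the closest cached entry and its cosine similarity.
  Nearest(ctx context.Context, vec []float32) (response string, similarity float32, err error)
}

// similarityThreshold is illustrative: too low and users get answers to the
// wrong question, too high and the cache never hits.
const similarityThreshold = 0.92

// lookupSemantic returns a cached full response when a sufficiently similar
// query has already been answered.
func lookupSemantic(ctx context.Context, e Embedder, s VectorStore, query string) (string, bool) {
  vec, err := e.Embed(ctx, query)
  if err != nil {
    return "", false // the cache is best-effort: fall through to the pipeline
  }
  resp, sim, err := s.Nearest(ctx, vec)
  if err != nil || sim < similarityThreshold {
    return "", false
  }
  return resp, true
}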

Layer 2: Planner Response Cache

If the semantic cache misses, we check whether the planner output (tool plan) is cached.

Planner calls are among the most expensive and variable operations — caching them stabilizes both latency and cost.

Layer 3: Summarizer Cache

Finally, we cache summarizer outputs.

Each cache layer short-circuits a different part of the pipeline.
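
Putting the three layers together, here is a sketch of how each lookup short-circuits its stage. The cache keys, stage boundaries, and names are illustrative, not the production wiring.

package cache

import "context"

// Caches stands in for the three cache layers; each one short-circuits a
// different stage of the pipeline.
type Caches interface {
  FullResponse(ctx context.Context, query string) (string, bool)      // layer 1: semantic cache
  PlannerOutput(ctx context.Context, query string) ([]ToolCall, bool) // layer 2: tool plan
  Summary(ctx context.Context, resultsKey string) (string, bool)      // layer 3: summarizer output
}

type ToolCall struct{ Name string }

type Pipeline struct {
  caches    Caches
  plan      func(ctx context.Context, query string) ([]ToolCall, error)
  execute   func(ctx context.Context, plan []ToolCall) (resultsKey string, err error)
  summarize func(ctx context.Context, query, resultsKey string) (string, error)
}

// Answer consults each cache layer before paying for the corresponding LLM call.
func (p *Pipeline) Answer(ctx context.Context, query string) (string, error) {
  if resp, ok := p.caches.FullResponse(ctx, query); ok {
    return resp, nil // no LLM calls at all
  }

  plan, ok := p.caches.PlannerOutput(ctx, query)
  if !ok {
    var err error
    if plan, err = p.plan(ctx, query); err != nil {
      return "", err
    }
  }

  resultsKey, err := p.execute(ctx, plan)
  if err != nil {
    return "", err
  }

  if summary, ok := p.caches.Summary(ctx, resultsKey); ok {
    return summary, nil // skip the summarizer LLM call
  }
  return p.summarize(ctx, query, resultsKey)
}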

Lessons from Production

A few hard-earned lessons: