Every system changes once it reaches a certain scale. Traffic grows unevenly, assumptions stop holding, and design decisions that once felt minor begin to shape everything that follows.

This article traces the engineering career of Sai Sreenivas Kodur, from building large-scale search and recommendation systems in e-commerce to leading enterprise AI platforms and domain-specific data products.

Along the way, it looks at how working at scale shifts an engineer’s focus from individual components to platform foundations, data workflows, and team structures, especially as AI changes how software is built.

Early Foundations in Systems and Machine Learning

Sai Sreenivas Kodur completed both his bachelor’s and master’s degrees in Computer Science and Engineering at the Indian Institute of Technology, Madras.

During his undergraduate and graduate studies, he focused on compilers and machine learning. His research explored how machine learning techniques could be applied to improve software performance across heterogeneous hardware environments.

This work required thinking across layers. Performance was treated as a system-level outcome shaped by algorithms, execution models, and hardware constraints working together. Small implementation choices often produced large downstream effects.

The academic environment emphasized rigorous reasoning and first-principles thinking. By the end of graduate school, the most durable outcome of this training was not familiarity with specific tools, but the ability to learn new systems deeply and adapt to changing technical contexts.

Search and Recommendation Systems at Scale

Sai’s early industry roles involved building and leading search and recommendation systems at large Indian e-commerce platforms, including Myntra and Zomato.

These systems supported indexing, retrieval, and ranking across catalogs of more than one million frequently changing items. They handled approximately 300,000 requests per minute.

At this scale, system behavior reflected multiple competing constraints. Index freshness had to be balanced against latency requirements. Ranking quality depended on data pipelines, infrastructure reliability, and model behavior operating together.
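The tension between freshness and latency can be made concrete with a small sketch. The Python outline below is an illustration only, not the actual Myntra or Zomato implementation; the index, ranker, field names, and thresholds are assumptions introduced here to show how a two-stage retrieve-then-rank pipeline balances a periodic index refresh against a per-request latency budget.

    import time
    from dataclasses import dataclass, field

    # Illustrative two-stage search pipeline: retrieve candidates from an index
    # that is rebuilt periodically, then rank them within a latency budget.
    # All names and numbers here are hypothetical; this is a sketch, not a
    # description of any production system.

    @dataclass
    class CatalogIndex:
        refresh_interval_s: float = 300.0           # how often the index is rebuilt
        _items: dict = field(default_factory=dict)  # item_id -> document
        _last_refresh: float = 0.0

        def refresh(self, catalog: dict) -> None:
            """Rebuild the index from the live catalog (the freshness cost is paid here)."""
            self._items = dict(catalog)
            self._last_refresh = time.time()

        def is_stale(self) -> bool:
            return time.time() - self._last_refresh > self.refresh_interval_s

        def retrieve(self, query: str, limit: int = 200) -> list:
            """Cheap lexical match to produce a bounded candidate set."""
            matches = [doc for doc in self._items.values()
                       if query.lower() in doc.get("title", "").lower()]
            return matches[:limit]

    def rank(candidates: list, latency_budget_ms: float = 50.0) -> list:
        """Score candidates until the latency budget is spent; return what fits."""
        start = time.perf_counter()
        scored = []
        for doc in candidates:
            if (time.perf_counter() - start) * 1000 > latency_budget_ms:
                break  # degrade gracefully instead of blowing the budget
            scored.append((doc.get("popularity", 0.0), doc))
        return [doc for _, doc in sorted(scored, key=lambda pair: pair[0], reverse=True)]

In a sketch like this, shortening refresh_interval_s improves index freshness but raises indexing load, while shrinking latency_budget_ms protects tail latency but can truncate ranking quality; neither knob can be tuned in isolation.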

Many issues surfaced only after deployment. Design decisions that appeared correct in isolation behaved differently once exposed to real traffic patterns, delayed signals, and uneven load distribution.

This work reinforced the importance of aligning technical design with product usage patterns. Improvements in relevance or performance required coordination across distributed systems, data ingestion, and application behavior rather than isolated changes to individual components.

Startup Environments and Broader Engineering Exposure

Early in his career, Sai chose to work primarily in startup environments.

These roles offered exposure to a wide range of engineering responsibilities, including system design, production operations, and close collaboration with product and business teams. Technical decisions were closely tied to customer requirements and operational constraints.

In these settings, the effects of architectural choices surfaced quickly. Systems with weak foundations required frequent rework as usage increased. Systems built with precise abstractions and reliable pipelines were easier to extend over time.

This experience broadened his perspective on engineering. Systems were defined not only by code and infrastructure, but also by how teams worked, how decisions were made, and how platforms were maintained as they grew.

Building Food Intelligence Systems at Spoonshot

Sai later co-founded Spoonshot and served as its Chief Technology Officer.

Spoonshot focused on building a data intelligence platform for the food and beverage industry. The core system, Foodbrain, combined more than 100 terabytes of alternative data from over 30,000 sources with AI models and domain-specific food knowledge.

This foundation powered Genesis, a product used by global food brands such as PepsiCo, Coca-Cola, and Heinz to support innovation and product development decisions.

Building Foodbrain involved working with noisy data sources, evolving domain requirements, and enterprise reliability expectations. The system needed to accommodate changing inputs without frequent architectural changes.
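One common way to absorb changing inputs without repeated architectural change is to normalize every source into a single narrow internal schema at the ingestion boundary, so downstream models never see source-specific shapes. The sketch below is a hypothetical Python illustration of that general pattern, not Foodbrain's actual code; the schema, field names, and fallbacks are assumptions.

    from dataclasses import dataclass
    from typing import Any, Optional

    # Hypothetical ingestion boundary: each raw record, whatever its source shape,
    # is coerced into one narrow internal schema. This is a sketch of the pattern
    # only, not Foodbrain's implementation.

    @dataclass
    class FoodSignal:
        source: str
        ingredient: str
        mention_count: int
        sentiment: Optional[float]  # None when the source provides no sentiment

    def normalize(source: str, raw: dict) -> Optional[FoodSignal]:
        """Map a raw source record onto the internal schema, skipping unusable rows."""
        ingredient = (raw.get("ingredient") or raw.get("item") or "").strip().lower()
        if not ingredient:
            return None  # unusable record: skip it rather than fail the batch
        try:
            mentions = int(raw.get("mentions", raw.get("count", 0)))
        except (TypeError, ValueError):
            mentions = 0
        try:
            sentiment: Optional[float] = float(raw["sentiment"]) if "sentiment" in raw else None
        except (TypeError, ValueError):
            sentiment = None
        return FoodSignal(
            source=source,
            ingredient=ingredient,
            mention_count=max(mentions, 0),
            sentiment=sentiment,
        )

For example, normalize("menu_feed", {"item": "Oat Milk", "count": "12"}) and normalize("review_feed", {"ingredient": "oat milk", "mentions": 3, "sentiment": 0.7}) both land in the same internal shape, which is what keeps downstream pipelines stable as new sources are added.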

Under Sai’s technical leadership, Spoonshot raised over $4 million in venture funding and scaled to a team of more than 50 across the US and India.

During this period, he introduced data-centric AI practices by creating a dedicated data operations function alongside the data science team. This reduced the turnaround time for new model development by 60% while maintaining accuracy above 90%.

Enterprise AI Platforms and Reliability

Sai later served as Director of Engineering at ObserveAI, where he led platform engineering, analytics, and enterprise product teams.

The platform supported enterprise customers such as DoorDash, Uber, Swiggy, and Asurion. These customers had strict expectations around reliability, performance, and operational visibility.

Scaling the platform to support a tenfold increase in usage required changes across infrastructure, data ingestion pipelines, and observability practices. These efforts contributed to more than $15 million in additional annual recurring revenue.

Alongside technical scaling, Sai focused on building engineering leadership capacity. He helped define hiring frameworks, conducted over 130 interviews, and hired senior engineering leaders to support long-term platform development.

This phase highlighted how organizational structure influences system outcomes. As platforms grow more complex, coordination, ownership, and decision-making processes become part of the technical system.

From Systems Engineering to AI-Native Teams

Across roles, Sai maintained hands-on involvement while gradually expanding into broader technical leadership responsibilities.

His focus increasingly shifted toward platform foundations and workflows that allow teams to work effectively with complex data and AI systems. Mentorship of senior engineers and investment in precise abstractions became essential parts of this work.

His research publications reflect this practical focus. Papers such as "Genesis: Food Innovation Intelligence" and "Debugmate: an AI agent for efficient on-call debugging in complex production systems" examined how AI can support product and engineering workflows.

Debugmate demonstrated a 77% reduction in on-call load by assisting engineers with incident triage using observability data and system context.
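To give a flavor of this kind of assistance, the sketch below shows a hypothetical triage helper that gathers coarse observability signals for a failing service and proposes a first investigative step. It is an illustration only, not the Debugmate implementation described in the paper; the signal fields and thresholds are assumptions.

    from dataclasses import dataclass
    from typing import Optional

    # Hypothetical on-call triage helper: it summarizes recent observability
    # signals for a service and suggests where to look first. This is a sketch,
    # not the Debugmate system described in the paper.

    @dataclass
    class ServiceSignals:
        error_rate: float                     # fraction of failing requests in the last window
        p99_latency_ms: float                 # tail latency over the same window
        recent_deploy: bool                   # whether the service shipped in that window
        saturated_dependency: Optional[str]   # name of an overloaded upstream, if any

    def propose_first_step(signals: ServiceSignals) -> str:
        """Pick the most likely cause from coarse signals and suggest a first step."""
        if signals.recent_deploy and signals.error_rate > 0.05:
            return "Errors spiked after a deploy: inspect the latest rollout and consider a rollback."
        if signals.saturated_dependency:
            return (f"Dependency {signals.saturated_dependency} looks saturated: "
                    "check its capacity, timeouts, and retry behavior.")
        if signals.p99_latency_ms > 1000:
            return "Tail latency is elevated without errors: look at queue depth and slow queries."
        return "No dominant signal: pull recent logs and compare against the last healthy window."

Even this toy version shows the underlying idea: encoding system context and observability data into a structured first pass so that an on-call engineer starts from a ranked hypothesis rather than from raw dashboards.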

Long-Term Engineering Foundations

Looking across Sai Sreenivas Kodur’s career, a consistent theme is an emphasis on building systems that remain reliable as complexity increases.

As AI accelerates software development, this focus becomes more critical, especially as teams begin building truly AI-native software rather than layering AI onto existing architectures. AI agents introduce new workloads and different patterns of system usage. Data and infrastructure platforms originally designed for human users must adapt to support these changes.

Rather than focusing on individual productivity gains, this work centers on platform foundations, data workflows, and team structures that can scale over time.

His career reflects an engineering approach grounded in clarity, durability, and long-term impact.