inDrive is a ride-hailing company operating in 48 countries with a unique peer-to-peer pricing model that enables fair trip costs. Across all products, more than 600 engineers work in 70+ teams.

The topic of performance and productivity remains one of the hottest debates in the software industry. Every company is trying to find its own answer, shaped by internal context, current challenges, or best practices observed in research and publications such as The Pragmatic Engineer’s Measuring Developer Productivity: Real-World Examples and Developer Productivity with Nicole Forsgren.

It is worth understanding why organizations need such measurement in the first place. The main reason is that today’s market has changed: companies now want more control and better efficiency from what they already have - their processes, people, and tools. This becomes especially important when scaling, since hiring more people alone is no longer a sustainable way to grow.

Answering these questions about efficiency, control, and scaling requires defining and managing metrics that serve the organization’s specific goals. It’s important to note that at inDrive, Performance is one of our core values - alongside Purpose and People. That’s why performance isn’t just about results; it’s part of who we are as a company. We strive to understand impact and create it at every level of the organization.

We identified several key factors that led us to develop our own approach to performance:

  1. Since 2020, the company has experienced rapid business and engineering growth, with the number of both engineers and teams increasing quickly.
  2. This scale creates the need for processes and tools that ensure predictable outcomes and support sustainable scaling.
  3. Leadership must rely on data-driven insights to understand how teams perform within these processes, identify bottlenecks, and make informed managerial decisions.
  4. Finally, metrics must be aligned with strategic and operational goals - only then do they become drivers of meaningful progress rather than isolated statistics.

Performance as a System

It’s important to note that performance and productivity are closely connected. Productivity reflects how effectively your delivery processes work, while performance shows what outcomes they actually produce. Different companies interpret these terms and approach metric design in very different ways. Some focus mainly on individual contributor productivity, while others track metrics only at the team level. But I believe this complex challenge requires building a comprehensive system that operates across all levels of the organization.

At inDrive, I implemented a system that operates across all levels of the organization.

The example below shows how a metric within the Performance domain cascades across different levels:
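
As a minimal sketch of that cascade - with a hypothetical metric, made-up values, and simple averaging standing in for the real rollup logic - a team-level number could be aggregated upward like this:

    from statistics import fmean

    # Hypothetical team-level values for a single metric - time-to-market in
    # days. Names and numbers are illustrative, not real inDrive data.
    team_ttm = {
        ("cluster-a", "team-1"): 12.0,
        ("cluster-a", "team-2"): 18.5,
        ("cluster-b", "team-3"): 9.0,
    }

    # Cluster level: aggregate the teams belonging to each cluster.
    by_cluster: dict[str, list[float]] = {}
    for (cluster, _team), value in team_ttm.items():
        by_cluster.setdefault(cluster, []).append(value)
    cluster_ttm = {c: fmean(values) for c, values in by_cluster.items()}

    # Division level: aggregate across clusters.
    division_ttm = fmean(cluster_ttm.values())

    print(cluster_ttm)             # {'cluster-a': 15.25, 'cluster-b': 9.0}
    print(round(division_ttm, 2))  # 12.12

In a real system each level would likely apply its own aggregation rules and targets; the point here is only that the same underlying metric remains visible, in consistent form, at every tier.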


The Single Analytical System

The concept behind the Single Analytical System is straightforward - it’s a set of dashboards that brings together metrics from multiple sources (Jira, Grafana, Kibana, PagerDuty, HR systems, or in-house tools) into a single one-pager view. In other words, you get a pocket-sized overview of the entire landscape, and when deeper analysis is needed, you jump directly to the underlying data source, such as a detailed dashboard in Grafana.

To implement the dashboards, I designed a five-tier structure: from the division level down to the sandbox. Each tier can have its own dashboard or a family of dashboards, enabling performance analysis of a specific organizational entity or providing a cross-team or cross-service view of a particular metric.
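
As a minimal sketch of what sits behind such a one-pager - all endpoints, names, and values below are hypothetical placeholders, since the real integrations would query sources like Jira or PagerDuty - each source gets a small fetcher that returns a uniform “tile”:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class MetricTile:
        """One cell of the one-pager: a headline number plus a drill-down link."""
        name: str
        value: float
        source: str
        drilldown_url: str  # where to jump when deeper analysis is needed

    # One small fetcher per source; stubbed with fixed values here, whereas a
    # real implementation would query each source's API.
    def fetch_jira_cycle_time() -> MetricTile:
        return MetricTile("Cycle time (days)", 4.2, "Jira",
                          "https://jira.example.com/dashboards/cycle-time")

    def fetch_pagerduty_mttr() -> MetricTile:
        return MetricTile("MTTR (hours)", 1.7, "PagerDuty",
                          "https://pagerduty.example.com/analytics")

    FETCHERS: list[Callable[[], MetricTile]] = [
        fetch_jira_cycle_time,
        fetch_pagerduty_mttr,
    ]

    def build_one_pager() -> list[MetricTile]:
        """Assemble the single analytical view from all registered sources."""
        return [fetch() for fetch in FETCHERS]

    for tile in build_one_pager():
        print(f"{tile.name}: {tile.value} ({tile.source}) -> {tile.drilldown_url}")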


1. Division Level

Includes key technology metrics aligned with company and divisional strategy across five domains:

One of the dashboard sections looks as follows:

All metrics are displayed as time series, making trends visible over time, with signals indicating whether each metric has reached its target value.

The dashboard allows the CTO and divisional leadership to assess efficiency, identify focus areas, and understand the influence of specific clusters.
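
The signal logic itself can be simple. The sketch below is an illustrative assumption - the thresholds and trend rule are invented for the example, not taken from the actual dashboards:

    def metric_signal(history: list[float], target: float,
                      higher_is_better: bool = True) -> str:
        """Traffic-light signal for a metric time series against its target.

        `history` is ordered oldest to newest; the latest value decides target
        attainment, and the trend separates improving from deteriorating.
        """
        latest = history[-1]
        on_target = latest >= target if higher_is_better else latest <= target
        if on_target:
            return "green"
        improving = (latest >= history[0]) == higher_is_better
        return "yellow" if improving else "red"

    # Deployment frequency (per week), target 10: below target but improving.
    print(metric_signal([6, 7, 9], target=10))                             # yellow
    # Change failure rate (%), target <= 15: target reached.
    print(metric_signal([22, 18, 12], target=15, higher_is_better=False))  # green

Separating “off target but improving” from “off target and deteriorating” is often exactly the distinction leadership needs when choosing focus areas.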

2. Cluster Level

The dashboard at this level includes all cluster metrics organized within the previously defined domains:

Used by Directors of Engineering/Product and the CTO to improve performance and other company-wide processes such as the Annual Performance Review. Metrics are typically reviewed monthly as part of the cluster metrics analysis.

3. Team Level

Mirrors the cluster level but with team-specific context:

I designed the dashboard and metrics to work for any team - whether they use Scrum or Kanban. This makes the system flexible while keeping the evaluation consistent.

This is the primary management tool for Engineering managers, supporting data-driven planning, stakeholder alignment, and continuous improvement during both day-to-day operations and key events such as sprint planning or retrospectives.
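
A common way to achieve the Scrum/Kanban flexibility described above - an implementation assumption for illustration, not a description of inDrive’s actual pipeline - is to compute flow metrics from issue status transitions rather than from sprint boundaries, so the same code serves both methodologies:

    from datetime import datetime

    # Status-transition log for one issue, as it might be exported from Jira.
    # Sprint boundaries are irrelevant: only the moments when work actually
    # started and finished matter, which is why this works for both Scrum
    # and Kanban.
    transitions = [
        ("To Do",       datetime(2024, 5, 1, 9, 0)),
        ("In Progress", datetime(2024, 5, 2, 10, 0)),
        ("In Review",   datetime(2024, 5, 6, 15, 0)),
        ("Done",        datetime(2024, 5, 7, 12, 0)),
    ]

    def cycle_time_days(log: list[tuple[str, datetime]]) -> float:
        """Elapsed days from the first 'In Progress' to the first 'Done'."""
        started = next(ts for status, ts in log if status == "In Progress")
        finished = next(ts for status, ts in log if status == "Done")
        return (finished - started).total_seconds() / 86400

    print(round(cycle_time_days(transitions), 1))  # 5.1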

4. Individual Contributor Level

The dashboard includes engineer-level productivity data across five key areas: collaboration, work quality, workload health, development experience, and AI adoption.

Used by Engineering managers during day-to-day work to maintain a high level of productivity and identify growth areas.
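
Purely as an illustration, one engineer’s monthly record covering those five areas might be shaped like this - every field below is a hypothetical example, not the actual schema:

    from dataclasses import dataclass

    @dataclass
    class EngineerMonthlyRecord:
        """Hypothetical shape of one engineer's monthly productivity record,
        grouped by the five areas named above; field choices are assumptions."""
        engineer_id: str
        reviews_given: int           # collaboration
        change_failure_rate: float   # work quality: share of changes rolled back
        wip_items: int               # workload health: concurrent in-progress issues
        devex_survey_score: float    # development experience: e.g. 1-5 self-reported
        ai_assisted_pr_share: float  # AI adoption: share of PRs using AI assistants

    record = EngineerMonthlyRecord(
        engineer_id="anon-42", reviews_given=14, change_failure_rate=0.05,
        wip_items=3, devex_survey_score=4.2, ai_assisted_pr_share=0.6,
    )
    print(record)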

5. Sandbox Level

Contains deep-dive dashboards for SMEs managing specific metrics across the organization, enabling advanced analysis and experimentation - for example, time-to-market or team maturity level.

Used on demand by SMEs or any manager for deep-dive analysis.
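
A sandbox deep dive typically looks at distributions rather than single headline numbers. For instance (with made-up sample data), percentiles of time-to-market are far more informative than an average for skewed delivery data:

    from statistics import quantiles

    # Hypothetical time-to-market samples (days from start of work to release)
    # for one quarter - the kind of raw data an SME might pull into a sandbox.
    ttm_days = [4, 6, 7, 9, 11, 12, 15, 21, 30, 45]

    # Deciles give nine cut points; index 4 is the median (p50), index 8 is p90.
    deciles = quantiles(ttm_days, n=10)
    print(f"p50 = {deciles[4]:.1f} days, p90 = {deciles[8]:.1f} days")
    # -> p50 = 11.5 days, p90 = 43.5 days

The long tail (a p90 far above the p50) is usually where the actionable bottlenecks hide.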

Conclusion

Building a system that allows organizations to manage, evaluate, and improve engineering performance is crucial - it enables data-driven understanding of the current state and helps launch improvement initiatives at multiple levels.

At the same time, we recognize the inherent risks of over-reliance on metrics - they can be misinterpreted or gamed. As part of the Engineering Excellence Team, I promote the mindset that metrics are not the goal - they are signals that guide management decisions. Metrics may miss context, reflect short-term fluctuations, or mislead without proper analysis. Their real value lies in comprehensive, contextual evaluation: it enables managers to answer key questions during planning, reviews, and performance discussions, and fosters a data-driven culture grounded in accountability and learning.

Learn More

To learn more about our analytical system, career model, and engineering practices, explore our Public Engineering Handbook.

Igor Novoseltsev

Staff Coach, Engineering Excellence Team

References

  1. inDrive. About the company. https://indrive.com/company
  2. inDrive Public Handbook. https://github.com/inDriver/handbook
  3. Elluminati Inc. How inDrive Works: Business & Revenue Model. https://www.elluminatiinc.com/how-indriver-works-business-revenue-model/
  5. Gergely Orosz. The Pragmatic Engineer: Measuring Developer Productivity: Real-World Examples; Developer Productivity with Nicole Forsgren.
  6. Forsgren N., Humble J., Kim G. (2018). Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations. IT Revolution Press.
  6. Forsgren N. (2021). The SPACE Framework. Microsoft Research.
  7. Google Engineering Productivity Research. https://research.google/pubs/engprod/
  8. State of DevOps Reports (DORA). https://dora.dev/

  9. Developer Experience (DevEx) Research – What Actually Drives Productivity. https://getdx.com/research/devex-what-actually-drives-productivity/