Performance testing, as a discipline, has traditionally focused on measuring the maximum load an application can take and measuring system latency under different loads that reflect production volume. Once the tests are complete, the resulting performance metrics are shared with the product team, and they continue to be monitored after the application goes live.

Below is an example of how these metrics may look:

- Response times under 200ms? Great!
- 99.9% uptime? Excellent!
- Can handle a million API calls with 2s latency? Extraordinary!
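To make those headline numbers concrete, here is a minimal Python sketch of how a team might derive them from raw measurements. The sample latencies, request counts, and downtime figures are purely hypothetical and stand in for data collected during a real test run.

```python
import statistics

# Hypothetical raw measurements collected during a load test window.
response_times_ms = [120, 180, 95, 210, 150, 175, 190, 160, 140, 205]
total_requests = 1_000_000
failed_requests = 1_200
downtime_minutes = 40              # over a 30-day window
minutes_in_window = 30 * 24 * 60

# The kind of headline numbers shown in the list above.
p95_latency = statistics.quantiles(response_times_ms, n=100)[94]   # 95th percentile
error_rate = failed_requests / total_requests * 100
uptime = (1 - downtime_minutes / minutes_in_window) * 100

print(f"p95 latency : {p95_latency:.0f} ms")
print(f"error rate  : {error_rate:.2f}%")
print(f"uptime      : {uptime:.3f}%")
```

Numbers like these are easy to compute and easy to put on a dashboard, which is exactly why they dominate reporting.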

Any software team would love to have the above metrics. However, consider a situation where these meticulously tracked metrics directly contradict what users are telling you, and users keep abandoning your app.

The Metrics Mirage

Traditional performance testing has long focused on response times, throughput, the maximum load the application can withstand, and error rates. These metrics aren't wrong; they're just insufficient for modern systems.
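As a rough sketch of what this traditional approach looks like in practice, the snippet below drives concurrent requests against a hypothetical endpoint using only the Python standard library and reports throughput, average latency, and error rate. The URL and load parameters are placeholders; real projects would typically use a dedicated tool such as JMeter, Gatling, or Locust.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Hypothetical target; replace with the endpoint under test.
URL = "https://example.com/api/health"

def timed_request(_):
    """Issue one request and return (latency_seconds, succeeded)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            ok = 200 <= resp.status < 300
    except Exception:
        ok = False
    return time.perf_counter() - start, ok

def run_load(concurrent_users=50, total_requests=500):
    begin = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(timed_request, range(total_requests)))
    elapsed = time.perf_counter() - begin

    latencies = [lat for lat, _ in results]
    errors = sum(1 for _, ok in results if not ok)
    print(f"throughput : {total_requests / elapsed:.1f} req/s")
    print(f"avg latency: {sum(latencies) / len(latencies) * 1000:.0f} ms")
    print(f"error rate : {errors / total_requests * 100:.2f}%")

if __name__ == "__main__":
    run_load()
```

Everything this script measures is about the system; nothing in it tells you whether a user actually completed what they set out to do.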

Consider an application that loads fast but frustrates users with an unintuitive interface. Or consider an e-commerce application with lightning-fast response times that still loses customers at checkout because of a convoluted payment flow. Your metrics dashboard shows green across the board while your business bleeds users. The worst part? Nobody on the IT team knows about it. As someone who assures quality, you need to look beyond these numbers.

Beyond the Numbers Game of Performance Metrics

Today's application landscape demands a more nuanced approach. Here's what performance testing should cover for a holistic view:

User Experience as the North Star

Metrics based on response times and error rates tell you whether pages load quickly, but they do not capture whether users were able to accomplish their goals efficiently. Supplement traditional testing with: