Your Apdex score is great, so why are users complaining?

In the world of mobile observability, there’s a tempting shortcut to reliability: track a few high-level metrics, such as an Apdex score, and assume those numbers reflect the full customer experience. In practice, this approach often hides more than it reveals, because the context of user experience isn’t reducible to a few broad percentages.
Let’s talk about how to evolve from generic performance indicators to meaningful, actionable metrics that actually reflect how your end-users experience your app.
The problem with oversimplified metrics
An Apdex score is a generalized calculation of app health or user experience. On the surface, an indicator like this seems like a great way to put a number behind how users feel. The problem is that it treats every single experience, and every type of error, equally. That’s rarely how apps work in production.
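To make the mechanics concrete, here’s a minimal sketch of the standard Apdex calculation in Kotlin: responses under a threshold T count as “satisfied,” responses under 4T count as “tolerating,” and everything else counts as “frustrated.” The sample values are illustrative, but notice how a handful of fast, unimportant calls can mask one slow, critical one.

```kotlin
// Standard Apdex: (satisfied + tolerating / 2) / total samples.
fun apdex(latenciesMs: List<Long>, thresholdMs: Long): Double {
    if (latenciesMs.isEmpty()) return 1.0
    val satisfied = latenciesMs.count { it <= thresholdMs }
    val tolerating = latenciesMs.count { it > thresholdMs && it <= 4 * thresholdMs }
    return (satisfied + tolerating / 2.0) / latenciesMs.size
}

fun main() {
    // Nine fast analytics pings hide one slow, revenue-critical checkout call.
    val samples = List(9) { 80L } + listOf(6_000L)
    println(apdex(samples, thresholdMs = 500))  // 0.9 -- looks "great"
}
```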
Take networking, the lifeblood of any application. Users get their content, and their activity is saved, through secure, timely interactions with the rest of your software system. Yet Apdex scores are generally applied to networking in ways that oversimplify what’s really going on.
For example, lumping together all network calls, whether they’re high-priority API calls or analytics pings, can distort the picture. Fixing a problem that generates a flood of 400 errors from a third-party analytics endpoint might improve your score…but will it actually improve the user experience?
In contrast, think about the purchase flow in an app. The network requests in the flow are likely a low-volume path relative to the rest of the activity in your app, but they’re the most critical ones. After all, those requests create revenue! If those API calls are slow or failing, the impact on users can be huge, even if your Apdex score stays high.
Why “good” metrics can still miss the mark
I’ve worked with many teams who came to us saying: “Our Apdex score is great, but users are still complaining the app is slow and we don’t know why.”
The problem? They hadn’t filtered their metrics to focus on what’s truly important. When everything is measured the same way, it becomes impossible to identify what needs fixing.
Start with what matters most for you (and your end-users)
Consider another scenario: if you measure overall page load latency, you’re implicitly saying it’s equally important that a settings page and a bank deposit page load quickly. This “all data is equal” approach will lead you to track a lot of screens that don’t matter but occur frequently, drowning out the performance signals of mission-critical flows.
To get real insight, teams need to define what actually matters. For most apps, login is essential. For e-commerce apps, it’s the purchase flow. You want to isolate these key paths and create custom metrics from the available data (a sketch follows this list):
- Track first-party network calls specific to those flows.
- Measure error rates and latency only for those endpoints.
- Avoid aggregation for less important or noisier data.
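Here’s one way that could look on Android, assuming an app that uses OkHttp: an interceptor that records latency and status codes only for endpoints on your critical flows, and lets everything else, like analytics pings and third-party calls, pass through unmeasured. The `criticalPaths` values and the `record` callback are hypothetical; in practice you’d forward these values to your observability backend.

```kotlin
import okhttp3.Interceptor
import okhttp3.Response

// Sketch: record latency and status only for critical, first-party endpoints.
class CriticalFlowMetricsInterceptor(
    private val criticalPaths: Set<String>,  // e.g. setOf("/v1/checkout", "/v1/login")
    private val record: (path: String, durationMs: Long, statusCode: Int) -> Unit
) : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val request = chain.request()
        val path = request.url.encodedPath
        // Low-priority and third-party calls pass through unmeasured,
        // so they can't drown out the signal from key flows.
        if (criticalPaths.none { path.startsWith(it) }) {
            return chain.proceed(request)
        }
        val startNs = System.nanoTime()
        val response = chain.proceed(request)
        record(path, (System.nanoTime() - startNs) / 1_000_000, response.code)
        return response
    }
}
```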
Real-world example: Hidden issues behind a “great” Apdex score
One Embrace customer, operating in the Americas and serving small business customers with a mobile app, had a high Apdex score but persistent user complaints. Even their own company executives were experiencing slow page loads.
When we dug into their data, we discovered that more than 50% of their network calls were third-party. They were frequent, low-priority, and mostly noise. These calls drowned out signals from the more important paths that their backend team actively built.
Even individual API calls looked fine in isolation. However, when we examined the activity in individual user timelines in Embrace, we saw something critical: some API calls were being made 20 times instead of once. Multiple development teams, working independently, were triggering the same request over and over. This created massive bottlenecks as network calls queued up and overwhelmed the device, resulting in real slowness and frustration for real users.
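The usual fix for that particular problem is request coalescing: if an identical request is already in flight, new callers await its result instead of firing another network call. Here’s a minimal sketch using Kotlin coroutines; the class and naming are illustrative, not a prescribed API.

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.ConcurrentHashMap

// Coalesces concurrent identical requests: N callers share one network call.
class RequestCoalescer<K : Any, V>(private val scope: CoroutineScope) {
    private val inFlight = ConcurrentHashMap<K, Deferred<V>>()

    suspend fun fetch(key: K, loader: suspend () -> V): V {
        val call = inFlight.computeIfAbsent(key) {
            // Lazy start: the loader only runs once someone awaits it.
            scope.async(start = CoroutineStart.LAZY) {
                try {
                    loader()
                } finally {
                    inFlight.remove(key)  // allow a fresh fetch next time
                }
            }
        }
        return call.await()
    }
}
```

With something like this in place, twenty screens asking for the same resource produce one request instead of twenty.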
The maturity curve of mobile metrics
Most teams follow a natural progression when it comes to observability:
- Start with the basics. Track a crash-free rate in a unified dashboard.
- Add network span forwarding. Trace what happens end-to-end from the client to the API.
- Measure key flows. Capture performance across rendering, networking, and parsing.
- Establish baselines and SLOs. Once you understand timing and behavior, set service-level objectives to catch degradation over time.
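As a sketch of that last step, here’s what a simple latency SLO check might look like: compute a p95 from recent samples and compare it against an objective you set from your observed baseline. The sample data and the 800 ms objective are illustrative.

```kotlin
import kotlin.math.ceil

// p95 latency from a window of samples (nearest-rank percentile).
fun p95(latenciesMs: List<Long>): Long {
    require(latenciesMs.isNotEmpty()) { "need at least one sample" }
    val sorted = latenciesMs.sorted()
    val rank = ceil(0.95 * sorted.size).toInt()
    return sorted[rank - 1]
}

fun main() {
    val checkoutLatencies = listOf(410L, 380L, 520L, 455L, 610L, 470L, 390L, 540L, 430L, 2_900L)
    val sloMs = 800L  // objective set from your observed baseline
    val observed = p95(checkoutLatencies)
    println(if (observed <= sloMs) "within SLO ($observed ms)" else "SLO breach: p95 = $observed ms > $sloMs ms")
}
```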
This journey moves from generic, context-free aggregates to detailed insights that actually help you solve real performance problems. For the customer above, once the full picture was clear, it became easy to prioritize: “This view rendering matters. That request is just noise.”
I like to tell our customers to build mobile metrics that matter, and focus on clarity, context, and customization. Metrics should reflect real user experiences, not just aggregate success. Start with what’s critical, dig into the actual flows, and build observability that helps you improve, not just monitor.