Mobile performance traces 101: Shifting your mindset from backend to mobile observability

Mobile performance trace compared to a distributed trace

Observability teams often ask why we don't show monolithic traces for issues like slow startups. While such traces theoretically reveal all services involved, they're hard to interpret and distract from what's critical: the mobile experience context.

Observability teams frequently ask us how using tracing to measure mobile app performance differs from using distributed tracing to measure backend services. They want to connect a mobile trace to its corresponding distributed trace and see everything in a single, monolithic trace.

After all, isn’t end-to-end visibility just seeing all the frontend spans alongside all the backend spans?

Well, not really.

In backend observability, distributed traces shine at showing the path of a single request across multiple services, containers, and nodes.

However, that same approach falls short when it comes to mobile apps. The concept you need instead is the mobile performance trace — a trace built around the user’s experience on a single device, not the backend service topology.

In this post, we’ll cover why mobile observability requires a different approach to tracing than backend observability.

1. Distributed trace — Optimized for distributed systems

A distributed trace answers the question, “How does one request move through my distributed system, and where is it slowing down?”

Characteristics:

  • Scope: One request traversing multiple backend services and nodes.
  • Unit of analysis: An endpoint and its downstream calls.
  • Goal: Find latency or errors along that request’s call chain.
  • Context: Everything outside this call chain is irrelevant (and adding it creates noise).
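To make this concrete, here is a minimal sketch of how a backend service might create spans with the OpenTelemetry API. The service name cart-service, the endpoint, and the helper fetchCartItemsFromDb() are illustrative assumptions, not from any real codebase:

```kotlin
import io.opentelemetry.api.GlobalOpenTelemetry
import io.opentelemetry.api.trace.StatusCode

// Illustrative tracer for a hypothetical backend service.
val tracer = GlobalOpenTelemetry.getTracer("cart-service")

fun handleCartRequest() {
    // Root span for this one request; every downstream call made while
    // this span is current becomes a child sharing the same trace_id.
    val span = tracer.spanBuilder("GET /api/cart").startSpan()
    try {
        span.makeCurrent().use {
            fetchCartItemsFromDb() // would create a child span internally
        }
    } catch (e: Exception) {
        span.setStatus(StatusCode.ERROR) // surface errors on the call chain
        throw e
    } finally {
        span.end()
    }
}

// Hypothetical downstream call, stubbed for the sketch.
fun fetchCartItemsFromDb() { /* ... */ }
```

Everything the trace captures hangs off that single request, which is exactly what makes it so effective for backend latency analysis.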

2. Mobile performance trace — The user’s point of view

A mobile performance trace starts and ends on the user’s device and measures all relevant activity for a specific user action, such as “load the shopping cart” or “post a photo.”

Characteristics:

  • Scope: A single computer — the smartphone — where CPU, memory, battery, and network are shared by all tasks.
  • Unit of analysis: A user activity that can involve UI rendering, I/O, animations, local computation, and background tasks.
  • Goal: Understand the user experience, not just the backend latency.
  • Context: Includes spans tied to the action and spans showing competing background activity on the device.
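As a sketch of what this looks like in code, here's one way to model a user action as a trace with the OpenTelemetry API on a device. All span names are illustrative assumptions, and a production SDK would handle much of this automatically:

```kotlin
import io.opentelemetry.api.GlobalOpenTelemetry

val tracer = GlobalOpenTelemetry.getTracer("mobile-app")

fun loadCart() {
    // Root span: the user action itself, measured entirely on-device.
    val action = tracer.spanBuilder("load-cart").startSpan()
    action.makeCurrent().use {
        // Child spans cover everything the action touches on the device.
        val render = tracer.spanBuilder("render-cart-ui").startSpan()
        // ... inflate views, bind data ...
        render.end()

        val cache = tracer.spanBuilder("read-local-cache").startSpan()
        // ... disk I/O ...
        cache.end()

        // The network call is also a child span here; the backend work it
        // triggers lives in its own distributed trace (see section 3).
        val request = tracer.spanBuilder("GET /api/cart/items").startSpan()
        // ... execute the HTTP request ...
        request.end()
    }
    action.end()
}
```

Note that the root span is the user action, not a request: the trace is organized around what the user is trying to do.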

3. “I want all the data” — Why you can’t have it all in one trace

A common request from observability teams is, “If I have a mobile performance trace, why can’t I see all spans in one giant waterfall?”

Reason: A mobile performance trace can involve multiple distributed traces, one for each backend request.

Mobile performance trace including full distributed traces
A monolithic trace that contains all mobile activity and all corresponding distributed traces is difficult to interpret.

Example: Loading the cart might trigger:

  1. Cart Items API call (distributed trace #1)
  2. Catalog API call (distributed trace #2)
  3. Recommendations API call (distributed trace #3)

These traces are independent in the backend. Combining them under one root span creates an unrealistic call chain, inflated durations, and misleading causal relationships: one long, unrelated trace skews how the shorter traces from other services appear.
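One simplified way to see this independence in code, reusing the tracer from the sketches above: if each client-side network span starts a fresh trace, the three calls never share a trace_id. Using setNoParent() here is an assumption about how the network spans are modeled, not a prescription:

```kotlin
// Each API call gets its own root span, and therefore its own trace_id;
// setNoParent() keeps the calls from being chained under one artificial root.
val cartCall = tracer.spanBuilder("GET /api/cart/items")
    .setNoParent().startSpan()   // distributed trace #1
val catalogCall = tracer.spanBuilder("GET /api/catalog")
    .setNoParent().startSpan()   // distributed trace #2
val recsCall = tracer.spanBuilder("GET /api/recommendations")
    .setNoParent().startSpan()   // distributed trace #3

println(cartCall.spanContext.traceId)    // three different ids when an
println(catalogCall.spanContext.traceId) // OpenTelemetry SDK is installed
println(recsCall.spanContext.traceId)

// ... end each span when its request completes ...
```

There is no real root span covering all three, so any waterfall that draws one is inventing a dependency.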

It also makes it harder for each team to get visibility into what they own. A team responsible for one vertical product can't focus on its piece of the trace and determine whether there is a problem.

Mobile performance trace with distributed traces highlighted
This monolithic trace example highlights how much noise the distributed traces can create, as only one distributed trace is meaningfully affecting the entire mobile operation.

The second image above illustrates that only the distributed trace /api/recommendations is affecting the entire operation. Showing details about the remaining services, /api/cart and /api/catalog, only adds noise to the analysis.

4. Backend vs. mobile: The mindset shift

To analyze a mobile performance trace, it’s enough to have the client-side network request, which happens to be the root span of a distributed trace in the backend observability platform. We can think of the client-side network request as a “pointer” to a distributed trace.

Mobile performance trace separated from distributed trace
A mobile performance trace only needs the client-side network request that acts as the root span for the distributed trace, not the full details of every distributed trace.
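A hedged sketch of that "pointer" idea: the client-side network span records the distributed trace's trace_id as a span attribute, so the mobile trace can reference the backend trace without embedding its spans. The attribute name backend.trace_id is an illustrative assumption:

```kotlin
// The network-request span is the root of the backend's distributed
// trace, so its trace_id is exactly what the backend trace will carry.
val netSpan = tracer.spanBuilder("GET /api/recommendations")
    .setNoParent().startSpan()

// Record the id as an attribute: a pointer from the mobile performance
// trace to the distributed trace, not a copy of the backend spans.
netSpan.setAttribute("backend.trace_id", netSpan.spanContext.traceId)

// ... perform the request with the traceparent header (section 5) ...
netSpan.end()
```

When a network span turns out to be the bottleneck, the pointer is enough to jump into the backend observability platform and continue root cause analysis there.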

Backend observability

  • Focuses on distributed nodes and services.
  • Root cause analysis happens in the context of a single request.

Mobile observability

  • Focuses on a single, resource-constrained device.
  • Root cause analysis must consider all competing activities on the device at that moment.
  • References each related distributed trace as a pointer, without pulling in its details.

A note on “end-to-end observability”

The phrase end-to-end observability is often used in backend discussions to mean following a request from the first hop to the last hop in a distributed system.

In mobile, however, the same phrase can be misleading — it might suggest a single trace ID linking everything from the tap to all backend responses, which is not how mobile performance traces work. Instead, mobile end-to-end visibility means combining:

  • The full device-side activity for a user action, and
  • The relevant distributed traces for each network call — without merging them into a fake monolithic trace.

Analogy:

  • Distributed trace: Tracking a package through multiple distribution centers.
  • Mobile performance trace: Watching everything happening in your kitchen while you cook — the blender, oven, and dishwasher all competing for the same electrical outlet.

5. Why the traceparent is only injected in the network request span

In mobile observability, the traceparent header is injected only into the network request span — not into the entire mobile performance trace.

Why?

  • Each backend request starts its own distributed trace.
  • The mobile performance trace is the logical container that groups these requests from the app’s perspective.
  • There’s no single trace_id that links unrelated API calls in the backend. This behavior follows the W3C Trace Context specification, which defines how trace context is attached to outbound network requests via the traceparent header.
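As a minimal sketch, assuming OkHttp on Android, an interceptor might build the W3C traceparent header from the current network-request span's context. (In practice, OpenTelemetry's W3CTraceContextPropagator and instrumentation libraries do this for you; the manual version is shown only to make the mechanics visible.)

```kotlin
import io.opentelemetry.api.trace.Span
import okhttp3.Interceptor
import okhttp3.Response

// Injects a W3C traceparent header derived from the network-request span
// only. The backend continues this trace; the rest of the mobile
// performance trace never leaves the device.
class TraceparentInterceptor : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val ctx = Span.current().spanContext
        // Format: version "00", 32-hex trace_id, 16-hex span_id, flags.
        val traceparent = "00-${ctx.traceId}-${ctx.spanId}-01"
        val request = chain.request().newBuilder()
            .header("traceparent", traceparent)
            .build()
        return chain.proceed(request)
    }
}
```

Because each outbound request carries a traceparent built from its own network span, each backend request seeds its own distributed trace; nothing ties the backend traces for /api/cart, /api/catalog, and /api/recommendations to one another.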

6. The risk of mixing everything

Waterfall visualizations in backend tools are designed for request-response analysis.

For mobile actions (human-driven, async, multi-threaded), merging all spans leads to:

  • Unreadable, ultra-long timelines.
  • Irrelevant spans drowning the real issue.
  • Wrong answers about root causes.

Better: Keep distributed traces for backend RCA, and use mobile performance traces for user experience RCA.

7. Key takeaways

  • Distributed traces: Request-centric, backend-focused, great for service-to-service analysis.
  • Mobile performance traces: Activity-centric, device-focused, designed to reflect the real user experience.
  • End-to-end observability in mobile does not mean a single trace ID across everything — it means a combined, contextual view without false dependencies.
  • Don’t merge unrelated distributed traces — it creates noise, not insight.
  • Mobile observability is about what the user experiences, not what the backend sees.