
Browser OpenTelemetry without the compromises: How to stay vendor-neutral and still get full RUM

A real-world React SPA setup surfaced a question about browser OTel. The answer points to a better way to instrument the web.

A Reddit question that stayed with me

A developer posted in r/Observability with a setup that’s increasingly common: React SPA on NGINX, OpenTelemetry JS SDK already wired up, telemetry flowing through a custom reverse proxy to Splunk. Their client wanted to add Grafana Cloud as a second destination. They considered Grafana Faro because it handles CORS natively and is purpose-built for browser RUM.

But the client had been burned by a proprietary SDK before and had a hard requirement: Pure OpenTelemetry only, nothing vendor-specific.

I answered the post because this is a problem I deal with directly in my work on the OTel Browser project. The client made the right call. The entire point of OpenTelemetry is that your instrumentation code shouldn’t be tightly coupled to any one backend. Adopting a proprietary SDK on day one means enormous switching costs later and a vendor with leverage over you.

But there’s a problem. Almost no observability backend supports CORS natively on its OTLP ingestion endpoint. They’re all built for server-side collectors. Browsers aren’t part of the design.

It’s a problem that deserves a longer treatment than a Reddit comment. Here’s the rest of what I wanted to say.

The browser problem

OTLP ingestion was designed for server-side collectors. The assumed flow is: Your app emits telemetry, an OTel Collector picks it up, the Collector forwards it to whatever backend you’re using. That works on a server, but a browser can’t run a Collector. And when a browser tries to make a direct cross-origin request to a remote OTLP endpoint, it gets blocked. Every time.
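To make the mechanism concrete, here’s a sketch of the origin check a CORS-aware gateway performs on the preflight request a browser sends before its actual OTLP POST. This is a hypothetical helper for illustration, not code from any real Collector:

```javascript
// Sketch of the origin check behind a CORS preflight (OPTIONS) response.
// Hypothetical helper for illustration -- not actual Collector code.
function preflightHeaders(requestOrigin, allowedOrigins, maxAgeSeconds = 7200) {
  const allowed =
    allowedOrigins.includes('*') || allowedOrigins.includes(requestOrigin);
  if (!allowed) {
    // No Access-Control-Allow-Origin header in the response means the
    // browser refuses to send the actual request. This is what happens
    // against a stock OTLP ingestion endpoint today.
    return null;
  }
  return {
    'Access-Control-Allow-Origin': requestOrigin,
    'Access-Control-Allow-Headers': '*',
    'Access-Control-Max-Age': String(maxAgeSeconds), // cache the preflight result
  };
}
```

A stock OTLP ingestion endpoint never returns these headers, which is why every direct browser-to-backend request dies at the preflight.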

No backend accepts it directly. Not Splunk Cloud. Not Grafana Cloud. Not Datadog. Not Elastic. Not even Jaeger, which has had an open GitHub issue for CORS support since 2023.

The developer in that thread had figured out the right workaround: Deploy a collector that supports CORS as a gateway. In their case, Grafana Alloy sitting in their client’s Azure environment, configured to accept browser traffic and fan out to both Splunk and Grafana Cloud.

otelcol.receiver.otlp "default" {
  http {
    endpoint = "0.0.0.0:4318"
    cors {
      allowed_origins = ["https://your-frontend-origin.com"]
      allowed_headers = ["*"]
      max_age = 7200
    }
  }
  output {
    traces = [otelcol.exporter.otlphttp.grafana.input]
    metrics = [otelcol.exporter.otlphttp.grafana.input]
    logs = [otelcol.exporter.otlphttp.grafana.input]
  }
}

This is the standard pattern right now. It works, it’s composable, and it keeps the vendor-neutral constraint intact.
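If you’re running the upstream OpenTelemetry Collector rather than Alloy, the equivalent receiver configuration looks roughly like this (a sketch of the OTLP receiver’s CORS settings; verify the field names against the docs for your Collector version):

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
        cors:
          allowed_origins:
            - https://your-frontend-origin.com
          allowed_headers: ["*"]
          max_age: 7200
```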

But it doesn’t answer the harder question.

The real question: Was it worth it?

Buried at the end of the post was this:

“For those who’ve done browser OTEL without Faro — was it worth it vs just using a RUM tool, or did you end up missing the session tracking and web vitals?”

This cuts to the core of where browser OTel is right now.

The raw OpenTelemetry JS web SDK (@opentelemetry/sdk-trace-web and friends) gives you traces and logs. That’s useful. But traces alone don’t tell you how your users are experiencing your site. You also need session tracking, Core Web Vitals capture, click instrumentation, error handling, and user journey visibility. And if you go raw OTel, you’re building all of that yourself.

These aren’t small gaps you can paper over in a sprint. Session tracking alone is a real project. Core Web Vitals (LCP, INP, CLS) factor directly into how Google ranks your site, so they have direct business impact. Not capturing them isn’t an option.
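To give a sense of the scope, even the most naive session tracking involves inactivity windows and ID rotation. Here’s a minimal sketch of that one small piece, purely illustrative and not how any real SDK implements it:

```javascript
// Naive inactivity-based session tracker -- a sketch of one small piece
// of what "build it yourself" means. Illustrative only.
const SESSION_TIMEOUT_MS = 30 * 60 * 1000; // 30 min of inactivity ends a session

function createSessionTracker(now = Date.now) {
  let sessionId = null;
  let lastActivity = 0;
  return {
    // Call on every user interaction or recorded span; rotates the ID
    // once the inactivity window has elapsed.
    touch() {
      const t = now();
      if (!sessionId || t - lastActivity > SESSION_TIMEOUT_MS) {
        sessionId = Math.random().toString(36).slice(2); // real code: a UUID
      }
      lastActivity = t;
      return sessionId;
    },
  };
}
```

Real implementations also have to survive page reloads, handle tab visibility and backgrounding, and attach the session ID to every span and log record, which is why this is a project and not an afternoon.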

So in practice: A team commits to pure OTel, gets traces working, then realizes they need all this other stuff. They pull in a RUM SDK to fill the gaps. Suddenly they’ve reintroduced a vendor dependency and the vendor-neutral goal is gone.

What Embrace & the OTel community are building

There’s active work on a dedicated OpenTelemetry Browser SDK and instrumentations that address exactly this gap: Session tracking, web vitals, click events, error capture, all built on OTel foundations and fully portable. We pushed for a dedicated browser repo separate from Node because browser instrumentation has different enough constraints that it needed its own home. The ongoing effort involves migrating browser-specific instrumentation out of the shared OpenTelemetry JS core and contrib repos into that home.

At Embrace, two of us are maintainers of OpenTelemetry Browser. I work on it directly. It’s the right long-term answer for the ecosystem, but it’s not done yet. If you need to ship something in the next quarter, “it’s coming” isn’t a plan.

If you want to follow the work, file issues, or contribute, come join us! We’re looking for contributors and your feedback helps shape what gets prioritized.

What I recommend today

The Embrace Web SDK is open source and built natively on OpenTelemetry. Not wrapped around it, not bolted on top of it. OTel primitives are the foundation and it behaves the way you’d expect an OTel-first tool to behave.

Two things matter most for a setup like the one in that thread:

  1. The Embrace backend accepts browser traffic directly. You can send from the browser to Embrace without needing a proxy. If you want the gateway for fanning out to multiple backends, that still works. But you’re no longer forced to maintain one just to get browser telemetry to land somewhere.
  2. It accepts custom exporters. You can configure the SDK to send to your Alloy pipeline, to Splunk, to Embrace, or to all three at once. When you want to change backends, you change the exporter config, not your instrumentation code. That’s what vendor neutrality actually looks like: Not just a philosophical commitment, but a concrete architectural property.

For the developer in that thread, that means: Drop the Embrace SDK into the React SPA, keep the Alloy gateway if you want the multi-destination fan-out, and you’ve got full RUM (session tracking, Core Web Vitals, error handling, user journey visibility) without asking the client to compromise on their vendor-neutral stance. The instrumentation stays clean and the backend choices stay yours. Here’s an example sending telemetry to Embrace and a third party like Grafana Cloud.

import { initSDK } from '@embrace-io/web-sdk';
import { OTLPLogExporter } from '@opentelemetry/exporter-logs-otlp-http';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

initSDK({
  appID: "YOUR_EMBRACE_APP_ID", // Send to Embrace
  appVersion: "YOUR_APP_VERSION",
  spanExporters: [
    // Send to third-party
    new OTLPTraceExporter({
      url: 'https://example.com/endpoint/for/traces',
      headers: {
        'Authorization': 'Basic TOKEN'
      }
    }),
  ],
  logExporters: [
    // Send to third-party
    new OTLPLogExporter({
      url: 'https://example.com/endpoint/for/logs',
      headers: {
        'Authorization': 'Basic TOKEN'
      }
    }),
  ],
  defaultInstrumentationConfig: {
    network: {
      ignoreUrls: ['https://example.com/endpoint/for/traces', 'https://example.com/endpoint/for/logs'],
    },
  },
});

SDK source

Custom exporter setup 

You don’t actually have to compromise

I’ve spent my career in web performance and observability, and something still bothers me about how the industry treats browser observability. Most of the big observability platforms were built for servers. When browser monitoring became something teams wanted, those platforms added it as a feature, not a focus. A checkbox in a sales conversation.

In practice that means sampled data and high-level metrics. Not the full picture of what individual users are experiencing. Full-fidelity browser observability, where you can trace a slow LCP back to the third-party script that caused it or connect a frontend error to the backend trace it triggered, has historically been either a specialized tool or a very expensive add-on. Datadog’s full-fidelity offering would cost most organizations around 10x what they’re paying for sampled data. Most teams just accept the tradeoff.

Embrace was built differently. Newer architecture, full fidelity from the start, at a cost that actually works. And because it’s built on OTel natively, it slots into whatever observability stack you’re already running: Honeycomb, Chronosphere, Grafana, or anything else.

Embrace also offers synthetic testing through its recent acquisition of SpeedCurve, the best in the business. This isn’t something you can realistically build yourself. Synthetic testing catches performance regressions before your users do, and combining it with real user monitoring from the same platform gives you the full picture: What your users are experiencing now and what they’ll experience after your next deploy.

The developer on Reddit was asking a practical question about CORS headers. But underneath it was a more important question: Can you have both vendor neutrality and a real understanding of your users’ web experience? You can.

Where to start

If any of this resonates with what you’re building or evaluating, the takeaway is simple: You shouldn’t have to choose between the right architectural decision and actually understanding your users. With the right tooling, you don’t have to.

You can read the original post from r/Observability. I’ll gladly continue the conversation if anyone has thoughts or questions.
