AI isn’t here to replace engineers. It’s here to free them from toil.

Software engineering as a discipline is changing rapidly. 

The ways we code, and the mechanisms through which we build and operate software today look very different than they did even two years ago. We deliver code faster, often relying on more advanced frameworks, platforms, and abstractions to accelerate our development. Our systems are more distributed, and production environments are more dynamic and complicated. 

And increasingly, engineers are working alongside AI tools that promise to accelerate everything from coding to debugging. 

Observability is firmly in the throes of this evolution. And it occupies a unique position: by its nature, observability both influences and is influenced by how organizations build and deploy software with AI. 

At Embrace, our task as a product and engineering team is to define how we will operate in this space in a way that future-proofs us for the transformation our industry is experiencing. 

And then, of course, we must go create it! For Embrace, this has meant building an MCP server to lay the foundation for accessible, adaptable AI workflows for observability – in whatever environment engineers are using.  

Let’s talk about our vision, and the context that brought us here.

AI can reduce observability toil

Before AI even entered the picture, the observability space had already been changing. Much of this was in response to the complexities of modern cloud architectures demanding better, more in-depth monitoring of components across the tech stack. Teams got much better at fixing the simpler performance problems, as shown by crash-free rates going up across the board, and were then asked to solve more complex issues within their apps. 

The industry grew, more tools were adopted, and the telemetry generated exploded. But the value of all of this information is not being fully realized. 

Instead, we see engineering teams drowning in data, dashboards, and alerts. Tool sprawl has become the norm, with teams struggling to correlate telemetry from a web of proprietary systems. Engineers end up spending more time stitching together information across data sets than actually solving problems.

Then there are also the challenges of time and expertise. 

Much of observability work has traditionally been very manual and time-consuming. It assumes deep, often tribal familiarity with the code being observed, and that engineers essentially know what to look for in advance. It also requires them to have the time and cognitive bandwidth to continuously context-switch between dashboards, logs, traces, codebases, and tickets. That is often an unfair and ineffective resource demand. 

In reality, the observability process is full of friction – from balancing an overwhelm of data, to knowing what to instrument in an unfamiliar codebase, to having the time and expertise to get into the depth of analysis where the best insights surface. 

This friction often drains engineers of creative, build-focused time, and it can keep teams reactive instead of proactive. It also limits the real value that observability data could deliver.

AI has enormous potential to change this dynamic, but only if it’s applied thoughtfully.

Our philosophy on AI is that its potential for real transformation is in working with engineers to handle the toil, making room for human creativity and higher-order thinking. And this is the foundation we’re building on.

Building for longevity over hype

The expectations around AI in software products often exceed the reality of what these tools are capable of in the near term, and of how much value they actually provide. 

A chatbot layered on top of an existing platform can never magically replace hard engineering work. At the end of the day, it was never really meant to. 

Implementations of AI that deliver lasting value (not hype) recognize and appropriately leverage the strengths of all the pieces of the puzzle: the data, the pipelines, the models, the agents, and the end-user platforms.

LLMs are limited

Relying too much on Large Language Models (LLMs) for insight into your data, for example, is a pitfall to be avoided. 

While LLMs are powerful, they have real limitations and cannot exist as the final solution on their own – at least not for an observability use case where advanced analysis of, and modeling based on, varied telemetry is the task at hand. 

LLMs are not analytics engines, and are therefore not designed for analysis techniques like clustering, classification, prediction, or causal inference. They excel at synthesizing information, summarizing patterns, and telling coherent stories, but only when the underlying analysis has already been done.
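To make that division of labor concrete, here is a minimal sketch of the kind of statistical grouping that should happen before an LLM ever sees the data: crash messages are normalized into signatures and ranked by frequency, so the model only has to narrate an already-computed result. The regex rules and example messages are illustrative, not Embrace's actual pipeline.

```python
import re
from collections import Counter

def normalize(message: str) -> str:
    """Collapse variable parts (addresses, numbers) so similar
    crashes share one signature -- the statistical grouping step."""
    sig = re.sub(r"0x[0-9a-fA-F]+", "<addr>", message)
    sig = re.sub(r"\d+", "<n>", sig)
    return sig

def cluster_crashes(messages: list[str]) -> list[tuple[str, int]]:
    """Return crash signatures ranked by frequency. Only this
    aggregated, ranked output -- not raw telemetry -- would be
    handed to an LLM for narrative summarization."""
    counts = Counter(normalize(m) for m in messages)
    return counts.most_common()

crashes = [
    "NullPointerException at CartView line 88",
    "NullPointerException at CartView line 92",
    "OOM killed process 0x7f3a21",
]
print(cluster_crashes(crashes))
# -> [('NullPointerException at CartView line <n>', 2),
#     ('OOM killed process <addr>', 1)]
```

The clustering here is deliberately trivial; the point is the ordering of steps, not the technique.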

Reinventing foundational models is not the answer 

At the same time, it is not a good use of resources for most companies to build fully proprietary AI platforms which include data, models, and agents that vertically integrate across every layer of the stack. It’s also largely at odds with the ethos of open, accessible systems that the industry is moving toward. We want to break down silos in observability, not build more. 

Leaning into our own expertise and figuring out how to contribute to the foundation model ecosystem, rather than competing with it, emerged as a much better approach.

That has become our AI philosophy at Embrace. We are not trying to out-model the foundation model ecosystem. We are trying to make every model make better decisions with real production observability context that isn’t available anywhere else. 

This ultimately means meeting customers where they’re at. Different organizations will adopt AI in different ways. Some will build internal agents, some will rely on third-party tools, and others will mix and match across their stack. Being able to use Embrace within their chosen AI workflows – to elevate those workflows – is how we can provide real, lasting value to our customers. 

Our differentiator is our data

Where Embrace excels is in our high-fidelity production telemetry that is grounded in real user behavior.

We capture the myriad of events and conditions that exist and interact with each other in the frontend environment. This is done with a high level of detail and unmatched connectivity to how actual users experience software. 

Using this data as the fuel for advanced analysis that can then feed models and agents downstream is where we operate in the AI-for-observability landscape. 

Essentially, we are building the machine learning and statistical pipelines that transform raw telemetry into meaningful signals. The analysis that generates these end signals must happen before an agent even sees the data it’s going to advise on. That’s because an LLM can’t reliably infer significance, prioritize impact, or detect anomalies without the statistics behind it.
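As a toy illustration of a "meaningful signal" (assuming a simple z-score test, not Embrace's actual models), the statistics determine whether the latest crash rate is genuinely anomalous before any agent is asked to reason about it:

```python
import statistics

def anomaly_signal(rates: list[float], threshold: float = 3.0) -> dict:
    """Flag the latest crash rate if it deviates from the historical
    baseline by more than `threshold` standard deviations -- the kind
    of statistical judgment an LLM cannot make from raw rows."""
    baseline, latest = rates[:-1], rates[-1]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    z = (latest - mean) / stdev if stdev else 0.0
    return {
        "latest": latest,
        "z_score": round(z, 2),
        "anomalous": abs(z) > threshold,
    }

# Daily crash rates (%): stable for a week, then a spike.
print(anomaly_signal([0.4, 0.5, 0.45, 0.42, 0.48, 0.44, 1.9]))
```

Only the resulting structured signal (`anomalous: True`, plus context) needs to reach the agent.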

Once these signals are handed off to AI agents via integrations, the agents can play to their own strengths: synthesizing them into narratives that engineers can act on.

This approach lets us help teams actually surface real insight and prioritize the problems they’re solving, rather than just churn out a list of issues from simple queries. The way we’re doing it is via an MCP server.

MCP is a strategy that scales

For reasons we discussed above, we knew we didn’t want to reinvent AI models or force customers into a proprietary workflow.

Instead, we wanted to (1) empower our customers with real signals from data they couldn’t get anywhere else and (2) fit interoperably within their broader AI architecture. 

That’s why supporting the Model Context Protocol (MCP) was the right approach for us.

MCP provides standards and guardrails for how context flows between systems. Much like OpenTelemetry standardized telemetry collection, MCP standardizes how rich, structured context can be shared between tools and agents. The result is alignment across the ecosystem, and maximum benefit for our customers.

Rather than “just another AI feature”, MCP is a foundation for how our customers can extract the full value of their app performance data through agents and other tools. 

It serves as:

  • A protocol for context
  • A bridge between tools
  • A way to make observability usable outside dashboards
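Concretely, MCP builds on JSON-RPC 2.0, so a tool invocation and its structured reply look roughly like the messages below. The tool name `get_trending_crashes` and its arguments are hypothetical, shown only to illustrate the message shape the protocol standardizes:

```python
import json

# A hypothetical MCP tool invocation (JSON-RPC 2.0 shape).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_trending_crashes",   # illustrative tool name
        "arguments": {"platform": "ios", "window": "24h"},
    },
}

# The server replies with structured content any MCP client can consume.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {"type": "text",
             "text": "Top crash: NullPointerException in CartView"}
        ]
    },
}

print(json.dumps(request, indent=2))
```

Because both sides speak this shared shape, any MCP-aware agent or IDE can call the same tools without bespoke integration work.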

This is essential in today’s workflow, where engineers are increasingly using varied AI systems and tools across the lifecycle of development, deployment, and monitoring. These include things like code assistants, autonomous and semi-autonomous agents, CI automation processes, and GitHub workflows. 

The best way we can provide value for our customers is to make production-grade context accessible across these tools and workflows, wherever it is needed and with the right level of processing.

To visualize this more concretely, think about what the new engineering assembly line will soon look like through the lens of monitoring and observability: 

An engineer asks an assistant about trending production crashes. The agent pulls representative data from Embrace via MCP and correlates it with recent code changes. It proposes a fix, opens a pull request via GitHub, documents the change, and updates downstream workflows. 

This all happens without the engineer manually stitching systems together. It’s a seamless, interconnected system that relies on immense amounts of data having already been churned behind the scenes. The engineer directs the process and approves the actions, but doesn’t need to spend hours operating in the weeds. 
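The assembly line above can be sketched as a short orchestration loop. Every class and method here is an illustrative stub standing in for an AI assistant, an MCP client, and a code host; none are real Embrace or GitHub APIs:

```python
from dataclasses import dataclass

@dataclass
class Crash:
    signature: str
    count: int

class McpClient:
    def call_tool(self, name: str, arguments: dict) -> list[Crash]:
        # Stub: a real client would send a JSON-RPC request over MCP.
        return [Crash("NPE in CartView", 312)]

class Assistant:
    def correlate(self, crashes: list[Crash], commits: list[str]) -> list[str]:
        # Stub: match crashes against recent code changes.
        return [c for c in commits if "CartView" in c]

    def propose_fix(self, suspects: list[str]) -> str:
        return f"revert {suspects[0]}" if suspects else "no fix found"

def investigate(assistant: Assistant, mcp: McpClient, commits: list[str]) -> str:
    """The workflow from the text: pull crash data via MCP, correlate
    it with recent changes, and propose a fix for human review."""
    crashes = mcp.call_tool("get_trending_crashes", {"window": "24h"})
    suspects = assistant.correlate(crashes, commits)
    return assistant.propose_fix(suspects)

print(investigate(Assistant(), McpClient(),
                  ["abc123 CartView refactor", "def456 docs"]))
# -> revert abc123 CartView refactor
```

The key design point: the proposal is the output. The engineer stays in the loop to review and merge; nothing is auto-applied.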

Reducing friction everywhere

MCP isn’t just about data access, however; it’s about eliminating friction across the entire observability experience – as the prior example alludes to. 

Any engineer involved in observability will have made a decision, at some point in their career, to skip potentially valuable work – not because the work is not important, but because the activation cost is too high. Instrumentation that “would be nice to have” to enrich an app’s telemetry, for example, or features that unlock deeper insight but require setup time simply do not get utilized. Analysis that could surface critical patterns gets skipped over because it feels too expensive to even attempt. 

When engineers have to constantly balance feature work and customer requests with observability, the latter is often shelved because the tradeoff is simply not worth it. 

Implementing successful AI workflows anchored to an MCP server can change that equation. Imagine being able to ask your agent to run a checklist on your Embrace-recommended integration goals, then have it tell you what you’re missing, and then run the instrumentation for you automatically.  
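A sketch of the first step of that checklist workflow, assuming a hypothetical set of recommended instrumentation items (these are illustrative labels, not Embrace's actual checklist):

```python
# Hypothetical recommended instrumentation checklist.
RECOMMENDED = {"network_spans", "startup_trace", "log_capture", "user_flows"}

def instrumentation_gaps(present: set[str]) -> list[str]:
    """Return missing checklist items, sorted for stable output.
    An agent could run a check like this, report the gaps, then
    generate the missing instrumentation for the engineer to review."""
    return sorted(RECOMMENDED - present)

print(instrumentation_gaps({"network_spans", "log_capture"}))
# -> ['startup_trace', 'user_flows']
```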

As the activation cost of this type of work gets lower and lower, entirely new categories of analysis become feasible. Customers can discover value they didn’t even know was available in their data, and even in the Embrace platform itself as the MCP server bridges the gap between their IDE and pages on the dashboard.

AI handles repetition, humans make decisions

Our MCP strategy hinges on the core belief we hold about the future of AI in engineering. AI is not here to replace engineers; it’s here to free them from the manual, time-consuming work that creates a barrier between engineering potential and engineering reality.  

AI tools and systems are best equipped to handle the grind: manual triage, repetitive analysis, instrumentation checks, and surface-level investigation. That leaves the judgment, creativity, expertise, and decision making to human engineers. 

When engineers are freed from toil, they can focus on designing better systems, solving harder problems, and building software that truly serves end users.
