Holiday Sweaters & OTel Insights: Recapping our “2025 Unwrapped” panel

Watch this OpenTelemetry expert panel to learn about key advancements in the space and what’s on the horizon for 2025. And to see cool holiday sweaters.

Recently, a jolly group of OTel experts and enthusiasts gathered to discuss what we loved about OpenTelemetry in 2024 and what we're most excited about for OTel in 2025.

Well, that was part of it.

We also wore some fun holiday sweaters and ran polls to learn about everyone’s favorite holiday traditions. I was surprised at some of the findings, including:

  • The audience’s favorite holiday drink was mulled wine.
  • The audience’s favorite holiday song was “All I Want for Christmas Is You.”

I was definitely even more surprised by one of the panelists wearing a Popeye's sweater (yes, Popeye's, the fried chicken establishment) as a holiday sweater. But rest assured, there is an explanation.

Wherever you are on your OTel journey, there’s something for you in this wide-ranging discussion. Here’s just a small sample of topics we covered:

  • How the end of the zero interest rate phenomenon is forcing change in observability and platform engineering strategies.
  • What observability 2.0 REALLY means, and why OpenTelemetry plays such a big part.
  • Key OTel improvements in developer experience, including SDK stability work, the new file config schema, and the growth of libraries using OTel-native instrumentation.
  • OTel innovation happening in CI/CD observability and better interoperability with Prometheus.
  • Improving OTel support for client-side observability with entities and resource providers.

Check out the full panel below, and scroll past the video to see a few of the best quotes from our discussion as well as the full transcript. And, of course, happy holidays and an even happier new year!

Watch the full video here

Key quotes from the panel

Here are a few of the best quotes from the panel:

Hazel Weakly

  • “I decided to do the thought leadership thing, because you can’t be the only one doing it, Charity.”
  • “We talk about observability, and then what? Right? Nobody answers the ‘and then what.’ You just say, you spend a bunch of money, you do nothing, and then what? And I’m like, well, the ‘and then what’ has to be the whole business can act on it. Like, that’s why I made my definition [of observability], you know, ‘the process in which one develops the ability to ask meaningful questions, get useful answers, and then act effectively on what you learned.’ And if you can’t actually do the ‘act effectively’ part, you’re not really getting real value from observability.”
  • “Why isn’t all the data in the context of the entire company in one area? Why are we separating our business intelligence from our OpenTelemetry? Why are we separating our BI from our marketing, from our sales people, from our product people… why are all these people in different silos with different tools and different data buckets? Why can’t we think of it as a lakehouse architecture and stick everything together and then be able to correlate everything in the entire company? Understand the business from the perspective of business.”

Charity Majors

  • “The whole promise and dream of OTel is for you to stop having to do so much custom, bespoke, toil-ridden work when it comes to instrumentation and telemetry, and it’s really exciting to see that starting to pay off.”
  • “I feel like the borders of tools are what so often create silos, right? You’ve got all these teams who have their own source of truth. They spend all their time kind of debating the nature of reality instead of collaborating on solving a problem. When in fact, almost all of the really interesting questions that you want to ask are some blend of application, systems, and business stuff.”
  • “The number one thing that most companies can do to have better engineering outcomes is to shorten the amount of time from when you’re writing the code and when it’s in production. And CI/CD is that link.”

Dan Gomez Blanco

  • “We’re starting to see that now in many of the open source libraries that are now natively instrumented with OTel, is that, hopefully, in the future, we can move away from that concept of agents and instrumentation libraries that will add telemetry on top of all their open source libraries into a world where that telemetry comes out of the box from those libraries.”
  • “We were talking about understanding the business and understanding a complex distributed system and how all that starts on the client side. If you don’t start there, what are you doing? I think when we, because we’ve done this in the past in the backend where, everyone followed the Google SRE book, and they used to get an SLO per service. It’s like that SLO is meaningless if it doesn’t link back to what the user is experiencing, right? So when you think about your SLOs, you’re thinking about, so what is your actual end-user experiencing, and then connecting the dots all the way down to your backend.”
  • “The bit that got me excited in 2024 was seeing, for example, the stability of OTel becoming a thing. You’re making it easier to provide OTel in an easy way for everyone in your company, in your organisation. So if you run a platform engineering team, things are becoming a lot easier. That’s, like, stabilization of SDKs and also of the Collector itself.”

Adriana Villela

  • “We need to have observability of our CI/CD pipelines, right, because that’s the stuff that puts our shit into production. And if we don’t know what happens when something fails, well, we’re gonna be scratching our heads forever and ever and ever.”
  • “Observability has the ability to help us create more sustainable systems.”
  • “There’s still a lot of folks out there who have separate systems for sending their different signals. And I mean, honestly, it defeats the purpose of what we’re trying to accomplish with OTel, because one of the powerful aspects is that correlation of the three main signals, right, the traces, the metrics, and the logs. But if you’re sending logs here and traces here and metrics there, then what are you really getting out of it? So, it’s been interesting seeing that those conversations are still happening. And I hope that, as we get into 2025, a lot of vendors out there, I think, have come to the conclusion that we need this unified observability.”

Hanson Ho

  • “Performance and observability is not really a thing that people [working in mobile] worried about before. People wanted new features. They wanted, you know, flashy new screens. I call it a vibes-based prioritization and stack ranking. And, you know, working in this field, you have receipts. You can show what your numbers can do in terms of helping debugging, finding out problems, reducing costs. So spending a little bit of money here makes you more money down the line.”
  • “We talk a lot about unified observability, the braid, context. All that is great. All that sounds really resource-intensive running on a really cheap device that was made eight years ago. So how do we do both curation on the event side, but context in the braiding side? How do we do that successfully? Things emerging in OTel that’s really exciting is the entities, and the resource providers, allowing us to pack in context in a dense way so that we could do what we want without having to selectively sample or decide how many of these context changes we model.”
  • “At the end of the day, people use apps to do stuff. Are people getting shit done? You should measure that. If you don’t know if performance matters, well, are people getting shit done? If people are getting shit done, great.”

Full transcript

Colin: Hello everyone, and welcome to today’s panel. You’re looking at several holiday adorned people, and we are so excited to do today’s panel: Holiday Sweaters and OTel Insights. I am Colin Contreary. I’m the head of content at Embrace. I will be today’s moderator. We’ve got a wonderful panel of OTel experts here who are just itching to share what they loved about OTel in 2024, what they’re excited for in 2025. Some of them might also be itching because of their sweaters.

Alright, so, everyone, what is the best holiday drink? So, please put your answers here. And while you’re answering it, let’s go around and have the panelists introduce themselves and share a fun tidbit. Perhaps the origin story of their sweater or their hot take on the best holiday drink. So, let’s start with Hazel. Hazel, can you introduce yourself?

Hazel: Hey there, my name is Hazel. I have thoughts, lots of thoughts, and I never stop thinking. They never stop thinking. So my hot take would be I love eggnog. But increasingly in recent years, it has stopped loving me back, and I don’t really care. It can be a one-sided relationship. It’s fine. But if I need to function afterwards, I’ll go to mulled wine.

Colin: Nice. Let’s go next to Charity.

Charity: My name is Charity Majors, co-founder and CTO of Honeycomb. I’m partial to hot toddies. I would like to make sure that everyone has on their calendar, January 11th is National Hot Toddy Day.

So, you know, I would be drinking spiced wine if it wasn’t 10AM, because I really try to, like, celebrate hot toddy day every year. And my sweater, for those of you who live in San Francisco or the Bay Area, I can’t move my head out of frame or I’ll disappear, but you might recognize this map. It’s the map of the BART, and there’s headlights. And you can’t hear it, I don’t think, but when I push the button on my sweater, you hear the BART train entering the tunnel. Boop. Boop. Boop. Boop. I am so excited for this theme because this is the first time I’ve ever gotten to wear this sweater. So thank you.

Colin: Awesome. That sounds awesome. Dan, how about you go next?

Dan: Hi, I’m Dan. I’m a principal engineer at Skyscanner, and I lead up observability, and I’m also part of the OpenTelemetry Governance Committee. And, I think my favorite holiday drink would be, I don’t think it’s in the list. It’s mulled cider, which I think is different to apple cider, because it’s alcoholic. It’s very popular in the UK. You get mulled wine, you get mulled cider. I do like a glass of mulled cider. I do not have that now here, this is not alcoholic. But, yeah. And my jumper, I think you only get to see taters.

But, like, for any Lord of the Rings fans, you will see that this is the trifecta of Christmas, Lord of the Rings, and potatoes. So you need potatoes for Christmas as well. Can’t have Christmas without potatoes. So that’s me.

Colin: That is awesome. Yeah, I was struggling with whether to put a bunch of alcoholic options in the holiday drinks or not. We’re gonna see where people land. But, Adriana, can you introduce yourself next, please?

Adriana: Yeah. My name is Adriana Villela. I am a principal developer advocate at Dynatrace. Joined, like, a month ago. So I actually work with Dan in the OTel end user SIG. We’re both maintainers there.

And, so my favorite drink for holidays, I would say, would be cider, but for all the time is bubble tea. You can bubble tea all the days. Those who know me know I’m addicted. Also, my holiday sweater is Clippy. I saw someone wearing one of these at a conference somewhere a couple of years ago, and I’m like, this is, like, the best sweater I have ever seen. I must have it. And I asked my husband to get me this sweater, I think, for Christmas, like, two Christmases ago, and he delivered. So yes.

Colin: Gotcha. Awesome. Now, Adriana, you also have to get this sweater worn while you can because the kids these days, they’re not gonna know who Clippy is.

Adriana: I know. I know. Right? So many fun memories of Clippy.

Colin: Yeah. And let’s meet our final panelist. Hanson, can you introduce yourself?

Hanson: Sure. My name is Hanson Ho. I am a mobile performance and observability sicko here at Embrace. My favorite drink is, holiday drink is hot chocolate, but it’s gotta be mint. Mint chocolate is something else. And my sweater is Popeye’s. Ignore the burgers. And it’s Christmas-related because I got it at a White Elephant gift exchange a few years ago from my former colleague, Eric Rosenberg, who loves burgers. So, and I love Popeye’s, so this is, like, the perfect melding of the sweaters.

Colin: Interesting. Popeye’s gift at a White Elephant. It’s the first I’ve heard that. Alright. Thanks, everyone, for introducing yourselves. We’ve wrapped up the poll. Let’s see what the people say… mulled wine was the winner. I think we had, Dan, you said mulled wine, right? All right. Mulled wine. All right.

Well, let’s get into the meat of this wonderful panel discussion. So, I’d like to start with Hazel. So, Hazel, what did you love most about OTel in 2024, and what are you most excited for in OTel in 2025?

Hazel: So the thing that not necessarily that I loved the most about OTel, but I think it defined OTel in 2024, which was the end of the zero interest rate phenomenon. The huge question everybody had in infrastructure, all the engineering leaders have this uniform question for the singularity that I’ve never seen, which is, how do you get that return on investment?

How do you show the value? How do you ratchet back the costs? That cost management, that ROI, that, okay, we have this giant bill. Is it useful? That question, I think, more than anything else, really defined so many things in 2024.

And so for me, that changed a lot of platform strategies because for a lot of people, you could just roll it out. For the first, like, last ten years, you can just roll things out. You can say, here’s a giant wad of cash, let’s worry about it later. Let’s now go from zero to one, or one to two, or two to three, and they don’t even worry about figuring it out. And now you have to go all the way up the maturity ladder and start at five.

And that changes the whole concept of the maturity ladder. Like, how do you even do that? That’s 2024 for me. And that really, really changed the game. People started having to go, well, I don’t wanna be an OTel expert, but I kinda have to be to figure this out.

And now everybody in the industry is kinda struggling with that.

Colin: Nice. Alright. Well, let’s dive into that a little bit. I see a lot of head nodding. Charity, did you wanna chime in a bit on this? Oh, you’re muted.

Hazel: You’re muted.

Charity: I flashed my own sticker at everyone. You are muted. The end of the zero interest rate phenomenon is, like, the best thing that’s happened to engineering in a very long time. I know there’s a lot of sort of painful corrections going on, but, like, it’s been, like, free money. Like, it’s not real growth. It’s inflationary. It’s not real.

Like, cost is a factor of architecture, right? Having to ground what we do and what we build in real results is what engineering is all about. And we really got kind of disconnected from that for a solid decade there. So I, like, I am here for this painful yet inevitable disruption.

Hazel: It’s like when you have to answer a quiz and all the answers are right, and you don’t even get to learn anything. But when you actually have this forcing function of, oh, yeah, some answers can be wrong.

Charity: Yeah. Costs are real. Costs and failures are how we move the industry forward.

Dan: But I think on the cost factor as well, I think some of the things that happened during the COVID pandemic as well, affected, you know, my sort of, like, area of, of, you know, like, travel tech, you know. In travel, like, we went through a period of, like, hey, we need to tighten our belts. We actually cared about costs more than we ever did before because we’re at, like, a really low percentage of our revenue. So now we need to optimize wherever we can.

And at that time is when we were, like, adopting OTel as well. And one of the things that I found out is that, you know, something that people sometimes, you know, like, don’t think about too much is, like, saving costs by decreasing the quality of your observability. Where I think it’s actually probably the opposite, which is, like, you can get better observability at a lower cost. If you use, like, you know, each signal the way it’s supposed to be used, and you don’t have a bunch of logs that you never look at. But it’s, like, that’s basically the bit where, like, you can improve your observability and as well decrease your costs.

That’s the sort of, like, ROI aspect that I think it’s so important and something that we’re getting better at.

Hanson: Yeah. And in mobile it’s especially important. Performance and observability is not really a thing that people worried about before. People wanted new features. They wanted, you know, flashy new screens. I call it a vibes-based prioritization and stack ranking. And, you know, working in this field, you have receipts. You can show what your numbers can do in terms of helping debugging, finding out problems, reducing costs. So spending a little bit of money here makes you more money down the line.

I am so glad we’re talking about this because I published a blog post last night, and a third of it is about this stuff. And I’m glad it got out there yesterday because, these are very, very similar words.

Colin: Nice. Is there anything about some of these like, Dan, I’d love to know a little bit more. You mentioned at Skyscanner, y’all having to reevaluate, like, the tooling you did, basically, when COVID happened in this. I know you previously mentioned you’re interested in OTel changing these platform strategies. Is there anything, like, you can share about what y’all did specifically at Skyscanner recently with OTel?

Dan: I think it was, I mean, even adopting OTel and also, like, keeping everything vendor neutral. And so, like, the most important part is that as you approach that from the platform engineering perspective, you want to provide a unified observability platform to the rest of your company, right?

And, that didn’t look unified at all before OTel. It was, like, twelve different systems or eighteen, or I don’t know how many. But that was the aspect that we went with, that was the mindset that we approached this with. It was as much, like, improving our observability as it was simplifying our platform. And so with that simplification came, you know, cost savings in infrastructure, cost savings in maintenance toil as well, and in the number of, like, engineers that are, you know, maintaining custom abstractions, for example, on top of telemetry clients.

All that basically, simplified, at the same time improving the observability that was, like, in a way, it sold itself. You know, when it was, like, you know, I was, communicating the long term strategy in a time of, like, in a challenging time for the travel industry. It sort of like sold itself. It was like, we can run this cheaper and get more out of it. Yeah. I didn’t have many people say, no. Let’s not do this. So, that was good.

Adriana: I think one thing that you touched on that I think we should hopefully start seeing more of, you’re talking about, like, unifying things. There’s still a lot of folks out there who, like, have separate systems for sending their different signals.

And I mean, honestly, it defeats the purpose of what we’re trying to accomplish with OTel, because one of the powerful aspects is that correlation of the three main signals, right, the traces, the metrics, and the logs. But if you’re sending, you know, logs here and traces here and metrics there, then what are you really getting out of it? So, it’s been interesting seeing that those conversations are still happening. And I hope that, you know, as we get into 2025, like, a lot of vendors out there, I think, have come to the conclusion that we need this unified observability, right?

You know, Ted Young talks about it, instead of the three pillars, it’s the braid of observability. Like, really making sure that, it would be nice to see, like, more vendors taking advantage of it, but also the practical benefit that we have vendors offering unified observability platforms.
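
To make Adriana’s point about correlation concrete, here’s a minimal Python sketch of what tying a log line to the active trace looks like. It assumes an OTel SDK and exporter are already configured elsewhere, and the tracer name, function, and log message are made up for illustration; in practice, an OTel logging bridge or instrumentation can inject these IDs for you.

    import logging

    from opentelemetry import trace

    tracer = trace.get_tracer("payments")  # hypothetical instrumentation scope

    def charge(order_id: str) -> None:
        with tracer.start_as_current_span("charge") as span:
            ctx = span.get_span_context()
            # Stamping the active trace and span IDs onto the log record is what
            # lets a backend join this log with the spans and metrics from the
            # same request, instead of treating them as three unrelated streams.
            logging.info(
                "charging order %s",
                order_id,
                extra={
                    "trace_id": format(ctx.trace_id, "032x"),
                    "span_id": format(ctx.span_id, "016x"),
                },
            )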

Charity: Unified is one of those words that, you’re right, all of a sudden, everyone’s saying it. And you gotta ask some follow-up questions there because for some observability vendors, what they mean is you have one unified bill.

Adriana: Yeah.

Dan: That’s very true.

Charity: Others get another step down the line. They’re like, you can have it all in one unified visualization. You can get it all on the same pane of glass. But the superpower is unified storage. Not storing again and again and again and again and again with few or no, like, predefined, you know, sort of joins, but, like, being able to slice and dice, and zoom in and zoom out, and derive insights and, like, go down to precision tooling, you know, being able to, like, store it in this rich wide sense.

And this is where I think a lot of people are like, oh, but OTel doesn’t support that. It is false. Under the hood in OTel, everything is an event. And this is where, I think, this year, we started to talk about, like, sort of the generational gap, the generational difference between the sort of three pillars world where you had all these different storages and the sort of unified world, which I think of as 1.0 and 2.0. OTel does both. It’s so exciting. And I feel like so much of this work actually gets done for you under the hood as you’re adopting OTel, which is, like, the whole promise and dream of OTel is for you to stop having to do so much custom, bespoke, toil-ridden work when it comes to instrumentation and telemetry, and it’s really exciting to see that really starting to pay off.
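
As a rough illustration of the “everything is an event” model Charity describes, here’s a sketch of a single wide span carrying application, infrastructure, and business context together. The attribute names and values are invented for the example, and it assumes a tracer provider is configured elsewhere; the point is simply that one event per unit of work can hold many high-cardinality fields you can slice and dice later.

    from opentelemetry import trace

    tracer = trace.get_tracer("checkout")  # hypothetical instrumentation scope

    def checkout(cart, user):
        # One wide event per unit of work: pack in everything you might want
        # to query by later, including high-cardinality fields.
        with tracer.start_as_current_span("checkout") as span:
            span.set_attribute("user.id", user.id)
            span.set_attribute("user.plan", user.plan)
            span.set_attribute("cart.item_count", len(cart.items))
            span.set_attribute("cart.value_usd", cart.total_usd)
            span.set_attribute("feature_flag.new_pricing", True)
            span.set_attribute("deploy.build_id", "2024-12-18.3")
            ...  # do the actual work, adding more attributes as you learn things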

Dan: Definitely.

Colin: Nice. You jumped me a little, Charity, because I was gonna send it your way now for your–

Charity: Oh, I’ve got more to say about that.

Colin: Yeah, so let’s go to Charity. Charity, what did you love in 2024, and what are you excited for in 2025 in terms of OTel?

Charity: Yeah. So one of the things that made me really excited in 2024 was, you know, funny, we’ve been out here kind of, like, yapping about a lot of this stuff for a long time. And 2024 was the year where I feel like startups stopped recreating Datadog over and over and over. Just cheaper Datadog, cheaper Datadogs, which Datadog is a great company. They make a great product. They are the last, best, strong– They’re incredibly mature. You know, I’m not trying to talk shit about Datadog, but there’s a lot of, like, me-toos out there just like, “We’re a cheaper Datadog.”

2024 is the year that I feel like observability startups started to hone in on a different sort of unified wide events. A lot of these are ClickHouse-based. You know, I feel like there are always going to be use cases for, like, very metrics-heavy tools, you know, for three pillars-y tools. But I do feel like the sort of gravitational center is shifting in the direction of observability 2.0, specifically around most of the stuff around infrastructure. Infrastructure is commoditized. Most of us shouldn’t be spending a lot of our time worrying about infrastructure.

Our time and focus and attention goes to our crown jewels, the code that makes us us, right? And that’s what observability 2.0 is so much about. It’s about understanding in a really rich detailed world the code that makes you you. And, whereas I feel like the key difference between 1.0 and 2.0 is how we unify the storage, but there are so many ripple effects and ramifications.

And one of my favorites is that I feel like observability 1.0 land was very much about how you operate your code. You know? Errors and outages and downtime and MTTR and MTTD and bugs and just, like, always focus on the sort of the dark side, right? But observability 2.0, while it encompasses those things, it’s much more about just understanding the impact of what you’ve done.

So it’s understanding what you’ve created and how it’s reaching people in the real world, that sort of intersection of code and users and production. Sometimes it’s like, “Oh my god. How are people using it? They’re using it like that? I didn’t predict that. This is interesting. Let’s pull on this thread. Let’s tinker with some of that. Let’s try some of these things.”

You know, sometimes it’s just understanding. There’s so much to understanding that is more than just fixing bugs and, like, and nines of uptime. So I feel like observability 2.0 is like the substrate that helps you connect and tighten these feedback loops between when you’re writing the code and you’re understanding.

And so that, you know, I use this metaphor. I’m super blind. I need to wear glasses, you know? And you put your glasses on before you go barreling down the freeway. Because otherwise, you’re like, oh, shit. You know? You’re just, like, swerving. You spend all your time, like, you know, course correcting or hitting things.

It shouldn’t feel like that. You know, when you’re building, you shouldn’t be doing the [Charity mimes driving erratically]. It should feel like you’re just building. You’re just driving, you know? You’re just building and creating. And so I’m really excited. I feel like this is unlocking just waves of not only value, but happiness.

Colin: Nice. Yeah. And I saw a lot of head nods, so I think a lot of people agree. Hanson, I know you are big on this. I saw a lot of head nodding from you. So, tie goes to the most head nods. Hanson, do you wanna chime in?

Hanson: Mobile needs so much context when it wants to understand how things are operating and context and having events that are related to each other. Having that potentially relate to not only things happening during that short period of time, but longitudinally. You know, if you have bad performance now, it may affect how you use it in the future. All this is really difficult to capture if you’re just basically capturing just basic metrics. The context and the changing context and how those context changes allows you to basically say what matters to users when things change.

And the ability to capture that will greatly enhance the usability and the power of mobile observability, and we need this. In 2025, let’s go. All the context, all the time. Be careful about how many events.

Hazel: Yeah, I cannot agree more. And so, like, one of the big differences that I think about with observability 1.0 versus 2.0 is, like, a 1.0, as Charity mentioned, you make a lot of decisions at the write time about what data you want, what data you need, and how you’re going to use it. But with observability 2.0, you make a lot of decisions at write time about what data you wanna correlate. And so you can’t change your mind later.

You can’t go back and say, oh, I wanna correlate this now. You have to pick it at write time. But you get to pick what data you wanna correlate, and that has so much flexibility. But with mobile specifically, it’s so interesting because mobile, I think, is one of the best examples of where OpenTelemetry at the underlying layer is flexible enough to really support mobile super well. But the actual endpoints and the actual APIs and SDKs that we give mobile right now are a little painful to work with.

[Looks at Hanson] You’re laughing because we’ve talked about this. And the fact that you had to build this funky snapshot thing and essentially embed the concept of a write ahead log in between where you send the data off and where you get the data, just like in this magical little thing, you have this write ahead log in order to create the ability to have a distributed transaction because OpenTelemetry spans, or distributed transactions, and those don’t work very well with the lossy, you know, one end. And so to me, that means that, like, in 2025, what I’m really looking for is I want us to think of, like, the OpenTelemetry pipeline less as a separate magical little thing in the corner. Like, we already think of it as, oh, we have all of our OpenTelemetry, all of our observability in one area. Huge improvement.

But why is it not all of the data in the context of the entire company in one area? Why are we separating our business intelligence from our OpenTelemetry? Why are we separating our, you know, BI, from our marketing, from our salespeople, from our product people, from our like, why are all these people in different silos with different tools and different data buckets? Why can we not just actually think of it as, like, a lakehouse-type of architecture and stick everything in there and then be able to correlate everything in the entire company? Understand the business from the perspective of business.

Adriana: Yeah, I love that. And it plays into the fact that I think really from the beginning, we’ve been treating, like, observability as, like, this little peripheral thing. And it’s like it’s part of, like, a bigger thing, part of reliability. It’s part of CI/CD. It’s part of development. Like, let’s just make sure we give observability the love that it deserves.

Charity: Yeah, I am so excited to hear– I think one of the other big shifts between 1.0 and 2.0 is this idea that, you know, historically, with telemetry, context has been so expensive that you have to think, “Oh God, can I afford to add this attribute?” Most observability engineering teams spend an outright majority of their time managing cardinality. They should be called cardinality engineering teams because it’s like, “Oh God, can I afford to stick this much data–” Which is so backwards, right?

But, like, now that I think that the 2.0 model makes cardinality cheap and effective, and you can have as much, rounding up to about as much as you want of it. Now we can pack this stuff in, right? I feel like the borders of tools are what so often create silos, right?

You’ve got all these teams who have their own source of truth. They spend all their time kind of debating the nature of reality instead of collaborating on solving a problem. When in fact, almost all of the really interesting questions that you want to ask are some combination, blend of application, systems, and business stuff. Austin Parker has this great blog post that he wrote almost a year ago where he’s listing out a bunch of questions like, “What is the cost of goods sold per request per customer with real-time pricing of resources?” Right?

Like, “What’s the relationship between system performance and conversions by funnel stage, by geo, by device, by intent signals?” Right? You can’t separate these things. And it gets, like, dramatically more powerful the more of this context you can bundle together because your ability to correlate things goes off the charts.

Hazel: It’s funny you mentioned that blog post because that one was actually a response to the one that I wrote.

Charity: I know.

Hazel: I decided to do the thought leadership thing, because you can’t be the only one doing it, Charity. I decided to do the thought leadership thing and redefine observability. But really, like, we talk about observability, and then what? Right?

Nobody answers the “and then what.” You just say, you spend a bunch of money, you do nothing, and then what? And I’m like, well, the “and then what” has to be the whole business can act on it. Like, that’s why I made my definition, you know, “the process in which one develops the ability to ask meaningful questions, get useful answers, and then act effectively on what you learned.”

And if you can’t actually do the “act effectively” part, you’re not really getting real value from observability. If you can’t ask meaningful questions, you’re probably on observability 1.0.

Charity: Yeah.

Hazel: And if you, you know, have useful answers somewhere, you have some observability, right? But that third one, to act effectively. We still don’t really have that. People don’t ask about that. They don’t talk about that. And the rest of the business doesn’t even notice what we’re doing, and we haven’t tried to help them. And I think that’s a failure of engineering.

Charity: So much of what I hope to see in 2025 and I really feel like we’re kind of in the twilight of the DevOps area because people don’t have dev teams– like, the the philosophy is eternal– but people don’t have dev teams and ops teams who are trying to communicate and collaborate, right? Now we have engineers who write code and own it in production, which is great.

So, like, what’s the new frontier? You know, Christine and I have been talking a lot about this, and, like, our new company mission is all about, you know, explaining engineering in the language of the business. If you look at, like, C levels, the only C level role that there is no template for is CTO.

They’re all over the fucking map, right? And if you look at, like, the core group of execs, VP Eng is just rarely in it. Like, they’re kind of second-tier junior varsity, you know? And it took me a long time to figure out why this was, and I think it’s because the way exec teams function – functional exec teams function – is that’s your one team. You co-own all major decisions about where to invest company resources: time, energy, money. And if you can’t co-own a decision effectively, you know, and this is the problem with engineering, right?

Engineering are kind of the artistes of the company. They do what they do, and, you know, the rest of the exec team really struggle to co-own those decisions. They kind of have to take it on, “Because I say so” or “Because I have a track record of being more or less right in what I say.” You know? That will not stand, you know?

Colin: Now before we go further, because we’re getting hot and heavy, but it’s time for another hot poll, and we can continue the discussion. Let’s launch the next poll. Let’s learn some more holiday tidbits. And then as everyone’s answering the poll, Hazel, take it away.

Hazel: So the thing that really gets me about, like, the artist thing and the CTO thing, I have so many times explained to people, can you just imagine a board meeting? And then, you know, the marketing person comes up and says, “Marketing is an art. This headcount, I can’t do an… don’t ask me to…” You would fire them, right? They can break it down into the numbers.

The sales, like, chief salesperson, they can break it down into numbers. They can hit this art, this beautiful, intangible, human thing, and they can give you spreadsheets and numbers, and then the CTO walks in the door ten minutes late, wearing hoodies and baggy jeans, smoking a blunt, and they’re just like, “Yeah, I’m gonna need 20% more headcount year over year. I’m gonna do this maintenance thing. We’re gonna turn the numbers into numbers, man.”

What do you want me to explain? I can’t explain this. “It’s an art, man. Just don’t mess with the art. The art.” And for 20 years, we’ve allowed CTOs to literally just run around and do this. And never actually understand–

Charity: Hey. Hey. Hey. You’re coming after my job now, Hazel. Come on. Just kidding.

Colin: That was awesome. Hazel, I need to see you do more voices.

Charity: If I could say one corollary to that, though. It’s not that this is all engineering’s fault, because the rest of the business, too– Like, so many things about running an effective R&D org sound crazy. They’re counterintuitive, right? Like, the fact that, putting, you know, your code on, like, a two-week holiday or two-month holiday freeze, is actually really bad for reliability. A lot of the stuff is not intuitive, right?

And so your CEOs, the rest of your exec teams, they gotta at least, like, read Accelerate, right? Because they’ll do so much unintentional harm. And I’ve seen, like, sales execs be like, “Why isn’t engineering held to this? We’re held to numbers. We’re held accountable. Why is it that they don’t get punished when they should–”

And it’s like a different discipline. There does need to be accountability. There does need to be the shared understanding and ownership of decisions. But it’s not all engineering’s job to come and explain it on business– because there are differences, right? And so I feel like this is something where everyone’s got something to level up on for us to get to the glorious future.

Colin: Nice. Awesome. And we went through the whole poll, and I’m just curious really quickly if any of y’all, your favorite is one of the answers. The winning one was “All I Want for Christmas Is You.” Interesting.

Adriana: What happened to the Paul McCartney Christmas song, y’all?

Colin: You realize how many Christmas songs there are when you try to put a poll that has five, or six options. Let’s now turn to Dan. Dan, I would love to get your thoughts on what you were most excited about in 2024 for OTel and what you’re looking forward to in 2025.

Dan: Okay. So, I think 2024, I’ll probably have to go with, in general, like, making platform engineering a bit more stable, a bit easier. I think we’re seeing, like, platform engineering as a focus for, like, many organizations and many tech, like, technologies as well.

Like, for example, like, you see Gen AI, now everyone’s talking about platforms for Gen AI clients and so on. And I think, the bit that got me excited in 2024 was seeing, for example, the stability of OTel becoming a thing that, you know. You’re making it easier to provide OTel in an easy way for everyone in your company, in your organization. So if you run a platform engineering team, things are becoming a lot easier. That’s sort of, like, stabilization of, like, SDKs and also of the Collector itself. There’s been a lot of focus on getting the Collector, working on stability in the Collector.

And, yeah, so that’s basically one of the aspects that, I think, yeah, being able to run a platform engineering, sort of, like, team focused on providing OTel has become a lot easier. And one of the things that it was really good to have seen at Observability Day was a talk on a file config. So if you think about it, there are so many different ways that you can have of configuring the OTel SDK. So I think what you want to do is, like, you know, what you normally do, as a platform engineering team, is provide a way for the SDK to be, like, something that people don’t need to worry about, right?

There’s a lot of, like, knobs and sort of, like, things that you can tune in the SDK, you can change. But probably your engineers, your, like, developers, they don’t really need to care about the majority of those things. So if you say, hey, here’s my prepackaged thing – the only thing you need to do is run this, and then we’ll just apply my standards. Like, you know, maybe different from other companies’ standards, but these are my standards. How we’re gonna support, how we’re gonna configure OTel.

Now the way to do that in every language is different, because they, in a way, need to be different. They are, like, you know, things that are, like, language-specific, and it makes more sense to do it in a certain way in each language. But now we’re basically standardizing on a single file config schema, right? And that means that as a platform engineer, you can just state that config, apply it everywhere. As a user of those tools that your platform engineering team may provide, you basically can take that config and then sort of, like, work on top of that config, extend it, or, like, add your own things. Because at the end of the day, you know, each service may be different. You may have different requirements.

But the whole point is, like, we’re trying to make it a lot, like, easier to just get started with the OTel SDK. And as well, the changes that have been happening in the documentation side of things. So, again, you know, making it easier for people to, like, get started with OTel. The documentation on the website has improved dramatically. We’re making it, like, you know, so easy now to get started with OTel and to understand how to configure it, how to use it, and so on.
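
Dan’s “prepackaged thing” can also be pictured in code. The file config schema he mentions is the spec-level way to do this with a shared config file; the sketch below just shows the same idea as a small Python helper a platform team might ship, with the service name and exporter choice as illustrative placeholders (a real setup would more likely point an OTLP exporter at your Collector rather than printing to the console).

    from opentelemetry import trace
    from opentelemetry.sdk.resources import Resource
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    def configure_telemetry(service_name: str) -> None:
        """Apply the platform team's default OTel setup for any service."""
        resource = Resource.create({"service.name": service_name})
        provider = TracerProvider(resource=resource)
        # Company-wide defaults (exporter, sampling, processors) live here, so
        # service teams don't each wire up the SDK by hand.
        provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
        trace.set_tracer_provider(provider)

    # Each service calls this once at startup and then only uses the OTel API.
    configure_telemetry("checkout-service")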

And then if we’re thinking about 2025, and we’re thinking about that split between what’s the SDK, what’s the API, and how that sort of, you know, relates to each other. And the way I like to think about it is that platform engineers care more about the SDK and then sort of, like, providing that config to the rest of the organization. Their users, which is, like, every engineer in the company, that’s, you know, as Charity was saying, like, the most important stuff is your custom stuff. You want to add your custom stuff to your telemetry. That’s part of the API.

And the good thing here is, like, there’s no longer a need to provide those custom abstractions on top of those telemetry clients. Because I was part of the people building those abstractions in the past because you had to provide something on top of it that meant that you didn’t tie yourself to a single implementation, right? And that’s what the OTel API does, and that’s what sometimes people miss.

And, Austin and I did a lightning talk at KubeCon in Salt Lake City about this specific thing, which is that with OTel, you’ve got the API. You can rely on the API. If you don’t like the implementation, you can change it. I mean, hopefully not, because there’s a different implementation of the SDK, but if you didn’t like it, you could do your own. You’d no longer need to have your own, sort of, like, abstractions on top of the OTel API.

And what that will bring in 2025, and we’re starting to see that now in many of the open source libraries that are now natively instrumented with OTel, is that, hopefully, in the future, we can move away from that concept of agents and instrumentation libraries that will add telemetry on top of all their open source libraries into a world where that telemetry comes out of the box from those libraries, right?

They’re using the API. They’re using the standards, the semantic conventions. Like, things describe themselves. I don’t need to go and describe it for you. That library describes itself using the OTel API, using those semantic conventions.

And in a way, you know, making it, like, for the end user, basically, for the engineer, it’s either, like, you get the SDK out of the box, everything works, and then you install a library, and then you can then configure what you want to do with that data. But everything is in a standard way, so you no longer need to, you know, apply any sort of custom abstractions and so on. So that’s my things to look forward to.
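
Here’s a small sketch of the API-versus-SDK split Dan describes, in Python; the library name and span details are hypothetical. A library that instruments itself depends only on the OTel API, which stays a no-op until an application (or a platform-provided bootstrap) installs an SDK, so nobody needs a custom abstraction layer in between.

    # --- inside a hypothetical library (depends only on opentelemetry-api) ---
    from opentelemetry import trace

    tracer = trace.get_tracer("acme.http.client")  # hypothetical library scope

    def fetch(url: str) -> bytes:
        # If the application never installs an SDK, this span is a cheap no-op.
        with tracer.start_as_current_span("acme.fetch") as span:
            span.set_attribute("url.full", url)
            ...  # real request logic would go here
            return b""

    # --- in the application, the SDK decides what happens to that telemetry ---
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)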

Hazel: I’m super excited for that. Super excited. Because, you know, it’s so frustrating to have to instrument all your third-party dependencies in order to get anything useful. The one thing that I’m interested in is when OpenTelemetry first came out, that there was this kinda conception and notion that you weren’t really going to have a tree depth bigger than, like, three. Deeper than, like, you know, three or four. You had, you know, one span for the whole thing. But for the whole process there may be one span for the microservice, and maybe that one goes one more and then you have three.

And now with so much auto instrumentation, you get to, like, twenty levels deep, and then you finally get to your call. And then that breaks so much tooling out there because one of the magic things that slice and dice implies, like, a prominent database. And those can’t join things together. So then you end up with this, “Okay, well, I need to do a whole bunch of extra magical stuff to carry around this concept of a root span and then thread that in my code. And then remember to add a bunch of extra stuff onto that one because you have so many extra layers.” And how do you see that problem of what you write is not necessarily what you wanna see? And how doing that in a way that’s more ergonomic. Like, I feel like that’s one of the last huge challenges of ergonomics. How do you see that being addressed?

Dan: Yeah. I think there are things as well, like, on that, basically, where, like, imagine and this is, like, I guess, a challenge, which is, like, you know, you start to use the OTel API everywhere. Every library out there will be using the OTel API. Then you need to be able to configure what you want out of that as well. Like, you know, it’s like you’re using that, but there is a standard way to say, “I would like maybe that to stop, like, emitting that event, but I’d still want the context being sort of, like, propagated through.” Right? “I don’t want anything to stop, or I want that to be, you know like, I think there is, like, things to evolve in there.”

How do you want the telemetry from a particular instrumentation package to actually, you know, emit telemetry in a certain way? I think that one of the things that I loved when I started to get into the metric side of OTel was the concept of, like, metric views. Which is something that, you know, metric views as a way of, like, informing yeah. I mean, that’s the way that you, as a library developer, think that is the best way to see or to instrument your app.

I will use that, but maybe I want to remove a certain attribute, or maybe I want to add something else. And I think there are things now being done to be able to, like, do that with tracing as well and, you know, enable specific things or, like, be able to, like, skip a span but still, you know, get through this sort of, like, hierarchy of spans and that being untouched. So there are things that, you know, I think that I’m looking forward to that being more something that people can do. Putting people in charge of the telemetry that is emitted at the end of the day.
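
For readers who haven’t used metric views, here’s roughly what Dan means, sketched with the Python SDK. The instrument and attribute names are illustrative: instrumentation records whatever attributes it likes, and the view keeps only the ones you’ve decided you can afford, dropping the rest (here, a high-cardinality user ID) before export.

    from opentelemetry import metrics
    from opentelemetry.sdk.metrics import MeterProvider
    from opentelemetry.sdk.metrics.export import ConsoleMetricExporter, PeriodicExportingMetricReader
    from opentelemetry.sdk.metrics.view import View

    # Keep only the method and route on this histogram; any other attributes
    # recorded by instrumentation (e.g. a per-user ID) are dropped on export.
    duration_view = View(
        instrument_name="http.server.request.duration",
        attribute_keys={"http.request.method", "http.route"},
    )

    reader = PeriodicExportingMetricReader(ConsoleMetricExporter())
    metrics.set_meter_provider(MeterProvider(metric_readers=[reader], views=[duration_view]))

    meter = metrics.get_meter("demo")
    duration = meter.create_histogram("http.server.request.duration", unit="s")
    duration.record(0.12, {"http.request.method": "GET", "http.route": "/search", "user.id": "12345"})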

Colin: Nice. Definitely. Yeah. I mean, OpenTelemetry, OTel-native instrumentation, tools interoperating much better. I know, Adriana, you are passionate about this, so I do wanna send it your way, because I know you have a nice tidbit to say about that. But, Adriana, 2024, 2025, what were you excited about, and what are you looking forward to?

Adriana: Well, so, a couple of things, that stuck out for me. So first of all, I’ve noticed that, there’s more, it looks like we’re seeing Prometheus and OTel playing a lot nicer together. I’m not saying that they didn’t play nice together before, but it’s nice to see that, first of all, Prometheus ingests OTLP, which is awesome. There is work being done behind the scenes also to make sure that there’s interoperability between OTel and Prometheus, especially around, like, how metrics are named so that there isn’t that confusion. Because like it or not, like, a lot of the world still uses Prometheus.

And so, like, you know, if you’re used to using Prometheus and now you’re, like, you’re using OTel to ingest your metrics and maybe send those metrics not necessarily to Prometheus, but somewhere else that also stores metrics, but you’re used to the Prometheus way of doing things. It’s kinda nice to see that we, you know, there’s an effort being made by both sides to ensure that compatibility.

Another interesting thing that I saw in 2024 was around, I just found this out, I think, attending KubeCon, where OpenMetrics was actually archived, and now it’s, like, part of the Prometheus. It’s either part of the Prometheus project. I think that’s, hang on. I had pulled that up actually.

Hazel: It got merged into Prometheus.

Adriana: Yeah, merged into Prometheus is what it says on the CNCF blog post. So that’s kind of I think that’s really nice to hear because, you know, like, when I first got into OpenTelemetry, it was like, oh, there’s, you know, there’s the OTel metric herd. And then there’s this OpenMetrics thing. And then there’s Prometheus, and it’s like, oh, this is gonna be really, really challenging from a metric standpoint. So it looks like we’re all, like, looking to play nice.

Another cool thing I saw in 2024 was, I think we’re having more of a conversation around CI/CD observability. I believe there was a working group around OTel CICD, and I think they’re actually part of, like, OTel. They’re, like I think there’s, like, a CI/CD SIG now.

And that’s really important because, you know, we keep forgetting about how CI/CD like, we need to have observability of our CI/CD pipelines, right, because that’s the stuff that puts our shit into production. And if we don’t know what happens when something fails, well, we’re gonna be scratching our heads forever and ever and ever. So, that’s 2024.

For 2025, I think, you know, as I mentioned earlier, really making sure that we’re not talking about the three pillars. You know, let’s talk about unified observability, and unified observability in the way that Charity put it, which is, like, you know, these things are correlated.

It’s not like they’re stored in three different databases by the same vendor. Like, you know, otherwise, our data is not useful to us. But another big one that, I’m super passionate about, super excited about for 2025, you know, we’re seeing a lot of stuff around climate change. Like, I live in Toronto, Canada. We’ve barely gotten snow. Like, Toronto isn’t necessarily the coldest city in Canada, but, like, come on. We’ve been above freezing for, like, a chunk of, you know, winter. I’m like, why? And it’s climate change. And, you know, I think, observability has the ability to help us create more sustainable systems.

So I think, you know, joining forces, there’s a SIG in, sorry, there’s a TAG in the CNCF called tech sustainability. They do a lot of work to promote awareness around sustainability in tech. And I think, using the power of OpenTelemetry to really help inform and have more sustainable systems, I think, is really, really important, especially as we continue to see, like, just wacky weather events going on.

Charity: Huge plus one to all of that CICD stuff. Pinterest wrote this great blog post in the last week about how they use, I think it’s instruments from Honeycomb actually, but some of our earliest customers actually, one of the most interesting use cases for traces that is not just, like, microservices hopping around is your build system. Which tests are failing? Which can you parallelize? You know, being able to visualize. It’s the most important thing.

The number one thing that most companies could do to have better engineering outcomes is shorten the amount of time between when you’re writing the code and when it’s in production. And CICD is that link, right? And instrumenting it is the step. Like, it’s like, we’re all so busy. There are so many things we wanna do, but like to simplify so much complexity for most folks, I’m just like, pay attention to your build pipelines.

Get that short, under an hour. Fifteen minutes if you can do it, but under an hour because then you can ship smaller diffs more consistently. You can ship one engineer’s change at a time instead of batching up all the things that have changed in the past week. What could possibly go wrong? So huge plus one to all of that.

Hazel: Huge plus two as well.
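
A build pipeline maps onto a trace very naturally: one root span for the run, one child span per stage, with exit codes and durations attached. Here’s a minimal sketch of that idea in Python; the step commands and attribute names are placeholders rather than official CI/CD semantic conventions, and it assumes a tracer provider is configured elsewhere.

    import subprocess

    from opentelemetry import trace
    from opentelemetry.trace import Status, StatusCode

    tracer = trace.get_tracer("ci.pipeline")  # hypothetical instrumentation scope

    STEPS = [
        ("checkout", ["git", "pull"]),
        ("unit-tests", ["pytest", "-q"]),
        ("build", ["docker", "build", "."]),
    ]

    def run_pipeline() -> None:
        # Root span for the whole run; each step becomes a child span, so a
        # trace view shows exactly where the time goes and which step failed.
        with tracer.start_as_current_span("ci.pipeline.run") as pipeline:
            pipeline.set_attribute("ci.pipeline.name", "main-branch-build")
            for name, cmd in STEPS:
                with tracer.start_as_current_span(f"ci.step.{name}") as step:
                    result = subprocess.run(cmd)
                    step.set_attribute("ci.step.exit_code", result.returncode)
                    if result.returncode != 0:
                        step.set_status(Status(StatusCode.ERROR))
                        break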

Colin: But if we shorten builds too much, Charity, people won’t have an excuse to have a very nice, long, relaxing coffee break. Everyone will just have to chug espresso because there’s no reason to sit around and do that stuff.

Charity: Or they can just go home early.

Colin: Or you just go home early. That’s true. I do want to make sure we hear from Hanson a little bit about 2024 and 2025, Hanson. What, what did you love, and what are you looking forward to?

Hanson: Well, 2024, I think, is the emergence of a lot of these mobile use cases. And I think 2025, hopefully, will be the fully fleshing out of these mobile use cases. I mean, we talk a lot about unified observability, the braid, context. All that is great. All that sounds really resource-intensive running on a, you know, really cheap device, you know, that is made eight years ago. So how do we do both curation on the event side, but context in the, kind of, you know, braiding side?

How to do that successfully? Things emerging in OTel that’s really exciting is the entities, and the resource providers, allowing us to pack in context in a dense way so that we could do what we want without having to, you know, selectively, sample or decide, how many of these context changes we model. Is there gonna be, do we have to, like, debounce? You know? No. No. No. We could do this, store this efficiently, and then we could sort it out on the server side, as long as the client side capture is reasonable. And then to be able to do that, we’ll open up a lot more use cases, IoT devices, your TVs, your cars, anything that runs Android potentially can run OTel or, you know, OTel SDK and capture data. So I’m looking forward to seeing different use cases.
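
The existing way to pack that kind of stable device context in today is the resource: attributes describing the app and device are declared once per process and ride along with every span, metric, and log, rather than being repeated on each event. Entities and resource providers aim to make this richer and more dynamic, but here’s a rough sketch of the current mechanism (the Python SDK stands in for a mobile SDK here, and the attribute values are invented).

    from opentelemetry.sdk.resources import Resource
    from opentelemetry.sdk.trace import TracerProvider

    # Described once, attached to everything this process emits.
    resource = Resource.create({
        "service.name": "storefront-mobile",     # illustrative app name
        "service.version": "7.42.1",
        "os.name": "android",
        "os.version": "9",
        "device.manufacturer": "Samsung",
        "device.model.identifier": "SM-G960F",
    })

    provider = TracerProvider(resource=resource)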

I mean, I talk about Android phones being low power. Well, we have your little tiny little, you know, Raspberry Pi, you know, or something that runs, you know, embedded. What if that starts reporting telemetry? It’s gonna be exciting. We’re gonna be able to diagnose problems that previously we had to, like, go to the field. Look at debug logs or some oh, how do I repro this? Well, have you tried this? Have you tried these conditions? You can’t really do that sometimes in mobile.

You can’t walk into the elevator at the same time as you’re turning, you know, switching your Wi Fi signal. So being able to capture that live efficiently and for us to analyze later on could tell us so many things that we previously didn’t know. And having to link that to the server side will be incredible. I mean, imagine if you’re only tracking your SLOs right now with backend data. If your frontend, if your apps don’t actually make a successful request to the server, you don’t even know it failed.

So if you’re not capturing mobile data, you’re not getting observability from a very important part of your chain. So I’m looking forward to people realizing that and saying, “Hey, maybe we should know what happens on a device even if it doesn’t talk to the server.” So very, very looking forward to that.

Dan: Yeah.

Hazel: Oh, go ahead. I’ve talked enough.

Dan: I just wanted to, like, link it back to, like, almost, like, three steps. We were talking about understanding the business and understanding, like, a complex distributed system and how, like, all that starts on the client side.

If you don’t start there, like, you know, what are you doing? I think when we, because we’ve done this in the past in the backend where, like, you know, everyone followed the Google SRE book, and they used to get an SLO per service. It’s like that SLO is meaningless if it doesn’t link back to what the user is experiencing, right? So when you think about your SLOs, you’re thinking about, you know, okay, so what is your actual end user experiencing, and then connecting the dots all the way down to your backend.

And most importantly, I think, Charity mentioned, like, Accelerate as a book that was, you know, making the rounds, like, a couple of years ago, last year as well. Now it seems like Team Topologies is the book that everyone’s reading. So if you think about Team Topologies as the sort of like, you know, how do teams interact with each other? If you don’t have observability, you don’t know how things happen in terms of, like, when you’re doing, like, x as a service. You got one team relying on another team and that, you know, those boundaries of, like, subdomains and sort of, like, those boundaries between subsystems in a system.

Like, you need the SLOs to sort of, like, have those contracts between that so that ultimately relates to the user experience, right? So if you don’t have that, if you don’t have observability that’s able to tell you how all this links together, then your SLOs are a bit meaningless. Like, somebody has, like, you know, a target here that is completely misaligned with a target there because there is no connection. I think that’s what I’m looking really forward to as well in the, in the world of client-side observability.

Hazel: I’m so glad you brought that up. Because I actually wanna pull on that thread just a little bit. And you gave it to me, and I’m gonna unravel the sweater, so to speak. So one of my favorite bits that I’ve ever read from Donella Meadows is, they have this amazing blog post on, like, leverage points you can use to influence change in the system. And your company is the system, right?

The industry is the system. It is all systems. And in order of most effective to least effective, number one is the power to transcend paradigms. And number two is the mindset or paradigm out of which the whole system arises. Number three is the goals of the system. Number four is the power to self-organize the structure. And then all the way down towards the bottom there is the numbers. And there’s the parameters, and there’s things like, you know, the size of things and the structure of, like, the information. And with observability, what we have, what we focus on is at the very bottom with the things that aren’t actually that impactful.

But at the very top, it’s that mindset. And that’s why Observability 2.0 is such a game changer, because it changes the mindset. And what I’m really looking forward to is not even just, can we ship faster? Are we shipping the right thing? Nobody asks that question.

Are we actually, like, can we correlate all the technical aspects of the business? Cool. Awesome. But how do you know that, demographically speaking, you’re building what your main customer base wants? You have customers that don’t buy things. You have customers that buy things. Are those people getting what they want? Or are you churning them, like, what’s happening? How many people, how many product managers, how many designers, how many whatever actually know how much every part of the application, every part of the system is being used? Like, it is 2024, about to be 2025, and these people still don’t have, you know, page views, usage numbers for anything.

What are we doing? I’m looking forward to changing the paradigm of giving people such useful information that we stop even thinking about, oh, we should ship faster. We should do this better. Are we doing the right thing? Are we even thinking about this in the right way?

Are we getting to a point in which we’re acting deliberately and effectively, and learning as a group of people about what we need to do and about what people need? And then are we able to take effective action to get there? Should we do it rapidly? Sure. But are we going in the right direction, or are we hamster-wheel spinning as fast as possible somewhere?

Adriana: And I think you hit on a really important point. Because I think, you know, a lot of the time in tech, like, people just jump on bandwagons so blindly, right? I mean, we’ve seen all these Agile and DevOps transformations in all sorts of organizations because some exec got a presentation at some retreat and comes back with, “Cool. We should do that.”

So I think, like, to your point, you know, it’s the paradigm shift, but it’s, like, making sure that it’s not a blind paradigm shift as well.

Hanson: At the end of the day, people use apps to do stuff. Are people getting shit done? You should measure that. If you don’t know if performance matters, well, are people getting shit done? If people are getting shit done, great. Does performance matter? Mhmm.

Charity: I do feel like this is such a big topic that we could talk about, like, how to know if you’re doing the right things for, like, another hour and a half at least, and we’d just be getting started. But I do feel like speed and efficiency are a big part of helping, equipping, people to ask the right questions about what they’re doing, because what happens when that feedback loop is so long and laggy and lossy is that people forget what the point was by the time they get to the end, right?

Hazel: Yeah. And there’s even so much research that shows that, like, if the time to ask a question is long enough, people just don’t.

Charity: Yeah. Yeah.

Hazel: We’ve seen that so many times.

Charity: Yeah. Having a “this is what I wanna do, this is broken, or we should do this,” whatever, and then being able to quickly turn around and experiment to validate or invalidate it before you forget what the point of the work was is, like, a really critical part of it.

Colin: Yep. I agree with you, Charity. I feel like we could talk about OTel and observability for a really long time. I know we are coming up on time, though, so I do want to at least get to one question, and I think I have a good one that everyone in this group might have a spicy hot cider take on.

So, let’s do this one question before we wrap up, which is, “What can we do to make adopting OTel easier?” Now I realize that is a very broad question, and we only have a few minutes. So maybe we can go through everyone, just a few sentences, like, your take on this. It could be from the developer perspective. It could be from the business perspective, organizational buy-in. Like, this is such a broad question. There are infinite answers, but maybe we can get a short snippet of everyone’s take on it: 2025, OTel’s big, how can we make adopting it easier? So let’s go. Hazel, would you like to go first?

Hazel: Yeah. I think the huge thing is, one, you need awareness. Two, you need a really, really clear-cut path forward to trying it out. And then three, we need this type of alignment structure around people who try it out, people who actually go forth and do the thing, and people who, like, need to buy in. Like, you need to get an understanding of how that whole pipeline works. And then I think a huge indicator of success there is going to be, can someone try it out in literally five minutes? That’s the benchmark. You should be able to Google “what is this observability thing?” and then get something running on a dashboard somewhere that you can see, or something running with data in your whole company, in five minutes.

And I know that sounds impossible now, but a lot of things sounded impossible until we did it.
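To make that five-minute benchmark a little more concrete, here is a minimal sketch of roughly the smallest OpenTelemetry tryout we can imagine, using the Python SDK with its console exporter. This is our own illustration rather than anything shown on the panel, and the tracer and span names are just placeholders:

```python
# pip install opentelemetry-sdk
# A tiny "hello, OTel" tryout: configure the SDK, emit one span, see it printed.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("five-minute-demo")  # placeholder instrumentation name

with tracer.start_as_current_span("hello-otel"):
    print("doing some work inside a span")
# When the span ends it is printed to stdout, so you see telemetry immediately.
```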

Charity: I don’t think that sounds impossible at all. I think that inside companies, platform engineering teams are such a critical link in the chain. Part of making OTel go global, take over the world, is making sure that most people don’t need to know that they’re doing OTel, right? Not everybody has to be an expert. In fact, you should have as few experts as possible, right? And part of the platform engineering remit is managing the interfaces between vendors and internal customers, right?

Only the folks on the platform engineering team should really need to be experts in OTel stuff so they make the right product. If you think about it, vendor engineering is the most powerful engineering role in most orgs. You’re leveraging tens or hundreds of millions in R&D spend from other companies at that one little touchpoint and then exposing it to the rest of your company. So writing libraries, writing little layers of abstraction that make it feel native, make it feel intuitive, make it feel like part of your tool stack, not some other vendor’s tool, means you should be able to use OTel every single day without even really thinking about it.
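As a rough sketch of the kind of thin internal wrapper Charity is describing, here is one way a platform team might hide OTel behind a house-style helper. The module name, instrumentation scope, and function names are hypothetical; this is our illustration, not anything prescribed on the panel:

```python
# acme_telemetry.py (hypothetical internal library owned by the platform team)
from contextlib import contextmanager
from opentelemetry import trace

_tracer = trace.get_tracer("acme.platform")  # hypothetical instrumentation scope

@contextmanager
def measured(operation: str, **attributes):
    """Record an operation as a span with team-standard attributes attached."""
    with _tracer.start_as_current_span(operation) as span:
        for key, value in attributes.items():
            span.set_attribute(key, value)
        yield span

# In application code, nobody needs to know this is OTel underneath:
#
#   from acme_telemetry import measured
#   with measured("checkout.submit", cart_size=3):
#       submit_cart(cart)
```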

Colin: Nice. Make it vanish. You don’t even know it’s there. Dan, how about you?

Dan: Yeah. Basically, I’ll just echo what Charity said, but, like, you know, it’s about making it so easy that you don’t need to care about it. And as I said before, you know, configuring how you aggregate your telemetry shouldn’t be something that the end developer, like, you know, the person who is developing software and should be focused on developing software, needs to care about. So it’s something that you just plug in there, and this is about, like, hey, here’s, you know, here’s a config file. You put that in there as an environment variable. That’s it. That’s all you need to do.
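As one illustration of the “just set an environment variable” experience Dan is pointing at, here is a minimal sketch using OpenTelemetry’s zero-code instrumentation for Python, where configuration lives entirely in the environment and the application code contains no telemetry wiring. The service name, endpoint, and URL below are assumptions, not values from the panel:

```python
# app.py -- plain business logic, no OTel imports or setup in the code itself.
#
# Configuration is supplied from outside the code, e.g. (assumed values):
#   pip install opentelemetry-distro opentelemetry-exporter-otlp
#   opentelemetry-bootstrap -a install
#   export OTEL_SERVICE_NAME=checkout
#   export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
#   opentelemetry-instrument python app.py
#
# The agent reads those variables at startup and auto-instruments supported
# libraries (here, the outgoing HTTP call made with requests).
import requests

def fetch_status() -> int:
    response = requests.get("https://example.com")  # illustrative URL only
    return response.status_code

if __name__ == "__main__":
    print(fetch_status())
```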

Then I guess from the point of view of adopting it well, which is another big aspect, or, like, making people see it, you know, people have been following runbooks for ages, it’s about making them see the value with, like, call it game days, call it, like, war games, something like that. Something where, basically, you can throw them in there and say, like, “Hey, here is, like, a lot of good telemetry, and we’re gonna tell you how to use it and how to get value out of it.” That, to me, is also a good way of getting people to understand what it all means, basically.

Colin: Nice. Awesome. We’ve got two minutes left and two people. So, Adriana, what is your hot take?

Adriana: Yeah. I would say, like, putting my developer hat on, making sure that developers can instrument their code. But, you know, along the lines of what everyone else has said, just making sure that it’s easy, though. Because if it’s gonna be hard, if it’s gonna be a pain in the ass, nobody’s gonna wanna do it. But it starts with the developer because that’s where the code is. You know, the application code gets instrumented, and that allows us to understand what’s going on with our systems.

Colin: Nice. Awesome. And, Hanson?

Hanson: To piggyback off of that, it’s considering the user. Who is actually doing this? Mobile engineers are not backend engineers. They may not even know what a trace is. So you have to be at the level of where they are, whether it’s, you know, an indie game dev with two people or a Giant Corp with fifty people on their mobile teams. Not all of them know what tracing is. Not all of them know what observability is.

Meet them where they are. Use idiomatic APIs. It’s not enough if it just works. It has to work well. It has to be idiomatic. If you can have something that they can use, ideally without, you know, knowing what OTel is, but still being able to get the value from it, that is perfect. And I think that’s a bit missing right now, but we will get there. And that’ll be great.

Colin: Awesome. I love it. I love those wrap ups, and we are at time. This has been so fantastic. I wanna give a big, big, big thank you to this wonderful panel. Thank you all for being here. Thank you all for wearing your wonderful holiday sweaters and talking OTel.

For people in attendance, like I said, we’ll be sending a follow-up email. Yeah. Everyone show your sweater, camera down. Full sweater shot.

We’ll be sending an email with a survey. We’d love to get your feedback so we can improve these. And as always, well not always, it’s the end of the year. So have a happy holidays and an even happier new year. Thank you, everyone. Let’s get ready for 2025!
