An OTel Carol: Past, present, and future of OpenTelemetry panel recap

OpenTelemetry panel with speaker headshots

Join your favorite OTel thought leaders for a lighthearted journey through the past, present, and future of OpenTelemetry. We’ll cover early challenges, key improvements to the spec, tooling, and developer experience, and exciting developments we’re most looking forward to in 2026.

Recently, I sat down with a fun group of OTel experts to look back at all the great achievements in the OpenTelemetry project over the last year. Well, to be fair, not all of them. Have you seen how big OTel is?

Instead, we picked some of our favorites, everything from semantic conventions to tooling to new special interest groups (SIGs). And given the holiday season, we decided to share them in “A Christmas Carol”-esque journey through the past, present, and future of OpenTelemetry.

Wherever you are on your OpenTelemetry or observability journey, there’s definitely something for you in this wide-ranging discussion. Here’s just a small sample of topics we covered:

  • How declarative configuration simplifies OpenTelemetry setup across your entire system
  • What improvements have landed in OpenTelemetry support for mobile and web
  • Why Instrumentation Score helps engineering teams understand and improve telemetry quality
  • Why 2026 is going to be the year of OpenTelemetry Weaver
  • How community-driven contributions (like the Go compile-time instrumentation) are growing in OTel and how that creates more sustainable projects
  • What’s the future of Kotlin support in OpenTelemetry
  • What’s in store for AI support in OpenTelemetry

And so much more! If you want to watch people in holiday sweaters get you up to speed on the last year in OpenTelemetry, then check out the full panel below. Scroll past the video for some of the biggest highlights from our discussion as well as to access the full transcript. See you at the next one!

Watch the full video here

The Ghost of OTel Past

In this section of the discussion, we covered the best new projects that shipped over the past year.

Declarative config

Dan shared the new support for declarative configuration in OpenTelemetry SDKs. Java is furthest along, with implementations in other languages (including Go, PHP, and C++) coming along. Declarative config uses a YAML file instead of environment variables, and it enables engineering teams to define configuration once and then have all instrumentation libraries pull from that one source of truth.
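To make that concrete, here is a sketch of what such a file can look like. The shape follows the opentelemetry-configuration schema, but exact field names vary between schema versions, so treat this as illustrative rather than authoritative:

```yaml
# Illustrative declarative config file (e.g. otel-config.yaml).
# Exact keys depend on the opentelemetry-configuration schema version.
file_format: "0.3"

resource:
  attributes:
    - name: service.name
      value: checkout-service

tracer_provider:
  processors:
    - batch:
        exporter:
          otlp:
            endpoint: http://localhost:4318
```

Every SDK that supports declarative config reads the same file, which is what makes it a single source of truth across instrumentation libraries.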

Instrumentation Score

Juraci then discussed the launch of Instrumentation Score earlier this year. Though it’s not an OpenTelemetry project, it’s an exciting way for engineering teams to score and improve their OpenTelemetry instrumentation. Essentially, observability experts document the good and bad aspects of telemetry and create a standardized scoring system to evaluate telemetry quality. Teams can then understand their telemetry quality and get feedback on how to improve it. The rules are based on OpenTelemetry semantic conventions and community best practices, with a transparent calculation methodology.

As Juraci shared, “For instance, a span name should not be a high cardinality name, so that you should be able to group them and you should be able to see them on a dropdown box somewhere and perhaps aggregate those and create metrics out of those specific types of spans. Or perhaps your telemetry should have a service.name resource attribute, or perhaps your logs in production should not be debug mode for more than 14 days. So those are things that we know as an industry, we know as practitioners, but we never encoded that anywhere.”
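Rules like these translate naturally into executable checks. Here is a minimal, self-contained sketch of the idea; the rule names and the cardinality threshold are invented for illustration, and the real rules live in the Instrumentation Score specification:

```python
# Minimal sketch of Instrumentation-Score-style rules applied to spans.
# Rule names and thresholds are illustrative, not from the real spec.

def check_service_name(resource: dict) -> bool:
    """Every resource should carry a service.name attribute."""
    return bool(resource.get("service.name"))

def check_span_name_cardinality(span_names: list, limit: int = 100) -> bool:
    """Span names should be groupable: flag high-cardinality naming
    (e.g. '/users/42' instead of '/users/{id}')."""
    return len(set(span_names)) <= limit

spans = ["GET /users/{id}", "GET /users/{id}", "POST /orders"]
resource = {"service.name": "checkout-service"}

passed = [
    check_service_name(resource),
    check_span_name_cardinality(spans),
]
score = 100 * sum(passed) // len(passed)
print(score)  # both rules pass -> 100
```

The real specification defines many more rules and weights them, but the shape is the same: encode practitioner knowledge as checks, then aggregate into a score.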

Adriana was already thinking about possible future integrations between Instrumentation Score and OpenTelemetry, “I did want to mention one thing that I think can be kind of exciting is maybe some integration with OpenTelemetry Weaver because, you know, Weaver allows you to not only define schemas, but also validate schemas. So I think it would be really cool to see how Instrumentation Score and OpenTelemetry Weaver can at some point play together in the future.”

Community donations

This past year saw not just an increase in code donations to the OpenTelemetry project, but also new community-driven donations. As Dan said, “The one that comes to mind is the donation of Go compile-time instrumentation where Alibaba and TikTok and Quesma, they all got together as part of the same thing. So we almost had two donations happening at the same time. And then they agreed on it as a community to say, ‘Hey, let’s form a new SIG, a new special interest group together and come up with the best of each.’ And that to me is the spirit of open source and to basically get people to collaborate.”

OpenTelemetry Kotlin API and SDK

Hanson also shared the OpenTelemetry Kotlin API and SDK that Embrace is in the process of donating to the OpenTelemetry project. “We’re going through this process, as Dan mentioned, of trying to not just create some code and chuck it over the fence and say, here you go, there’s code, here’s a repo, a donation. It almost is a forcing function to create a community around it, because really the code as it is right now is just the beginning. The forming of the community is what’s important.”

The Ghost of OTel Present

In this section of the discussion, we covered the best improvements to existing OpenTelemetry tools, libraries, and working groups.

Community feedback improvements

One of the biggest community improvements was the introduction of surveys run by the End User SIG to more effectively share feedback from OTel users to OTel maintainers. Adriana shared how Andrej Kiripolsky (a new member of the End User SIG) and Ernest Owojori (an LFX mentee) helped create a better framework for running and analyzing OTel surveys to deliver more helpful insights to the different SIGs.

OpenTelemetry Weaver

Juraci then shared a bit more about OpenTelemetry Weaver and how it became a mature project this year. While in previous years, Weaver was mostly used internally within OpenTelemetry, 2025 saw growing adoption by engineering teams. “I think this is really the year of Weaver. It is the most promising tool that we have there in terms of measuring and fixing instrumentation quality and enforcing the governance side of telemetry. So how do I ensure that all of my applications within my company have a specific attribute? I can enforce that using Weaver. I can enforce that in my pipelines. I can do live checks. But it all starts with a defined schema for that. And that comes from Weaver.”
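As a rough illustration of the “defined schema” Juraci mentions, Weaver works from semantic-convention-style registry files. The group and attribute below are invented for this example; consult the Weaver documentation for the exact format:

```yaml
# Illustrative Weaver-style registry entry: declare an attribute that
# every application in the company must set, then let Weaver check it.
groups:
  - id: registry.myapp
    type: attribute_group
    brief: Attributes required on all company telemetry.
    attributes:
      - id: myapp.tenant.id
        type: string
        stability: development
        brief: The tenant that produced this telemetry.
        requirement_level: required
        examples: ["acme"]
```

With a registry like this in place, checks can be enforced in CI pipelines or against live telemetry, which is the governance story Juraci describes.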

Logging API

This year also saw big changes to the Logging API, including making it public-facing and able to emit events that contain the OpenTelemetry Context. As Dan mentioned, “Using the Logging API, you can emit events. You can start to think about span events in a different way, because ultimately, if you emit an event that has context – and in OpenTelemetry everything has context – that’s what an event should be tied to, a particular span or a particular trace context, right? So using the Logging API has benefits, and some of the improvements have also been to the way we emit these events, such as support for complex attributes.”
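The underlying idea, that an event is a log record carrying the trace context that was active when it was emitted, can be sketched without any OTel dependency. This is a conceptual illustration only, not the real Logging API; the class and function names are invented:

```python
# Conceptual sketch: an "event" is a log record stamped with the trace
# context that was active when it was emitted, so a backend can tie it
# to a particular span or trace. Names here are invented; this is not
# the OpenTelemetry Logging API.
from dataclasses import dataclass

@dataclass
class SpanContext:
    trace_id: str  # example values below are from the W3C traceparent spec
    span_id: str

@dataclass
class Event:
    name: str
    attributes: dict
    context: SpanContext  # what ties the event to a span/trace

# In a real SDK this would be looked up from the active context.
_active = SpanContext(
    trace_id="4bf92f3577b34da6a3ce929d0e0e4736",
    span_id="00f067aa0ba902b7",
)

def emit_event(name: str, attributes: dict) -> Event:
    """Emit an event bound to whatever context is currently active."""
    return Event(name=name, attributes=attributes, context=_active)

evt = emit_event("app.checkout.completed", {"cart.items": 3})
print(evt.context.trace_id)  # same trace as the surrounding span
```

Because the context rides along automatically, backends can render events inline on the span or trace they belong to, which is what makes them a natural successor to span events.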

OTel support for mobile and web

There have also been significant improvements in OpenTelemetry for mobile and web. Hanson shared, “There’s been tremendous progress made this year in terms of maturity, in terms of stabilization, in terms of properly structuring the projects, and how things are consumed. There’s a new SIG for browsers, specifically tackling instrumentation and semantic conventions. On iOS or Swift, there’s a push to separate the API and SDK and allow greater modularity in consumption of those artifacts. And OTel Android is also shaving down some API edges, and we released a release candidate, which will probably reach stability sometime next year with a supported configuration API.”

The Ghost of OTel Future

Next, we turned the discussion towards the most exciting projects that are coming up soon in 2026.

OpAMP (Open Agent Management Protocol)

Adriana shared that OpAMP is poised to have a big 2026. It’s a protocol that was developed to manage a fleet of collectors, but it can be tricky for engineering teams to set up. She’s looking forward to abstraction layers getting built on top of OpAMP to make it easier for developers to use, in the same vein as Kubernetes. “Even OpenTelemetry having its own little abstraction layer to make it easier to get started with OpAMP, that’s what I would love to see. I would also love to see more people giving a little more love to the OpAMP bridge for the OTel operator. I’m a huge fan of the OTel operator, another little unsung hero of OpenTelemetry, and I would love to see more people tapping into that functionality.”

Improving stability

There have also been a lot of discussions around improving stability across the OpenTelemetry project. Dan asked the audience, especially engineers on teams using OpenTelemetry at scale, to provide feedback to the governance committee. An example of the complexity in establishing stability criteria that Dan mentioned is, “What does it mean for an instrumentation library to be stable? The instrumentation library itself can be stable, it can be producing [telemetry], the code itself is stable, but maybe the semantic conventions that it’s based on are not stable. Do you call that instrumentation library stable? I mean, as a user, do you understand that you’re using a stable component?”

OTel Kotlin community

Hanson shared his excitement for the Kotlin community to get involved in OpenTelemetry. “2026 is about forming the community and really looking at people who were previously using other technologies to achieve the same thing. On Kotlin Multiplatform, for instance, you’d have to have different instances of OTel SDKs to actually send your data back to the collector. With the Kotlin SDK, you can have just one across multiple platforms: web, iOS, Android.”

AI support in OpenTelemetry

We also had to talk about AI in this observability discussion, and there’s a lot of work being done to support AI in OpenTelemetry. There are discussions about creating Generative AI semantic conventions so that different SDKs and tools can speak the same language when observing agents that perform Generative AI LLM calls. The panel also touched on a new proposal for an OpenTelemetry MCP server.

Juraci shared, “I can definitely see people using Instrumentation Score rules as a context for their coding agents so that they can have better instrumentation without being OTel experts. So they can tap into our brains by using that knowledge base without having to actually know the telemetry that much. We are seeing the same kind of experimentation going on on our side with instrumentation agents. So you plug in an instrumentation agent, a coding agent, into your code, and then it can suddenly configure OTel for you without knowing OTel.”

Resources for learning about OpenTelemetry

Here are the OpenTelemetry resources the panelists mentioned during our discussion:

Full transcript

Colin Contreary: All right. Hello everyone and welcome to today’s event, “An OTel Carol: Past, present, and future of OpenTelemetry.” I’m Colin Contreary. I’m the head of content at Embrace. I will be today’s moderator. We’ve got a wonderful panel of OTel experts who are all ready to take us on a journey through the past year in OpenTelemetry. That’s right, audience. You are all essentially Ebenezer Scrooge and you’re going to be visited by the ghosts of OTel past, present, and future to learn about what’s truly important this holiday season and in life… better observability!

First, we’ll see the Ghost of OTel Past and learn about the best new projects, libraries, and tools that launched in 2025 and how they’re solving some previous challenges with OpenTelemetry.

Then we will visit the Ghost of OTel Present to learn what existing OTel projects, libraries, and tools had the best improvements in this past year.

And finally, we will shudder in excitement at the Ghost of OTel Future and learn about upcoming proposals and standards and improvements to look out for in 2026. So a lot of great stuff we’re gonna cover. And if you have any questions about OpenTelemetry or about any of the topics that we are going to cover, please, we would love to answer them.

So ask your questions using the Q and A section and we will answer them either during that section or at dedicated Q and A time at the end. And with all of that out of the way, let’s dive in. Let’s start with a fun poll question as we start to meet our panelists. So as I launched the poll, Dan, why don’t we start with you giving an intro and maybe sharing a fun holiday item you–

Dan Gomez Blanco: Cool.

Colin Contreary: Wait, I gotta go first. Dan, what am I doing? Did I forget to go myself? I said I’m Colin Contreary, but I didn’t do my item. I’m sorry, Dan. I’m pooching you. Look, I just want to share this. This is, it’s a macaroni and cheese ornament. My daughters absolutely love it. You cannot see it, but it’s starting to peel away from the felt because they rip it off the tree. They love it so much. So my, my holiday item is this mac and cheese ornament. Sorry, Dan, I jumped your intro, but now you can go, Dan.

Dan Gomez Blanco: That’s cool. Yeah, I’m Dan. I’m a principal observability architect at New Relic. And with Adriana, I’m a maintainer of the end user special interest group, the End User SIG. And let me tell you about my favorite bauble, which is very cool, and the cool thing it’s got is chocolates inside. You can open it. How cool is that?

Colin Contreary: Now do you refill it with chocolates every year, Dan? Is this the first year you’ve had it?

Dan Gomez Blanco: It’s every day.

Colin Contreary: Maybe we’ll see you eat one during the panel. We’ll find out. Thank you so much, Dan. Adriana, can you go next?

Adriana Villela: Yeah, my name is Adriana Villela. I am a principal developer advocate at Dynatrace. And as Dan said, we work together on the OTel End User SIG alongside Reese Lee. And my super awesome holiday item is, oops, my elf on a shelf looking mischievous as ever. And she has a special little custom made crochet skirt made by a friend of mine.

Colin Contreary: Very fun. Awesome. How about next we go to Juraci?

Juraci Paixão Kröhling: Yeah, my name is Juraci. I’m a software engineer. I’m a co-founder at OllyGarden as well. And I’m part of the governance committee for OpenTelemetry. So, and since the beginning of the year, I’ve been fighting the good fight of bad telemetry. So I’m fighting bad telemetry. And my favorite holiday item is this one. So this is the first, I don’t know, mulled. What is the name in English of that? A mulled wine. A mug that I got like 15 years or so ago, perhaps a little bit more in Vienna, Austria. And this is the very first mug that I got that started our collection of travel mugs. So yeah.

Colin Contreary: Nice, awesome. All right, and rounding us out is Hanson.

Hanson Ho: Hi, my name is Hanson. I do mobile sort of stuff here at Embrace. And my holiday item is this, which you won’t see well. It is a polar bear hot chocolate bomb. You pour hot milk on it and you mix it up, and it’s got all sorts of goodies in it. So I can’t wait to try that on a holiday day.

Colin Contreary: Do you, wait, you put that into a large cup and then it melts?

Hanson Ho: A larger cup and you pour hot milk in it and you mix it up and then you have a hot chocolate where you had a bear.

Adriana Villela: Polar bear.

Hanson Ho: There’s sacrifices for the betterment of my stomach, so you know.

Colin Contreary: All right, awesome. All right, well, thank y’all. I’m happy to have all y’all here. Why don’t we end the poll? Let’s see what the people have said. Let’s see. Oh wow, the Jim Carrey “A Christmas Carol” is tied with–

Adriana Villela: I didn’t even know there was a Jim Carrey “A Christmas Carol” movie.

Colin Contreary: What’s funny, Adriana, I did not know that either. I went and Googled, like, what are the different “A Christmas Carol” versions? Wait, this one happened? Turns out not only it happened, but people love it. Also, audience, just know that obviously I had to whittle this list down because there are approximately 15,000 versions of “A Christmas Carol.” So if yours is not here, I apologize. But because only one of you said “Other,” I don’t feel that bad. All right. So with all of that out of the way, let’s get started with our discussion. So first, we are going to be visited by the Ghost of OTel Past. And so what we’re going to talk about is what are some of the best new things that were launched within the past year. So they’ve already launched this year. So it’s kind of, it’s in the recent past, but let’s talk about what was launched this year. That was very interesting and fun and very helpful for getting better observability. And I want to kick it off with Dan, cause I know you are itching to talk about declarative config. Can we start there?

Dan Gomez Blanco: Yes, and this is, I guess, the very, very recent past. There was a blog post released, I think a couple of days ago, maybe last week, about the third release candidate for declarative config. And, to put my platform engineer hat on, it solves a real problem: it provides a standard way of configuring the OpenTelemetry SDK across all languages.

So think of declarative config as a config file that you can pass to your SDK, and it allows you to configure all the things. Now, why do we need it in OTel? I think that’s the main question. Well, the first thing is, as anyone who’s worked with an OTel SDK knows, it’s super flexible. There are loads of config properties, right? Lots of things that you can tune. And yeah, it’s good to have that schema, that standard, to define all these things. And that flexibility and richness in the config also means that there are certain things that are difficult to do with environment variables, which is one of the other ways of doing OTel config, or even system properties. So this allows you to do complex things. For example, defining metric views, which, by the way, if the audience is not familiar with metric views in OpenTelemetry...

I would urge you to go and look at them, because metric views are awesome. But defining metric views, for example, is something that requires a little bit of complex attributes. And the idea here is that you can do that in YAML, right? Again, platform engineer hat on. There was this group that I was friends with, YAML camels. I mean, everyone here is a YAML camel and a platform engineer. So YAML is great.
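[Editor’s note: metric views let you reshape metrics at the SDK level, for example keeping only certain attributes on a histogram. Here is a sketch of how a view can appear in declarative config; the keys approximate the opentelemetry-configuration schema and may differ by version:]

```yaml
# Illustrative metric view in a declarative config file: keep only two
# attributes on a duration histogram. Keys approximate the
# opentelemetry-configuration schema and may differ by version.
meter_provider:
  views:
    - selector:
        instrument_name: http.server.request.duration
      stream:
        attribute_keys:
          included:
            - http.request.method
            - http.response.status_code
```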

But yeah, the most important thing, and this is why I started with it: if you’re a platform engineer and you want to provide that contract between where the platform ends and where your developers are using OpenTelemetry, this almost becomes your contract, right? You’ve got the API layer and you’ve got the SDK layer in OTel, and they’re decoupled. And that’s a great contract between a platform team that’s able to provide a consolidated, standard layer of config, which then allows individual engineering teams to pick up the standard config and apply config on top. So you can merge config, you can extend it, but then ultimately, you know, engineering teams can just get on with the API, and you can provide a config that works out of the box and makes it a lot easier to get up and running.

Colin Contreary: And just for the audience that’s unfamiliar with both of these configuration options, is the idea that we will see way less usage of environment variables to configure OpenTelemetry? Is it still required in some use cases, or is the idea that this is essentially a one-to-one replacement for most use cases?

Dan Gomez Blanco: So I would say that, at the moment, the implementation is mostly in Java, PHP, C++, and the other languages are coming along. So for now, environment variables are the way to go in the other languages. But I would say that this will ultimately be the recommended pattern, or at least the one that I will recommend to configure OTel, right?

Juraci Paixão Kröhling: If I may add one more point there: I’m using that in my own code. You mentioned C++, Java, and so on, but in Go as well, it is already ready to use. When I compare the code that I’ve been writing this year with the config packages to the code that I had a couple of years ago, there’s a huge difference, and it’s so much easier today to configure the SDK.

And I think one of the points that Dan got very close to mentioning is that there is a separation between runtime and development mode. When I’m developing the application, I can think about using the SDKs and the APIs. But then when I run my application, I can specify the actual YAML, the actual configuration for my SDK that I want for that specific environment. So on my local machine, I have a specific config, a YAML that points to localhost:4318, which is running perhaps in a Docker container somewhere locally. But then when I deploy my application in production, it uses a YAML that sends data to two different places: my observability backend and my own backend, to evaluate the quality of my data. And that’s a decision that we make at runtime, not at development time. I think that’s one of the key benefits of using an external config file.
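[Editor’s note: the development-versus-production split Juraci describes amounts to swapping one YAML file for another at deploy time. An illustrative sketch, with made-up endpoints and keys that approximate the declarative-config schema:]

```yaml
# dev.yaml - point the SDK at a local collector in Docker
tracer_provider:
  processors:
    - batch:
        exporter:
          otlp:
            endpoint: http://localhost:4318
---
# prod.yaml - fan out to two destinations: the observability backend
# and an internal endpoint that evaluates telemetry quality
tracer_provider:
  processors:
    - batch:
        exporter:
          otlp:
            endpoint: https://otel.observability.example.com
    - batch:
        exporter:
          otlp:
            endpoint: https://quality.internal.example.com
```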

Dan Gomez Blanco: Yeah, yeah, there is one more that comes with this as well. Another benefit is the standardization of certain things that are applicable across instrumentation libraries. So there are blocks in there. For example, in an instrumentation library for HTTP clients, there’s an option to record the headers, which is something that’s not done by default, right? Because there could be sensitive information in there.

So that type of standard config across instrumentation libraries is something that maybe you just want to define once and then have all the instrumentation libraries pick up from the same block of config instead of having to configure each instrumentation library perhaps in a different way.

So that’s another extra benefit. That’s why there’s so many benefits.

Colin Contreary: We have to limit you to just sharing a few of them, Dan. Yeah. Okay, that’s awesome. Why don’t we move on to another great thing that launched this year? Now it’s not actually an OpenTelemetry project, but obviously the entire OTel community loves it. I’m going to hand it over to you, Juraci. Tell us all about Instrumentation Score.

Juraci Paixão Kröhling: Yeah, so the Instrumentation Score is a project that launched in June this year, if I remember correctly. And we made it open source, an open specification, out of the work that we’ve been doing since the beginning of the year. And when I pinged Dan about this idea, about how about we make this an open-spec kind of project, the first question we got was, why don’t we do that as part of OpenTelemetry? And I think the main reason why Instrumentation Score is not part of OTel right now is we want to be free to have opinions, and strong opinions in some cases. And we know that when we have like 10 engineers, you end up having like 11 opinions. So that’s why we decided to start small and then see which of those opinions are shared across everybody, then solidify those opinions, and then perhaps move into the OTel side of things, like donate to OTel or join OTel. We don’t have a timeline for that, and we don’t even know if that’s going to happen in the future. But we know that we want to stay very close to OpenTelemetry. Perhaps I should mention what the Instrumentation Score is?

Colin Contreary: Yeah, I was gonna say, let’s start there for the audience. This is great. Yeah. What is it?

Juraci Paixão Kröhling: Yeah, sorry. So Instrumentation Score is a project where experts in the industry document what the good things and bad things about telemetry are. Like, what is good telemetry? What is bad telemetry? What are the things that you should be following, and what are the things that you should not be doing in any case? So for instance, a span name should not be a high-cardinality name, so that you should be able to group them and you should be able to see them on a dropdown box somewhere and perhaps aggregate those and create metrics out of those specific types of spans. Or perhaps your telemetry should have a service.name resource attribute, or perhaps your logs in production should not be in debug mode for more than 14 days. So those are things that we know as an industry, we know as practitioners, but we never encoded that anywhere. So that stayed as a, “I know this hint here, so this is what I implemented at my company.” And then somebody did the same for a different type of telemetry. And this is us coming together and saying, those are the things that we’ve seen in the past, those are the things that we’ve implemented in some way, and those are our opinions. And while OTel helps people define what telemetry is and how to collect, transmit, and create that telemetry, the Instrumentation Score provides an opinion layer on top of that. It does not specify how to implement those ideas; you decide how to implement them using your backend. But here are some of the starting points that you should be using to implement your ideas. Not all of them are useful for all of the backends.

But, and I’m gonna make a pause here because I’m so into this topic that I could talk for hours, but.

Colin Contreary: Well, I know Adriana, you’re also very excited about this project. I’d love to hear from you on it as well.

Adriana Villela: Yeah, I’m excited because, you know, I think Juraci touched upon something where you’re like, let’s do an Instrumentation Score, that makes so much sense, right? Because instrumentation is code, and you can have bad instrumentation. To have basically checks and balances in place to say, “Hey, this is what good instrumentation looks like. This is what bad instrumentation looks like,” is awesome. I did want to give a shout out to our fallen colleagues at Tracetest, who were starting to develop some stuff around the quality of instrumentation in their product before they folded. I think it’s really cool to see someone pioneer that, and then OllyGarden taking up the mantle with the creation of Instrumentation Score. To be able to see that continue on in some form, I think, is really exciting, and I’m super excited to see where it goes. I did want to mention one thing that I think can be kind of exciting: maybe some integration with OpenTelemetry Weaver because, you know, Weaver allows you to not only define schemas, but also validate schemas. So I think it would be really cool to see how Instrumentation Score and OpenTelemetry Weaver can at some point play together in the future.

Colin Contreary: Yeah, nice.

Dan Gomez Blanco: And I think that’s one of the things that we want to establish, and there is an issue open for people to contribute to, to see where the scope is between, you know, Instrumentation Score and Weaver, right? So I think one of the aspects of Instrumentation Score that we try to encode in it is the concept of different levels of criticality. In terms of semantic conventions, either you follow semantic conventions or you don’t, right? But there’s an aspect of Instrumentation Score that is related to, yeah, some things being more important than others, right? And trying to codify that in the score as well, I think, is quite important.
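[Editor’s note: the criticality levels Dan describes can be pictured as weights in the score. A self-contained sketch; the rule names, levels, and weights below are invented for illustration, not taken from the specification:]

```python
# Sketch of a criticality-weighted instrumentation score: failing a
# "critical" rule costs more than failing a "low" one. All rule names,
# levels, and weights here are illustrative, not from the real spec.
WEIGHTS = {"critical": 10, "important": 5, "low": 1}

# (rule name, criticality level, did the telemetry pass the rule?)
results = [
    ("span_names_low_cardinality", "critical", True),
    ("has_service_name", "critical", True),
    ("no_debug_logs_in_prod", "important", False),
    ("descriptive_metric_units", "low", True),
]

earned = sum(WEIGHTS[level] for _, level, ok in results if ok)
possible = sum(WEIGHTS[level] for _, level, _ in results)
score = round(100 * earned / possible)
print(score)  # (10 + 10 + 1) out of 26 -> 81
```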

Juraci Paixão Kröhling: Yeah, and I think one final aspect of that is, because the Instrumentation Score does not preclude any implementation, it doesn’t say how things should be implemented, it is not bound by the technical limitations of any tools. So Instrumentation Score is a set of opinions, and Weaver provides tools where people can implement some of those opinions, most of them, really. But some of them might be relatively complex to do with Weaver. Like the ones that relate to a trace as a whole, not individual spans. And there is no easy way, at least for now, to specify a rule in Weaver about a specific trace, like a complete trace that spans multiple services. So Weaver has a view of services. Of course, a live check could perhaps be changed in the future if we have a full view of the trace. But Instrumentation Score being its own project allows it to have opinions, again, that are not bound by technical implementations.

Hanson Ho: Yeah, the thing that excites me is experts can codify those opinions in something. And not just, here’s a best-practices doc that’s got 20 sections and 25 subsections. How are you going to validate it? Just follow every bullet point? You can’t really do that. And to allow us to actually express opinion in a way that could be turned into code and then turn out a score at the end allows experts in different domains to basically give an opinion on how you’re doing.

And this will help folks trying to start on observability and instrumentation tremendously. ’Cause right now, OTel gives you the basic letters. Semantic conventions give you the vocabulary to talk the same language. Instrumentation Score gives you the syntax, and evaluates whether you’re putting those things in the correct order. And it’s a natural progression in the maturity of the platform.

I think it’s essential for folks. Eventually, in a couple of years, we’ll be like, I can’t believe we just let people do this without telling them whether they’re doing things right. So yeah, this is going to be fantastic.

Adriana Villela: Yeah, I think it’s great because it saves developers from themselves.

Dan Gomez Blanco: Also, I think this panel here will probably agree on something: if you can’t measure it, you can’t improve it. And that includes instrumentation quality as well, so yeah.

Colin Contreary: Nice. Let’s get into our last topic of OTel Past, because I definitely want to touch on this. So we talked about Instrumentation Score not being an OpenTelemetry project, but obviously so many people know it, and we’re so excited about it. But there has been a sea of community growth this year, specifically in donating and starting new projects and working groups. So I’d love to talk about some of what happened this past year. I wonder if we start maybe with Dan. Can we talk a little bit about some changes in the donation process that have kind of led to the formation of some new working groups and some new donations this year?

Dan Gomez Blanco: Yeah, so I think the thing that I’m particularly excited about, or, you know, happy to see, is not the number of donations, which is great, but how those happen, right? Instead of, hey, here’s a bunch of code, now it’s yours.

No, it has been more of a community-building type of donation. We had donations where people came in together. The one that comes to mind is the Go compile-time instrumentation, where Alibaba and TikTok and Quesma all got together as part of the same thing. We almost had two donations happening at the same time, and they agreed as a community to say, hey, let's form a new SIG, a new special interest group, together and come up with the best of each. To me, that's the spirit of open source: getting people to collaborate. So it's more about the process, about all the different collaborations within donations and about building our community, rather than the donations themselves, which of course are great.

Colin Contreary: Yeah, and also just the interest in OpenTelemetry growing into more and more domains, more and more interest bubbling up across the space. I mean, Hanson, I know you are itching to talk about one of the donations that is currently in the mix, the Kotlin API. Do you want to touch a little bit on that?

Hanson Ho: Yeah, so Kotlin is Android's preferred language. It's also a language used to develop a bunch of other things, not just Android. And there is not an API or an SDK that powers Kotlin OTel right now. So Embrace and a few other folks are trying to make an API and SDK implementation of OTel. And we're going through this process, as Dan mentioned, of trying to not just create some code and chuck it over the fence and say, here you go, there's code, here's a repo, a donation. The process almost is a forcing function to create a community, because really the code as it is right now is just the beginning; the forming of the community is what's important. And going through the process of, hey, how do you find people to do this? What are the existing projects and existing people you could actually leverage? Is it big enough to create a SIG? Things like that. Going through the new and improved process of how you start this up, it was really streamlined, and it asked us the right questions. I mean, it asked questions that we should have the answers to, and if we don't, then what are we doing? It has preliminarily been accepted, pending some additional work, and it should happen in the new year, or maybe at the end of this year, though the year is running out of time. But it's just the beginning. The donation is the beginning, it is not the end. There's a lot more work happening afterwards. When you donate something, you're not just giving code; you're giving time, giving commitment.

And you're doing recruitment as well. So for anybody going in and saying, I have this cool project I want to donate: well, are you willing to maintain it, or find a community to maintain it? That's the first question that's going to be asked. And if the answer is no, especially in the age of AI where code is cheap and easily generated, if you don't have people to filter, add to, change, improve, and take feedback, it could be a bit of a challenge, shall we say.

Colin Contreary: Nice. I don't know, are we early, or is that a record for lateness? Twenty-five minutes into an observability chat before AI was mentioned. But it happened; we knew it would happen eventually. All right. Unless... Juraci, did you want to touch very briefly on the Injector? Oh, Beyla. I know you wanted to talk about that, right?

Juraci Paixão Kröhling: Yeah, Beyla. No, I wanted to touch on governance in general and use Beyla as one example, and perhaps the Go compile-time instrumentation as another. Those were two donations that happened this year that I think were very interesting, and they set the tone for future donations as well, especially the Go compile-time instrumentation. We had two donation proposals coming in at around the same time from different companies, basically wanting to donate something very similar. And one thing that we achieved as a community was to join them: get those two parties to talk to each other and form a new SIG based on the code that would be the target of that donation. So it's a net new project within OTel, but at the same time, it came with a community, and a diverse community, with people that are experts in that kind of code from two different angles.

And they are collaborating, and nice things are happening. The same with Beyla. Beyla also came with a community, and that community, now OBI (OpenTelemetry eBPF Instrumentation), is growing as well. And I think this is what we on the governance side of the project aim to see for the future. We want projects, of course we do. But we want projects that are sustainable, projects that are set up for success. And we cannot have a big success if we have a one-company SIG. That's not how OpenTelemetry succeeds.

Colin Contreary: Okay, awesome. Thank you so much, Juraci. And now let's move on. We've covered the Ghost of OTel Past, all the brand new things that happened in the past year. Now let's visit the Ghost of OTel Present. For existing projects and tools in the OTel ecosystem, what are some of the biggest improvements to them?

And we touched a little bit on this with Instrumentation Score and the OTel community wanting better ways to ensure quality and understand how to better instrument their applications. I want to start with Adriana on improvements in the OpenTelemetry community in terms of gathering and delivering feedback, kind of just that community growth. Can you touch a little bit on how that's been improved in the past year?

Adriana Villela: Yeah, for sure. So I guess it all started way back when Juraci approached us in the End User SIG about helping the OTel Collector SIG conduct a survey to help drive the direction of the Collector, like what kinds of features were important to folks. And that was kind of the start of us collaborating with other SIGs to run surveys on their behalf.

Because, you know, one of the struggles we had with the End User SIG was that one of our mandates was to share feedback from users with the maintainers, but we didn't have a good process for it. We were kind of at a loss, and the survey was the perfect solution to our problems. After that initial collaboration, we then worked with other SIGs to help them run surveys, and our SIG repository is basically the source of truth, if you will, for the OTel surveys, where we keep the surveys and the results so that anyone who's interested can see the raw data. Now, that was part one. In the last little while, as Dan said, we've had some new members join the OTel End User SIG. In particular, we've had Andrej join. He's from Grafana, and he took it upon himself to look into streamlining the survey process, to really organize things. Then Andrej and I started talking, and he wanted to bring on an LFX mentee to help our project in some way. We talked about bringing that mentee on to design a streamlined process, not only for conducting surveys but also for analyzing the results, because we are not statisticians or data analysts, and we needed a way to make sure that the results were significant. So I'm super excited about the work that's being done. Our mentee, Ernest, has done a wonderful, wonderful job, so huge shout-out to him. He's just about wrapping up his internship, and by the time he's done, we'll be in a great position to have a proper framework for our OTel surveys, which is super cool.

Dan Gomez Blanco: Awesome. There's a lot more to it than you think, you know, because if you've just got a list of questions and some answers, you might not get insights. But if you start to connect things in a way that says, okay, companies of this size, using this particular language, then you connect the dots and try to identify those insights from the answers. That's where the value is. And this is what this LFX mentee has been helping with tremendously, to be honest, and Andrej as well.

Colin Contreary: Nice. All right. Thank you, Adriana. Now, I know we touched on this a little bit, but I want to hand it back to Juraci to talk about Weaver, because we already mentioned Instrumentation Score, and you mentioned the connection there. I know you're very excited. I think your big bet was that 2026 would be the year of Weaver. Talk about some of the improvements in the Weaver project, and also adoption by the community this year.

Juraci Paixão Kröhling: Yeah, I think Weaver became a mature project this year. It is a project where people started knowing what Weaver is about, started playing with Weaver, started writing blog posts and giving KubeCon talks about Weaver. And I think this is really the year of Weaver. It is, I think, the most promising tool that we have in terms of measuring and fixing instrumentation quality and enforcing the governance side of telemetry. How do I ensure that all of the applications within my company have a specific attribute? I can enforce that using Weaver. I can enforce that in my pipelines. I can do live checks. But it all starts with a defined schema, and that comes from Weaver.

In past years, Weaver focused on internal consumption within OTel, like creating the schemas used by semantic conventions to define the attributes and values and so on. This year, I think, is when Weaver really took off and started being used by other people. It made good progress there, and I think 2026 is going to be even better for Weaver.
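As a concrete illustration of the kind of defined schema Juraci describes, here is a small registry fragment written in the style of the semantic-convention YAML that Weaver consumes. The group and attribute names are invented for this example; a team could validate its own registry with the Weaver CLI's registry checks and then enforce those attributes in code generation and pipelines.

```yaml
# Illustrative registry fragment (invented names) in the style of the
# semantic-convention YAML format that Weaver works with.
groups:
  - id: registry.checkout
    type: attribute_group
    brief: Attributes every checkout-service span must carry.
    attributes:
      - id: checkout.tenant.id
        type: string
        stability: development
        requirement_level: required
        brief: Identifier of the tenant that initiated the request.
        examples: ["acme-corp"]
```

Once a schema like this exists, "does every span carry `checkout.tenant.id`?" stops being a convention in a wiki and becomes something a pipeline can check.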

Colin Contreary: Nice, awesome, thank you. And now let's learn a little bit about something I think has been in the works for a while. Dan, I know you're excited to talk about the changes to the logging API. Can you share a little bit about what went on there this year?

Dan Gomez Blanco: Yeah, so it happened this year, but the changes are still being implemented across different languages. The discussion between events and logs, which at times seemed endless, has been settled, hopefully. And there was thinking about logging in a different way within OpenTelemetry.

We originally said, you know, the logging API is not supposed to be a user-facing or public API. The idea behind logging was instead to have logging bridges to existing logging frameworks: the logging API used as a bridge to current frameworks, with the underlying mechanism to export logs being part of the SDK. Then, when we started to think about events, everything related to the client side, all the discussions happening around mobile and browser, and also GenAI and LLM monitoring, basically started to show us different needs for events, right?

So changes happened to the logging API to make it public-facing and to give it a way to emit events. Using the logging API, you can emit events, and you can start to think about span events in a different way, because ultimately, if you emit an event that has context, and in OpenTelemetry everything has context, that event should be tied to a particular span or a particular trace context, right? So using the logging API has benefits, and alongside the way we emit these events, there have been improvements to support complex attributes.

Again, this was driven by all this stabilization of semantic conventions: we realized that we need to be able to have complex attributes in events, because for LLM cases, you need complex values in attributes. These changes are now in the spec, and they're being implemented across the languages. So yeah, big changes there that will unlock a lot of other work streams, right?

Colin Contreary: Awesome. Thank you so much, Dan. And now, we cannot move on to discussion topic three without talking about one of the big things in 2025: the investment in client-side usage in OTel, both client-side APIs and SDKs, for both mobile and web. So I do want to hand it over to Hanson. Can you talk a little bit about some of the progress that has been made in moving OTel support into the beautiful frontend?

Hanson Ho: Yeah, so specifically, I mean, everything's a client, but this is about end-user-facing clients, where end-users directly use that client, so web and mobile. And I think there's been tremendous progress made this year in terms of maturity, stabilization, properly structuring the projects, and how things are consumed. There's a new SIG for browsers, specifically tackling instrumentation and semantic conventions. On iOS and Swift, there's a push to separate the API and SDK and allow greater modularity in consuming those artifacts. And OTel Android is also shaving down some API edges; we released a release candidate, with stability probably coming sometime next year, and a supported configuration API that doesn't say, 'Hey, configure everything.'

So the tools are getting to a place where they are starting to serve the community in better ways. The next thing, which we'll probably talk about in the next section, is: well, here are the tools, but how do you use them? How do you think about telemetry? How do you model all this stuff? We're not there yet, but I do believe 2026 will be a big year for that kind of stuff.

Dan Gomez Blanco: Yes. I've got a question for you, Hanson, as well, because I watched your KubeCon presentation, "The Life of a Mobile Span," which I loved. I think everyone should watch it. And it's not the same, right? Telemetry on the client side has many different requirements; if you think about the way you design telemetry in the backend, it's got nothing to do with it, right?

Hanson Ho: Yeah, I think folks stepped into the end-user-facing client world with kind of a backend mentality. It's like, how do we track this stuff in the backend? We'll do the same in the frontend. But if you think about mobile devices as nodes in a fleet, it starts to break down, and the amount of data that's being transported and expected to safely arrive also becomes a little bit different. So I think 2025, and 2024 frankly, were years of taking stock and saying, 'Hey, there is a bit of a difference here. How do we start rounding things out?' But yeah, the differences being recognized, and diversity being celebrated and supported, is, I think, a key tenet of the project and of the work that the fine folks on the end-user client side are doing.

Dan Gomez Blanco: Thanks.

Colin Contreary: Hanson, do you walk around KubeCon and overhear people talking about spans, and you just have a lot of "hold my beer" moments? You just go, "Hold my beer, hang on a second." Is that what most of your experience is?

Hanson Ho: No, it's usually just looking around and thinking, whoa, these people are cool. No beer holding during the convention.

Colin Contreary: OK, gotcha, gotcha, gotcha. All right, cool. And so now let's move on. We've covered the Ghost of OTel Present, and now we are ready for a glimpse of the Ghost of OTel Future. What are the best things coming up in 2026? What are we most excited about? What is ready to take the leap into either stability or more adoption? What are the biggest things that we are most excited to do? I want to start off with Adriana. Adriana, tell us about OpAMP.

Adriana Villela: Oh, OpAMP? So OpAMP is basically a protocol that was developed to manage a fleet of collectors. I think the thing that's exciting about OpAMP is that it can make your life easier if you're running a ton of collectors in your production environment, which I would say most organizations probably are.

But I feel like it’s kind of like the little secret of OTel, if you will, because I think a lot of the OTel early adopters found ways around managing fleets of collectors without OpAMP. And so I would love to see in the future more organizations shift towards using OpAMP more readily.

The thing with OpAMP is that it is kind of tricky to set up. It's not like you just click a button and away you go, right? So you've got companies like Bindplane who are like, 'Ooh, you know, we can capitalize on that and make it easier to manage your fleets of collectors, leveraging the OpAMP protocol. Let's put a nice UI on top of it, and we can help you deal with that.'

So with these abstraction layers, I think we're starting to see the value of that, which isn't a huge surprise, right? It's like Kubernetes: you go to KubeCon, and most of the companies in the solutions showcase are all about Kubernetes abstractions, because the stuff that Kubernetes does is too complicated for the average human being to want to do from scratch. Let's just make it easier, right? So we're seeing companies sprout up that aren't just observability backends. They're like, 'Hey, you know, OTel does this. We can make it a little bit easier. Let's go ahead and do that.' So I'm excited to see what comes out of that, to see more people adopting OpAMP and more people becoming aware of it.

I don't know if this is in the roadmap at all for OpenTelemetry, but maybe even OpenTelemetry having its own little abstraction layer to make it easier to get started with OpAMP. That's what I would love to see. I would also love to see more people giving a little more love to the OpAMP bridge for the OTel Operator. I'm a huge fan of the OTel Operator, another little unsung hero of OpenTelemetry, and I would love to see more people tapping into that functionality. So we'll see.

Juraci Paixão Kröhling: Yeah, that's the supervisor on the Operator, right? So that's very cool. And I think we are seeing the first crop of OpAMP servers and OpAMP tools. During KubeCon in Atlanta a couple of months ago, we saw the people from Nike showing how they are doing OpAMP servers, and I thought that was really cool. That's an end-user taking the OpAMP spec into their own hands and implementing a server to manage their fleet of collectors. And I think 2026 holds promise for OpAMP as well. I think we're going to see a new crop of tools on top of OpAMP.
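To make the fleet-management idea concrete, here is a toy sketch of the server-side bookkeeping an OpAMP-style server performs: each agent reports the hash of its effective config, and the server pushes the desired config only to agents that have drifted. This is a pure-Python illustration with invented names; the real OpAMP protocol exchanges protobuf messages over HTTP or WebSocket.

```python
# Toy sketch of OpAMP-style fleet management (invented names, not the real
# protocol): compare each collector agent's reported config hash with the
# desired config and return a remote-config payload only when it's stale.
import hashlib

def config_hash(config: str) -> str:
    """Stable fingerprint of a config document."""
    return hashlib.sha256(config.encode()).hexdigest()

class ToyOpAMPServer:
    def __init__(self, desired_config: str):
        self.desired_config = desired_config
        self.agents: dict[str, str] = {}   # agent id -> last reported hash

    def on_status_report(self, agent_id: str, reported_hash: str):
        """Handle an agent-to-server status report; return a remote-config
        payload if the agent's effective config is out of date."""
        self.agents[agent_id] = reported_hash
        desired = config_hash(self.desired_config)
        if reported_hash != desired:
            return {"remote_config": self.desired_config,
                    "config_hash": desired}
        return None  # agent is up to date, nothing to push

DESIRED = "receivers: [otlp]\nexporters: [otlphttp]"
server = ToyOpAMPServer(desired_config=DESIRED)
stale = server.on_status_report("collector-1", reported_hash="deadbeef")
fresh = server.on_status_report("collector-2",
                                reported_hash=config_hash(DESIRED))
print(stale is not None, fresh)  # True None
```

The appeal Adriana describes falls out of this loop: one desired config, and the protocol handles converging hundreds of collectors toward it instead of you shelling into each one.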

Colin Contreary: Nice. Now, I know that a lot of work over the past year has gone into overall stability across the OpenTelemetry project. I know several of y'all talked about how happy you are that so many parts of it are reliable and more stable, and that there's less developer friction in using and upgrading tools. So I definitely want to talk about all the work that's gone into stability, and how, if you are new to OTel, there are far fewer potential hair-pulling situations in getting started. Maybe, Dan, you can talk a little bit about some of that stability work and how it'll help people using OTel in 2026.

Dan Gomez Blanco: Yeah. One of the things I would like to call out, and ask folks to help us guide the discussion on, is a blog post from the governance committee just before KubeCon, which was also discussed at KubeCon: the approach to stability project-wide. In terms of making OTel instrumentation stable by default, that raises questions like: should components that are not marked as stable be disabled by default? Should they be bundled with stable components? How do we handle the release process? Do we need some type of epoch, or coordinated releases across OpenTelemetry? So there are multiple things being considered at the moment to improve the way that people consume OTel on day two.

Getting started with OTel is very easy, and there have been lots of improvements in documentation and lots of improvements in SDK config. But then there are day-two operations. OK, you're running at scale: what do you need as an end-user? Let us know. There is a call to action there for anyone who is an end-user and wants to provide a bit of direct feedback to the governance committee. And apart from that, sending them a message on Slack also works.

Colin Contreary: Nice. And can you talk a little bit more, Dan? You touched on it, but what were some of the changes helping stability moving forward? Is that what you were talking about, like enforcing different component versions? Can you share a little bit more about some of the–

Dan Gomez Blanco: Yeah, I'll take one example: instrumentation libraries. What does it mean for an instrumentation library to be stable? The library itself, the code, can be stable, but maybe the semantic conventions that it's based on are not stable. Do you then call that instrumentation library stable? I mean, as a user, do you understand that you're using a stable component?

But the metrics underneath may change with a major version bump of that instrumentation library. So these are the discussions that we want to have. Nothing has been codified into an OTEP yet; for anyone who isn't aware, OTEPs (OpenTelemetry Enhancement Proposals) are enhancement proposals to OpenTelemetry, and these are being discussed. This is why we're looking for feedback from folks, to understand what would work for an end-user applying OTel at scale. Another thing I mentioned was versions. There are many different components in OpenTelemetry, each with their own version numbers. And I guess one of the pain points is that if you pick OTel, what is a version of OpenTelemetry? At some point recently, I heard someone mention OTel 2.0, and I was like, I don't know what they're referring to.

So yeah, what is OTel 2.0? I think that's part of the reason that, if you think about OpenTelemetry as a product, there should maybe be some type of coordinated release that takes a particular version of each of these components and says, okay, we call this version X, right? So yeah.

Colin Contreary: Yeah, too much versioning. Is observability and OpenTelemetry being versioned differently? Oh no, oh goodness. 2.0, 3.0. Awesome, thank you so much, Dan. Hanson, let's talk a little bit. You touched on some of the improvements in OpenTelemetry support for mobile and web. Can you talk a little bit about what's next for the Kotlin community? Obviously, Kotlin can be used everywhere, back-end and front-end, but in terms of Kotlin usage in OpenTelemetry, where do you envision that going?

Hanson Ho: First is getting people together and defining the use cases we want to support, because Kotlin usage is relatively diverse, and if we try to attack it on all fronts at the same time, that could be a little chaotic.

So I think we might have a problem on our hands where there's so much interest that managing the project in a reasonable way could be difficult. I don't know exactly where it's gonna go in 2026. I know it's going forward, and it's gonna be up to the community to figure out what the priorities are and how we go about doing it. But 2026 is about forming the community and really looking at people who were previously using other technologies to achieve the same thing. On Kotlin Multiplatform, for instance, you'd have to have different instances of OTel SDKs to actually send your data back to the collector. With a Kotlin SDK, you only need one across multiple platforms: web, iOS, Android.

How those use cases evolve will really determine where it goes. But really, it's about activity and participation. What those activities are and who those participants will be is TBD. But I am excited, and I am excited to be part of this. And that is just one aspect: I think a lot of work in the end-user client space is going to be about documentation, getting people started, and actually codifying our knowledge, perhaps in an Instrumentation Score-type implementation. The world is our oyster. We only have so much time, and we as a community will have to figure out where it goes.

Colin Contreary: Awesome. Thanks, Hanson. Juraci, you touched on it way earlier in the chat, but I mean, come on, we know the future is clearly AI everything. I want to talk a little bit about AI making its way into OTel.

Juraci Paixão Kröhling: Of course it is, right? I mean, it's not technology if it's not AI in 2026. No, if I look at the way I was working at this time last year and compare it to where I am right now, everything changed. I don't know about you, but my entire workflow changed with the new AI tools. And I can only imagine that we're gonna see the same kind of shift, or shake-up, in OTel, instrumentation, and observability in general.

So we've been seeing tons of AI SRE tools. We've been seeing a lot of assistants helping us build dashboards and create alerts and things like that. But I think we're not tapping into the full potential. In terms of OTel, we are seeing some things already being done, like the Generative AI semantic conventions. That's one: how do we ensure that the several SDKs and tools that people have at their disposal nowadays follow the same semantic conventions? How do they speak the same language when it comes to observing agents, to observing Generative AI and LLM calls? So that's one aspect. The other is that there's a new proposal for an OTel MCP server. What does it actually mean to have an MCP server for OTel? Is it only about semantic conventions? Is it about helping people instrument their code? Is it about configuring the Collector? What is it about?

I think those are all questions that we need to ask, and we're going to eventually get the answers. But there's way more than that. We talked about Instrumentation Score before, and about how to name things and so on. I can definitely see, and this is happening already, people using Instrumentation Score rules as context for their coding agents, so that they can have better instrumentation without being OTel experts. They can tap into our brains by using that knowledge base without having to know telemetry that well. We are seeing the same kind of experimentation going on on our side with instrumentation agents. You plug an instrumentation agent, a coding agent, into your code, and it can suddenly configure OTel for you without you knowing OTel.

So we know that OTel is complex. We talked about the declarative config effort that makes it easier, but that still requires a lot of knowledge about instrumentation, about observability, about OTel. We are trying to make it easier, but perhaps the easy button here is really to have something configure OTel for me. I think that's one way we can go in the future. There are so many directions we can go, but yeah, I'm very excited about that specific topic for 2026 as well.

Colin Contreary: Thanks. Awesome. Thanks, Juraci. Yeah, the AI space moves so fast. I mean, I feel like they just announced the Agentic AI Foundation that's now part of the Linux Foundation. My goodness. I know it's not quite CNCF, but it's the same umbrella; we're all under the Linux Foundation. I wanted to kick it over to Adriana. In terms of the future, I wanted to get your thoughts about sampling and getting a better handle on observability vendor costs. Where do you see that going in 2026?

Adriana Villela: So, you know, OpenTelemetry is at that point now where a lot of organizations are past the 101 phase, right? Past 'How do we do it?' and down to 'Are we doing it well?' And, you know, that ties into the stuff we were talking about earlier with Instrumentation Score and the quality of telemetry. The other thing, of course, is the cost of telemetry.

It's interesting: we're seeing a lot of talk submissions come up around the cost of telemetry, because I think a lot of organizations are waking up to a sobering reality, which is that we can't emit all the telemetry in the world, because it costs us money. So how can we emit telemetry in a way that is meaningful and still save money? One of the things I think we should strive to do more of in 2026 is educate more folks on how to sample effectively. It's also interesting to see that there are some observability vendors that have sampling built in on ingest to sort of help out with that, right? Because maybe the people in the organizations consuming these products aren't necessarily well versed in sampling.

So having someone show them how to do it is super helpful. And there is, I'm trying to remember the name now, yes, OTel Arrow. Now, OTel Arrow doesn't do sampling, but it does deal with massive amounts of OTel data. The best way to describe what it does is that it kind of compresses your data. So if you have a bunch of repetitive stuff, you're not sending it over and over and over, right? It just references those repetitive bits of telemetry. And so I think that's another exciting area where we could see some further developments and further chatter: around OTel Arrow.

Colin Contreary: Nice. Awesome. Thanks, Adriana. And I know we had a question submitted, so I do want to go ahead and answer that as well, because we are coming up on the hour. So if you have any other questions, get them in now, and we will see if we can squeeze them in. Are there any plans for React Native support in OpenTelemetry? Hanson, can you take a swing?

Hanson Ho: Well, React Native is funny in the sense that you write code in one language and it gets generated and turned into native code in other languages. So how do instrumentation and SDKs pass through that boundary? There isn't right now, as far as I know, a React Native working group specifically about how to have first-class React Native support, because you'd effectively have to have an SDK that does JavaScript and then also emits to the collectors on native platforms, and how do you tie that all together? There are vendor solutions out there that do React Native and OTel, but for OTel itself, I think if there's a proposal and there's a community behind it, as we talked about before, we're happy to examine donation proposals.

But it is quite tricky. If it's hard enough to do mobile and web with one language, imagine having three different runtime environments and all that stuff. So it's a very cool challenge, I'll say that. And if the question asker is interested in it, I think folks would love to talk and see what could be done if there's a big enough community. You don't have to ask permission. You can just do it.

Colin Contreary: Maybe they heard us talking earlier about the dedicated browser SIG, and they said, 'Then JavaScript all the things immediately.' I don't know. Who knows? Thank you, Hanson, so much for that answer. Actually, hang on, we might have one last question. Okay, interesting. Hanson, this is another one for you real quick; maybe give the 30-second answer. The question is: 'What's the best or most effective way to get involved and ensure our use cases and interests are represented in the evolution of OTel Kotlin observability, especially for dev teams whose primary role isn't observability?'

Hanson Ho: Well, the best way to get involved is to just get informed. In the CNCF Slack, there’s the OTel Kotlin channel. And as the SIG and everything gets solidified, there’ll be additional communication coming out: SIG meetings, minutes, and so on. But right now, I would say join the CNCF Slack, search for #otel-kotlin, and get into it. Things are not going to be very active right now, because obviously we’re heading into the Christmas holidays. Starting in January, I think it’s going to rev back up. You can also check out the repository right now, which is currently still in Embrace, pending donation, and will hopefully be moved to the OpenTelemetry organization. You can create issues. You can message maintainers. You can reach out by email, and various other ways on that GitHub page. Colin’s going to post it, I think, as part of the description afterwards. So just get involved. Just contact us.

Juraci Paixão Kröhling: And to make a more general case for that: if you are an end-user and you have any feedback to give to the community as a whole, join the OTel End User SIG. If you are not an observability expert but you have something to say, join the community and let us know. We need feedback from end-users across the whole community, not only Kotlin but the whole OTel community. We need you.

Adriana Villela: I would also do an extra plug for the OTel End User SIG. If you have a cool OTel story to share or want to talk about how you’re using OTel in your organization, reach out to either Dan or me or even post in #otel-sig-end-user on CNCF Slack. We want to hear your story.

Colin Contreary: Awesome. Thank you so much. So that’s all the time we have for questions. Let me just do a quick wrap-up because we are coming up on time. I want to give a big, big, big thank you to this wonderful panel. And thank you, audience, for being here and for coming along on this journey into the past, present, and future of OpenTelemetry. As some of the panelists have mentioned, we’ve covered a lot of projects and a lot of tools. If you’re sitting there going, ‘I can’t remember all this,’ don’t worry.

We will send a follow-up email with links to all the tools and resources our panelists mentioned today, so you can look into them and try them out yourselves. With all that said, I hope everyone here has a fantastic holiday season, an even better new year, and hopefully we will see you at the next panel. Thank you all for being here.

Dan Gomez Blanco: Thank you.

Adriana Villela: Thanks.

Juraci Paixão Kröhling: Thank you. Bye bye.

Hanson Ho: Bye.
