This repository has been archived by the owner on Dec 6, 2024. It is now read-only.

Introduce Telemetry Schemas #152

Merged

Conversation

tigrannajaryan
Member

Resolves open-telemetry/opentelemetry-specification#1324

I believe changes to semantic conventions and the shape of emitted telemetry
are likely to occur during the lifetime of an instrumentation library.
I do not think we should aim to lock the telemetry schema and disallow changes to it.
Such locking would place a huge limitation on how instrumentation can evolve and
would make it nearly impossible to fix mistakes in the semantic conventions, in the
schema or in the implementation of the instrumentation (which will inevitably happen
sooner or later).

This OTEP introduces the concept of telemetry schemas that allows semantic conventions
and instrumentations to evolve over time without breaking consumers of telemetry.

@tigrannajaryan tigrannajaryan requested a review from a team April 12, 2021 22:24
@tigrannajaryan tigrannajaryan force-pushed the feature/tigran/schemas branch from ef79ccb to e9afbf7 Compare April 12, 2021 22:29

OpenTelemetry API and SDK require the following changes:

- Add method SetSchema(schema_url) to Tracer. After this call all telemetry
Member

I think it would be better to only allow this to be called once, e.g. by making it an optional argument to the Provider.

Member Author

This is a good point and I was thinking about it, but I was not sure we want to alter the signature of an API that is already declared stable (GetTracer). If I am not wrong, it would be a backward-compatible API change, but it would likely be a breaking ABI change (I forget now whether we require ABI compatibility for languages like C++).

Alternatively, for languages that allow overloading, we could introduce a new GetMeter/GetTracer/GetLogEmitter overload that takes 3 parameters (the 3rd parameter being schema_url).

I am open to suggestions.

Member

We need to have a design principle that certain API methods are extensible, and use language-specific constructs to achieve that. E.g. in Java the tracer shouldn't be accessed via provider.getTracer, but via a builder. In Python it can be a getTracer method with keyword args.

Contributor

I agree that this should be set only once, preferably at creation time, and that once the tracer has been used at all, it should not be able to have its schema value changed.

For instance, I think in Java, we'd probably just add an overload to the TracerProvider.get() method to allow the specification of a schema URL, and under the hood, we'd default it to whatever schema version the semantic convention library aligned with the SDK version used (or something functionally equivalent to this... we might choose to introduce an optional builder for tracers but it would effectively do the same thing).
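
To make the "set it once at creation time" idea concrete, here is a minimal Go sketch under assumed, illustrative names (TracerProvider, GetTracer, TracerOption, WithSchemaURL are not an existing OpenTelemetry API): the schema URL arrives as an optional argument at tracer creation, so callers that don't know about schemas are unaffected and the value cannot change after the tracer is handed out.

```go
package main

import "fmt"

// TracerOption configures optional tracer metadata at creation time.
type TracerOption func(*tracerConfig)

type tracerConfig struct {
	schemaURL string
}

// WithSchemaURL associates the tracer with a telemetry schema version.
func WithSchemaURL(url string) TracerOption {
	return func(c *tracerConfig) { c.schemaURL = url }
}

// Tracer carries an immutable schema URL; there is no SetSchema call that
// could change it after the tracer has been handed out.
type Tracer struct {
	name      string
	schemaURL string
}

type TracerProvider struct{}

// GetTracer takes the schema URL as an optional argument, so callers that
// don't know about schemas keep working unchanged.
func (p *TracerProvider) GetTracer(name string, opts ...TracerOption) *Tracer {
	var cfg tracerConfig
	for _, opt := range opts {
		opt(&cfg)
	}
	return &Tracer{name: name, schemaURL: cfg.schemaURL}
}

func main() {
	provider := &TracerProvider{}
	tracer := provider.GetTracer("io.opentelemetry.contrib.mylibrary",
		WithSchemaURL("https://opentelemetry.io/schemas/1.0.0"))
	fmt.Println(tracer.name, tracer.schemaURL)
}
```

Languages without variadic options could get the same effect with an overload or a builder, as discussed above.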

Member Author

I rewrote "API and SDK Changes" section to reflect this discussion. Please check.

Member Author

@yurishkuro please have a look at the updated "API and SDK Changes" and see if you are OK with it.

Contributor

@jsuereth jsuereth left a comment

I love this direction! It may take me a while to think through the nuances of the schema format and its evolution. Specifically: I'm a bit concerned that you may be defining 'code' / 'behavior' in YAML around these transformations.

- Telemetry schemas are versioned. Over time the schema may evolve and telemetry
sources may emit data conforming to newer versions of the schema.

- Telemetry schemas explicitly define transformations that are necessary to
Contributor

+1

Member Author

@tigrannajaryan tigrannajaryan left a comment

Specifically: I'm a bit concerned that you may be defining 'code' / 'behavior' in YAML around these transformations.

That's exactly what it does. The transformations are high-level code: they are instructions that operate on telemetry data, and the set of opcodes is defined by the schema file specification. Can you please tell me what your concerns are?
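
As a rough illustration of transformations being "instructions that operate on telemetry data", here is a hedged Go sketch that parses one hypothetical rename opcode from a YAML fragment and applies it to a span's attributes; the attribute_map key, the attribute names, and the use of gopkg.in/yaml.v3 are assumptions for this example, not the schema file format defined by the OTEP.

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// renameAttributes models one transformation "opcode": a mapping from old
// attribute names to new ones, applied when converting data between versions.
type renameAttributes struct {
	AttributeMap map[string]string `yaml:"attribute_map"`
}

// apply rewrites matching attribute keys on one telemetry record in place.
func (t renameAttributes) apply(attrs map[string]interface{}) {
	for oldName, newName := range t.AttributeMap {
		if v, ok := attrs[oldName]; ok {
			delete(attrs, oldName)
			attrs[newName] = v
		}
	}
}

func main() {
	// A fragment of a hypothetical schema file section describing one change.
	schemaFragment := []byte(`
attribute_map:
  old.attribute.name: new.attribute.name
`)
	var tr renameAttributes
	if err := yaml.Unmarshal(schemaFragment, &tr); err != nil {
		panic(err)
	}

	spanAttrs := map[string]interface{}{
		"old.attribute.name": "value",
		"http.method":        "GET",
	}
	tr.apply(spanAttrs)
	fmt.Println(spanAttrs) // map[http.method:GET new.attribute.name:value]
}
```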

Contributor

@jmacd jmacd left a comment

Very enthusiastic, thanks @tigrannajaryan

which currently does not exist (all communication is currently
one-directional, from sources to consumers).

## Prior Art
Contributor

Fantastic. 😀

To allow carrying the Schema URL in emitted telemetry it is necessary to add a
schema_url field to OTLP messages.

We add schema_url fields to the following messages:
Contributor

This is an interesting pattern, where we place a new optional field in one of two places, three times over. Since it's likely to set a precedent, I have a couple of thought experiments:

  1. Instead of adding a new schema_url to the ResourceSignal messages here, why not use a resource named schema.url? (JSON Schema uses $schema, I like that too.) This would support schema URLs with no new fields.

  2. Instead of adding a new schema_url to each of the InstrumentationLibraryXXX messages, could we instead add another place for per-instrumentation-library resources to be expressed? E.g., opentelemetry.proto.resource.v1.Resource library_resource = 3;. Would that be more flexible? I think it might be.

The reason for both of these questions is that I can imagine other kinds of properties that are somehow special, and if we add a new field for each one, we're in trouble. I'm thinking about information related to the metric collection interval and/or other kinds of processing that may have taken place as telemetry data is collected: information we may want to record either at the global level or at the per-library level.

In transportation systems, the term "Waybill" is used to document what is in transit in a multi-party exchange of goods. We have a similar situation here, where you want to express "what's inside" with a schema, and I strongly suspect there are other kinds of "what's inside" information ahead.

Member

I'd also offer a non-specific concern over increasing the size of outgoing messages from OpenTelemetry, especially from things like the browser runtime. To that point, I'd support the idea of making the schema a named resource vs. adding fields.

Member Author

@jmacd

Instead of adding a new schema_url to the ResourceSignal messages here, why not use a resource named schema.url? (JSON Schema uses $schema, I like that too.) This would support schema URLs with no new fields.

Do you mean a resource attribute named "schema.url"? That would work. The only thing I am not quite sure about is whether it is fair to say that "the telemetry schema is an attribute of a Resource that emits the telemetry".

Instead of adding a new schema_url to each of the InstrumentationLibraryXXX messages, could we instead add another place for per-instrumentation-library resources to be expressed? E.g., opentelemetry.proto.resource.v1.Resource library_resource = 3;. Would that be more flexible? I think it might be.

Alternatively, we could allow InstrumentationLibrary to have arbitrary attributes and record a "schema.url" attribute there. I am not quite sure that recording a Resource inside an InstrumentationLibraryXXX message reflects our understanding of what a Resource is.

I don't have a strong opinion; I am open to suggestions.

Member Author

I'd also offer a non-specific concern over increasing the size of outgoing messages from OpenTelemetry, especially from things like the browser runtime. To that point, I'd support the idea of making the schema a named resource vs. adding fields.

@austinlparker the size increase is likely negligible, since the schema_url is recorded once per batch of telemetry emitted by an instrumentation library.

Member

@austinlparker austinlparker Apr 14, 2021

I'd also offer a non-specific concern over increasing the size of outgoing messages from OpenTelemetry, especially from things like the browser runtime. To that point, I'd support the idea of making the schema a named resource vs. adding fields.

@austinlparker the size increase is likely negligible, since the schema_url is recorded once per batch of telemetry emitted by an instrumentation library.

Ah, OK, it read like you could apply multiple schema.url attributes. Looking at OTLP, it seems like adding this field to InstrumentationLibrary would make the most sense. There is other metadata that could be useful there as well, such as links to the source of the instrumentation itself, to a service catalog, and so forth. Maybe we just handle them as attributes rather than as resources and build semantic conventions up for them.

Member Author

Maybe we just handle them as attributes rather than as resources and build semantic conventions up for them.

Yes, this is a possible option.

Member Author

I have put some more thought into this. While I believe it is a possible option, I think it is a tiny bit less desirable than using a distinct protocol field in OTLP. If the schema URL is recorded simply as an attribute, we need to come up with a name for that attribute; let's assume for example it is schema.url. But what if we want to change the name of this attribute? We can't: this would be the one attribute which is not described and is not handled by the schema. It becomes an exception, and I do not quite like having exceptions like this.
I think having a distinct new field in OTLP that is part of the proto is warranted in this case, since it is meta-information that describes the attributes; it is not the same kind of information as the other attributes of a resource.
So, for this reason my slight preference is to keep my original proposal.
To be clear: this does not prevent us from introducing attributes for instrumentation libraries. But that's a separate topic and can be done in a separate OTEP unrelated to this one.

Member

I agree.

Member

I concur with your logic (the "meta-information that describes attributes" part). Perhaps you could add a note or errata that indicates why this information is special enough to warrant a new distinct field, in an attempt to keep this decision from establishing precedent? I think I'm more concerned about the potential for this decision to create a ripple effect, where we're introducing many versions of OTLP and causing friction for implementors and end-users alike.

Member Author

OK, let me see if I can add something to the document.

To be clear: introducing schema_url as a new field in OTLP is a fully backwards compatible change. The default value for schema_url will be the empty string (as it normally works for new fields in Protobufs) which works great for our purposes. Empty schema_url indicates that the schema is unspecified (as you would expect from any pre-existing software that doesn't know about the concept of schema).
This is one of the fortunate cases when OTLP sender implementations and OTLP receiver implementations don't need to coordinate anything and can start supporting schema_url at their leisure. :-)
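
A minimal sketch of the receiver-side effect, assuming a stand-in struct rather than the real OTLP message: because proto3 defaults a new string field to the empty string, batches from senders that predate the field naturally read as "schema unspecified".

```go
package main

import "fmt"

// resourceSpans stands in for the OTLP message that would gain the new field.
type resourceSpans struct {
	schemaURL string // new field; older senders simply never populate it
}

// schemaOf returns the schema to assume for a batch. An empty value means the
// sender predates the schema concept, which is exactly the desired behavior,
// with no sender/receiver coordination required.
func schemaOf(rs resourceSpans) string {
	if rs.schemaURL == "" {
		return "<unspecified>"
	}
	return rs.schemaURL
}

func main() {
	legacy := resourceSpans{}
	current := resourceSpans{schemaURL: "https://opentelemetry.io/schemas/1.0.0"}
	fmt.Println(schemaOf(legacy))  // <unspecified>
	fmt.Println(schemaOf(current)) // https://opentelemetry.io/schemas/1.0.0
}
```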

@jmacd
Contributor

jmacd commented Apr 13, 2021

I'm excited about the potential to use Telemetry Schemas inside logs data as a way to enable Logs-to-Metrics, Logs-to-Span, and Logs-to-Resource transformations (😁).

@victlu

victlu commented Apr 13, 2021

How does this OTEP compare against the more general schema evolution features available in protocols like Avro and/or Parquet? I assume we need something more purpose-built for the telemetry community?

@tigrannajaryan
Member Author

How does this OTEP compare against the more general schema evolution features available in protocols like Avro and/or Parquet? I assume we need something more purpose-built for the telemetry community?

Avro and Protobuf (I am not familiar with Parquet) require a full schema definition for the data, while this proposal intentionally avoids that and deals with schema changes only (for now). Other than that, there are similarities, such as being able to rename a field and retain compatibility (AFAIK, Avro via name aliases, Protobuf via field numbering).

This proposal is also extensible in the sense that arbitrarily complex changes to the schema may be introduced as supported transformation types in the future, which would be more powerful than what Avro or Protobuf offer in terms of schema evolution.

So, where Avro or Protobuf may allow schema evolution more or less automagically, they are very limited in the sense of what is allowed to evolve, while this proposal is potentially more powerful at the cost of requiring explicit declaration of changes.

This proposal also has many specifics which make sense in the context of OpenTelemetry (e.g. schema version numbers, schema URLs, downloadable schemas, etc.), but which are not necessarily useful in a general-purpose encoding/data transport such as Avro or Protobuf.

@gramidt
Member

gramidt commented Apr 14, 2021

I'm excited about the potential to use Telemetry Schemas inside logs data as a way to enable Logs-to-Metrics, Logs-to-Span, and Logs-to-Resource transformations (😁).

This is super exciting! Outside of providing a path for multi-variate metrics, it could also eventually be used as a replacement implementation for my immediate Log-to-Span needs ( open-telemetry/opentelemetry-collector-contrib#3071 ). 🚀

### Freeze Schema

Instead of introducing formal schemas, schema files and version we can require
that the instrumentation once it is created never changes. Attributes are never
Member

As long as the instrumentation conforms to the semantic conventions before and after, that should be fine, shouldn't it?

I think rather than freezing the schema completely, we only need to ban changes that would create an entry in the YAML file defined in this OTEP. So renaming is not allowed, but adding new attributes is. Changing the meaning of an attribute is also not allowed, but I think that also cannot be handled automatically with the schema YAML, although the version specified there could then be used in the backend for version-specific handling.

Member Author

Yes, certain changes would be fine with the approach you propose. I still don't think this alternate approach is sufficient; I don't believe it solves the problems we want to solve. In my opinion, being able to rename attributes or metrics is a basic requirement. Given how much time we spend arguing about attribute names here in OTel semantic conventions, and how much uncertainty there is that the name is right even after the attribute is accepted, I believe we will inevitably run into situations where name changes are wanted.

I have seen schema troubles in the past. For example, with metrics emitted by the OTel Collector, we renamed metrics and broke Collector dashboards in a proprietary monitoring tool. When schemas are not explicitly declared and are not a visible concept, they still change and things still break, just less visibly so.

Again, we can argue that we should be careful and come up with semantic conventions that we think are right and then never change them. I think it is an attempt to achieve perfection that is very difficult to do and at the same time increases the barrier to introducing semantic conventions because the cost of making a mistake is very high. I think we should lower that cost a bit. It should still be expensive to change semantic conventions (we don't want to encourage that too much), but it should not be impossible.

@tigrannajaryan tigrannajaryan force-pushed the feature/tigran/schemas branch from a4dec17 to 9c2715f Compare April 19, 2021 20:18
@zenmoto zenmoto left a comment

I really like this proposal. I like that it bakes in the ability to evolve both the schema contents and the schema capabilities as requirements for representing data evolve. I think that modeling the schema in an event-sourcing sort of way really lends to an extensibility and fluidity of the design that's missing in other schema systems I've used. My biggest misgiving about semantic conventions is that they cannot be exhaustive but they are also difficult to extend without conflict. I think that the approach you take here neatly addresses this problem and adds much-needed flexibility to a critical portion of the design.

Contributor

@jsuereth jsuereth left a comment

I still have reservations about our ability to define a cohesive migration language in YAML that doesn't eventually turn into gobbledygook (or something hard to maintain).

However, on the whole, I really like this proposal for stability and clear evolution.

container.cpu.usage.total: cpu.usage.total
container.memory.usage.max: memory.usage.max

- rename_labels:
Contributor

Nit: "labels" are now "attributes" (like logs)

Member Author

Thanks. I'll wait until the spec is updated and will update here as well.

# starting from this version.
```

#### rename_labels Transformation
Contributor

Nit: this can be removed (we only have attributes going forward)

@tigrannajaryan
Member Author

tigrannajaryan commented Apr 22, 2021

I already have a small prototype implementation in Go

Would be nice to have a prototype in another language (a dynamic one if possible), just to have further validation. We can discuss this after this PR is merged ;)

Just to be clear.

Schema conversion is a process that can be CPU intensive. SDKs are not expected to implement schema conversion; this is a backend job or a Collector job. So I don't think we need to see how slow schema conversion is in slow languages, because people are just not supposed to do it in slow languages. My benchmark in Go is representative of what we will see in the Collector. We may also want to benchmark in some other fast languages that people may use for backends (Java, etc.), but I do not expect to see dramatically different results.

The work that SDKs are supposed to do is to populate the schema_url field. This is a trivial amount of work and will most likely be below measurement error in any benchmark that generates telemetry using our SDKs.

OpenTelemetry publishes its own schema at
`https://opentelemetry.io/schemas/<version>`. The version number of the schema
is the same as the specification version number which publishes the schema.
Every time a new specification version is released a corresponding new schema
Member

It may be useful to require even stronger coupling. Since both schemas and semantic conventions are machine-readable, there could be a simple validation script that makes sure that the new version of the schema accurately reflects the changes to the spec, e.g. if an attribute was removed / renamed from the spec, something about that has to be present in the schema and we should block the release otherwise.

Member Author

Yes, validation is a good idea.
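
A rough sketch of what such a validation check could look like; everything here is hypothetical and hard-coded, whereas a real script would read the attribute lists from the machine-readable semantic conventions and the rename map from the new schema file.

```go
package main

import "fmt"

// validate reports attributes that disappeared between two semantic-convention
// versions without a corresponding rename entry in the new schema file.
func validate(prevAttrs, currAttrs map[string]bool, schemaRenames map[string]string) []string {
	var missing []string
	for name := range prevAttrs {
		if currAttrs[name] {
			continue // attribute still exists, nothing to declare
		}
		if _, declared := schemaRenames[name]; !declared {
			missing = append(missing, name)
		}
	}
	return missing
}

func main() {
	previous := map[string]bool{"example.old_name": true, "http.method": true}
	current := map[string]bool{"http.method": true}
	renames := map[string]string{} // the schema file forgot to declare the rename

	if problems := validate(previous, current, renames); len(problems) > 0 {
		fmt.Println("block the release: schema file is missing entries for", problems)
	}
}
```

A check along these lines could run in CI for the specification repository and block a release when it reports problems, as suggested above.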

@weyert

weyert commented Apr 22, 2021

Schema conversion is a process that can be CPU intensive. SDKs are not expected to implement schema conversion; this is a backend job or a Collector job. So I don't think we need to see how slow schema conversion is in slow languages, because people are just not supposed to do it in slow languages.

This makes it sound like Telemetry Schemas are a Collector-only thing. Does the specification typically discuss Collector-only features? This would exclude a whole group of people who don't or can't use the Collector. In that case, would you expect vendors to implement support for this?

I am worried that in the future there will be a dependency on schemas to do transformations while some SDKs or vendors aren't supporting it.

@tigrannajaryan
Member Author

Schema conversion is a process that can be CPU intensive. SDKs are not expected to implement schema conversion; this is a backend job or a Collector job. So I don't think we need to see how slow schema conversion is in slow languages, because people are just not supposed to do it in slow languages.

This makes it sound like Telemetry Schemas are a Collector-only thing. Does the specification typically discuss Collector-only features? This would exclude a whole group of people who don't or can't use the Collector. In that case, would you expect vendors to implement support for this?

I am worried that in the future there will be a dependency on schemas to do transformations while some SDKs or vendors aren't supporting it.

This was already briefly discussed above.

Schemas are not a Collector-only thing. As described in this OTEP, several parties (SDK, Collector, Backend) can participate in a particular way in a schema-aware system. See the diagrams and the explanation in the text: the Collector is an optional element that can help do schema conversions if the backend is not schema-aware. It is perfectly valid to have a system without the Collector and for the backend to handle the schema conversion.

Don't have a Collector and are using a backend that doesn't know how to handle schemas? That's the worst case, and in that case you are exactly where you are today, no worse.

I am worried that in the future there will be a dependency on schemas to do transformations while some SDKs or vendors aren't supporting it.

Schemas are an optional capability. If you have a backend that is aware of schemas, you get nicer features. If the backend is not schema-aware, you get what you have today; schemas can be fully ignored in that case.

My view is that it is not the SDK's job to do schema conversions. However, nothing prevents us from implementing such capabilities in SDKs in the future if we decide that it is necessary.

@weyert

weyert commented Apr 23, 2021

Thank you for clarifying, @tigrannajaryan. I really like the idea. I can't approve it, but it's worth a 👍 :)

@bogdandrutu
Member

@weyert you can approve :) It will not be green, but it will still show as approved in the list.

@tigrannajaryan tigrannajaryan merged commit 09e285e into open-telemetry:main Apr 26, 2021
@tigrannajaryan tigrannajaryan deleted the feature/tigran/schemas branch April 26, 2021 14:41
tigrannajaryan added a commit to tigrannajaryan/opentelemetry-specification that referenced this pull request Oct 12, 2021
This merges the remaining bits of OTEP0152 to the specification
and links to already existing schema-related sections in the API.

Related to open-telemetry/oteps#152
tigrannajaryan added a commit to tigrannajaryan/opentelemetry-specification that referenced this pull request Oct 12, 2021
This merges the remaining bits of OTEP0152 to the specification
and links to already existing schema-related sections in the API.

The documents are marked as "Experimental", however I would like
to promote file_format_v1.0.0.md as soon as possible since we
already have published such files.

Related to open-telemetry/oteps#152
tigrannajaryan added a commit to tigrannajaryan/opentelemetry-specification that referenced this pull request Oct 12, 2021
This merges the remaining bits of OTEP0152 to the specification
and links to already existing schema-related sections in the API.

The documents are marked as "Experimental", however I would like
to promote file_format_v1.0.0.md to Stable as soon as possible since we
already have published such files.

Related to open-telemetry/oteps#152
tigrannajaryan added a commit to open-telemetry/opentelemetry-specification that referenced this pull request Oct 22, 2021
Add telemetry schemas to the specification

This merges the remaining bits of OTEP0152 to the specification
and links to already existing schema-related sections in the API.

The documents are marked as "Experimental", however I would like
to promote file_format_v1.0.0.md to Stable as soon as possible since we
already have published such files.

Related to open-telemetry/oteps#152
joaopgrassi pushed a commit to dynatrace-oss-contrib/semantic-conventions that referenced this pull request Mar 21, 2024
Add telemetry schemas to the specification

This merges the remaining bits of OTEP0152 to the specification
and links to already existing schema-related sections in the API.

The documents are marked as "Experimental", however I would like
to promote file_format_v1.0.0.md to Stable as soon as possible since we
already have published such files.

Related to open-telemetry/oteps#152
carlosalberto pushed a commit to carlosalberto/oteps that referenced this pull request Oct 23, 2024
Resolves open-telemetry/opentelemetry-specification#1324

I believe changes to semantic conventions and the shape of emitted telemetry
are likely to occur during the lifetime of an instrumentation library.
I do not think we should aim to lock the telemetry schema and disallow changes to it.
Such locking would place a huge limitation on how instrumentation can evolve and
would make it nearly impossible to fix mistakes in the semantic conventions, in the
schema or in the implementation of the instrumentation (which will inevitably happen
sooner or later).

This OTEP introduces the concept of telemetry schemas that allows semantic conventions
and instrumentations to evolve over time without breaking consumers of telemetry.
carlosalberto pushed a commit to carlosalberto/oteps that referenced this pull request Oct 30, 2024
Resolves open-telemetry/opentelemetry-specification#1324

I believe changes to semantic conventions and the shape of emitted telemetry
are likely to occur during the lifetime of an instrumentation library.
I do not think we should aim to lock the telemetry schema and disallow changes to it.
Such locking would place a huge limitation on how instrumentation can evolve and
would make it nearly impossible to fix mistakes in the semantic conventions, in the
schema or in the implementation of the instrumentation (which will inevitably happen
sooner or later).

This OTEP introduces the concept of telemetry schemas that allows semantic conventions
and instrumentations to evolve over time without breaking consumers of telemetry.
carlosalberto pushed a commit to open-telemetry/opentelemetry-specification that referenced this pull request Nov 8, 2024
Resolves #1324

I believe changes to semantic conventions and the shape of emitted telemetry
are likely to occur during the lifetime of an instrumentation library.
I do not think we should aim to lock the telemetry schema and disallow changes to it.
Such locking would place a huge limitation on how instrumentation can evolve and
would make it nearly impossible to fix mistakes in the semantic conventions, in the
schema or in the implementation of the instrumentation (which will inevitably happen
sooner or later).

This OTEP introduces the concept of telemetry schemas that allows semantic conventions
and instrumentations to evolve over time without breaking consumers of telemetry.