Add Instrumentation Scope and Version as labels in Prometheus #2703
Conversation
A comment from the Prometheus WG: we should consider splitting […]. That is orthogonal to this PR, though.
Force-pushed from 518cee5 to 3fc9530.
This PR was marked stale due to lack of activity. It will be closed in 7 days.
The two ideas from the spec SIG today were: […]
The preference at the meeting seemed to be for the first option, so I'm going to update this PR to propose that.
Force-pushed from 9c6dfa2 to 77410fd.
Force-pushed from 77410fd to 6a7866b.
I've updated the description; the proposal now uses the last word of the instrumentation scope.
Edit: based on the examples @arminru and @joaopgrassi posted in the comment thread below, I am no longer sure that name clashes will be rare. So maybe it is not such a good idea after all.
@open-telemetry/specs-metrics-approvers @open-telemetry/technical-committee I will merge this tomorrow unless I see objections.
@carlosalberto Thank you for poking me: I agree with @dashpole that my feedback has indeed been addressed. ❤️
Force-pushed from d69217f to 02d4347.
Changelog updated. No new changes for several days. No objections from anyone. Merging.
Fixes: #2493
Related: #1906
This is a second attempt at #2422.
No longer depends on #2702.

## Changes
### Background: Naming Collisions
OpenTelemetry encourages the use of semantic conventions to make metric naming consistent across instrumentation. For example, if I have two HTTP client libraries in my application, each would produce a metric named `http.client.duration`, but with different meters (e.g. otelmux vs otelhttp). A Prometheus exporter which receives both of these metrics would not be able to serve both histograms. This happens whenever a user combines two libraries that produce the same category of metrics (e.g. HTTP, database, RPC), or that simply happen to use the same name for a metric. Depending on the language, the SDK may fail to create the Prometheus exporter, or may fail to export some or all metrics if the same label keys and values are present in both.
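To make the failure mode concrete, here is a hypothetical sketch in Prometheus exposition format; the sanitized metric name, unit suffix, label, and sample values are all invented for illustration:

```
# Both libraries emit http.client.duration, which sanitizes to the same
# Prometheus metric name. Without a distinguishing label, the exporter
# ends up with two identical series and must drop or reject one of them.
http_client_duration_milliseconds_count{http_method="GET"} 12
http_client_duration_milliseconds_count{http_method="GET"} 7
```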
### Desired User Experience

As a user, I can use a Prometheus exporter with OpenTelemetry without experiencing strange errors or behavior due to naming collisions, and without having to apply transformations to metric names to work around them, except in rare cases.
As a user, I can easily add scope attributes to my metrics in Prometheus by joining with an info-style metric. This is a common pattern in Prometheus: https://grafana.com/blog/2021/08/04/how-to-use-promql-joins-for-more-effective-queries-of-prometheus-metrics-at-scale/.
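As a sketch of that join pattern under this proposal's label names, the PromQL below copies a scope attribute onto a counter by matching on the scope labels; the metric name and the `repository` scope attribute are hypothetical:

```
# The info metric always has the value 1, so multiplying leaves the counter
# unchanged, while group_left() pulls the hypothetical "repository" scope
# attribute onto each matching series.
http_client_duration_milliseconds_count
  * on (opentelemetry_scope_name, opentelemetry_scope_version)
    group_left (repository)
  opentelemetry_scope_info
```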
### Design
Add `opentelemetry_scope_name` and `opentelemetry_scope_version` as labels to all metrics. This ensures that if two libraries produce the same metric points, they don't collide, because the scope name and version labels will differ.

Those labels also serve as "join keys" for adding scope attributes to Prometheus metrics. This is accomplished by introducing an `opentelemetry_scope_info` metric containing the same `opentelemetry_scope_name` and `opentelemetry_scope_version` labels, but also including scope attributes. This also enables the collector's Prometheus receiver to reconstruct the original Instrumentation Scope when receiving the metrics.
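Under this design, the colliding series from the background example stay distinct. Again a hypothetical exposition-format sketch, with scope names, versions, and the `repository` attribute values invented for illustration:

```
# The scope labels disambiguate the two otherwise-identical series.
http_client_duration_milliseconds_count{opentelemetry_scope_name="otelmux",opentelemetry_scope_version="0.33.0"} 12
http_client_duration_milliseconds_count{opentelemetry_scope_name="otelhttp",opentelemetry_scope_version="0.33.0"} 7

# The info-style metric carries the remaining scope attributes and is
# joinable on the same two labels.
opentelemetry_scope_info{opentelemetry_scope_name="otelmux",opentelemetry_scope_version="0.33.0",repository="example.com/otelmux"} 1
opentelemetry_scope_info{opentelemetry_scope_name="otelhttp",opentelemetry_scope_version="0.33.0",repository="example.com/otelhttp"} 1
```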
cc @open-telemetry/wg-prometheus @jmacd @tigrannajaryan @jsuereth