diff --git a/CHANGELOG.md b/CHANGELOG.md index cafe951dd70..14353e4ecb7 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -194,10 +194,10 @@ release. ### Compatibility -- Flexibilie escaping of characters that are discouraged by Prometheus Conventions +- Introduced flexible escaping of characters that are discouraged by Prometheus Conventions in Prometheus exporters. ([#4533](https://github.com/open-telemetry/opentelemetry-specification/pull/4533)) -- Flexibilize addition of unit/type related suffixes in Prometheus exporters. +- Introduced flexible addition of unit/type related suffixes in Prometheus exporters. ([#4533](https://github.com/open-telemetry/opentelemetry-specification/pull/4533)) - Define the configuration option "Translation Strategies" for Prometheus exporters. ([#4533](https://github.com/open-telemetry/opentelemetry-specification/pull/4533)) @@ -611,7 +611,7 @@ release. ### Compatibility -- Clarify prometheus exporter should have `host` and `port` configuration options. +- Clarify Prometheus exporter should have `host` and `port` configuration options. ([#4147](https://github.com/open-telemetry/opentelemetry-specification/pull/4147)) ### Common @@ -793,7 +793,7 @@ release. ([#3945](https://github.com/open-telemetry/opentelemetry-specification/pull/3945)) - Prometheus: represent Prometheus Info, StateSet and Unknown-typed metrics in OTLP. ([#3868](https://github.com/open-telemetry/opentelemetry-specification/pull/3868)) -- Update and reorganize the prometheus sdk exporter specification. +- Update and reorganize the Prometheus SDK exporter specification. ([#3872](https://github.com/open-telemetry/opentelemetry-specification/pull/3872)) ### SDK Configuration @@ -993,7 +993,7 @@ release. - Add optional configuration for Prometheus exporters to promote resource attributes to metric attributes ([#3761](https://github.com/open-telemetry/opentelemetry-specification/pull/3761)) -- Clarifications and flexibility in Exemplar speicification. 
+- Clarifications and flexibility in Exemplar specification. ([#3760](https://github.com/open-telemetry/opentelemetry-specification/pull/3760)) ### Logs @@ -1197,7 +1197,7 @@ release. - No changes. -### Supplemenatary Guidelines +### Supplementary Guidelines - No changes. @@ -1252,7 +1252,7 @@ release. - No changes. -### Supplemenatary Guidelines +### Supplementary Guidelines - No changes. @@ -1313,7 +1313,7 @@ release. - No changes. -### Supplemenatary Guidelines +### Supplementary Guidelines - No changes. @@ -1363,7 +1363,7 @@ release. namespaces. ([#3507](https://github.com/open-telemetry/opentelemetry-specification/pull/3507)) -### Supplemenatary Guidelines +### Supplementary Guidelines - No changes. @@ -1442,7 +1442,7 @@ release. - Add log entries to specification README.md contents. ([#3435](https://github.com/open-telemetry/opentelemetry-specification/pull/3435)) -### Supplemenatary Guidelines +### Supplementary Guidelines - Add guidance to use service-supported propagation formats as default for AWS SDK client calls. ([#3212](https://github.com/open-telemetry/opentelemetry-specification/pull/3212)) @@ -1583,13 +1583,13 @@ release. - Move X-Ray Env Variable propagation to span link instead of parent for AWS Lambda. ([#3166](https://github.com/open-telemetry/opentelemetry-specification/pull/3166)) -- Add heroku resource semantic conventions. +- Add Heroku resource semantic conventions. 
[#3075](https://github.com/open-telemetry/opentelemetry-specification/pull/3075) -- BREAKING: Rename faas.execution to faas.invocation_id +- BREAKING: Rename `faas.execution` to `faas.invocation_id` ([#3209](https://github.com/open-telemetry/opentelemetry-specification/pull/3209)) -- BREAKING: Change faas.max_memory units to Bytes instead of MB +- BREAKING: Change `faas.max_memory` units to Bytes instead of MB ([#3209](https://github.com/open-telemetry/opentelemetry-specification/pull/3209)) -- BREAKING: Expand scope of faas.id to cloud.resource_id +- BREAKING: Expand scope of `faas.id` to `cloud.resource_id` ([#3188](https://github.com/open-telemetry/opentelemetry-specification/pull/3188)) - Add Connect RPC specific conventions ([#3116](https://github.com/open-telemetry/opentelemetry-specification/pull/3116)) @@ -1689,7 +1689,7 @@ release. - Add condition with sum and count for Prometheus summaries ([3059](https://github.com/open-telemetry/opentelemetry-specification/pull/3059)). -- Clarify prometheus unit conversions +- Clarify Prometheus unit conversions ([#3066](https://github.com/open-telemetry/opentelemetry-specification/pull/3066)). - Define conversion mapping from OTel Exponential Histograms to Prometheus Native Histograms. @@ -1932,7 +1932,7 @@ release. ([#2874](https://github.com/open-telemetry/opentelemetry-specification/pull/2874)) - Add `process.paging.faults` metric to semantic conventions ([#2827](https://github.com/open-telemetry/opentelemetry-specification/pull/2827)) -- Define semantic conventions yaml for non-otlp conventions +- Define semantic conventions YAML for non-OTLP conventions ([#2850](https://github.com/open-telemetry/opentelemetry-specification/pull/2850)) - Add more semantic convetion attributes of Apache RocketMQ ([#2881](https://github.com/open-telemetry/opentelemetry-specification/pull/2881)) @@ -1977,7 +1977,7 @@ release. - Changed the default buckets for Explicit Bucket Histogram to better match the official Prometheus clients. 
([#2770](https://github.com/open-telemetry/opentelemetry-specification/pull/2770)). -- Fix OpenMetrics valid label keys, and specify prometheus conversion for metric name. +- Fix OpenMetrics valid label keys, and specify Prometheus conversion for metric name. ([#2788](https://github.com/open-telemetry/opentelemetry-specification/pull/2788)) ### Logs @@ -2230,7 +2230,7 @@ release. ### Common -- Move non-otlp.md to common directory +- Move `non-otlp.md` to common directory ([#2587](https://github.com/open-telemetry/opentelemetry-specification/pull/2587)). ## v1.11.0 (2022-05-04) @@ -2339,7 +2339,7 @@ release. ([#2317](https://github.com/open-telemetry/opentelemetry-specification/pull/2317)). - Clarify that expectations for user callback behavior are documentation REQUIREMENTs. ([#2361](https://github.com/open-telemetry/opentelemetry-specification/pull/2361)). -- Specify how to handle prometheus exemplar timestamp and attributes +- Specify how to handle Prometheus exemplar timestamp and attributes ([#2376](https://github.com/open-telemetry/opentelemetry-specification/pull/2376)) - Clarify that the periodic metric reader is the default metric reader to be paired with push metric exporters (OTLP, stdout, in-memory) @@ -2348,7 +2348,7 @@ release. ([#2380](https://github.com/open-telemetry/opentelemetry-specification/pull/2380)) - Clarify that MetricReader has one-to-one mapping to MeterProvider. ([#2406](https://github.com/open-telemetry/opentelemetry-specification/pull/2406)). -- For prometheus metrics without sums, leave the sum unset +- For Prometheus metrics without sums, leave the sum unset ([#2413](https://github.com/open-telemetry/opentelemetry-specification/pull/2413)) - Specify default configuration for a periodic metric reader that is associated with the stdout metric exporter. @@ -2599,7 +2599,7 @@ release. 
([#1945](https://github.com/open-telemetry/opentelemetry-specification/pull/1945)) - Add "IBM z/Architecture" (`s390x`) to `host.arch` ([#2055](https://github.com/open-telemetry/opentelemetry-specification/pull/2055)) -- BREAKING: Remove db.cassandra.keyspace and db.hbase.namespace, and clarify db.name +- BREAKING: Remove `db.cassandra.keyspace` and `db.hbase.namespace`, and clarify `db.name` ([#1973](https://github.com/open-telemetry/opentelemetry-specification/pull/1973)) - Add AWS App Runner as a cloud platform ([#2004](https://github.com/open-telemetry/opentelemetry-specification/pull/2004)) @@ -2703,7 +2703,7 @@ Added telemetry schemas documents to the specification ([#2008](https://github.c ### OpenTelemetry Protocol - Add environment variables for configuring the OTLP exporter protocol (`grpc`, `http/protobuf`, `http/json`) ([#1880](https://github.com/open-telemetry/opentelemetry-specification/pull/1880)) -- Allow implementations to use their own default for OTLP compression, with `none` denotating no compression +- Allow implementations to use their own default for OTLP compression, with `none` indicating no compression ([#1923](https://github.com/open-telemetry/opentelemetry-specification/pull/1923)) - Clarify OTLP server components MUST support none/gzip compression ([#1955](https://github.com/open-telemetry/opentelemetry-specification/pull/1955)) @@ -2747,7 +2747,7 @@ Added telemetry schemas documents to the specification ([#2008](https://github.c ### Semantic Conventions - Add mobile-related network state: `net.host.connection.type`, `net.host.connection.subtype` & `net.host.carrier.*` [#1647](https://github.com/open-telemetry/opentelemetry-specification/issues/1647) -- Adding alibaba cloud as a cloud provider. +- Adding Alibaba Cloud as a cloud provider. 
([#1831](https://github.com/open-telemetry/opentelemetry-specification/pull/1831)) ### Compatibility @@ -2822,7 +2822,7 @@ Added telemetry schemas documents to the specification ([#2008](https://github.c ### Traces - Add schema_url support to `Tracer`. ([#1666](https://github.com/open-telemetry/opentelemetry-specification/pull/1666)) -- Add Dropped Links Count to non-otlp exporters section ([#1697](https://github.com/open-telemetry/opentelemetry-specification/pull/1697)) +- Add Dropped Links Count to non-OTLP exporters section ([#1697](https://github.com/open-telemetry/opentelemetry-specification/pull/1697)) - Add note about reporting dropped counts for attributes, events, links. ([#1699](https://github.com/open-telemetry/opentelemetry-specification/pull/1699)) ### Metrics @@ -3075,7 +3075,7 @@ New: ([#1066](https://github.com/open-telemetry/opentelemetry-specification/pull/1066)) - Change Status to be consistent with Link and Event ([#1067](https://github.com/open-telemetry/opentelemetry-specification/pull/1067)) -- Clarify env variables in otlp exporter +- Clarify env variables in OTLP exporter ([#975](https://github.com/open-telemetry/opentelemetry-specification/pull/975)) - Add Prometheus exporter environment variables ([#1021](https://github.com/open-telemetry/opentelemetry-specification/pull/1021)) @@ -3319,7 +3319,7 @@ Updates: - [OTEP-0002](oteps/trace/0002-remove-spandata.md): Removed SpanData interface in favor of Span Start and End options. - [OTEP-0003](oteps/metrics/0003-measure-metric-type.md) - Consolidatesd pre-aggregated and raw metrics APIs. + Consolidated pre-aggregated and raw metrics APIs. - [OTEP-0008](oteps/metrics/0008-metric-observer.md) Added Metrics Observers API. 
- [OTEP-0009](oteps/metrics/0009-metric-handles.md) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 5df81c8a1fb..8dc13c3fddc 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -312,7 +312,7 @@ Release Procedure: (e.g., in the last released version rather than Unreleased). 4. Add the changelog entries from `CHANGELOG.md` to the description of the [release PR]( - https://github.com/open-telemetry/opentelemetry-specification/releases) and undraft it. + https://github.com/open-telemetry/opentelemetry-specification/releases) and un-draft it. 5. Once it is approved, confirm the date in the CHANGELOG is up-to-date, and merge it, creating a new release tag, e.g. "v1.50.0", containing the CHANGELOG contents. diff --git a/development/trace/zpages.md b/development/trace/zpages.md index 8e5f8ce2b22..c72b5329fb9 100644 --- a/development/trace/zpages.md +++ b/development/trace/zpages.md @@ -1,3 +1,4 @@ + # zPages ## Table of Contents diff --git a/oteps/0001-telemetry-without-manual-instrumentation.md b/oteps/0001-telemetry-without-manual-instrumentation.md index 1af631d251b..61999a80331 100644 --- a/oteps/0001-telemetry-without-manual-instrumentation.md +++ b/oteps/0001-telemetry-without-manual-instrumentation.md @@ -43,7 +43,7 @@ Without further ado, here are a set of requirements for “official” OpenTelem * Note that this also makes it easy to test against multiple different versions of any given library * A fully pluggable architecture, where plugins can be registered at runtime without requiring changes to the central repo at github.com/open-telemetry * E.g., for ops teams that want to write a plugin for a proprietary piece of legacy software they are unable to recompile -* Augemntation of whitebox instrumentation by blackbox instrumentation (or, perhaps, vice versa). 
That is, not only can the trace context be shared by these different flavors of instrumentation, but even things like in-flight Span objects can be shared and co-modified (e.g., to use runtime interposition to grab local variables and attach them to a manually-instrumented span). +* Augmentation of whitebox instrumentation by blackbox instrumentation (or, perhaps, vice versa). That is, not only can the trace context be shared by these different flavors of instrumentation, but even things like in-flight Span objects can be shared and co-modified (e.g., to use runtime interposition to grab local variables and attach them to a manually-instrumented span). ## Trade-offs and mitigations diff --git a/oteps/0016-named-tracers.md b/oteps/0016-named-tracers.md index d25734f2489..ee3dcd168f0 100644 @@ -1,6 +1,6 @@ # Named Tracers and Meters -_Associate Tracers and Meters with the name and version of the instrumentation library which reports telemetry data by parameterizing the API which the library uses to acquire the Tracer or Meter._ +_Associate Tracers and Meters with the name and version of the instrumentation library that reports telemetry data, by parameterizing the API through which the library acquires the Tracer or Meter._ ## Suggested reading @@ -50,7 +50,7 @@ Meter meter = OpenTelemetry.getMeterProvider().getMeter("io.opentelemetry.contri These factories (`TracerProvider` and `MeterProvider`) replace the global `Tracer` / `Meter` singleton objects as ubiquitous points to request Tracer and Meter instances. - The _name_ used to create a Tracer or Meter must identify the _instrumentation_ libraries (also referred to as _integrations_) and not the library being instrumented. 
These instrumentation libraries could be libraries developed in an OpenTelemetry repository, a 3rd party implementation, or even auto-injected code (see [Open Telemetry Without Manual Instrumentation OTEP](https://github.com/open-telemetry/oteps/blob/main/text/0001-telemetry-without-manual-instrumentation.md)). See also the examples for identifiers at the end. + The _name_ used to create a Tracer or Meter must identify the _instrumentation_ libraries (also referred to as _integrations_) and not the library being instrumented. These instrumentation libraries could be libraries developed in an OpenTelemetry repository, a 3rd party implementation, or even auto-injected code (see [OpenTelemetry Without Manual Instrumentation OTEP](https://github.com/open-telemetry/oteps/blob/main/text/0001-telemetry-without-manual-instrumentation.md)). See also the examples for identifiers at the end. If a library (or application) has instrumentation built-in, it is both the instrumenting and instrumented library and should pass its own name here. In all other cases (and to distinguish them from that case), the distinction between instrumenting and instrumented library is very important. For example, if an HTTP library `com.example.http` is instrumented by either `io.opentelemetry.contrib.examplehttp`, then it is important that the Tracer is not named `com.example.http`, but `io.opentelemetry.contrib.examplehttp` after the actual instrumentation library. If no name (null or empty string) is specified, following the suggestions in ["error handling proposal"](https://github.com/open-telemetry/opentelemetry-specification/pull/153), a "smart default" will be applied and a default Tracer / Meter implementation is returned. 
@@ -68,7 +68,7 @@ Examples (based on existing contribution libraries from OpenTracing and OpenCens * `io.opentracing.contrib.asynchttpclient` * `io.opencensus.contrib.http.servlet` * `io.opencensus.contrib.spring.sleuth.v1x` -* `io.opencesus.contrib.http.jaxrs` +* `io.opencensus.contrib.http.jaxrs` * `github.com/opentracing-contrib/go-amqp` (Go) * `github.com/opentracing-contrib/go-grpc` (Go) * `OpenTracing.Contrib.NetCore.AspNetCore` (.NET) diff --git a/oteps/0035-opentelemetry-protocol.md b/oteps/0035-opentelemetry-protocol.md index 1c277a517e6..3292aeb888a 100644 --- a/oteps/0035-opentelemetry-protocol.md +++ b/oteps/0035-opentelemetry-protocol.md @@ -1,3 +1,4 @@ + # OpenTelemetry Protocol Specification **Author**: Tigran Najaryan, Omnition Inc. diff --git a/oteps/0066-separate-context-propagation.md b/oteps/0066-separate-context-propagation.md index d63362bbfe5..86dae81e4e5 100644 --- a/oteps/0066-separate-context-propagation.md +++ b/oteps/0066-separate-context-propagation.md @@ -96,7 +96,7 @@ When a span is started, a new context is returned, with the new span set as the current span. **`GetSpanPropagator() -> (HTTP_Extractor, HTTP_Injector)`** -When a span is extracted, the extracted value is stored in the context seprately +When a span is extracted, the extracted value is stored in the context separately from the current span. ### Correlations API diff --git a/oteps/0083-component.md b/oteps/0083-component.md index e29e6d90341..34b95919656 100644 --- a/oteps/0083-component.md +++ b/oteps/0083-component.md @@ -50,7 +50,7 @@ Application servers) every Application will have it's own `TracerProvider` and ## Internal details -This proposal affects only the OpenTelemtry protocol, and proposes a way to +This proposal affects only the OpenTelemetry protocol, and proposes a way to represent the telemetry data in a structured way. 
For example, here is the protobuf definition for metrics: metrics: diff --git a/oteps/0122-otlp-http-json.md b/oteps/0122-otlp-http-json.md index 1a655e379cb..f2a42d4d4ec 100644 --- a/oteps/0122-otlp-http-json.md +++ b/oteps/0122-otlp-http-json.md @@ -19,7 +19,7 @@ This is a proposal to add HTTP Transport extension supporting json serialization ## Motivation -Protobuf is a relatively big dependency, which some clients are not willing to take. For example, webjs, iOS/Android (in some scenarios, the size of the installation package is limited, do not want to introduce protobuf dependencies). Plain JSON is a smaller dependency and is built in the standard libraries of many programming languages. +Protobuf is a relatively big dependency, which some clients are not willing to take. For example, WebJS, iOS/Android (in some scenarios, the size of the installation package is limited, do not want to introduce protobuf dependencies). Plain JSON is a smaller dependency and is built in the standard libraries of many programming languages. ## OTLP/HTTP+JSON Protocol Details diff --git a/oteps/0149-exponential-histogram.md b/oteps/0149-exponential-histogram.md index 48a057f9f07..acf2ab3f884 100644 --- a/oteps/0149-exponential-histogram.md +++ b/oteps/0149-exponential-histogram.md @@ -70,7 +70,7 @@ The followings are restrictions of ExponentialBuckets: Merging histograms of different types, or even the same type, but with different parameters remains an issue. There are lengthy discussions in [#226](https://github.com/open-telemetry/opentelemetry-proto/pull/226#issuecomment-776526864) -Some merge method may introduce artifacts (information not present in original data). Generally, splitting a bucket introduces artifacts. For example, when using linear interpolation to split a bucket, we are assumming uniform distribution within the bucket. "Uniform distribution" is information not present in original data. Merging buckets on the other hand, does not introduce artifacts. 
Merging buckets with identical bounds from two histograms is totally artifact free. Merging multiple adjacent buckets in one histogram is also artifact free, but it does reduce the resolution of the histogram. Whether such a merge is "lossy" is arguable. Because of this ambiguity, the term "lossy" is not used in this doc. +Some merge methods may introduce artifacts (information not present in original data). Generally, splitting a bucket introduces artifacts. For example, when using linear interpolation to split a bucket, we are assuming uniform distribution within the bucket. "Uniform distribution" is information not present in original data. Merging buckets on the other hand, does not introduce artifacts. Merging buckets with identical bounds from two histograms is totally artifact free. Merging multiple adjacent buckets in one histogram is also artifact free, but it does reduce the resolution of the histogram. Whether such a merge is "lossy" is arguable. Because of this ambiguity, the term "lossy" is not used in this doc. For exponential histograms, if base1 = base2 ^ N, where N is an integer, the two histograms can be merged without artifacts. Furthermore, we can introduce a series of bases where diff --git a/oteps/0182-otlp-remote-parent.md b/oteps/0182-otlp-remote-parent.md index 5bcfa3b1e50..972a4389ac8 100644 --- a/oteps/0182-otlp-remote-parent.md +++ b/oteps/0182-otlp-remote-parent.md @@ -117,7 +117,7 @@ The first property described by SpanKind reflects whether the Span is a "logical However, the specification stay ambiguous for the `CONSUMER` span kind with respect to the property of the "logical" remote parent. Nevertheless, the proposed field `parent_span_is_remote` has some overlap with that `SpanKind` property. -The specification would require some clearification on the `SpanKind` and its relation to `parent_span_is_remote`. +The specification would require some clarification of `SpanKind` and its relation to `parent_span_is_remote`. 
## Future possibilities diff --git a/oteps/0199-support-elastic-common-schema-in-opentelemetry.md b/oteps/0199-support-elastic-common-schema-in-opentelemetry.md index e497e8009d5..512e7407b3b 100644 --- a/oteps/0199-support-elastic-common-schema-in-opentelemetry.md +++ b/oteps/0199-support-elastic-common-schema-in-opentelemetry.md @@ -42,7 +42,7 @@ Adding the coverage of ECS to OTel would provide guidance to authors of OpenTele In addition to the use case of structured logs, the maturity of ECS for SIEM (Security Information and Event Management) is a great opportunity for OpenTelemetry to expand its scope to the security use cases. -Another significant use case is providing first-class support for Kubernetes application logs, system logs, and application introspection events. We would also like to see support for structured events (e.g. [k8seventsreceiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/k8seventsreceiver)) and using 'content-type' to identify event types. +Another significant use case is providing first-class support for Kubernetes application logs, system logs, and application introspection events. We would also like to see support for structured events (e.g. [`k8seventsreceiver`](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/k8seventsreceiver)) and using 'content-type' to identify event types. We'd like to see different categories of structured logs being well-supported in the [OTel Log Data Model](../specification/logs/data-model.md), presumably through [semantic conventions for log attributes](../specification/logs/data-model.md#field-attributes). For example, NGINX access logs and Apache access logs should be processed the same way as structured logs. This would help in trace and metric correlation with such log data as well as it would help grow the ecosystem of curated UIs provided by observability backends and monitoring dashboards (e.g. 
one single HTTP access log dashboard benefiting Apache httpd, Nginx, and HAProxy). diff --git a/oteps/0202-events-and-logs-api.md b/oteps/0202-events-and-logs-api.md index 189ef9b4eee..2b371a449dc 100644 --- a/oteps/0202-events-and-logs-api.md +++ b/oteps/0202-events-and-logs-api.md @@ -4,7 +4,7 @@ We introduce an Events and Logs API that is based on the OpenTelemetry Log signa ## Motivation -In OpenTelemetry's perspective Log Records and Events are different names for the same concept - however, there is a subtle difference in how they are represented using the underlying data model that is described below. We will describe why the existing Logging APIs are not sufficient for the purpose of creating events. It will then be evident that we will need an API in OpenTelementry for creating events. Note that the Events here refer to standalone Events and not to be confused with Span Events which occur only in the context of a span. +In OpenTelemetry's perspective Log Records and Events are different names for the same concept - however, there is a subtle difference in how they are represented using the underlying data model that is described below. We will describe why the existing Logging APIs are not sufficient for the purpose of creating events. It will then be evident that we will need an API in OpenTelemetry for creating events. Note that the Events here refer to standalone Events and not to be confused with Span Events which occur only in the context of a span. The Logs part of the API introduced here is supposed to be used only by the Log Appenders and end-users should continue to use the logging APIs available in the languages. 
diff --git a/oteps/0225-configuration.md b/oteps/0225-configuration.md index 84e18836294..e0e5653a865 100644 --- a/oteps/0225-configuration.md +++ b/oteps/0225-configuration.md @@ -300,7 +300,7 @@ In choosing to recommend JSON schema, the working group looked at the following * Tooling available for validating CUE files in languages outside of Go were limited. * Familiarity and learning curve would create problems for both users and contributors of OpenTelemetry. * [Protobuf](https://protobuf.dev) - With protobuf already used heavily in OpenTelemetry, the format was worth investigating as an option to define the schema. The working group decided against Protobuf because: - * Validation errors are the result of serlization errors which can be difficult to interpret. + * Validation errors are the result of serialization errors which can be difficult to interpret. * Limitations in the schema definition language result in poor ergonomics if type safety is to be retained. ## Open questions @@ -335,7 +335,7 @@ There are likely more questions related to the final design that will be discuss ### Additional configuration providers -Although the initial proposal for configuration supports only describes in-code and file representations, it's possible additional sources (remote, opamp, ...) for configuration will be desirable. The implementation of the configuration model and components should be extensible to allow for this. +Although the initial proposal for configuration supports only describes in-code and file representations, it's possible additional sources (remote, OpAMP, ...) for configuration will be desirable. The implementation of the configuration model and components should be extensible to allow for this. 
### Integration with auto-instrumentation diff --git a/oteps/0227-separate-semantic-conventions.md b/oteps/0227-separate-semantic-conventions.md index 427c019d0e2..d0bc611b10c 100644 --- a/oteps/0227-separate-semantic-conventions.md +++ b/oteps/0227-separate-semantic-conventions.md @@ -63,7 +63,7 @@ required attribute names and their interaction with the SDK. The following process would be used to ensure semantic conventions are seamlessly moved to their new location. This process lists steps in order: -- A moratorium will be placed on Semantic Convention PRs to the specififcation +- A moratorium will be placed on Semantic Convention PRs to the specification repository. (Caveat that PRs related to this proposal would be allowed). - Interactions between Semantic Conventions and the Specification will be extracted such that the Specification can place requirements on Semantic diff --git a/oteps/0232-maturity-of-otel.md b/oteps/0232-maturity-of-otel.md index 660406fccfc..ac93aab8bcc 100644 --- a/oteps/0232-maturity-of-otel.md +++ b/oteps/0232-maturity-of-otel.md @@ -19,7 +19,7 @@ Deliverables of a SIG MUST have a declared maturity level, established by SIG ma * the Collector core distribution might declare itself stable and include a receiver that is not stable. In that case, the receiver has to be clearly marked as such * the Java Agent might be declared stable, while individual instrumentation packages are not -Components SHOULD NOT be marked as stable if their user-visible interfaces are not stable. For instance, if the Collector's component "otlpreceiver" declares a dependency on the OpenTelemetry Collector API "config" package which is marked with a maturity level of "beta", the "otlpreceiver" should be at most "beta". Maintainers are free to deviate from this recommendation if they believe users are not going to be affected by future changes. +Components SHOULD NOT be marked as stable if their user-visible interfaces are not stable. 
For instance, if the Collector's component `otlpreceiver` declares a dependency on the OpenTelemetry Collector API "config" package which is marked with a maturity level of "beta", the `otlpreceiver` should be at most "beta". Maintainers are free to deviate from this recommendation if they believe users are not going to be affected by future changes. For the purposes of this document, a breaking change is defined as a change that may require consumers of our components to adapt themselves in order to avoid disruption to their usage of our components. diff --git a/oteps/0266-move-oteps-to-spec.md b/oteps/0266-move-oteps-to-spec.md index 2bef2caa34d..4acd721b646 100644 --- a/oteps/0266-move-oteps-to-spec.md +++ b/oteps/0266-move-oteps-to-spec.md @@ -16,7 +16,7 @@ Originally, OTEPs were kept as a separate repository to keep disjoint/disruptive - OTEPs are expected to be directional and subject to change when actually entered into the specification. - OTEPs require more approvals than specification PRs -- OTEPs have different PR worklfows (whether due to accidental omission or conscious decision), e.g. staleness checks, linting. +- OTEPs have different PR workflows (whether due to accidental omission or conscious decision), e.g. staleness checks, linting. As OpenTelemetry is stabilizing, the need for OTEPs to live outside the specification is growing less, and we face challenges like: @@ -55,7 +55,7 @@ aspects of the current OTEP status. OTEPs were originally based on common enhancement proposal processes in other ecosystems, where enhancements live outside core repositories and follow a more rigorous criteria and evaluation. We are finding this problematic for OpenTelemetry for reasons discussed above. Additionally, unlike many other ecosystems where enhancement/design is kept separate from core code, OpenTelemetry *already* keeps its design separate form core code via the Specification vs. implementation repositories. 
Unlike these other OSS projects, our Specification generally requires rigorous discussion, design and prototyping prior to acceptance. Even -after acceptance into the specification, work is still required for improvements to roll out to the ecosystem. Effectively: The OpenTelemetry specification has no such thing as a "small" change: There are only medium changes that appear small, but would be enhancements in other proejcts or large changes that require an OTEP. +after acceptance into the specification, work is still required for improvements to roll out to the ecosystem. Effectively: The OpenTelemetry specification has no such thing as a "small" change: There are only medium changes that appear small, but would be enhancements in other projects or large changes that require an OTEP. ## Open questions diff --git a/oteps/assets/0225-config.yaml b/oteps/assets/0225-config.yaml index febfd3ee861..53ac90199a8 100644 --- a/oteps/assets/0225-config.yaml +++ b/oteps/assets/0225-config.yaml @@ -404,7 +404,7 @@ sdk: # # Environment variable: OTEL_BLRP_MAX_EXPORT_BATCH_SIZE max_export_batch_size: 512 - # Sets the exporter. Exporter must refer to a key in sdk.loger_provider.exporters. + # Sets the exporter. Exporter must refer to a key in sdk.logger_provider.exporters. # # Environment variable: OTEL_LOGS_EXPORTER exporter: otlp diff --git a/oteps/entities/0256-entities-data-model.md b/oteps/entities/0256-entities-data-model.md index 70d0fde0b5f..8fe55806e5b 100644 --- a/oteps/entities/0256-entities-data-model.md +++ b/oteps/entities/0256-entities-data-model.md @@ -611,7 +611,7 @@ virtually identical to what this OTEP proposes. 
There is also an implementation of this design in the Collector; see [completed issue to add entity events](https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/23565) and [the PR](https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/24419) -that implements entity event emitting for k8scluster receiver in the Collector. +that implements entity event emitting for the `k8scluster` receiver in the Collector. ## Alternatives diff --git a/oteps/entities/0264-resource-and-entities.md b/oteps/entities/0264-resource-and-entities.md index ab4eefd6f62..0cd055b79da 100644 --- a/oteps/entities/0264-resource-and-entities.md +++ b/oteps/entities/0264-resource-and-entities.md @@ -44,7 +44,7 @@ It is an expansion on the [previous entity proposal](0256-entities-data-model.md - [Trade-offs and mitigations](#trade-offs-and-mitigations) * [Why don't we download schema url contents?](#why-dont-we-download-schema-url-contents) - [Prior art and alternatives](#prior-art-and-alternatives) -- [Future Posibilities](#future-posibilities) +- [Future Possibilities](#future-possibilities) - [Use Cases](#use-cases) * [SDK - Multiple Detectors of the same Entity type](#sdk---multiple-detectors-of-the-same-entity-type) * [SDK and Collector - Simple coordination](#sdk-and-collector---simple-coordination) @@ -93,7 +93,7 @@ The SDK Resource Provider is responsible for running all configured Resource and - Resource detectors otherwise follow existing merge semantics. - The Specification merge rules will be updated to account for violations prevalent in ALL implementations of resource detection. - Specifically: This means the [rules around merging Resource across schema-url will be dropped](../../specification/resource/sdk.md#merge). Instead, only conflicting attributes will be dropped. - - SchemaURL on Resource will be deprecated with entity-specific schema-url replacing it.
SDKs will only fill out SchemaURL on Resource when SchemaURL matches across all entities discovered. Additionally, only existing stable resource attributes can be used in Resource SchemaURL in stable OpenTelemetry components (Specifially `service.*` and `sdk.*` are the only stabilized resource convnetions). Given prevalent concerns of implementations around Resource merge specification, we suspect impact of this deprecation to be minimal, and existing usage was within the "experimental" phase of semantic conventions. + - SchemaURL on Resource will be deprecated with entity-specific schema-url replacing it. SDKs will only fill out SchemaURL on Resource when SchemaURL matches across all entities discovered. Additionally, only existing stable resource attributes can be used in Resource SchemaURL in stable OpenTelemetry components (Specifically `service.*` and `sdk.*` are the only stabilized resource conventions). Given prevalent concerns of implementations around Resource merge specification, we suspect impact of this deprecation to be minimal, and existing usage was within the "experimental" phase of semantic conventions. - An OOTB ["Env Variable Entity Detector"](#environment-variable-detector) will be specified and provided vs. requiring SDK wide ENV variables for resource detection. - *Additionally, Resource Provider would be responsible for understanding Entity lifecycle events, for Entities whose lifetimes do not match or exceed the SDK's own lifetime (e.g. 
browser session).* @@ -123,7 +123,7 @@ We provide a simple algorithm for this behavior: - For each entity detector `D`, detect entities - For each entity detected, `d'` - If an entity `e'` exists in `E` with same entity type as `d'`, do one of the following: - - If the entity identiy and schema_url are the same, merge the descriptive attributes of `d'` into `e'`: + - If the entity identity and schema_url are the same, merge the descriptive attributes of `d'` into `e'`: - For each descriptive attribute `da'` in `d'` - If `da'.key` does not exist in `e'`, then add `da'` to `e'` - otherwise, ignore. @@ -187,7 +187,7 @@ processor: The list of detectors is given in priority order (first wins, in event of a tie, outside of override configuration). The processor may need to be updated to allow the override flag to apply to each individual detector. -The rules for attributes would follow entity merging rules, as defined for the SDK resource proivder. +The rules for attributes would follow entity merging rules, as defined for the SDK resource provider. Note: While this proposal shows a new processor replacing the `resourcedetection` processor, the details of whether to modify-in-place the existing `resourcedetection` processor or create a new one would be determined as a follow-up to this design. Ideally, we don't want users to need new configuration for resource in the OTel Collector. @@ -196,7 +196,7 @@ Note: While this proposals shows a new processor replacing the `resourcedetectio
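The entity-merge algorithm quoted in the hunk above can be sketched in a few lines of Python. The `Entity` shape and the detector callables below are hypothetical names chosen for illustration; the OTEP does not prescribe a concrete API.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    # Hypothetical shape for illustration; the OTEP defines no concrete API.
    type: str
    schema_url: str
    identity: dict                      # identifying attributes
    descriptive: dict = field(default_factory=dict)

def merge_detected_entities(detectors):
    """Run detectors in priority order (first wins) and merge their entities."""
    merged = {}  # entity type -> Entity; plays the role of E in the OTEP
    for detect in detectors:
        for d in detect():  # each d plays the role of d'
            e = merged.get(d.type)
            if e is None:
                # No entity of this type yet: keep d as-is.
                merged[d.type] = d
            elif e.identity == d.identity and e.schema_url == d.schema_url:
                # Same identity and schema_url: merge descriptive attributes,
                # keeping the value already present on e for duplicate keys.
                for k, v in d.descriptive.items():
                    e.descriptive.setdefault(k, v)
            # Otherwise: conflicting identity/schema_url -> drop d (first wins).
    return list(merged.values())
```

Under this sketch, a later detector can only contribute descriptive attributes that earlier detectors did not set, and an entity with a conflicting identity is dropped entirely, matching the "first wins" priority ordering also described for the Collector processor.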
-- Existing key usage of Resource must remain when using Entities, specifically navigationality (see: [OpenTelemetry Resources: Principles and Characteristics](https://docs.google.com/document/d/1Xd1JP7eNhRpdz1RIBLeA1_4UYPRJaouloAYqldCeNSc/edit)) +- Existing key usage of Resource must remain when using Entities, specifically navigation (see: [OpenTelemetry Resources: Principles and Characteristics](https://docs.google.com/document/d/1Xd1JP7eNhRpdz1RIBLeA1_4UYPRJaouloAYqldCeNSc/edit)) - Downstream components should be able to engage with the Entity model in Resource. The following changes are made: @@ -321,13 +321,13 @@ Today, [Prometheus compatibility](../../specification/compatibility/prometheus_a Here's a list of requirements for the solution: -- Existing prometheus/OpenTelemetry users should be able to migrate from where they are today. -- Any solution MUST work with the [info-typed metrics](https://github.com/prometheus/proposals/blob/main/proposals/0037-native-support-for-info-metrics-metadata.md#goals) being added in prometheus. +- Existing Prometheus/OpenTelemetry users should be able to migrate from where they are today. +- Any solution MUST work with the [info-typed metrics](https://github.com/prometheus/proposals/blob/main/proposals/0037-native-support-for-info-metrics-metadata.md#goals) being added in Prometheus. - Resource descriptive attributes should leverage `info()` or metadata. - Resource identifying attributes need more thought/design from OpenTelemetry semconv + entities WG. - - Note: Current `info()` design will only work with `target_info` metric by default (other info metrics can be specified per `info` call), and `job/instance` labels for joins. These labels MUST be generated by the OTLP endpoint in prometheus. -- (desired) Users should be able to correlate metric timeseries to other signals via Resource attributes showing up as labels in prometheus. 
-- (desired) Conversion from `OTLP -> prometheus` can be reversed such that `OTLP -> Prometheus -> OTLP` is non-lossy. + - Note: Current `info()` design will only work with `target_info` metric by default (other info metrics can be specified per `info` call), and `job/instance` labels for joins. These labels MUST be generated by the OTLP endpoint in Prometheus. +- (desired) Users should be able to correlate metric timeseries to other signals via Resource attributes showing up as labels in Prometheus. +- (desired) Conversion from `OTLP -> Prometheus` can be reversed such that `OTLP -> Prometheus -> OTLP` is non-lossy. Here are a few (non-exhaustive) options for what this could look like: @@ -338,23 +338,23 @@ Here's a few (non-exhaustive) options for what this could look like: - By default all identifying labels on Resource are promoted to resource attributes. - All descriptive labels are placed on `target_info`. - (likely) `job`/`instance` will need to be synthesized for resources lacking a `service` entity. -- Option #3 - Enocde entities into prometheus as info metrics +- Option #3 - Encode entities into Prometheus as info metrics - Create `{entity_type}_entity_info` metrics. - Synthesize `job`/`instance` labels for joins between all `*_info` metrics. - Expand the scope of info-typed metrics work in Prometheus to work with this encoding. - Option #4 - Find solutions leveraging the [metadata design](https://docs.google.com/document/d/1epBslSSwRO2do4armx40fruStJy_PS6thROnPeDifz8/edit#heading=h.5sybau7waq2q) -These designs will be explored and evaluated in light of the requirements. For now, prometheus compatibility will continue with Option #1 as we work together towards building a better future for resource in prometheus. +These designs will be explored and evaluated in light of the requirements. For now, Prometheus compatibility will continue with Option #1 as we work together towards building a better future for resource in Prometheus.
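For reference, the Option #1 behavior that Prometheus compatibility continues with can be sketched roughly as follows: derive `job`/`instance` from the service-related resource attributes and place the remaining resource attributes on a `target_info` metric. This is a simplified, non-normative illustration (the actual exporter specification also covers name escaping and edge cases); the function name is hypothetical.

```python
import re

def _sanitize(name):
    # Prometheus label names must match [a-zA-Z_][a-zA-Z0-9_]*; this sketch
    # only replaces invalid characters and ignores leading-digit edge cases.
    return re.sub(r"[^a-zA-Z0-9_]", "_", name)

def resource_to_target_info(resource_attrs):
    """Sketch of Option #1: job/instance from the service entity, the rest
    becomes labels of a target_info gauge (whose value is always 1)."""
    attrs = dict(resource_attrs)
    name = attrs.pop("service.name", "unknown_service")
    namespace = attrs.pop("service.namespace", None)
    job = f"{namespace}/{name}" if namespace else name
    instance = attrs.pop("service.instance.id", "")
    labels = {"job": job, "instance": instance}
    labels.update({_sanitize(k): str(v) for k, v in attrs.items()})
    return labels
```

A query-side join (e.g. via `info()` or a manual `* on (job, instance) group_left` join) can then re-attach these labels to individual series, which is exactly why the requirements above insist that `job`/`instance` be generated consistently.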
### Should entities have a domain? -Is it worth having a `domain` in addition to type for entity? We could force each entity to exist in one domain and leverage domain generically in resource management. Entity Detectors would be responsible for an entire domain, selecting only ONE to apply a resource. Domains could be layered, e.g. a Cloud-specific domain may layer on top of a Kubernetes domain, where "GKE cluster entity" identifies *which* kubernetes cluster a kuberntes infra entity is part of. This layer would be done naively, via automatic join of participating entities or explicit relationships derived from GKE specific hooks. +Is it worth having a `domain` in addition to type for entity? We could force each entity to exist in one domain and leverage domain generically in resource management. Entity Detectors would be responsible for an entire domain, selecting only ONE to apply to a resource. Domains could be layered, e.g. a Cloud-specific domain may layer on top of a Kubernetes domain, where "GKE cluster entity" identifies *which* Kubernetes cluster a Kubernetes infra entity is part of. This layering would be done naively, via automatic join of participating entities or explicit relationships derived from GKE-specific hooks. It's unclear if this is needed initially, and we believe this could be layered in later. ### Should resources have only one associated entity? -Given the problems leading to the Entities working group, and the needs of existing Resource users today, we think it is infeasible and unscalable to limit resource to only one entity. This would place restrictions on modeling Entities that would require OpenTelemetry to be the sole source of entity definitions and hurt building an open and extensible ecosystem.
Additionally it would need careful definition of solutions for the following problems/rubrics: +Given the problems leading to the Entities working group, and the needs of existing Resource users today, we think it is infeasible and unscalable to limit resource to only one entity. This would place restrictions on modeling Entities that would require OpenTelemetry to be the sole source of entity definitions and hurt building an open and extensible ecosystem. Additionally, it would need careful definition of solutions for the following problems/rubrics: - New entities added by extension should not break existing code - Collector augmentation / enrichment (resource, e.g.) - Should be extensible and not hard-coded. We need a general algorithm, not specific rulesets. @@ -370,7 +370,7 @@ This can be done in follow up design / OTEPs. While we expect the collector to be the first component to start engaging with Entities in an architecture, this could lead to data model violations. We have a few options to deal with this issue: - Consider this a bug and warn users not to do it. -- Specify that missing attribute keys are acceptable for descriptive attribtues. +- Specify that missing attribute keys are acceptable for descriptive attributes. - Specify that missing attribute keys denote that entities are unusable for that batch of telemetry, and treat the content as malformed. ### What about advanced entity interaction in the Collector? @@ -411,7 +411,7 @@ Below is a brief discussion of some design decisions: - **Fully embed Entity in Resource.** This was rejected because it makes it easy/trivial for Resource attributes and Entities to diverge. This would prevent the backwards/forwards compatibility goals and also require all participating OTLP users to leverage entities. Entity should be an opt-in / additional feature that may or may not be engaged with, depending on user need.
- **Re-using resource detection as-is** This was rejected as not having a viable compatibility path forward. Creating a new set of components that can preserve existing behavior while allowing users to adopt the new functionality means that users have better control of when they see / change system behavior, and adoption is more obvious across the ecosystem. -## Future Posibilities +## Future Possibilities This proposal opens the door for addressing issues where an Entity's lifetime does not match an SDK's lifetime, in addition to providing a data model where mutable (descriptive) attributes can be changed over the lifetime of a resource without affecting its identity. We expect a follow-on OTEP which directly handles this issue. @@ -476,7 +476,7 @@ The resulting OTLP from the collector would contain a resource with all of the entities (`process`, `service`, `ec2`, and `host`). This is because the entities are all disjoint. -*Note: this matches today's behavior of existing resource detection and OpenTelmetry collector where all attributes wind up on resource.* +*Note: this matches today's behavior of existing resource detection and OpenTelemetry collector where all attributes wind up on resource.* ### SDK and Collector - Entity coordination with descriptive attributes @@ -589,208 +589,208 @@ Ideally, we'd like a solution where: - Collector - [system](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/resourcedetectionprocessor/internal/system/metadata.yaml) - - host.arch - - host.name - - host.id - - host.ip - - host.mac - - host.cpu.vendor.id - - host.cpu.family - - host.cpu.model.id - - host.cpu.model.name - - host.cpu.stepping - - host.cpu.cache.l2.size - - os.description - - os.type + - `host.arch` + - `host.name` + - `host.id` + - `host.ip` + - `host.mac` + - `host.cpu.vendor.id` + - `host.cpu.family` + - `host.cpu.model.id` + - `host.cpu.model.name` + - `host.cpu.stepping` + - `host.cpu.cache.l2.size` + - `os.description` + 
- `os.type` - [docker](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/resourcedetectionprocessor/internal/docker/metadata.yaml) - - host.name - - os.type + - `host.name` + - `os.type` - [heroku](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/resourcedetectionprocessor/internal/heroku/metadata.yaml) - - cloud.provider - - heroku.app.id - - heroku.dyno.id - - heroku.release.commit - - heroku.release.creation_timestamp - - service.instance.id - - service.name - - service.version + - `cloud.provider` + - `heroku.app.id` + - `heroku.dyno.id` + - `heroku.release.commit` + - `heroku.release.creation_timestamp` + - `service.instance.id` + - `service.name` + - `service.version` - [gcp](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/resourcedetectionprocessor/internal/gcp/metadata.yaml) - gke - - cloud.provider - - cloud.platform - - cloud.account.id - - cloud.region - - cloud.availability_zone - - k8s.cluster.name - - host.id - - host.name + - `cloud.provider` + - `cloud.platform` + - `cloud.account.id` + - `cloud.region` + - `cloud.availability_zone` + - `k8s.cluster.name` + - `host.id` + - `host.name` - gce - - cloud.provider - - cloud.platform - - cloud.account.id - - cloud.region - - cloud.availability_zone - - host.id - - host.name - - host.type - - (optional) gcp.gce.instance.hostname - - (optional) gcp.gce.instance.name + - `cloud.provider` + - `cloud.platform` + - `cloud.account.id` + - `cloud.region` + - `cloud.availability_zone` + - `host.id` + - `host.name` + - `host.type` + - (optional) `gcp.gce.instance.hostname` + - (optional) `gcp.gce.instance.name` - AWS - [ec2](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/resourcedetectionprocessor/internal/aws/ec2/metadata.yaml) - - cloud.provider - - cloud.platform - - cloud.account.id - - cloud.region - - cloud.availability_zone - - host.id - - host.image.id - - 
host.name - - host.type + - `cloud.provider` + - `cloud.platform` + - `cloud.account.id` + - `cloud.region` + - `cloud.availability_zone` + - `host.id` + - `host.image.id` + - `host.name` + - `host.type` - [ecs](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/resourcedetectionprocessor/internal/aws/ecs/metadata.yaml) - - cloud.provider - - cloud.platform - - cloud.account.id - - cloud.region - - cloud.availability_zone - - aws.ecs.cluster.arn - - aws.ecs.task.arn - - aws.ecs.task.family - - aws.ecs.task.id - - aws.ecs.task.revision - - aws.ecs.launchtype (V4 only) - - aws.log.group.names (V4 only) - - aws.log.group.arns (V4 only) - - aws.log.stream.names (V4 only) - - aws.log.stream.arns (V4 only) + - `cloud.provider` + - `cloud.platform` + - `cloud.account.id` + - `cloud.region` + - `cloud.availability_zone` + - `aws.ecs.cluster.arn` + - `aws.ecs.task.arn` + - `aws.ecs.task.family` + - `aws.ecs.task.id` + - `aws.ecs.task.revision` + - `aws.ecs.launchtype` (V4 only) + - `aws.log.group.names` (V4 only) + - `aws.log.group.arns` (V4 only) + - `aws.log.stream.names` (V4 only) + - `aws.log.stream.arns` (V4 only) - [elastic_beanstalk](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/resourcedetectionprocessor/internal/aws/elasticbeanstalk/metadata.yaml) - - cloud.provider - - cloud.platform - - deployment.environment - - service.instance.id - - service.version + - `cloud.provider` + - `cloud.platform` + - `deployment.environment` + - `service.instance.id` + - `service.version` - eks - - cloud.provider - - cloud.platform - - k8s.cluster.name + - `cloud.provider` + - `cloud.platform` + - `k8s.cluster.name` - lambda - - cloud.provider - - cloud.platform - - cloud.region - - faas.name - - faas.version - - faas.instance - - faas.max_memory - - aws.log.group.names - - aws.log.stream.names + - `cloud.provider` + - `cloud.platform` + - `cloud.region` + - `faas.name` + - `faas.version` + - `faas.instance` 
+ - `faas.max_memory` + - `aws.log.group.names` + - `aws.log.stream.names` - Azure - - cloud.provider - - cloud.platform - - cloud.region - - cloud.account.id - - host.id - - host.name - - azure.vm.name - - azure.vm.size - - azure.vm.scaleset.name - - azure.resourcegroup.name + - `cloud.provider` + - `cloud.platform` + - `cloud.region` + - `cloud.account.id` + - `host.id` + - `host.name` + - `azure.vm.name` + - `azure.vm.size` + - `azure.vm.scaleset.name` + - `azure.resourcegroup.name` - Azure aks - - cloud.provider - - cloud.platform - - k8s.cluster.name + - `cloud.provider` + - `cloud.platform` + - `k8s.cluster.name` - Consul - - cloud.region - - host.id - - host.name + - `cloud.region` + - `host.id` + - `host.name` - *exploded consul metadata* - k8s Node - - k8s.node.uid + - `k8s.node.uid` - Openshift - - cloud.provider - - cloud.platform - - cloud.region - - k8s.cluster.name + - `cloud.provider` + - `cloud.platform` + - `cloud.region` + - `k8s.cluster.name` - Java Resource Detection - SDK-Default - - service.name - - telemetry.sdk.version - - telemetry.sdk.language - - telemetry.sdk.name + - `service.name` + - `telemetry.sdk.version` + - `telemetry.sdk.language` + - `telemetry.sdk.name` - [process](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/691de74a4b0539c1329222aefb962c232028032b/instrumentation/resources/library/src/main/java/io/opentelemetry/instrumentation/resources/ProcessResource.java#L60) - - process.pid - - process.command_line - - process.command_args - - process.executable.path + - `process.pid` + - `process.command_line` + - `process.command_args` + - `process.executable.path` - [host](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/instrumentation/resources/library/src/main/java/io/opentelemetry/instrumentation/resources/HostResource.java#L31) - - host.name - - host.arch + - `host.name` + - `host.arch` - 
[container](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/instrumentation/resources/library/src/main/java/io/opentelemetry/instrumentation/resources/ContainerResource.java) - - container.id + - `container.id` - [os](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/instrumentation/resources/library/src/main/java/io/opentelemetry/instrumentation/resources/OsResource.java) - - os.type + - `os.type` - [AWS](https://github.com/open-telemetry/opentelemetry-java-contrib/tree/main/aws-resources) - EC2 - - host.id - - cloud.availability_zone - - host.type - - host.image.id - - cloud.account.id - - cloud.region - - host.name + - `host.id` + - `cloud.availability_zone` + - `host.type` + - `host.image.id` + - `cloud.account.id` + - `cloud.region` + - `host.name` - ECS - - cloud.provider - - cloud.platform - - aws.log.group.names - - aws.log.stream.names + - `cloud.provider` + - `cloud.platform` + - `aws.log.group.names` + - `aws.log.stream.names` - EKS - - cloud.provider - - cloud.platform - - k8s.cluster.name - - container.id + - `cloud.provider` + - `cloud.platform` + - `k8s.cluster.name` + - `container.id` - Lambda - - cloud.platform - - cloud.region - - faas.name - - faas.version + - `cloud.platform` + - `cloud.region` + - `faas.name` + - `faas.version` - [GCP](https://github.com/open-telemetry/opentelemetry-java-contrib/tree/main/gcp-resources) - - cloud.provider - - cloud.platform - - cloud.account.id - - cloud.availability_zone - - cloud.region - - host.id - - host.name - - host.type - - k8s.pod.name - - k8s.namespace.name - - k8s.container.name - - k8s.cluster.name - - faas.name - - faas.instance + - `cloud.provider` + - `cloud.platform` + - `cloud.account.id` + - `cloud.availability_zone` + - `cloud.region` + - `host.id` + - `host.name` + - `host.type` + - `k8s.pod.name` + - `k8s.namespace.name` + - `k8s.container.name` + - `k8s.cluster.name` + - `faas.name` + - `faas.instance` - Go - 
[container](https://github.com/open-telemetry/opentelemetry-go/blob/main/sdk/resource/container.go) - - container.id + - `container.id` - [host](https://github.com/open-telemetry/opentelemetry-go/blob/main/sdk/resource/host_id.go) - - host.id + - `host.id` - [os](https://github.com/open-telemetry/opentelemetry-go/blob/main/sdk/resource/os.go) - - os.name + - `os.name` - [process](https://github.com/open-telemetry/opentelemetry-go/blob/main/sdk/resource/process.go) - - process.pid - - process.executable.name - - process.executable.path - - process.command_line - - process.command_args - - process.owner + - `process.pid` + - `process.executable.name` + - `process.executable.path` + - `process.command_line` + - `process.command_args` + - `process.owner` - [builtin](https://github.com/open-telemetry/opentelemetry-go/blob/main/sdk/resource/builtin.go) - - service.instance.id - - service.name + - `service.instance.id` + - `service.name` - [OTEL operator](https://github.com/open-telemetry/opentelemetry-operator/blob/a1e8f927909b81eb368c0483940e0b90d7fdb057/pkg/instrumentation/sdk_test.go#L752) injected ENV variables - - service.instance.id - - service.name - - service.version - - k8s.namespace.name - - k8s.pod.name - - k8s.node.name - - k8s.container.name + - `service.instance.id` + - `service.name` + - `service.version` + - `k8s.namespace.name` + - `k8s.pod.name` + - `k8s.node.name` + - `k8s.container.name` ### Implications @@ -851,7 +851,7 @@ included". However, this can be refined. Resources today provide a [few key features](https://docs.google.com/document/d/1Xd1JP7eNhRpdz1RIBLeA1_4UYPRJaouloAYqldCeNSc/edit): - They provide identity - Uniquely identifying the origin of the data. -- They provide "navigationality" - allowing users to find the source of the data within their o11y and infrastructure tools. +- They provide "navigation" - allowing users to find the source of the data within their o11y and infrastructure tools. 
- They allow aggregation / slicing of data on interesting domains. A litmus test for what entities to include on resource should be as follows: diff --git a/oteps/experimental/0121-config-service.md b/oteps/experimental/0121-config-service.md index af43645cdce..c6886fc5939 100644 --- a/oteps/experimental/0121-config-service.md +++ b/oteps/experimental/0121-config-service.md @@ -110,7 +110,7 @@ Having a small polling interval (how often we read configs) would mean that conf ## Prior art and alternatives -Jaegar has the option of a Remote sampler, which allows reading from a central configuration, even dynamically with an Adaptive sampler. +Jaeger has the option of a Remote sampler, which allows reading from a central configuration, even dynamically with an Adaptive sampler. The main point of comparison for remote configuration is a push vs. polling mechanism. The benefit of having a mechanism where the configuration service pushes new configs is that it is less work for the user, who does not need to set up a configuration service. There is also no load associated with polling the configuration service in the instrumented application, which would keep the OpenTelemetry SDK more lightweight. diff --git a/oteps/logs/0097-log-data-model.md b/oteps/logs/0097-log-data-model.md index b1c7e3b8b4d..f88bbcd5dc1 100644 --- a/oteps/logs/0097-log-data-model.md +++ b/oteps/logs/0097-log-data-model.md @@ -610,31 +610,31 @@ this data model. FACILITY enum Describes where the event originated. A predefined list of Unix processes. Part of event source identity. Example: `mail system` - Attributes["syslog.facility"] + `Attributes["syslog.facility"]` VERSION number Meta: protocol version, orthogonal to the event. - Attributes["syslog.version"] + `Attributes["syslog.version"]` HOSTNAME string Describes the location where the event originated. Possible values are FQDN, IP address, etc.
- Resource["host.hostname"] + `Resource["host.hostname"]` APP-NAME string User-defined app name. Part of event source identity. - Resource["service.name"] + `Resource["service.name"]` PROCID string Not well defined. May be used as a meta field for protocol operation purposes or may be part of event source identity. - Attributes["syslog.procid"] + `Attributes["syslog.procid"]` MSGID @@ -645,15 +645,15 @@ this data model. STRUCTURED-DATA array of maps of string to string - A variety of use cases depending on the SDID: + A variety of use cases depending on the SD-ID: Can describe event source identity Can include data that describes particular occurrence of the event. Can be meta-information, e.g. quality of timestamp value. - SDID origin.swVersion map to Resource["service.version"] + SD-ID origin.swVersion map to `Resource["service.version"]` -SDID origin.ip map to attribute[net.host.ip"] +SD-ID origin.ip map to `Attributes["net.host.ip"]` -Rest of SDIDs -> Attributes["syslog.*"] +Rest of SD-IDs -> `Attributes["syslog.*"]` MSG @@ -688,7 +688,7 @@ Rest of SDIDs -> Attributes["syslog.*"] Computer string The name of the computer on which the event occurred. - Resource["host.hostname"] + `Resource["host.hostname"]` EventID @@ -706,7 +706,7 @@ Rest of SDIDs -> Attributes["syslog.*"] Rest of the fields. any All other fields in the event. - Attributes["winlog.*"] + `Attributes["winlog.*"]` @@ -734,8 +734,8 @@ Rest of SDIDs -> Attributes["syslog.*"] Category enum - Describes where the event originated and why. SignalFx specific concept. Example: AGENT. If this attribute is not present on the SignalFx Event, it should be set to the null attribute value in the LogRecord -- this will allow unambigous identification of SignalFx events when they are represented as LogRecords. - Attributes["com.splunk.signalfx.event_category"] + Describes where the event originated and why. SignalFx specific concept. Example: AGENT. 
If this attribute is not present on the SignalFx Event, it should be set to the null attribute value in the LogRecord -- this will allow unambiguous identification of SignalFx events when they are represented as LogRecords. + `Attributes["com.splunk.signalfx.event_category"]` Dimensions @@ -747,7 +747,7 @@ Rest of SDIDs -> Attributes["syslog.*"] Properties map of string to any Additional information about the specific event occurrence. Unlike Dimensions which are fixed for a particular event source, Properties can have different values for each occurrence of the event coming from the same event source. In SignalFx, event Properties are considered additional metadata about an event and do not factor into the identity of an Event Time Series (ETS). - Attributes["com.splunk.signalfx.event_properties"] + `Attributes["com.splunk.signalfx.event_properties"]` @@ -770,19 +770,19 @@ Rest of SDIDs -> Attributes["syslog.*"] host string The host value to assign to the event data. This is typically the host name of the client that you are sending data from. - Resource["host.hostname"] + `Resource["host.hostname"]` source string The source value to assign to the event data. For example, if you are sending data from an app you are developing, you could set this key to the name of the app. - Resource["service.name"] + `Resource["service.name"]` sourcetype string The sourcetype value to assign to the event data. - Attributes["source.type"] + `Attributes["source.type"]` event @@ -900,37 +900,37 @@ Rest of SDIDs -> Attributes["syslog.*"] %a string Client IP - Attributes["net.peer.ip"] + `Attributes["net.peer.ip"]` %A string Server IP - Attributes["net.host.ip"] + `Attributes["net.host.ip"]` %h string Remote hostname. - Attributes["net.peer.name"] + `Attributes["net.peer.name"]` %m string The request method. - Attributes["http.method"] + `Attributes["http.method"]` %v,%p,%U,%q string Multiple fields that can be composed into URL. 
- Attributes["http.url"] + `Attributes["http.url"]` %>s string Response status. - Attributes["http.status_code"] + `Attributes["http.status_code"]` All other fields @@ -959,19 +959,19 @@ Rest of SDIDs -> Attributes["syslog.*"] eventSource string The service that the request was made to. This name is typically a short form of the service name without spaces plus .amazonaws.com. - Resource["service.name"]? + `Resource["service.name"]`? awsRegion string The AWS region that the request was made to, such as us-east-2. - Resource["cloud.region"] + `Resource["cloud.region"]` sourceIPAddress string The IP address that the request was made from. - Resource["net.peer.ip"] or Resource["net.host.ip"]? TBD + `Resource["net.peer.ip"]` or `Resource["net.host.ip"]`? TBD errorCode @@ -989,7 +989,7 @@ Rest of SDIDs -> Attributes["syslog.*"] All other fields * - Attributes["cloudtrail.*"] + `Attributes["cloudtrail.*"]` @@ -1007,7 +1007,7 @@ Rest of SDIDs -> Attributes["syslog.*"] | trace | string | The trace associated with the log entry, if any. | TraceId | | span_id | string | The span ID within the trace associated with the log entry. | SpanId | | labels | map | A set of user-defined (key, value) data that provides additional information about the log entry. | Attributes | -| All other fields | | | Attributes["google.*"] | +| All other fields | | | `Attributes["google.*"]` | ## Elastic Common Schema @@ -1070,37 +1070,37 @@ Rest of SDIDs -> Attributes["syslog.*"] agent.name string Name given to the agent - resource["telemetry.sdk.name"] + `Resource["telemetry.sdk.name"]` agent.type string Type of agent - resource["telemetry.sdk.language"] + `Resource["telemetry.sdk.language"]` agent.version string Version of agent - resource["telemetry.sdk.version"] + `Resource["telemetry.sdk.version"]` source.ip, client.ip string The IP address that the request was made from. 
- attributes["net.peer.ip"] or attributes["net.host.ip"] + `Attributes["net.peer.ip"]` or `Attributes["net.host.ip"]` cloud.account.id string ID of the account in the given cloud - resource["cloud.account.id"] + `Resource["cloud.account.id"]` cloud.availability_zone string Availability zone in which this host is running. - resource["cloud.zone"] + `Resource["cloud.zone"]` cloud.instance.id @@ -1124,31 +1124,31 @@ Rest of SDIDs -> Attributes["syslog.*"] cloud.provider string Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. - resource["cloud.provider"] + `Resource["cloud.provider"]` cloud.region string Region in which this host is running. - resource["cloud.region"] + `Resource["cloud.region"]` cloud.image.id* string - resource["host.image.name"] + `Resource["host.image.name"]` container.id string Unique container id - resource["container.id"] + `Resource["container.id"]` container.image.name string Name of the image the container was built on. - resource["container.image.name"] + `Resource["container.image.name"]` container.image.tag @@ -1166,7 +1166,7 @@ Rest of SDIDs -> Attributes["syslog.*"] container.name string Container name. - resource["container.name"] + `Resource["container.name"]` container.runtime @@ -1178,31 +1178,31 @@ Rest of SDIDs -> Attributes["syslog.*"] destination.address string Destination address for the event - attributes["destination.address"] + `Attributes["destination.address"]` error.code string Error code describing the error. - attributes["error.code"] + `Attributes["error.code"]` error.id string Unique identifier for the error. - attributes["error.id"] + `Attributes["error.id"]` error.message string Error message. - attributes["error.message"] + `Attributes["error.message"]` error.stack_trace string The stack trace of this error in plain text. 
- attributes["error.stack_trace] + `Attributes["error.stack_trace"]` host.architecture @@ -1226,26 +1226,26 @@ For example, on Windows this could be the host’s Active Directory domain or Ne It normally contains what the hostname command returns on the host machine. -resource["host.hostname"] +`Resource["host.hostname"]` host.id string Unique host id. - resource["host.id"] + `Resource["host.id"]` host.ip Array of string Host IP - resource["host.ip"] + `Resource["host.ip"]` host.mac array of string MAC addresses of the host - resource["host.mac"] + `Resource["host.mac"]` host.name @@ -1254,14 +1254,14 @@ It normally contains what the hostname command returns on the host machine. It may contain what hostname returns on Unix systems, the fully qualified, or a name specified by the user. -resource["host.name"] +`Resource["host.name"]` host.type string Type of host. - resource["host.type"] + `Resource["host.type"]` host.uptime @@ -1287,19 +1287,19 @@ It may contain what hostname returns on Unix systems, the fully qualified, or a service.name string Name of the service data is collected from. - resource["service.name"] + `Resource["service.name"]` service.node.name string Specific node serving that service - resource["service.instance.id"] + `Resource["service.instance.id"]` service.state string Current state of the service. - attributes["service.state"] + `Attributes["service.state"]` service.type @@ -1311,7 +1311,7 @@ It may contain what hostname returns on Unix systems, the fully qualified, or a service.version string Version of the service the data was collected from.
- resource["service.version"] + `Resource["service.version"]` diff --git a/oteps/metrics/0088-metric-instrument-optional-refinements.md b/oteps/metrics/0088-metric-instrument-optional-refinements.md index c96f573a3c2..6e3dd5aab85 100644 --- a/oteps/metrics/0088-metric-instrument-optional-refinements.md +++ b/oteps/metrics/0088-metric-instrument-optional-refinements.md @@ -246,7 +246,7 @@ properties, it is natural to compute the sum. When performing spatial aggregation over data without additive properties, it is natural to combine the distributions. The distinction is about how we interpret the values when aggregating. -Use one of the sum-only refinments to report a sum in the default +Use one of the sum-only refinements to report a sum in the default configuration, otherwise use one of the non-sum-only instruments to report a distribution. @@ -260,7 +260,7 @@ to type names like `Int64Measure` and `Float64Measure`. A language with support for unsigned integer types may wish to create dedicated instruments to report these values, leading to type names like `UnsignedInt64Observer` and `UnsignedFloat64Observer`. These -would naturally apply a non-negative refinment. +would naturally apply a non-negative refinement. Other uses for built-in type refinements involve the type for duration measurements. For example, where there is built-in type for the @@ -280,7 +280,7 @@ instrument. There are a total of twelve hypothetical instruments listed in the table below, of which only one has been standardized. Hypothetical future instrument names are _italicized_. -| Foundation instrument | Sum-only? | Precomputed-sum? | Non-negative? | Non-negative-rate? | Instrument name _(hyptothetical)_ | +| Foundation instrument | Sum-only? | Precomputed-sum? | Non-negative? | Non-negative-rate? 
| Instrument name _(hypothetical)_ | | --- | ---- | ---- | ---- | --- | --- | | Measure | sum-only | | non-negative | non-negative-rate | Counter | | Measure | sum-only | precomputed-sum | | non-negative-rate | _CumulativeCounter_ | @@ -296,7 +296,7 @@ Hypothetical future instrument names are _italicized_. | Observer | | | | | _LastValueObserver_ | To arrive at this listing, several assumptions have been made. For -example, the precomputed-sum and non-negative-rate refeinments are +example, the precomputed-sum and non-negative-rate refinements are only applicable in conjunction with a sum-only refinement. For the precomputed-sum instruments, we technically do not care @@ -352,7 +352,7 @@ one of the additive methods (`Inc`, `Dec`, `Add`, and `Sub`). If we restrict Prometheus Gauges to support only a `Set` method, or to support only the additive methods, then we can model these two -instruments seprately, in a way that is compatible with OpenTelemetry. +instruments separately, in a way that is compatible with OpenTelemetry. A Prometheus Gauge that is used exclusively with `Set()` can be modeled as a Measure instrument with Last Value aggregation. A Prometheus Gauge that is used exclusively with the additive methods be @@ -464,7 +464,7 @@ We have identified important cases that should be standardized: Observer refinements that could be standardized in the future: -- _UpDownCumulativeObserver_: Observe a non-monotonic cumluative counter +- _UpDownCumulativeObserver_: Observe a non-monotonic cumulative counter - _UpDownDeltaObserver_: Observe positive and negative deltas - _AbsoluteLastValueObserver_: Observe non-negative current values. 
diff --git a/oteps/metrics/0090-remove-labelset-from-metrics-api.md b/oteps/metrics/0090-remove-labelset-from-metrics-api.md index aedeef3bea4..e6070d2d8fb 100644 --- a/oteps/metrics/0090-remove-labelset-from-metrics-api.md +++ b/oteps/metrics/0090-remove-labelset-from-metrics-api.md @@ -20,7 +20,7 @@ cleaner code and OpenTelemetry needs to address them as well, so this means that it is important for OpenTelemetry to support record APIs where users can pass directly the labels. -OpenTelementry can always add this optimization later (backwards compatible +OpenTelemetry can always add this optimization later (backwards compatible change) if we determine that it is very important to have. ## Trade-offs and mitigations diff --git a/oteps/metrics/0098-metric-instruments-explained.md b/oteps/metrics/0098-metric-instruments-explained.md index b6429cd41e1..5b0f77da66d 100644 --- a/oteps/metrics/0098-metric-instruments-explained.md +++ b/oteps/metrics/0098-metric-instruments-explained.md @@ -205,7 +205,7 @@ This proposal continues to specify the use of MinMaxSumCount for these two instr There has been a question about labeling `ValueObserver` measurements with the temporal quality Delta vs. Instantaneous. There is a related question: What does it mean aggregate a Min and Max value for an asynchronous instrument, which may only produce one measurement per collection interval? -The purpose of defining the default aggregation, when there is only one measurement per interval, is to specify how values will be aggregated across multiple collection intervals. When there is no aggregation being applied, the result of MinMaxSumCount aggregation for a single collection interval is a single measurement equal to the Min, the Max, and the Sum, as well as a Count equal to 1. Before we apply aggregation to a `ValueObserver` measurement, we can clearly define it as an Intantaneous measurement. 
A measurement, captured at an instant near the end of the collection interval, is neither a cumulative nor a delta with respect to the prior collection interval. +The purpose of defining the default aggregation, when there is only one measurement per interval, is to specify how values will be aggregated across multiple collection intervals. When there is no aggregation being applied, the result of MinMaxSumCount aggregation for a single collection interval is a single measurement equal to the Min, the Max, and the Sum, as well as a Count equal to 1. Before we apply aggregation to a `ValueObserver` measurement, we can clearly define it as an Instantaneous measurement. A measurement, captured at an instant near the end of the collection interval, is neither a cumulative nor a delta with respect to the prior collection interval. [OTEP 88][otep-88] discusses the Last Value relationship to help address this question. After capturing a single `ValueObserver` measurement for a given instrument and label set, that measurement becomes the Last value associated with that instrument until the next measurement is taken. @@ -224,7 +224,7 @@ Aggregating `ValueObserver` measurements across both spatial and time dimensions 3. For each distinct label set and timestamp, compute the spatial aggregation using the last-value definition at that timestamp. This results in a set of timestamped aggregate measurements with comparable counts. 4. Aggregate the timestamped measurements from step 3. -Steps 2 and 3 ensure that measurements taken less frequently have equal representation in the output, by virtue of computing the spatial aggregation first. If we were to compute the temporal aggregation first, then aggreagate across spatial dimensions, then instruments collected at a higher frequency will contribute correspondingly more points to the aggregation. Thus, we must aggregate across `ValueObserver` instruments across spatial dimensions before averaging across time. 
+Steps 2 and 3 ensure that measurements taken less frequently have equal representation in the output, by virtue of computing the spatial aggregation first. If we were to compute the temporal aggregation first, then aggregate across spatial dimensions, then instruments collected at a higher frequency will contribute correspondingly more points to the aggregation. Thus, we must aggregate across `ValueObserver` instruments across spatial dimensions before averaging across time. ## Open Questions @@ -242,7 +242,7 @@ This may be revisited in the future. A cumulative measurement can be converted into delta measurement by remembering the last-reported value. A helper instrument could offer to emulate synchronous cumulative measurements by remembering the last-reported value and reporting deltas synchronously. -A delta measurement can be converted into a cumluative measurement by remembering the sum of all reported values. A helper instrument could offer to emulate asynchronous delta measurements in this way. +A delta measurement can be converted into a cumulative measurement by remembering the sum of all reported values. A helper instrument could offer to emulate asynchronous delta measurements in this way. Should helpers of this nature be standardized, if there is demand? These helpers are excluded from the standard because they carry a number of caveats, but as helpers they can easily do what an OpenTelemetry SDK cannot do in general. For example, we are avoiding synchronous cumulative instruments because they seem to imply ordering that an SDK is not required to support, however an instrument helper that itself uses a lock can easily convert to deltas.
diff --git a/oteps/profiles/0212-profiling-vision.md b/oteps/profiles/0212-profiling-vision.md index 8ce90303444..2cadd225986 100644 --- a/oteps/profiles/0212-profiling-vision.md +++ b/oteps/profiles/0212-profiling-vision.md @@ -151,7 +151,7 @@ However, the output of OpenTelemetry's standardization effort must take into account that some existing profilers are designed to be low overhead and high performance. For example, they may operate in a whole-datacenter, always-on manner, and/or in environments where they must guarantee low CPU/RAM/network -usage. The OpenTelemetry standardisation effort should take this into account +usage. The OpenTelemetry standardization effort should take this into account and strive to produce a format that is usable by profilers of this nature without sacrificing their performance guarantees. diff --git a/oteps/profiles/0239-profiles-data-model.md b/oteps/profiles/0239-profiles-data-model.md index dff615e8a20..a4a0458e81f 100644 --- a/oteps/profiles/0239-profiles-data-model.md +++ b/oteps/profiles/0239-profiles-data-model.md @@ -1129,7 +1129,7 @@ Offset in the binary that corresponds to the first mapped address. The object this entry is loaded from. This can be a filename on disk for the main binary and shared libraries, or virtual -abstractions like "[vdso]". +abstractions like "[vDSO]". ##### Field `build_id` diff --git a/oteps/trace/0002-remove-spandata.md b/oteps/trace/0002-remove-spandata.md index 1a7c9b5b61e..75e5727cdd4 100644 --- a/oteps/trace/0002-remove-spandata.md +++ b/oteps/trace/0002-remove-spandata.md @@ -12,7 +12,7 @@ SpanData has a couple of use cases. The first use case revolves around creating a span synchronously but needing to change the start time to a more accurate timestamp. For example, in an HTTP server, you might record the time the first byte was received, parse the headers, determine the span name, and then create the span. 
The moment the span was created isn't representative of when the request actually began, so the time the first byte was received would become the span's start time. Since the current API doesn't allow start timestamps, you'd need to create a SpanData object. The big downside is that you don't end up with an active span object. -The second use case comes from the need to construct and report out of band spans, meaning that you're creating "custom" spans for an operation you don't own. One good example of this is a span sink that takes in structured logs that contain correlation IDs and a duration (e.g. from splunk) and converts them to spans for your tracing system. Another example is running a sidecar on an HAProxy machine, tailing the request logs, and creating spans. SpanData allows you to report the out of band reporting case, whereas you can’t with the current Span API as you cannot set the start and end timestamp. +The second use case comes from the need to construct and report out of band spans, meaning that you're creating "custom" spans for an operation you don't own. One good example of this is a span sink that takes in structured logs that contain correlation IDs and a duration (e.g. from Splunk) and converts them to spans for your tracing system. Another example is running a sidecar on an HAProxy machine, tailing the request logs, and creating spans. SpanData allows you to report the out of band reporting case, whereas you can’t with the current Span API as you cannot set the start and end timestamp. I'd like to propose getting rid of SpanData and `tracer.recordSpanData()` and replacing it by allowing `tracer.startSpan()` to accept a start timestamp option and `span.end()` to accept end timestamp option. This reduces the API surface, consolidating on a single span type. Options would meet the requirements for out of band reporting. 
diff --git a/oteps/trace/0168-sampling-propagation.md b/oteps/trace/0168-sampling-propagation.md index 723d10b8da9..85444efbb09 100644 --- a/oteps/trace/0168-sampling-propagation.md +++ b/oteps/trace/0168-sampling-propagation.md @@ -206,7 +206,7 @@ where `PP` are two bytes of base16 p-value and `RR` are two bytes of base16 r-value. These values are omitted when they are unknown. This proposal should be taken as a recommendation and will be modified -to [match whatever format OpenTelemtry specifies for its +to [match whatever format OpenTelemetry specifies for its `tracestate`](https://github.com/open-telemetry/opentelemetry-specification/pull/1852). The choice of base16 encoding is therefore just a recommendation, chosen because `traceparent` uses base16 encoding. @@ -362,7 +362,7 @@ probability is unknown: The inputs are recognized as out-of-range as follows: -| Range invariate | Remedy | +| Range invariant | Remedy | | -- | -- | | `p < 0` | drop `p` from tracestate | | `p > 63` | drop `p` from tracestate | diff --git a/oteps/trace/0170-sampling-probability.md b/oteps/trace/0170-sampling-probability.md index 38e1e05a2cf..d7b66556302 100644 --- a/oteps/trace/0170-sampling-probability.md +++ b/oteps/trace/0170-sampling-probability.md @@ -155,7 +155,7 @@ interval, begin weighted sampling using the adjusted count of each span as input weight. This processor drops spans when the configured rate threshold is -exceeeded, otherwise it passes spans through with unmodifed adjusted +exceeded, otherwise it passes spans through with unmodified adjusted counts. When the interval expires and the sample frame is considered complete, @@ -293,7 +293,7 @@ subpopulations being counted. For example, although the estimates for the rate of spans by distinct name drawn from a one-minute sample may have high variance, combining an hour of one-minute sample frames into an aggregate data set is guaranteed to lower variance (assuming the -numebr of span names stays fixed). 
It must, because the data remains +number of span names stays fixed). It must, because the data remains unbiased, so more data results in lower variance. ### Conveying the sampling probability @@ -319,7 +319,7 @@ way to understand sampling because larger numbers mean greater representivity. Note that it is possible, given this description, to produce adjusted -counts that are not integers. Adjusted counts are an approximatation, +counts that are not integers. Adjusted counts are an approximation, and the expected value of an integer can be a fractional count. Floating-point adjusted counts can be avoided with the use of integer-reciprocal inclusion probabilities. @@ -815,7 +815,7 @@ of composition we assign unknowns `p=64`, which is 1 beyond the range of the 6-bit that represent known p-values. The assignment of `p=64` simplifies the formulas below . -By following these simple rules, any numher of consistent probability +By following these simple rules, any number of consistent probability samplers and non-probability samplers can be combined. Starting with `p=64` representing unknown and `sampled=false`, update the composite p-value to the minimum value of the prior composite p-value and the diff --git a/specification/compatibility/opencensus.md b/specification/compatibility/opencensus.md index fbe3c027d06..18b87ec6998 100644 --- a/specification/compatibility/opencensus.md +++ b/specification/compatibility/opencensus.md @@ -63,7 +63,7 @@ Libraries which want a simple migration can choose to replace instrumentation in Starting with a library using OpenCensus Instrumentation: -1. Annouce to users the library's transition from OpenCensus to OpenTelemetry, and recommend users adopt OC bridges. +1. Announce to users the library's transition from OpenCensus to OpenTelemetry, and recommend users adopt OC bridges. 2. Change unit tests to use the OC bridges, and use OpenTelemetry unit testing frameworks. 3. 
After a notification period, migrate instrumentation line-by-line to OpenTelemetry. The notification period should be long for popular libraries. 4. Remove the OC bridge from unit tests. @@ -195,11 +195,11 @@ using the OpenCensus <-> OpenTelemetry bridge. ## OpenCensus Binary Context Propagation -The shim will provide an OpenCensus `BinaryPropogator` implementation which -maps [OpenCenus binary trace context format](https://github.com/census-instrumentation/opencensus-specs/blob/master/encodings/BinaryEncoding.md#trace-context) to an OpenTelemetry +The shim will provide an OpenCensus `BinaryPropagator` implementation which +maps [OpenCensus binary trace context format](https://github.com/census-instrumentation/opencensus-specs/blob/master/encodings/BinaryEncoding.md#trace-context) to an OpenTelemetry [SpanContext](../overview.md#spancontext). -This adapter MUST provide an implementation of OpenCensus `BinaryPropogator` to +This adapter MUST provide an implementation of OpenCensus `BinaryPropagator` to write OpenCensus binary format using OpenTelemetry's context. This implementation may be drawn from OpenCensus if applicable. diff --git a/specification/compatibility/opentracing.md b/specification/compatibility/opentracing.md index d2b81a5d03d..5a095c1133a 100644 --- a/specification/compatibility/opentracing.md +++ b/specification/compatibility/opentracing.md @@ -142,7 +142,7 @@ set to `follows_from` or `child_of`. If a list of `Span` references is specified, the union of their `Baggage` values MUST be used as the initial `Baggage` of the newly created `Span`. It is unspecified which `Baggage` value is used in the case of -repeated keys. If no such lisf of references is specified, the current +repeated keys. If no such list of references is specified, the current `Baggage` MUST be used as the initial value of the newly created `Span`. 
If an initial set of tags is specified, the values MUST be set at diff --git a/specification/compatibility/prometheus_and_openmetrics.md b/specification/compatibility/prometheus_and_openmetrics.md index 5a8c2ab5ffe..72f525f0b89 100644 --- a/specification/compatibility/prometheus_and_openmetrics.md +++ b/specification/compatibility/prometheus_and_openmetrics.md @@ -258,7 +258,7 @@ scope_metrics: Metrics which do not have any label with `otel_scope_` prefix MUST be assigned an instrumentation scope identifying the entity performing the translation from Prometheus to OpenTelemetry (e.g. the collector's -prometheus receiver). +Prometheus receiver). ### Resource Attributes diff --git a/specification/configuration/sdk-environment-variables.md b/specification/configuration/sdk-environment-variables.md index cf34ca6c5a9..be202a43f43 100644 --- a/specification/configuration/sdk-environment-variables.md +++ b/specification/configuration/sdk-environment-variables.md @@ -240,9 +240,9 @@ We define environment variables for setting one or more exporters per signal. | Name | Description | Default | Type | |-----------------------|-----------------------------|---------|----------| -| OTEL_TRACES_EXPORTER | Trace exporter to be used | "otlp" | [Enum][] | -| OTEL_METRICS_EXPORTER | Metrics exporter to be used | "otlp" | [Enum][] | -| OTEL_LOGS_EXPORTER | Logs exporter to be used | "otlp" | [Enum][] | +| OTEL_TRACES_EXPORTER | Trace exporter to be used | `otlp` | [Enum][] | +| OTEL_METRICS_EXPORTER | Metrics exporter to be used | `otlp` | [Enum][] | +| OTEL_LOGS_EXPORTER | Logs exporter to be used | `otlp` | [Enum][] | The implementation MAY accept a comma-separated list to enable setting multiple exporters. 
diff --git a/specification/entities/data-model.md b/specification/entities/data-model.md index 907c8454b8a..06630da065c 100644 --- a/specification/entities/data-model.md +++ b/specification/entities/data-model.md @@ -235,7 +235,7 @@ _Note: These examples MAY diverge from semantic conventions._ service.instance.id
service.name
- service.namesapce + service.namespace service.version diff --git a/specification/logs/data-model-appendix.md b/specification/logs/data-model-appendix.md index adfcc3bdb7e..fd6c9658bbc 100644 --- a/specification/logs/data-model-appendix.md +++ b/specification/logs/data-model-appendix.md @@ -53,46 +53,46 @@ this data model. FACILITY enum Describes where the event originated. A predefined list of Unix processes. Part of event source identity. Example: mail system - Attributes["syslog.facility"] + `Attributes["syslog.facility"]` VERSION number Meta: protocol version, orthogonal to the event. - Attributes["syslog.version"] + `Attributes["syslog.version"]` HOSTNAME string Describes the location where the event originated. Possible values are FQDN, IP address, etc. - Resource["host.name"] + `Resource["host.name"]` APP-NAME string User-defined app name. Part of event source identity. - Resource["service.name"] + `Resource["service.name"]` PROCID string Not well defined. May be used as a meta field for protocol operation purposes or may be part of event source identity. - Attributes["syslog.procid"] + `Attributes["syslog.procid"]` MSGID string Defines the type of the event. Part of event source identity. Example: "TCPIN" - Attributes["syslog.msgid"] + `Attributes["syslog.msgid"]` STRUCTURED-DATA array of maps of string to string - A variety of use cases depending on the SDID:
+ A variety of use cases depending on the SD-ID:
Can describe event source identity.
Can include data that describes particular occurrence of the event.
Can be meta-information, e.g. quality of timestamp value. - SDID origin.swVersion map to Resource["service.version"]. SDID origin.ip map to Attributes["client.address"]. Rest of SDIDs -> Attributes["syslog.*"] + SD-ID origin.swVersion map to `Resource["service.version"]`. SD-ID origin.ip map to `Attributes["client.address"]`. Rest of SD-IDs -> `Attributes["syslog.*"]` MSG @@ -127,13 +127,13 @@ Can be meta-information, e.g. quality of timestamp value. Computer string The name of the computer on which the event occurred. - Resource["host.name"] + `Resource["host.name"]` EventID uint The identifier that the provider used to identify the event. - Attributes["winlog.event_id"] + `Attributes["winlog.event_id"]` Message @@ -145,7 +145,7 @@ Can be meta-information, e.g. quality of timestamp value. Rest of the fields. any All other fields in the event. - Attributes["winlog.*"] + `Attributes["winlog.*"]` @@ -168,13 +168,13 @@ Can be meta-information, e.g. quality of timestamp value. EventType string Short machine understandable string describing the event type. SignalFx specific concept. Non-namespaced. Example: k8s Event Reason field. - Attributes["com.splunk.signalfx.event_type"] + `Attributes["com.splunk.signalfx.event_type"]` Category enum Describes where the event originated and why. SignalFx specific concept. Example: AGENT. - Attributes["com.splunk.signalfx.event_category"] + `Attributes["com.splunk.signalfx.event_category"]` Dimensions @@ -211,19 +211,19 @@ We apply this mapping from HEC to the unified model: host string The host value to assign to the event data. This is typically the host name of the client that you are sending data from. - Resource["host.name"] + `Resource["host.name"]` source string The source value to assign to the event data. For example, if you are sending data from an app you are developing, you could set this key to the name of the app. 
- Resource["com.splunk.source"] + `Resource["com.splunk.source"]` sourcetype string The sourcetype value to assign to the event data. - Resource["com.splunk.sourcetype"] + `Resource["com.splunk.sourcetype"]` event @@ -241,7 +241,7 @@ We apply this mapping from HEC to the unified model: index string The name of the index by which the event data is to be indexed. The index you specify here must be within the list of allowed indexes if the token has the indexes parameter set. - Attributes["com.splunk.index"] + `Attributes["com.splunk.index"]` @@ -258,37 +258,37 @@ When mapping from the unified model to HEC, we apply this additional mapping: SeverityText string The severity of the event as a human-readable string. - fields['otel.log.severity.text'] + `Fields["otel.log.severity.text"]` SeverityNumber string The severity of the event as a number. - fields['otel.log.severity.number'] + `Fields["otel.log.severity.number"]` Name string Short event identifier that does not contain varying parts. - fields['otel.log.name'] + `Fields["otel.log.name"]` TraceId string Request trace id. - fields['trace_id'] + `Fields["trace_id"]` SpanId string Request span id. - fields['span_id'] + `Fields["span_id"]` TraceFlags string W3C trace flags. - fields['trace_flags'] + `Fields["trace_flags"]` @@ -388,37 +388,37 @@ When mapping from the unified model to HEC, we apply this additional mapping: %a string Client address - Attributes["network.peer.address"] + `Attributes["network.peer.address"]` %A string Server address - Attributes["network.local.address"] + `Attributes["network.local.address"]` %h string Client hostname. - Attributes["client.address"] + `Attributes["client.address"]` %m string The request method. - Attributes["http.request.method"] + `Attributes["http.request.method"]` %v,%p,%U,%q string Multiple fields that can be composed into URL. - Attributes["url.full"] + `Attributes["url.full"]` %>s string Response status. 
- Attributes["http.response.status_code"] + `Attributes["http.response.status_code"]` All other fields @@ -447,25 +447,25 @@ When mapping from the unified model to HEC, we apply this additional mapping: eventSource string The service that the request was made to. This name is typically a short form of the service name without spaces plus .amazonaws.com. - Resource["service.name"]? + `Resource["service.name"]`? awsRegion string The AWS region that the request was made to, such as us-east-2. - Resource["cloud.region"] + `Resource["cloud.region"]` sourceIPAddress string The IP address that the request was made from. - Attributes["client.address"] + `Attributes["client.address"]` errorCode string The AWS service error if the request returns an error. - Attributes["cloudtrail.error_code"] + `Attributes["cloudtrail.error_code"]` errorMessage @@ -477,7 +477,7 @@ When mapping from the unified model to HEC, we apply this additional mapping: All other fields * - Attributes["cloudtrail.*"] + `Attributes["cloudtrail.*"]` @@ -487,7 +487,7 @@ When mapping from the unified model to HEC, we apply this additional mapping: | ----- | ---- | ----------- | --------------------------- | | timestamp | string | The time the event described by the log entry occurred. | Timestamp | | resource | MonitoredResource | The monitored resource that produced this log entry. | Resource | -| log_name | string | The URL-encoded LOG_ID suffix of the log_name field identifies which log stream this entry belongs to. | Attributes["gcp.log_name"] | +| log_name | string | The URL-encoded LOG_ID suffix of the log_name field identifies which log stream this entry belongs to. | `Attributes["gcp.log_name"]` | | json_payload | google.protobuf.Struct | The log entry payload, represented as a structure that is expressed as a JSON object. | Body | | proto_payload | google.protobuf.Any | The log entry payload, represented as a protocol buffer. 
| Body | | text_payload | string | The log entry payload, represented as a Unicode string (UTF-8). | Body | @@ -495,9 +495,9 @@ When mapping from the unified model to HEC, we apply this additional mapping: | trace | string | The trace associated with the log entry, if any. | TraceId | | span_id | string | The span ID within the trace associated with the log entry. | SpanId | | labels | map | A set of user-defined (key, value) data that provides additional information about the log entry. | Attributes | -| http_request | HttpRequest | The HTTP request associated with the log entry, if any. | Attributes["gcp.http_request"] | +| http_request | HttpRequest | The HTTP request associated with the log entry, if any. | `Attributes["gcp.http_request"]` | | trace_sampled | boolean | The sampling decision of the trace associated with the log entry. | TraceFlags.SAMPLED | -| All other fields | | | Attributes["gcp.*"] | +| All other fields | | | `Attributes["gcp.*"]` | ### Elastic Common Schema @@ -560,37 +560,37 @@ When mapping from the unified model to HEC, we apply this additional mapping: agent.name string Name given to the agent - Resource["telemetry.sdk.name"] + `Resource["telemetry.sdk.name"]` agent.type string Type of agent - Resource["telemetry.sdk.language"] + `Resource["telemetry.sdk.language"]` agent.version string Version of agent - Resource["telemetry.sdk.version"] + `Resource["telemetry.sdk.version"]` source.ip, client.ip string The IP address that the request was made from. - Attributes["client.address"] + `Attributes["client.address"]` cloud.account.id string ID of the account in the given cloud - Resource["cloud.account.id"] + `Resource["cloud.account.id"]` cloud.availability_zone string Availability zone in which this host is running. - Resource["cloud.zone"] + `Resource["cloud.zone"]` cloud.instance.id @@ -614,31 +614,31 @@ When mapping from the unified model to HEC, we apply this additional mapping: cloud.provider string Name of the cloud provider. 
Example values are aws, azure, gcp, or digitalocean. - Resource["cloud.provider"] + `Resource["cloud.provider"]` cloud.region string Region in which this host is running. - Resource["cloud.region"] + `Resource["cloud.region"]` cloud.image.id* string - Resource["host.image.name"] + `Resource["host.image.name"]` container.id string Unique container id - Resource["container.id"] + `Resource["container.id"]` container.image.name string Name of the image the container was built on. - Resource["container.image.name"] + `Resource["container.image.name"]` container.image.tag @@ -656,7 +656,7 @@ When mapping from the unified model to HEC, we apply this additional mapping: container.name string Container name. - Resource["container.name"] + `Resource["container.name"]` container.runtime @@ -668,31 +668,31 @@ When mapping from the unified model to HEC, we apply this additional mapping: destination.address string Destination address for the event - Attributes["destination.address"] + `Attributes["destination.address"]` error.code string Error code describing the error. - Attributes["error.code"] + `Attributes["error.code"]` error.id string Unique identifier for the error. - Attributes["error.id"] + `Attributes["error.id"]` error.message string Error message. - Attributes["error.message"] + `Attributes["error.message"]` error.stack_trace string The stack trace of this error in plain text. - Attributes["error.stack_trace] + `Attributes["error.stack_trace"]` host.architecture @@ -710,37 +710,37 @@ When mapping from the unified model to HEC, we apply this additional mapping: host.name string Hostname of the host.
It normally contains what the hostname command returns on the host machine. - Resource["host.name"] + `Resource["host.name"]` host.id string Unique host id. - Resource["host.id"] + `Resource["host.id"]` host.ip Array of string Host IP - Resource["host.ip"] + `Resource["host.ip"]` host.mac array of string MAC addresses of the host - Resource["host.mac"] + `Resource["host.mac"]` host.name string Name of the host.
It may contain what hostname returns on Unix systems, the fully qualified, or a name specified by the user. - Resource["host.name"] + `Resource["host.name"]` host.type string Type of host. - Resource["host.type"] + `Resource["host.type"]` host.uptime @@ -764,19 +764,19 @@ When mapping from the unified model to HEC, we apply this additional mapping: service.name string Name of the service data is collected from. - Resource["service.name"] + `Resource["service.name"]` service.node.name string Specific node serving that service - Resource["service.instance.id"] + `Resource["service.instance.id"]` service.state string Current state of the service. - Attributes["service.state"] + `Attributes["service.state"]` service.type @@ -788,7 +788,7 @@ When mapping from the unified model to HEC, we apply this additional mapping: service.version string Version of the service the data was collected from. - Resource["service.version"] + `Resource["service.version"]` diff --git a/specification/metrics/data-model.md b/specification/metrics/data-model.md index 362c882a4eb..04b86a05d7d 100644 --- a/specification/metrics/data-model.md +++ b/specification/metrics/data-model.md @@ -182,7 +182,7 @@ decisions within the metrics data model. ### Out of Scope Use-cases -The metrics data model is NOT designed to be a perfect rosetta stone of metrics. +The metrics data model is NOT designed to be a perfect Rosetta Stone of metrics. Here are a set of use cases that, while won't be outright unsupported, are not in scope for key design decisions: @@ -200,6 +200,7 @@ in scope for key design decisions: OpenTelemetry fragments metrics into three interacting models: + - An Event model, representing how instrumentation reports metric data. - A Timeseries model, representing how backends store metric data. - A Metric Stream model, defining the *O*pen*T*e*L*emetry *P*rotocol (OTLP) @@ -415,7 +416,7 @@ in OTLP consist of the following: - The time interval is inclusive of the end time. 
- Times are specified in Value is UNIX Epoch time in nanoseconds since `00:00:00 UTC on 1 January 1970` - - (optional) a set of examplars (see [Exemplars](#exemplars)). + - (optional) a set of exemplars (see [Exemplars](#exemplars)). - (optional) Data point flags (see [Data point flags](#data-point-flags)). The aggregation temporality is used to understand the context in which the sum @@ -451,7 +452,7 @@ in OTLP represents a sampled value at a given time. A Gauge stream consists of: - (optional) A timestamp (`start_time_unix_nano`) which best represents the first possible moment a measurement could be recorded. This is commonly set to the timestamp when a metric collection system started. - - (optional) a set of examplars (see [Exemplars](#exemplars)). + - (optional) a set of exemplars (see [Exemplars](#exemplars)). - (optional) Data point flags (see [Data point flags](#data-point-flags)). In OTLP, a point within a Gauge stream represents the last-sampled event for a @@ -498,7 +499,7 @@ Histograms consist of the following: for buckets and whether not a given observation would be recorded in this bucket. - A count of the number of observations that fell within this bucket. - - (optional) a set of examplars (see [Exemplars](#exemplars)). + - (optional) a set of exemplars (see [Exemplars](#exemplars)). - (optional) Data point flags (see [Data point flags](#data-point-flags)). Like Sums, Histograms also define an aggregation temporality. The picture above diff --git a/specification/metrics/sdk_exporters/prometheus.md b/specification/metrics/sdk_exporters/prometheus.md index 91eef280ec3..b5f2b432321 100644 --- a/specification/metrics/sdk_exporters/prometheus.md +++ b/specification/metrics/sdk_exporters/prometheus.md @@ -14,9 +14,9 @@ OpenTelemetry metrics MUST be converted to Prometheus metrics according to the A Prometheus Exporter SHOULD use [Prometheus client libraries](https://prometheus.io/docs/instrumenting/clientlibs/) -for serving Prometheus metrics. 
This allows the prometheus client to negotiate +for serving Prometheus metrics. This allows the Prometheus client to negotiate the [format](https://github.com/prometheus/docs/blob/main/docs/instrumenting/exposition_formats.md) -of the response using the `Content-Type` header. If a prometheus client library +of the response using the `Content-Type` header. If a Prometheus client library is used, the OpenTelemetry Prometheus Exporter SHOULD be modeled as a [custom Collector](https://prometheus.io/docs/instrumenting/writing_clientlibs/#overall-structure) so it can be used in conjunction with existing Prometheus instrumentation. diff --git a/specification/resource/README.md b/specification/resource/README.md index cbf8d11472a..ef270704dac 100644 --- a/specification/resource/README.md +++ b/specification/resource/README.md @@ -87,7 +87,7 @@ storage solution). For example, in the extreme, OpenTelemetry could synthesize a UUID for every system which produces telemetry. All identifying attributes for Resource and Entity could be sent via a side channel with known relationships to this UUID. -While this would optimise the runtime generation and sending of telemetry, it +While this would optimize the runtime generation and sending of telemetry, it comes at the cost of downstream storage systems needing to join data back together either at ingestion time or query time. For high performance use cases, e.g. alerting, these joins can be expensive. diff --git a/specification/resource/data-model.md b/specification/resource/data-model.md index 94bada584c4..5fa43c193cc 100644 --- a/specification/resource/data-model.md +++ b/specification/resource/data-model.md @@ -44,6 +44,6 @@ Entity includes its own notion of identity. The identity of a resource is the set of entities contained within it. Two resources are considered different if one contains an entity not found in the other. -Some resources include raw attributes in additon to Entities.
Raw attributes are +Some resources include raw attributes in addition to Entities. Raw attributes are considered identifying on a resource. That is, if the key-value pairs of raw attributes are different, then you can assume the resource is different. diff --git a/specification/trace/tracestate-probability-sampling.md b/specification/trace/tracestate-probability-sampling.md index 0c433aaac42..0bea3268801 100644 --- a/specification/trace/tracestate-probability-sampling.md +++ b/specification/trace/tracestate-probability-sampling.md @@ -264,7 +264,7 @@ randomness value or a dependent source of randomness (it can use Sampling stages that yield spans with unknown sampling probability, including parent-based samplers when they encounter a Context with -no parent thresohld, must erase the OpenTelemetry threshold +no parent threshold, must erase the OpenTelemetry threshold value in their output. Sampling stages should check for consistency when it is a simple test, @@ -435,6 +435,7 @@ This package demonstrates how to directly calculate integer thresholds from prob OpenTelemetry SDKs are recommended to use 4 digits of precision by default. The following table shows values computed by the method above for 1-in-N probability sampling, with precision 3, 4, and 5. +
| 1-in-N | Input probability | Threshold (precision 3, 4, 5) | Actual probability (precision 3, 4, 5) | Exact Adjusted Count (precision 3, 4, 5) |
| ------- | ------------------ | ---------------------------------- | ---------------------------------------------------------------------------- | --------------------------------------------------------------------- |
| 10000 | 0.0001 | fff972<br>fff9724<br>fff97247 | 0.00010001659393310547<br>0.00010000169277191162<br>0.00010000006295740604 | 9998.340882002383<br>9999.830725674266<br>9999.99370426336 |
| 100000 | 0.00001 | ffff584<br>ffff583a<br>ffff583a5 | 9.998679161071777e-06<br>1.00000761449337e-05<br>1.0000003385357559e-05 | 100013.21013412817<br>99999.238556461<br>99999.96614643588 |
| 1000000 | 0.000001 | ffffef4<br>ffffef39<br>ffffef391 | 9.98377799987793e-07<br>1.00000761449337e-06<br>9.999930625781417e-07 | 1.0016248358208955e+06<br>999992.38556461<br>1.0000069374699865e+06 |
+

### Converting integer threshold to a `T`-value
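The threshold entries in the table above can be cross-checked with a short Python sketch of the rounding idea the table illustrates: compute the exact rejection threshold `(1 - p) * 2**56` and keep a fixed number of significant hex digits after the leading run of `f`s. The helper names here are invented for illustration, and the rounding is simplified (it ignores edge cases such as a carry propagating into the run of `f`s); this is a sketch, not the normative algorithm from the specification.

```python
def threshold_hex(prob: float, precision: int) -> str:
    # A span is kept when its 56-bit randomness value is >= the rejection
    # threshold (1 - prob) * 2**56. Keep `precision` significant hex digits
    # after the leading run of 'f' digits, rounding the rest away.
    scaled = (1 - prob) * (1 << 56)
    full = format(round(scaled), "014x")            # full 14-hex-digit threshold
    leading_fs = len(full) - len(full.lstrip("f"))  # length of the leading 'f' run
    keep = leading_fs + precision                   # total hex digits to keep
    shift = 4 * (14 - keep)                         # low-order bits to round away
    return format(round(scaled / (1 << shift)), "x")


def implied_probability(threshold: str) -> float:
    # Sampling probability actually implied by a shortened threshold string.
    bits = 4 * len(threshold)
    return 1 - int(threshold, 16) / (1 << bits)
```

For example, `threshold_hex(0.0001, 4)` reproduces the `fff9724` entry of the 1-in-10000 row, and `implied_probability("fff9724")` recovers the corresponding value in the actual-probability column.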