diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index bfa7a536cf83..7f3df3fa549f 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -2808,17 +2808,6 @@ Topics: Topics: - Name: Distributed tracing architecture File: distr-tracing-architecture -- Name: Distributed tracing platform (Jaeger) - Dir: distr_tracing_jaeger - Topics: - - Name: Installation - File: distr-tracing-jaeger-installing - - Name: Configuration - File: distr-tracing-jaeger-configuring - - Name: Updating - File: distr-tracing-jaeger-updating - - Name: Removal - File: distr-tracing-jaeger-removing - Name: Distributed tracing platform (Tempo) Dir: distr_tracing_tempo Topics: @@ -2830,6 +2819,17 @@ Topics: File: distr-tracing-tempo-updating - Name: Removal File: distr-tracing-tempo-removing +- Name: Distributed tracing platform (Jaeger) + Dir: distr_tracing_jaeger + Topics: + - Name: Installation + File: distr-tracing-jaeger-installing + - Name: Configuration + File: distr-tracing-jaeger-configuring + - Name: Updating + File: distr-tracing-jaeger-updating + - Name: Removal + File: distr-tracing-jaeger-removing --- Name: Red Hat build of OpenTelemetry Dir: otel @@ -2839,12 +2839,20 @@ Topics: File: otel-release-notes - Name: Installation File: otel-installing -- Name: Collector configuration - File: otel-configuring -- Name: Instrumentation - File: otel-instrumentation -- Name: Use - File: otel-using +- Name: Configuration of the OpenTelemetry Collector + File: otel-configuration-of-otel-collector +- Name: Configuration of the instrumentation + File: otel-configuration-of-instrumentation +- Name: Sending traces and metrics to the Collector + File: otel-sending-traces-and-metrics-to-otel-collector +- Name: Sending metrics to the monitoring stack + File: otel-config-send-metrics-monitoring-stack +- Name: Forwarding traces to a TempoStack + File: otel-forwarding +- Name: Configuring the Collector metrics + File: otel-configuring-otelcol-metrics +- Name: Gathering the observability data from multiple clusters + File: otel-config-multicluster - Name: Troubleshooting File: otel-troubleshooting - Name: Migration diff --git a/modules/distr-tracing-architecture.adoc b/modules/distr-tracing-architecture.adoc index eb9734a29411..4979ceb47993 100644 --- a/modules/distr-tracing-architecture.adoc +++ b/modules/distr-tracing-architecture.adoc @@ -9,22 +9,6 @@ This module included in the following assemblies: {DTProductName} is made up of several components that work together to collect, store, and display tracing data. -* *{JaegerName}* - This component is based on the open source link:https://www.jaegertracing.io/[Jaeger project]. - -** *Client* (Jaeger client, Tracer, Reporter, instrumented application, client libraries)- The {JaegerShortName} clients are language-specific implementations of the OpenTracing API. They can be used to instrument applications for distributed tracing either manually or with a variety of existing open source frameworks, such as Camel (Fuse), Spring Boot (RHOAR), MicroProfile (RHOAR/Thorntail), Wildfly (EAP), and many more, that are already integrated with OpenTracing. - -** *Agent* (Jaeger agent, Server Queue, Processor Workers) - The {JaegerShortName} agent is a network daemon that listens for spans sent over User Datagram Protocol (UDP), which it batches and sends to the Collector. The agent is meant to be placed on the same host as the instrumented application. This is typically accomplished by having a sidecar in container environments such as Kubernetes. 
- -** *Jaeger Collector* (Collector, Queue, Workers) - Similar to the Jaeger agent, the Jaeger Collector receives spans and places them in an internal queue for processing. This allows the Jaeger Collector to return immediately to the client/agent instead of waiting for the span to make its way to the storage. - -** *Storage* (Data Store) - Collectors require a persistent storage backend. {JaegerName} has a pluggable mechanism for span storage. Note that for this release, the only supported storage is Elasticsearch. - -** *Query* (Query Service) - Query is a service that retrieves traces from storage. - -** *Ingester* (Ingester Service) - {DTProductName} can use Apache Kafka as a buffer between the Collector and the actual Elasticsearch backing storage. Ingester is a service that reads data from Kafka and writes to the Elasticsearch storage backend. - -** *Jaeger Console* – With the {JaegerName} user interface, you can visualize your distributed tracing data. On the Search page, you can find traces and explore details of the spans that make up an individual trace. - * *{TempoName}* - This component is based on the open source link:https://grafana.com/oss/tempo/[Grafana Tempo project]. ** *Gateway* – The Gateway handles authentication, authorization, and forwarding requests to the Distributor or Query front-end service. @@ -43,3 +27,18 @@ This module included in the following assemblies: ** *OpenTelemetry Collector* - The OpenTelemetry Collector is a vendor-agnostic way to receive, process, and export telemetry data. The OpenTelemetry Collector supports open-source observability data formats, for example, Jaeger and Prometheus, sending to one or more open-source or commercial back-ends. The Collector is the default location instrumentation libraries export their telemetry data. +* *{JaegerName}* - This component is based on the open source link:https://www.jaegertracing.io/[Jaeger project]. + +** *Client* (Jaeger client, Tracer, Reporter, instrumented application, client libraries)- The {JaegerShortName} clients are language-specific implementations of the OpenTracing API. They can be used to instrument applications for distributed tracing either manually or with a variety of existing open source frameworks, such as Camel (Fuse), Spring Boot (RHOAR), MicroProfile (RHOAR/Thorntail), Wildfly (EAP), and many more, that are already integrated with OpenTracing. + +** *Agent* (Jaeger agent, Server Queue, Processor Workers) - The {JaegerShortName} agent is a network daemon that listens for spans sent over User Datagram Protocol (UDP), which it batches and sends to the Collector. The agent is meant to be placed on the same host as the instrumented application. This is typically accomplished by having a sidecar in container environments such as Kubernetes. + +** *Jaeger Collector* (Collector, Queue, Workers) - Similar to the Jaeger agent, the Jaeger Collector receives spans and places them in an internal queue for processing. This allows the Jaeger Collector to return immediately to the client/agent instead of waiting for the span to make its way to the storage. + +** *Storage* (Data Store) - Collectors require a persistent storage backend. {JaegerName} has a pluggable mechanism for span storage. Note that for this release, the only supported storage is Elasticsearch. + +** *Query* (Query Service) - Query is a service that retrieves traces from storage. 
+
+** *Ingester* (Ingester Service) - {DTProductName} can use Apache Kafka as a buffer between the Collector and the actual Elasticsearch backing storage. Ingester is a service that reads data from Kafka and writes to the Elasticsearch storage backend.
+
+** *Jaeger Console* – With the {JaegerName} user interface, you can visualize your distributed tracing data. On the Search page, you can find traces and explore details of the spans that make up an individual trace.
diff --git a/modules/distr-tracing-product-overview.adoc b/modules/distr-tracing-product-overview.adoc
index a7768e395e9c..7d5ecf4613f9 100644
--- a/modules/distr-tracing-product-overview.adoc
+++ b/modules/distr-tracing-product-overview.adoc
@@ -32,12 +32,12 @@ With the {DTShortName}, you can perform the following functions:

 The {DTShortName} consists of three components:

-* *{JaegerName}*, which is based on the open source link:https://www.jaegertracing.io/[Jaeger project].
-
 * *{TempoName}*, which is based on the open source link:https://grafana.com/oss/tempo/[Grafana Tempo project].

 * *{OTELNAME}*, which is based on the open source link:https://opentelemetry.io/[OpenTelemetry project].

+* *{JaegerName}*, which is based on the open source link:https://www.jaegertracing.io/[Jaeger project].
++
 [IMPORTANT]
 ====
 Jaeger does not use FIPS validated cryptographic modules.
diff --git a/modules/distr-tracing-tempo-config-query-frontend.adoc b/modules/distr-tracing-tempo-config-query-frontend.adoc
index 425c5a87053d..807255b377a8 100644
--- a/modules/distr-tracing-tempo-config-query-frontend.adoc
+++ b/modules/distr-tracing-tempo-config-query-frontend.adoc
@@ -8,10 +8,10 @@

 Two components of the {TempoShortName}, the querier and query frontend, manage queries. You can configure both of these components.

-The querier component finds the requested trace ID in the ingesters or back-end storage. Depending on the set parameters, the querier component can query both the ingesters and pull bloom or indexes from the back end to search blocks in object storage. The querier component exposes an HTTP endpoint at `GET /querier/api/traces/`, but it is not expected to be used directly. Queries must be sent to the query frontend.
+The querier component finds the requested trace ID in the ingesters or back-end storage. Depending on the set parameters, the querier component can query both the ingesters and pull bloom or indexes from the back end to search blocks in object storage. The querier component exposes an HTTP endpoint at `GET /querier/api/traces/`, but it is not expected to be used directly. Queries must be sent to the query frontend.

 .Configuration parameters for the querier component
-[options="header"]
+[options="header",cols="l, a, a"]
 |===
 |Parameter |Description |Values
@@ -28,10 +28,10 @@ The querier component finds the requested trace ID in the ingesters or back-end
 |type: array
 |===

-The query frontend component is responsible for sharding the search space for an incoming query. The query frontend exposes traces via a simple HTTP endpoint: `GET /api/traces/`. Internally, the query frontend component splits the `blockID` space into a configurable number of shards and then queues these requests. The querier component connects to the query frontend component via a streaming gRPC connection to process these sharded queries.
+The query frontend component is responsible for sharding the search space for an incoming query. The query frontend exposes traces via a simple HTTP endpoint: `GET /api/traces/`.
Internally, the query frontend component splits the `blockID` space into a configurable number of shards and then queues these requests. The querier component connects to the query frontend component via a streaming gRPC connection to process these sharded queries.

 .Configuration parameters for the query frontend component
-[options="header"]
+[options="header",cols="l, a, a"]
 |===
 |Parameter |Description |Values
diff --git a/modules/distr-tracing-tempo-config-spanmetrics.adoc b/modules/distr-tracing-tempo-config-spanmetrics.adoc
index 3e3d4dfa6692..baa2806e7c49 100644
--- a/modules/distr-tracing-tempo-config-spanmetrics.adoc
+++ b/modules/distr-tracing-tempo-config-spanmetrics.adoc
@@ -13,6 +13,7 @@ The metrics can be visualized in Jaeger console in the *Monitor* tab.

 The metrics are derived from spans in the OpenTelemetry Collector that are scraped from the Collector by the Prometheus deployed in the user-workload monitoring stack. The Jaeger UI queries these metrics from the Prometheus endpoint and visualizes them.

+[id="distr-tracing-tempo-config-spanmetrics_opentelemetry-collector-configuration_{context}"]
 == OpenTelemetry Collector configuration

 The OpenTelemetry Collector requires configuration of the `spanmetrics` connector that derives metrics from traces and exports the metrics in the Prometheus format.
@@ -68,6 +69,7 @@ spec:
 <5> The Spanmetrics connector is configured as exporter in traces pipeline.
 <6> The Spanmetrics connector is configured as receiver in metrics pipeline.

+[id="distr-tracing-tempo-config-spanmetrics_tempo-configuration_{context}"]
 == Tempo configuration

 The `TempoStack` custom resource must specify the following: the *Monitor* tab is enabled, and the Prometheus endpoint is set to the Thanos querier service to query the data from the user-defined monitoring stack.
diff --git a/modules/otel-config-collector.adoc b/modules/otel-collector-components.adoc
similarity index 85%
rename from modules/otel-config-collector.adoc
rename to modules/otel-collector-components.adoc
index 7e83edcb2cd9..3b34fde00c42 100644
--- a/modules/otel-config-collector.adoc
+++ b/modules/otel-collector-components.adoc
@@ -1,162 +1,18 @@
 // Module included in the following assemblies:
 //
-// * otel/otel-configuring.adoc
+// * otel/otel-configuration-of-collector.adoc

 :_mod-docs-content-type: REFERENCE
-[id="otel-collector-config-options_{context}"]
-= OpenTelemetry Collector configuration options
-
-The OpenTelemetry Collector consists of five types of components that access telemetry data:
-
-Receivers:: A receiver, which can be push or pull based, is how data gets into the Collector. Generally, a receiver accepts data in a specified format, translates it into the internal format, and passes it to processors and exporters defined in the applicable pipelines. By default, no receivers are configured. One or more receivers must be configured. Receivers may support one or more data sources.
-
-Processors:: Optional. Processors process the data between it is received and exported. By default, no processors are enabled. Processors must be enabled for every data source. Not all processors support all data sources. Depending on the data source, multiple processors might be enabled. Note that the order of processors matters.
-
-Exporters:: An exporter, which can be push or pull based, is how you send data to one or more back ends or destinations. By default, no exporters are configured. One or more exporters must be configured. Exporters can support one or more data sources.
Exporters might be used with their default settings, but many exporters require configuration to specify at least the destination and security settings. - -Connectors:: A connector connects two pipelines. It consumes data as an exporter at the end of one pipeline and emits data as a receiver at the start of another pipeline. It can consume and emit data of the same or different data type. It can generate and emit data to summarize the consumed data, or it can merely replicate or route data. - -Extensions:: An extension adds capabilities to the Collector. For example, authentication can be added to the receivers and exporters automatically. - -You can define multiple instances of components in a custom resource YAML file. When configured, these components must be enabled through pipelines defined in the `spec.config.service` section of the YAML file. As a best practice, only enable the components that you need. - -.Example of the OpenTelemetry Collector custom resource file -[source,yaml] ----- -apiVersion: opentelemetry.io/v1alpha1 -kind: OpenTelemetryCollector -metadata: - name: cluster-collector - namespace: tracing-system -spec: - mode: deployment - observability: - metrics: - enableMetrics: true - config: | - receivers: - otlp: - protocols: - grpc: - http: - processors: - exporters: - otlp: - endpoint: jaeger-production-collector-headless.tracing-system.svc:4317 - tls: - ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" - prometheus: - endpoint: 0.0.0.0:8889 - resource_to_telemetry_conversion: - enabled: true # by default resource attributes are dropped - service: # <1> - pipelines: - traces: - receivers: [otlp] - processors: [] - exporters: [jaeger] - metrics: - receivers: [otlp] - processors: [] - exporters: [prometheus] ----- -<1> If a component is configured but not defined in the `service` section, the component is not enabled. - -.Parameters used by the Operator to define the OpenTelemetry Collector -[options="header"] -[cols="l, a, a, a"] -|=== -|Parameter |Description |Values |Default -|receivers: -|A receiver is how data gets into the Collector. By default, no receivers are configured. There must be at least one enabled receiver for a configuration to be considered valid. Receivers are enabled by being added to a pipeline. -|`otlp`, `jaeger`, `prometheus`, `zipkin`, `kafka`, `opencensus` -|None - -|processors: -|Processors run through the data between it is received and exported. By default, no processors are enabled. -|`batch`, `memory_limiter`, `resourcedetection`, `attributes`, `span`, `k8sattributes`, `filter`, `routing` -|None - -|exporters: -|An exporter sends data to one or more back ends or destinations. By default, no exporters are configured. There must be at least one enabled exporter for a configuration to be considered valid. Exporters are enabled by being added to a pipeline. Exporters might be used with their default settings, but many require configuration to specify at least the destination and security settings. -|`otlp`, `otlphttp`, `debug`, `prometheus`, `kafka` -|None - -|connectors: -|Connectors join pairs of pipelines, that is by consuming data as end-of-pipeline exporters and emitting data as start-of-pipeline receivers, and can be used to summarize, replicate, or route consumed data. -|`spanmetrics` -|None - -|extensions: -|Optional components for tasks that do not involve processing telemetry data. 
-|`bearertokenauth`, `oauth2client`, `jaegerremotesamplin`, `pprof`, `health_check`, `memory_ballast`, `zpages` -|None - -|service: - pipelines: -|Components are enabled by adding them to a pipeline under `services.pipeline`. -| -| - -|service: - pipelines: - traces: - receivers: -|You enable receivers for tracing by adding them under `service.pipelines.traces`. -| -|None - -|service: - pipelines: - traces: - processors: -|You enable processors for tracing by adding them under `service.pipelines.traces`. -| -|None - -|service: - pipelines: - traces: - exporters: -|You enable exporters for tracing by adding them under `service.pipelines.traces`. -| -|None - -|service: - pipelines: - metrics: - receivers: -|You enable receivers for metrics by adding them under `service.pipelines.metrics`. -| -|None - -|service: - pipelines: - metrics: - processors: -|You enable processors for metircs by adding them under `service.pipelines.metrics`. -| -|None - -|service: - pipelines: - metrics: - exporters: -|You enable exporters for metrics by adding them under `service.pipelines.metrics`. -| -|None -|=== - [id="otel-collector-components_{context}"] -== OpenTelemetry Collector components += OpenTelemetry Collector components [id="receivers_{context}"] -=== Receivers +== Receivers Receivers get data into the Collector. [id="otlp-receiver_{context}"] -==== OTLP Receiver +=== OTLP Receiver The OTLP receiver ingests traces and metrics using the OpenTelemetry protocol (OTLP). @@ -194,7 +50,7 @@ The OTLP receiver ingests traces and metrics using the OpenTelemetry protocol (O <6> The server-side TLS configuration. For more information, see the `grpc` protocol configuration section. [id="jaeger-receiver_{context}"] -==== Jaeger Receiver +=== Jaeger Receiver The Jaeger receiver ingests traces in the Jaeger formats. @@ -227,7 +83,7 @@ The Jaeger receiver ingests traces in the Jaeger formats. <5> The server-side TLS configuration. See the OTLP receiver configuration section for more details. [id="prometheus-receiver_{context}"] -==== Prometheus Receiver +=== Prometheus Receiver The Prometheus receiver is currently a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature only. @@ -256,7 +112,7 @@ The Prometheus receiver scrapes the metrics endpoints. <4> The targets at which the metrics are exposed. This example scrapes the metrics from a `my-app` application in the `example` project. [id="zipkin-receiver_{context}"] -==== Zipkin Receiver +=== Zipkin Receiver The Zipkin receiver ingests traces in the Zipkin v1 and v2 formats. @@ -278,7 +134,7 @@ The Zipkin receiver ingests traces in the Zipkin v1 and v2 formats. <2> The server-side TLS configuration. See the OTLP receiver configuration section for more details. [id="kafka-receiver_{context}"] -==== Kafka Receiver +=== Kafka Receiver The Kafka receiver is currently a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature only. @@ -317,7 +173,7 @@ The Kafka receiver receives traces, metrics, and logs from Kafka in the OTLP for <7> ServerName indicates the name of the server requested by the client to support virtual hosting. [id="opencensus-receiver_{context}"] -==== OpenCensus receiver +=== OpenCensus receiver The OpenCensus receiver provides backwards compatibility with the OpenCensus project for easier migration of instrumented codebases. It receives metrics and traces in the OpenCensus format via gRPC or HTTP and Json. 
@@ -344,12 +200,12 @@ Wildcards with `+*+` are accepted under the `cors_allowed_origins`. To match any origin, enter only `+*+`. [id="processors_{context}"] -=== Processors +== Processors Processors run through the data between it is received and exported. [id="batch-processor_{context}"] -==== Batch processor +=== Batch processor The Batch processor batches traces and metrics to reduce the number of outgoing connections needed to transfer the telemetry information. @@ -397,7 +253,7 @@ The Batch processor batches traces and metrics to reduce the number of outgoing |=== [id="memorylimiter-processor_{context}"] -==== Memory Limiter processor +=== Memory Limiter processor The Memory Limiter processor periodically checks the Collector's memory usage and pauses data processing when the soft memory limit is reached. This processor supports traces, metrics, and logs. The preceding component, which is typically a receiver, is expected to retry sending the same data and may apply a backpressure to the incoming data. When memory usage exceeds the hard limit, the Memory Limiter processor forces garbage collection to run. @@ -447,7 +303,7 @@ The Memory Limiter processor periodically checks the Collector's memory usage an |=== [id="resource-detection-processor_{context}"] -==== Resource Detection processor +=== Resource Detection processor The Resource Detection processor is currently a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature only. @@ -494,7 +350,7 @@ rules: <1> Specifies which detector to use. In this example, the environment detector is specified. [id="attributes-processor_{context}"] -==== Attributes processor +=== Attributes processor The Attributes processor is currently a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature only. @@ -544,7 +400,7 @@ Convert:: Converts an existing attribute to a specified type. ---- [id="resource-processor_{context}"] -==== Resource processor +=== Resource processor The Resource processor is currently a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature only. @@ -569,7 +425,7 @@ The Resource processor applies changes to the resource attributes. This processo Attributes represent the actions that are applied to the resource attributes, such as delete the attribute, insert the attribute, or upsert the attribute. [id="span-processor_{context}"] -==== Span processor +=== Span processor The Span processor is currently a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature only. @@ -619,7 +475,7 @@ You can have the span status modified. ---- [id="kubernetes-attributes-processor_{context}"] -==== Kubernetes Attributes processor +=== Kubernetes Attributes processor The Kubernetes Attributes processor is currently a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature only. @@ -704,12 +560,12 @@ config: | You can optionally create an `attribute_source` configuratiion, which defines where to look for the attribute in `from_attribute`. The allowed value is `context` to search the context, which includes the HTTP headers, or `resource` to search the resource attributes. [id="exporters_{context}"] -=== Exporters +== Exporters Exporters send data to one or more back ends or destinations. [id="otlp-exporter_{context}"] -==== OTLP exporter +=== OTLP exporter The OTLP gRPC exporter exports traces and metrics using the OpenTelemetry protocol (OTLP). 
@@ -746,7 +602,7 @@ The OTLP gRPC exporter exports traces and metrics using the OpenTelemetry protoc <7> Headers are sent for every request performed during an established connection. [id="otlp-http-exporter_{context}"] -==== OTLP HTTP exporter +=== OTLP HTTP exporter The OTLP HTTP exporter exports traces and metrics using the OpenTelemetry protocol (OTLP). @@ -775,7 +631,7 @@ The OTLP HTTP exporter exports traces and metrics using the OpenTelemetry protoc <4> If true, disables HTTP keep-alives. It will only use the connection to the server for a single HTTP request. [id="debug-exporter_{context}"] -==== Debug exporter +=== Debug exporter The Debug exporter prints traces and metrics to the standard output. @@ -796,7 +652,7 @@ The Debug exporter prints traces and metrics to the standard output. <1> Verbosity of the debug export: `detailed` or `normal` or `basic`. When set to `detailed`, pipeline data is verbosely logged. Defaults to `normal`. [id="prometheus-exporter_{context}"] -==== Prometheus exporter +=== Prometheus exporter The Prometheus exporter is currently a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature only. @@ -841,7 +697,7 @@ The Prometheus exporter exports metrics in the Prometheus or OpenMetrics formats <9> Adds the metrics types and units suffixes. Must be disabled if the monitor tab in Jaeger console is enabled. The default is `true`. [id="kafka-exporter_{context}"] -==== Kafka exporter +=== Kafka exporter The Kafka exporter is currently a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature only. @@ -880,12 +736,12 @@ The Kafka exporter exports logs, metrics, and traces to Kafka. This exporter use <7> ServerName indicates the name of the server requested by the client to support virtual hosting. [id="connectors_{context}"] -=== Connectors +== Connectors Connectors connect two pipelines. [id="spanmetrics-connector_{context}"] -==== Spanmetrics connector +=== Spanmetrics connector The Spanmetrics connector is currently a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature only. @@ -908,12 +764,12 @@ The Spanmetrics connector aggregates Request, Error, and Duration (R.E.D) OpenTe <1> Defines the flush interval of the generated metrics. Defaults to `15s`. [id="extensions_{context}"] -=== Extensions +== Extensions Extensions add capabilities to the Collector. [id="bearertokenauth-extension_{context}"] -==== BearerTokenAuth extension +=== BearerTokenAuth extension The BearerTokenAuth extension is currently a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature only. @@ -956,7 +812,7 @@ This extension supports traces, metrics, and logs. <5> You can assign the authenticator configuration to an OTLP exporter. [id="oauth2client-extension_{context}"] -==== OAuth2Client extension +=== OAuth2Client extension The OAuth2Client extension is currently a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature only. @@ -1016,7 +872,7 @@ This extension supports traces, metrics, and logs. [id="jaegerremotesampling-extension_{context}"] -==== Jaeger Remote Sampling extension +=== Jaeger Remote Sampling extension The Jaeger Remote Sampling extension is currently a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature only. 
@@ -1101,9 +957,8 @@ The Jaeger Remote Sampling extension allows serving sampling strategies after Ja ---- - [id="pprof-extension_{context}"] -==== Performance Profiler extension +=== Performance Profiler extension The Performance Profiler extension is currently a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature only. @@ -1141,7 +996,7 @@ The Performance Profiler extension enables the Go `net/http/pprof` endpoint. Thi <4> The name of the file in which the CPU profile is to be saved. Profiling starts when the Collector starts. Profiling is saved to the file when the Collector is terminated. [id="healthcheck-extension_{context}"] -==== Health Check extension +=== Health Check extension The Health Check extension is currently a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature only. @@ -1188,7 +1043,7 @@ The Health Check extension provides an HTTP URL for checking the status of the O <7> The threshold of a number of failures until which a container is still marked as healthy. The default is `5`. [id="memory-ballast-extension_{context}"] -==== Memory Ballast extension +=== Memory Ballast extension The Memory Ballast extension is currently a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature only. @@ -1223,7 +1078,7 @@ The Memory Ballast extension enables applications to configure memory ballast fo [id="zpages-extension_{context}"] -==== zPages extension +=== zPages extension The zPages extension is currently a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature only. diff --git a/modules/otel-collector-config-options.adoc b/modules/otel-collector-config-options.adoc new file mode 100644 index 000000000000..43793cf123d8 --- /dev/null +++ b/modules/otel-collector-config-options.adoc @@ -0,0 +1,148 @@ +// Module included in the following assemblies: +// +// * otel/otel-configuration-of-collector.adoc + +:_mod-docs-content-type: REFERENCE +[id="otel-collector-config-options_{context}"] += OpenTelemetry Collector configuration options + +The OpenTelemetry Collector consists of five types of components that access telemetry data: + +Receivers:: A receiver, which can be push or pull based, is how data gets into the Collector. Generally, a receiver accepts data in a specified format, translates it into the internal format, and passes it to processors and exporters defined in the applicable pipelines. By default, no receivers are configured. One or more receivers must be configured. Receivers may support one or more data sources. + +Processors:: Optional. Processors process the data between it is received and exported. By default, no processors are enabled. Processors must be enabled for every data source. Not all processors support all data sources. Depending on the data source, multiple processors might be enabled. Note that the order of processors matters. + +Exporters:: An exporter, which can be push or pull based, is how you send data to one or more back ends or destinations. By default, no exporters are configured. One or more exporters must be configured. Exporters can support one or more data sources. Exporters might be used with their default settings, but many exporters require configuration to specify at least the destination and security settings. + +Connectors:: A connector connects two pipelines. It consumes data as an exporter at the end of one pipeline and emits data as a receiver at the start of another pipeline. 
It can consume and emit data of the same or different data type. It can generate and emit data to summarize the consumed data, or it can merely replicate or route data.
+
+Extensions:: An extension adds capabilities to the Collector. For example, authentication can be added to the receivers and exporters automatically.
+
+You can define multiple instances of components in a custom resource YAML file. When configured, these components must be enabled through pipelines defined in the `spec.config.service` section of the YAML file. As a best practice, only enable the components that you need.
+
+.Example of the OpenTelemetry Collector custom resource file
+[source,yaml]
+----
+apiVersion: opentelemetry.io/v1alpha1
+kind: OpenTelemetryCollector
+metadata:
+  name: cluster-collector
+  namespace: tracing-system
+spec:
+  mode: deployment
+  observability:
+    metrics:
+      enableMetrics: true
+  config: |
+    receivers:
+      otlp:
+        protocols:
+          grpc:
+          http:
+    processors:
+    exporters:
+      otlp:
+        endpoint: jaeger-production-collector-headless.tracing-system.svc:4317
+        tls:
+          ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
+      prometheus:
+        endpoint: 0.0.0.0:8889
+        resource_to_telemetry_conversion:
+          enabled: true # by default resource attributes are dropped
+    service: # <1>
+      pipelines:
+        traces:
+          receivers: [otlp]
+          processors: []
+          exporters: [jaeger]
+        metrics:
+          receivers: [otlp]
+          processors: []
+          exporters: [prometheus]
+----
+<1> If a component is configured but not defined in the `service` section, the component is not enabled.
+
+.Parameters used by the Operator to define the OpenTelemetry Collector
+[options="header"]
+[cols="l, a, a, a"]
+|===
+|Parameter |Description |Values |Default
+|receivers:
+|A receiver is how data gets into the Collector. By default, no receivers are configured. There must be at least one enabled receiver for a configuration to be considered valid. Receivers are enabled by being added to a pipeline.
+|`otlp`, `jaeger`, `prometheus`, `zipkin`, `kafka`, `opencensus`
+|None
+
+|processors:
+|Processors run through the received data before it is exported. By default, no processors are enabled.
+|`batch`, `memory_limiter`, `resourcedetection`, `attributes`, `span`, `k8sattributes`, `filter`, `routing`
+|None
+
+|exporters:
+|An exporter sends data to one or more back ends or destinations. By default, no exporters are configured. There must be at least one enabled exporter for a configuration to be considered valid. Exporters are enabled by being added to a pipeline. Exporters might be used with their default settings, but many require configuration to specify at least the destination and security settings.
+|`otlp`, `otlphttp`, `debug`, `prometheus`, `kafka`
+|None
+
+|connectors:
+|Connectors join pairs of pipelines by consuming data as end-of-pipeline exporters and emitting data as start-of-pipeline receivers. Connectors can be used to summarize, replicate, or route consumed data.
+|`spanmetrics`
+|None
+
+|extensions:
+|Optional components for tasks that do not involve processing telemetry data.
+|`bearertokenauth`, `oauth2client`, `jaegerremotesampling`, `pprof`, `health_check`, `memory_ballast`, `zpages`
+|None
+
+|service:
+  pipelines:
+|Components are enabled by adding them to a pipeline under `service.pipelines`.
+|
+|
+
+|service:
+  pipelines:
+    traces:
+      receivers:
+|You enable receivers for tracing by adding them under `service.pipelines.traces`.
+|
+|None
+
+|service:
+  pipelines:
+    traces:
+      processors:
+|You enable processors for tracing by adding them under `service.pipelines.traces`.
+|
+|None
+
+|service:
+  pipelines:
+    traces:
+      exporters:
+|You enable exporters for tracing by adding them under `service.pipelines.traces`.
+|
+|None
+
+|service:
+  pipelines:
+    metrics:
+      receivers:
+|You enable receivers for metrics by adding them under `service.pipelines.metrics`.
+|
+|None
+
+|service:
+  pipelines:
+    metrics:
+      processors:
+|You enable processors for metrics by adding them under `service.pipelines.metrics`.
+|
+|None
+
+|service:
+  pipelines:
+    metrics:
+      exporters:
+|You enable exporters for metrics by adding them under `service.pipelines.metrics`.
+|
+|None
+|===
diff --git a/modules/otel-config-instrumentation.adoc b/modules/otel-config-instrumentation.adoc
index 4f94f83ec03e..d80c6c69a366 100644
--- a/modules/otel-config-instrumentation.adoc
+++ b/modules/otel-config-instrumentation.adoc
@@ -15,6 +15,7 @@ Auto-instrumentation in OpenTelemetry refers to the capability where the framewo
 The {OTELName} Operator only supports the injection mechanism of the instrumentation libraries but does not support instrumentation libraries or upstream images. Customers can build their own instrumentation images or use community images.
 ====

+[id="otel-instrumentation-options_{context}"]
 == Instrumentation options

 Instrumentation options are specified in the `OpenTelemetryCollector` custom resource.
@@ -97,10 +98,12 @@ spec:
 |===

+[id="otel-using-instrumentation-cr-with-service-mesh_{context}"]
 == Using the instrumentation CR with Service Mesh

 When using the instrumentation custom resource (CR) with {SMProductName}, you must use the `b3multi` propagator.

+[id="otel-configuration-of-apache-http-server-auto-instrumentation_{context}"]
 === Configuration of the Apache HTTP Server auto-instrumentation

 .Prameters for the `+.spec.apacheHttpd+` field
@@ -141,6 +144,7 @@ When using the instrumentation custom resource (CR) with {SMProductName}, you mu
 instrumentation.opentelemetry.io/inject-apache-httpd: "true"
 ----

+[id="otel-configuration-of-dotnet-auto-instrumentation_{context}"]
 === Configuration of the .NET auto-instrumentation

 [options="header"]
@@ -167,6 +171,7 @@ For the .NET auto-instrumentation, the required `OTEL_EXPORTER_OTLP_ENDPOINT` en
 instrumentation.opentelemetry.io/inject-dotnet: "true"
 ----

+[id="otel-configuration-of-go-auto-instrumentation_{context}"]
 === Configuration of the Go auto-instrumentation

 [options="header"]
@@ -224,6 +229,7 @@ $ oc adm policy add-scc-to-user otel-go-instrumentation-scc -z
 ----
 ====

+[id="otel-configuration-of-java-auto-instrumentation_{context}"]
 === Configuration of the Java auto-instrumentation

 [options="header"]
@@ -248,6 +254,7 @@ $ oc adm policy add-scc-to-user otel-go-instrumentation-scc -z
 instrumentation.opentelemetry.io/inject-java: "true"
 ----

+[id="otel-configuration-of-nodejs-auto-instrumentation_{context}"]
 === Configuration of the Node.js auto-instrumentation

 [options="header"]
@@ -275,6 +282,7 @@ instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/path/to/container/ex

 The `+instrumentation.opentelemetry.io/otel-go-auto-target-exe+` annotation sets the value for the required `OTEL_GO_AUTO_TARGET_EXE` environment variable.

+[id="otel-configuration-of-python-auto-instrumentation_{context}"]
 === Configuration of the Python auto-instrumentation

 [options="header"]
@@ -301,6 +309,7 @@ For Python auto-instrumentation, the `OTEL_EXPORTER_OTLP_ENDPOINT` environment v
 instrumentation.opentelemetry.io/inject-python: "true"
 ----

+[id="otel-configuration-of-opentelemetry-sdk-variables_{context}"]
 === Configuration of the OpenTelemetry SDK variables

 The OpenTelemetry SDK variables in your pod are configurable by using the following annotation:
@@ -320,6 +329,7 @@ Note that all the annotations accept the following values:
 `other-namespace/instrumentation-name`:: The name of the instrumentation resource to inject from another namespace.

+[id="otel-multi-container-pods_{context}"]
 === Multi-container pods

 The instrumentation is run on the first container that is available by default according to the pod specification. In some cases, you can also specify target containers for injection.
diff --git a/modules/otel-config-target-allocator.adoc b/modules/otel-config-target-allocator.adoc
new file mode 100644
index 000000000000..bda205341ae2
--- /dev/null
+++ b/modules/otel-config-target-allocator.adoc
@@ -0,0 +1,93 @@
+// Module included in the following assemblies:
+//
+// * otel/otel-configuration-of-collector.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="otel-config-target-allocator_{context}"]
+= Target allocator
+
+The target allocator is an optional component of the OpenTelemetry Operator that shards scrape targets across the deployed fleet of OpenTelemetry Collector instances. The target allocator integrates with the Prometheus `PodMonitor` and `ServiceMonitor` custom resources (CRs). When the target allocator is enabled, the OpenTelemetry Operator adds the `http_sd_config` field to the enabled `prometheus` receiver that connects to the target allocator service.
+
+.Example OpenTelemetryCollector CR with the enabled target allocator
+[source,yaml]
+----
+apiVersion: opentelemetry.io/v1alpha1
+kind: OpenTelemetryCollector
+metadata:
+  name: otel
+  namespace: observability
+spec:
+  mode: statefulset # <1>
+  targetAllocator:
+    enabled: true # <2>
+    serviceAccount: # <3>
+    prometheusCR:
+      enabled: true # <4>
+      scrapeInterval: 10s
+      serviceMonitorSelector: # <5>
+        name: app1
+      podMonitorSelector: # <6>
+        name: app2
+  config: |
+    receivers:
+      prometheus: # <7>
+        config:
+          scrape_configs: []
+    processors:
+    exporters:
+      debug:
+    service:
+      pipelines:
+        metrics:
+          receivers: [prometheus]
+          processors: []
+          exporters: [debug]
+----
+<1> When the target allocator is enabled, the deployment mode must be set to `statefulset`.
+<2> Enables the target allocator. Defaults to `false`.
+<3> The service account name of the target allocator deployment. The service account needs to have RBAC to get the `ServiceMonitor`, `PodMonitor` custom resources, and other objects from the cluster to properly set labels on scraped metrics. The default service name is `-targetallocator`.
+<4> Enables integration with the Prometheus `PodMonitor` and `ServiceMonitor` custom resources.
+<5> Label selector for the Prometheus `ServiceMonitor` custom resources. When left empty, enables all service monitors.
+<6> Label selector for the Prometheus `PodMonitor` custom resources. When left empty, enables all pod monitors.
+<7> Prometheus receiver with the minimal, empty `scrape_configs: []` configuration option.
+
+The target allocator deployment uses the Kubernetes API to get relevant objects from the cluster, so it requires a custom RBAC configuration.
+
+.RBAC configuration for the target allocator service account
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: otel-targetallocator
+rules:
+  - apiGroups: [""]
+    resources:
+      - services
+      - pods
+    verbs: ["get", "list", "watch"]
+  - apiGroups: ["monitoring.coreos.com"]
+    resources:
+      - servicemonitors
+      - podmonitors
+    verbs: ["get", "list", "watch"]
+  - apiGroups: ["discovery.k8s.io"]
+    resources:
+      - endpointslices
+    verbs: ["get", "list", "watch"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: otel-targetallocator
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: otel-targetallocator
+subjects:
+  - kind: ServiceAccount
+    name: otel-targetallocator # <1>
+    namespace: observability # <2>
+----
+<1> The name of the target allocator service account.
+<2> The namespace of the target allocator service account.
diff --git a/modules/otel-send-traces-and-metrics-to-otel-collector-with-sidecar.adoc b/modules/otel-send-traces-and-metrics-to-otel-collector-with-sidecar.adoc
index 449b6c0095df..471668061a99 100644
--- a/modules/otel-send-traces-and-metrics-to-otel-collector-with-sidecar.adoc
+++ b/modules/otel-send-traces-and-metrics-to-otel-collector-with-sidecar.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * otel/otel-using.adoc
+// * otel/otel-sending-traces-and-metrics-to-otel-collector.adoc

 :_mod-docs-content-type: PROCEDURE
 [id="sending-traces-and-metrics-to-otel-collector-with-sidecar_{context}"]
diff --git a/modules/otel-send-traces-and-metrics-to-otel-collector-without-sidecar.adoc b/modules/otel-send-traces-and-metrics-to-otel-collector-without-sidecar.adoc
index 1f27c913b1c7..67beea579ce5 100644
--- a/modules/otel-send-traces-and-metrics-to-otel-collector-without-sidecar.adoc
+++ b/modules/otel-send-traces-and-metrics-to-otel-collector-without-sidecar.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * otel/otel-using.adoc
+// * otel/otel-sending-traces-and-metrics-to-otel-collector.adoc

 :_mod-docs-content-type: PROCEDURE
 [id="sending-traces-and-metrics-to-otel-collector-without-sidecar_{context}"]
diff --git a/modules/otel-config-multicluster.adoc b/otel/otel-config-multicluster.adoc
similarity index 96%
rename from modules/otel-config-multicluster.adoc
rename to otel/otel-config-multicluster.adoc
index d3353364d103..781e0f500ab3 100644
--- a/modules/otel-config-multicluster.adoc
+++ b/otel/otel-config-multicluster.adoc
@@ -1,10 +1,8 @@
-// Module included in the following assemblies:
-//
-// * otel/otel-configuring.adoc
-
 :_mod-docs-content-type: PROCEDURE
-[id="gathering-observability-data-from-different-clusters_{context}"]
-= Gathering the observability data from different clusters with the OpenTelemetry Collector
+[id="otel-gathering-observability-data-from-multiple-clusters"]
+= Gathering the observability data from multiple clusters
+include::_attributes/common-attributes.adoc[]
+:context: otel-gathering-observability-data-from-multiple-clusters

 For a multicluster configuration, you can create one OpenTelemetry Collector instance in each one of the remote clusters and then forward all the telemetry data to one OpenTelemetry Collector instance.
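+
+For illustration only, the following sketch shows the shape of such a configuration: an edge Collector in a remote cluster exports its telemetry over OTLP to the central Collector instance. The `central-collector.example.com` endpoint and the `/certs/ca.crt` path are placeholders; the procedure in this module defines the exact resources, certificates, and authentication to use.
+
+[source,yaml]
+----
+# Sketch of an edge Collector pipeline in a remote cluster (placeholder values)
+receivers:
+  otlp:
+    protocols:
+      grpc:
+      http:
+exporters:
+  otlp:
+    endpoint: https://central-collector.example.com:443 # placeholder route to the central Collector
+    tls:
+      ca_file: "/certs/ca.crt" # placeholder CA certificate path
+service:
+  pipelines:
+    traces:
+      receivers: [otlp]
+      processors: []
+      exporters: [otlp]
+----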
diff --git a/modules/otel-config-send-metrics-monitoring-stack.adoc b/otel/otel-config-send-metrics-monitoring-stack.adoc similarity index 91% rename from modules/otel-config-send-metrics-monitoring-stack.adoc rename to otel/otel-config-send-metrics-monitoring-stack.adoc index 5671d0f6959e..a381fb34179b 100644 --- a/modules/otel-config-send-metrics-monitoring-stack.adoc +++ b/otel/otel-config-send-metrics-monitoring-stack.adoc @@ -1,10 +1,8 @@ -// Module included in the following assemblies: -// -// * otel/deploying-otel.adoc - :_mod-docs-content-type: REFERENCE -[id="configuration-for-sending-metrics-to-the-monitoring-stack_{context}"] +[id="otel-configuration-for-sending-metrics-to-the-monitoring-stack"] = Configuration for sending metrics to the monitoring stack +include::_attributes/common-attributes.adoc[] +:context: otel-configuration-for-sending-metrics-to-the-monitoring-stack The OpenTelemetry Collector custom resource (CR) can be configured to create a Prometheus `ServiceMonitor` CR for scraping the Collector's pipeline metrics and the enabled Prometheus exporters. diff --git a/otel/otel-instrumentation.adoc b/otel/otel-configuration-of-instrumentation.adoc similarity index 74% rename from otel/otel-instrumentation.adoc rename to otel/otel-configuration-of-instrumentation.adoc index 13eadd2bf81a..86ebd4ab3560 100644 --- a/otel/otel-instrumentation.adoc +++ b/otel/otel-configuration-of-instrumentation.adoc @@ -1,8 +1,8 @@ :_mod-docs-content-type: ASSEMBLY -[id="otel-instrumentation"] -= Configuring and deploying the OpenTelemetry instrumentation injection +[id="otel-configuration-of-instrumentation"] += Configuration of the instrumentation include::_attributes/common-attributes.adoc[] -:context: otel-instrumentation +:context: otel-configuration-of-instrumentation toc::[] diff --git a/otel/otel-configuration-of-otel-collector.adoc b/otel/otel-configuration-of-otel-collector.adoc new file mode 100644 index 000000000000..18cb4704872b --- /dev/null +++ b/otel/otel-configuration-of-otel-collector.adoc @@ -0,0 +1,13 @@ +:_mod-docs-content-type: ASSEMBLY +[id="otel-configuration-of-otel-collector"] += Configuration of the OpenTelemetry Collector +include::_attributes/common-attributes.adoc[] +:context: otel-configuration-of-otel-collector + +toc::[] + +The {OTELName} Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the {OTELShortName} resources. You can install the default configuration or modify the file. 
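+
+As a minimal illustration, the following sketch shows an `OpenTelemetryCollector` custom resource that receives OTLP data and writes it to the Debug exporter. The `otel` name and `tracing-system` namespace are placeholders; the modules that follow describe all supported components and configuration options in detail.
+
+[source,yaml]
+----
+apiVersion: opentelemetry.io/v1alpha1
+kind: OpenTelemetryCollector
+metadata:
+  name: otel # placeholder name
+  namespace: tracing-system # placeholder namespace
+spec:
+  mode: deployment
+  config: |
+    receivers:
+      otlp:
+        protocols:
+          grpc:
+          http:
+    exporters:
+      debug:
+    service:
+      pipelines:
+        traces:
+          receivers: [otlp]
+          processors: []
+          exporters: [debug]
+----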
+ +include::modules/otel-collector-config-options.adoc[leveloffset=+1] +include::modules/otel-collector-components.adoc[leveloffset=+1] +include::modules/otel-config-target-allocator.adoc[leveloffset=+1] diff --git a/modules/otel-configuring-otelcol-metrics.adoc b/otel/otel-configuring-otelcol-metrics.adoc similarity index 60% rename from modules/otel-configuring-otelcol-metrics.adoc rename to otel/otel-configuring-otelcol-metrics.adoc index 3b8e6ac29d83..b33417e0d360 100644 --- a/modules/otel-configuring-otelcol-metrics.adoc +++ b/otel/otel-configuring-otelcol-metrics.adoc @@ -1,10 +1,12 @@ -// Module included in the following assemblies: -// -// * otel/otel-configuring.adoc - :_mod-docs-content-type: PROCEDURE -[id="configuring-otelcol-metrics_{context}"] +[id="otel-configuring-metrics"] = Configuring the OpenTelemetry Collector metrics +include::_attributes/common-attributes.adoc[] +:context: otel-configuring-metrics + +//[id="setting-up-monitoring-for-otel"] +//== Setting up monitoring for the {OTELShortName} +//The {OTELOperator} supports monitoring and alerting of each OpenTelemtry Collector instance and exposes upgrade and operational metrics about the Operator itself. You can enable metrics and alerts of OpenTelemetry Collector instances. @@ -33,3 +35,6 @@ spec: You can use the *Administrator* view of the web console to verify successful configuration: * Go to *Observe* -> *Targets*, filter by *Source: User*, and check that the *ServiceMonitors* in the `opentelemetry-collector-` format have the *Up* status. + +.Additional resources +* xref:../monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects] diff --git a/otel/otel-configuring.adoc b/otel/otel-configuring.adoc deleted file mode 100644 index 8350694ba3cc..000000000000 --- a/otel/otel-configuring.adoc +++ /dev/null @@ -1,27 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="otel-configuring"] -= Configuring and deploying the {OTELShortName} -include::_attributes/common-attributes.adoc[] -:context: otel-configuring - -toc::[] - -The {OTELName} Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the {OTELShortName} resources. You can install the default configuration or modify the file. - -include::modules/otel-config-collector.adoc[leveloffset=+1] -include::modules/otel-config-multicluster.adoc[leveloffset=+1] -include::modules/otel-config-send-metrics-monitoring-stack.adoc[leveloffset=+1] - -[id="setting-up-monitoring-for-otel"] -== Setting up monitoring for the {OTELShortName} - -The {OTELOperator} supports monitoring and alerting of each OpenTelemtry Collector instance and exposes upgrade and operational metrics about the Operator itself. 
- -include::modules/otel-configuring-otelcol-metrics.adoc[leveloffset=+2] - -// modules/otel-configuring-oteloperator-metrics.adoc[leveloffset=+2] - -[role="_additional-resources"] -[id="additional-resources_deploy-otel"] -== Additional resources -* xref:../monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects] diff --git a/modules/otel-forwarding.adoc b/otel/otel-forwarding.adoc similarity index 95% rename from modules/otel-forwarding.adoc rename to otel/otel-forwarding.adoc index a106f5810d1e..e9dbc602f8da 100644 --- a/modules/otel-forwarding.adoc +++ b/otel/otel-forwarding.adoc @@ -1,10 +1,8 @@ -// Module included in the following assemblies: -// -// * otel/otel-using.adoc - :_mod-docs-content-type: PROCEDURE -[id="forwarding-traces_{context}"] -= Forwarding traces to a TempoStack by using the OpenTelemetry Collector +[id="otel-forwarding-traces"] += Forwarding traces to a TempoStack +include::_attributes/common-attributes.adoc[] +:context: otel-forwarding-traces To configure forwarding traces to a TempoStack, you can deploy and configure the OpenTelemetry Collector. You can deploy the OpenTelemetry Collector in the deployment mode by using the specified processors, receivers, and exporters. For other modes, see the OpenTelemetry Collector documentation linked in _Additional resources_. diff --git a/otel/otel-using.adoc b/otel/otel-sending-traces-and-metrics-to-otel-collector.adoc similarity index 57% rename from otel/otel-using.adoc rename to otel/otel-sending-traces-and-metrics-to-otel-collector.adoc index 8712a19b4420..a850e6afaadb 100644 --- a/otel/otel-using.adoc +++ b/otel/otel-sending-traces-and-metrics-to-otel-collector.adoc @@ -1,20 +1,15 @@ :_mod-docs-content-type: ASSEMBLY -[id="otel-temp"] -= Using the {OTELShortName} +[id="otel-sending-traces-and-metrics-to-otel-collector"] += Sending traces and metrics to the OpenTelemetry Collector include::_attributes/common-attributes.adoc[] -:context: otel-temp +:context: otel-sending-traces-and-metrics-to-otel-collector toc::[] You can set up and use the {OTELShortName} to send traces to the OpenTelemetry Collector or the TempoStack. -include::modules/otel-forwarding.adoc[leveloffset=+1] - -[id="otel-send-traces-and-metrics-to-otel-collector_{context}"] -== Sending traces and metrics to the OpenTelemetry Collector - Sending traces and metrics to the OpenTelemetry Collector is possible with or without sidecar injection. -include::modules/otel-send-traces-and-metrics-to-otel-collector-with-sidecar.adoc[leveloffset=+2] +include::modules/otel-send-traces-and-metrics-to-otel-collector-with-sidecar.adoc[leveloffset=+1] -include::modules/otel-send-traces-and-metrics-to-otel-collector-without-sidecar.adoc[leveloffset=+2] +include::modules/otel-send-traces-and-metrics-to-otel-collector-without-sidecar.adoc[leveloffset=+1]