42 changes: 25 additions & 17 deletions _topic_maps/_topic_map.yml
@@ -2808,17 +2808,6 @@ Topics:
Topics:
- Name: Distributed tracing architecture
File: distr-tracing-architecture
- Name: Distributed tracing platform (Jaeger)
Dir: distr_tracing_jaeger
Topics:
- Name: Installation
File: distr-tracing-jaeger-installing
- Name: Configuration
File: distr-tracing-jaeger-configuring
- Name: Updating
File: distr-tracing-jaeger-updating
- Name: Removal
File: distr-tracing-jaeger-removing
- Name: Distributed tracing platform (Tempo)
Dir: distr_tracing_tempo
Topics:
@@ -2830,6 +2819,17 @@ Topics:
File: distr-tracing-tempo-updating
- Name: Removal
File: distr-tracing-tempo-removing
- Name: Distributed tracing platform (Jaeger)
Dir: distr_tracing_jaeger
Topics:
- Name: Installation
File: distr-tracing-jaeger-installing
- Name: Configuration
File: distr-tracing-jaeger-configuring
- Name: Updating
File: distr-tracing-jaeger-updating
- Name: Removal
File: distr-tracing-jaeger-removing
---
Name: Red Hat build of OpenTelemetry
Dir: otel
@@ -2839,12 +2839,20 @@ Topics:
File: otel-release-notes
- Name: Installation
File: otel-installing
- Name: Collector configuration
File: otel-configuring
- Name: Instrumentation
File: otel-instrumentation
- Name: Use
File: otel-using
- Name: Configuration of the OpenTelemetry Collector
File: otel-configuration-of-otel-collector
- Name: Configuration of the instrumentation
File: otel-configuration-of-instrumentation
- Name: Sending traces and metrics to the Collector
File: otel-sending-traces-and-metrics-to-otel-collector
- Name: Sending metrics to the monitoring stack
File: otel-config-send-metrics-monitoring-stack
- Name: Forwarding traces to a TempoStack
File: otel-forwarding
- Name: Configuring the Collector metrics
File: otel-configuring-otelcol-metrics
- Name: Gathering the observability data from multiple clusters
File: otel-config-multicluster
- Name: Troubleshooting
File: otel-troubleshooting
- Name: Migration
31 changes: 15 additions & 16 deletions modules/distr-tracing-architecture.adoc
@@ -9,22 +9,6 @@ This module included in the following assemblies:

{DTProductName} is made up of several components that work together to collect, store, and display tracing data.

* *{JaegerName}* - This component is based on the open source link:https://www.jaegertracing.io/[Jaeger project].

** *Client* (Jaeger client, Tracer, Reporter, instrumented application, client libraries) - The {JaegerShortName} clients are language-specific implementations of the OpenTracing API. They can be used to instrument applications for distributed tracing either manually or with a variety of existing open source frameworks, such as Camel (Fuse), Spring Boot (RHOAR), MicroProfile (RHOAR/Thorntail), Wildfly (EAP), and many more, that are already integrated with OpenTracing.

** *Agent* (Jaeger agent, Server Queue, Processor Workers) - The {JaegerShortName} agent is a network daemon that listens for spans sent over User Datagram Protocol (UDP), which it batches and sends to the Collector. The agent is meant to be placed on the same host as the instrumented application. This is typically accomplished by having a sidecar in container environments such as Kubernetes.

** *Jaeger Collector* (Collector, Queue, Workers) - Similar to the Jaeger agent, the Jaeger Collector receives spans and places them in an internal queue for processing. This allows the Jaeger Collector to return immediately to the client/agent instead of waiting for the span to make its way to the storage.

** *Storage* (Data Store) - Collectors require a persistent storage backend. {JaegerName} has a pluggable mechanism for span storage. Note that for this release, the only supported storage is Elasticsearch.

** *Query* (Query Service) - Query is a service that retrieves traces from storage.

** *Ingester* (Ingester Service) - {DTProductName} can use Apache Kafka as a buffer between the Collector and the actual Elasticsearch backing storage. Ingester is a service that reads data from Kafka and writes to the Elasticsearch storage backend.

** *Jaeger Console* – With the {JaegerName} user interface, you can visualize your distributed tracing data. On the Search page, you can find traces and explore details of the spans that make up an individual trace.

* *{TempoName}* - This component is based on the open source link:https://grafana.com/oss/tempo/[Grafana Tempo project].

** *Gateway* – The Gateway handles authentication, authorization, and forwarding requests to the Distributor or Query front-end service.
@@ -43,3 +27,18 @@ This module included in the following assemblies:

** *OpenTelemetry Collector* - The OpenTelemetry Collector is a vendor-agnostic way to receive, process, and export telemetry data. The OpenTelemetry Collector supports open-source observability data formats, for example, Jaeger and Prometheus, sending to one or more open-source or commercial back-ends. The Collector is the default location where instrumentation libraries export their telemetry data.

* *{JaegerName}* - This component is based on the open source link:https://www.jaegertracing.io/[Jaeger project].

** *Client* (Jaeger client, Tracer, Reporter, instrumented application, client libraries) - The {JaegerShortName} clients are language-specific implementations of the OpenTracing API. They can be used to instrument applications for distributed tracing either manually or with a variety of existing open source frameworks, such as Camel (Fuse), Spring Boot (RHOAR), MicroProfile (RHOAR/Thorntail), Wildfly (EAP), and many more, that are already integrated with OpenTracing.

** *Agent* (Jaeger agent, Server Queue, Processor Workers) - The {JaegerShortName} agent is a network daemon that listens for spans sent over User Datagram Protocol (UDP), which it batches and sends to the Collector. The agent is meant to be placed on the same host as the instrumented application. This is typically accomplished by having a sidecar in container environments such as Kubernetes.

** *Jaeger Collector* (Collector, Queue, Workers) - Similar to the Jaeger agent, the Jaeger Collector receives spans and places them in an internal queue for processing. This allows the Jaeger Collector to return immediately to the client/agent instead of waiting for the span to make its way to the storage.

** *Storage* (Data Store) - Collectors require a persistent storage backend. {JaegerName} has a pluggable mechanism for span storage. Note that for this release, the only supported storage is Elasticsearch.

** *Query* (Query Service) - Query is a service that retrieves traces from storage.

** *Ingester* (Ingester Service) - {DTProductName} can use Apache Kafka as a buffer between the Collector and the actual Elasticsearch backing storage. Ingester is a service that reads data from Kafka and writes to the Elasticsearch storage backend.

** *Jaeger Console* – With the {JaegerName} user interface, you can visualize your distributed tracing data. On the Search page, you can find traces and explore details of the spans that make up an individual trace.
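
The components above are typically deployed through their Operators. The following minimal sketches illustrate that wiring; the resource names, namespace assumptions, and endpoint values are illustrative and are not taken from this change.

A minimal `OpenTelemetryCollector` custom resource, assuming the Red Hat build of OpenTelemetry Operator is installed, that receives OTLP data and forwards traces to a Tempo distributor service (the endpoint name is hypothetical):

[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  mode: deployment
  config: |
    receivers:
      otlp:                 # accept OTLP over gRPC and HTTP
        protocols:
          grpc:
          http:
    exporters:
      otlp:
        endpoint: tempo-simplest-distributor:4317   # hypothetical Tempo distributor service
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlp]
----

A minimal `Jaeger` custom resource, assuming the {JaegerName} Operator is installed, that deploys the Collector, Query, and Elasticsearch-backed storage components described above:

[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-production
spec:
  strategy: production      # separate Collector and Query deployments
  storage:
    type: elasticsearch     # the only supported storage backend in this release
----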
4 changes: 2 additions & 2 deletions modules/distr-tracing-product-overview.adoc
@@ -32,12 +32,12 @@ With the {DTShortName}, you can perform the following functions:

The {DTShortName} consists of three components:

* *{JaegerName}*, which is based on the open source link:https://www.jaegertracing.io/[Jaeger project].

* *{TempoName}*, which is based on the open source link:https://grafana.com/oss/tempo/[Grafana Tempo project].

* *{OTELNAME}*, which is based on the open source link:https://opentelemetry.io/[OpenTelemetry project].

* *{JaegerName}*, which is based on the open source link:https://www.jaegertracing.io/[Jaeger project].
+
[IMPORTANT]
====
Jaeger does not use FIPS validated cryptographic modules.
8 changes: 4 additions & 4 deletions modules/distr-tracing-tempo-config-query-frontend.adoc
@@ -8,10 +8,10 @@

Two components of the {TempoShortName}, the querier and query frontend, manage queries. You can configure both of these components.

The querier component finds the requested trace ID in the ingesters or back-end storage. Depending on the set parameters, the querier component can query both the ingesters and pull bloom or indexes from the back end to search blocks in object storage. The querier component exposes an HTTP endpoint at `GET /querier/api/traces/<traceID>`, but it is not expected to be used directly. Queries must be sent to the query frontend.
The querier component finds the requested trace ID in the ingesters or back-end storage. Depending on the set parameters, the querier component can query both the ingesters and pull bloom or indexes from the back end to search blocks in object storage. The querier component exposes an HTTP endpoint at `GET /querier/api/traces/<trace_id>`, but it is not expected to be used directly. Queries must be sent to the query frontend.

.Configuration parameters for the querier component
[options="header"]
[options="header",cols="l, a, a]
|===
|Parameter |Description |Values

@@ -28,10 +28,10 @@ The querier component finds the requested trace ID in the ingesters or back-end
|type: array
|===
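
The querier parameters above correspond to settings in the upstream Tempo configuration. As a rough sketch only — the exact set of parameters exposed through the `TempoStack` custom resource can differ, and the values shown are illustrative rather than recommendations:

[source,yaml]
----
querier:
  max_concurrent_queries: 20    # upper bound on queries the querier processes in parallel
  search:
    query_timeout: 30s          # timeout for a single search request
  frontend_worker:
    frontend_address: tempo-simplest-query-frontend-discovery:9095   # hypothetical query frontend service
----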

The query frontend component is responsible for sharding the search space for an incoming query. The query frontend exposes traces via a simple HTTP endpoint: `GET /api/traces/<traceID>`. Internally, the query frontend component splits the `blockID` space into a configurable number of shards and then queues these requests. The querier component connects to the query frontend component via a streaming gRPC connection to process these sharded queries.
The query frontend component is responsible for sharding the search space for an incoming query. The query frontend exposes traces via a simple HTTP endpoint: `GET /api/traces/<trace_id>`. Internally, the query frontend component splits the `blockID` space into a configurable number of shards and then queues these requests. The querier component connects to the query frontend component via a streaming gRPC connection to process these sharded queries.

.Configuration parameters for the query frontend component
[options="header"]
[options="header",cols="l, a, a]
|===
|Parameter |Description |Values

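The query frontend parameters above likewise map onto the upstream Tempo configuration. A rough sketch under the same caveat — parameter names follow the upstream configuration and the values are illustrative:

[source,yaml]
----
query_frontend:
  max_retries: 2                      # retries for a failed sharded query
  search:
    concurrent_jobs: 1000             # number of search jobs dispatched in parallel
    target_bytes_per_job: 104857600   # approximate bytes of block data per search job
----
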
2 changes: 2 additions & 0 deletions modules/distr-tracing-tempo-config-spanmetrics.adoc
@@ -13,6 +13,7 @@ The metrics can be visualized in Jaeger console in the *Monitor* tab.
The metrics are derived from spans in the OpenTelemetry Collector that are scraped from the Collector by the Prometheus deployed in the user-workload monitoring stack.
The Jaeger UI queries these metrics from the Prometheus endpoint and visualizes them.

[id="distr-tracing-tempo-config-spanmetrics_opentelemetry-collector-configuration_{context}"]
== OpenTelemetry Collector configuration

The OpenTelemetry Collector requires configuration of the `spanmetrics` connector that derives metrics from traces and exports the metrics in the Prometheus format.
@@ -68,6 +69,7 @@ spec:
<5> The Spanmetrics connector is configured as an exporter in the traces pipeline.
<6> The Spanmetrics connector is configured as a receiver in the metrics pipeline.
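
For reference, a minimal sketch of the connector wiring that the callouts describe, as a fragment of the `OpenTelemetryCollector` resource's `spec`. The receiver choice, Prometheus exporter port, and flush interval are assumptions, not part of this change:

[source,yaml]
----
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    connectors:
      spanmetrics:
        metrics_flush_interval: 15s    # how often derived metrics are flushed
    exporters:
      prometheus:
        endpoint: 0.0.0.0:8889         # scrape target for the user-workload monitoring Prometheus
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [spanmetrics]     # the connector acts as an exporter in the traces pipeline
        metrics:
          receivers: [spanmetrics]     # the connector acts as a receiver in the metrics pipeline
          exporters: [prometheus]
----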

[id="distr-tracing-tempo-config-spanmetrics_tempo-configuration_{context}"]
== Tempo configuration

The `TempoStack` custom resource must specify the following: the *Monitor* tab is enabled, and the Prometheus endpoint is set to the Thanos querier service to query the data from the user-defined monitoring stack.
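
A minimal sketch of the relevant `TempoStack` fragment follows. It assumes the Tempo Operator exposes the `monitorTab` fields shown here; verify the field names against the installed Operator version, and note that other required fields, such as storage, are omitted. The resource name and endpoint value are illustrative:

[source,yaml]
----
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
spec:
  template:
    queryFrontend:
      jaegerQuery:
        enabled: true
        monitorTab:
          enabled: true    # enables the Monitor tab in the Jaeger console
          prometheusEndpoint: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091    # Thanos querier for user-workload metrics
----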