Define observability requirements for stable components#11772

Merged
mx-psi merged 12 commits into
open-telemetry:main from
jade-guiton-dd:11581-self-o11y-reqs
Dec 16, 2024
Conversation

Contributor

@jade-guiton-dd jade-guiton-dd commented Nov 28, 2024

Description

This PR defines observability requirements for components at the "Stable" stability level. The goal is to ensure that Collector pipelines are properly observable, to help in debugging configuration issues.

Approach

  • The requirements are deliberately kept general, so that they can be adapted to each specific component and do not over-burden component authors.
  • After discussing it with @mx-psi, this list of requirements explicitly includes things that may end up being emitted automatically as part of the Pipeline Instrumentation RFC (RFC - Pipeline Component Telemetry #11406), with only a note at the beginning explaining that not everything may need to be implemented manually.

Feel free to share if you don't think this is the right approach for these requirements.

Link to tracking issue

Resolves #11581

Important note regarding the Pipeline Instrumentation RFC

I included this paragraph in the part about error count metrics:

The goal is to be able to easily pinpoint the source of data loss in the Collector pipeline, so this should either:

  • only include errors internal to the component, or;
  • allow distinguishing said errors from ones originating in an external service, or propagated from downstream Collector components.

The Pipeline Instrumentation RFC (hereafter abbreviated "PI"), once implemented, should allow monitoring component errors via the outcome attribute, which is either success or failure, depending on whether the Consumer API call returned an error.

Note that this does not work for receivers, or allow differentiating between different types of errors; for that reason, I believe additional component-specific error metrics will often still be required, but it would be nice to cover as many cases as possible automatically.

However, at the moment, errors are (usually) propagated upstream through the chain of Consume calls, so in case of error the failure state will end up applied to all components upstream of the actual source of the error. This means the PI metrics do not fit the first bullet point.

Moreover, I would argue that even post-processing the PI metrics does not reliably allow distinguishing the ultimate source of errors (the second bullet point). One simple idea is to compute consumed.items{outcome:failure} - produced.items{outcome:failure} to get the number of errors originating in a component. But this only works if output items map one-to-one to input items: if a processor or connector outputs fewer items than it consumes (because it aggregates them, or translates to a different signal type), this formula will return false positives. If these false positives are mixed with real errors from the component and/or from downstream, the situation becomes impossible to analyze by just looking at the metrics.

For these reasons, I believe we should do one of four things:

  1. Change the way we use the Consumer API to no longer propagate errors, making the PI metric outcomes more precise.
    We could catch errors in whatever wrapper we already use to emit the PI metrics, log them for posterity, and simply not propagate them.
    Note that some components already more or less do this, such as the batchprocessor, but this option may in principle break components which rely on downstream errors (for retry purposes for example).
  2. Keep propagating errors, but modify or extend the RFC to require distinguishing between internal and propagated errors (maybe add a third outcome value, or add another attribute).
    This could be implemented by somehow propagating additional state from one Consume call to another, allowing us to establish the first appearance of a given error value in the pipeline.
  3. Loosen this requirement so that the PI metrics suffice in their current state.
  4. Leave everything as-is and make component authors implement their own somewhat redundant error count metrics.

@jade-guiton-dd jade-guiton-dd added discussion-needed Community discussion needed Skip Changelog PRs that do not require a CHANGELOG.md entry Skip Contrib Tests labels Nov 28, 2024

codecov Bot commented Nov 28, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 91.59%. Comparing base (cef6ce5) to head (1b9b3a9).
Report is 1 commit behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main   #11772   +/-   ##
=======================================
  Coverage   91.59%   91.59%           
=======================================
  Files         449      449           
  Lines       23761    23761           
=======================================
  Hits        21763    21763           
  Misses       1623     1623           
  Partials      375      375           


@jade-guiton-dd jade-guiton-dd marked this pull request as ready for review November 28, 2024 16:39
@jade-guiton-dd jade-guiton-dd requested a review from a team as a code owner November 28, 2024 16:39
@mx-psi mx-psi requested a review from djaglowski November 29, 2024 10:20
Member

@mx-psi mx-psi left a comment


Thanks! Left a few comments. I think we also want to clarify this is not an exhaustive list: components may want to add other telemetry if it makes sense

For other components, this would typically be the number of items forwarded to the next
component through the `Consumer` API.

3. How much data is dropped because of errors.
Contributor

Per @djaglowski 's RFC here I think this would just be an attribute on the output metric called outcome? I can see the value in having a separate metric for errors, but want to be sure we don't create a divergence from the RFC. Separately, should we link the RFC from Dan to also specify the previously agreed upon naming conventions?

Contributor Author

@jade-guiton-dd jade-guiton-dd Dec 11, 2024

We probably want to require the use of the RFC's conventions for component-identifying attributes, and I will definitely include using an outcome attribute on the input metric instead of a separate error metric as a recommended implementation for processors.

However, if we want to incentivize contributing external components, I don't think we want to require strict adherence to all of the RFC's choices, so divergences are somewhat inevitable. Relatedly, have you read the "Important note" about the RFC in the PR description? I'm interested in hearing what you think.

Member

Per @djaglowski 's RFC here I think this would just be an attribute on the output metric called outcome?

That would be the most natural way to go about this. I feel like this document should not be too prescriptive as to how to accomplish the requirements listed, but making a recommendation like this would make sense to me to ensure consistency across components.

Contributor

@jade-guiton-dd I hadn't seen that yet, thanks for bringing my attention to it, i think i had only seen the initial description.

I would strongly advise against option 1 as error back propagation is key if you are running a collector in gateway mode and you want to propagate backpressure to an agent. I think options 2 or 3 are sufficient, option 4 feels not prescriptive enough IMO.

Contributor

Given this is the recommendation for a component, it makes sense to have the component author use a custom error metric that they can decide to either include or exclude any downstream errors as part of it. (this is what you have written, and i agree with it 😄)

Contributor Author

@jade-guiton-dd jade-guiton-dd Dec 12, 2024

they can decide to either include or exclude any downstream errors as part of it. (this is what you have written, and i agree with it 😄)

To be clear, the current requirements allow including downstream errors in a custom error metric, but only if there is a way to distinguish them from internal errors.

Contributor

yep, this makes sense to me. Thank you! 🙇

Contributor Author

Just to be sure @djaglowski, do you support option 2 detailed in the PR description, i.e. amending the Pipeline Instrumentation RFC to require the implementation to distinguish errors coming directly from the next pipeline component from errors propagated from components further downstream, in order to fit the last paragraph of point 3?

Member

I think it makes sense in principle, as long as there is a clear mechanism for communicating this information, so that instrumentation that is automatically wrapped around components can unambiguously know the correct outcome.

@mx-psi mx-psi enabled auto-merge December 16, 2024 09:03
@mx-psi mx-psi added this pull request to the merge queue Dec 16, 2024
Merged via the queue into open-telemetry:main with commit 8ac40a0 Dec 16, 2024
@github-actions github-actions Bot added this to the next release milestone Dec 16, 2024
github-merge-queue Bot pushed a commit that referenced this pull request Jan 28, 2025
…11956)

### Context

The [Pipeline Component Telemetry
RFC](https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/rfcs/component-universal-telemetry.md)
was recently accepted (#11406). The document states the following
regarding error monitoring:
> For both [consumed and produced] metrics, an `outcome` attribute with
possible values `success` and `failure` should be automatically
recorded, corresponding to whether or not the corresponding function
call returned an error. Specifically, consumed measurements will be
recorded with `outcome` as `failure` when a call from the previous
component to the `ConsumeX` function returns an error, and `success`
otherwise. Likewise, produced measurements will be recorded with
`outcome` as `failure` when a call to the next consumer's `ConsumeX`
function returns an error, and `success` otherwise.


[Observability requirements for stable pipeline
components](https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/component-stability.md#observability-requirements)
were also recently merged (#11772). The document states the following
regarding error monitoring:
> The goal is to be able to easily pinpoint the source of data loss in
the Collector pipeline, so this should either:
> - only include errors internal to the component, or;
> - allow distinguishing said errors from ones originating in an
external service, or propagated from downstream Collector components.

Because errors are typically propagated across `ConsumeX` calls in a
pipeline (except for components with an internal queue like
`processor/batch`), the error observability mechanism proposed by the
RFC implies that Pipeline Telemetry will record failures for every
component interface upstream of the component that actually emitted the
error, which does not match the goals set out in the observability
requirements, and makes it much harder to tell which component errors
are coming from based on the emitted telemetry.

### Description

This PR amends the Pipeline Component Telemetry RFC with the following:
- restrict the `outcome=failure` value to cases where the error comes
from the very next component (the component on which `ConsumeX` was
called);
- add a third possible value for the `outcome` attribute: `rejected`,
for cases where an error observed at an interface comes from further
downstream (the component did not "fail", but its output was
"rejected");
- propose a mechanism to determine which of the two values should be
used.

The current proposal for the mechanism is for the pipeline
instrumentation layer to wrap errors in an unexported `downstream`
struct, which upstream layers could check for with `errors.As` to check
whether the error has already been "attributed" to a component. This is
the same mechanism currently used for tracking permanent vs. retryable
errors. Please check the diff for details.

### Possible alternatives

There are a few alternatives to this amendment, which were discussed as
part of the observability requirements PR:
- loosen the observability requirements for stable components to not
require distinguishing internal errors from downstream ones → makes it
harder to identify the source of an error;
- modify the way we use the `Consumer` API to no longer propagate errors
upstream → prevents proper propagation of backpressure through the
pipeline (although this is likely already a problem with the `batch`
processor);
- let component authors make their own custom telemetry to solve the
problem → higher barrier to entry, especially for people wanting to
open-source existing components.

---------

Co-authored-by: Pablo Baeyens <pablo.baeyens@datadoghq.com>
sfc-gh-sili pushed a commit to sfc-gh-sili/opentelemetry-collector that referenced this pull request Feb 3, 2025
