Define observability requirements for stable components #11772
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Coverage Diff (main vs. #11772):
- Coverage: 91.59% (unchanged)
- Files: 449 (unchanged)
- Lines: 23761 (unchanged)
- Hits: 21763 (unchanged)
- Misses: 1623 (unchanged)
- Partials: 375 (unchanged)
mx-psi left a comment:
Thanks! Left a few comments. I think we also want to clarify that this is not an exhaustive list: components may want to add other telemetry if it makes sense.
> For other components, this would typically be the number of items forwarded to the next component through the `Consumer` API.

> 3. How much data is dropped because of errors.
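As a rough, hypothetical sketch of what counting items forwarded through the `Consumer` API could look like inside a component (the type, function, and metric names here are made up for illustration, not part of the Collector's actual conventions):

```go
// Hypothetical sketch: a logs component counting the items it forwards to the
// next component through the Consumer API. Names are illustrative only.
package sketch

import (
	"context"

	"go.opentelemetry.io/collector/consumer"
	"go.opentelemetry.io/collector/pdata/plog"
	"go.opentelemetry.io/otel/metric"
)

type forwardCounter struct {
	next  consumer.Logs       // next component in the pipeline
	items metric.Int64Counter // e.g. "myprocessor_forwarded_items" (made-up name)
}

func newForwardCounter(next consumer.Logs, meter metric.Meter) (*forwardCounter, error) {
	items, err := meter.Int64Counter("myprocessor_forwarded_items")
	if err != nil {
		return nil, err
	}
	return &forwardCounter{next: next, items: items}, nil
}

func (f *forwardCounter) ConsumeLogs(ctx context.Context, ld plog.Logs) error {
	err := f.next.ConsumeLogs(ctx, ld)
	if err == nil {
		// Only count items that were successfully handed off downstream.
		f.items.Add(ctx, int64(ld.LogRecordCount()))
	}
	return err
}
```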
Per @djaglowski's RFC here, I think this would just be an attribute on the output metric called `outcome`? I can see the value in having a separate metric for errors, but I want to be sure we don't create a divergence from the RFC. Separately, should we link the RFC from Dan to also specify the previously agreed-upon naming conventions?
We probably want to require the use of the RFC's conventions for component-identifying attributes, and I will definitely include using an outcome attribute on the input metric instead of a separate error metric as a recommended implementation for processors.
However, if we want to incentivize contributing external components, I don't think we want to require strict adherence to all of the RFC's choices, so divergences are somewhat inevitable. Relatedly, have you read the "Important note" about the RFC in the PR description? I'm interested in hearing what you think.
> Per @djaglowski's RFC here I think this would just be an attribute on the output metric called `outcome`?
That would be the most natural way to go about this. I feel like this document should not be too prescriptive as to how to accomplish the requirements listed, but making a recommendation like this would make sense to me to ensure consistency across components.
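To make that recommendation concrete, here is a minimal, hedged sketch of recording an `outcome` attribute on a produced-items counter using the OTel Go metrics API (the helper and wiring are illustrative, not the actual pipeline instrumentation layer):

```go
// Hypothetical helper: tag the produced-items measurement with an "outcome"
// attribute based on whether the next consumer's Consume call returned an error.
package sketch

import (
	"context"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

func recordProduced(ctx context.Context, producedItems metric.Int64Counter, count int64, consumeErr error) {
	outcome := "success"
	if consumeErr != nil {
		outcome = "failure"
	}
	producedItems.Add(ctx, count,
		metric.WithAttributes(attribute.String("outcome", outcome)))
}
```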
@jade-guiton-dd I hadn't seen that yet, thanks for bringing my attention to it. I think I had only seen the initial description.
I would strongly advise against option 1 as error back propagation is key if you are running a collector in gateway mode and you want to propagate backpressure to an agent. I think options 2 or 3 are sufficient, option 4 feels not prescriptive enough IMO.
Given this is the recommendation for a component, it makes sense to have the component author use a custom error metric, in which they can decide to either include or exclude any downstream errors. (This is what you have written, and I agree with it 😄)
> they can decide to either include or exclude any downstream errors as part of it. (this is what you have written, and i agree with it 😄)
To be clear, the current requirements allow including downstream errors in a custom error metric, but only if there is a way to distinguish them from internal errors.
yep, this makes sense to me. Thank you! 🙇
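As an illustration of the point above, a component-specific error metric could keep downstream errors distinguishable with an attribute. A minimal sketch, assuming a hypothetical `error.source` attribute (not an agreed-upon convention):

```go
// Hypothetical sketch: a component-specific error counter whose attribute
// separates the component's own errors from errors propagated from downstream.
package sketch

import (
	"context"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

func recordError(ctx context.Context, errCounter metric.Int64Counter, fromDownstream bool) {
	source := "internal" // error produced by this component itself
	if fromDownstream {
		source = "downstream" // error returned by the next consumer
	}
	errCounter.Add(ctx, 1,
		metric.WithAttributes(attribute.String("error.source", source)))
}
```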
Just to be sure @djaglowski, do you support option 2 that I detailed in the PR description, i.e. amending the Pipeline Instrumentation RFC to require the implementation to distinguish errors coming directly from the next pipeline component from errors propagated from components further downstream, in order to fit the last paragraph of point 3?
I think it makes sense in principle, as long as there is a clear mechanism for communicating this information so that instrumentation that is automatically wrapped around components can unambiguously know the correct outcome.
…11956)

### Context

The [Pipeline Component Telemetry RFC](https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/rfcs/component-universal-telemetry.md) was recently accepted (#11406). The document states the following regarding error monitoring:

> For both [consumed and produced] metrics, an `outcome` attribute with possible values `success` and `failure` should be automatically recorded, corresponding to whether or not the corresponding function call returned an error. Specifically, consumed measurements will be recorded with `outcome` as `failure` when a call from the previous component to the `ConsumeX` function returns an error, and `success` otherwise. Likewise, produced measurements will be recorded with `outcome` as `failure` when a call to the next consumer's `ConsumeX` function returns an error, and `success` otherwise.

[Observability requirements for stable pipeline components](https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/component-stability.md#observability-requirements) were also recently merged (#11772). The document states the following regarding error monitoring:

> The goal is to be able to easily pinpoint the source of data loss in the Collector pipeline, so this should either:
> - only include errors internal to the component, or;
> - allow distinguishing said errors from ones originating in an external service, or propagated from downstream Collector components.

Because errors are typically propagated across `ConsumeX` calls in a pipeline (except for components with an internal queue like `processor/batch`), the error observability mechanism proposed by the RFC implies that Pipeline Telemetry will record failures for every component interface upstream of the component that actually emitted the error. This does not match the goals set out in the observability requirements, and makes it much harder to tell from the emitted telemetry which component the errors are coming from.

### Description

This PR amends the Pipeline Component Telemetry RFC with the following:

- restrict the `outcome=failure` value to cases where the error comes from the very next component (the component on which `ConsumeX` was called);
- add a third possible value for the `outcome` attribute: `rejected`, for cases where an error observed at an interface comes from further downstream (the component did not "fail", but its output was "rejected");
- propose a mechanism to determine which of the two values should be used.

The current proposal for the mechanism is for the pipeline instrumentation layer to wrap errors in an unexported `downstream` struct, which upstream layers could check for with `errors.As` to determine whether the error has already been "attributed" to a component. This is the same mechanism currently used for tracking permanent vs. retryable errors. Please check the diff for details.

### Possible alternatives

There are a few alternatives to this amendment, which were discussed as part of the observability requirements PR:

- loosen the observability requirements for stable components to not require distinguishing internal errors from downstream ones → makes it harder to identify the source of an error;
- modify the way we use the `Consumer` API to no longer propagate errors upstream → prevents proper propagation of backpressure through the pipeline (although this is likely already a problem with the `batch` processor);
- let component authors make their own custom telemetry to solve the problem → higher barrier to entry, especially for people wanting to open-source existing components.

---------

Co-authored-by: Pablo Baeyens <pablo.baeyens@datadoghq.com>
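For readers skimming the conversation, here is a rough sketch of the proposed mechanism as described above; the type and function names are illustrative, not the actual implementation:

```go
// Hypothetical sketch of the amendment's mechanism: the pipeline instrumentation
// layer wraps errors it has already attributed to a component, so upstream layers
// can tell "failure" (error from the very next component) apart from "rejected"
// (error propagated from further downstream).
package sketch

import "errors"

// downstream marks an error that has already been attributed to a component
// further down the pipeline.
type downstream struct{ err error }

func (d downstream) Error() string { return d.err.Error() }
func (d downstream) Unwrap() error { return d.err }

// outcomeFor picks the outcome value for a produced-items measurement.
func outcomeFor(err error) string {
	if err == nil {
		return "success"
	}
	var d downstream
	if errors.As(err, &d) {
		// The error was already attributed downstream: the next component did
		// not fail itself, it only rejected data it could not deliver.
		return "rejected"
	}
	return "failure"
}

// markDownstream is what the instrumentation layer would call before
// propagating an error upstream, so the next layer up records "rejected".
func markDownstream(err error) error {
	if err == nil {
		return nil
	}
	return downstream{err: err}
}
```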
Description
This PR defines observability requirements for components at the "Stable" stability level. The goal is to ensure that Collector pipelines are properly observable, to help in debugging configuration issues.
Approach
Feel free to share if you don't think this is the right approach for these requirements.
Link to tracking issue
Resolves #11581
Important note regarding the Pipeline Instrumentation RFC
I included this paragraph in the part about error count metrics:
> The Pipeline Instrumentation RFC (hereafter abbreviated "PI"), once implemented, should allow monitoring component errors via the `outcome` attribute, which is either `success` or `failure`, depending on whether the `Consumer` API call returned an error. Note that this does not work for receivers, or allow differentiating between different types of errors; for that reason, I believe additional component-specific error metrics will often still be required, but it would be nice to cover as many cases as possible automatically.
However, at the moment, errors are (usually) propagated upstream through the chain of `Consume` calls, so in case of error the `failure` state will end up applied to all components upstream of the actual source of the error. This means the PI metrics do not fit the first bullet point.

Moreover, I would argue that even post-processing the PI metrics does not reliably allow distinguishing the ultimate source of errors (the second bullet point). One simple idea is to compute `consumed.items{outcome:failure} - produced.items{outcome:failure}` to get the number of errors originating in a component. But this only works if output items map one-to-one to input items: if a processor or connector outputs fewer items than it consumes (because it aggregates them, or translates to a different signal type), this formula will return false positives. If these false positives are mixed with real errors from the component and/or from downstream, the situation becomes impossible to analyze by just looking at the metrics.

For these reasons, I believe we should do one of four things:
1. Modify the way we use the `Consumer` API to no longer propagate errors, making the PI metric outcomes more precise. We could catch errors in whatever wrapper we already use to emit the PI metrics, log them for posterity, and simply not propagate them (see the sketch after this list). Note that some components already more or less do this, such as the `batch` processor, but this option may in principle break components which rely on downstream errors (for retry purposes, for example).
2. Amend the Pipeline Instrumentation RFC so that errors coming directly from the next component can be distinguished from errors propagated from further downstream (eg. add a third `outcome` value, or add another attribute). This could be implemented by somehow propagating additional state from one `Consume` call to another, allowing us to establish the first appearance of a given error value in the pipeline.
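A minimal sketch of what option 1 could look like, assuming the existing instrumentation wrapper simply logs the downstream error instead of returning it (all names here are illustrative):

```go
// Hypothetical sketch of option 1: the instrumentation wrapper records the
// downstream error (metric recording elided) and does not propagate it upstream.
package sketch

import (
	"context"

	"go.opentelemetry.io/collector/consumer"
	"go.opentelemetry.io/collector/pdata/plog"
	"go.uber.org/zap"
)

type swallowingWrapper struct {
	next   consumer.Logs
	logger *zap.Logger
}

func (w *swallowingWrapper) ConsumeLogs(ctx context.Context, ld plog.Logs) error {
	if err := w.next.ConsumeLogs(ctx, ld); err != nil {
		// Log the error for posterity, but do not return it, so upstream
		// components only ever see their own errors in the PI metrics.
		w.logger.Warn("downstream consumer returned an error", zap.Error(err))
	}
	return nil
}
```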