
Add error parameter to EndTimeExtractor and AttributesExtractor#onEnd() #3988

Merged
4 commits merged into open-telemetry:main on Sep 8, 2021

Conversation

mateuszrzeszutek
Member

I've added a @Nullable Throwable error parameter to EndTimeExtractor and AttributesExtractor#onEnd(). Now, along with the SpanStatusExtractor, every method in the Instrumenter API that accepts a RESPONSE also accepts a Throwable. The operation can end with either a RESPONSE or an error being passed to the instrumenter, so I thought it makes sense to always pass both of them to get a complete view of how the operation ended.

This PR resolves #3730. It is now possible to extract custom attributes in an AttributesExtractor and they'll be passed along to the RequestListener, thus making it possible to use those attributes in custom metrics.
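
As a rough sketch of what this enables (not code from this PR: the base type and onStart shape are assumptions based on the instrumentation-api of this era, the onEnd shape follows the diff discussed below, and my.error.type is a made-up attribute key), a custom extractor could derive an attribute from the exception; per the description above, that attribute would then also reach the RequestListener:

// Sketch only: record the exception's class name as an attribute using the
// new error parameter. Whether AttributesExtractor is an abstract class or an
// interface in this version is an assumption; the onEnd parameter order is
// taken from the diff in the conversation below.
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.AttributesBuilder;
import io.opentelemetry.instrumentation.api.instrumenter.AttributesExtractor;

final class ErrorTypeAttributesExtractor<REQUEST, RESPONSE>
    extends AttributesExtractor<REQUEST, RESPONSE> {

  // hypothetical attribute key, for illustration only
  private static final AttributeKey<String> ERROR_TYPE = AttributeKey.stringKey("my.error.type");

  @Override
  protected void onStart(AttributesBuilder attributes, REQUEST request) {}

  @Override
  protected void onEnd(
      AttributesBuilder attributes,
      REQUEST request,
      RESPONSE response, // may be null when the operation failed
      Throwable error) { // @Nullable; null when the operation succeeded
    if (error != null) {
      attributes.put(ERROR_TYPE, error.getClass().getSimpleName());
    }
  }
}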

Comment on lines +167 to +175
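// Snippet from the Instrumenter class (per the review anchor above): the
// extracted cause is recorded on the span, then every registered
// AttributesExtractor runs; the lines marked - and + show the onEnd() call
// before and after this PR.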
  if (error != null) {
    error = errorCauseExtractor.extractCause(error);
    span.recordException(error);
  }

  UnsafeAttributes attributesBuilder = new UnsafeAttributes();
  for (AttributesExtractor<? super REQUEST, ? super RESPONSE> extractor : attributesExtractors) {
-   extractor.onEnd(attributesBuilder, request, response);
+   extractor.onEnd(attributesBuilder, request, response, error);

Member

wondering a little bit about passing original error vs passing the extracted error here.

do you think ErrorCauseExtractor might be used/useful for suppressing "expected" exceptions from being recorded? (and maybe still wanting to add a tag that there was an error, just wanting to suppress the big stack trace from getting recorded?)

or maybe there's a better mechanism for this use case

Member Author

> do you think ErrorCauseExtractor might be used/useful for suppressing "expected" exceptions from being recorded?

I think that'd be a misuse of the API. It was originally meant to strip out those "meaningless" exceptions that wrap the "real" ones, like ExecutionException.

The stack trace issue seems to be a real one though; we have #431 in our backlog. Perhaps we could think about an additional configuration knob on the InstrumenterBuilder for that.
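
To illustrate the intended use (unwrapping wrapper exceptions, not suppressing real ones), here is a minimal sketch of a custom ErrorCauseExtractor; the single extractCause method is inferred from the errorCauseExtractor.extractCause(error) call in the diff above, and the import path is assumed to match the instrumentation-api of this era:

// A sketch, not code from this repository: unwrap CompletionException before
// the exception is recorded on the span; a real implementation might also
// delegate to the default extractor for other wrapper types.
import io.opentelemetry.instrumentation.api.instrumenter.ErrorCauseExtractor;
import java.util.concurrent.CompletionException;

final class CompletionExceptionUnwrapper implements ErrorCauseExtractor {
  @Override
  public Throwable extractCause(Throwable error) {
    if (error instanceof CompletionException && error.getCause() != null) {
      return error.getCause();
    }
    return error;
  }
}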

Member

ya, that makes sense to me 👍

@zmapleshine
Contributor

Hey @mateuszrzeszutek, thank you for resolving the problem in #3730. But I still have a question about the label values of metrics.
From the design of the Instrumentation API module, it is almost certain that the label values of a metric come from attributes, but those are "processed" values, and sometimes a value from the attributes is not suitable as a metric label. Similarly, labels that were put into the attributes for use in metrics do not have to become span tags.
Would it be possible to pass the original object to the hook function as well? Or is there a clever way to combine traces and metrics without coupling them?

@zmapleshine
Contributor

> ...but those are "processed" values, and sometimes a value from the attributes is not suitable as a metric label. Similarly, labels that were put into the attributes for use in metrics do not have to become span tags.

As a practical scenario: when monitoring the Lettuce client, the command and its parameters are usually concatenated together as a span label:

sql.statement: mget key1,key2...

The attributes must then contain a key named "sql.statement", but this value is not suitable as a label for a Redis metric, because its cardinality is too large; "mget" should be used instead.

@trask
Member

trask commented Aug 30, 2021

@mateuszrzeszutek this one has a merge conflict

@mateuszrzeszutek
Member Author

> From the design of the Instrumentation API module, it is almost certain that the label values of a metric come from attributes, but those are "processed" values, and sometimes a value from the attributes is not suitable as a metric label. Similarly, labels that were put into the attributes for use in metrics do not have to become span tags.

Yeah, that's true. In the case of metrics, the View API should solve that problem, but we don't really have anything like that for spans (well, you can remove attributes in the collector, I guess).

> Would it be possible to pass the original object to the hook function as well?

I wanted to avoid passing the response & exception everywhere. I think that the endAttributes passed to the RequestListener are basically a derivative of the response; they should contain all the useful information. Passing a response object to the listener would duplicate some of the AttributesExtractor logic and responsibility. I'd like to get @anuraaga's opinion on that though - whether what I'm describing here makes sense or not.

> As a practical scenario: when monitoring the Lettuce client, the command and its parameters are usually concatenated together as a span label:

> sql.statement: mget key1,key2...

> The attributes must then contain a key named "sql.statement", but this value is not suitable as a label for a Redis metric, because its cardinality is too large; "mget" should be used instead.

That's why almost all (if not all) database client instrumentations emit the db.operation attribute - in your example it would contain just "mget", which would be a perfect choice for a metric label/attribute.
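
As a purely illustrative sketch (plain OpenTelemetry metrics API, not code from this repository; the metric name db.client.calls is made up, and the metrics API was still alpha when this conversation took place), a listener-style class fed with the end attributes could key a counter on db.operation instead of db.statement:

// Illustrative only: count database calls using the low-cardinality
// db.operation value ("mget") rather than the full db.statement.
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.metrics.LongCounter;

final class DbCallCounter {
  private static final AttributeKey<String> DB_OPERATION = AttributeKey.stringKey("db.operation");

  // "db.client.calls" is a hypothetical metric name chosen for this example
  private final LongCounter counter =
      GlobalOpenTelemetry.getMeter("example").counterBuilder("db.client.calls").build();

  void record(Attributes endAttributes) {
    String operation = endAttributes.get(DB_OPERATION); // e.g. "mget"
    counter.add(1, Attributes.of(DB_OPERATION, operation == null ? "unknown" : operation));
  }
}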

@zmapleshine
Contributor

> That's why almost all (if not all) database client instrumentations emit the db.operation attribute - in your example it would contain just "mget", which would be a perfect choice for a metric label/attribute.

@mateuszrzeszutek
So how do you extract the URI from the attributes of the HTTP client instrumentation? :)

The attributes might contain a full URL like "http://10.x.x.x/user/123456?sign=xxxxx&...", but that is not appropriate for metrics, which typically need a tag like "/user/{id}" (perhaps combined with the service name to build a new tag called "router"). It may be a slightly inappropriate example, but my point is that losing the original request object may also reduce the accuracy of the metric tags and even make the tag values harder to obtain.

@anuraaga
Contributor

The goal of instrumentation is to collect data - how the data gets transformed beyond that is signal-specific. Traces generally follow a pattern of recording full data on a sample of traces, so spans just accept the attributes as-is.

Metrics instead don't sample but aggregate - the metric view configuration controls the aggregation. Just like views aggregate points into a sum, they will also be used to pick the correct attributes - attribute selection is a sort of aggregation, after all. The intention is definitely not to record high-cardinality attributes. But the metrics SDK is currently in its infancy, so we are still working on that piece of machinery. Hopefully that makes sense.
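
For what that looks like in practice (a sketch against today's stable SDK View API; the metrics SDK was still experimental at the time of this comment, so the exact names may have differed), attribute selection through a view:

// Sketch: keep only the low-cardinality db.operation attribute on a
// hypothetical db.client.calls counter; every other attribute is dropped
// during aggregation.
import io.opentelemetry.sdk.metrics.InstrumentSelector;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.View;

final class ViewConfigExample {
  static SdkMeterProvider meterProvider() {
    return SdkMeterProvider.builder()
        .registerView(
            InstrumentSelector.builder().setName("db.client.calls").build(),
            View.builder().setAttributeFilter(name -> name.equals("db.operation")).build())
        .build();
  }
}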

Contributor

@anuraaga anuraaga left a comment

Mapping error to semantic attributes makes a lot of sense.

Less sure about end time ;) I guess it's symmetry but don't expect that to ever be useful.

@mateuszrzeszutek
Member Author

> Less sure about end time ;) I guess it's symmetry but don't expect that to ever be useful.

Yeah, I added it there mostly for symmetry - honestly, I have no idea how you could extract a useful end time from an exception.

@trask
Member

trask commented Aug 31, 2021

> Less sure about end time ;) I guess it's symmetry but don't expect that to ever be useful.

> Yeah, I added it there mostly for symmetry - honestly, I have no idea how you could extract a useful end time from an exception.

is the symmetry "everywhere we pass RESPONSE, we also pass error"?

@mateuszrzeszutek
Member Author

> is the symmetry "everywhere we pass RESPONSE, we also pass error"?

Yep, exactly that.

@trask trask merged commit 4820ec4 into open-telemetry:main Sep 8, 2021
@mateuszrzeszutek mateuszrzeszutek deleted the add-exception-param branch December 20, 2021 14:49

Successfully merging this pull request may close these issues.

Can't listen error info by RequestListener in Instrumenter lifecycle end
4 participants