groupbyattrsprocessor drops metric metadata #33419
Comments
Pinging code owners. See Adding Labels via Comments if you do not have permissions to add labels yourself.
There is a CopyTo function for metrics, but I'm not sure if that is what is needed here.
For reference, this is the spot that does not copy metadata: `processor/groupbyattrsprocessor/processor.go`, lines 204 to 208 at commit dd2e45a.
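For illustration, here is a hypothetical, self-contained sketch of that pattern (the function and metric names are made up, not the processor's actual code): rebuilding the metric descriptor field by field means any field the code doesn't know about is silently dropped.

```go
package main

import (
	"fmt"

	"go.opentelemetry.io/collector/pdata/pmetric"
)

// copyMetricEnvelope is a made-up stand-in for the processor's pattern:
// it rebuilds the metric descriptor field by field instead of using CopyTo,
// so any field it doesn't know about (here, Metadata) is silently dropped.
func copyMetricEnvelope(src, dest pmetric.Metric) {
	dest.SetName(src.Name())
	dest.SetDescription(src.Description())
	dest.SetUnit(src.Unit())
	// Missing: src.Metadata().CopyTo(dest.Metadata())
}

func main() {
	src := pmetric.NewMetric()
	src.SetName("http_requests_total")
	src.Metadata().PutStr("prometheus.type", "unknown")

	dest := pmetric.NewMetric()
	copyMetricEnvelope(src, dest)
	fmt.Println(dest.Metadata().Len()) // prints 0: the metadata was dropped
}
```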
@braydonk looks like this could be done using https://pkg.go.dev/go.opentelemetry.io/collector/[email protected]/pmetric#Metric.CopyTo as suggested previously. I think the ask here makes sense. Please go ahead if you'd like to make the change. I can review the changes when ready.
If nobody is working on this issue, I would like to take it. cc @evan-bradley
After some investigation: using `CopyTo` copies the whole metric, including all of its data points, not just the descriptor fields the processor wants to carry over. Therefore I would suggest adding the missing `Metadata` copy instead. I am open to suggestions.
Thanks for taking on this issue! Might be naive, but perhaps the datapoints could just be deleted from the copy of the metric? If that won't work then it's fine to just do the metadata copy. Just would be nice if the full metric `CopyTo` could be used.
This should work, but when deleting data points we still need to first find out what type we are dealing with (sum/gauge/histogram...) and only then delete the appropriate data points (which are themselves different types: numeric value/histogram value/...). Therefore I do not see any improvement in the logic here: the type-determining logic still needs to stay in place, and instead of creating empty metrics we would copy them and then delete parts of them; see the sketch below.
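As a sketch of that point (a hypothetical helper, not code from the processor), deleting all data points from a full copy would still require a type switch like this, since each metric type stores its points in a differently-typed slice:

```go
package main

import "go.opentelemetry.io/collector/pdata/pmetric"

// removeAllDataPoints is a hypothetical helper, not code from the processor.
// Each metric type stores its points in a differently-typed slice, so the
// type switch cannot be avoided even when starting from a full CopyTo.
func removeAllDataPoints(m pmetric.Metric) {
	switch m.Type() {
	case pmetric.MetricTypeGauge:
		m.Gauge().DataPoints().RemoveIf(func(pmetric.NumberDataPoint) bool { return true })
	case pmetric.MetricTypeSum:
		m.Sum().DataPoints().RemoveIf(func(pmetric.NumberDataPoint) bool { return true })
	case pmetric.MetricTypeHistogram:
		m.Histogram().DataPoints().RemoveIf(func(pmetric.HistogramDataPoint) bool { return true })
	case pmetric.MetricTypeExponentialHistogram:
		m.ExponentialHistogram().DataPoints().RemoveIf(func(pmetric.ExponentialHistogramDataPoint) bool { return true })
	case pmetric.MetricTypeSummary:
		m.Summary().DataPoints().RemoveIf(func(pmetric.SummaryDataPoint) bool { return true })
	}
}
```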
Ah okay, I understand the issue now. Probably fine to just use the metadata copy then; that metric proto probably changes rarely enough that this kind of thing won't happen again. Thanks!
…g metrics (#33781)
**Description:** Fixes the metadata dropping when processing metrics
**Link to tracking Issue:** #33419
**Testing:** unit tests
Signed-off-by: odubajDT <[email protected]>
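Paraphrasing the merged change (a sketch of the shape of the fix, not the exact diff from #33781): the descriptor copy now carries the metadata map along with name, description, and unit.

```go
package main

import "go.opentelemetry.io/collector/pdata/pmetric"

// copyMetricEnvelopeFixed mirrors the hypothetical sketch above, with the
// one-line metadata copy that the fix adds (paraphrased, not the exact diff).
func copyMetricEnvelopeFixed(src, dest pmetric.Metric) {
	dest.SetName(src.Name())
	dest.SetDescription(src.Description())
	dest.SetUnit(src.Unit())
	src.Metadata().CopyTo(dest.Metadata())
}
```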
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments if you do not have permissions to add labels yourself.
Fixed by #33781
Component(s)
processor/groupbyattrs
What happened?
Description
When groupbyattrsprocessor makes a new metric, it does not copy the metric metadata. I assume this is because `Metadata` is a relatively new field and isn't fully respected everywhere yet.
Steps to Reproduce
Found this by testing the new untyped metric support in Prometheus. It adds a Metadata key called `prometheus.type`, so that's the easiest way to see the effect. Create a pipeline from prometheusreceiver to groupbyattrsprocessor to debugexporter and have it scrape some manner of metrics; a minimal example config is sketched below.
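A minimal collector config along those lines might look like this (the scrape job, target, and verbosity are placeholder assumptions, not taken from the issue):

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: repro                    # hypothetical scrape job
          scrape_interval: 10s
          static_configs:
            - targets: ["localhost:8888"]    # placeholder target
processors:
  groupbyattrs:
exporters:
  debug:
    verbosity: detailed
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [groupbyattrs]
      exporters: [debug]
```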
Expected Result
The `prometheus.type` metadata should still be present when the value is seen in `debugexporter`.
Actual Result
It's gone.
Other notes
Collector version
v0.102.0
Environment information
Environment
OS: Debian 12
Compiler (if manually compiled): go 1.22.3
OpenTelemetry Collector configuration
No response
Log output
Additional context
Should there be an actual API in `pdata` for making full metric copies like this? What the `groupbyattrsprocessor` has to do here is pretty brittle for exactly this reason, and I'm not sure if there are other processors doing something similar.