Unable to retrieve batched metric data due to i/o timeout #58
The exporter crashes from time to time in our setup with the panic "Unable to retrieve batched metric data" (main.go:61):

{"level":"panic","ts":1734508642.2779338,"caller":"internal/logging.go:62","msg":"Unable to retrieve batched metric data","error":"Post \"https://ces.eu-de.otc.t-systems.com/V1.0/<tenant>/batch-query-metric-data\": dial tcp <ip>:443: i/o timeout","stacktrace":"github.com/iits-consulting/otc-prometheus-exporter/internal.(*Logger).Panic\n\t/go/src/github.com/iits-consulting/otc-prometheus-exporter/internal/logging.go:62\nmain.collectMetricsInBackground.func1\n\t/go/src/github.com/iits-consulting/otc-prometheus-exporter/main.go:61"}

Is it really necessary to kill the app with a panic?

My solution:
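The patch itself is not reproduced in this thread. As a rough illustration of the approach discussed (log the error and retry instead of panicking, with an explicit HTTP timeout so a hung request fails fast), here is a minimal sketch. All names, the endpoint URL, and the intervals below are assumptions for this example, not the actual otc-prometheus-exporter code or the Ninja243 patch:

```go
// Illustrative sketch only: function and variable names are assumptions
// for this example, not the actual exporter code or the patch in question.
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// client carries an explicit timeout so a hung POST to the Cloud Eye
// batch-query-metric-data endpoint errors out instead of blocking forever.
var client = &http.Client{Timeout: 30 * time.Second}

// fetchBatchedMetrics stands in for the real batch metric query.
func fetchBatchedMetrics(endpoint string) error {
	resp, err := client.Post(endpoint, "application/json", nil)
	if err != nil {
		return fmt.Errorf("batch metric query failed: %w", err)
	}
	defer resp.Body.Close()
	// ... decode and store the metric data ...
	return nil
}

// collectMetricsInBackground logs transient errors and retries on the
// next tick instead of panicking, so an occasional i/o timeout cannot
// kill the exporter.
func collectMetricsInBackground(endpoint string, interval time.Duration) {
	go func() {
		for {
			if err := fetchBatchedMetrics(endpoint); err != nil {
				log.Printf("unable to retrieve batched metric data: %v", err)
			}
			time.Sleep(interval)
		}
	}()
}

func main() {
	// Hypothetical endpoint; the real exporter builds this from its config.
	collectMetricsInBackground("https://ces.example.com/V1.0/tenant/batch-query-metric-data", 60*time.Second)
	select {} // block; the real exporter would serve /metrics here
}
```

With a change along these lines, a transient dial timeout against the Cloud Eye endpoint produces a logged error and a retry on the next tick rather than a crash.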
Comments

Hey, thank you for the feedback, and I like your suggestion! We will take a closer look at this issue and fix it as soon as possible. Because of unavailability during the Christmas/end-of-year season, we will provide a fix in January.
As suggested by @ckroehnert in #58 (comment): "The patch from Ninja243 looks good to me."
@ckroehnert Thanks for taking a look at it! I just need to test it and then it should be good to go!
Should be solved with the latest release.
Feedback: yes, it solved the issue for us, thanks. Maybe it makes sense to create a new Helm release with the latest two app updates/fixes.
Thanks for the heads-up! I've pushed a new release and Helm chart.