feat(cmd): initial integration of OpenTelemetry with OTLP exporter #907
Conversation
Force-pushed from 10a4ab9 to 4813d46
Codecov Report
@@            Coverage Diff             @@
##             main     #907      +/-   ##
==========================================
- Coverage   58.59%   58.35%   -0.24%
==========================================
  Files         130      132       +2
  Lines        7798     7937     +139
==========================================
+ Hits         4569     4632      +63
- Misses       2755     2831      +76
  Partials      474      474
Cool! Is it possible for me to see this in action? Is there an OTEL collector that has metrics / traces emitted to it?
@rootulp, you can try running the collector locally with the default endpoint/port while passing the new flags described in the PR description to a Light or Full node. You should then see the height of the node being reported, etc. To see traces, you would need to run a Full node.
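As a rough illustration (not the PR's actual wiring), pointing an OTLP trace exporter at a locally running collector looks roughly like this in otel-go. The `localhost:4318` endpoint is the collector's default OTLP/HTTP port, and the tracer/span names are made up; in the node itself the endpoint would come from the new `--tracing.endpoint`/`--metrics.endpoint` flags.

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	ctx := context.Background()

	// OTLP/HTTP exporter aimed at a collector running on this machine.
	// 4318 is the collector's default OTLP/HTTP port (assumption).
	exp, err := otlptracehttp.New(ctx,
		otlptracehttp.WithEndpoint("localhost:4318"),
		otlptracehttp.WithInsecure(),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Batch spans and ship them to the collector.
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	defer func() { _ = tp.Shutdown(ctx) }()
	otel.SetTracerProvider(tp)

	// Any span created from here on ends up in the collector.
	_, span := otel.Tracer("celestia-node/demo").Start(ctx, "demo-span")
	span.End()
}
```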
Looks good! It would be nice to get a demo for this some time soon. One question: tracing happens regardless of whether tracing is enabled, correct? `--tracing` being enabled just adds the default (or custom) exporter endpoint. So if `--tracing` isn't passed but tracing still occurs, does it impact performance?
That's a good question, and I'm glad you asked it. I had the same concern and even looked deeper into the OTel implementation to find out. My conclusion is that it won't noticeably affect performance. The default tracing exporter does nothing besides keeping information about spans to be delegated if tracing is turned on at runtime (see #937). The same is true for metrics: the default exporter does nothing besides the logic to delegate to some custom exporter that can be enabled at runtime. I could potentially make …
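A small illustration of the point above (not code from this PR): the global default provider in otel-go is a delegating provider that behaves as a no-op until a real SDK provider is installed, e.g. because `--tracing` was not passed, so spans created on instrumented paths record nothing and are never exported.

```go
package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel"
)

func main() {
	ctx := context.Background()

	// No SDK TracerProvider has been registered, so this tracer comes from
	// the global default provider, which only delegates once a real
	// provider is set. Until then span creation is effectively a no-op.
	_, span := otel.Tracer("celestia-node/demo").Start(ctx, "some-operation")
	defer span.End()

	fmt.Println("recording:", span.IsRecording())           // false
	fmt.Println("sampled:", span.SpanContext().IsSampled()) // false
}
```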
[Screenshot of a Grafana dashboard featuring the head metric (implemented in this PR).] This is for a light node I'm running on AWS, which is emitting metrics to an OTEL Collector deployed on the same instance. The OTEL Collector exports metrics to Prometheus, hosted by Grafana Cloud. Setup instructions are in #922. Tagging @renaynay because I'm happy to hop on a call and demo this or set it up for you.
@rootulp would love to do a demo on how to set it up, thanks!
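For reference, a hypothetical sketch of how a head-height observable gauge could be registered with the otel-go metric API. The instrument name, scope name, and callback are illustrative only, not the PR's actual identifiers.

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/metric"
)

// registerHeadGauge registers an observable gauge that reports the node's
// current head height every time the configured reader collects metrics.
func registerHeadGauge(getHeight func() int64) error {
	meter := otel.Meter("celestia-node/demo") // illustrative scope name
	_, err := meter.Int64ObservableGauge(
		"node.head.height", // illustrative metric name
		metric.WithDescription("height of the latest header seen by the node"),
		metric.WithInt64Callback(func(_ context.Context, o metric.Int64Observer) error {
			o.Observe(getHeight())
			return nil
		}),
	)
	return err
}

func main() {
	// With no SDK MeterProvider installed this is a no-op; wiring an OTLP
	// metric exporter (what --metrics/--metrics.endpoint enable) makes the
	// gauge show up in the collector and, from there, in Prometheus/Grafana.
	_ = registerHeadGauge(func() int64 { return 42 })
}
```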
These errors can only happen if the flags are set incorrectly (wrong name, not added to the command, etc.), which is a programmer error, so they should be reported as panics accordingly.
👍🏻
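Below is a minimal sketch of the panic-on-programmer-error pattern discussed in the review comment above, using cobra's flag accessors. The flag name and default value are only illustrative.

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

// metricsEndpoint reads a string flag and panics if the lookup fails.
// GetString only errors when the flag was never registered on the command
// or was registered with a different type/name, i.e. a programming mistake
// rather than a user error.
func metricsEndpoint(cmd *cobra.Command) string {
	endpoint, err := cmd.Flags().GetString("metrics.endpoint")
	if err != nil {
		panic(err)
	}
	return endpoint
}

func main() {
	cmd := &cobra.Command{Use: "demo"}
	cmd.Flags().String("metrics.endpoint", "localhost:4318", "OTLP endpoint (illustrative default)")
	fmt.Println(metricsEndpoint(cmd))
}
```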
This PR introduces:
- Jaeger → OTLP exporter
- `--tracing` and `--tracing.endpoint`
- `--metrics` and `--metrics.endpoint`
P.S. The OTLP integration could be separated from the introduced metrics and tracing coverage, but that coverage is the bare minimum needed to manually verify that the integration works as expected by looking at actual data reported by local nodes.
TODO:
As always, I recommend reviewing the PR commit by commit (CBC).
Substitutes #810
Closes #934