feat: use Uint8Array for ids #786
Conversation
Use Uint8Array for traceId/spanId to avoid the conversion from buffer to hex string during creation and from hex string back to buffer during export. The OpenTelemetry protocol also uses a byte array in protobuf. There are still conversions needed for propagation via HTTP headers, but overall the number of conversions should be reduced.
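For illustration, the hex round trip the PR avoids can be sketched like this (the helper names are hypothetical, not the actual opentelemetry-js API):

```typescript
// Hypothetical helpers showing the two conversions the PR removes.
// Today: random id bytes become a hex string at span creation...
function bytesToHex(bytes: Uint8Array): string {
  let hex = "";
  for (const b of bytes) {
    hex += b.toString(16).padStart(2, "0");
  }
  return hex;
}

// ...and are converted back to bytes at export time for the protobuf payload.
function hexToBytes(hex: string): Uint8Array {
  const bytes = new Uint8Array(hex.length / 2);
  for (let i = 0; i < bytes.length; i++) {
    bytes[i] = parseInt(hex.substring(i * 2, i * 2 + 2), 16);
  }
  return bytes;
}
```

Keeping the id as a `Uint8Array` end to end would skip both steps except where a textual form (HTTP headers, logs) is actually needed.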
Can we get a quick benchmark?
Are there instructions somewhere on how to run the OTel benchmarks?
You can just do
Codecov Report

@@            Coverage Diff             @@
##           master     #786      +/-   ##
==========================================
+ Coverage   90.67%   92.64%   +1.96%
==========================================
  Files         236      243       +7
  Lines       11045    10816     -229
  Branches     1024     1064      +40
==========================================
+ Hits        10015    10020       +5
+ Misses       1030      796     -234
Result of benchmarks: IDs as Uint8Array
IDs as hex strings
Don't know if this is enough for a conclusion. As expected, span creation and binary propagation are faster, but text propagation is slower. An end-to-end setup using text tagging and a few internal spans with e.g. the Jaeger exporter would be better.
Thanks @Flarna
Summary of benchmark:
The speed improvement in span creation is fairly minor and seems fine, but the really interesting results are the propagators. The B3 inject shows a seemingly impossible speedup, but everything else suffered quite a bit, with the trace context injection showing the biggest drop.
All the conversions around Buffer/Uint8Array here https://github.com/open-telemetry/opentelemetry-js/pull/786/files#diff-c818867edf5b3960f94f5c579ff06043 are unfortunate, as in most cases a Buffer can be used as a Uint8Array anyway. I wonder if we would be better off adding platform-specific wrapper classes with the methods we use (toHex, fromHex, equality), so that the platform-specific code could use whatever platform-specific object (Node: Buffer, web: Uint8Array) makes the most sense for it.
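A minimal sketch of such a wrapper, assuming the class and method names above; this is an illustration of the idea, not an agreed design. A Node implementation could back it with a Buffer, a web one with a plain Uint8Array:

```typescript
// Hypothetical platform-agnostic id wrapper; a Node variant could
// implement toHex via Buffer.prototype.toString("hex") instead.
class TraceId {
  constructor(private readonly bytes: Uint8Array) {}

  toHex(): string {
    return Array.from(this.bytes, (b) => b.toString(16).padStart(2, "0")).join("");
  }

  static fromHex(hex: string): TraceId {
    const bytes = new Uint8Array(hex.length / 2);
    for (let i = 0; i < bytes.length; i++) {
      bytes[i] = parseInt(hex.substring(i * 2, i * 2 + 2), 16);
    }
    return new TraceId(bytes);
  }

  equals(other: TraceId): boolean {
    if (this.bytes.length !== other.bytes.length) return false;
    return this.bytes.every((b, i) => b === other.bytes[i]);
  }
}
```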
I updated the benchmark results above. I noticed that I forgot to adapt the benchmarks, so the measurements for propagators were garbage.
I think we should also be careful in interpreting the benchmarks, as they show ops/sec but the real use case doesn't run these ops independently; e.g. you will always have at least one span created for an inject and one more for an extract. Example, summing this up (2 spans + inject + extract): 23.14us => 22.23us. But I'm still not sure the added complexity is worth it.
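The summing approach can be made explicit: convert each benchmark's ops/sec into microseconds per operation and add them up along a typical request path. The figures in the usage note are placeholders, not the PR's benchmark data:

```typescript
// Sum per-op cost (in microseconds) over a typical request path:
// two span creations + one inject + one extract.
interface OpsPerSec {
  span: number;
  inject: number;
  extract: number;
}

function pathMicros(ops: OpsPerSec): number {
  const usPerOp = (opsPerSec: number) => 1e6 / opsPerSec;
  return 2 * usPerOp(ops.span) + usPerOp(ops.inject) + usPerOp(ops.extract);
}
```

For example, with placeholder rates of 1,000,000 ops/sec for span creation and 500,000 ops/sec for each propagator op, `pathMicros` yields 6 µs for the whole path.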
Regarding your test, I completely agree with comparing the total sum of CPU usage, taking into consideration the creation of a span and then exporting it, in the most typical scenario. It will be hard to understand which of these operations is "more valuable" / "more used" and which will have the bigger impact on total CPU usage.
Are those tests run only for Node, or also for different browsers?
I agree using total timings makes sense.
Overall this seems like a micro-optimization that may not be worth the effort of maintaining a more complex implementation. It may also end up being slower in cases where ids are logged many times. It is easy to envision a use-case where span context is logged on every log message. In this case you must pay the encoding cost each time you log (ignoring the possibility of saving and re-using the encoded version, which is yet more complexity). |
What if we add some caching when converting, so each id is converted only once, no matter how many times you convert it?
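A sketch of what such a cache could look like, using a WeakMap keyed by the id object so cached strings are garbage-collected together with their ids (this is an illustration of the suggestion, not proposed code):

```typescript
// Cache the hex form per id object; converting the same Uint8Array
// twice pays the encoding cost only once.
const hexCache = new WeakMap<Uint8Array, string>();

function toHexCached(id: Uint8Array): string {
  let hex = hexCache.get(id);
  if (hex === undefined) {
    hex = Array.from(id, (b) => b.toString(16).padStart(2, "0")).join("");
    hexCache.set(id, hex);
  }
  return hex;
}
```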
I think conversion caching would add additional complexity that we may not want to maintain for such a small performance gain. Maybe @Flarna can provide some stats as to the mean and median numbers of outgoing span operations for an incoming request? |
I used only Node.js (I think one of the latest 12.x versions, on Windows). No idea how to run OTel and its benchmarks in a browser...
Well, if people start logging span context that often, the conversion cost is most likely the wrong place to start improving...
I thought about this also. But it would require encapsulating this in some class with getters/setters, which adds overhead again. And it's no longer that easy to just create a SpanContext from literals. See also #698 (comment).
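To make the trade-off concrete, compare a plain literal with the getter-based shape a cached byte id would need. The class below is a hypothetical sketch, not the SDK's SpanContext:

```typescript
// Today a SpanContext is a plain object literal with hex string ids.
interface SpanContextLiteral {
  traceId: string;
  traceFlags: number;
}
const literalCtx: SpanContextLiteral = {
  traceId: "0af7651916cd43dd8448eb211c80319c",
  traceFlags: 1,
};

// With byte ids plus lazy caching it becomes a class with getter
// overhead, and can no longer be created from a simple literal.
class CachedSpanContext {
  private hex?: string;
  constructor(readonly traceIdBytes: Uint8Array, readonly traceFlags: number) {}
  get traceId(): string {
    if (this.hex === undefined) {
      this.hex = Array.from(this.traceIdBytes, (b) =>
        b.toString(16).padStart(2, "0")
      ).join("");
    }
    return this.hex;
  }
}
```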
Sorry, no numbers regarding this. Incoming and outgoing spans usually have several attributes on them (e.g. HTTP semantic conventions, ...), therefore the relative gain for them will be small. In general I was expecting more gain from this change. I have to admit that using
I had a chat with @dyladan today and we found that a significant part of the improvements seen here (at least for Node.js) are just because
Seems that at least V8 is significantly better at working with
Closing in favor of #824
Which problem is this PR solving?
closes #698
Short description of the changes
Use Uint8Array for traceId/spanId to avoid the conversion from buffer to
hex string during creation and from hex string back to buffer during export.
The OpenTelemetry protocol also uses a byte array in protobuf.
There are still conversions needed for propagation via HTTP headers, but
overall the number of conversions should be reduced.