
Replace ArrayBlockingQueue with jctools queue. #3034

Merged: 9 commits into open-telemetry:main, Mar 31, 2021

Conversation

@anuraaga (Contributor) commented Mar 17, 2021

The minimized trace-shaded-deps jar is ~30K, which is pretty small and probably worth it since it's only in the SDK.

Throughput benchmarks are at https://gist.github.com/anuraaga/1de9e2526a159b4e932d011c2dfb58e2. I didn't check CPU overhead since I think that requires a profiler with our current setup, but there really shouldn't be any real change.

ArrayBlockingQueue caps out at around 3M ops/sec on my MacBook Pro 16, while the JCTools queue gets up to 8.5M. I also tried a compound queue (sharding the queue into one chunk per CPU) but didn't find a significant improvement, so I'm using the simpler one.
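(As a rough illustration of the access pattern behind those numbers, not the gist's actual JMH benchmark: several producer threads offer into a bounded queue while one consumer drains it, which is the multi-producer/single-consumer shape the BatchSpanProcessor worker uses. All names below are made up for the sketch.)

    import java.util.Queue;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.atomic.AtomicBoolean;
    import org.jctools.queues.MpscArrayQueue;

    public class QueueThroughputSketch {
      static final int PRODUCERS = 4;

      public static void main(String[] args) throws InterruptedException {
        run("ArrayBlockingQueue", new ArrayBlockingQueue<>(2048));
        run("MpscArrayQueue", new MpscArrayQueue<>(2048));
      }

      static void run(String name, Queue<Long> queue) throws InterruptedException {
        AtomicBoolean done = new AtomicBoolean();
        long[] accepted = new long[PRODUCERS];
        Thread[] producers = new Thread[PRODUCERS];
        for (int i = 0; i < PRODUCERS; i++) {
          int id = i;
          producers[i] = new Thread(() -> {
            while (!done.get()) {
              // A failed offer models a dropped span; only accepted offers count.
              if (queue.offer(1L)) {
                accepted[id]++;
              }
            }
          });
          producers[i].start();
        }
        // Exactly one thread ever polls, matching the MPSC contract.
        Thread consumer = new Thread(() -> {
          while (!done.get()) {
            queue.poll();
          }
        });
        consumer.start();
        Thread.sleep(2000);
        done.set(true);
        for (Thread t : producers) {
          t.join();
        }
        consumer.join();
        long total = 0;
        for (long a : accepted) {
          total += a;
        }
        // total ops over 2 seconds, reported in millions of ops/sec.
        System.out.printf("%s: %.1fM offers/sec%n", name, total / 2e6);
      }
    }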

@jkwatson (Contributor):

I'll take a look and put it on the profiler tomorrow.

@@ -42,7 +42,7 @@
   private long exportedSpans;
   private long droppedSpans;

-  @Setup(Level.Iteration)
+  @Setup(Level.Trial)
Contributor:

I think this will mess up the exporter metrics.

Contributor Author (@anuraaga):

Since collecting the metrics clears the current values, I think it might be OK. Either way, either this needs to be Trial or tearDown needs to be Iteration; generally we'd only want to start one BSP per trial, though, I think.

Contributor:

Could you please run this benchmark and verify that the exportedSpans metric is valid, i.e., that current values do get cleared after collecting them at the end of each iteration?

Contributor Author (@anuraaga):

It does keep going up now. I don't think we should need to restart the BSP for it, though. @jkwatson Do you know a nice way to aggregate the metrics into a rate for reporting in the JMH benchmark?

Contributor:

I don't understand the question. Is something not working about the way things were before this change? Also, should we be doing a forceFlush() at the iteration teardown?

And, honestly, I'm not sure I understand the purpose of this benchmark in the first place. I ran this on the main branch, and for the 20-thread case, we drop almost all of the spans. Is the goal to see if we can make the BSP drop fewer spans if we can improve things?

Contributor:

I think this change still needs to be reverted @anuraaga

Contributor Author (@anuraaga):

OK, I went ahead and changed the shutdown to be per-iteration too then; one of the changes is needed to make sure threads are closed, or JMH complains (at least for me). I don't think we actually wanted to initialize a whole BSP (worker thread, etc.) per iteration, though.

Contributor:

What complaints did you get from jmh?

Contributor Author (@anuraaga):

Something about threads not all being shut down, and waiting XXX seconds for them to shut down. Which is definitely true, since currently we don't call shutdown on every created BSP.
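(For what it's worth, a minimal sketch of the JMH lifecycle under discussion; the class and field names are illustrative, not the benchmark's real ones, and it assumes the opentelemetry-sdk-testing InMemorySpanExporter. Whichever level is chosen, pairing every @Setup that starts a BSP with a @TearDown that shuts it down is what silences the JMH warning about lingering threads.)

    import io.opentelemetry.sdk.testing.exporter.InMemorySpanExporter;
    import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;
    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.Level;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.Setup;
    import org.openjdk.jmh.annotations.State;
    import org.openjdk.jmh.annotations.TearDown;

    @State(Scope.Benchmark)
    public class LifecycleSketch {
      BatchSpanProcessor processor;

      // Level.Trial runs once per fork, so a single BSP (and its single
      // worker thread) serves every iteration of the trial.
      @Setup(Level.Trial)
      public void setUp() {
        processor = BatchSpanProcessor.builder(InMemorySpanExporter.create()).build();
      }

      // Shutting down each processor we start is what stops JMH from
      // warning that non-daemon threads are still alive after the run.
      @TearDown(Level.Trial)
      public void tearDown() {
        processor.shutdown().join(10, TimeUnit.SECONDS);
      }
    }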

Contributor:

Ah yes. I had been wondering about that. Good find!

   * implementation so callers do not need to use the shaded classes.
   */
  public static long capacity(Queue<?> queue) {
    return ((MessagePassingQueue<?>) queue).capacity();
  }
Contributor:

You can use MpscArrayQueue here, right? It does have capacity().

Contributor Author (@anuraaga):

We're shading it and using this in a test in a different artifact where we don't want to have to reference the shaded class. I could cast to MpscArrayQueue here too, but may as well stick with the interface.
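(A sketch of the wrapper shape being discussed; the real helper is io.opentelemetry.sdk.trace.internal.JcTools and casts to the shaded copies of these types, while this version uses the unshaded JCTools classes for readability.)

    import java.util.Queue;
    import org.jctools.queues.MessagePassingQueue;
    import org.jctools.queues.MpscArrayQueue;

    public final class JcToolsSketch {
      // Callers only ever see java.util.Queue, so the (shaded) JCTools
      // types never leak into other artifacts' signatures.
      public static <T> Queue<T> newMpscArrayQueue(int capacity) {
        return new MpscArrayQueue<>(capacity);
      }

      public static long capacity(Queue<?> queue) {
        // Cast to the JCTools interface rather than MpscArrayQueue so the
        // helper works for any JCTools queue implementation.
        return ((MessagePassingQueue<?>) queue).capacity();
      }

      private JcToolsSketch() {}
    }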

@@ -73,7 +75,7 @@ public static BatchSpanProcessorBuilder builder(SpanExporter spanExporter) {
         scheduleDelayNanos,
         maxExportBatchSize,
         exporterTimeoutNanos,
-        new ArrayBlockingQueue<>(maxQueueSize));
+        JcTools.newMpscArrayQueue(maxQueueSize));
Contributor:

Note that MpscArrayQueue rounds the queue size up to a power of 2 for various perf reasons. In my opinion it is better to enforce this so users know the actual amount of memory that is getting allocated.

Contributor Author (@anuraaga):

Do you mean falling back to ArrayBlockingQueue if the size isn't a power of 2? I don't think we can require a power of 2 for the BSP setting instead, since that would be too tricky to use.

Contributor:

I meant enforcing the maxQueueSize to be a power of 2.

Contributor Author (@anuraaga):

That we can't do; we don't want to lose usability here (adding restrictions that can only be conveyed through documentation or error messages). I'd like to hear more thoughts on whether we should fall back if it's not a power of 2.

Contributor:

Falling back is not great, really, since it is not an efficient solution. How about calling out in the documentation that the queue size is rounded up to the next power of 2?

Contributor Author (@anuraaga):

I added a note to the builder that some more memory may be allocated, without going too much into implementation detail.
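(To make the rounding concrete, a tiny sketch of JCTools' behavior as I understand it; note the default maxQueueSize of 2048 is already a power of two, so default allocations are unaffected.)

    import org.jctools.queues.MpscArrayQueue;

    public class CapacityRoundingSketch {
      public static void main(String[] args) {
        // A power-of-two request is allocated exactly.
        System.out.println(new MpscArrayQueue<>(2048).capacity()); // 2048
        // A non-power-of-two request is rounded up, allocating extra slots.
        System.out.println(new MpscArrayQueue<>(3000).capacity()); // 4096
      }
    }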

@@ -97,8 +98,7 @@ void configureSpanProcessor_empty() {
     assertThat(worker)
         .extracting("queue")
         .isInstanceOfSatisfying(
-            ArrayBlockingQueue.class,
-            queue -> assertThat(queue.remainingCapacity()).isEqualTo(2048));
+            Queue.class, queue -> assertThat(JcTools.capacity(queue)).isEqualTo(2048));
Contributor:

The existing logic is verifying remainingCapacity(). Do you want to do the same?

    assertThat(JcTools.capacity(queue) - JcTools.size(queue)).isEqualTo(2048)

Contributor Author (@anuraaga):

Nah, it only used that method since the JDK only provides that one, but capacity is what we're checking.

@@ -17,8 +17,10 @@
 import io.opentelemetry.sdk.trace.ReadableSpan;
 import io.opentelemetry.sdk.trace.SpanProcessor;
 import io.opentelemetry.sdk.trace.data.SpanData;
+import io.opentelemetry.sdk.trace.internal.JcTools;
@anuraaga (Contributor Author) left a comment:

@jkwatson Can you take another look at this? Thanks!


@jkwatson (Contributor) left a comment:

Let's try it.

@sbandadd (Contributor) left a comment:

Looks good!

@anuraaga anuraaga merged commit 2f2af19 into open-telemetry:main Mar 31, 2021