Clarify whether BatchSpanProcessor scheduledDelayMillis is between batches or entire processing of queue #849
Comments
Hey, this question came up. Does anyone happen to have any suggestions on how to handle the schedule delay?
IMHO it has to be the latter, otherwise you would be severely limiting throughput.
@Oberon00 Yeah, I think that makes sense; it actually was a silly question. I think what I meant to ask was this other one: should we eagerly export batches? Currently Java exports as soon as the queue is over the max export batch size. Edit: I guess my main concern is that with eager exporting, the user seems to lose control. If they want fast exports, they can just reduce the schedule delay, but there's no way to have slower exports if we default to eager behavior.
+1 in favor of eager exporting
I think the wording should be something akin to:
The current version is a bit ambiguous (or perhaps even understood as a loop where exports happen after each delay interval).
Stumbled upon this in the JS SDK, where exporting happens exactly on the scheduled delay.
Agree with eager exporting 😄
I am also in support of eager exporting. I see two obvious ways to do it:
@open-telemetry/technical-committee I think this is quite an important issue that likely affects many SDKs. As @morigs pointed out in the related JS issue open-telemetry/opentelemetry-js#3094, the current situation effectively works as a rate limiter, which I believe is not the intended effect. JS would like to implement a fix for this, but we don't want to do something against the spec.
imo the export should happen once the desired batch size is reached or the delay timer ticks, whichever is earlier. The former protects against unnecessarily large batches, the latter protects against sitting on the data that trickles in slowly. |
This. This is also exactly what we do in the Collector if I remember correctly. If the spec does not describe this behavior I would argue it is a spec bug. |
There is also ForceFlush which we might want to cover. |
Currently we define `scheduledDelayMillis`, `maxQueueSize`, and `maxExportBatchSize`: https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/trace/sdk.md#batching-processor

It's not clear to me whether `scheduledDelayMillis` is supposed to be the time between exports of chunks of size `maxExportBatchSize`, or the delay between passes over the queue, where each pass exports as many batches as it can and then waits for the delay before the next pass. The wording in the spec seems closer to the former, but Go and Java at least seem to implement the latter.
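As noted above, the first reading turns the delay into a rate limiter: at most one batch of `maxExportBatchSize` spans can leave per `scheduledDelayMillis` interval. A quick sketch of that arithmetic, using the spec's default values of 512 spans per batch and a 5000 ms delay (`maxThroughput` is an illustrative helper, not an SDK function):

```go
// Illustrates why a fixed delay *between batches* caps export throughput
// at maxExportBatchSize / scheduledDelayMillis spans per second.
package main

import "fmt"

func maxThroughput(maxExportBatchSize, scheduledDelayMillis int) float64 {
	// At most one full batch per delay interval.
	return float64(maxExportBatchSize) / (float64(scheduledDelayMillis) / 1000.0)
}

func main() {
	fmt.Printf("%.1f spans/s\n", maxThroughput(512, 5000)) // 102.4 spans/s
}
```

So under that reading, a service producing more than roughly 102 spans per second would fall behind indefinitely and eventually drop spans once the queue reaches `maxQueueSize`.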