src: add batching options to AsyncTSQueue #319
Conversation
Walkthrough: This update adds batching support to the AsyncTSQueue.
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant Producer as Producer Thread
participant Queue as AsyncTSQueue
participant Timer as Timer (optional)
participant Consumer as Callback
Producer->>Queue: enqueue(item)
alt Batching enabled
alt Queue size >= min_size
Queue->>Queue: trigger_async()
Queue->>Consumer: Invoke callback with batch
else Queue size < min_size
Queue->>Timer: start_timer()
Timer-->>Queue: After max_time, trigger_async()
Queue->>Consumer: Invoke callback with batch
end
else Batching disabled
alt Queue size == 1
Queue->>Queue: trigger_async()
Queue->>Consumer: Invoke callback with item
end
end
```
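As a rough usage sketch of the new API (the options struct and the `create(loop, opts, cb, ...)` factory appear in the review excerpts below; the callback shape and the `loop` setup are assumptions for illustration):

```cpp
#include "nsolid/async_ts_queue.h"

// Flush when 100 items have accumulated, or 25 ms after the first item is
// enqueued, whichever comes first. Field names match AsyncTSQueueOptions as
// quoted in the review comments.
AsyncTSQueueOptions opts;
opts.min_size = 100;
opts.max_time = 25;  // milliseconds

// `loop` is the uv_loop_t* the consumer callback should run on; the callback
// signature here is illustrative, not the header's actual one.
auto queue = AsyncTSQueue<int>::create(loop, opts, [](int item) {
  // consume items on the loop thread
});

// Producer threads just enqueue; the notification policy is handled internally.
queue->enqueue(42);
```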
@coderabbitai review

✅ Actions performed: Review triggered.
I ran into an unexpected issue while reviewing this PR. Please try again later.
Actionable comments posted: 5
🔭 Outside diff range comments (1)
src/nsolid/async_ts_queue.h (1)
71-73: ⚠️ Potential issue – Add timer cleanup in destructor to prevent memory leak.

The timer is allocated with `new` but never deleted, causing a memory leak. Apply this diff to properly clean up the timer:

```diff
 ~AsyncTSQueue() {
   async_handle_->close_and_delete();
+  if (timer_ != nullptr) {
+    timer_->close_and_delete();
+  }
 }
```
🧹 Nitpick comments (2)
src/nsolid/async_ts_queue.h (2)
250-254: Remove the redundant in-class initializer for `batching_enabled_`.

The member variable `batching_enabled_` has a default initializer but is always overwritten in the constructors. For consistency and clarity, remove the redundant initialization:

```diff
 // Batching options and timer
 AsyncTSQueueOptions opts_;
-bool batching_enabled_ = false;
+bool batching_enabled_;
 nsuv::ns_timer* timer_;
```
18-24: Consider adding validation and documentation for AsyncTSQueueOptions.

The struct would benefit from validation constraints and more detailed documentation about the behavior when values are 0.

```diff
 /**
  * Options for AsyncTSQueue batching notification
+ *
+ * @field min_size Minimum queue size to trigger notification (0 = disabled)
+ * @field max_time Maximum time in milliseconds before notification (0 = disabled)
+ *
+ * When both are specified, notification is triggered when either condition is met.
  */
 struct AsyncTSQueueOptions {
   uint64_t min_size = 0;  // Minimum queue size to trigger notification
   uint64_t max_time = 0;  // Maximum time (ms) before notification
 };
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/nsolid/async_ts_queue.h (6 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (8)
- GitHub Check: test-linux (ubuntu-24.04-arm)
- GitHub Check: test-linux (ubuntu-24.04)
- GitHub Check: coverage-linux-without-intl
- GitHub Check: coverage-windows
- GitHub Check: lint-js-and-md
- GitHub Check: test-macOS
- GitHub Check: build-tarball
- GitHub Check: coverage-linux
@coderabbitai review

✅ Actions performed: Review triggered.

@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 2
♻️ Duplicate comments (2)
src/nsolid/async_ts_queue.h (2)
160-163: 🛠️ Refactor suggestion – `batching_enabled_` should allow size-only OR time-only modes

The current check requires both `min_size` and `max_time` to be positive. This prevents perfectly valid use-cases such as “flush after 100 items no matter how long it takes” or “flush every 10 ms even for a single item”.

```diff
- batching_enabled_ = (opts_.min_size > 0 && opts_.max_time > 0);
+ batching_enabled_ = (opts_.min_size > 0 || opts_.max_time > 0);
```

After changing this line, remember to adapt the enqueue logic so that the timer is only armed when `opts_.max_time > 0`.
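If the suggestion is adopted, the three modes could be expressed as plain option values (sketch; field names as defined in `AsyncTSQueueOptions`):

```cpp
// Size-only: flush as soon as 100 items are queued, never on a timer.
AsyncTSQueueOptions size_only;
size_only.min_size = 100;
size_only.max_time = 0;

// Time-only: flush every 10 ms even if only one item is waiting.
AsyncTSQueueOptions time_only;
time_only.min_size = 0;
time_only.max_time = 10;

// Combined: whichever threshold is hit first wins.
AsyncTSQueueOptions combined;
combined.min_size = 100;
combined.max_time = 10;
```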
94-102: 🛠️ Refactor suggestion – Batch trigger at `size == 1` defeats the purpose of batching

With batching enabled the queue is drained immediately when the very first element arrives (`size == 1`). This provides no opportunity for additional items to accumulate and essentially turns “batching” into “single-item + timer ping-pong”.

A more typical batching strategy is:

- If `size >= min_size` → trigger immediately.
- Else, if this is the first item → start the timer only.

```diff
-    if (batching_enabled_) {
-      if (size == 1 || size >= opts_.min_size) {
-        trigger_async();
-      }
-    } else {
+    if (batching_enabled_) {
+      if (opts_.min_size > 0 && size >= opts_.min_size) {
+        trigger_async();  // enough items – flush now
+      } else if (opts_.max_time > 0 && size == 1) {
+        start_timer();    // first item – arm the timer
+      }
+    } else {
       if (size == 1) {
         ASSERT_EQ(0, async_handle_->send());
       }
     }
```
🧹 Nitpick comments (1)
src/nsolid/async_ts_queue.h (1)
119-122: `trigger_async()` should cancel any pending timer

Once a batch is flushed we no longer need the outstanding timer. Cancel it to avoid unnecessary wake-ups:

```diff
 void trigger_async() {
   ASSERT_EQ(0, async_handle_->send());
+  if (timer_ != nullptr && timer_->is_active())
+    ASSERT_EQ(0, timer_->stop());
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/nsolid/async_ts_queue.h (7 hunks)
🧰 Additional context used
🧠 Learnings (1)
src/nsolid/async_ts_queue.h (1)
Learnt from: santigimeno
PR: nodesource/nsolid#319
File: src/nsolid/async_ts_queue.h:97-107
Timestamp: 2025-06-02T14:51:54.441Z
Learning: In nsuv timer implementation, timer_->start() is idempotent - it automatically stops the timer if needed before rearming it, eliminating the need for manual timer stopping.
⏰ Context from checks skipped due to timeout of 90000ms (7)
- GitHub Check: test-tarball-linux
- GitHub Check: coverage-linux
- GitHub Check: coverage-linux-without-intl
- GitHub Check: coverage-windows
- GitHub Check: test-linux (ubuntu-24.04-arm)
- GitHub Check: test-linux (ubuntu-24.04)
- GitHub Check: test-macOS
@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/nsolid/async_ts_queue.h (1)
162-174: `batching_enabled_` should use `||`, not `&&`

Previous reviews already pointed out that requiring both `min_size` and `max_time` to be greater than zero disables pure size-based or pure time-based batching modes. Unless this is intentional, change to:

```cpp
batching_enabled_ = (opts_.min_size > 0 || opts_.max_time > 0);
```
🧹 Nitpick comments (1)
src/nsolid/async_ts_queue.h (1)
20-26: Include `<cstdint>` to guarantee `uint64_t` is defined

`uint64_t` is declared in `<cstdint>` / `<stdint.h>`. Relying on a transitive include can silently break builds on some tool-chains. Add the header next to the existing standard includes:

```diff
 #include <memory>
 #include <vector>
+#include <cstdint>  // uint64_t
 #include <functional>
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/nsolid/async_ts_queue.h (8 hunks)
🧰 Additional context used
🧠 Learnings (1)
src/nsolid/async_ts_queue.h (1)
Learnt from: santigimeno
PR: nodesource/nsolid#319
File: src/nsolid/async_ts_queue.h:97-107
Timestamp: 2025-06-02T14:51:54.441Z
Learning: In nsuv timer implementation, timer_->start() is idempotent - it automatically stops the timer if needed before rearming it, eliminating the need for manual timer stopping.
⏰ Context from checks skipped due to timeout of 90000ms (7)
- GitHub Check: test-tarball-linux
- GitHub Check: coverage-linux-without-intl
- GitHub Check: coverage-linux
- GitHub Check: test-linux (ubuntu-24.04)
- GitHub Check: test-linux (ubuntu-24.04-arm)
- GitHub Check: coverage-windows
- GitHub Check: test-macOS
@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/nsolid/async_ts_queue.h (1)
171-172: Enable batching when either threshold is configured, not only when both are

`batching_enabled_` currently uses `min_size > 0 && max_time > 0`, preventing pure size-based or pure time-based batching. Switching to logical OR allows the three expected modes (size-only, time-only, combined).

```diff
- batching_enabled_ = (opts_.min_size > 0 && opts_.max_time > 0);
+ batching_enabled_ = (opts_.min_size > 0 || opts_.max_time > 0);
```

(You’ll also need to guard `start_timer()` with `if (opts_.max_time > 0)` when the timer is actually required.)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/nsolid/async_ts_queue.h (7 hunks)
🧰 Additional context used
🧠 Learnings (1)
src/nsolid/async_ts_queue.h (1)
Learnt from: santigimeno
PR: nodesource/nsolid#319
File: src/nsolid/async_ts_queue.h:97-107
Timestamp: 2025-06-02T14:51:54.441Z
Learning: In nsuv timer implementation, timer_->start() is idempotent - it automatically stops the timer if needed before rearming it, eliminating the need for manual timer stopping.
⏰ Context from checks skipped due to timeout of 90000ms (7)
- GitHub Check: test-tarball-linux
- GitHub Check: coverage-linux
- GitHub Check: test-macOS
- GitHub Check: test-linux (ubuntu-24.04-arm)
- GitHub Check: test-linux (ubuntu-24.04)
- GitHub Check: coverage-linux-without-intl
- GitHub Check: coverage-windows
🔇 Additional comments (1)
src/nsolid/async_ts_queue.h (1)
73-78: Destructor now cleans up the timer – nice catch!
`timer_->close_and_delete()` prevents the leak reported earlier and mirrors the async handle cleanup.
@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/nsolid/async_ts_queue.h (1)
162-176: `batching_enabled_` only when both thresholds are set – breaks “size-only” / “time-only” modes

Earlier review rounds already flagged this, and the logic is unchanged. Requiring both `min_size` and `max_time` to be non-zero prevents users from opting into purely size-based or purely time-based batching, which the public docs claim is supported (“whichever occurs first”).

```diff
- batching_enabled_ = (opts_.min_size > 0 && opts_.max_time > 0);
+ batching_enabled_ = (opts_.min_size > 0 || opts_.max_time > 0);
```

Follow-up changes needed in `enqueue_impl()`:

```diff
-      if (size == 1) {
+      if (opts_.max_time > 0 && size == 1) {
         ...
-      } else if (size >= opts_.min_size) {
+      } else if (opts_.min_size > 0 && size >= opts_.min_size) {
```

(This is exactly the suggestion made in the previous review; marking as duplicate.)
🧹 Nitpick comments (6)
src/nsolid/async_ts_queue.h (6)
10-16: Include `<cstdint>` for the fixed-width integer types you introduce

`AsyncTSQueueOptions` uses `uint64_t`, but `<cstdint>` (or `<stdint.h>`) is not included in this header. While most toolchains incidentally bring it in through other headers, relying on that is brittle and can break when the transitive include hierarchy changes.

```diff
 #include <memory>
 #include <vector>
+#include <cstdint>  // uint64_t
 #include <functional>
 #include <tuple>
 #include <type_traits>
 #include <atomic>
```
20-27: Clarify default-constructed options & validate at runtime

A default-constructed `AsyncTSQueueOptions{}` currently disables batching (`min_size == 0 && max_time == 0`). That is fine, but worth documenting explicitly so users understand that both values must be set to enable batching under the current implementation.

Consider adding a cheap runtime `ASSERT` (or at least a comment) to flag obviously invalid combinations, e.g. `min_size == 0 && max_time == 0`, if the intent is to require either field to be non-zero (see next comment).
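A minimal sketch of such a check, assuming the `ASSERT` macro already used in this header and the current both-thresholds-required rule:

```cpp
// Constructor-side sanity check (illustrative): a half-configured options
// struct silently disables batching, so make it fail loudly instead.
batching_enabled_ = (opts_.min_size > 0 && opts_.max_time > 0);
const bool half_configured = (opts_.min_size > 0) != (opts_.max_time > 0);
ASSERT(!half_configured);  // set both thresholds or neither
```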
98-111: Edge-case: active timer not cancelled when `min_size` flushes immediately

When the queue reaches `min_size`, you clear `timer_armed_` and trigger an async flush, but any already-running timer is left active. That wastes a wake-up and can lead to surprising extra callbacks.

A minimal defensive fix:

```diff
 } else if (size >= opts_.min_size) {
   // Make sure we don't arm the timer if min size is reached, so we're
   // items are consumed in the async callback.
   timer_armed_.store(false, std::memory_order_release);
+  if (timer_ != nullptr && timer_->is_active()) {
+    /* safe – nsuv timers are single-shot */
+    ASSERT_EQ(0, timer_->stop());
+  }
   trigger_async();
 }
```
49-60: Use `std::make_shared` to avoid two separate allocations

Both factory functions `create()` use `new` + `shared_ptr`, leading to two allocations (control block + object). `std::make_shared` combines them and gives a small perf win:

```diff
-    SharedAsyncTSQueue queue(new AsyncTSQueue<T>(
-        loop, opts, std::forward<Cb>(cb), std::forward<Args>(args)...));
+    auto queue = std::make_shared<AsyncTSQueue<T>>(
+        loop, opts, std::forward<Cb>(cb), std::forward<Args>(args)...);
```

The same applies to the legacy factory version. (If you rely on the incomplete type inside the constructor this is still safe because construction happens after allocation.)
74-78: Set `timer_` to `nullptr` after closing to avoid accidental reuse

After `close_and_delete()` the pointer is dangling. Clearing it prevents accidental dereference in future maintenance:

```diff
 if (timer_) {
   timer_->close_and_delete();
+  timer_ = nullptr;
 }
```
270-275: `bool batching_enabled_` should be `const`

After construction the flag never changes. Marking it `const` communicates intent and enables compiler optimisations:

```diff
- bool batching_enabled_ = false;
+ const bool batching_enabled_ = false;
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/nsolid/async_ts_queue.h (7 hunks)
🧰 Additional context used
🧠 Learnings (1)
src/nsolid/async_ts_queue.h (1)
Learnt from: santigimeno
PR: nodesource/nsolid#319
File: src/nsolid/async_ts_queue.h:97-107
Timestamp: 2025-06-02T14:51:54.441Z
Learning: In nsuv timer implementation, timer_->start() is idempotent - it automatically stops the timer if needed before rearming it, eliminating the need for manual timer stopping.
⏰ Context from checks skipped due to timeout of 90000ms (7)
- GitHub Check: test-tarball-linux
- GitHub Check: test-macOS
- GitHub Check: coverage-linux-without-intl
- GitHub Check: coverage-linux
- GitHub Check: test-linux (ubuntu-24.04-arm)
- GitHub Check: test-linux (ubuntu-24.04)
- GitHub Check: coverage-windows
🔇 Additional comments (1)
src/nsolid/async_ts_queue.h (1)
247-255: Possible spin when queue drains between async & timer start

In `async_callback()` you `return` after `start_timer()`. If the queue is emptied before the timer fires (e.g., the producer thread flushes and no new items come), the timer still fires, sends another async, which then finds the queue empty and exits. Not harmful, but a wasted wake-up.

A cheap optimisation: stop the timer if the queue becomes empty immediately after `start_timer()`, or early-exit in the timer callback when `queue_->empty()`.

Would you like a concrete diff demonstrating this micro-optimisation?
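One possible shape of that micro-optimisation, sketched against the member names used elsewhere in this review (the same empty-queue guard is suggested again in a later round):

```cpp
// Sketch: skip the redundant async wake-up when the timer fires after the
// queue has already been drained.
ASSERT_EQ(0, timer_->start(+[](nsuv::ns_timer*, WeakAsyncTSQueue queue_wp) {
  SharedAsyncTSQueue queue = queue_wp.lock();
  if (queue == nullptr || queue->queue_.empty()) {
    return;  // nothing left to flush; avoid an empty callback cycle
  }
  queue->trigger_async();
}, opts_.max_time, 0, this->weak_from_this()));
```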
@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 1
♻️ Duplicate comments (2)
src/nsolid/async_ts_queue.h (2)
100-115: Race-free but still spurious timer wake-ups when `min_size` is reached

When `size >= opts_.min_size`, the timer may already be running. The code clears `timer_armed_` but does not cancel the active timer, so an extra callback fires with an empty queue, wasting a wake-up (see the discussion in prior reviews).

If you want to avoid the redundant wake-up while keeping the implementation simple and thread-safe, you can stop the timer only when you know you are on the loop thread (inside `async_callback`):

```diff
 else if (size >= opts_.min_size) {
   timer_armed_.store(false, std::memory_order_release);
   trigger_async();
+  // The async callback runs on the loop thread – stop the timer there
 }
```

and inside `async_callback` just after `process()`:

```cpp
if (batching_enabled_ && timer_->is_active())
  ASSERT_EQ(0, timer_->stop());
```

This is an optional optimisation; correctness is not affected.
170-180: Batching requires BOTH `min_size` and `max_time` – limits flexibility

`batching_enabled_` is set only when both thresholds are positive. This prevents users from selecting only size-based or only time-based batching, which is a common use-case.

Switching the condition to `opts.min_size > 0 || opts.max_time > 0` and guarding each path accordingly would make the API more flexible without breaking existing behaviour.
🧹 Nitpick comments (2)
src/nsolid/async_ts_queue.h (1)
240-247: `async_callback()` skips `process()` when queue drained by timer

The early `return` after `start_timer()` is correct for the first enqueue, but after the timer fires and the queue is empty the same early exit will not be taken (because `timer_armed_` is `false`). This ensures `process()` is invoked exactly once per flush – good!

However, if new items are enqueued during `process()`, the timer is not re-armed until the producer pushes another item (`size == 1` path). That is fine for most workloads but worth documenting, as it differs from level-triggered queues.

test/cctest/test_nsolid_async_ts_queue.cc (1)
597-603: Waiting predicate always false after `processed.clear()`

After clearing `processed`, the predicate `processed.size() == 1` can never become true, so the `wait_for` simply times out. Although the test still passes, the intent (detect spurious callbacks) is obscured.

Consider waiting on `call_count` instead, or use `processed.empty()` to make the intention explicit:

```cpp
cv.wait_for(lk, 500ms, [&] { return call_count > 0; });
...
EXPECT_EQ(call_count, 0);
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
src/nsolid/async_ts_queue.h (7 hunks)
test/cctest/test_nsolid_async_ts_queue.cc (14 hunks)
🧰 Additional context used
🧠 Learnings (2)
src/nsolid/async_ts_queue.h (2)
Learnt from: santigimeno
PR: nodesource/nsolid#319
File: src/nsolid/async_ts_queue.h:119-130
Timestamp: 2025-06-02T23:07:18.018Z
Learning: In AsyncTSQueue, batching_enabled_ is only true when both opts_.min_size > 0 AND opts_.max_time > 0. The start_timer() method is only called when batching_enabled_ is true, which ensures opts_.max_time > 0 is always satisfied when the timer is started.
Learnt from: santigimeno
PR: nodesource/nsolid#319
File: src/nsolid/async_ts_queue.h:97-107
Timestamp: 2025-06-02T14:51:54.461Z
Learning: In nsuv timer implementation, timer_->start() is idempotent - it automatically stops the timer if needed before rearming it, eliminating the need for manual timer stopping.
test/cctest/test_nsolid_async_ts_queue.cc (1)
Learnt from: santigimeno
PR: nodesource/nsolid#319
File: src/nsolid/async_ts_queue.h:119-130
Timestamp: 2025-06-02T23:07:18.018Z
Learning: In AsyncTSQueue, batching_enabled_ is only true when both opts_.min_size > 0 AND opts_.max_time > 0. The start_timer() method is only called when batching_enabled_ is true, which ensures opts_.max_time > 0 is always satisfied when the timer is started.
🧬 Code Graph Analysis (1)
src/nsolid/async_ts_queue.h (1)
agents/zmq/src/zmq_agent.h (1)
loop_(432-432)
⏰ Context from checks skipped due to timeout of 90000ms (8)
- GitHub Check: test-tarball-linux
- GitHub Check: lint-js-and-md
- GitHub Check: test-linux (ubuntu-24.04-arm)
- GitHub Check: coverage-windows
- GitHub Check: coverage-linux
- GitHub Check: test-linux (ubuntu-24.04)
- GitHub Check: coverage-linux-without-intl
- GitHub Check: test-macOS
🔇 Additional comments (2)
src/nsolid/async_ts_queue.h (2)
124-134: Guard on `start_timer()` against zero or re-entrancy is no longer needed

Nice job simplifying the logic – `start_timer()` is now called only from the loop thread and the timer is one-shot, so the previous re-entrancy concerns are gone. The lambda correctly captures a weak pointer to avoid use-after-free issues. Looks good!
75-82: ❓ Verification inconclusive – Potential cross-thread `uv_close` call in the destructor

`async_handle_->close_and_delete()` and `timer_->close_and_delete()` must run on the same thread that owns the underlying `uv_loop_t`. Because the queue is typically destroyed on the producer thread (tests do this in the main thread while the loop lives on another thread), these calls may violate libuv’s thread-affinity rules and lead to use-after-free or silent memory leaks.

Consider one of the following:

```diff
 ~AsyncTSQueue() {
-  async_handle_->close_and_delete();
-  ...
+  // Dispatch close to the loop thread.
+  if (uv_loop_alive(loop_) != 0) {
+    uv_async_t* closer = new uv_async_t;
+    ASSERT_EQ(0, uv_async_init(loop_, closer,
+      +[](uv_async_t* h) {
+        auto* q = static_cast<AsyncTSQueue*>(h->data);
+        q->async_handle_->close_and_delete();
+        if (q->timer_ != nullptr)
+          q->timer_->close_and_delete();
+        uv_close(reinterpret_cast<uv_handle_t*>(h),
+                 [](uv_handle_t* h2) { delete h2; });
+      }));
+    closer->data = this;
+    uv_async_send(closer);
+  }
 }
```

At minimum, document that the destructor must be invoked on the loop thread or provide a dedicated `close()` method to be called from there.

```bash
#!/bin/bash
# Locate and print the AsyncTSQueue class definition and its destructor context
rg --context 5 "class AsyncTSQueue" -n .
```

```bash
#!/bin/bash
# Print the AsyncTSQueue class definition including its destructor
rg --context 10 "class AsyncTSQueue" -n src/nsolid/async_ts_queue.h
# Locate every definition and usage of close_and_delete in the codebase
rg --context 5 "close_and_delete" -n src
```
Enforce libuv thread-affinity in AsyncTSQueue destructor

The destructor currently calls `async_handle_->close_and_delete()` and `timer_->close_and_delete()` directly. These must run on the thread that owns the `uv_loop_t`, otherwise you risk use-after-free or silent leaks if `~AsyncTSQueue()` is invoked on the producer/main thread.

Please ensure one of the following:

- Document that `~AsyncTSQueue()` must be called on the loop thread.
- Provide an explicit `close()` method that’s called on the loop thread.
- Dispatch the close calls to the loop thread via `uv_async_t`:

```diff
 ~AsyncTSQueue() {
-  async_handle_->close_and_delete();
-  if (timer_) {
-    timer_->close_and_delete();
-  }
+  // Dispatch handle closures on the loop thread
+  if (uv_loop_alive(loop_)) {
+    auto* closer = new uv_async_t;
+    uv_async_init(loop_, closer,
+      [](uv_async_t* req) {
+        auto* self = static_cast<AsyncTSQueue*>(req->data);
+        self->async_handle_->close_and_delete();
+        if (self->timer_) self->timer_->close_and_delete();
+        uv_close(reinterpret_cast<uv_handle_t*>(req),
+                 [](uv_handle_t* h) { delete h; });
+      });
+    closer->data = this;
+    uv_async_send(closer);
+  }
```
@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 0
♻️ Duplicate comments (5)
src/nsolid/async_ts_queue.h (4)
75-82: LGTM! Timer cleanup properly implemented.

The destructor correctly handles timer cleanup by calling `close_and_delete()` when the timer exists, preventing the memory leak issue raised in previous reviews. Setting pointers to nullptr after deletion is a good defensive practice.
103-119: ⚠️ Potential issue – Critical: Min-size logic doesn't trigger immediate flush

When `size >= opts_.min_size`, the queue should be flushed immediately. However, the current implementation only prevents the timer from being armed - it doesn't stop an already-running timer or ensure immediate processing.

The issue: if a timer was started when size was 1, and then size reaches `min_size` before the timer expires, the queue won't be flushed until the timer fires.

Apply this fix to ensure immediate flush when min_size is reached:

```diff
 } else if (size >= opts_.min_size) {
   // Make sure we don't arm the timer if min size is reached, so we're
   // items are consumed in the async callback.
   timer_armed_.store(false, std::memory_order_release);
+  // Stop any active timer to prevent delayed processing
+  if (timer_ != nullptr && timer_->is_active()) {
+    ASSERT_EQ(0, timer_->stop());
+  }
   trigger_async();
 }
```
124-135: ⚠️ Potential issue – Timer callback unconditionally triggers async

The timer callback calls `trigger_async()` without checking if the queue is empty. Combined with the async callback's timer restart logic, this could lead to unnecessary timer cycles when the queue is empty.

Consider adding a queue empty check in the timer callback:

```diff
 ASSERT_EQ(0, timer_->start(+[](nsuv::ns_timer*, WeakAsyncTSQueue queue_wp) {
   SharedAsyncTSQueue queue = queue_wp.lock();
   if (queue == nullptr) {
     return;
   }
-  queue->trigger_async();
+  // Only trigger if there are items to process
+  if (!queue->queue_.empty()) {
+    queue->trigger_async();
+  }
 }, opts_.max_time, 0, this->weak_from_this()));
```
247-255: ⚠️ Potential issue – Timer restart logic prevents immediate processing

The async callback only starts the timer when `timer_armed_` is true, but doesn't call `process()` in that case. This prevents immediate processing when min_size is reached, even though `timer_armed_` was set to false in `enqueue_impl`. The logic should process immediately when `timer_armed_` is false (indicating a deliberate flush).
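A minimal sketch of the behaviour this comment asks for (member names taken from the review thread; the real callback body is abbreviated):

```cpp
void async_callback() {
  // Only defer to the timer when it was deliberately armed by the first
  // enqueue; otherwise this wake-up is a deliberate flush (min_size reached
  // or the timer fired), so drain the queue right away.
  if (batching_enabled_ && timer_armed_.load(std::memory_order_acquire)) {
    start_timer();
    return;
  }
  process();
}
```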
test/cctest/test_nsolid_async_ts_queue.cc (1)
28-43: LGTM! Proper event loop lifecycle management

The test fixture correctly:
- Initializes the UV loop
- Runs it in a dedicated thread
- Uses async handle for clean shutdown
- Properly closes the loop in TearDown (addressing the previous review comment)
🧹 Nitpick comments (2)
src/nsolid/async_ts_queue.h (1)
234-240: Important design limitation documented

The comment correctly documents that new items enqueued during `process()` won't automatically re-arm the timer, potentially causing delays. This edge-triggered behavior differs from level-triggered designs.

Consider documenting this behavior in the public API documentation or class comments as well, since it affects how users should design their callback handlers. If callbacks enqueue new items, users might need to manually trigger another notification.
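For example, a short class-level note along these lines (wording is illustrative, not taken from the PR):

```cpp
/**
 * Batching note: notifications are edge-triggered. Items enqueued from
 * inside the consumer callback do not re-arm the batching timer; they are
 * flushed only once a producer enqueues again or the min_size threshold is
 * crossed. Callbacks that enqueue new work may need to trigger another
 * notification themselves.
 */
```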
test/cctest/test_nsolid_async_ts_queue.cc (1)
9-19: Fix lint warnings for C++11 headers

The NOLINTNEXTLINE comments suggest these headers are flagged by the linter. Consider using the project's standard approach for including C++11 headers or updating the lint configuration.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
src/nsolid/async_ts_queue.h (7 hunks)
test/cctest/test_nsolid_async_ts_queue.cc (14 hunks)
🧰 Additional context used
🧠 Learnings (2)
src/nsolid/async_ts_queue.h (2)
Learnt from: santigimeno
PR: nodesource/nsolid#319
File: src/nsolid/async_ts_queue.h:119-130
Timestamp: 2025-06-02T23:07:18.018Z
Learning: In AsyncTSQueue, batching_enabled_ is only true when both opts_.min_size > 0 AND opts_.max_time > 0. The start_timer() method is only called when batching_enabled_ is true, which ensures opts_.max_time > 0 is always satisfied when the timer is started.
Learnt from: santigimeno
PR: nodesource/nsolid#319
File: src/nsolid/async_ts_queue.h:97-107
Timestamp: 2025-06-02T14:51:54.461Z
Learning: In nsuv timer implementation, timer_->start() is idempotent - it automatically stops the timer if needed before rearming it, eliminating the need for manual timer stopping.
test/cctest/test_nsolid_async_ts_queue.cc (1)
Learnt from: santigimeno
PR: nodesource/nsolid#319
File: src/nsolid/async_ts_queue.h:119-130
Timestamp: 2025-06-02T23:07:18.018Z
Learning: In AsyncTSQueue, batching_enabled_ is only true when both opts_.min_size > 0 AND opts_.max_time > 0. The start_timer() method is only called when batching_enabled_ is true, which ensures opts_.max_time > 0 is always satisfied when the timer is started.
⏰ Context from checks skipped due to timeout of 90000ms (7)
- GitHub Check: test-tarball-linux
- GitHub Check: test-macOS
- GitHub Check: coverage-linux
- GitHub Check: coverage-linux-without-intl
- GitHub Check: test-linux (ubuntu-24.04)
- GitHub Check: test-linux (ubuntu-24.04-arm)
- GitHub Check: coverage-windows
🔇 Additional comments (12)
src/nsolid/async_ts_queue.h (7)
10-14: LGTM!

The added includes are necessary for the new batching functionality - `atomic` for thread-safe timer state, `cstdint` for fixed-width integers in the options struct, and standard containers for the implementation.
21-28: LGTM!

The `AsyncTSQueueOptions` struct is well-designed with sensible defaults (0 = disabled) and clear documentation. The naming clearly indicates the purpose of each field.
50-61: LGTM!

The new factory method properly forwards the options and maintains the same perfect forwarding pattern as the legacy method. Good separation of concerns between options-based and legacy behavior.
62-71: LGTM!

The legacy factory method correctly creates default options to maintain backward compatibility. This ensures existing code continues to work without modifications.
170-181: LGTM!

The constructor properly initializes all members including the new batching-related fields. The `batching_enabled_` logic correctly requires both conditions per the documented behavior.
227-231: LGTM!

Timer initialization is properly guarded by the `batching_enabled_` flag and correctly initialized with the event loop.
271-275: LGTM!

Member variables are properly declared with appropriate types and initialization. The atomic bool for `timer_armed_` ensures thread-safe access.

test/cctest/test_nsolid_async_ts_queue.cc (5)
461-496: LGTM! Comprehensive test for size-based batching

The test properly verifies:
- Items below min_size don't trigger processing
- Reaching min_size triggers immediate batch processing
- All items are processed in a single batch
Good use of timeouts to ensure non-triggering behavior.
498-522: LGTM! Validates time-based batching

The test correctly verifies that items are processed after max_time expires even when min_size isn't reached. The 200 ms wait is appropriate for a 50 ms timer.
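A rough shape of such a time-based test (fixture name, callback signature, and the `create()` call are assumptions based on snippets quoted in this thread, not the actual file):

```cpp
TEST_F(AsyncTSQueueTest, TimeBasedBatchFlush) {
  std::mutex m;
  std::condition_variable cv;
  std::vector<int> processed;

  AsyncTSQueueOptions opts;
  opts.min_size = 10;  // never reached in this test
  opts.max_time = 50;  // flush 50 ms after the first enqueue

  auto queue = AsyncTSQueue<int>::create(&loop_, opts,
      [&](std::vector<int>&& batch) {  // assumed batch-style callback
        std::lock_guard<std::mutex> lk(m);
        processed.insert(processed.end(), batch.begin(), batch.end());
        cv.notify_one();
      });

  queue->enqueue(1);
  queue->enqueue(2);

  std::unique_lock<std::mutex> lk(m);
  // 200 ms leaves ample margin over the 50 ms timer, mirroring the real test.
  cv.wait_for(lk, std::chrono::milliseconds(200),
              [&] { return processed.size() == 2; });
  EXPECT_EQ(processed.size(), 2u);
}
```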
582-617: Good test for timer lifecycle

This test validates that the timer doesn't continue firing after the queue is empty, which was a concern raised in previous reviews. The test structure properly checks for spurious wakeups.
620-645: LGTM! Thread safety validation

The test properly validates concurrent enqueuing from multiple threads and ensures all items are processed. Good use of sorting to verify no items were lost.
524-579: ⚠️ Potential issue – Test reveals batching design issue
The test expects 3 separate batches when items are enqueued slowly (batch_count == 3), but this contradicts the typical batching behavior where items should accumulate until min_size or max_time is reached.
The current implementation appears to process each item individually when enqueued slowly, which defeats the purpose of time-based batching.
The test seems to confirm the issue mentioned in previous reviews - that items are processed immediately rather than being batched by time. Consider whether this test reflects the intended behavior or if the implementation needs fixing.
Likely an incorrect or invalid review comment.
6870fa5 to 1793381
RafaelGSS left a comment
RSLGTM
3dc12d9 to b7ddf91
Introduce AsyncTSQueueOptions for batching: notification is sent when the queue size reaches min_size or after max_time ms, whichever comes first. Implements timer-based batching using nsuv::ns_timer, with all timer and notification logic simplified for clarity and efficiency. Refactor cctests so the event loop runs on a separate thread. Also add new tests covering the new functionality.
1793381 to 0456808
Introduce AsyncTSQueueOptions for batching: notification is sent when the queue size reaches min_size or after max_time ms, whichever comes first. Implements timer-based batching using nsuv::ns_timer, with all timer and notification logic simplified for clarity and efficiency. Refactor cctests so the event loop runs on a separate thread. Also add new tests covering the new functionality. PR-URL: #319 Reviewed-By: Rafael Gonzaga <[email protected]>
Landed in 308046b
Introduce AsyncTSQueueOptions for batching: notification is sent when the queue size reaches min_size or after max_time ms, whichever comes first. Implements timer-based batching using nsuv::ns_timer, with all timer and notification logic simplified for clarity and efficiency. Refactor cctests so the event loop runs on a separate thread. Also add new tests covering the new functionality. PR-URL: #319 Reviewed-By: Rafael Gonzaga <[email protected]> PR-URL: #359 Reviewed-By: Juan José Arboleda <[email protected]>
Introduce AsyncTSQueueOptions for batching: notification is sent when the queue size reaches min_size or after max_time ms, whichever comes first.
Implements timer-based batching using nsuv::ns_timer, with all timer and notification logic simplified for clarity and efficiency.