t321: Derive auto-batch concurrency from CPU cores #1265
Conversation
Auto-batch creation in `cmd_phase0_auto_dispatch()` now uses cores/2 (min 2) as the base concurrency. A 10-core Mac gets 5, a 32-core server gets 16. The existing adaptive scaling adjusts up/down from this base depending on actual CPU load. Previously the value was hardcoded to 3 regardless of available resources. Closes t321.
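The derivation described above (half the core count, clamped to a minimum of 2) can be sketched in a few lines of shell. Note that `getconf _NPROCESSORS_ONLN` is used here as a portable stand-in for the script's own `get_cpu_cores` helper, which is not shown in this PR excerpt:

```shell
# Derive base concurrency: half the online CPU cores, never below 2.
cores=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 2)
base=$(( cores / 2 ))
if (( base < 2 )); then
  base=2
fi
echo "base concurrency: $base"
```

On a 10-core machine this yields 5; on a 2-core machine the clamp keeps the old-style floor of 2 workers.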
Summary of Changes

Hello @marcusquinn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the efficiency of auto-batched task processing by moving from a fixed concurrency setting to a dynamic, resource-aware approach. By calculating the initial concurrency from the system's CPU core count, the system can better leverage available hardware, preventing underutilization on multi-core machines and allowing more efficient task execution.
Walkthrough

The changes update auto-dispatch batching to dynamically calculate concurrency based on CPU cores (cores/2, minimum 2) instead of using a fixed value of 3. This allows batches to scale automatically with system resources, while the minimum threshold preserves the previous floor on small machines.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report
[INFO] Latest Quality Status
[INFO] Recent monitoring activity
📈 Current Quality Metrics

Generated on: Thu Feb 12 14:29:41 UTC 2026 by AI DevOps Framework Code Review Monitoring
Actionable comments posted: 1
🤖 Fix all issues with AI agents

In `.agents/scripts/supervisor-helper.sh`, around lines 13082-13099: the new local variables are declared without assignment; switch them to the `local var="..."` pattern so they are initialized at declaration instead of via separate assignment lines. Specifically:

- `local auto_batch_name="auto-$(date +%Y%m%d-%H%M%S)"`
- `local task_csv="$(echo "$unbatched_queued" | tr '\n' ',' | sed 's/,$//')"`
- `local auto_cores="$(get_cpu_cores)"`
- `local auto_base_concurrency="$((auto_cores / 2))"` (then clamp to 2 as before)
- `local auto_batch_id="$(cmd_batch "$auto_batch_name" --concurrency "$auto_base_concurrency" --tasks "$task_csv" 2>/dev/null)"`

Keep the existing clamping logic and conditional checks around these variables (functions/commands referenced: `get_cpu_cores`, `cmd_batch`).
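The pattern the comment asks for can be sketched as a runnable function. `get_cpu_cores` is stubbed here (the real helper lives in supervisor-helper.sh), and the function name `derive_base_concurrency` is hypothetical:

```shell
# Stub standing in for the script's real CPU-count helper.
get_cpu_cores() { echo 10; }

derive_base_concurrency() {
  # Locals initialized at declaration, as the review suggests.
  local auto_cores="$(get_cpu_cores)"
  local auto_base_concurrency="$(( auto_cores / 2 ))"
  if (( auto_base_concurrency < 2 )); then
    auto_base_concurrency=2
  fi
  echo "$auto_base_concurrency"
}

derive_base_concurrency   # prints 5 with the 10-core stub
```

One caveat worth weighing: `local var="$(cmd)"` masks the exit status of `cmd` (ShellCheck SC2155), which is why some style guides prefer the split declare-then-assign form this comment is replacing.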
🧹 Nitpick comments (1)
.agents/AGENTS.md (1)
116-116: Consider moving the detailed concurrency description out of root AGENTS.md. This file already exceeds the recommended instruction count; keeping high-level guidance here and linking to workflows/plans.md would better fit the limit.
Based on learnings "Applies to **/AGENTS.md : Limit root AGENTS.md to ~50-100 max instructions with universal applicability to >80% of tasks".
Code Review
This pull request introduces a sensible change to derive the auto-batch concurrency from the number of CPU cores, which should lead to better resource utilization on more powerful machines. The implementation is straightforward and the documentation updates in AGENTS.md and the script's help text are clear and helpful.
I've added a couple of minor suggestions to group the local variable declarations in supervisor-helper.sh for improved readability and maintainability. Overall, this is a solid improvement.
```sh
# A 10-core Mac gets 5, a 32-core server gets 16, etc.
# The adaptive scaling in calculate_adaptive_concurrency() then
# adjusts up/down from this base depending on actual load.
local auto_cores="$(get_cpu_cores)"
```
For better readability and to follow the practice of declaring variables at the top of their scope, it's good to group local variable declarations. You can declare auto_batch_id here along with auto_cores and auto_base_concurrency and remove the separate declaration on line 13097.
```diff
-local auto_cores="$(get_cpu_cores)"
+local auto_cores auto_base_concurrency auto_batch_id
```
```diff
 fi
 local auto_batch_id
-auto_batch_id=$(cmd_batch "$auto_batch_name" --concurrency 3 --tasks "$task_csv" 2>/dev/null)
+auto_batch_id=$(cmd_batch "$auto_batch_name" --concurrency "$auto_base_concurrency" --tasks "$task_csv" 2>/dev/null)
```
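Taken together, the diffs above amount to something like the following. This is a sketch, not the actual function body: `get_cpu_cores` and `cmd_batch` are stubbed so the excerpt runs standalone, and the task list is hypothetical:

```shell
# Stubs standing in for the real helpers in supervisor-helper.sh.
get_cpu_cores() { echo 32; }
cmd_batch() { echo "batch-001"; }   # real version creates a batch and prints its id

dispatch_sketch() {
  local auto_batch_name="auto-$(date +%Y%m%d-%H%M%S)"
  local task_csv="t1,t2,t3"         # hypothetical unbatched task ids
  local auto_cores auto_base_concurrency auto_batch_id
  auto_cores=$(get_cpu_cores)
  auto_base_concurrency=$(( auto_cores / 2 ))
  if (( auto_base_concurrency < 2 )); then
    auto_base_concurrency=2
  fi
  auto_batch_id=$(cmd_batch "$auto_batch_name" \
    --concurrency "$auto_base_concurrency" --tasks "$task_csv" 2>/dev/null)
  echo "created $auto_batch_id with concurrency $auto_base_concurrency"
}

dispatch_sketch   # prints: created batch-001 with concurrency 16
```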
|



Summary
- Derive the auto-batch base concurrency from CPU cores (`cores / 2`, min 2) instead of hardcoding 3
- The existing adaptive scaling (`calculate_adaptive_concurrency()`) continues to adjust up/down from this base depending on actual CPU load

Motivation
The hardcoded `--concurrency 3` in `cmd_phase0_auto_dispatch()` meant machines with significant available CPU and RAM were underutilized. This was observed on a 10-core Mac with 64GB RAM and ~55% idle CPU still running only 2-3 workers.

Scaling table

| Cores | Base concurrency |
|------:|-----------------:|
| 2 | 2 (min) |
| 4 | 2 |
| 8 | 4 |
| 10 | 5 |
| 16 | 8 |
| 32 | 16 |
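The scaling behavior follows mechanically from the cores/2 (min 2) formula; a short loop reproduces the mapping for a range of core counts:

```shell
for cores in 2 4 8 10 16 32 64; do
  base=$(( cores / 2 ))
  if (( base < 2 )); then base=2; fi
  printf '%2d cores -> base concurrency %d\n' "$cores" "$base"
done
```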
Testing

- `bash -n` syntax check: passed

Summary by CodeRabbit
Improvements

- Auto-batch concurrency now scales with available CPU cores (cores/2, minimum 2) instead of a fixed value of 3.

Documentation

- Updated AGENTS.md and the script's help text to describe the new behavior.