
t321: Derive auto-batch concurrency from CPU cores#1265

Merged
marcusquinn merged 2 commits into main from feature/t321-auto-batch-concurrency
Feb 12, 2026

Conversation

marcusquinn (Owner) commented Feb 12, 2026

Summary

  • Auto-batch creation now derives base concurrency from CPU cores (cores / 2, min 2) instead of hardcoding 3
  • A 10-core Mac gets base 5, a 32-core server gets 16, a 64-core server gets 32
  • The existing adaptive scaling (calculate_adaptive_concurrency()) continues to adjust up/down from this base depending on actual CPU load
  • Updated AGENTS.md docs and supervisor help text to reflect the new behavior
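
The derivation above can be sketched in shell (a minimal sketch, not the repo's exact code; `get_cpu_cores` is the repo's helper, shown here with a portable `nproc`/`sysctl` stand-in):

```shell
# Base-concurrency rule: cores / 2, clamped to a minimum of 2.
# get_cpu_cores is the repo's helper; nproc/sysctl serve as a stand-in here.
get_cpu_cores() {
  nproc 2>/dev/null || sysctl -n hw.ncpu 2>/dev/null || echo 2
}

cores="$(get_cpu_cores)"
base=$((cores / 2))
if [ "$base" -lt 2 ]; then
  base=2
fi
echo "cores=$cores base=$base"
```

On a 10-core machine this prints `cores=10 base=5`, matching the scaling described in the summary.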

Motivation

The hardcoded --concurrency 3 in cmd_phase0_auto_dispatch() meant machines with significant available CPU and RAM were underutilized. This was observed on a 10-core Mac with 64GB RAM and ~55% idle CPU still running only 2-3 workers.

Scaling table

| Cores | Base | Adaptive range (light → heavy load) |
|-------|------|-------------------------------------|
| 2     | 2    | 4 → 1  |
| 4     | 2    | 4 → 1  |
| 8     | 4    | 8 → 2  |
| 10    | 5    | 10 → 2 |
| 16    | 8    | 16 → 4 |
| 32    | 16   | 32 → 8 |
| 64    | 32   | 64 → 16 |
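
The rows follow a simple pattern (inferred from the numbers above, not code quoted from the PR): the light-load ceiling is twice the base and the heavy-load floor is half the base, clamped to at least 1. A quick shell check reproduces the table:

```shell
# Reproduce the scaling table: base = cores/2 (min 2),
# light-load ceiling = 2*base, heavy-load floor = base/2 (min 1).
for cores in 2 4 8 10 16 32 64; do
  base=$((cores / 2))
  if [ "$base" -lt 2 ]; then base=2; fi
  light=$((base * 2))
  heavy=$((base / 2))
  if [ "$heavy" -lt 1 ]; then heavy=1; fi
  printf '%2d cores: base %2d, range %d -> %d\n' "$cores" "$base" "$light" "$heavy"
done
```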

Testing

  • bash -n syntax check: passed
  • ShellCheck on changed lines: no warnings
  • Arithmetic verified for core counts 1-128
  • Pre-existing secretlint warnings (proxy-integration.md) unrelated to this change

Summary by CodeRabbit

  • Improvements

    • Auto-batch concurrency now adapts dynamically based on system CPU cores (cores/2, minimum 2) instead of a fixed value.
  • Documentation

    • Updated documentation to reflect adaptive concurrency behavior in auto-batching.

t321: Derive auto-batch concurrency from CPU cores instead of hardcoding 3

Auto-batch creation in cmd_phase0_auto_dispatch() now uses cores/2 (min 2)
as the base concurrency. A 10-core Mac gets 5, a 32-core server gets 16.
The existing adaptive scaling adjusts up/down from this base depending on
actual CPU load. Previously hardcoded to 3 regardless of available resources.

Closes t321
@gemini-code-assist

Summary of Changes

Hello @marcusquinn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the efficiency of auto-batched task processing by transitioning from a fixed concurrency setting to a dynamic, resource-aware approach. By calculating the initial concurrency based on the system's CPU core count, the system can better leverage available hardware, preventing underutilization and allowing for more efficient task execution, especially on multi-core machines.

Highlights

  • Dynamic Auto-Batch Concurrency: Auto-batch creation now dynamically determines its base concurrency by dividing the number of CPU cores by two, with a minimum of two, replacing the previous hardcoded value of three.
  • Improved Resource Utilization: This change addresses the underutilization of resources on machines with many CPU cores by allowing auto-batches to scale their initial concurrency more effectively.
  • Adaptive Scaling Integration: The existing adaptive scaling mechanism will continue to adjust the concurrency up or down from this new dynamically calculated base, depending on the actual CPU load.
  • Documentation Updates: The AGENTS.md documentation and the supervisor's help text have been updated to accurately reflect the new auto-batch concurrency behavior.
Changelog
  • .agents/AGENTS.md
    • Updated the description of auto-dispatch to reflect the new concurrency calculation.
  • .agents/scripts/supervisor-helper.sh
    • Implemented logic to derive auto-batch concurrency from CPU cores (cores / 2, min 2).
    • Modified the cmd_batch call to use the calculated auto_base_concurrency.
    • Updated the supervisor's help text to explain the new dynamic auto-batching concurrency.
Activity
  • No specific activity (comments, reviews, progress) has been recorded for this pull request yet.


coderabbitai bot commented Feb 12, 2026


Walkthrough

The changes update auto-dispatch batching to dynamically calculate concurrency based on CPU cores (cores/2, minimum 2) instead of using a fixed value of 3. This allows batches to automatically scale with system resources while maintaining backward compatibility with a minimum threshold.

Changes

| Cohort / File(s) | Summary |
|------------------|---------|
| Auto-dispatch concurrency scaling: `.agents/AGENTS.md`, `.agents/scripts/supervisor-helper.sh` | Replaced hardcoded `--concurrency 3` with a dynamic calculation using CPU cores (cores/2, minimum 2). Updated documentation to reflect adaptive concurrency behavior and added a reference to related task tracking. |

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

🔄 Cores divided, batches align,
Two threads per processor, oh how fine!
No longer fixed at three and done—
Let systems dance with cores they've won! ⚙️

🚥 Pre-merge checks: ✅ 3 passed

| Check name | Status | Explanation |
|------------|--------|-------------|
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title 'Derive auto-batch concurrency from CPU cores' directly and specifically describes the main change in the PR: replacing the hardcoded concurrency of 3 with a dynamic CPU-core-based calculation. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%. |

@github-actions

🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report

[INFO] Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 15 code smells

[INFO] Recent monitoring activity:
Thu Feb 12 14:29:39 UTC 2026: Code review monitoring started
Thu Feb 12 14:29:39 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 15

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 15
  • VULNERABILITIES: 0

Generated on: Thu Feb 12 14:29:41 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring

coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In @.agents/scripts/supervisor-helper.sh:
- Around line 13082-13099: The new local variables are declared without
assignment; change them to use the local var="..." pattern so they’re
initialized at declaration (e.g. declare auto_batch_name, task_csv, auto_cores,
auto_base_concurrency, auto_batch_id with their initial values instead of
separate assignment lines). Specifically, set local auto_batch_name="auto-$(date
+%Y%m%d-%H%M%S)", local task_csv="$(echo "$unbatched_queued" | tr '\n' ',' | sed
's/,$//')", local auto_cores="$(get_cpu_cores)", local
auto_base_concurrency="$((auto_cores / 2))" (then clamp if <2 as before), and
local auto_batch_id="$(cmd_batch "$auto_batch_name" --concurrency
"$auto_base_concurrency" --tasks "$task_csv" 2>/dev/null)"; keep the existing
logic for clamping and conditional checks around these variables
(functions/commands referenced: get_cpu_cores, cmd_batch, auto_batch_name,
auto_base_concurrency, task_csv, auto_batch_id).
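
The declare-and-assign pattern the comment requests might look like the following sketch (`get_cpu_cores` and `cmd_batch` are the repo's functions, stubbed here so the snippet runs standalone; the surrounding dispatch logic is elided):

```shell
# Declare locals with their initial values in one statement, per the comment.
get_cpu_cores() { echo 10; }           # stub for the repo's CPU-count helper
cmd_batch() { echo "batch-0001"; }     # stub for the repo's batch creator
unbatched_queued=$'t322\nt323'         # example queued-task list

create_auto_batch() {
  local auto_batch_name="auto-$(date +%Y%m%d-%H%M%S)"
  local task_csv="$(echo "$unbatched_queued" | tr '\n' ',' | sed 's/,$//')"
  local auto_cores="$(get_cpu_cores)"
  local auto_base_concurrency="$((auto_cores / 2))"
  if [ "$auto_base_concurrency" -lt 2 ]; then auto_base_concurrency=2; fi
  local auto_batch_id="$(cmd_batch "$auto_batch_name" \
    --concurrency "$auto_base_concurrency" --tasks "$task_csv" 2>/dev/null)"
  echo "$auto_batch_id concurrency=$auto_base_concurrency tasks=$task_csv"
}

create_auto_batch
```

Note that `local var="$(cmd)"` masks `cmd`'s exit status (ShellCheck SC2155), which is an accepted trade-off here since the comment explicitly asks for initialization at declaration.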
🧹 Nitpick comments (1)
.agents/AGENTS.md (1)

116-116: Consider moving the detailed concurrency description out of root AGENTS.md.

This file already exceeds the recommended instruction count; keeping high-level guidance here and linking to workflows/plans.md would better fit the limit.
Based on learnings "Applies to **/AGENTS.md : Limit root AGENTS.md to ~50-100 max instructions with universal applicability to >80% of tasks".


gemini-code-assist bot left a comment

Code Review

This pull request introduces a sensible change to derive the auto-batch concurrency from the number of CPU cores, which should lead to better resource utilization on more powerful machines. The implementation is straightforward and the documentation updates in AGENTS.md and the script's help text are clear and helpful.

I've added a couple of minor suggestions to group the local variable declarations in supervisor-helper.sh for improved readability and maintainability. Overall, this is a solid improvement.

```shell
# A 10-core Mac gets 5, a 32-core server gets 16, etc.
# The adaptive scaling in calculate_adaptive_concurrency() then
# adjusts up/down from this base depending on actual load.
local auto_cores="$(get_cpu_cores)"
```

Severity: medium

For better readability and to follow the practice of declaring variables at the top of their scope, it's good to group local variable declarations. You can declare auto_batch_id here along with auto_cores and auto_base_concurrency and remove the separate declaration on line 13097.

Suggested change:

```diff
-local auto_cores="$(get_cpu_cores)"
+local auto_cores auto_base_concurrency auto_batch_id
```

```diff
 fi
 local auto_batch_id
-auto_batch_id=$(cmd_batch "$auto_batch_name" --concurrency 3 --tasks "$task_csv" 2>/dev/null)
+auto_batch_id=$(cmd_batch "$auto_batch_name" --concurrency "$auto_base_concurrency" --tasks "$task_csv" 2>/dev/null)
```

Severity: medium

Since auto_batch_id is now declared on line 13091 with the other local variables, this line is redundant and can be removed.


@marcusquinn marcusquinn merged commit 5c122e1 into main Feb 12, 2026
15 checks passed
marcusquinn added a commit that referenced this pull request Feb 12, 2026
@marcusquinn marcusquinn deleted the feature/t321-auto-batch-concurrency branch February 21, 2026 01:59