
feat: bidirectional adaptive concurrency - scale up when resources available#637

Merged
marcusquinn merged 2 commits into main from feature/adaptive-concurrency
Feb 8, 2026

Conversation

@marcusquinn
Owner

@marcusquinn marcusquinn commented Feb 8, 2026

Summary

  • Adaptive concurrency now scales UP (2x base) when system load is below 50% of CPU cores, not just throttles down
  • New --max-concurrency flag on batch command sets a hard cap (defaults to cpu_cores)
  • Pulse summary shows adaptive vs base concurrency with color-coded scaling indicators
  • DB migration adds max_concurrency column to existing databases
  • Min floor reduced from 6 to 1 (allows single-worker batches when system is overloaded)

Previously, a batch with --concurrency 3 could never exceed 3 workers even when the machine was idle. Now it auto-scales to 6 (or up to cpu_cores) when resources are available, and throttles back down under load.
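As a rough illustration of the rule described above, the bidirectional calculation can be sketched as a pure shell function. This is a reconstruction from the PR description, not the actual supervisor-helper.sh code: the argument order, the integer load value, and the exact throttle-down behavior under heavy load are assumptions.

```shell
# Hypothetical sketch of bidirectional adaptive concurrency.
# Args: base concurrency, 1-minute load (integer), CPU cores, cap (0 = auto).
calculate_adaptive_concurrency() {
    local base=$1 load=$2 cores=$3 cap=${4:-0}
    if [ "$cap" -le 0 ]; then cap=$cores; fi       # 0 means auto: cap at CPU cores
    local load_ratio=$(( load * 100 / cores ))     # load as a percent of cores
    local result
    if [ "$load_ratio" -lt 50 ]; then
        result=$(( base * 2 ))                     # light load: scale up to 2x base
    elif [ "$load_ratio" -lt 100 ]; then
        result=$base                               # moderate load: hold at base
    else
        result=$(( base / 2 ))                     # heavy load: throttle down (assumed)
    fi
    if [ "$result" -gt "$cap" ]; then result=$cap; fi  # never exceed the hard cap
    if [ "$result" -lt 1 ]; then result=1; fi          # min floor now 1 (was 6)
    echo "$result"
}
```

With this sketch, a batch created with base 3 on an idle 8-core machine would get 6 workers, while a cap of 4 would hold it at 4.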

Summary by CodeRabbit

Release Notes

  • New Features
    • Added --max-concurrency option to batch commands for setting per-batch concurrency limits
    • Enhanced batch status output to display adaptive concurrency information, including base concurrency, applied cap, and computed scaled values
    • Improved batch lifecycle messaging to reflect concurrency control semantics

…ailable

Previously, adaptive concurrency only throttled DOWN from the batch base.
Now it also scales UP (2x base) when system load is below 50% of CPU cores.

Changes:
- calculate_adaptive_concurrency() scales up when load_ratio < 50
- New max_concurrency column on batches table (hard cap, default: cpu_cores)
- cmd_batch accepts --max-concurrency flag
- Pulse summary shows adaptive vs base concurrency with color coding
- Batch status displays adaptive concurrency info
- DB migration adds max_concurrency column to existing databases
- Min floor reduced from 6 to 1 (allows single-worker batches)
@coderabbitai
Contributor

coderabbitai bot commented Feb 8, 2026

Warning

Rate limit exceeded

@marcusquinn has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 18 minutes and 19 seconds before requesting another review.

After the wait time has elapsed, a review can be triggered with the @coderabbitai review command as a PR comment, or by pushing new commits.

Walkthrough

This change adds per-batch maximum concurrency control to the supervisor automation framework. It introduces a new max_concurrency parameter to batch operations via CLI, updates the database schema with a corresponding column, threads the cap through concurrency calculations, adds migration logic, and enhances status outputs to display concurrency metrics.

Changes

  • Database Schema & Migrations (max_concurrency column additions): Added max_concurrency INTEGER NOT NULL DEFAULT 0 to the batches table; updated the migration path to ensure the column is present and populate missing values; batch insert statements now capture the max_concurrency parameter.
  • Batch Lifecycle & CLI Interface (cmd_batch() command enhancements): Added the --max-concurrency N option to the batch creation command; updated help/usage text; integrated the parameter into the batch creation workflow and internal state variables.
  • Concurrency Calculation Logic (calculate_adaptive_concurrency() and related call sites): Extended the function signature to accept max_concurrency_cap as a third argument; implemented capping logic that falls back to the CPU core count when the cap is ≤ 0; updated all callers (cmd_next, cmd_dispatch, cmd_status) to pass the batch-specific cap.
  • Status & Monitoring Outputs (batch status display and reporting): Enhanced status output to show base concurrency, adaptive values, and the max concurrency cap; updated batch lifecycle messaging to reflect the new concurrency semantics.
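The "ensure column presence" migration step in the walkthrough above might look roughly like this. The db helper wrapping sqlite3 and the pragma_table_info query are assumptions based on the diff excerpts quoted in the review; the actual script's migration and error handling may differ.

```shell
# Hypothetical sketch of the idempotent migration step; `db` is assumed to be
# the repo's sqlite3 wrapper taking (db-file, sql). Names are illustrative.
ensure_max_concurrency_column() {
    local dbfile=$1
    local has_col
    # Count matching columns rather than parsing PRAGMA output by hand
    has_col=$(db "$dbfile" "SELECT COUNT(*) FROM pragma_table_info('batches') WHERE name='max_concurrency';")
    if [ "${has_col:-0}" -eq 0 ]; then
        # 0 = auto-detect from cpu_cores, per the PR's column default
        db "$dbfile" "ALTER TABLE batches ADD COLUMN max_concurrency INTEGER NOT NULL DEFAULT 0;" || return 1
    fi
}
```

Running this twice is safe: the second call sees the column and skips the ALTER TABLE.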

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🔧 Batches now embrace their concurrency ceiling,
Each cap wired through with graceful feeling,
Adaptive minds meet maximum will,
Status speaks truth of each batch's skill.

🚦 Pre-merge checks: ✅ 3 passed

  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately summarizes the main feature: bidirectional adaptive concurrency with the ability to scale up when resources are available.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.


@gemini-code-assist

Summary of Changes

Hello @marcusquinn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request significantly enhances the system's adaptive concurrency management by introducing bidirectional scaling. Previously, concurrency only throttled down under load; now, it can intelligently scale up when system resources are underutilized. This change includes a new command-line option to define a maximum concurrency limit, updates to the database schema to persist this setting, and improved status reporting to visualize the adaptive scaling in action. The overall impact is better resource utilization and more efficient batch processing.

Highlights

  • Bidirectional Adaptive Concurrency: The system's adaptive concurrency now scales UP (2x base) when system load is below 50% of CPU cores, in addition to throttling down under heavy load.
  • New --max-concurrency Flag: A new --max-concurrency flag has been added to the batch command, allowing users to set a hard upper limit for concurrency, which defaults to the number of CPU cores.
  • Enhanced Pulse Summary: The pulse summary now provides a detailed view of adaptive vs. base concurrency, including color-coded indicators for scaling (scaled up, throttled, or effective).
  • Database Migration: A database migration has been included to add a max_concurrency column to the batches table, supporting the new concurrency cap feature.
  • Reduced Minimum Concurrency Floor: The minimum concurrency floor for adaptive scaling has been reduced from 6 to 1, allowing for single-worker batches even when the system is heavily overloaded.
Changelog
  • .agents/scripts/supervisor-helper.sh
    • The batch command now supports a --max-concurrency argument.
    • The calculate_adaptive_concurrency function has been refactored to implement bidirectional scaling: scaling up to double the base concurrency under light load, maintaining base concurrency under moderate load, and scaling down under heavy load.
    • The minimum concurrency floor in calculate_adaptive_concurrency has been lowered from 6 to 1.
    • A new max_concurrency_cap parameter was added to calculate_adaptive_concurrency, defaulting to CPU cores if not explicitly set.
    • Database migration logic was updated to add a max_concurrency column to the batches table.
    • The batches table creation statement now includes the max_concurrency column with a default of 0.
    • The cmd_batch function was updated to parse and store the new --max-concurrency value.
    • The cmd_status output for batches now includes adaptive concurrency details, showing base, adaptive, and capped values.
    • cmd_next and cmd_dispatch functions now retrieve and pass the batch's max_concurrency to the adaptive concurrency calculation.
    • The cmd_pulse output now provides a detailed, color-coded display of the current worker concurrency, indicating if it's scaled up, throttled, or at its effective base.
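The --max-concurrency parsing listed above, combined with the numeric guard suggested later in review, could look along these lines. parse_batch_flags is a hypothetical name; the real option handling lives inside cmd_batch and handles many more flags.

```shell
# Illustrative option parsing for --max-concurrency (0 = auto, cap at cpu_cores).
# This is a sketch, not the actual cmd_batch implementation.
parse_batch_flags() {
    max_concurrency=0   # default: auto
    while [ $# -gt 0 ]; do
        case "$1" in
            --max-concurrency)
                shift
                # Reject non-numeric or negative values before they reach SQL
                if ! [[ "$1" =~ ^[0-9]+$ ]]; then
                    echo "error: --max-concurrency must be a non-negative integer (0 = auto)" >&2
                    return 1
                fi
                max_concurrency=$1
                ;;
            *) ;;   # other batch options would be handled here
        esac
        shift
    done
}
```

Validating here also addresses the SQL-interpolation concern raised in the CodeRabbit review below.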

@github-actions

github-actions bot commented Feb 8, 2026

🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report

[INFO] Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 37 code smells

[INFO] Recent monitoring activity:
Sun Feb 8 16:58:10 UTC 2026: Code review monitoring started
Sun Feb 8 16:58:10 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 37

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 37
  • VULNERABILITIES: 0

Generated on: Sun Feb 8 16:58:13 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a significant enhancement with bidirectional adaptive concurrency, allowing the system to scale up workers when resources are available and throttle down under load. The implementation is robust, including a new --max-concurrency flag, corresponding database migrations, and updated status reporting. The logic for scaling is sound, and the reduction of the minimum concurrency floor is a practical improvement. The minor suggestions to improve the user-friendliness of the status output remain valid, and overall, this is a well-executed and valuable feature.

Referenced code (status output):

echo " ID: $bid"
echo " Status: $bstatus"
echo " Concurrency: $bconc"
echo " Concurrency: $bconc (adaptive: $badaptive, cap: ${bmax_conc:-auto})"

Severity: medium

The status output for max_concurrency could be more user-friendly. When bmax_conc is 0, it displays cap: 0, but 0 actually means 'auto' (capped at the number of CPU cores). This is inconsistent with the batch creation message, which shows max: auto. For clarity, display 'auto' when the cap is not explicitly set.

Suggested change:
-echo " Concurrency: $bconc (adaptive: $badaptive, cap: ${bmax_conc:-auto})"
+echo " Concurrency: $bconc (adaptive: $badaptive, cap: $(if [[ ${bmax_conc:-0} -gt 0 ]]; then echo ${bmax_conc}; else echo auto; fi))"

Referenced code (pulse output):

else
    adaptive_label="${adaptive_label} effective:${display_adaptive}"
fi
echo -e " ${BLUE}[SUPERVISOR]${NC} Workers: ${adaptive_label} (cap:${display_max:-auto})"

Severity: medium

The display for max_concurrency could be more user-friendly. When display_max is 0, it shows (cap:0), but 0 actually means 'auto' (capped at the number of CPU cores). This is inconsistent with the batch creation message, which shows max: auto. For clarity, display 'auto' when the cap is not explicitly set.

Suggested change:
-echo -e " ${BLUE}[SUPERVISOR]${NC} Workers: ${adaptive_label} (cap:${display_max:-auto})"
+echo -e " ${BLUE}[SUPERVISOR]${NC} Workers: ${adaptive_label} (cap:$(if [[ ${display_max:-0} -gt 0 ]]; then echo ${display_max}; else echo auto; fi))"

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
.agents/scripts/supervisor-helper.sh (2)

657-685: ⚠️ Potential issue | 🟠 Major

Handle max_concurrency migration failures explicitly

The migration currently ignores errors and logs success unconditionally. If ALTER TABLE fails (locked DB, I/O error), later queries for max_concurrency will break. Use log_cmd and branch on success like other migrations.

🔧 Suggested fix
 if [[ "$has_max_concurrency" -eq 0 ]]; then
     log_info "Migrating batches table: adding max_concurrency column..."
-    db "$SUPERVISOR_DB" "ALTER TABLE batches ADD COLUMN max_concurrency INTEGER NOT NULL DEFAULT 0;" 2>/dev/null || true
-    log_success "Added max_concurrency column to batches (0 = auto-detect from cpu_cores)"
+    if ! log_cmd "db-migrate" db "$SUPERVISOR_DB" "ALTER TABLE batches ADD COLUMN max_concurrency INTEGER NOT NULL DEFAULT 0;"; then
+        log_warn "Failed to add max_concurrency column (may already exist or DB locked)"
+    else
+        log_success "Added max_concurrency column to batches (0 = auto-detect from cpu_cores)"
+    fi
 fi

As per coding guidelines, Automation scripts - focus on: Reliability and robustness; Clear logging and feedback; Error recovery mechanisms.


1009-1068: ⚠️ Potential issue | 🟠 Major

Validate --max-concurrency before interpolating into SQL

max_concurrency is injected unquoted into the INSERT. A non‑numeric or negative value can break the insert (and it’s an easy injection vector). Add a numeric guard (0 = auto) before writing to SQLite.

✅ Suggested guard
     # Validate release_type
     case "$release_type" in
         major|minor|patch) ;;
         *) log_error "Invalid release type: $release_type (must be major, minor, or patch)"; return 1 ;;
     esac
+
+    if ! [[ "$max_concurrency" =~ ^[0-9]+$ ]]; then
+        log_error "--max-concurrency must be a non-negative integer (0 = auto)"
+        return 1
+    fi

As per coding guidelines, Automation scripts - focus on: Reliability and robustness.

🤖 Fix all issues with AI agents
In @.agents/scripts/supervisor-helper.sh:
- Around line 1595-1603: The display currently hard-codes max_load_factor "2"
and shows cap as 0; update the logic to query the batch's max_load_factor from
the DB (similar to how bmax_conc is fetched), pass that value into
calculate_adaptive_concurrency instead of the literal "2", and when printing the
cap map a value of 0 to the string "auto" (e.g. use ${bmax_load_factor:-0} and
display "auto" when it equals 0); apply the same change to the other occurrence
referenced (lines around calculate_adaptive_concurrency usage at 5157-5174) and
keep references to bconc, badaptive, bmax_conc, bname, bstatus, and
brelease_flag so the correct variables are updated.
- Line 10: Update the batch help text in .agents/scripts/supervisor-helper.sh by
adding a line for the --max-concurrency flag in the show_usage output so the
detailed "Options for 'batch'" matches the synopsis; specifically, in the
show_usage (or the function/variable that prints "Options for 'batch'") include
a description for --max-concurrency, its expected value, and note that 0 means
"auto" (and include any default if present), formatted consistently with the
existing option lines.

…_factor, add help text

- Display 'cap: auto' instead of 'cap: 0' when max_concurrency is unset
- Use actual batch max_load_factor instead of hardcoded '2' in status/pulse display
- Add --max-concurrency and --max-load to batch help text
@github-actions

github-actions bot commented Feb 8, 2026

🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report

[INFO] Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 37 code smells

[INFO] Recent monitoring activity:
Sun Feb 8 17:09:28 UTC 2026: Code review monitoring started
Sun Feb 8 17:09:29 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 37

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 37
  • VULNERABILITIES: 0

Generated on: Sun Feb 8 17:09:31 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring

@sonarqubecloud

sonarqubecloud bot commented Feb 8, 2026

@marcusquinn marcusquinn merged commit ae29581 into main Feb 8, 2026
11 checks passed
@marcusquinn marcusquinn deleted the feature/adaptive-concurrency branch February 21, 2026 01:59