feat: bidirectional adaptive concurrency - scale up when resources available #637

marcusquinn merged 2 commits into main
Conversation
Previously, adaptive concurrency only throttled DOWN from the batch base. Now it also scales UP (2x base) when system load is below 50% of CPU cores.

Changes:
- calculate_adaptive_concurrency() scales up when load_ratio < 50
- New max_concurrency column on batches table (hard cap, default: cpu_cores)
- cmd_batch accepts --max-concurrency flag
- Pulse summary shows adaptive vs base concurrency with color coding
- Batch status displays adaptive concurrency info
- DB migration adds max_concurrency column to existing databases
- Min floor reduced from 6 to 1 (allows single-worker batches)
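The bidirectional behaviour described above can be sketched as a small bash function. This is a hypothetical illustration only: the function name and the "scale up 2x when load_ratio < 50" rule come from the PR description, but the throttle formula, argument order, and variable names are assumptions, not the actual implementation.

```shell
#!/usr/bin/env bash
# Sketch of bidirectional adaptive concurrency (assumed signature:
# base concurrency, load ratio as percent of cores, cpu core count,
# optional hard cap where 0 means auto).
calculate_adaptive_concurrency() {
  local base=$1 load_ratio=$2 cpu_cores=$3 max_cap=${4:-0}
  local result=$base
  if (( load_ratio < 50 )); then
    result=$(( base * 2 ))                 # scale UP: load below 50% of cores
  elif (( load_ratio > 100 )); then
    result=$(( base * 100 / load_ratio ))  # throttle DOWN under load (assumed formula)
  fi
  local cap=$(( max_cap > 0 ? max_cap : cpu_cores ))  # 0 = auto: cap at cpu_cores
  (( result > cap )) && result=$cap
  (( result < 1 )) && result=1             # min floor of 1 (was 6)
  echo "$result"
}
```

For example, a batch with base 3 on an idle 8-core machine would scale to 6 workers, while the same batch with `--max-concurrency 4` would stop at 4.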
CodeRabbit walkthrough: This change adds per-batch maximum concurrency control to the supervisor automation framework.
Summary of Changes (Gemini Code Assist): This pull request enhances the system's adaptive concurrency management by introducing bidirectional scaling. Previously, concurrency only throttled down under load; now it can also scale up when system resources are underutilized. The change includes a new command-line option to define a maximum concurrency limit, updates to the database schema to persist this setting, and improved status reporting to visualize the adaptive scaling in action. The overall impact is better resource utilization and more efficient batch processing.
🔍 Code Quality Report (AI DevOps Framework Code Review Monitoring)
Generated on: Sun Feb 8 16:58:13 UTC 2026
Code Review
This pull request introduces a significant enhancement with bidirectional adaptive concurrency, allowing the system to scale up workers when resources are available and throttle down under load. The implementation is robust, including a new --max-concurrency flag, corresponding database migrations, and updated status reporting. The logic for scaling is sound, and the reduction of the minimum concurrency floor is a practical improvement. The minor suggestions to improve the user-friendliness of the status output remain valid, and overall, this is a well-executed and valuable feature.
.agents/scripts/supervisor-helper.sh
Outdated
```diff
  echo "  ID: $bid"
  echo "  Status: $bstatus"
- echo "  Concurrency: $bconc"
+ echo "  Concurrency: $bconc (adaptive: $badaptive, cap: ${bmax_conc:-auto})"
```
The status output for max_concurrency could be more user-friendly. When bmax_conc is 0, it will display cap: 0, but 0 actually means 'auto' (capped at the number of CPU cores). This is inconsistent with the batch creation message which shows max: auto. To improve clarity, it would be better to display 'auto' when the cap is not explicitly set.
```suggestion
echo "  Concurrency: $bconc (adaptive: $badaptive, cap: $(if [[ ${bmax_conc:-0} -gt 0 ]]; then echo ${bmax_conc}; else echo auto; fi))"
```
.agents/scripts/supervisor-helper.sh
Outdated
```shell
else
  adaptive_label="${adaptive_label} effective:${display_adaptive}"
fi
echo -e "  ${BLUE}[SUPERVISOR]${NC} Workers: ${adaptive_label} (cap:${display_max:-auto})"
```
The display for max_concurrency could be more user-friendly. When display_max is 0, it will show (cap:0), but 0 actually means 'auto' (capped at the number of CPU cores). This is inconsistent with the batch creation message which shows max: auto. To improve clarity, it would be better to display 'auto' when the cap is not explicitly set.
```suggestion
echo -e "  ${BLUE}[SUPERVISOR]${NC} Workers: ${adaptive_label} (cap:$(if [[ ${display_max:-0} -gt 0 ]]; then echo ${display_max}; else echo auto; fi))"
```
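Since the same cap-display logic appears in two call sites, one option (purely a suggestion, not part of the PR) is to factor the inline if-expression into a small helper; the function name here is hypothetical.

```shell
#!/usr/bin/env bash
# Hypothetical helper (not in the PR): maps the stored cap value to its
# user-facing form, treating 0 or unset as "auto" (capped at cpu_cores).
display_cap() {
  local cap=${1:-0}
  if (( cap > 0 )); then
    echo "$cap"
  else
    echo "auto"
  fi
}

# Call sites would then read, e.g.:
# echo "  Concurrency: $bconc (adaptive: $badaptive, cap: $(display_cap "$bmax_conc"))"
```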
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
.agents/scripts/supervisor-helper.sh (2)
Lines 657-685: ⚠️ Potential issue | 🟠 Major — Handle max_concurrency migration failures explicitly

The migration currently ignores errors and logs success unconditionally. If ALTER TABLE fails (locked DB, I/O error), later queries for `max_concurrency` will break. Use `log_cmd` and branch on success like other migrations.

🔧 Suggested fix

```diff
 if [[ "$has_max_concurrency" -eq 0 ]]; then
   log_info "Migrating batches table: adding max_concurrency column..."
-  db "$SUPERVISOR_DB" "ALTER TABLE batches ADD COLUMN max_concurrency INTEGER NOT NULL DEFAULT 0;" 2>/dev/null || true
-  log_success "Added max_concurrency column to batches (0 = auto-detect from cpu_cores)"
+  if ! log_cmd "db-migrate" db "$SUPERVISOR_DB" "ALTER TABLE batches ADD COLUMN max_concurrency INTEGER NOT NULL DEFAULT 0;"; then
+    log_warn "Failed to add max_concurrency column (may already exist or DB locked)"
+  else
+    log_success "Added max_concurrency column to batches (0 = auto-detect from cpu_cores)"
+  fi
 fi
```

As per coding guidelines, automation scripts should focus on reliability and robustness, clear logging and feedback, and error recovery mechanisms.
Lines 1009-1068: ⚠️ Potential issue | 🟠 Major — Validate `--max-concurrency` before interpolating into SQL

`max_concurrency` is injected unquoted into the INSERT. A non-numeric or negative value can break the insert (and it's an easy injection vector). Add a numeric guard (0 = auto) before writing to SQLite.

✅ Suggested guard

```diff
 # Validate release_type
 case "$release_type" in
   major|minor|patch) ;;
   *) log_error "Invalid release type: $release_type (must be major, minor, or patch)"; return 1 ;;
 esac
+
+if ! [[ "$max_concurrency" =~ ^[0-9]+$ ]]; then
+  log_error "--max-concurrency must be a non-negative integer (0 = auto)"
+  return 1
+fi
```

As per coding guidelines, automation scripts should focus on reliability and robustness.
🤖 Fix all issues with AI agents
In @.agents/scripts/supervisor-helper.sh:
- Around line 1595-1603: The display currently hard-codes max_load_factor "2"
and shows cap as 0; update the logic to query the batch's max_load_factor from
the DB (similar to how bmax_conc is fetched), pass that value into
calculate_adaptive_concurrency instead of the literal "2", and when printing the
cap map a value of 0 to the string "auto" (e.g. use ${bmax_load_factor:-0} and
display "auto" when it equals 0); apply the same change to the other occurrence
referenced (lines around calculate_adaptive_concurrency usage at 5157-5174) and
keep references to bconc, badaptive, bmax_conc, bname, bstatus, and
brelease_flag so the correct variables are updated.
- Line 10: Update the batch help text in .agents/scripts/supervisor-helper.sh by
adding a line for the --max-concurrency flag in the show_usage output so the
detailed "Options for 'batch'" matches the synopsis; specifically, in the
show_usage (or the function/variable that prints "Options for 'batch'") include
a description for --max-concurrency, its expected value, and note that 0 means
"auto" (and include any default if present), formatted consistently with the
existing option lines.
…_factor, add help text

- Display 'cap: auto' instead of 'cap: 0' when max_concurrency is unset
- Use actual batch max_load_factor instead of hardcoded '2' in status/pulse display
- Add --max-concurrency and --max-load to batch help text



Summary
- `--max-concurrency` flag on `batch` command sets a hard cap (defaults to cpu_cores)
- DB migration adds `max_concurrency` column to existing databases

Previously, a batch with `--concurrency 3` could never exceed 3 workers even when the machine was idle. Now it auto-scales to 6 (or up to cpu_cores) when resources are available, and throttles back down under load.

Summary by CodeRabbit
Release Notes

- `--max-concurrency` option to batch commands for setting per-batch concurrency limits