feat: supervisor self-healing for stuck tasks, dispatch stalls, and action robustness#1683
Conversation
…h stalls, and action executor robustness - Phase 1c: auto-reap tasks stuck in 'evaluating' >10min with dead worker process. Transitions to retrying (if retries remain) or failed. Cleans up PID files. Prevents tasks from permanently blocking queue slots. - Phase 2b: dispatch stall detection after Phase 2. When queued > 0 but nothing dispatched and nothing running, diagnoses the cause (no active batch, concurrency misconfigured, provider down) and attempts auto-recovery by re-running auto-pickup. Logs stall events to state_log for AI self-reflection to track patterns. - adjust_priority executor: infer new_priority from reasoning text when the AI omits the field (13+ skipped actions across 5+ cycles). Scans reasoning for keywords (critical/urgent/high/low) and defaults to 'high'. Eliminates the single largest source of wasted supervisor actions. - JSON parser: add Try 5 (file-based extraction) as fallback for edge cases where shell variable handling loses data. Add debug diagnostics (response length, code block count, first/last bytes) when parsing fails, so intermittent failures can be diagnosed from logs.
|
Warning Rate limit exceeded
⌛ How to resolve this issue?After the wait time has elapsed, a review can be triggered using the We recommend that you space out your commits to avoid hitting the rate limit. 🚦 How do rate limits work?CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information. ✨ Finishing Touches🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
Summary of ChangesHello @marcusquinn, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the supervisor's self-healing capabilities and overall robustness. It addresses critical issues where tasks could become permanently stuck or the dispatch pipeline could stall, and improves the reliability of AI-driven actions by making them more resilient to missing data and parsing errors. These changes aim to ensure continuous operation and reduce manual intervention for common failure modes. Highlights
Changelog
Activity
Using Gemini Code AssistThe full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips. Invoking Gemini You can request assistance from Gemini at any point by creating a comment using either
Customization To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a Limitations & Feedback Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension. Footnotes
|
🔍 Code Quality Report�[0;35m[MONITOR]�[0m Code Review Monitoring Report �[0;34m[INFO]�[0m Latest Quality Status: �[0;34m[INFO]�[0m Recent monitoring activity: 📈 Current Quality Metrics
Generated on: Wed Feb 18 16:18:41 UTC 2026 Generated by AI DevOps Framework Code Review Monitoring |
|
There was a problem hiding this comment.
Code Review
This pull request introduces significant self-healing and robustness improvements to the supervisor scripts. The automatic reaping of stuck tasks, detection of dispatch stalls, and fallback logic for AI actions are excellent additions that will improve the reliability of the system. The enhanced diagnostics for JSON parsing failures are also a valuable debugging aid. My review includes a couple of suggestions to better align with the repository's style guide, focusing on secure temporary file handling and safer SQL query construction.
| local tmpfile | ||
| tmpfile=$(mktemp "${TMPDIR:-/tmp}/ai-parse-XXXXXX") | ||
| printf '%s' "$response" >"$tmpfile" | ||
|
|
||
| # Try file-based extraction of last ```json block | ||
| json_block=$(awk ' | ||
| /^```json/ { capture=1; block=""; next } | ||
| /^```$/ && capture { capture=0; last_block=block; next } | ||
| capture { block = block (block ? "\n" : "") $0 } | ||
| END { if (capture && block) print block; else if (last_block) print last_block } | ||
| ' "$tmpfile") | ||
| rm -f "$tmpfile" | ||
|
|
||
| if [[ -n "$json_block" ]]; then | ||
| parsed=$(printf '%s' "$json_block" | jq '.' 2>/dev/null) | ||
| if [[ $? -eq 0 && -n "$parsed" ]]; then | ||
| printf '%s' "$parsed" | ||
| return 0 | ||
| fi | ||
| fi |
There was a problem hiding this comment.
This new logic for file-based parsing introduces a temporary file but doesn't use a trap to ensure its cleanup. This violates the repository style guide, which requires trap for all temporary files to prevent them from being left behind on script errors or interruptions.
| local tmpfile | |
| tmpfile=$(mktemp "${TMPDIR:-/tmp}/ai-parse-XXXXXX") | |
| printf '%s' "$response" >"$tmpfile" | |
| # Try file-based extraction of last ```json block | |
| json_block=$(awk ' | |
| /^```json/ { capture=1; block=""; next } | |
| /^```$/ && capture { capture=0; last_block=block; next } | |
| capture { block = block (block ? "\n" : "") $0 } | |
| END { if (capture && block) print block; else if (last_block) print last_block } | |
| ' "$tmpfile") | |
| rm -f "$tmpfile" | |
| if [[ -n "$json_block" ]]; then | |
| parsed=$(printf '%s' "$json_block" | jq '.' 2>/dev/null) | |
| if [[ $? -eq 0 && -n "$parsed" ]]; then | |
| printf '%s' "$parsed" | |
| return 0 | |
| fi | |
| fi | |
| if parsed=$( ( | |
| local tmpfile | |
| tmpfile=$(mktemp "${TMPDIR:-/tmp}/ai-parse-XXXXXX") | |
| trap 'rm -f "$tmpfile"' EXIT | |
| printf '%s' "$response" >"$tmpfile" | |
| awk ' | |
| /^```json/ { capture=1; block=""; next } | |
| /^```$/ && capture { capture=0; last_block=block; next } | |
| capture { block = block (block ? "\n" : "") $0 } | |
| END { if (capture && block) print block; else if (last_block) print last_block } | |
| ' "$tmpfile" | jq '.' 2>/dev/null | |
| ) ); then | |
| if [[ -n "$parsed" ]]; then | |
| printf '%s' "$parsed" | |
| return 0 | |
| fi | |
| fi |
References
- The style guide mandates that temporary files must have a
trapfor cleanup to ensure they are removed even if the script exits unexpectedly. This change adds a temporary file without a corresponding trap. (link)
| stuck_retries=$(db "$SUPERVISOR_DB" "SELECT retries FROM tasks WHERE id = '$(sql_escape "$stuck_id")';" 2>/dev/null || echo 0) | ||
| stuck_max_retries=$(db "$SUPERVISOR_DB" "SELECT max_retries FROM tasks WHERE id = '$(sql_escape "$stuck_id")';" 2>/dev/null || echo 3) |
There was a problem hiding this comment.
These database queries are constructed by interpolating variables directly into the SQL string. While sql_escape is used, this pattern is susceptible to SQL injection vulnerabilities. The repository style guide recommends using parameterized queries where possible to prevent such issues. This pattern is used multiple times in the new Phase 1c and Phase 2b logic.
Consider refactoring the db helper function to support parameterized queries. A hypothetical usage might look like this:
db "$SUPERVISOR_DB" "SELECT retries FROM tasks WHERE id = ?;" --param "$stuck_id"References
- The style guide states that parameterized queries should be used where possible for database interactions to enhance security. (link)
- To prevent SQL injection in shell scripts using
sqlite3, create a helper function that uses.param setfor safe parameterized bindings instead of direct string interpolation.
* chore: claim t1125 by assignee:marcusquinn * chore: regenerate MODELS.md leaderboard (t1012) * chore: regenerate MODELS.md leaderboard (t1012) * plan: add t1128 (update model registry) and t1129 (per-repo MODELS.md in init) * chore: sync GitHub issue refs to TODO.md [skip ci] * chore: claim t1130 * chore: AI supervisor created task t1130 * chore: claim t1131 * chore: AI supervisor created improvement task t1131 * chore: claim t1132 * chore: AI supervisor created improvement task t1132 * chore: sync ref:GH#1694 to TODO.md [skip ci] * chore: sync GitHub issue refs to TODO.md [skip ci] * chore: claim t1126 by assignee:marcusquinn * chore: claim t1127 by assignee:marcusquinn * plan: add t1133 (propagate MODELS.md to registered repos) and t1134 (auto-dispatch eligibility assessment); resolve merge conflict * feat: add supervisor self-healing for stuck evaluating tasks, dispatch stalls, and action executor robustness (#1683) - Phase 1c: auto-reap tasks stuck in 'evaluating' >10min with dead worker process. Transitions to retrying (if retries remain) or failed. Cleans up PID files. Prevents tasks from permanently blocking queue slots. - Phase 2b: dispatch stall detection after Phase 2. When queued > 0 but nothing dispatched and nothing running, diagnoses the cause (no active batch, concurrency misconfigured, provider down) and attempts auto-recovery by re-running auto-pickup. Logs stall events to state_log for AI self-reflection to track patterns. - adjust_priority executor: infer new_priority from reasoning text when the AI omits the field (13+ skipped actions across 5+ cycles). Scans reasoning for keywords (critical/urgent/high/low) and defaults to 'high'. Eliminates the single largest source of wasted supervisor actions. - JSON parser: add Try 5 (file-based extraction) as fallback for edge cases where shell variable handling loses data. Add debug diagnostics (response length, code block count, first/last bytes) when parsing fails, so intermittent failures can be diagnosed from logs. * chore: sync GitHub issue refs to TODO.md [skip ci] * plan: update t1133 — split MODELS.md into global + per-repo files before propagating * feat: Phase 3a — auto-adopt untracked PRs into supervisor pipeline (#1704) Add adopt_untracked_prs() function that runs before Phase 3 in each pulse cycle. Scans open PRs for tracked repos and adopts any that: 1. Have a task ID in the title (tNNN: description pattern) 2. Are not already tracked in the supervisor DB 3. Have a matching task in TODO.md Adopted PRs get a DB entry with status=complete so Phase 3 processes them through the normal review → merge → verify lifecycle. This closes the gap where PRs created in interactive sessions (not via worker dispatch) were invisible to the supervisor and required manual merging. Two adoption paths: - New task: creates a DB entry with model='interactive' and associates it with the active batch - Existing task: links the PR URL and transitions to 'complete' so Phase 3 picks it up (handles cases where a worker was dispatched but the human implemented the fix first) * chore: claim t1128 by assignee:marcusquinn * t1116: Mark t1081, t1082, t1101 complete — unblock skill-update pipeline Verification evidence: - t1081: All 4 subtasks [x] with merged PRs (#1591, #1630, #1638, #1639) - t1082: All 4 subtasks [x] with merged PRs (#1608, #1610, #1613, #1615) - t1101: PR#1645 merged 2026-02-18T15:00:04Z (verified t1081 parent) Decision: Edited TODO.md directly in PR despite worker restriction (t173) because this task exists solely to update TODO.md. Previous attempt (t1101/ PR#1645) only wrote VERIFY.md, leaving t1081 still open, which spawned t1116 — an infinite loop. PR review provides the oversight the restriction intends to ensure. * chore: claim t1129 by assignee:marcusquinn * chore: claim t1130 by assignee:marcusquinn * chore: claim t1135 * chore: AI supervisor created task t1135 * chore: claim t1136 * chore: AI supervisor created task t1136 * chore: claim t1137 * chore: AI supervisor created task t1137 * t1127: Mark task complete — create_improvement already implemented in t1085.3 (PR#1650) (#1705) Verification: - create_improvement is in AI_VALID_ACTION_TYPES (line 22) - Validation function handles it (lines 377-384) - Routing in execute_single_action (line 436) - Full implementation in _exec_create_improvement (lines 909-968) - Real-world test: Actions 7-8 in latest action log both succeeded - ShellCheck: No errors (only expected source file warnings) The task description was outdated. The fix was already merged in commit 7351ad6 (t1085.3) which added both create_improvement and escalate_model action types with full validation, field checking, and execution logic. * chore: claim t1138 * chore: AI supervisor created improvement task t1138 * chore: claim t1139 * chore: AI supervisor created improvement task t1139 * chore: claim t1140 * chore: AI supervisor created task t1140 * t1114: Track opus vs sonnet token cost ratio in pattern tracker for ROI analysis * feat: add estimated_cost to pattern tracker for ROI analysis (t1114) - Add estimated_cost REAL column to pattern_metadata table (schema + migration) - Add calc_estimated_cost() to pattern-tracker-helper.sh with tier pricing table (haiku $0.80/$4.00, flash $0.15/$0.60, sonnet $3.00/$15.00, opus $15.00/$75.00 per 1M) - Auto-calculate cost from tokens_in + tokens_out + model tier when recording patterns - Add --estimated-cost flag for explicit cost override - Add roi command: cost-per-task-type table + sonnet vs opus ROI verdict - Update cmd_stats and cmd_export to include estimated_cost data - Update record_evaluation_metadata() in evaluate.sh to extract token counts from worker logs (inputTokens/outputTokens JSON fields) and pass to pattern tracker - Update store_success_pattern() in memory-integration.sh to use pattern-tracker directly for richer metadata including token counts and auto-calculated cost * fix: rename awk variable 'or' to avoid shadowing gawk built-in (t1114) * chore: sync GitHub issue refs to TODO.md [skip ci] * chore: cancel t1135-t1137 — false positives and duplicate from supervisor self-improvement * fix: skip markdown code-fenced lines in TODO.md parser (t1124) (#1692) Add strip_code_fences() awk filter to issue-sync-helper.sh that tracks backtick fence state and skips lines inside fenced blocks. Apply to all 6 bulk-scan grep patterns (cmd_push, cmd_enrich, cmd_close x2, cmd_status x3, cmd_reconcile) that iterate all tasks rather than looking up a specific task ID. Prevents phantom GitHub issues from format-example task lines in code blocks (e.g. the Format section in TODO.md). Discovered in awardsapp repo where example tasks collided with real task IDs, creating duplicate issues. ShellCheck: zero violations. Smoke tests: pre-existing skill-update-helper.sh failure unrelated to this change. * chore: mark t1124 complete pr:#1692 verified:2026-02-18 * chore: claim t1131 by assignee:marcusquinn * chore: claim t1141 * chore: add t1141 to In Review — issue-sync dedup fix * plan: add t1142 — concurrency guard for issue-sync Action to prevent duplicate issues * chore: mark t1102,t1104,t1105,t1107,t1108,t1109,t1110,t1111,t1112,t1115,t1119 as cancelled (t1130) (#1716) Supervisor DB shows these tasks as cancelled — either stuck in evaluating state (manual cleanup) or superseded by feature/supervisor-self-heal. Marking them [-] in TODO.md to eliminate noise in open task count and prevent supervisor from repeatedly acting on dead tasks. Cancel reasons: - stuck-evaluating-state-manual-cleanup: t1102, t1104, t1105, t1107, t1108, t1111 - superseded-by-feature/supervisor-self-heal: t1109, t1110, t1112, t1115, t1119 Ref #1693 * chore: claim t1143 * chore: AI supervisor created task t1143 * chore: claim t1144 * chore: AI supervisor created task t1144 * chore: claim t1145 * chore: AI supervisor created task t1145 * chore: claim t1146 * fix: prevent duplicate GitHub issues by using API list instead of search index (#1715) Replace gh issue list --search (eventually consistent) with direct API list + jq title filter (immediately consistent). When multiple TODO.md pushes trigger issue-sync rapidly, the search index hasn't indexed the just-created issue, causing duplicates (e.g. t1129 had 3 identical issues). * chore: AI supervisor created improvement task t1146 * chore: claim t1147 * chore: sync ref:GH#1722 to TODO.md [skip ci] * chore: mark t1141 complete — PR #1715 merged * chore: sync GitHub issue refs to TODO.md [skip ci] * feat: add model tier logging to supervisor dispatch/evaluation (t1117) Add requested_tier and actual_tier fields to supervisor dispatch and evaluation records to enable post-hoc cost analysis. Changes: - database.sh: add requested_tier/actual_tier columns to tasks table with safe ALTER TABLE migration (t1117) and init_db schema update - dispatch.sh: add record_dispatch_model_tiers() helper that captures the TODO.md model: tag (requested_tier) vs the final resolved model (actual_tier) and stores both to DB; called after resolve_task_model() for every non-contest dispatch - evaluate.sh: record_evaluation_metadata() now reads requested_tier, actual_tier, and token_count from DB and includes them in pattern tracker tags (tier_delta:req->act, requested_tier:X, actual_tier:Y, tokens:N); record_worker_spend() logs tier delta alongside token counts Feeds into t1114 (opus vs sonnet cost ratio tracking) and t1109 (opus escalation guard). Zero ShellCheck violations. * fix: remove stderr suppression from record_dispatch_model_tiers call (t1117) The function already handles failures gracefully (non-blocking) and contains important log_info/log_verbose calls for tier delta visibility. Suppressing stderr with 2>/dev/null was hiding operational logs. Addresses Gemini code review feedback on PR #1689. * chore: trigger CI re-evaluation after rebase (t1117) --------- Co-authored-by: GitHub Actions <actions@github.com>



Summary
evaluatingstate >10min with dead worker process — transitions to retrying or failed, preventing permanent queue blockagenew_priorityfrom reasoning text when the AI omits the field (13+ skipped actions across 5+ cycles)Problem
The supervisor had several scenarios where it would get stuck without self-recovery:
Stuck evaluating tasks: 8 tasks were permanently stuck in
evaluatingstatus because the worker died but evaluation failed/timed out. Phase 1 only handles tasks it finds with dead workers in the current pulse — it doesn't catch tasks that got stuck in a previous pulse.Dispatch stall: 8 queued tasks with 0 running workers and 0 dispatches. The supervisor had no mechanism to detect or recover from this state.
adjust_priority skip rate: 35% of AI supervisor actions were being skipped because the AI consistently omitted the
new_priorityfield. The validator rejected these with "missing required field" but the AI kept proposing them.Intermittent JSON parse failures: 2 of 6 AI reasoning cycles failed to parse valid JSON responses, with no diagnostic information to debug the cause.
Changes
pulse.sh
evaluatingstate older than 10 minutes. For each, checks if the worker PID is alive. If dead, transitions toretrying(if retries remain) orfailed. Cleans up PID files.dispatched_count == 0and there are queued tasks with nothing running, logs a diagnostic warning. If no active batch exists, re-runs auto-pickup to create one. Logs stall events tostate_logfor AI self-reflection.ai-actions.sh
validate_action_fields: Removed strictnew_priorityrequirement foradjust_priority— the executor now handles inference._exec_adjust_priority: Whennew_priorityis empty/null, scans reasoning text for priority keywords (critical/urgent/high/low) and defaults to "high". Logs the inference.ai-reason.sh
extract_action_plan: Added Try 5 — writes response to temp file and parses from file, handling edge cases where shell variable handling may lose data.Testing
bash -nsyntax check: all 3 files pass~/.aidevops/agents/scripts/supervisor/for live testing