feat: supervisor review triage, compaction resilience, and quality subagents #446
Conversation
…fix t146 db() bug

- t148: Add review_triage state to supervisor post-PR lifecycle. Before merging, the supervisor now fetches unresolved review threads via GraphQL, classifies them by source (bot vs human) and severity, and blocks merge if human reviews or high-severity bot findings are unaddressed. Low-severity bot-only threads pass with a warning. Bypass with the --skip-review-triage flag.
- t146: Fix missing $SUPERVISOR_DB arg in db() calls on lines 3165/3183.
- Compaction resilience: the pulse cycle now writes pulse-checkpoint.md with full task state (running, queued, blocked, post-PR) so orchestrating AI sessions can re-orient after context compaction. The reprompt for clean_exit_no_signal retries now includes worktree git status and recent commits to help the retried worker pick up where the previous attempt left off.
- DB migration: adds review_triage to the status CHECK constraint.
- ShellCheck: zero new violations.
New script: session-checkpoint-helper.sh persists session state (current task, branch, worktree, progress) to disk so AI sessions can re-orient after context compaction. Integrated into session-manager.md workflow. ShellCheck: zero violations.
Document t{NNN}: title prefix convention for GitHub issues, add sync rule to AGENTS.md, update log-issue-aidevops.md with t-number assignment step.
…ts, and x-helper

- New subagents: backlink-checker (t070), voice-models (t071), transcription (t072), document-extraction (t073), terminal-optimization (t025), subscription-audit (t026), rapidfuzz (t014).
- New script: x-helper.sh (t033) for tweet fetching.
- ShellCheck: zero violations on x-helper.sh.
Walkthrough

This PR implements bidirectional GitHub issue synchronization with TODO.md tasks, adds session checkpoint persistence for context compaction resilience, introduces a review-triage gate in the PR lifecycle, and expands documentation for multiple tool and workflow capabilities.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Supervisor as Supervisor<br/>(post-PR)
    participant GH as GitHub API<br/>(GraphQL)
    participant Triage as Triage<br/>Decision
    participant Worker as Worker<br/>Dispatch
    participant GHSync as GitHub<br/>Sync
    Supervisor->>GH: check_review_threads(PR_id)
    GH-->>Supervisor: unresolved threads + authors
    Supervisor->>Triage: triage_review_feedback(threads)
    Triage-->>Supervisor: {triage_pass, triage_block, severity}
    alt Triage: Block (High/Critical)
        Supervisor->>Worker: dispatch fix or dismiss task
        Worker-->>Supervisor: task_id (t{NNN})
        Supervisor->>GHSync: sync_github_issue(task_id, blocked)
        GHSync-->>GH: comment + link TODO
    else Triage: Pass (Resolved/Low)
        Supervisor->>GHSync: sync_github_issue(task_id, proceeding)
        GHSync-->>GH: comment
        Supervisor->>Supervisor: transition to merging
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 warning
Summary of Changes

Hello @marcusquinn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the supervisor's robustness and expands its capabilities. It introduces a critical review triage step in the PR lifecycle to prevent merges with outstanding issues, improves resilience for long-running AI sessions by enabling state checkpointing, and streamlines workflow management through automated GitHub issue synchronization. Additionally, it integrates a suite of new specialized subagents, broadening the system's functional scope.
🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report
[INFO] Latest Quality Status:
[INFO] Recent monitoring activity:

📈 Current Quality Metrics

Generated on: Sat Feb 7 16:57:17 UTC 2026
Generated by AI DevOps Framework Code Review Monitoring
Code Review
This pull request introduces several significant features, including a review triage gate to block merges on unresolved human or high-severity bot reviews, a session checkpointing system for compaction resilience, and a convention for syncing GitHub issues with internal tasks. The changes are extensive and well-documented, with new helper scripts and subagent definitions. I've identified a few bugs in the new helper scripts related to argument parsing and some areas for improvement in efficiency and robustness. Overall, this is a strong contribution that enhances the supervisor's capabilities and the framework's resilience.
```shell
# Main dispatch
main() {
    local command="${1:-help}"
    shift 2>/dev/null || true
```
The shift 2 command is incorrect here. It will consume the main command (e.g., save) and its first argument (e.g., --task), causing argument parsing to fail for any command that takes options. It should be shift or shift 1 to only consume the main command.
Suggested change:

```diff
-    shift 2>/dev/null || true
+    shift || true
```
False positive. This is shift 2>/dev/null || true, not shift 2. The 2>/dev/null redirects stderr (fd 2) to /dev/null — it does not pass 2 as an argument to shift. This is a standard bash pattern: shift (no argument) consumes one positional parameter, and 2>/dev/null || true silently handles the case where there are no remaining parameters.
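To make the distinction concrete, here is a minimal self-contained sketch (the `demo` function is hypothetical, not from the PR) showing that the pattern pops exactly one positional parameter:

```shell
# `2>/dev/null` redirects stderr; it is NOT an argument to `shift`.
demo() {
    local command="${1:-help}"
    shift 2>/dev/null || true   # consumes exactly one parameter
    echo "command=$command remaining=$#:$*"
}

demo save --task t042   # command=save remaining=2:--task t042
demo                    # command=help remaining=0:
```

With no arguments at all, `shift` fails, the error message is discarded by the redirection, and `|| true` keeps the script alive.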
```shell
main() {
    local command="${1:-help}"
    shift 2>/dev/null || true
```
The shift 2 command is incorrect. It consumes the main command (e.g., fetch) and its first argument (e.g., the URL), which is then lost before the argument parsing loop. This breaks all commands that take arguments. It should be shift or shift 1 to only consume the command itself.
Suggested change:

```diff
-    shift 2>/dev/null || true
+    shift || true
```
False positive. Same as above — this is shift 2>/dev/null || true, not shift 2. The 2> is stderr redirection, not an argument to shift. The command consumes exactly one positional parameter (the command name), leaving all remaining args for the option parsing loop.
```shell
current_task="$(grep -m1 'Current Task' "$CHECKPOINT_FILE" | sed 's/.*| //' | sed 's/ *$//' || echo "unknown")"
local branch
branch="$(grep -m1 'Branch' "$CHECKPOINT_FILE" | sed 's/.*| //' | sed 's/ *$//' || echo "unknown")"
```
The sed commands used to extract values from the checkpoint file are incorrect. They will leave a trailing | character in the extracted value because sed 's/ *$//' only removes trailing spaces, not the pipe character. A single, more robust sed command can correctly extract and trim the value for both variables.
Suggested change (the trailing-pipe strip must run before the greedy match; in the other order, the greedy `.*|` consumes the whole line up to the trailing pipe and leaves an empty value):

```diff
-current_task="$(grep -m1 'Current Task' "$CHECKPOINT_FILE" | sed 's/.*| //' | sed 's/ *$//' || echo "unknown")"
-local branch
-branch="$(grep -m1 'Branch' "$CHECKPOINT_FILE" | sed 's/.*| //' | sed 's/ *$//' || echo "unknown")"
+current_task="$(grep -m1 'Current Task' "$CHECKPOINT_FILE" | sed 's/ *| *$//; s/.*| *//' || echo "unknown")"
+local branch
+branch="$(grep -m1 'Branch' "$CHECKPOINT_FILE" | sed 's/ *| *$//; s/.*| *//' || echo "unknown")"
```
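The corrected extraction can be sanity-checked against a sample row. A minimal sketch, assuming the checkpoint file uses a `| Label | value |` table layout (the value `t042-fix-db` is invented for illustration):

```shell
# Hypothetical checkpoint row in "| Label | value |" form.
line='| Current Task | t042-fix-db |'

# Strip the trailing "|" first, then everything through the last remaining "|".
value=$(printf '%s\n' "$line" | sed 's/ *| *$//; s/.*| *//')
echo "$value"   # t042-fix-db
```

Running the two expressions in the opposite order would leave nothing: the greedy `.*|` would match through the trailing pipe, emptying the line.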
.agents/scripts/supervisor-helper.sh
Outdated
```shell
high_severity_count=$(echo "$threads_json" | jq '[.threads[] | select(.is_bot == true) | select(.body | test("bug|security|vulnerability|critical|error|crash|data.loss|injection|XSS|CSRF"; "i"))] | length' 2>/dev/null || echo "0")

if [[ "$high_severity_count" -gt 0 ]]; then
    log_warn " $high_severity_count high-severity bot finding(s) - blocking merge"
    echo "$threads_json" | jq -r '.threads[] | select(.is_bot == true) | select(.body | test("bug|security|vulnerability|critical|error|crash|data.loss|injection|XSS|CSRF"; "i")) | " - \(.author) on \(.path):\(.line): \(.body[0:120])"' 2>/dev/null || true
```
The regex used to detect high-severity bot comments could have false positives. For example, the substring "bug" matches inside "debugging", and "error" matches inside "terrors". Using word boundaries (\b) in the regex would make it more accurate by matching whole words only.
Suggested change:

```diff
-high_severity_count=$(echo "$threads_json" | jq '[.threads[] | select(.is_bot == true) | select(.body | test("bug|security|vulnerability|critical|error|crash|data.loss|injection|XSS|CSRF"; "i"))] | length' 2>/dev/null || echo "0")
+high_severity_count=$(echo "$threads_json" | jq '[.threads[] | select(.is_bot == true) | select(.body | test("\\b(bug|security|vulnerability|critical|error|crash|data.loss|injection|XSS|CSRF)\\b"; "i"))] | length' 2>/dev/null || echo "0")
 if [[ "$high_severity_count" -gt 0 ]]; then
     log_warn " $high_severity_count high-severity bot finding(s) - blocking merge"
-    echo "$threads_json" | jq -r '.threads[] | select(.is_bot == true) | select(.body | test("bug|security|vulnerability|critical|error|crash|data.loss|injection|XSS|CSRF"; "i")) | " - \(.author) on \(.path):\(.line): \(.body[0:120])"' 2>/dev/null || true
+    echo "$threads_json" | jq -r '.threads[] | select(.is_bot == true) | select(.body | test("\\b(bug|security|vulnerability|critical|error|crash|data.loss|injection|XSS|CSRF)\\b"; "i")) | " - \(.author) on \(.path):\(.line): \(.body[0:120])"' 2>/dev/null || true
```
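The effect of the boundary anchors can be spot-checked outside jq. A quick sketch using grep's ERE engine (GNU grep supports `\b`, and jq's Oniguruma regex engine treats it the same way):

```shell
# Without boundaries, the substring "bug" matches inside "debugging":
echo "cleaned up the debugging output" | grep -Eqi 'bug|critical|crash' && echo "flagged"

# With \b anchors, only whole-word hits count:
echo "cleaned up the debugging output" | grep -Eqi '\b(bug|critical|crash)\b' || echo "not flagged"
echo "this change fixes a crash on startup" | grep -Eqi '\b(bug|critical|crash)\b' && echo "flagged"
```

Note that boundaries do not filter out negated phrasing ("not a critical issue" still contains the whole word "critical"); they only stop substring hits inside longer words.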
.agents/scripts/x-helper.sh
Outdated
```shell
author=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('author',{}).get('name','Unknown'))" 2>/dev/null || echo "Unknown")
author_handle=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('author',{}).get('screen_name','unknown'))" 2>/dev/null || echo "unknown")
text=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('text',''))" 2>/dev/null || echo "")
created_at=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('created_at',''))" 2>/dev/null || echo "")
likes=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('likes',0))" 2>/dev/null || echo "0")
retweets=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('retweets',0))" 2>/dev/null || echo "0")
replies=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('replies',0))" 2>/dev/null || echo "0")
```
Parsing the JSON response by calling python3 seven separate times is very inefficient as each call starts a new process. This can be done with a single call to jq or a single Python script, which would be much faster and more readable.
Suggested change:

```diff
-author=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('author',{}).get('name','Unknown'))" 2>/dev/null || echo "Unknown")
-author_handle=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('author',{}).get('screen_name','unknown'))" 2>/dev/null || echo "unknown")
-text=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('text',''))" 2>/dev/null || echo "")
-created_at=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('created_at',''))" 2>/dev/null || echo "")
-likes=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('likes',0))" 2>/dev/null || echo "0")
-retweets=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('retweets',0))" 2>/dev/null || echo "0")
-replies=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('replies',0))" 2>/dev/null || echo "0")
+author=$(echo "$response" | jq -r '.tweet.author.name // "Unknown"')
+author_handle=$(echo "$response" | jq -r '.tweet.author.screen_name // "unknown"')
+text=$(echo "$response" | jq -r '.tweet.text // ""')
+created_at=$(echo "$response" | jq -r '.tweet.created_at // ""')
+likes=$(echo "$response" | jq -r '.tweet.likes // 0')
+retweets=$(echo "$response" | jq -r '.tweet.retweets // 0')
+replies=$(echo "$response" | jq -r '.tweet.replies // 0')
```
.agents/scripts/x-helper.sh
Outdated
```shell
name=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('user',{}).get('name','Unknown'))" 2>/dev/null || echo "Unknown")
followers=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('user',{}).get('followers',0))" 2>/dev/null || echo "0")
following=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('user',{}).get('following',0))" 2>/dev/null || echo "0")
description=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('user',{}).get('description',''))" 2>/dev/null || echo "")
```
Similar to cmd_fetch, this function calls python3 multiple times to parse a single JSON response. This is inefficient. A single jq call would be much better for performance and readability.
Suggested change:

```diff
-name=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('user',{}).get('name','Unknown'))" 2>/dev/null || echo "Unknown")
-followers=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('user',{}).get('followers',0))" 2>/dev/null || echo "0")
-following=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('user',{}).get('following',0))" 2>/dev/null || echo "0")
-description=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('user',{}).get('description',''))" 2>/dev/null || echo "")
+name=$(echo "$response" | jq -r '.user.name // "Unknown"')
+followers=$(echo "$response" | jq -r '.user.followers // 0')
+following=$(echo "$response" | jq -r '.user.following // 0')
+description=$(echo "$response" | jq -r '.user.description // ""')
```
Actionable comments posted: 17
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.agents/scripts/supervisor-helper.sh (1)
3419-3437: ⚠️ Potential issue | 🟠 Major

`json_set` will fail if `error` isn't valid JSON (the no_pr counter won't persist). If `error` contains plain text, `json_extract`/`json_set` throws "malformed JSON", and retry counts won't save. Guard with `json_valid()` and use the escaped ID to avoid malformed SQL.
As per coding guidelines, "Automation scripts - focus on: Reliability and robustness".

🛡️ Suggested fix

```diff
-    no_pr_count=$(db "$SUPERVISOR_DB" "SELECT COALESCE(
-        (SELECT CAST(json_extract(error, '$.no_pr_retries') AS INTEGER)
-        FROM tasks WHERE id='$task_id'), 0);" 2>/dev/null || echo "0")
+    no_pr_count=$(db "$SUPERVISOR_DB" "SELECT COALESCE(
+        (SELECT CAST(json_extract(
+            CASE WHEN json_valid(error) THEN error ELSE '{}' END,
+            '$.no_pr_retries') AS INTEGER)
+        FROM tasks WHERE id='$escaped_id'), 0);" 2>/dev/null || echo "0")
 ...
-    db "$SUPERVISOR_DB" "UPDATE tasks SET error = json_set(COALESCE(error, '{}'), '$.no_pr_retries', $no_pr_count), updated_at = strftime('%Y-%m-%dT%H:%M:%SZ','now') WHERE id='$task_id';" 2>/dev/null || true
+    db "$SUPERVISOR_DB" "UPDATE tasks SET error = json_set(
+        CASE WHEN json_valid(error) THEN error ELSE '{}' END,
+        '$.no_pr_retries', $no_pr_count),
+        updated_at = strftime('%Y-%m-%dT%H:%M:%SZ','now')
+        WHERE id='$escaped_id';" 2>/dev/null || true
```
🤖 Fix all issues with AI agents
In @.agents/AGENTS.md:
- Line 105: In the "GitHub issue sync" rule update the inline code span to
remove the trailing space inside the backticks so the literal is `t{NNN}:` (no
space inside the code span) and then in the surrounding prose explicitly state
that when used in issue titles there must be a single space after the colon
(e.g., "t{NNN}: <title>") so linting passes while keeping the documentation of
the required trailing space; target the `t{NNN}:` code span and the sentence
that explains "Issue titles MUST be prefixed..." to make this change.
In @.agents/scripts/commands/log-issue-aidevops.md:
- Around line 132-142: The Step 5b workflow about reading/writing TODO.md should
not be in the user-facing /log-issue-aidevops command docs; remove or relocate
Step 5b (the instructions to read highest t-number, add ref:GH#{issue_number},
and commit/push TODO.md) to a maintainer-only guide (e.g., supervisor helper)
and instead leave only the user-facing behavior (create the GitHub issue,
optionally prefix title with t{NNN} if you want a visible convention) or
explicitly mark the TODO.md sync as “maintainer-only”; update the document
sections referencing Step 5b and any mention of TODO.md so /log-issue-aidevops
no longer instructs external users to commit to the aidevops repo.
In @.agents/scripts/session-checkpoint-helper.sh:
- Around line 66-77: The case block that parses options in the while loop
(handling --task, --next, --worktree, --branch, --batch, --note, --elapsed,
--target) must guard against missing values to avoid crashes under set -euo
pipefail; for each option (e.g., when setting current_task, next_tasks,
worktree_path, branch_name, batch_name, note, elapsed_mins, target_mins) add a
check that a value exists (e.g., verify $# -gt 1 and that $2 is not another
option) before assigning and shifting; on failure call print_error "Missing
value for <option>" and return 1 so the script fails gracefully instead of
hard-exiting.
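A hypothetical sketch of that guard (the option name `--task` is taken from the description above; the script's `print_error` helper is stubbed with a plain echo here):

```shell
print_error() { echo "ERROR: $*" >&2; }   # stand-in for the script's real helper

parse_opts() {
    local current_task=""
    while [ $# -gt 0 ]; do
        case "$1" in
            --task)
                # Guard: a value must follow, or fail gracefully (safe under set -euo pipefail)
                if [ $# -lt 2 ]; then
                    print_error "Missing value for --task"
                    return 1
                fi
                current_task="$2"
                shift 2
                ;;
            *) shift ;;
        esac
    done
    echo "task=$current_task"
}

parse_opts --task t042                            # task=t042
parse_opts --task 2>/dev/null || echo "handled"   # handled
```

Returning 1 (rather than calling exit) lets the caller decide how to recover, which is what keeps the script from hard-exiting under `set -e`.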
In @.agents/scripts/supervisor-helper.sh:
- Around line 3348-3375: The script doesn't log when the review triage is
bypassed via the --skip-review-triage flag; update the branch handling
skip_review_triage (the else branch of the if checking "$skip_review_triage" and
"$dry_run") to emit an explicit audit log entry when skip_review_triage=="true"
(e.g., call log_info or log_warn indicating "Review triage bypassed via
--skip-review-triage for $task_id") before proceeding to call cmd_transition and
set tstatus="merging"; reference the existing variables and functions
triage_review_feedback, cmd_transition, tstatus and send_task_notification so
the new log sits next to those same operations.
- Around line 2893-2916: The current GraphQL fetch (graphql_result via gh api
graphql) only requests reviewThreads(first: 100) and therefore truncates
results; modify the logic that builds/executes the gh api graphql call to use
cursor-based pagination on reviewThreads by requesting pageInfo { hasNextPage
endCursor } and calling reviewThreads(first:100, after:$cursor) in a loop,
accumulating nodes into graphql_result (or a separate accumulator variable)
until hasNextPage is false, passing the endCursor as the next $cursor parameter
to gh api each iteration and merging returned nodes.
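The loop shape described above can be sketched with the GraphQL call mocked out (no `gh` or network is assumed here; `fetch_page`, the node names, and the cursor values are invented for illustration — each mock page prints `nodes|hasNextPage|endCursor`):

```shell
# Mock of one page fetch. A real version would run something like:
#   gh api graphql -f query='...' -f cursor="$1"
# and extract nodes plus pageInfo { hasNextPage endCursor } with jq.
fetch_page() {
    case "$1" in
        "")   echo "t1 t2|true|CUR1" ;;
        CUR1) echo "t3 t4|true|CUR2" ;;
        *)    echo "t5|false|" ;;
    esac
}

all_nodes=""
cursor=""
while :; do
    page=$(fetch_page "$cursor")
    nodes=${page%%|*}                 # this page's nodes
    rest=${page#*|}
    has_next=${rest%%|*}
    cursor=${rest#*|}                 # endCursor feeds the next request
    all_nodes="${all_nodes}${all_nodes:+ }${nodes}"
    [ "$has_next" = "true" ] || break
done
echo "$all_nodes"   # t1 t2 t3 t4 t5
```

The accumulator plus `hasNextPage` check is the whole pattern; only the body of `fetch_page` changes when wiring it to `gh api graphql`.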
In @.agents/scripts/x-helper.sh:
- Around line 62-66: The curl call that populates the local variable response
using "${FXTWITTER_API}/${path}" can hang or ignore HTTP errors; update the
invocation in the block that assigns response (and the analogous block at the
other location) to use curl --fail (-f), a hard timeout (--max-time, e.g. 10),
and retry flags (--retry and optionally --retry-delay) so non-2xx responses and
stalls produce a non-zero exit and trigger the existing error path (print_error
"Failed to fetch tweet" / return 1); ensure the command still captures output
into the response variable and retains the existing error handling.
- Around line 166-170: The case for --format currently assumes a following
argument and will break under set -u if none is provided; update the --format
branch in the while/case handling (the block handling --format and setting
output_format) to first verify that a next positional parameter exists and is
not another option, and if missing print a clear error to stderr and exit
non-zero; otherwise assign output_format="$2" and shift 2 as before. Ensure this
check uses the same variables (output_format, args) and exits cleanly to
preserve script robustness.
In @.agents/seo/backlink-checker.md:
- Around line 123-126: Update the Related list entry that currently reads "Link
building strategies" to the hyphenated form "Link-building strategies" in the
markdown for the backlink checker; locate the string in
.agents/seo/backlink-checker.md (the Related list block containing
`seo/link-building.md`) and replace the unhyphenated text with the hyphenated
version so the list reads `seo/link-building.md` - Link-building strategies.
- Around line 31-77: The CLI examples referencing the non-existent seo-helper.sh
must be replaced with authoritative file:line references to the real export
scripts; update the Ahrefs section to point to the corresponding implementation
in seo-export-ahrefs.sh and the DataForSEO section to point to
seo-export-dataforseo.sh, and similarly replace the WHOIS/batch example with the
script and line reference that implements that batch flow; locate and edit the
examples in .agents/seo/backlink-checker.md so each command block becomes a
file:line pointer to the exact function/command in seo-export-ahrefs.sh or
seo-export-dataforseo.sh (and the batch WHOIS implementation) instead of using
seo-helper.sh.
- Around line 19-25: The docs reference a non-existent helper script and
subcommands (seo-helper.sh with backlinks, backlinks-bulk, check-expiry and
flags --source, --lost, --days, --broken, --referring-domains-only, --stdin), so
either implement seo-helper.sh exposing those subcommands and flags (implement
CLI parsing, wire to existing functions that call Ahrefs/DataForSEO/WHOIS, and
ensure script is installed under ~/.aidevops/agents/scripts/) or update the
examples to call existing helpers (e.g., keyword-research-helper.sh,
seo-analysis-helper.sh) and map the specific subcommands/flags to their real
equivalents; update all example references (seo-helper.sh backlinks,
backlinks-bulk, check-expiry and the listed flags) to match the chosen approach
so the workflow becomes functional.
In @.agents/tools/accounts/subscription-audit.md:
- Around line 27-73: The documentation references a non-existent helper script
subscription-audit-helper.sh in the "Audit Workflow" examples; remove those
illustrative bash blocks or replace them with authoritative file:line references
to the actual implementation (or to the real CLI commands) so examples are
accurate. Edit the markdown section where subscription-audit-helper.sh is shown
and either (a) delete the pseudo-commands and add file:line pointers to the real
script/implementation, or (b) substitute the blocks with the actual CLI/tool
names that exist in the codebase, ensuring the replacement points to the
authoritative source.
In @.agents/tools/document/document-extraction.md:
- Around line 26-40: The quick reference in document-extraction.md references a
non-existent helper script document-extraction-helper.sh; update the doc to
either mark these examples as planned (e.g., add a forward-looking note that
document-extraction-helper.sh is Phase 6/draft per PLANS.md) or replace the
examples with authoritative pointers to the planned command spec in
todo/PLANS.md (reference the PLANS.md section describing the
document-extraction-helper.sh commands), ensuring you mention
document-extraction-helper.sh and the PLANS.md entry so readers know the
examples are not yet implemented.
- Around line 1-200: This new "Document Extraction" doc duplicates the existing
Unstract-based extraction content (overlapping Docling/ExtractThinker/Presidio
vs Unstract MCP); reconcile by either consolidating this material into the
Unstract MCP documentation (merge Docling/ExtractThinker/Presidio sections into
the Unstract narrative and remove duplicate guidance), or clearly justify and
document why Docling+ExtractThinker+Presidio must coexist (add a short "When to
use" section comparing capabilities: Docling vs Unstract, extraction schemas
Invoice/Receipt/Contract, and privacy modes), and update any MCP integration
references (Unstract MCP / mcp-integrations) to point to the chosen canonical
doc so there are no contradictory or duplicate instructions across the agents
docs.
- Around line 129-167: The Extraction Schemas (Invoice, Receipt, ContractSummary
classes) are presented without corresponding implementations; update the
document to explicitly state they are example/template schemas (not
authoritative) by adding a short note above the code block indicating
"Example/template schemas — customize for your project", or if they are
canonical, replace the examples with file:line references to the actual
implementations that define Invoice, Receipt, and ContractSummary so readers
know where the real models live; ensure the note references the class names
Invoice, Receipt, and ContractSummary so it's clear which entries are examples
vs. authoritative.
In @.agents/tools/terminal/terminal-optimization.md:
- Around line 182-183: Remove the broken cross-reference entry to
`tools/ai-assistants/opencode.md` from
`.agents/tools/terminal/terminal-optimization.md` (the list that includes
`tools/terminal/terminal-title.md`), or if the reference is intended, restore
the missing `.agents/tools/ai-assistants/opencode.md` file; specifically, either
delete the `tools/ai-assistants/opencode.md - AI coding tool config` line from
the list or recreate the referenced `opencode.md` content so the link is valid.
In @.agents/tools/voice/transcription.md:
- Around line 28-146: The examples in this doc (the faster-whisper Python
snippet using WhisperModel with device="auto" and model "large-v3-turbo", the
whisper.cpp install/transcribe example, the Groq curl example, and the
ElevenLabs curl example) are non-authoritative and must be removed; replace each
inline example by pointing readers to the repo's canonical implementation
(voice-helper.sh) and/or official API docs using file:line references.
Specifically, remove the faster-whisper block referencing
WhisperModel("large-v3-turbo", device="auto"), remove the whisper.cpp brew/build
example and CLI usage, remove the Groq curl example (or update to use model
"whisper-large-v3" and include multipart/form-data headers) and remove the
ElevenLabs example (or point to official docs for correct model IDs like
scribe_v1 and xi-api-key header); instead add short references such as "see
voice-helper.sh:lines X-Y" and links to upstream docs for each provider.
In @.agents/tools/voice/voice-models.md:
- Around line 43-153: Replace the non-authoritative external TTS examples
(Qwen3-TTS, Piper, Bark, Coqui, ElevenLabs, OpenAI, Hugging Face) with
references to the repository's actual TTS implementations by removing those
example blocks and instead pointing readers to the in-repo implementations
EdgeTTS, MacOSSayTTS, and FacebookMMSTTS (use those exact identifiers to locate
the code), and remove the incorrect "voice-helper.sh tts" command reference
(replace it with instructions to use the repo's real voice bridge
implementations). Ensure the updated section contains short, authoritative notes
that refer to EdgeTTS, MacOSSayTTS, and FacebookMMSTTS for examples and usage
rather than external example code.
🧹 Nitpick comments (9)
.agents/tools/context/rapidfuzz.md (1)
32-116: Replace non-authoritative code examples with file:line references (or cite an authoritative source). The guideline says .agents/**/*.md should avoid code examples unless authoritative; most snippets here look illustrative. Please swap to file:line references or explicitly cite an authoritative source for each example.
As per coding guidelines, ".agents/**/*.md: Use code examples only when authoritative, otherwise use file:line references."
.agents/tools/document/document-extraction.md (1)
82-82: Consider removing star counts to reduce documentation maintenance.

GitHub star counts (lines 82, 106, 127) will become stale and require manual updates. Consider either removing them or adding an "as of [date]" qualifier to set reader expectations.

♻️ Proposed refactor to remove star counts

```diff
-- **Repo**: https://github.com/DS4SD/docling (16k+ stars)
+- **Repo**: https://github.com/DS4SD/docling
-- **Repo**: https://github.com/enoch3712/ExtractThinker (1.5k+ stars)
+- **Repo**: https://github.com/enoch3712/ExtractThinker
-- **Repo**: https://github.com/microsoft/presidio (3.5k+ stars)
+- **Repo**: https://github.com/microsoft/presidio
```

Also applies to: 106-106, 127-127
.agents/tools/terminal/terminal-optimization.md (5)
46-46: Document the `ts` dependency or use a portable alternative.

The `ts` command (from the moreutils package) is not commonly installed by default. Consider either:

- Documenting the dependency: `# Requires: brew install moreutils`
- Using a portable alternative: `zsh -xv -c exit 2>&1 | head -50`

📝 Proposed alternatives

Option 1: Document dependency

```diff
 # Profile zsh startup
+# Requires: brew install moreutils
 zsh -xvs 2>&1 | ts -i '%.s' | head -50
```

Option 2: Remove dependency

```diff
 # Profile zsh startup
-zsh -xvs 2>&1 | ts -i '%.s' | head -50
+zsh -xv -c exit 2>&1 | head -50
+# For timing, use: hyperfine 'zsh -i -c exit' --warmup 3
```
62-62: Clarify the comment: `nvm init` is not standard nvm usage.

The comment refers to `nvm init`, but the standard initialization is `source "$NVM_DIR/nvm.sh"` or `[ -s "$NVM_DIR/nvm.sh" ] && source "$NVM_DIR/nvm.sh"` (as shown correctly in line 67).

📝 Proposed fix

```diff
-# Instead of: eval "$(nvm init)"
+# Instead of: source "$NVM_DIR/nvm.sh"
 # Use lazy-load:
```
81-83: PATH cleanup assumes no newlines in directory names.

The snippet splits on `:` and rejoins with `tr '\n' ':'`, which will fail if directory names contain newlines (rare but possible). The current approach is acceptable for typical use cases, but for production-grade robustness, consider a loop that avoids the subshell pipeline.

🛡️ More robust alternative

```diff
 # Remove non-existent directories
-PATH=$(echo "$PATH" | tr ':' '\n' | while read -r dir; do
-  [ -d "$dir" ] && echo "$dir"
-done | tr '\n' ':' | sed 's/:$//')
+NEW_PATH=""
+IFS=':'
+for dir in $PATH; do
+  [ -d "$dir" ] && NEW_PATH="${NEW_PATH}${NEW_PATH:+:}${dir}"
+done
+PATH="$NEW_PATH"
+unset NEW_PATH
```
86-99: Note platform-specific install commands.

All install commands use `brew`, which assumes macOS or Linux with Homebrew. Consider adding a note about platform requirements or providing alternatives (e.g., `apt-get`, `dnf`, `pacman`).

📝 Suggested addition

Add before the table:

> **Note**: Install commands use Homebrew (`brew`). For other platforms, replace `brew install` with your package manager (e.g., `apt-get install`, `dnf install`, `pacman -S`).
116-118: Reconsider the "safety" aliases - they can create anti-patterns.

Aliasing `rm`/`cp`/`mv` to interactive mode is commonly discouraged because:

- Scripts that source the config will break when encountering these aliases
- It creates a dependency on aliases that won't exist in CI/CD, containers, or other environments
- Better practice: use `-i` explicitly when needed, or use `trash` as a safer alternative

♻️ Safer alternatives

```diff
-# Safety
-alias rm='rm -i'
-alias cp='cp -i'
-alias mv='mv -i'
+# Safety: use trash instead of rm (brew install trash)
+alias t='trash'
+# Or if you prefer prompts, use these in interactive sessions only:
+# alias rm='rm -i'  # Note: avoid in scripts - use /bin/rm instead
```

.agents/scripts/session-checkpoint-helper.sh (1)
181-186: Trim parsed table values to avoid a trailing `|` in status output.

The current sed chain can leave a trailing pipe in `current_task`/`branch`. Use a field-based extraction.

🔧 Suggested fix

```diff
-    current_task="$(grep -m1 'Current Task' "$CHECKPOINT_FILE" | sed 's/.*| //' | sed 's/ *$//' || echo "unknown")"
+    current_task="$(awk -F'|' '/Current Task/ {gsub(/^[ \t]+|[ \t]+$/, "", $3); print $3; exit}' "$CHECKPOINT_FILE" || echo "unknown")"
-    branch="$(grep -m1 'Branch' "$CHECKPOINT_FILE" | sed 's/.*| //' | sed 's/ *$//' || echo "unknown")"
+    branch="$(awk -F'|' '/Branch/ {gsub(/^[ \t]+|[ \t]+$/, "", $3); print $3; exit}' "$CHECKPOINT_FILE" || echo "unknown")"
```

.agents/scripts/supervisor-helper.sh (1)
3834-3835: Metrics omit the `review_triage` state (summary + checkpoint).

The summary and checkpoint lists exclude `review_triage`, so Post-PR metrics undercount.

📊 Suggested fix

```diff
-    total_pr_review=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM tasks WHERE status IN ('pr_review', 'merging', 'merged', 'deploying');")
+    total_pr_review=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM tasks WHERE status IN ('pr_review', 'review_triage', 'merging', 'merged', 'deploying');")
 ...
-    FROM tasks WHERE status IN ('pr_review', 'merging', 'merged', 'deploying')
+    FROM tasks WHERE status IN ('pr_review', 'review_triage', 'merging', 'merged', 'deploying')
```

Also applies to: 4416-4418
```shell
# Fetch review threads via GraphQL (REST API doesn't expose thread resolution)
local graphql_result
graphql_result=$(gh api graphql -f query='
  query($owner: String!, $repo: String!, $pr: Int!) {
    repository(owner: $owner, name: $repo) {
      pullRequest(number: $pr) {
        reviewThreads(first: 100) {
          totalCount
          nodes {
            isResolved
            comments(first: 1) {
              nodes {
                author { login }
                body
                path
                line
              }
            }
          }
        }
      }
    }
  }
' -f owner="$owner" -f repo="$repo_name" -F pr="$pr_number" 2>/dev/null || echo "")
```
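Once `graphql_result` is populated, the triage step described in this PR could classify threads along these lines. This is a hedged sketch, not the script's actual code: the sample payload, author logins, and the bot-detection regex are all fabricated for illustration.

```shell
# Fabricated sample shaped like the reviewThreads query response
graphql_result='{"data":{"repository":{"pullRequest":{"reviewThreads":{"totalCount":2,"nodes":[{"isResolved":false,"comments":{"nodes":[{"author":{"login":"coderabbitai"},"body":"nitpick: style","path":"a.sh","line":3}]}},{"isResolved":false,"comments":{"nodes":[{"author":{"login":"marcusquinn"},"body":"please fix","path":"b.sh","line":9}]}}]}}}}}'

# Count unresolved threads
unresolved=$(jq '[.data.repository.pullRequest.reviewThreads.nodes[]
  | select(.isResolved | not)] | length' <<<"$graphql_result")

# Count unresolved threads whose first comment is from a human
# (bot heuristic on the login is an assumption, not the real classifier)
human=$(jq '[.data.repository.pullRequest.reviewThreads.nodes[]
  | select(.isResolved | not)
  | select(.comments.nodes[0].author.login | test("bot|coderabbit"; "i") | not)] | length' <<<"$graphql_result")

echo "unresolved=$unresolved human=$human"
```

A merge gate would then block when `human` is non-zero, matching the "blocks merge on human reviews" behaviour in the summary.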
🧩 Analysis chain
🏁 Script executed:

```shell
# First, examine the full context around the flagged lines and look for pagination handling
cd .agents/scripts && wc -l supervisor-helper.sh
```

Repository: marcusquinn/aidevops
Length of output: 89

🏁 Script executed:

```shell
# Extract context around lines 2893-2916 and look for pagination logic
sed -n '2880,2950p' .agents/scripts/supervisor-helper.sh
```

Repository: marcusquinn/aidevops
Length of output: 2657

🏁 Script executed:

```shell
# Search for pagination-related patterns in the script (pageInfo, endCursor, hasNextPage, after)
rg -n "pageInfo|endCursor|hasNextPage|after.*reviewThreads|pagination.*review" .agents/scripts/supervisor-helper.sh
```

Repository: marcusquinn/aidevops
Length of output: 46

🏁 Script executed:

```shell
# Look at how graphql_result is used after this fetch - check for loop/iteration logic
rg -n -A 20 "graphql_result" .agents/scripts/supervisor-helper.sh | head -100
```

Repository: marcusquinn/aidevops
Length of output: 2325
🌐 Web query:
GitHub GraphQL API reviewThreads pagination maximum page size cursor pagination endCursor
💡 Result:

- Maximum page size: For any GitHub GraphQL connection (including `pullRequest { reviewThreads(...) { ... } }`), you must provide `first` or `last`, and the value must be between 1 and 100 (so max page size = 100). [1]
- Cursor pagination (`endCursor`): Request `pageInfo { endCursor hasNextPage }` from the `reviewThreads` connection; to fetch the next page, pass the prior response's `pageInfo.endCursor` as the `after` argument. [1]
- `reviewThreads` pagination args: `reviewThreads` supports `after`, `before`, `first`, and `last`. [2]

Example (forward pagination):

```graphql
query($owner: String!, $repo: String!, $pr: Int!, $cursor: String) {
  repository(owner: $owner, name: $repo) {
    pullRequest(number: $pr) {
      reviewThreads(first: 100, after: $cursor) {
        nodes { id }
        pageInfo { hasNextPage endCursor }
      }
    }
  }
}
```

Sources:
[1] GitHub Docs - Using pagination in the GraphQL API (docs.github.com)
[2] GitHub Docs - GraphQL Objects reference (shows reviewThreads args) (docs.github.com)
Citations:
- 1: https://docs.github.com/en/graphql/guides/using-pagination-in-the-graphql-api?utm_source=openai
- 2: https://docs.github.com/en/enterprise-server%403.14/graphql/reference/objects?utm_source=openai
Implement pagination for review threads - currently truncates results to first 100.

The GraphQL query uses `reviewThreads(first: 100)` but does not request `pageInfo { hasNextPage endCursor }`. While `totalCount` is fetched, the code only processes nodes from the first page, silently dropping any unresolved threads beyond 100. Add cursor-based pagination to fetch all review threads until `hasNextPage` is false, accumulating results in a loop.

Query pattern required:

```graphql
reviewThreads(first: 100, after: $cursor) {
  totalCount
  pageInfo { hasNextPage endCursor }
  nodes { ... }
}
```

Loop until `hasNextPage` is false, passing each `endCursor` as the next `after` parameter.
🤖 Prompt for AI Agents

In .agents/scripts/supervisor-helper.sh around lines 2893-2916, the current GraphQL fetch (graphql_result via gh api graphql) only requests reviewThreads(first: 100) and therefore truncates results. Modify the logic that builds/executes the gh api graphql call to use cursor-based pagination on reviewThreads: request pageInfo { hasNextPage endCursor } and call reviewThreads(first: 100, after: $cursor) in a loop, accumulating nodes into graphql_result (or a separate accumulator variable) until hasNextPage is false, passing the endCursor as the next $cursor parameter to gh api on each iteration and merging the returned nodes.
- session-checkpoint-helper.sh: guard option parsing against missing values
- session-checkpoint-helper.sh: fix sed trailing pipe in checkpoint parsing (use awk)
- supervisor-helper.sh: log when --skip-review-triage bypasses triage gate
- supervisor-helper.sh: add word boundaries to high-severity regex
- supervisor-helper.sh: include review_triage state in metrics queries
- supervisor-helper.sh: guard json_extract with json_valid for no_pr counter
- supervisor-helper.sh: add pagination note to GraphQL reviewThreads query
- x-helper.sh: add curl timeouts/retries and --fail flag
- x-helper.sh: replace python3 JSON parsing with jq (single process per field)
- x-helper.sh: guard --format against missing value
- AGENTS.md: fix trailing space in code span (Codacy lint)
- log-issue-aidevops.md: mark Step 5b as maintainer-only
- backlink-checker.md: replace non-existent seo-helper.sh refs with real scripts
- subscription-audit.md: remove non-existent helper script examples
- document-extraction.md: mark as planned, add When to Use vs Unstract section
- terminal-optimization.md: remove broken opencode.md cross-reference
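The x-helper.sh hardening items above could look roughly like this. It is an illustrative sketch only: `fetch_json`, the timeout values, and the field names are placeholders, not the script's actual code.

```shell
# Hardened fetch: fail on HTTP errors, bound connect/total time, retry transient failures
fetch_json() {
  curl --fail --silent --show-error \
    --connect-timeout 10 --max-time 30 \
    --retry 3 --retry-delay 2 \
    "$1"
}

# jq replaces the per-field python3 calls: one jq process per field,
# demonstrated here on a fabricated payload instead of a live API response
sample='{"text":"hello world","author_id":"123"}'
text=$(jq -r '.text // empty' <<<"$sample")
author=$(jq -r '.author_id // empty' <<<"$sample")
echo "text=$text author=$author"
```

In real use the pipeline would be `fetch_json "$url" | jq -r '.text // empty'`; `// empty` keeps a missing field from printing the literal string `null`.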
🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report
[INFO] Latest Quality Status:
[INFO] Recent monitoring activity:

📈 Current Quality Metrics

Generated on: Sat Feb 7 17:12:35 UTC 2026
Generated by AI DevOps Framework Code Review Monitoring
…cial docs

transcription.md:
- faster-whisper: remove fabricated device='auto' example, reference voice-bridge.py:99-115
- whisper.cpp: replace non-existent 'brew install' with actual build-from-source steps
- Groq: fix model to whisper-large-v3, add Content-Type header
- ElevenLabs: fix model_id to scribe_v1 (not scribe_v2)
- Add official doc links for all cloud APIs
- Remove non-existent download commands from dependencies

voice-models.md:
- Replace fabricated Qwen3-TTS/Piper/Bark/Coqui examples with links to official repos
- Remove non-existent 'voice-helper.sh tts' command reference
- Document actual implementations: EdgeTTS, MacOSSayTTS, FacebookMMSTTS
- Reference voice-bridge.py line ranges for each implementation
- Separate 'implemented' vs 'not yet integrated' models in selection guide
🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report
[INFO] Latest Quality Status:
[INFO] Recent monitoring activity:

📈 Current Quality Metrics

Generated on: Sat Feb 7 18:04:20 UTC 2026
Generated by AI DevOps Framework Code Review Monitoring
Closing: supervisor-helper.sh changes are stale (10+ PRs merged since). Will cherry-pick unique subagent docs into a clean PR. |
…nc docs Cherry-picked unique content from closed PR #446 (supervisor changes were stale). New subagents: backlink-checker, voice transcription/models, document extraction, terminal optimization, subscription audit, rapidfuzz, x-helper. New scripts: session-checkpoint-helper.sh (compaction resilience), x-helper.sh (X/Twitter posting). Doc additions: GitHub issue sync convention in plans.md, compaction resilience workflow in session-manager.md.
…nc docs (#470) * chore: mark t135.8 blocked in TODO.md * feat: add quality subagents, session checkpoints, and GitHub issue sync docs Cherry-picked unique content from closed PR #446 (supervisor changes were stale). New subagents: backlink-checker, voice transcription/models, document extraction, terminal optimization, subscription audit, rapidfuzz, x-helper. New scripts: session-checkpoint-helper.sh (compaction resilience), x-helper.sh (X/Twitter posting). Doc additions: GitHub issue sync convention in plans.md, compaction resilience workflow in session-manager.md.



Summary
- Supervisor review triage (t148): adds a `review_triage` state to the supervisor post-PR lifecycle. Before merging, fetches unresolved review threads via GraphQL, classifies by source (bot vs human) and severity, blocks merge on human reviews or high-severity bot findings. Bypass with `--skip-review-triage`.
- t146 fix: missing `$SUPERVISOR_DB` arg in `db()` calls (lines 3165/3183) - real bug caught by CodeRabbit, previously ignored.
- Compaction resilience: pulse cycle writes `pulse-checkpoint.md` with full task state so orchestrating AI sessions re-orient after context compaction. Reprompt for `clean_exit_no_signal` retries includes worktree git status and recent commits.
- New script: `session-checkpoint-helper.sh` for interactive sessions to persist state to disk.
- GitHub issue sync convention (`t{NNN}:` title prefix), updated AGENTS.md and plans.md.

Changes
fb8fc2031595338dee621be0a65e

Quality
- `bash -n` syntax check: passes
- `local var="$1"` pattern and explicit returns

Testing
- `supervisor-helper.sh pr-lifecycle <task_id>` now goes through the `review_triage` state
- Bypass: `supervisor-helper.sh pr-lifecycle <task_id> --skip-review-triage`
- `session-checkpoint-helper.sh save --task t148 --next t147,t145`
- Checkpoint written to `~/.aidevops/.agent-workspace/supervisor/pulse-checkpoint.md`

Closes #437, closes #439
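The severity gate summarized above might be sketched as follows. This is an assumption-laden illustration: grep's `-w` flag stands in for the "add word boundaries to high-severity regex" fix, and the keyword list is invented, not the script's real regex.

```shell
# Classify a review-thread comment body as high or low severity.
# -w enforces whole-word matches, so e.g. "insecurity" does not match "security".
classify_severity() {
  if printf '%s\n' "$1" | grep -qiwE 'security|vulnerability|injection|crash'; then
    echo high
  else
    echo low
  fi
}

high_msg=$(classify_severity "Possible SQL injection in query builder")
low_msg=$(classify_severity "nitpick: prefer printf over echo")
echo "$high_msg $low_msg"
```

Under this scheme, unresolved bot threads classified `high` would block the merge, while `low` bot-only threads would pass with a warning, matching the lifecycle described in the summary.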
Summary by CodeRabbit
Release Notes
New Features
Documentation