Skip to content

Conversation

@marcusquinn
Copy link
Owner

@marcusquinn marcusquinn commented Feb 7, 2026

Summary

  • t148: Add review_triage state to supervisor post-PR lifecycle. Before merging, fetches unresolved review threads via GraphQL, classifies by source (bot vs human) and severity, blocks merge on human reviews or high-severity bot findings. Bypass with --skip-review-triage.
  • t146: Fix missing $SUPERVISOR_DB arg in db() calls (lines 3165/3183) - real bug caught by CodeRabbit, previously ignored.
  • Compaction resilience: Pulse cycle writes pulse-checkpoint.md with full task state so orchestrating AI sessions re-orient after context compaction. Reprompt for clean_exit_no_signal retries includes worktree git status and recent commits.
  • Session checkpoint: New session-checkpoint-helper.sh for interactive sessions to persist state to disk.
  • Docs: GitHub issue sync convention (t{NNN}: title prefix), updated AGENTS.md and plans.md.
  • Subagents: backlink-checker (t070), voice-models (t071), transcription (t072), document-extraction (t073), terminal-optimization (t025), subscription-audit (t026), rapidfuzz (t014), x-helper.sh (t033).

Changes

Commit Scope Description
fb8fc20 supervisor Review triage gate, compaction resilience, t146 fix, DB migration
3159533 scripts Session checkpoint system
8dee621 docs GitHub issue sync convention
be0a65e subagents 7 new subagents + x-helper.sh

Quality

  • ShellCheck: zero new violations (3 pre-existing: 2x SC2034 unused vars, 1x SC2016 intentional GraphQL single quotes)
  • bash -n syntax check: passes
  • All new scripts follow local var="$1" pattern and explicit returns

Testing

  • Review triage: supervisor-helper.sh pr-lifecycle <task_id> now goes through review_triage state
  • Bypass: supervisor-helper.sh pr-lifecycle <task_id> --skip-review-triage
  • Checkpoint: session-checkpoint-helper.sh save --task t148 --next t147,t145
  • Pulse checkpoint: auto-written to ~/.aidevops/.agent-workspace/supervisor/pulse-checkpoint.md

Closes #437, closes #439

Summary by CodeRabbit

Release Notes

  • New Features

    • Added bidirectional GitHub issue synchronization with automatic task linking and tracking.
    • Introduced session checkpoint system for persisting state during long autonomous workflows.
    • Added PR review triage workflow gate for enhanced code review management.
    • New utilities: X/Twitter post fetcher, document extraction pipeline, terminal optimizer, and audio transcriber.
  • Documentation

    • Added comprehensive guides for subscription auditing, backlink checking, voice models, and session resilience best practices.

…fix t146 db() bug

t148: Add review_triage state to supervisor post-PR lifecycle. Before merging,
the supervisor now fetches unresolved review threads via GraphQL, classifies
them by source (bot vs human) and severity, and blocks merge if human reviews
or high-severity bot findings are unaddressed. Low-severity bot-only threads
pass with a warning. Bypass with --skip-review-triage flag.

t146: Fix missing $SUPERVISOR_DB arg in db() calls on lines 3165/3183.

Compaction resilience: Pulse cycle now writes pulse-checkpoint.md with full
task state (running, queued, blocked, post-PR) so orchestrating AI sessions
can re-orient after context compaction. Reprompt for clean_exit_no_signal
retries now includes worktree git status and recent commits to help the
retried worker pick up where the previous attempt left off.

DB migration: Adds review_triage to status CHECK constraint.
ShellCheck: zero new violations.
New script: session-checkpoint-helper.sh persists session state (current task,
branch, worktree, progress) to disk so AI sessions can re-orient after context
compaction. Integrated into session-manager.md workflow.

ShellCheck: zero violations.
Document t{NNN}: title prefix convention for GitHub issues, add sync rule
to AGENTS.md, update log-issue-aidevops.md with t-number assignment step.
…ts, and x-helper

New subagents: backlink-checker (t070), voice-models (t071), transcription (t072),
document-extraction (t073), terminal-optimization (t025), subscription-audit (t026),
rapidfuzz (t014). New script: x-helper.sh (t033) for tweet fetching.

ShellCheck: zero violations on x-helper.sh.
@coderabbitai
Copy link
Contributor

coderabbitai bot commented Feb 7, 2026

Warning

Rate limit exceeded

@marcusquinn has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 24 minutes and 31 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

Walkthrough

This PR implements bidirectional GitHub issue synchronization with TODO.md tasks, adds session checkpoint persistence for context compaction resilience, introduces a review-triage gate in the PR lifecycle, and expands documentation for multiple tool and workflow capabilities.

Changes

Cohort / File(s) Summary
GitHub Issue Sync Integration
AGENTS.md, .agents/scripts/commands/log-issue-aidevops.md, .agents/workflows/plans.md
Added conventions for t{NNN} task prefixes in GitHub issue titles and ref:GH{issue} backlinks in TODO.md; documented creation workflows, automated enforcement, and bidirectional sync requirements.
PR Review Triage & Lifecycle
.agents/scripts/supervisor-helper.sh
Introduced review_triage state with state machine transitions; added check_review_threads() to fetch unresolved GitHub review threads via GraphQL; added triage_review_feedback() to classify severity; extended pr-lifecycle with --dry-run and --skip-review-triage flags; added sync_github_issue() for GitHub event-driven PR/issue closing.
Session Checkpoint & Resilience
.agents/scripts/session-checkpoint-helper.sh, .agents/workflows/session-manager.md, .agents/subagent-index.toon
New session-checkpoint-helper.sh script (save/load/clear/status) persists task state to survive context compaction; added Compaction Resilience documentation with checkpoint workflow and self-prompting loop patterns.
Pulse State Persistence
.agents/scripts/supervisor-helper.sh
Added save_pulse_checkpoint() to record detailed pulse metrics, task state, and post-PR lifecycle details into persistent checkpoint file.
Tool Documentation
.agents/tools/accounts/subscription-audit.md, .agents/tools/context/rapidfuzz.md, .agents/tools/document/document-extraction.md, .agents/tools/terminal/terminal-optimization.md, .agents/tools/voice/transcription.md, .agents/tools/voice/voice-models.md
Six new documentation files covering audit workflows, fuzzy matching reference, document extraction pipeline, terminal optimization, audio transcription, and TTS/STT models.
New Utility Scripts
.agents/scripts/x-helper.sh, .agents/seo/backlink-checker.md
Added x-helper.sh for FxTwitter API fetching (fetch/thread/user commands) with JSON/Markdown output; added backlink-checker.md for domain reclamation workflows via Ahrefs/DataForSEO APIs.

Sequence Diagram

sequenceDiagram
    participant Supervisor as Supervisor<br/>(post-PR)
    participant GH as GitHub API<br/>(GraphQL)
    participant Triage as Triage<br/>Decision
    participant Worker as Worker<br/>Dispatch
    participant GHSync as GitHub<br/>Sync

    Supervisor->>GH: check_review_threads(PR_id)
    GH-->>Supervisor: unresolved threads + authors
    
    Supervisor->>Triage: triage_review_feedback(threads)
    Triage-->>Supervisor: {triage_pass, triage_block, severity}
    
    alt Triage: Block (High/Critical)
        Supervisor->>Worker: dispatch fix or dismiss task
        Worker-->>Supervisor: task_id (t{NNN})
        Supervisor->>GHSync: sync_github_issue(task_id, blocked)
        GHSync-->>GH: comment + link TODO
    else Triage: Pass (Resolved/Low)
        Supervisor->>GHSync: sync_github_issue(task_id, proceeding)
        GHSync-->>GH: comment
        Supervisor->>Supervisor: transition to merging
    end
Loading

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related issues

Possibly related PRs

Poem

🔄 Reviews triaged, threads all tamed,
Sessions checkpointed when context is claimed,
Sync TODO with GitHub, bidirectional grace,
Pulses persist through the compaction race,
Nine new tools bloom in the documentation space! 📚✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 44.44% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'feat: supervisor review triage, compaction resilience, and quality subagents' directly and comprehensively summarizes the three main changes: review triage (t148), compaction resilience improvements, and new subagent additions.
Linked Issues check ✅ Passed The PR fully implements both linked issues: #437 adds review_triage state with check_review_threads() and triage_review_feedback() functions, GitHub GraphQL thread fetching, severity classification, merge blocking logic, and --skip-review-triage bypass [#437]; #439 fixes missing $SUPERVISOR_DB arguments in db() calls at the identified lines and removes unused no_pr_key variable [#439].
Out of Scope Changes check ✅ Passed All changes are directly scoped to the PR objectives: supervisor review triage (t148/t146), compaction resilience via pulse checkpointing and session helpers, documentation for GitHub issue sync, and addition of quality subagents (documentation and helper scripts). No unrelated refactoring or scope creep detected.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feature/batch2-quality-subagents

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist
Copy link

Summary of Changes

Hello @marcusquinn, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the supervisor's robustness and expands its capabilities. It introduces a critical review triage step in the PR lifecycle to prevent merges with outstanding issues, improves resilience for long-running AI sessions by enabling state checkpointing, and streamlines workflow management through automated GitHub issue synchronization. Additionally, it integrates a suite of new specialized subagents, broadening the system's functional scope.

Highlights

  • Supervisor Review Triage: Introduced a new 'review_triage' state in the supervisor's post-PR lifecycle to automatically check for unresolved review threads (human or high-severity bot findings) before merging, with an option to bypass this check.
  • Compaction Resilience: Enhanced the supervisor's ability to recover from context compaction by persisting task state to 'pulse-checkpoint.md' and tailoring reprompts for clean exits due to context limits, including worktree status and recent commits.
  • Session Checkpointing: Added a new 'session-checkpoint-helper.sh' script to allow interactive AI sessions to save and load their state to disk, improving resilience against context loss during long autonomous operations.
  • GitHub Issue Synchronization: Implemented a convention and automated syncing mechanism to link GitHub issues with TODO.md tasks, ensuring bidirectional status updates (e.g., closing issues on task completion or commenting on blocked tasks).
  • Database Fix: Corrected a bug where 'db()' calls in 'supervisor-helper.sh' were missing the '$SUPERVISOR_DB' argument, ensuring proper database interaction.
  • New Subagents: Integrated several new subagents, expanding the system's capabilities in areas such as backlink checking, subscription auditing, fuzzy string matching, document extraction, terminal optimization, voice AI models, transcription, and X/Twitter post fetching.
Changelog
  • .agents/AGENTS.md
    • Added a new section detailing the GitHub issue synchronization convention, outlining how issue titles and TODO.md tasks should be linked.
  • .agents/scripts/commands/log-issue-aidevops.md
    • Updated the instructions for creating GitHub issues to enforce the 't{NNN}:' prefix in titles and the subsequent 'ref:GH#{issue_number}' addition to TODO.md.
  • .agents/scripts/session-checkpoint-helper.sh
    • A new script was added to manage session checkpoints, allowing users to save, load, clear, and check the status of session state to a markdown file, primarily for AI context compaction resilience.
  • .agents/scripts/supervisor-helper.sh
    • Introduced a new 'review_triage' state into the PR lifecycle state machine and its valid transitions.
    • Implemented 'check_review_threads' and 'triage_review_feedback' functions to fetch and classify unresolved PR review comments (human vs. bot, severity).
    • Integrated the 'review_triage' logic into 'cmd_pr_lifecycle', blocking merges if human reviews or high-severity bot findings are unresolved, with a '--skip-review-triage' option.
    • Corrected several 'db' command calls to explicitly pass the '$SUPERVISOR_DB' argument, fixing a previously ignored bug.
    • Enhanced the 'cmd_reprompt' function to provide more specific guidance when a task fails due to context compaction, including worktree status and recent commits.
    • Added 'sync_github_issue' function to automatically close GitHub issues upon task completion or add comments when tasks are blocked.
    • Implemented 'save_pulse_checkpoint' to persist supervisor pulse state to 'pulse-checkpoint.md', aiding orchestrator re-orientation after context loss.
    • Updated internal documentation and help messages to reflect the new 'review_triage' state and 'pr-lifecycle' options.
  • .agents/scripts/x-helper.sh
    • A new script was added to fetch X/Twitter posts and user profiles using the fxtwitter API, supporting markdown, text, or raw JSON output.
  • .agents/seo/backlink-checker.md
    • A new subagent documentation was added, outlining tools and workflows for monitoring backlinks, detecting lost/broken links, and discovering expired domains for SEO purposes.
  • .agents/subagent-index.toon
    • The 'session-checkpoint-helper.sh' script was added to the subagent index.
  • .agents/tools/accounts/subscription-audit.md
    • A new subagent documentation was added, detailing how to audit, track, and optimize recurring payments by scanning emails, bank statements, and providing recommendations.
  • .agents/tools/context/rapidfuzz.md
    • A new subagent documentation was added, introducing the RapidFuzz library for fast fuzzy string matching, with examples and performance tips.
  • .agents/tools/document/document-extraction.md
    • A new subagent documentation was added, describing a privacy-preserving document extraction pipeline using Docling, ExtractThinker, and Presidio for structured data and PII redaction.
  • .agents/tools/terminal/terminal-optimization.md
    • A new subagent documentation was added, focusing on auditing and optimizing terminal environments, including shell startup, PATH, and recommending modern tool replacements.
  • .agents/tools/voice/transcription.md
    • A new subagent documentation was added, covering audio/video transcription using local (faster-whisper, whisper.cpp) and cloud (Groq, ElevenLabs, OpenAI) models, with input/output formats.
  • .agents/tools/voice/voice-models.md
    • A new subagent documentation was added, providing a guide to various Text-to-Speech (TTS) and Speech-to-Text (STT) models, both local and cloud-based, with usage examples and selection criteria.
  • .agents/workflows/plans.md
    • A new comprehensive section on 'GitHub Issue Sync' was added, detailing the conventions, creation workflows, and automated enforcement for linking GitHub issues with TODO.md tasks.
  • .agents/workflows/session-manager.md
    • A new section on 'Compaction Resilience (Long Autonomous Sessions)' was added, explaining the checkpoint workflow using 'session-checkpoint-helper.sh' and self-prompting loop patterns.
Activity
  • The pull request introduces a new 'review_triage' state in the supervisor's post-PR lifecycle, designed to automatically check for unresolved review threads before merging.
  • A new 'session-checkpoint-helper.sh' script has been added to enable interactive AI sessions to persist their state to disk, enhancing resilience against context compaction.
  • The supervisor's 'cmd_reprompt' function has been improved to provide more targeted guidance when tasks fail due to context limits, including relevant worktree information.
  • Automated synchronization between GitHub issues and TODO.md tasks has been implemented, ensuring that task completion or blocking events are reflected in corresponding issues.
  • Several new subagents have been introduced, expanding the system's capabilities in areas such as backlink checking, subscription auditing, fuzzy string matching, document extraction, terminal optimization, and voice AI.
  • All new scripts adhere to coding standards, including zero new ShellCheck violations and correct 'bash -n' syntax.
  • Testing was conducted for the new 'review_triage' state, its bypass mechanism, session checkpointing, and pulse checkpoint functionality.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@github-actions
Copy link

github-actions bot commented Feb 7, 2026

🔍 Code Quality Report

�[0;35m[MONITOR]�[0m Code Review Monitoring Report

�[0;34m[INFO]�[0m Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 26 code smells

�[0;34m[INFO]�[0m Recent monitoring activity:
Sat Feb 7 16:57:14 UTC 2026: Code review monitoring started
Sat Feb 7 16:57:15 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 26
Sat Feb 7 16:57:15 UTC 2026: Qlty - 0 issues found, auto-formatting applied
Sat Feb 7 16:57:17 UTC 2026: Codacy analysis completed with auto-fixes

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 26
  • VULNERABILITIES: 0

Generated on: Sat Feb 7 16:57:17 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring

Copy link

@gemini-code-assist gemini-code-assist bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Code Review

This pull request introduces several significant features, including a review triage gate to block merges on unresolved human or high-severity bot reviews, a session checkpointing system for compaction resilience, and a convention for syncing GitHub issues with internal tasks. The changes are extensive and well-documented, with new helper scripts and subagent definitions. I've identified a few bugs in the new helper scripts related to argument parsing and some areas for improvement in efficiency and robustness. Overall, this is a strong contribution that enhances the supervisor's capabilities and the framework's resilience.

# Main dispatch
main() {
local command="${1:-help}"
shift 2>/dev/null || true

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

high

The shift 2 command is incorrect here. It will consume the main command (e.g., save) and its first argument (e.g., --task), causing argument parsing to fail for any command that takes options. It should be shift or shift 1 to only consume the main command.

Suggested change
shift 2>/dev/null || true
shift || true

Copy link
Owner Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

False positive. This is shift 2>/dev/null || true, not shift 2. The 2>/dev/null redirects stderr (fd 2) to /dev/null — it does not pass 2 as an argument to shift. This is a standard bash pattern: shift (no argument) consumes one positional parameter, and 2>/dev/null || true silently handles the case where there are no remaining parameters.


main() {
local command="${1:-help}"
shift 2>/dev/null || true

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

high

The shift 2 command is incorrect. It consumes the main command (e.g., fetch) and its first argument (e.g., the URL), which is then lost before the argument parsing loop. This breaks all commands that take arguments. It should be shift or shift 1 to only consume the command itself.

Suggested change
shift 2>/dev/null || true
shift || true

Copy link
Owner Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

False positive. Same as above — this is shift 2>/dev/null || true, not shift 2. The 2> is stderr redirection, not an argument to shift. The command consumes exactly one positional parameter (the command name), leaving all remaining args for the option parsing loop.

Comment on lines 183 to 185
current_task="$(grep -m1 'Current Task' "$CHECKPOINT_FILE" | sed 's/.*| //' | sed 's/ *$//' || echo "unknown")"
local branch
branch="$(grep -m1 'Branch' "$CHECKPOINT_FILE" | sed 's/.*| //' | sed 's/ *$//' || echo "unknown")"

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The sed commands used to extract values from the checkpoint file are incorrect. They will leave a trailing | character in the extracted value because sed 's/ *$//' only removes trailing spaces, not the pipe character. A single, more robust sed command can correctly extract and trim the value for both variables.

Suggested change
current_task="$(grep -m1 'Current Task' "$CHECKPOINT_FILE" | sed 's/.*| //' | sed 's/ *$//' || echo "unknown")"
local branch
branch="$(grep -m1 'Branch' "$CHECKPOINT_FILE" | sed 's/.*| //' | sed 's/ *$//' || echo "unknown")"
current_task="$(grep -m1 'Current Task' "$CHECKPOINT_FILE" | sed 's/.*| *//; s/ *| *$//' || echo "unknown")"
local branch
branch="$(grep -m1 'Branch' "$CHECKPOINT_FILE" | sed 's/.*| *//; s/ *| *$//' || echo "unknown")"

Comment on lines 2995 to 2999
high_severity_count=$(echo "$threads_json" | jq '[.threads[] | select(.is_bot == true) | select(.body | test("bug|security|vulnerability|critical|error|crash|data.loss|injection|XSS|CSRF"; "i"))] | length' 2>/dev/null || echo "0")

if [[ "$high_severity_count" -gt 0 ]]; then
log_warn " $high_severity_count high-severity bot finding(s) - blocking merge"
echo "$threads_json" | jq -r '.threads[] | select(.is_bot == true) | select(.body | test("bug|security|vulnerability|critical|error|crash|data.loss|injection|XSS|CSRF"; "i")) | " - \(.author) on \(.path):\(.line): \(.body[0:120])"' 2>/dev/null || true

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The regex used to detect high-severity bot comments could have false positives. For example, a comment like "this is not a critical issue" would be flagged as high-severity. Using word boundaries (\b) in the regex would make it more accurate by matching whole words only.

Suggested change
high_severity_count=$(echo "$threads_json" | jq '[.threads[] | select(.is_bot == true) | select(.body | test("bug|security|vulnerability|critical|error|crash|data.loss|injection|XSS|CSRF"; "i"))] | length' 2>/dev/null || echo "0")
if [[ "$high_severity_count" -gt 0 ]]; then
log_warn " $high_severity_count high-severity bot finding(s) - blocking merge"
echo "$threads_json" | jq -r '.threads[] | select(.is_bot == true) | select(.body | test("bug|security|vulnerability|critical|error|crash|data.loss|injection|XSS|CSRF"; "i")) | " - \(.author) on \(.path):\(.line): \(.body[0:120])"' 2>/dev/null || true
high_severity_count=$(echo "$threads_json" | jq '[.threads[] | select(.is_bot == true) | select(.body | test("\\b(bug|security|vulnerability|critical|error|crash|data.loss|injection|XSS|CSRF)\\b"; "i"))] | length' 2>/dev/null || echo "0")
if [[ "$high_severity_count" -gt 0 ]]; then
log_warn " $high_severity_count high-severity bot finding(s) - blocking merge"
echo "$threads_json" | jq -r '.threads[] | select(.is_bot == true) | select(.body | test("\\b(bug|security|vulnerability|critical|error|crash|data.loss|injection|XSS|CSRF)\\b"; "i")) | " - \(.author) on \(.path):\(.line): \(.body[0:120])"' 2>/dev/null || true

Comment on lines 75 to 81
author=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('author',{}).get('name','Unknown'))" 2>/dev/null || echo "Unknown")
author_handle=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('author',{}).get('screen_name','unknown'))" 2>/dev/null || echo "unknown")
text=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('text',''))" 2>/dev/null || echo "")
created_at=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('created_at',''))" 2>/dev/null || echo "")
likes=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('likes',0))" 2>/dev/null || echo "0")
retweets=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('retweets',0))" 2>/dev/null || echo "0")
replies=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('replies',0))" 2>/dev/null || echo "0")

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

Parsing the JSON response by calling python3 seven separate times is very inefficient as each call starts a new process. This can be done with a single call to jq or a single Python script, which would be much faster and more readable.

Suggested change
author=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('author',{}).get('name','Unknown'))" 2>/dev/null || echo "Unknown")
author_handle=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('author',{}).get('screen_name','unknown'))" 2>/dev/null || echo "unknown")
text=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('text',''))" 2>/dev/null || echo "")
created_at=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('created_at',''))" 2>/dev/null || echo "")
likes=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('likes',0))" 2>/dev/null || echo "0")
retweets=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('retweets',0))" 2>/dev/null || echo "0")
replies=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tweet',{}).get('replies',0))" 2>/dev/null || echo "0")
author=$(echo "$response" | jq -r '.tweet.author.name // "Unknown"')
author_handle=$(echo "$response" | jq -r '.tweet.author.screen_name // "unknown"')
text=$(echo "$response" | jq -r '.tweet.text // ""')
created_at=$(echo "$response" | jq -r '.tweet.created_at // ""')
likes=$(echo "$response" | jq -r '.tweet.likes // 0')
retweets=$(echo "$response" | jq -r '.tweet.retweets // 0')
replies=$(echo "$response" | jq -r '.tweet.replies // 0')

Comment on lines 139 to 142
name=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('user',{}).get('name','Unknown'))" 2>/dev/null || echo "Unknown")
followers=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('user',{}).get('followers',0))" 2>/dev/null || echo "0")
following=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('user',{}).get('following',0))" 2>/dev/null || echo "0")
description=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('user',{}).get('description',''))" 2>/dev/null || echo "")

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

Similar to cmd_fetch, this function calls python3 multiple times to parse a single JSON response. This is inefficient. A single jq call would be much better for performance and readability.

Suggested change
name=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('user',{}).get('name','Unknown'))" 2>/dev/null || echo "Unknown")
followers=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('user',{}).get('followers',0))" 2>/dev/null || echo "0")
following=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('user',{}).get('following',0))" 2>/dev/null || echo "0")
description=$(echo "$response" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('user',{}).get('description',''))" 2>/dev/null || echo "")
name=$(echo "$response" | jq -r '.user.name // "Unknown"')
followers=$(echo "$response" | jq -r '.user.followers // 0')
following=$(echo "$response" | jq -r '.user.following // 0')
description=$(echo "$response" | jq -r '.user.description // ""')

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 17

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
.agents/scripts/supervisor-helper.sh (1)

3419-3437: ⚠️ Potential issue | 🟠 Major

json_set will fail if error isn’t valid JSON (no_pr counter won’t persist).

If error contains plain text, json_extract/json_set throws “malformed JSON”, and retry counts won’t save. Guard with json_valid() and use the escaped ID to avoid malformed SQL.
As per coding guidelines, "Automation scripts - focus on: Reliability and robustness".

🛡️ Suggested fix
-                no_pr_count=$(db "$SUPERVISOR_DB" "SELECT COALESCE(
-                    (SELECT CAST(json_extract(error, '$.no_pr_retries') AS INTEGER)
-                     FROM tasks WHERE id='$task_id'), 0);" 2>/dev/null || echo "0")
+                no_pr_count=$(db "$SUPERVISOR_DB" "SELECT COALESCE(
+                    (SELECT CAST(json_extract(
+                        CASE WHEN json_valid(error) THEN error ELSE '{}' END,
+                        '$.no_pr_retries') AS INTEGER)
+                     FROM tasks WHERE id='$escaped_id'), 0);" 2>/dev/null || echo "0")
...
-                db "$SUPERVISOR_DB" "UPDATE tasks SET error = json_set(COALESCE(error, '{}'), '$.no_pr_retries', $no_pr_count), updated_at = strftime('%Y-%m-%dT%H:%M:%SZ','now') WHERE id='$task_id';" 2>/dev/null || true
+                db "$SUPERVISOR_DB" "UPDATE tasks SET error = json_set(
+                    CASE WHEN json_valid(error) THEN error ELSE '{}' END,
+                    '$.no_pr_retries', $no_pr_count),
+                    updated_at = strftime('%Y-%m-%dT%H:%M:%SZ','now')
+                    WHERE id='$escaped_id';" 2>/dev/null || true
🤖 Fix all issues with AI agents
In @.agents/AGENTS.md:
- Line 105: In the "GitHub issue sync" rule update the inline code span to
remove the trailing space inside the backticks so the literal is `t{NNN}:` (no
space inside the code span) and then in the surrounding prose explicitly state
that when used in issue titles there must be a single space after the colon
(e.g., "t{NNN}: <title>") so linting passes while keeping the documentation of
the required trailing space; target the `t{NNN}:` code span and the sentence
that explains "Issue titles MUST be prefixed..." to make this change.

In @.agents/scripts/commands/log-issue-aidevops.md:
- Around line 132-142: The Step 5b workflow about reading/writing TODO.md should
not be in the user-facing /log-issue-aidevops command docs; remove or relocate
Step 5b (the instructions to read highest t-number, add ref:GH#{issue_number},
and commit/push TODO.md) to a maintainer-only guide (e.g., supervisor helper)
and instead leave only the user-facing behavior (create the GitHub issue,
optionally prefix title with t{NNN} if you want a visible convention) or
explicitly mark the TODO.md sync as “maintainer-only”; update the document
sections referencing Step 5b and any mention of TODO.md so /log-issue-aidevops
no longer instructs external users to commit to the aidevops repo.

In @.agents/scripts/session-checkpoint-helper.sh:
- Around line 66-77: The case block that parses options in the while loop
(handling --task, --next, --worktree, --branch, --batch, --note, --elapsed,
--target) must guard against missing values to avoid crashes under set -euo
pipefail; for each option (e.g., when setting current_task, next_tasks,
worktree_path, branch_name, batch_name, note, elapsed_mins, target_mins) add a
check that a value exists (e.g., verify $# -gt 1 and that $2 is not another
option) before assigning and shifting; on failure call print_error "Missing
value for <option>" and return 1 so the script fails gracefully instead of
hard-exiting.

In @.agents/scripts/supervisor-helper.sh:
- Around line 3348-3375: The script doesn't log when the review triage is
bypassed via the --skip-review-triage flag; update the branch handling
skip_review_triage (the else branch of the if checking "$skip_review_triage" and
"$dry_run") to emit an explicit audit log entry when skip_review_triage=="true"
(e.g., call log_info or log_warn indicating "Review triage bypassed via
--skip-review-triage for $task_id") before proceeding to call cmd_transition and
set tstatus="merging"; reference the existing variables and functions
triage_review_feedback, cmd_transition, tstatus and send_task_notification so
the new log sits next to those same operations.
- Around line 2893-2916: The current GraphQL fetch (graphql_result via gh api
graphql) only requests reviewThreads(first: 100) and therefore truncates
results; modify the logic that builds/executes the gh api graphql call to use
cursor-based pagination on reviewThreads by requesting pageInfo { hasNextPage
endCursor } and calling reviewThreads(first:100, after:$cursor) in a loop,
accumulating nodes into graphql_result (or a separate accumulator variable)
until hasNextPage is false, passing the endCursor as the next $cursor parameter
to gh api each iteration and merging returned nodes.

In @.agents/scripts/x-helper.sh:
- Around line 62-66: The curl call that populates the local variable response
using "${FXTWITTER_API}/${path}" can hang or ignore HTTP errors; update the
invocation in the block that assigns response (and the analogous block at the
other location) to use curl --fail (-f), a hard timeout (--max-time, e.g. 10),
and retry flags (--retry and optionally --retry-delay) so non-2xx responses and
stalls produce a non-zero exit and trigger the existing error path (print_error
"Failed to fetch tweet" / return 1); ensure the command still captures output
into the response variable and retains the existing error handling.
- Around line 166-170: The case for --format currently assumes a following
argument and will break under set -u if none is provided; update the --format
branch in the while/case handling (the block handling --format and setting
output_format) to first verify that a next positional parameter exists and is
not another option, and if missing print a clear error to stderr and exit
non-zero; otherwise assign output_format="$2" and shift 2 as before. Ensure this
check uses the same variables (output_format, args) and exits cleanly to
preserve script robustness.

In @.agents/seo/backlink-checker.md:
- Around line 123-126: Update the Related list entry that currently reads "Link
building strategies" to the hyphenated form "Link-building strategies" in the
markdown for the backlink checker; locate the string in
.agents/seo/backlink-checker.md (the Related list block containing
`seo/link-building.md`) and replace the unhyphenated text with the hyphenated
version so the list reads `seo/link-building.md` - Link-building strategies.
- Around line 31-77: The CLI examples referencing the non-existent seo-helper.sh
must be replaced with authoritative file:line references to the real export
scripts; update the Ahrefs section to point to the corresponding implementation
in seo-export-ahrefs.sh and the DataForSEO section to point to
seo-export-dataforseo.sh, and similarly replace the WHOIS/batch example with the
script and line reference that implements that batch flow; locate and edit the
examples in .agents/seo/backlink-checker.md so each command block becomes a
file:line pointer to the exact function/command in seo-export-ahrefs.sh or
seo-export-dataforseo.sh (and the batch WHOIS implementation) instead of using
seo-helper.sh.
- Around line 19-25: The docs reference a non-existent helper script and
subcommands (seo-helper.sh with backlinks, backlinks-bulk, check-expiry and
flags --source, --lost, --days, --broken, --referring-domains-only, --stdin), so
either implement seo-helper.sh exposing those subcommands and flags (implement
CLI parsing, wire to existing functions that call Ahrefs/DataForSEO/WHOIS, and
ensure script is installed under ~/.aidevops/agents/scripts/) or update the
examples to call existing helpers (e.g., keyword-research-helper.sh,
seo-analysis-helper.sh) and map the specific subcommands/flags to their real
equivalents; update all example references (seo-helper.sh backlinks,
backlinks-bulk, check-expiry and the listed flags) to match the chosen approach
so the workflow becomes functional.

In @.agents/tools/accounts/subscription-audit.md:
- Around line 27-73: The documentation references a non-existent helper script
subscription-audit-helper.sh in the "Audit Workflow" examples; remove those
illustrative bash blocks or replace them with authoritative file:line references
to the actual implementation (or to the real CLI commands) so examples are
accurate. Edit the markdown section where subscription-audit-helper.sh is shown
and either (a) delete the pseudo-commands and add file:line pointers to the real
script/implementation, or (b) substitute the blocks with the actual CLI/tool
names that exist in the codebase, ensuring the replacement points to the
authoritative source.

In @.agents/tools/document/document-extraction.md:
- Around line 26-40: The quick reference in document-extraction.md references a
non-existent helper script document-extraction-helper.sh; update the doc to
either mark these examples as planned (e.g., add a forward-looking note that
document-extraction-helper.sh is Phase 6/draft per PLANS.md) or replace the
examples with authoritative pointers to the planned command spec in
todo/PLANS.md (reference the PLANS.md section describing the
document-extraction-helper.sh commands), ensuring you mention
document-extraction-helper.sh and the PLANS.md entry so readers know the
examples are not yet implemented.
- Around line 1-200: This new "Document Extraction" doc duplicates the existing
Unstract-based extraction content (overlapping Docling/ExtractThinker/Presidio
vs Unstract MCP); reconcile by either consolidating this material into the
Unstract MCP documentation (merge Docling/ExtractThinker/Presidio sections into
the Unstract narrative and remove duplicate guidance), or clearly justify and
document why Docling+ExtractThinker+Presidio must coexist (add a short "When to
use" section comparing capabilities: Docling vs Unstract, extraction schemas
Invoice/Receipt/Contract, and privacy modes), and update any MCP integration
references (Unstract MCP / mcp-integrations) to point to the chosen canonical
doc so there are no contradictory or duplicate instructions across the agents
docs.
- Around line 129-167: The Extraction Schemas (Invoice, Receipt, ContractSummary
classes) are presented without corresponding implementations; update the
document to explicitly state they are example/template schemas (not
authoritative) by adding a short note above the code block indicating
"Example/template schemas — customize for your project", or if they are
canonical, replace the examples with file:line references to the actual
implementations that define Invoice, Receipt, and ContractSummary so readers
know where the real models live; ensure the note references the class names
Invoice, Receipt, and ContractSummary so it's clear which entries are examples
vs. authoritative.

In @.agents/tools/terminal/terminal-optimization.md:
- Around line 182-183: Remove the broken cross-reference entry to
`tools/ai-assistants/opencode.md` from
`.agents/tools/terminal/terminal-optimization.md` (the list that includes
`tools/terminal/terminal-title.md`), or if the reference is intended, restore
the missing `.agents/tools/ai-assistants/opencode.md` file; specifically, either
delete the `tools/ai-assistants/opencode.md - AI coding tool config` line from
the list or recreate the referenced `opencode.md` content so the link is valid.

In @.agents/tools/voice/transcription.md:
- Around line 28-146: The examples in this doc (the faster-whisper Python
snippet using WhisperModel with device="auto" and model "large-v3-turbo", the
whisper.cpp install/transcribe example, the Groq curl example, and the
ElevenLabs curl example) are non-authoritative and must be removed; replace each
inline example by pointing readers to the repo's canonical implementation
(voice-helper.sh) and/or official API docs using file:line references.
Specifically, remove the faster-whisper block referencing
WhisperModel("large-v3-turbo", device="auto"), remove the whisper.cpp brew/build
example and CLI usage, remove the Groq curl example (or update to use model
"whisper-large-v3" and include multipart/form-data headers) and remove the
ElevenLabs example (or point to official docs for correct model IDs like
scribe_v1 and xi-api-key header); instead add short references such as "see
voice-helper.sh:lines X-Y" and links to upstream docs for each provider.

In @.agents/tools/voice/voice-models.md:
- Around line 43-153: Replace the non-authoritative external TTS examples
(Qwen3-TTS, Piper, Bark, Coqui, ElevenLabs, OpenAI, Hugging Face) with
references to the repository's actual TTS implementations by removing those
example blocks and instead pointing readers to the in-repo implementations
EdgeTTS, MacOSSayTTS, and FacebookMMSTTS (use those exact identifiers to locate
the code), and remove the incorrect "voice-helper.sh tts" command reference
(replace it with instructions to use the repo's real voice bridge
implementations). Ensure the updated section contains short, authoritative notes
that refer to EdgeTTS, MacOSSayTTS, and FacebookMMSTTS for examples and usage
rather than external example code.
🧹 Nitpick comments (9)
.agents/tools/context/rapidfuzz.md (1)

32-116: Replace non‑authoritative code examples with file:line references (or cite an authoritative source).

Guideline says .agents/**/*.md should avoid code examples unless authoritative; most snippets here look illustrative. Please swap to file:line references or explicitly cite an authoritative source for each example.

As per coding guidelines, ".agents/**/*.md: Use code examples only when authoritative, otherwise use file:line references."

.agents/tools/document/document-extraction.md (1)

82-82: Consider removing star counts to reduce documentation maintenance.

GitHub star counts (lines 82, 106, 127) will become stale and require manual updates. Consider either removing them or adding a "as of [date]" qualifier to set reader expectations.

♻️ Proposed refactor to remove star counts
-- **Repo**: https://github.com/DS4SD/docling (16k+ stars)
+- **Repo**: https://github.com/DS4SD/docling

-- **Repo**: https://github.com/enoch3712/ExtractThinker (1.5k+ stars)
+- **Repo**: https://github.com/enoch3712/ExtractThinker

-- **Repo**: https://github.com/microsoft/presidio (3.5k+ stars)
+- **Repo**: https://github.com/microsoft/presidio

Also applies to: 106-106, 127-127

.agents/tools/terminal/terminal-optimization.md (5)

46-46: Document the ts dependency or use a portable alternative.

The ts command (from moreutils package) is not commonly installed by default. Consider either:

  1. Documenting the dependency: # Requires: brew install moreutils
  2. Using a portable alternative: zsh -xv -c exit 2>&1 | head -50
📝 Proposed alternatives

Option 1: Document dependency

 # Profile zsh startup
+# Requires: brew install moreutils
 zsh -xvs 2>&1 | ts -i '%.s' | head -50

Option 2: Remove dependency

 # Profile zsh startup
-zsh -xvs 2>&1 | ts -i '%.s' | head -50
+zsh -xv -c exit 2>&1 | head -50
+# For timing, use: hyperfine 'zsh -i -c exit' --warmup 3

62-62: Clarify the comment - nvm init is not standard nvm usage.

The comment refers to nvm init, but the standard initialization is source "$NVM_DIR/nvm.sh" or [ -s "$NVM_DIR/nvm.sh" ] && source "$NVM_DIR/nvm.sh" (as shown correctly in line 67).

📝 Proposed fix
-# Instead of: eval "$(nvm init)"
+# Instead of: source "$NVM_DIR/nvm.sh"
 # Use lazy-load:

81-83: PATH cleanup assumes no spaces in directory names.

The snippet uses tr '\n' ':' which will fail if directory names contain newlines (rare but possible). The current approach is acceptable for typical use cases, but for production-grade robustness, consider using a null-delimited approach.

🛡️ More robust alternative
 # Remove non-existent directories
-PATH=$(echo "$PATH" | tr ':' '\n' | while read -r dir; do
-    [ -d "$dir" ] && echo "$dir"
-done | tr '\n' ':' | sed 's/:$//')
+NEW_PATH=""
+IFS=':'
+for dir in $PATH; do
+    [ -d "$dir" ] && NEW_PATH="${NEW_PATH}${NEW_PATH:+:}${dir}"
+done
+PATH="$NEW_PATH"
+unset NEW_PATH

86-99: Note platform-specific install commands.

All install commands use brew, which assumes macOS or Linux with Homebrew. Consider adding a note about platform requirements or providing alternatives (e.g., apt-get, dnf, pacman).

📝 Suggested addition

Add before the table:

> **Note**: Install commands use Homebrew (`brew`). For other platforms, replace `brew install` with your package manager (e.g., `apt-get install`, `dnf install`, `pacman -S`).

116-118: Reconsider the "safety" aliases - they can create anti-patterns.

Aliasing rm/cp/mv to interactive mode is commonly discouraged because:

  • Scripts that source the config will break when encountering these aliases
  • Creates dependency on aliases that won't exist in CI/CD, containers, or other environments
  • Better practice: use -i explicitly when needed, or use trash as a safer alternative
♻️ Safer alternatives
-# Safety
-alias rm='rm -i'
-alias cp='cp -i'
-alias mv='mv -i'
+# Safety: use trash instead of rm (brew install trash)
+alias t='trash'
+# Or if you prefer prompts, use these in interactive sessions only:
+# alias rm='rm -i'  # Note: avoid in scripts - use /bin/rm instead
.agents/scripts/session-checkpoint-helper.sh (1)

181-186: Trim parsed table values to avoid trailing | in status output.

The current sed chain can leave a trailing pipe in current_task/branch. Use a field-based extraction.

🔧 Suggested fix
-    current_task="$(grep -m1 'Current Task' "$CHECKPOINT_FILE" | sed 's/.*| //' | sed 's/ *$//' || echo "unknown")"
+    current_task="$(awk -F'|' '/Current Task/ {gsub(/^[ \t]+|[ \t]+$/, "", $3); print $3; exit}' "$CHECKPOINT_FILE" || echo "unknown")"
-    branch="$(grep -m1 'Branch' "$CHECKPOINT_FILE" | sed 's/.*| //' | sed 's/ *$//' || echo "unknown")"
+    branch="$(awk -F'|' '/Branch/ {gsub(/^[ \t]+|[ \t]+$/, "", $3); print $3; exit}' "$CHECKPOINT_FILE" || echo "unknown")"
.agents/scripts/supervisor-helper.sh (1)

3834-3835: Metrics omit review_triage state (summary + checkpoint).

The summary and checkpoint lists exclude review_triage, so Post-PR metrics undercount.

📊 Suggested fix
-    total_pr_review=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM tasks WHERE status IN ('pr_review', 'merging', 'merged', 'deploying');")
+    total_pr_review=$(db "$SUPERVISOR_DB" "SELECT count(*) FROM tasks WHERE status IN ('pr_review', 'review_triage', 'merging', 'merged', 'deploying');")
...
-        FROM tasks WHERE status IN ('pr_review', 'merging', 'merged', 'deploying')
+        FROM tasks WHERE status IN ('pr_review', 'review_triage', 'merging', 'merged', 'deploying')

Also applies to: 4416-4418

Comment on lines 2893 to 2916
# Fetch review threads via GraphQL (REST API doesn't expose thread resolution)
local graphql_result
graphql_result=$(gh api graphql -f query='
query($owner: String!, $repo: String!, $pr: Int!) {
repository(owner: $owner, name: $repo) {
pullRequest(number: $pr) {
reviewThreads(first: 100) {
totalCount
nodes {
isResolved
comments(first: 1) {
nodes {
author { login }
body
path
line
}
}
}
}
}
}
}
' -f owner="$owner" -f repo="$repo_name" -F pr="$pr_number" 2>/dev/null || echo "")
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, examine the full context around the flagged lines and look for pagination handling
cd .agents/scripts && wc -l supervisor-helper.sh

Repository: marcusquinn/aidevops

Length of output: 89


🏁 Script executed:

# Extract context around lines 2893-2916 and look for pagination logic
sed -n '2880,2950p' .agents/scripts/supervisor-helper.sh

Repository: marcusquinn/aidevops

Length of output: 2657


🏁 Script executed:

# Search for pagination-related patterns in the script (pageInfo, endCursor, hasNextPage, after)
rg -n "pageInfo|endCursor|hasNextPage|after.*reviewThreads|pagination.*review" .agents/scripts/supervisor-helper.sh

Repository: marcusquinn/aidevops

Length of output: 46


🏁 Script executed:

# Look at how graphql_result is used after this fetch - check for loop/iteration logic
rg -n -A 20 "graphql_result" .agents/scripts/supervisor-helper.sh | head -100

Repository: marcusquinn/aidevops

Length of output: 2325


🌐 Web query:

GitHub GraphQL API reviewThreads pagination maximum page size cursor pagination endCursor

💡 Result:

  • Maximum page size: For any GitHub GraphQL connection (including pullRequest { reviewThreads(...) { ... } }), you must provide first or last, and the value must be between 1 and 100 (so max page size = 100). [1]
  • Cursor pagination (endCursor): Request pageInfo { endCursor hasNextPage } from the reviewThreads connection; to fetch the next page, pass the prior response’s pageInfo.endCursor as the after argument. [1]
  • reviewThreads pagination args: reviewThreads supports after, before, first, and last. [2]

Example (forward pagination):

query($owner: String!, $repo: String!, $pr: Int!, $cursor: String) {
  repository(owner: $owner, name: $repo) {
    pullRequest(number: $pr) {
      reviewThreads(first: 100, after: $cursor) {
        nodes { id }
        pageInfo { hasNextPage endCursor }
      }
    }
  }
}

Sources:
[1] GitHub Docs — Using pagination in the GraphQL API (docs.github.com)
[2] GitHub Docs — GraphQL Objects reference (shows reviewThreads args) (docs.github.com)

Citations:


Implement pagination for review threads — currently truncates results to first 100.

The GraphQL query uses reviewThreads(first: 100) but does not request pageInfo { hasNextPage endCursor }. While totalCount is fetched, the code only processes nodes from the first page, silently dropping any unresolved threads beyond 100. Add cursor-based pagination to fetch all review threads until hasNextPage is false, accumulating results in a loop.

Query pattern required:
reviewThreads(first: 100, after: $cursor) {
  totalCount
  pageInfo { hasNextPage endCursor }
  nodes { ... }
}

Loop until hasNextPage is false, passing each endCursor as the next after parameter.

🤖 Prompt for AI Agents
In @.agents/scripts/supervisor-helper.sh around lines 2893 - 2916, The current
GraphQL fetch (graphql_result via gh api graphql) only requests
reviewThreads(first: 100) and therefore truncates results; modify the logic that
builds/executes the gh api graphql call to use cursor-based pagination on
reviewThreads by requesting pageInfo { hasNextPage endCursor } and calling
reviewThreads(first:100, after:$cursor) in a loop, accumulating nodes into
graphql_result (or a separate accumulator variable) until hasNextPage is false,
passing the endCursor as the next $cursor parameter to gh api each iteration and
merging returned nodes.

- session-checkpoint-helper.sh: guard option parsing against missing values
- session-checkpoint-helper.sh: fix sed trailing pipe in checkpoint parsing (use awk)
- supervisor-helper.sh: log when --skip-review-triage bypasses triage gate
- supervisor-helper.sh: add word boundaries to high-severity regex
- supervisor-helper.sh: include review_triage state in metrics queries
- supervisor-helper.sh: guard json_extract with json_valid for no_pr counter
- supervisor-helper.sh: add pagination note to GraphQL reviewThreads query
- x-helper.sh: add curl timeouts/retries and --fail flag
- x-helper.sh: replace python3 JSON parsing with jq (single process per field)
- x-helper.sh: guard --format against missing value
- AGENTS.md: fix trailing space in code span (Codacy lint)
- log-issue-aidevops.md: mark Step 5b as maintainer-only
- backlink-checker.md: replace non-existent seo-helper.sh refs with real scripts
- subscription-audit.md: remove non-existent helper script examples
- document-extraction.md: mark as planned, add When to Use vs Unstract section
- terminal-optimization.md: remove broken opencode.md cross-reference
@github-actions
Copy link

github-actions bot commented Feb 7, 2026

🔍 Code Quality Report

�[0;35m[MONITOR]�[0m Code Review Monitoring Report

�[0;34m[INFO]�[0m Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 26 code smells

�[0;34m[INFO]�[0m Recent monitoring activity:
Sat Feb 7 17:12:32 UTC 2026: Code review monitoring started
Sat Feb 7 17:12:33 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 26
Sat Feb 7 17:12:33 UTC 2026: Qlty - 0 issues found, auto-formatting applied
Sat Feb 7 17:12:35 UTC 2026: Codacy analysis completed with auto-fixes

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 26
  • VULNERABILITIES: 0

Generated on: Sat Feb 7 17:12:35 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring

…cial docs

transcription.md:
- faster-whisper: remove fabricated device='auto' example, reference voice-bridge.py:99-115
- whisper.cpp: replace non-existent 'brew install' with actual build-from-source steps
- Groq: fix model to whisper-large-v3, add Content-Type header
- ElevenLabs: fix model_id to scribe_v1 (not scribe_v2)
- Add official doc links for all cloud APIs
- Remove non-existent download commands from dependencies

voice-models.md:
- Replace fabricated Qwen3-TTS/Piper/Bark/Coqui examples with links to official repos
- Remove non-existent 'voice-helper.sh tts' command reference
- Document actual implementations: EdgeTTS, MacOSSayTTS, FacebookMMSTTS
- Reference voice-bridge.py line ranges for each implementation
- Separate 'implemented' vs 'not yet integrated' models in selection guide
@github-actions
Copy link

github-actions bot commented Feb 7, 2026

🔍 Code Quality Report

�[0;35m[MONITOR]�[0m Code Review Monitoring Report

�[0;34m[INFO]�[0m Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 26 code smells

�[0;34m[INFO]�[0m Recent monitoring activity:
Sat Feb 7 18:04:17 UTC 2026: Code review monitoring started
Sat Feb 7 18:04:17 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 26
Sat Feb 7 18:04:18 UTC 2026: Qlty - 0 issues found, auto-formatting applied
Sat Feb 7 18:04:20 UTC 2026: Codacy analysis completed with auto-fixes

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 26
  • VULNERABILITIES: 0

Generated on: Sat Feb 7 18:04:20 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring

@sonarqubecloud
Copy link

sonarqubecloud bot commented Feb 7, 2026

@marcusquinn
Copy link
Owner Author

Closing: supervisor-helper.sh changes are stale (10+ PRs merged since). Will cherry-pick unique subagent docs into a clean PR.

@marcusquinn marcusquinn closed this Feb 7, 2026
marcusquinn added a commit that referenced this pull request Feb 7, 2026
…nc docs

Cherry-picked unique content from closed PR #446 (supervisor changes were stale).

New subagents: backlink-checker, voice transcription/models, document extraction,
terminal optimization, subscription audit, rapidfuzz, x-helper.

New scripts: session-checkpoint-helper.sh (compaction resilience),
x-helper.sh (X/Twitter posting).

Doc additions: GitHub issue sync convention in plans.md,
compaction resilience workflow in session-manager.md.
marcusquinn added a commit that referenced this pull request Feb 7, 2026
…nc docs (#470)

* chore: mark t135.8 blocked in TODO.md

* feat: add quality subagents, session checkpoints, and GitHub issue sync docs

Cherry-picked unique content from closed PR #446 (supervisor changes were stale).

New subagents: backlink-checker, voice transcription/models, document extraction,
terminal optimization, subscription audit, rapidfuzz, x-helper.

New scripts: session-checkpoint-helper.sh (compaction resilience),
x-helper.sh (X/Twitter posting).

Doc additions: GitHub issue sync convention in plans.md,
compaction resilience workflow in session-manager.md.
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

t146: bug: supervisor no_pr retry counter non-functional (missing $SUPERVISOR_DB) t148: Supervisor: add review-triage phase before PR merge

1 participant