
t1424: Upstream watch — track external repos for release monitoring#3994

Closed
johnwaldo wants to merge 4 commits into marcusquinn:main from johnwaldo:feature/t1424-upstream-watch

Conversation

@johnwaldo
Contributor

@johnwaldo johnwaldo commented Mar 9, 2026

Summary

  • New upstream-watch-helper.sh to maintain a watchlist of external repos we've borrowed ideas/code from
  • Checks for new GitHub releases, shows changelog diffs between our last-seen version and latest
  • Integrated into the daily auto-update cycle (24h-gated, configurable via AIDEVOPS_UPSTREAM_WATCH)
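
The 24h gate can be sketched as a small shell function. This is an illustrative sketch only — the function names and the plain-text state file below are hypothetical, not the PR's actual implementation (which stores state in JSON via jq):

```shell
#!/usr/bin/env bash
# Hedged sketch of the 24h gate: skip the upstream check unless the
# configured interval has elapsed since the last recorded check.
set -euo pipefail

UPSTREAM_WATCH_HOURS="${AIDEVOPS_UPSTREAM_WATCH_HOURS:-24}"
STATE_FILE="${STATE_FILE:-/tmp/upstream-watch-gate.txt}"

should_check_upstream() {
    # Honor the opt-out switch (AIDEVOPS_UPSTREAM_WATCH in the PR)
    [[ "${AIDEVOPS_UPSTREAM_WATCH:-true}" == "true" ]] || return 1
    # No recorded check yet: run one now
    [[ -f "$STATE_FILE" ]] || return 0
    local last now
    last=$(cat "$STATE_FILE")
    now=$(date +%s)
    # Gate opens once the configured interval has elapsed
    (( now - last >= UPSTREAM_WATCH_HOURS * 3600 ))
}

record_check() {
    date +%s >"$STATE_FILE"
}
```

Advancing the timestamp only via `record_check` after a successful check (rather than unconditionally) is what keeps a failed run eligible for retry on the next pulse.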

Why

We pull ideas and code patterns from early-release repos (portless, plankton, fractals, etc.) but had no mechanism to track their future releases. Skill imports track code we pulled in; contribution-watch tracks repos we filed issues on. This fills the gap: passive monitoring of repos we're watching for improvements relevant to our implementation.

What's included

| File | Purpose |
| --- | --- |
| upstream-watch-helper.sh | CLI: add, remove, check, ack, status |
| configs/upstream-watch.json | Watchlist config (committed template) |
| auto-update-helper.sh | 24h-gated check_upstream_watch() in daily cycle |
| AGENTS.md | Capabilities + domain index entries |
| reference/services.md | Auto-update section docs |
| reference/settings.md | Config key documentation |

Workflow

# Watch a repo
upstream-watch-helper.sh add vercel-labs/portless \
  --relevance "Local dev hosting — compare against localdev-helper.sh"

# Daily auto-check surfaces new releases
upstream-watch-helper.sh check

# After reviewing, acknowledge
upstream-watch-helper.sh ack vercel-labs/portless
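
Between `check` and `ack` the helper persists per-repo state. A minimal sketch of the write-to-temp-then-rename pattern such a helper might use — the flat `slug tag` file format here is purely illustrative; the actual script keeps JSON state via jq:

```shell
#!/usr/bin/env bash
# Illustrative atomic state update: write to a temp file first, then
# mv into place so an interrupted run never leaves a truncated file.
set -euo pipefail

STATE_FILE="${STATE_FILE:-/tmp/upstream-watch-last-seen.txt}"

record_last_seen() {
    local slug="$1" tag="$2" tmp
    tmp=$(mktemp "${STATE_FILE}.XXXXXX")
    # Keep every other repo's entry, replace this slug's line
    if [[ -f "$STATE_FILE" ]]; then
        grep -v "^${slug} " "$STATE_FILE" >"$tmp" || true
    fi
    printf '%s %s\n' "$slug" "$tag" >>"$tmp"
    mv "$tmp" "$STATE_FILE"   # rename is atomic on the same filesystem
}

last_seen() {
    local slug="$1"
    [[ -f "$STATE_FILE" ]] || return 1
    awk -v s="$slug" '$1 == s { print $2; found=1 } END { exit !found }' "$STATE_FILE"
}
```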

Testing

  • Tested add/remove/check/ack/status commands
  • Verified release diff display by simulating older version state
  • ShellCheck clean (only SC1091 — expected external source)
  • First watched repo: vercel-labs/portless (v0.5.2)

Closes #TBD

Summary by CodeRabbit

  • New Features

    • Added upstream watch to monitor external repositories for releases and notable commits.
    • Integrates upstream checks into the auto-update flow and honors a configurable check interval (default 24 hours).
    • Provides CLI commands to add/remove watches, check status, acknowledge updates, and view diffs.
  • Documentation

    • Added docs and configuration reference for the upstream watch feature and settings.

AI DevOps added 2 commits March 9, 2026 11:26
…1424)

Add upstream-watch-helper.sh to maintain a watchlist of external repos
we've borrowed ideas/code from. Checks for new releases, shows changelog
diffs between last-seen version and latest, supports explicit ack flow.

Distinct from skill-sources.json (imported skills) and contribution-watch
(repos we've contributed to) — this covers 'inspiration repos' for
passive monitoring.

Components:
- upstream-watch-helper.sh: add/remove/check/ack/status commands
- configs/upstream-watch.json: watchlist config (committed template)
- cache/upstream-watch-state.json: runtime state (gitignored)
- Auto-update integration: 24h-gated check in auto-update-helper.sh
- Docs: AGENTS.md capabilities + domain index, services.md, settings.md

First watched repo: vercel-labs/portless (local dev hosting comparison).

Closes #TBD
@coderabbitai
Contributor

coderabbitai bot commented Mar 9, 2026

Walkthrough

Adds an "Upstream watch" feature: new config file and settings, docs updates, integration into the auto-update helper with a 24-hour gate, and a new bash helper script to add/remove/check/ack/status watched GitHub repositories and persist state.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Docs & Config: .agents/AGENTS.md, .agents/reference/services.md, .agents/reference/settings.md, .agents/configs/upstream-watch.json, TODO.md | Documentation and settings added for upstream watch; new config schema upstream-watch.json; settings upstream_watch and upstream_watch_hours; backlog entries added. |
| Auto-Update Integration: .agents/scripts/auto-update-helper.sh | Integrated upstream watch into auto-update flow: added DEFAULT_UPSTREAM_WATCH_HOURS, check_upstream_watch(), update_upstream_watch_timestamp(), and invoked checks in existing cmd_check/update paths with 24h gate. |
| Upstream Watch Tool: .agents/scripts/upstream-watch-helper.sh | New comprehensive helper script providing add/remove/check/ack/status commands, gh/jq usage, config/state persistence, logging, and diffing of releases/commits per watched repo. |

Sequence Diagram

sequenceDiagram
    participant AU as Auto-Update Helper
    participant UWH as Upstream Watch Helper
    participant GH as GitHub API
    participant CF as Config/State Files

    AU->>AU: check_upstream_watch()
    AU->>AU: validate interval gate (upstream_watch_hours)
    AU->>CF: read `upstream-watch.json`
    CF-->>AU: repos list
    AU->>UWH: run cmd_check with repos
    UWH->>CF: load per-repo state
    loop for each watched repo
        UWH->>GH: query latest release (gh api)
        GH-->>UWH: release metadata
        UWH->>GH: query latest commit
        GH-->>UWH: commit metadata
        UWH->>UWH: compute diff vs state, flag updates_pending
    end
    UWH->>CF: write updated state (last_checked, last_seen, flags)
    UWH-->>AU: report status / pending updates
    AU->>CF: update last_upstream_watch_check timestamp

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested labels

needs-review, enhancement

Poem

🌊 New upstream sentinels awake,
Shell scripts watch for each release break,
Config and state in steady rhyme,
Daily checks keep pace with time,
DevOps sails on, errors at bay.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 44.00% which is insufficient. The required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The PR title directly and clearly describes the main feature addition: upstream watch capability for monitoring external repositories. It is specific, concise, and accurately reflects the primary change in the changeset. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new capability to passively monitor external GitHub repositories for new releases and significant changes. It establishes a dedicated system for tracking 'inspiration repos' that are not directly imported but provide valuable ideas or code patterns. This mechanism ensures that relevant updates from these external sources are automatically surfaced for review, enhancing awareness of upstream developments without requiring active manual tracking.

Highlights

  • New Upstream Watch Feature: Introduced a new upstream-watch-helper.sh script and associated configuration (configs/upstream-watch.json) to maintain a watchlist of external repositories.
  • Release and Changelog Monitoring: The system now checks for new GitHub releases and provides changelog diffs between the last-seen version and the latest for watched repositories.
  • Daily Auto-Update Integration: The upstream watch checks are integrated into the daily auto-update cycle, with configurable frequency and an opt-out option via AIDEVOPS_UPSTREAM_WATCH.
  • Enhanced Documentation: Updated AGENTS.md, reference/services.md, and reference/settings.md to reflect the new 'Upstream watch' capability and its configuration.
  • CLI for Watchlist Management: The upstream-watch-helper.sh script provides commands to add, remove, check, ack (acknowledge), and view the status of watched repositories.
Changelog
  • .agents/AGENTS.md
    • Added 'Upstream watch' to the domain index.
    • Included 'Upstream watch' as a key capability with its purpose and associated commands.
  • .agents/configs/upstream-watch.json
    • Added a new JSON configuration file to store the list of watched repositories.
  • .agents/reference/services.md
    • Updated the 'Auto-update' section to include a description of the 'Upstream watch' functionality.
  • .agents/reference/settings.md
    • Added new configuration keys auto_update.upstream_watch and auto_update.upstream_watch_hours for controlling the feature.
  • .agents/scripts/auto-update-helper.sh
    • Introduced AIDEVOPS_UPSTREAM_WATCH and AIDEVOPS_UPSTREAM_WATCH_HOURS environment variables.
    • Defined DEFAULT_UPSTREAM_WATCH_HOURS.
    • Implemented check_upstream_watch and update_upstream_watch_timestamp functions to manage the daily check and state.
    • Integrated check_upstream_watch into the cmd_check function for daily execution.
  • .agents/scripts/upstream-watch-helper.sh
    • Added a new executable shell script to provide CLI commands for managing upstream repository watches, fetching release and commit information, and displaying diffs.
  • TODO.md
    • Added a new backlog item for the 'Upstream watch' feature (t1424).
    • Added a new backlog item for evaluating portless (t1425), which is blocked by t1424.
Activity
  • No human activity has been recorded on this pull request yet.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new feature for monitoring upstream repositories, which is a valuable addition. The implementation is solid, with a new helper script, integration into the auto-update cycle, and corresponding documentation updates. My review focuses on improving the robustness and debuggability of the new shell scripts by enhancing error handling and argument parsing. Specifically, I've suggested changes to prevent silent failures in JSON processing, provide more informative error messages from gh commands, and make the command-line argument parsing for the add command more reliable.

Comment on lines +963 to +970
if [[ -f "$STATE_FILE" ]]; then
jq --arg ts "$timestamp" \
'. + {last_upstream_watch_check: $ts}' \
"$STATE_FILE" >"$tmp_state" 2>/dev/null && mv "$tmp_state" "$STATE_FILE"
else
jq -n --arg ts "$timestamp" \
'{last_upstream_watch_check: $ts}' >"$STATE_FILE"
fi

medium

This function has two issues related to file writing:

  1. Silent Failures: The jq command on line 966 suppresses stderr with 2>/dev/null. If jq fails (e.g., due to a corrupted state file), the error is hidden, and the timestamp is not updated. This would cause the upstream watch check to run on every pulse, which is inefficient.
  2. Non-Atomic Write: The else block on line 968 writes directly to $STATE_FILE. This is not an atomic operation. If the script is interrupted or the disk is full, it could result in a corrupted or empty state file.

I suggest refactoring this to use a temporary file for both cases and to log any errors from jq for better diagnostics.

Suggested change
if [[ -f "$STATE_FILE" ]]; then
jq --arg ts "$timestamp" \
'. + {last_upstream_watch_check: $ts}' \
"$STATE_FILE" >"$tmp_state" 2>/dev/null && mv "$tmp_state" "$STATE_FILE"
else
jq -n --arg ts "$timestamp" \
'{last_upstream_watch_check: $ts}' >"$STATE_FILE"
fi
if [[ -f "$STATE_FILE" ]]; then
jq --arg ts "$timestamp" \
'. + {last_upstream_watch_check: $ts}' \
"$STATE_FILE" >"$tmp_state" 2>>"$LOG_FILE" && mv "$tmp_state" "$STATE_FILE"
else
jq -n --arg ts "$timestamp" \
'{last_upstream_watch_check: $ts}' >"$tmp_state" 2>>"$LOG_FILE" && mv "$tmp_state" "$STATE_FILE"
fi
References
  1. When reporting errors for failed file operations in shell scripts, such as 'jq' writes, include the file path in the error message. Avoid suppressing stderr with '2>/dev/null' to ensure that diagnostic information about malformed files or write failures is visible.
  2. Avoid using '2>/dev/null' for blanket suppression of command errors in shell scripts to ensure that authentication, syntax, or system issues remain visible for debugging.

Comment on lines +132 to +135
echo "$state" | jq '.' >"$STATE_FILE" 2>/dev/null || {
_log_error "Failed to write state file (invalid JSON)"
return 1
}

medium

The use of 2>/dev/null here suppresses any error message from jq. If the JSON in the $state variable is invalid, the operation will fail silently, and the log message will be generic. It's better to capture the error output from jq and include it in the log for easier debugging.

Suggested change
echo "$state" | jq '.' >"$STATE_FILE" 2>/dev/null || {
_log_error "Failed to write state file (invalid JSON)"
return 1
}
local output
if ! output=$(echo "$state" | jq '.' 2>&1); then
_log_error "Failed to write state file (invalid JSON): ${output}"
return 1
fi
echo "$output" >"$STATE_FILE"
References
  1. When reporting errors for failed file operations in shell scripts, such as 'jq' writes, include the file path in the error message. Avoid suppressing stderr with '2>/dev/null' to ensure that diagnostic information about malformed files or write failures is visible.
  2. Avoid using '2>/dev/null' for blanket suppression of command errors in shell scripts to ensure that authentication, syntax, or system issues remain visible for debugging.

Comment on lines +169 to +172
echo "$config" | jq '.' >"$CONFIG_FILE" 2>/dev/null || {
_log_error "Failed to write config file (invalid JSON)"
return 1
}

medium

Similar to _write_state, suppressing the error output from jq with 2>/dev/null makes it difficult to diagnose issues with malformed JSON. Capturing and logging the specific error from jq would improve debuggability.

Suggested change
echo "$config" | jq '.' >"$CONFIG_FILE" 2>/dev/null || {
_log_error "Failed to write config file (invalid JSON)"
return 1
}
local output
if ! output=$(echo "$config" | jq '.' 2>&1); then
_log_error "Failed to write config file (invalid JSON): ${output}"
return 1
fi
echo "$output" >"$CONFIG_FILE"
References
  1. When reporting errors for failed file operations in shell scripts, such as 'jq' writes, include the file path in the error message. Avoid suppressing stderr with '2>/dev/null' to ensure that diagnostic information about malformed files or write failures is visible.
  2. Avoid using '2>/dev/null' for blanket suppression of command errors in shell scripts to ensure that authentication, syntax, or system issues remain visible for debugging.

Comment on lines +219 to +222
repo_info=$(gh api "repos/${slug}" --jq '{description, stargazers_count, pushed_at, default_branch}' 2>/dev/null) || {
echo -e "${RED}Error: Could not access repo ${slug}. Check the slug and your permissions.${NC}" >&2
return 1
}

medium

Suppressing gh api's stderr with 2>/dev/null hides potentially useful error messages. If the command fails due to permissions, an incorrect slug, or network issues, the generic error message "Could not access repo..." isn't very helpful for debugging. It's better to capture and display the actual error output from the gh command.

Suggested change
repo_info=$(gh api "repos/${slug}" --jq '{description, stargazers_count, pushed_at, default_branch}' 2>/dev/null) || {
echo -e "${RED}Error: Could not access repo ${slug}. Check the slug and your permissions.${NC}" >&2
return 1
}
if ! repo_info=$(gh api "repos/${slug}" --jq '{description, stargazers_count, pushed_at, default_branch}' 2>&1); then
echo -e "${RED}Error: Could not access repo ${slug}. Check the slug and your permissions.\nDetails from gh: ${repo_info}${NC}" >&2
return 1
fi
References
  1. Avoid using '2>/dev/null' for blanket suppression of command errors in shell scripts to ensure that authentication, syntax, or system issues remain visible for debugging.

Comment on lines +703 to +719
local slug=""
local relevance=""
while [[ $# -gt 0 ]]; do
case "$1" in
--relevance)
relevance="${2:-}"
shift 2
;;
*)
if [[ -z "$slug" ]]; then
slug="$1"
fi
shift
;;
esac
done
cmd_add "$slug" "$relevance"

medium

The argument parsing for the add command is a bit fragile. The *) case unconditionally shifts, which can consume arguments meant for other flags if they appear after the slug. For example, add my/repo --verbose would incorrectly consume --verbose. This can also lead to silently ignored arguments.

A more robust approach would be to handle unknown options and ensure only one slug is provided.

        local slug=""
        local relevance=""
        while [[ $# -gt 0 ]]; do
            case "$1" in
            --relevance)
                relevance="${2:-}"
                shift 2
                ;;
            -*)
                echo -e "${RED}Unknown option: $1${NC}" >&2
                return 1
                ;;
            *)
                if [[ -z "$slug" ]]; then
                    slug="$1"
                else
                    echo -e "${RED}Error: multiple slugs provided for 'add' command.${NC}" >&2
                    return 1
                fi
                shift
                ;;
            esac
        done
        cmd_add "$slug" "$relevance"
References
  1. When parsing CLI command strings to extract specific arguments, use a robust method that accounts for different flag formats (e.g., '--flag value', '--flag=value', and valueless flags) by maintaining a whitelist of flags known to consume an additional argument.

@github-actions
Contributor

github-actions bot commented Mar 9, 2026

Missing issue link. This PR references issue #3995, but the PR body doesn't contain a closing keyword.

Add Closes #3995 to the PR description so GitHub auto-links and auto-closes the issue on merge.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (1)
.agents/configs/upstream-watch.json (1)

1-4: Consider adding an example repo entry structure in the comment.

The config template is clean and functional, but developers adding repos manually would benefit from seeing the expected schema. Consider extending the $comment or adding a commented-out example entry.

📋 Optional enhancement to document expected structure
 {
-  "$comment": "Upstream repos to watch for releases and significant changes. Managed by upstream-watch-helper.sh (t1424).",
+  "$comment": "Upstream repos to watch for releases and significant changes. Managed by upstream-watch-helper.sh (t1424). Example entry: {\"owner\": \"vercel-labs\", \"repo\": \"portless\", \"last_seen\": \"v0.5.2\", \"last_checked\": \"2026-03-09T12:00:00Z\", \"acked\": true}",
   "repos": []
 }

Alternatively, include a schema reference or link to documentation showing the full structure with required/optional fields.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/configs/upstream-watch.json around lines 1 - 4, Update the
"$comment" value in the JSON (or add a commented-out example object next to the
"repos" array) to include a concrete example repo entry showing expected keys
and types (e.g., owner, repo, branches, tags, watchers, or other fields used by
upstream-watch-helper.sh (t1424)); reference the "$comment" and "repos" symbols
so maintainers can see the expected schema inline and/or add a link to external
docs for the full schema.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.agents/scripts/auto-update-helper.sh:
- Around line 940-947: The script currently calls
update_upstream_watch_timestamp unconditionally after running
"$upstream_watch_script" check; change this so the timestamp is only advanced
when the check succeeds: run "$upstream_watch_script" check, capture its exit
status (or rely on the if-success branch), call log_info/log_warn as now, and
move the call to update_upstream_watch_timestamp into the if-true branch (or
call it only when the captured exit status is zero) to avoid suppressing retries
when the check failed.
- Around line 913-930: The scheduled check hard-codes $HOME/.aidevops/agents for
upstream_watch_script and watch_config, causing mismatch with
upstream-watch-helper.sh which uses
${AIDEVOPS_AGENTS_DIR:-$HOME/.aidevops/agents}; change the resolution to use
agents_dir="${AIDEVOPS_AGENTS_DIR:-$HOME/.aidevops/agents}" and build
upstream_watch_script and watch_config from that agents_dir (keeping the
existing INSTALL_DIR fallback for upstream_watch_script), so the variables
upstream_watch_script, watch_config, log_info and
update_upstream_watch_timestamp operate against the same directory as the
helper.

In @.agents/scripts/upstream-watch-helper.sh:
- Around line 430-434: The code currently advances
.repos[$slug].last_commit_seen on every check (the jq update in
upstream-watch-helper.sh using variables state, slug, now, latest_commit,
has_new_release), which can shrink the pending-release diff before an explicit
ack; change the jq update so it only updates .repos[$slug].last_checked and
.repos[$slug].updates_pending (remove assignment to .last_commit_seen), leaving
.last_commit_seen unchanged until the explicit ack handler sets it (ensure the
ack flow uses latest_commit or the short commit id to update
.repos[$slug].last_commit_seen).
- Around line 362-390: The probes currently swallow gh api errors by assigning
empty strings to release_json and commit_json; update cmd_check() so you capture
and check the exit status/output of each gh api call (for release_json and
commit_json), log a clear error message including the repo slug and the API
stderr/output on failure, set a probe failure flag (e.g., probe_failed=true)
when either call fails instead of treating empty values as "up to date", and
after processing repos return a non-zero exit code from cmd_check() when any (or
all, per policy) probes failed; ensure you still update
last_checked/last_commit_seen/updates_pending only when probes succeeded for
that repo to avoid masking failures.

In `@TODO.md`:
- Line 110: The TODO entry for t1424 inaccurately describes shipped scope;
update the t1424 description to reflect the actual implementation (a daily
auto-update/release-watch flow) by removing claims about "integration with
pulse" and "tracks commit activity", and instead note the new script
upstream-watch-helper.sh, the config file configs/upstream-watch.json, that it
focuses on release/watch updates (first watched repo vercel-labs/portless), and
clarify it is distinct from skill-sources.json and contribution-watch; make
these wording edits so the entry aligns with the delivered feature.

---

Nitpick comments:
In @.agents/configs/upstream-watch.json:
- Around line 1-4: Update the "$comment" value in the JSON (or add a
commented-out example object next to the "repos" array) to include a concrete
example repo entry showing expected keys and types (e.g., owner, repo, branches,
tags, watchers, or other fields used by upstream-watch-helper.sh (t1424));
reference the "$comment" and "repos" symbols so maintainers can see the expected
schema inline and/or add a link to external docs for the full schema.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 38e967db-2180-47c2-88a4-c38bfe4bae90

📥 Commits

Reviewing files that changed from the base of the PR and between 254e22f and 2a2aa85.

📒 Files selected for processing (7)
  • .agents/AGENTS.md
  • .agents/configs/upstream-watch.json
  • .agents/reference/services.md
  • .agents/reference/settings.md
  • .agents/scripts/auto-update-helper.sh
  • .agents/scripts/upstream-watch-helper.sh
  • TODO.md

Comment on lines +362 to +390
release_json=$(gh api "repos/${slug}/releases/latest" 2>/dev/null) || release_json=""

if [[ -n "$release_json" ]]; then
latest_release_tag=$(echo "$release_json" | jq -r '.tag_name // ""')
latest_release_name=$(echo "$release_json" | jq -r '.name // ""')
latest_release_date=$(echo "$release_json" | jq -r '.published_at // ""')
else
latest_release_tag=""
latest_release_name=""
latest_release_date=""
fi

local has_new_release=false
if [[ -n "$latest_release_tag" && "$latest_release_tag" != "$last_release_seen" ]]; then
has_new_release=true
fi

# --- Check commits (even if no new release) ---
local latest_commit latest_commit_date
local commit_json
commit_json=$(gh api "repos/${slug}/commits?per_page=1" --jq '.[0]' 2>/dev/null) || commit_json=""

if [[ -n "$commit_json" ]]; then
latest_commit=$(echo "$commit_json" | jq -r '.sha // ""')
latest_commit_date=$(echo "$commit_json" | jq -r '.commit.committer.date // ""')
else
latest_commit=""
latest_commit_date=""
fi
Contributor


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

wc -l .agents/scripts/upstream-watch-helper.sh

Repository: marcusquinn/aidevops

Length of output: 108


🏁 Script executed:

cat -n .agents/scripts/upstream-watch-helper.sh | head -100

Repository: marcusquinn/aidevops

Length of output: 4344


🏁 Script executed:

cat -n .agents/scripts/upstream-watch-helper.sh | sed -n '350,450p'

Repository: marcusquinn/aidevops

Length of output: 4628


🏁 Script executed:

cat -n .agents/scripts/upstream-watch-helper.sh | sed -n '290,350p'

Repository: marcusquinn/aidevops

Length of output: 1815


🏁 Script executed:

grep -n "cmd_check()" .agents/scripts/upstream-watch-helper.sh -A 50 | head -70

Repository: marcusquinn/aidevops

Length of output: 1770


🏁 Script executed:

grep -n "return" .agents/scripts/upstream-watch-helper.sh | tail -20

Repository: marcusquinn/aidevops

Length of output: 402


Don't treat probe failures as "up to date" — log API errors and propagate failure.

When gh api calls fail (lines 362, 382), the code silently collapses to empty strings and line 427 reports the repo as current. Meanwhile, lines 431–434 still update last_checked, last_commit_seen, and updates_pending, advancing timestamps that mask repeated failures. A deleted/private repo or transient GitHub/auth failure becomes a false clean bill of health instead of surfacing a check error.

Add logging for API failures and return a non-zero exit code from cmd_check() when any repo's probes fail (or at minimum, when all repos fail). This aligns with the automation script guidelines: "Reliability and robustness", "Clear logging and feedback", and "Proper exit codes".

Also applies to: 426–434

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/scripts/upstream-watch-helper.sh around lines 362 - 390, The probes
currently swallow gh api errors by assigning empty strings to release_json and
commit_json; update cmd_check() so you capture and check the exit status/output
of each gh api call (for release_json and commit_json), log a clear error
message including the repo slug and the API stderr/output on failure, set a
probe failure flag (e.g., probe_failed=true) when either call fails instead of
treating empty values as "up to date", and after processing repos return a
non-zero exit code from cmd_check() when any (or all, per policy) probes failed;
ensure you still update last_checked/last_commit_seen/updates_pending only when
probes succeeded for that repo to avoid masking failures.
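
The fix the reviewer asks for can be sketched as follows. Everything here is illustrative, not the PR's actual code: `probe` stands in for the real `gh api` calls, and `check_repos`/`LOG_FILE` are hypothetical names:

```shell
#!/usr/bin/env bash
# Sketch: capture each probe's stderr, log failures per repo, and make
# the check return non-zero when any probe failed, instead of treating
# an empty response as "up to date".
set -uo pipefail

LOG_FILE="${LOG_FILE:-/tmp/upstream-watch.log}"

check_repos() {
    local failed=0 slug output
    for slug in "$@"; do
        # 2>&1 keeps the API's own error text instead of discarding it
        if output=$(probe "$slug" 2>&1); then
            echo "ok: ${slug}: ${output}"
        else
            echo "probe failed for ${slug}: ${output}" >>"$LOG_FILE"
            failed=1   # flag the failure but keep checking other repos
        fi
    done
    return "$failed"
}
```

Per-repo state (last_checked, updates_pending) would then be updated only inside the success branch, so a deleted repo or transient GitHub outage cannot silently advance timestamps.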

@marcusquinn
Owner

This PR is from an external contributor (@johnwaldo). Auto-merge is disabled for external PRs — a maintainer must review and approve manually.


To approve or decline, comment on this PR:

  • approved — removes the review gate and allows merge (CI permitting)
  • declined: <reason> — closes this PR (include your reason after the colon)

@marcusquinn marcusquinn added external-contributor PR from external contributor, requires maintainer review needs-maintainer-review Awaiting maintainer review before pipeline pickup labels Mar 9, 2026
- Only advance 24h gate on successful check (auto-update-helper.sh)
- Use AIDEVOPS_AGENTS_DIR instead of hardcoded path (auto-update-helper.sh)
- Don't advance last_commit_seen on check — only on ack (upstream-watch-helper.sh)
- Capture gh api and jq errors instead of suppressing with 2>/dev/null
- Add schema example to upstream-watch.json comment
- Align t1424 TODO description with shipped scope
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
.agents/scripts/upstream-watch-helper.sh (1)

517-523: Combine redundant API calls for each release tag.

Lines 519-520 make two separate API calls to fetch .name and .published_at for the same release. This doubles API requests in the release diff loop, which can be slow for repos with many releases between versions.

♻️ Combine into single API call
 		if [[ "$in_range" == true ]]; then
 			release_count=$((release_count + 1))
 			# Get one-line summary for each release
-			local rel_name rel_date
-			rel_name=$(gh api "repos/${slug}/releases/tags/${tag}" --jq '.name // .tag_name' 2>/dev/null) || rel_name="$tag"
-			rel_date=$(gh api "repos/${slug}/releases/tags/${tag}" --jq '.published_at // ""' 2>/dev/null) || rel_date=""
+			local rel_info rel_name rel_date
+			rel_info=$(gh api "repos/${slug}/releases/tags/${tag}" --jq '{name: (.name // .tag_name), date: (.published_at // "")}' 2>/dev/null) || rel_info='{"name":"'"$tag"'","date":""}'
+			rel_name=$(echo "$rel_info" | jq -r '.name')
+			rel_date=$(echo "$rel_info" | jq -r '.date')
 			local date_short="${rel_date:0:10}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/scripts/upstream-watch-helper.sh around lines 517-523, the two
separate gh api calls that set rel_name and rel_date should be replaced with a
single gh api request that returns both .name (or .tag_name) and .published_at.
Parse those values into rel_name and rel_date, then feed them into the existing
date_short and echo logic so the API is hit only once per tag.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 0c9a5e4f-c2e5-4674-bf44-68d0a1fba718

📥 Commits

Reviewing files that changed from the base of the PR and between 2a2aa85 and aa6b65f.

📒 Files selected for processing (4)
  • .agents/configs/upstream-watch.json
  • .agents/scripts/auto-update-helper.sh
  • .agents/scripts/upstream-watch-helper.sh
  • TODO.md
🚧 Files skipped from review as they are similar to previous changes (2)
  • .agents/scripts/auto-update-helper.sh
  • .agents/configs/upstream-watch.json

Comment on lines +456 to +476
		if [[ "$probe_failed" != true ]]; then
			state=$(echo "$state" | jq --arg slug "$slug" --arg now "$now" \
				--argjson pending "$([[ "$has_new_release" == true ]] && echo 1 || echo 0)" \
				'.repos[$slug].last_checked = $now | .repos[$slug].updates_pending = $pending')
		fi

	done <<<"$slugs"

	# Update global last_check
	state=$(echo "$state" | jq --arg now "$now" '.last_check = $now')
	_write_state "$state"

	echo ""
	if [[ "$updates_found" -gt 0 ]]; then
		echo -e "${YELLOW}${updates_found} repo(s) have updates to review.${NC}"
	else
		echo -e "${GREEN}All watched repos are up to date.${NC}"
	fi

	_log_info "Check complete: ${updates_found} updates found"
	return 0


⚠️ Potential issue | 🟠 Major

Propagate probe failures to the exit code for integration reliability.

The probe_failed flag correctly skips state updates per-repo (addressing prior feedback), but cmd_check() always returns 0 at line 476. If API probes fail—especially for all repos—the calling auto-update integration receives a false success signal.

Consider tracking a failure count and returning non-zero when any (or all, per policy) probes fail:

🔧 Suggested fix
 	local updates_found=0
+	local probes_failed=0
 	local now
 	now=$(_now_iso)

Inside the while loop after probe_failed=true is set, increment:

+		[[ "$probe_failed" == true ]] && probes_failed=$((probes_failed + 1))

At the end of cmd_check():

 	_log_info "Check complete: ${updates_found} updates found"
-	return 0
+	if [[ "$probes_failed" -gt 0 ]]; then
+		_log_warn "Check completed with ${probes_failed} probe failure(s)"
+		return 1
+	fi
+	return 0
 }

As per coding guidelines, "Automation scripts - focus on: Proper exit codes".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/scripts/upstream-watch-helper.sh around lines 456-476, cmd_check
currently always returns 0 even when probe failures occur. Track probe failures
with a counter incremented whenever probe_failed is set inside the per-repo
loop, then replace the unconditional "return 0" at the end of cmd_check with
logic that returns a non-zero code when the counter is greater than zero, while
preserving the existing updates_found logging and _write_state behavior.

Comment on lines +603 to +616
	# Get current latest release
	local latest_tag
	latest_tag=$(gh api "repos/${slug}/releases/latest" --jq '.tag_name' 2>/dev/null) || latest_tag=""

	local latest_commit
	latest_commit=$(gh api "repos/${slug}/commits?per_page=1" --jq '.[0].sha // empty' 2>/dev/null) || latest_commit=""

	local now
	now=$(_now_iso)

	state=$(echo "$state" | jq --arg slug "$slug" --arg tag "$latest_tag" \
		--arg commit "${latest_commit:0:7}" --arg now "$now" \
		'.repos[$slug].last_release_seen = $tag | .repos[$slug].last_commit_seen = $commit | .repos[$slug].last_checked = $now | .repos[$slug].updates_pending = 0')
	_write_state "$state"


⚠️ Potential issue | 🟡 Minor

Warn when API calls fail during acknowledge to prevent silent baseline reset.

Lines 605 and 608 silently fall back to empty strings if the API calls fail. This causes lines 613-616 to set last_release_seen and last_commit_seen to empty values, effectively resetting the baseline. The next check will flag all releases as new.

🛡️ Add warning on API failure
 	# Get current latest release
-	local latest_tag
-	latest_tag=$(gh api "repos/${slug}/releases/latest" --jq '.tag_name' 2>/dev/null) || latest_tag=""
+	local latest_tag api_failed=false
+	latest_tag=$(gh api "repos/${slug}/releases/latest" --jq '.tag_name' 2>/dev/null) || {
+		latest_tag=""
+		# 404 is OK (no releases), but log other failures
+	}

-	local latest_commit
-	latest_commit=$(gh api "repos/${slug}/commits?per_page=1" --jq '.[0].sha // empty' 2>/dev/null) || latest_commit=""
+	local latest_commit
+	latest_commit=$(gh api "repos/${slug}/commits?per_page=1" --jq '.[0].sha // empty' 2>/dev/null) || {
+		latest_commit=""
+		api_failed=true
+	}
+
+	if [[ "$api_failed" == true && -z "$latest_tag" && -z "$latest_commit" ]]; then
+		echo -e "${YELLOW}Warning: Could not fetch current state from API. Ack may be incomplete.${NC}" >&2
+		_log_warn "ack for ${slug} proceeding with incomplete API data"
+	fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/scripts/upstream-watch-helper.sh around lines 603-616, the script
currently silences failures of the gh api calls by defaulting latest_tag and
latest_commit to empty strings, which then overwrites
last_release_seen/last_commit_seen. Detect failure (an empty value) and emit a
warning (via stderr or a logger) that the API call failed for $slug. In the jq
update that sets .repos[$slug].last_release_seen and .last_commit_seen before
_write_state, only overwrite those fields when the corresponding latest_tag or
latest_commit is non-empty; otherwise keep the existing values from the current
state variable, while still updating last_checked and updates_pending as
before.
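
The conditional-overwrite guard can be sketched in plain shell (variable names follow the prompt; the probed values are simulated rather than fetched, so this is a sketch of the pattern, not the shipped code):

```shell
#!/usr/bin/env sh
# Sketch: keep the existing baseline when a probe comes back empty (API failure)
# instead of resetting last_release_seen / last_commit_seen to "".
last_release_seen="v1.2.0"   # existing state (assumed example values)
last_commit_seen="abc1234"

latest_tag=""                # simulated failed release probe
latest_commit="def5678900"   # simulated successful commit probe (full SHA)

# Only overwrite a field when the freshly probed value is non-empty
if [ -n "$latest_tag" ]; then
	last_release_seen="$latest_tag"
else
	echo "warning: release probe failed; keeping baseline ${last_release_seen}" >&2
fi
if [ -n "$latest_commit" ]; then
	# Truncate to the short SHA, matching the 7-character form stored in state
	last_commit_seen=$(printf '%s' "$latest_commit" | cut -c1-7)
fi

echo "release=${last_release_seen} commit=${last_commit_seen}"
```

With the simulated failure above, the release baseline survives while the commit baseline advances, so the next check does not re-flag every release as new.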

@marcusquinn
Owner

Closing: rebased onto main, resolved task ID collision (t1424→t1426), and addressed all CodeRabbit findings. Reopening as new PR from main repo branch.

@github-actions
Contributor

Completed via PR #4003. Task t1426 merged to main.

@github-actions github-actions bot added the status:done Task is complete label Mar 10, 2026
@marcusquinn
Owner

Pulse recovery: dispatched a salvage worker for this high-risk closed-unmerged PR (branch deleted).

  • Model: sonnet/gpt alternating default (headless-runtime-helper)
  • Branch: salvage/pr-3994-upstream-watch
  • Scope: Reconstruct recoverable value from the PR diff into a new branch and mergeable PR.
  • Attempt: 1 of 1
  • Direction: Preserve intent, reduce blast radius, and re-link to the original context.

@marcusquinn
Owner

Pulse salvage action: dispatching a recovery worker because this is high-risk closed-unmerged work (+923/-1, branch deleted). Worker will reconstruct the diff onto a fresh branch and open a replacement PR linked to the original intent.

@marcusquinn
Owner

Audit cross-link: recovery was completed in PR #4003 (t1426), which superseded this closed-unmerged PR after task-ID collision resolution and review-finding fixes.

@marcusquinn
Owner

Pulse dispatching salvage worker for this closed-unmerged high-risk PR.

  • Model: default headless rotation (sonnet/codex via headless-runtime-helper)
  • Scope: Assess whether PR can be safely recovered (reopen/cherry-pick) and execute the best recovery path
  • Direction: Verify against current main and avoid duplicate/superseded recovery

@marcusquinn
Owner

Starting salvage recovery dispatch for closed-unmerged high-risk PR.

  • Model: default headless rotation (anthropic/claude-sonnet-4-6 or openai/gpt-5.3-codex)
  • Branch: recreate from diff (source branch deleted)
  • Scope: recover valuable upstream-watch code from PR #3994 and reopen as a fresh implementation PR
  • Attempt: 1 of 1
  • Direction: recreate only validated, non-superseded changes and link resulting PR back to #3994
