t1424: Upstream watch — track external repos for release monitoring #3994
johnwaldo wants to merge 4 commits into marcusquinn:main
Conversation
…backlog [skip ci]
…1424) Add upstream-watch-helper.sh to maintain a watchlist of external repos we've borrowed ideas/code from. Checks for new releases, shows changelog diffs between the last-seen version and latest, and supports an explicit ack flow. Distinct from skill-sources.json (imported skills) and contribution-watch (repos we've contributed to) — this covers 'inspiration repos' for passive monitoring.

Components:
- upstream-watch-helper.sh: add/remove/check/ack/status commands
- configs/upstream-watch.json: watchlist config (committed template)
- cache/upstream-watch-state.json: runtime state (gitignored)
- Auto-update integration: 24h-gated check in auto-update-helper.sh
- Docs: AGENTS.md capabilities + domain index, services.md, settings.md

First watched repo: vercel-labs/portless (local dev hosting comparison). Closes #TBD
Walkthrough: Adds an "Upstream watch" feature: new config file and settings, docs updates, integration into the auto-update helper with a 24-hour gate, and a new bash helper script to add/remove/check/ack/status watched GitHub repositories and persist state.
Sequence Diagram

```mermaid
sequenceDiagram
    participant AU as Auto-Update Helper
    participant UWH as Upstream Watch Helper
    participant GH as GitHub API
    participant CF as Config/State Files
    AU->>AU: check_upstream_watch()
    AU->>AU: validate interval gate (upstream_watch_hours)
    AU->>CF: read upstream-watch.json
    CF-->>AU: repos list
    AU->>UWH: run cmd_check with repos
    UWH->>CF: load per-repo state
    loop for each watched repo
        UWH->>GH: query latest release (gh api)
        GH-->>UWH: release metadata
        UWH->>GH: query latest commit
        GH-->>UWH: commit metadata
        UWH->>UWH: compute diff vs state, flag updates_pending
    end
    UWH->>CF: write updated state (last_checked, last_seen, flags)
    UWH-->>AU: report status / pending updates
    AU->>CF: update last_upstream_watch_check timestamp
```
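The interval gate in the flow above can be sketched as follows. This is a minimal illustration, not the helper's actual code: the function name `should_run_upstream_watch`, the sed-based timestamp extraction (the real script uses jq), and GNU `date -d` parsing are all assumptions.

```shell
# Sketch of a 24-hour interval gate over a JSON state file.
# Returns 0 (run the check) when no valid last-check timestamp exists
# or when the gate interval has elapsed; returns 1 otherwise.
should_run_upstream_watch() {
  local state_file="$1"
  local gate_hours="${2:-24}"
  local last_iso now_epoch last_epoch

  # No state file yet: the check has never run, so run it now.
  [[ -f "$state_file" ]] || return 0

  # Extract the last-check timestamp (illustrative sed; the helper uses jq).
  last_iso=$(sed -n 's/.*"last_upstream_watch_check": *"\([^"]*\)".*/\1/p' "$state_file")
  [[ -n "$last_iso" ]] || return 0

  now_epoch=$(date -u +%s)
  # GNU date; macOS would need `date -j -f` instead.
  last_epoch=$(date -u -d "$last_iso" +%s) || return 0

  # Run only when the gate interval has elapsed.
  (( now_epoch - last_epoch >= gate_hours * 3600 ))
}
```

A caller would gate the check with `should_run_upstream_watch "$state_file" 24 && run_check`, advancing the timestamp only after a successful run, as the review below recommends.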
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Summary of Changes: Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces a new capability to passively monitor external GitHub repositories for new releases and significant changes. It establishes a dedicated system for tracking 'inspiration repos' that are not directly imported but provide valuable ideas or code patterns. This mechanism ensures that relevant updates from these external sources are automatically surfaced for review, enhancing awareness of upstream developments without requiring active manual tracking.
Code Review
This pull request introduces a new feature for monitoring upstream repositories, which is a valuable addition. The implementation is solid, with a new helper script, integration into the auto-update cycle, and corresponding documentation updates. My review focuses on improving the robustness and debuggability of the new shell scripts by enhancing error handling and argument parsing. Specifically, I've suggested changes to prevent silent failures in JSON processing, provide more informative error messages from gh commands, and make the command-line argument parsing for the add command more reliable.
```shell
if [[ -f "$STATE_FILE" ]]; then
  jq --arg ts "$timestamp" \
    '. + {last_upstream_watch_check: $ts}' \
    "$STATE_FILE" >"$tmp_state" 2>/dev/null && mv "$tmp_state" "$STATE_FILE"
else
  jq -n --arg ts "$timestamp" \
    '{last_upstream_watch_check: $ts}' >"$STATE_FILE"
fi
```
This function has two issues related to file writing:

- Silent failures: the `jq` command on line 966 suppresses stderr with `2>/dev/null`. If `jq` fails (e.g., due to a corrupted state file), the error is hidden and the timestamp is not updated. This would cause the upstream watch check to run on every pulse, which is inefficient.
- Non-atomic write: the `else` block on line 968 writes directly to `$STATE_FILE`. This is not an atomic operation. If the script is interrupted or the disk is full, it could result in a corrupted or empty state file.
I suggest refactoring this to use a temporary file for both cases and to log any errors from jq for better diagnostics.
```diff
 if [[ -f "$STATE_FILE" ]]; then
   jq --arg ts "$timestamp" \
     '. + {last_upstream_watch_check: $ts}' \
-    "$STATE_FILE" >"$tmp_state" 2>/dev/null && mv "$tmp_state" "$STATE_FILE"
+    "$STATE_FILE" >"$tmp_state" 2>>"$LOG_FILE" && mv "$tmp_state" "$STATE_FILE"
 else
   jq -n --arg ts "$timestamp" \
-    '{last_upstream_watch_check: $ts}' >"$STATE_FILE"
+    '{last_upstream_watch_check: $ts}' >"$tmp_state" 2>>"$LOG_FILE" && mv "$tmp_state" "$STATE_FILE"
 fi
```
References
- When reporting errors for failed file operations in shell scripts, such as 'jq' writes, include the file path in the error message. Avoid suppressing stderr with '2>/dev/null' to ensure that diagnostic information about malformed files or write failures is visible.
- Avoid using '2>/dev/null' for blanket suppression of command errors in shell scripts to ensure that authentication, syntax, or system issues remain visible for debugging.
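As a generic illustration of the pattern these references describe, a hypothetical `write_json` helper (not part of this PR) can capture `jq`'s stderr and name the destination path instead of discarding diagnostics:

```shell
# Hypothetical helper: validate JSON with jq, surfacing jq's own error
# message and the destination path on failure rather than suppressing
# stderr with 2>/dev/null.
write_json() {
  local json="$1" dest="$2" output
  if ! output=$(printf '%s' "$json" | jq '.' 2>&1); then
    echo "Failed to write ${dest}: ${output}" >&2
    return 1
  fi
  printf '%s\n' "$output" >"$dest"
}
```

The same shape applies to any external command: capture `2>&1` into a variable on the failure path, then include it in the logged error.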
```shell
echo "$state" | jq '.' >"$STATE_FILE" 2>/dev/null || {
  _log_error "Failed to write state file (invalid JSON)"
  return 1
}
```
The use of 2>/dev/null here suppresses any error message from jq. If the JSON in the $state variable is invalid, the operation will fail silently, and the log message will be generic. It's better to capture the error output from jq and include it in the log for easier debugging.
```diff
-echo "$state" | jq '.' >"$STATE_FILE" 2>/dev/null || {
-  _log_error "Failed to write state file (invalid JSON)"
-  return 1
-}
+local output
+if ! output=$(echo "$state" | jq '.' 2>&1); then
+  _log_error "Failed to write state file (invalid JSON): ${output}"
+  return 1
+fi
+echo "$output" >"$STATE_FILE"
```
```shell
echo "$config" | jq '.' >"$CONFIG_FILE" 2>/dev/null || {
  _log_error "Failed to write config file (invalid JSON)"
  return 1
}
```
Similar to _write_state, suppressing the error output from jq with 2>/dev/null makes it difficult to diagnose issues with malformed JSON. Capturing and logging the specific error from jq would improve debuggability.
```diff
-echo "$config" | jq '.' >"$CONFIG_FILE" 2>/dev/null || {
-  _log_error "Failed to write config file (invalid JSON)"
-  return 1
-}
+local output
+if ! output=$(echo "$config" | jq '.' 2>&1); then
+  _log_error "Failed to write config file (invalid JSON): ${output}"
+  return 1
+fi
+echo "$output" >"$CONFIG_FILE"
```
```shell
repo_info=$(gh api "repos/${slug}" --jq '{description, stargazers_count, pushed_at, default_branch}' 2>/dev/null) || {
  echo -e "${RED}Error: Could not access repo ${slug}. Check the slug and your permissions.${NC}" >&2
  return 1
}
```
Suppressing gh api's stderr with 2>/dev/null hides potentially useful error messages. If the command fails due to permissions, an incorrect slug, or network issues, the generic error message "Could not access repo..." isn't very helpful for debugging. It's better to capture and display the actual error output from the gh command.
```diff
-repo_info=$(gh api "repos/${slug}" --jq '{description, stargazers_count, pushed_at, default_branch}' 2>/dev/null) || {
-  echo -e "${RED}Error: Could not access repo ${slug}. Check the slug and your permissions.${NC}" >&2
-  return 1
-}
+if ! repo_info=$(gh api "repos/${slug}" --jq '{description, stargazers_count, pushed_at, default_branch}' 2>&1); then
+  echo -e "${RED}Error: Could not access repo ${slug}. Check the slug and your permissions.\nDetails from gh: ${repo_info}${NC}" >&2
+  return 1
+fi
```
```shell
local slug=""
local relevance=""
while [[ $# -gt 0 ]]; do
  case "$1" in
    --relevance)
      relevance="${2:-}"
      shift 2
      ;;
    *)
      if [[ -z "$slug" ]]; then
        slug="$1"
      fi
      shift
      ;;
  esac
done
cmd_add "$slug" "$relevance"
```
The argument parsing for the `add` command is a bit fragile. The `*)` case unconditionally shifts, so any flag appearing after the slug is silently swallowed. For example, `add my/repo --verbose` would silently discard `--verbose` instead of reporting an unknown option, and extra positional arguments are ignored without warning.
A more robust approach would be to reject unknown options and ensure only one slug is provided.
```shell
local slug=""
local relevance=""
while [[ $# -gt 0 ]]; do
  case "$1" in
    --relevance)
      relevance="${2:-}"
      shift 2
      ;;
    -*)
      echo -e "${RED}Unknown option: $1${NC}" >&2
      return 1
      ;;
    *)
      if [[ -z "$slug" ]]; then
        slug="$1"
      else
        echo -e "${RED}Error: multiple slugs provided for 'add' command.${NC}" >&2
        return 1
      fi
      shift
      ;;
  esac
done
cmd_add "$slug" "$relevance"
```

References
- When parsing CLI command strings to extract specific arguments, use a robust method that accounts for different flag formats (e.g., '--flag value', '--flag=value', and valueless flags) by maintaining a whitelist of flags known to consume an additional argument.
Missing issue link. This PR references issue #3995, but the PR body doesn't contain a closing keyword. Add `Closes #3995` to the PR body.
Actionable comments posted: 5
🧹 Nitpick comments (1)
.agents/configs/upstream-watch.json (1)

1-4: Consider adding an example repo entry structure in the comment. The config template is clean and functional, but developers adding repos manually would benefit from seeing the expected schema. Consider extending the `$comment` or adding a commented-out example entry.

📋 Optional enhancement to document expected structure

```diff
 {
-  "$comment": "Upstream repos to watch for releases and significant changes. Managed by upstream-watch-helper.sh (t1424).",
+  "$comment": "Upstream repos to watch for releases and significant changes. Managed by upstream-watch-helper.sh (t1424). Example entry: {\"owner\": \"vercel-labs\", \"repo\": \"portless\", \"last_seen\": \"v0.5.2\", \"last_checked\": \"2026-03-09T12:00:00Z\", \"acked\": true}",
   "repos": []
 }
```

Alternatively, include a schema reference or link to documentation showing the full structure with required/optional fields.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/configs/upstream-watch.json around lines 1 - 4, Update the "$comment" value in the JSON (or add a commented-out example object next to the "repos" array) to include a concrete example repo entry showing expected keys and types (e.g., owner, repo, branches, tags, watchers, or other fields used by upstream-watch-helper.sh (t1424)); reference the "$comment" and "repos" symbols so maintainers can see the expected schema inline and/or add a link to external docs for the full schema.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.agents/scripts/auto-update-helper.sh:
- Around line 940-947: The script currently calls
update_upstream_watch_timestamp unconditionally after running
"$upstream_watch_script" check; change this so the timestamp is only advanced
when the check succeeds: run "$upstream_watch_script" check, capture its exit
status (or rely on the if-success branch), call log_info/log_warn as now, and
move the call to update_upstream_watch_timestamp into the if-true branch (or
call it only when the captured exit status is zero) to avoid suppressing retries
when the check failed.
- Around line 913-930: The scheduled check hard-codes $HOME/.aidevops/agents for
upstream_watch_script and watch_config, causing mismatch with
upstream-watch-helper.sh which uses
${AIDEVOPS_AGENTS_DIR:-$HOME/.aidevops/agents}; change the resolution to use
agents_dir="${AIDEVOPS_AGENTS_DIR:-$HOME/.aidevops/agents}" and build
upstream_watch_script and watch_config from that agents_dir (keeping the
existing INSTALL_DIR fallback for upstream_watch_script), so the variables
upstream_watch_script, watch_config, log_info and
update_upstream_watch_timestamp operate against the same directory as the
helper.
In @.agents/scripts/upstream-watch-helper.sh:
- Around line 430-434: The code currently advances
.repos[$slug].last_commit_seen on every check (the jq update in
upstream-watch-helper.sh using variables state, slug, now, latest_commit,
has_new_release), which can shrink the pending-release diff before an explicit
ack; change the jq update so it only updates .repos[$slug].last_checked and
.repos[$slug].updates_pending (remove assignment to .last_commit_seen), leaving
.last_commit_seen unchanged until the explicit ack handler sets it (ensure the
ack flow uses latest_commit or the short commit id to update
.repos[$slug].last_commit_seen).
- Around line 362-390: The probes currently swallow gh api errors by assigning
empty strings to release_json and commit_json; update cmd_check() so you capture
and check the exit status/output of each gh api call (for release_json and
commit_json), log a clear error message including the repo slug and the API
stderr/output on failure, set a probe failure flag (e.g., probe_failed=true)
when either call fails instead of treating empty values as "up to date", and
after processing repos return a non-zero exit code from cmd_check() when any (or
all, per policy) probes failed; ensure you still update
last_checked/last_commit_seen/updates_pending only when probes succeeded for
that repo to avoid masking failures.
In `@TODO.md`:
- Line 110: The TODO entry for t1424 inaccurately describes shipped scope;
update the t1424 description to reflect the actual implementation (a daily
auto-update/release-watch flow) by removing claims about "integration with
pulse" and "tracks commit activity", and instead note the new script
upstream-watch-helper.sh, the config file configs/upstream-watch.json, that it
focuses on release/watch updates (first watched repo vercel-labs/portless), and
clarify it is distinct from skill-sources.json and contribution-watch; make
these wording edits so the entry aligns with the delivered feature.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 38e967db-2180-47c2-88a4-c38bfe4bae90
📒 Files selected for processing (7)
- .agents/AGENTS.md
- .agents/configs/upstream-watch.json
- .agents/reference/services.md
- .agents/reference/settings.md
- .agents/scripts/auto-update-helper.sh
- .agents/scripts/upstream-watch-helper.sh
- TODO.md
```shell
release_json=$(gh api "repos/${slug}/releases/latest" 2>/dev/null) || release_json=""

if [[ -n "$release_json" ]]; then
  latest_release_tag=$(echo "$release_json" | jq -r '.tag_name // ""')
  latest_release_name=$(echo "$release_json" | jq -r '.name // ""')
  latest_release_date=$(echo "$release_json" | jq -r '.published_at // ""')
else
  latest_release_tag=""
  latest_release_name=""
  latest_release_date=""
fi

local has_new_release=false
if [[ -n "$latest_release_tag" && "$latest_release_tag" != "$last_release_seen" ]]; then
  has_new_release=true
fi

# --- Check commits (even if no new release) ---
local latest_commit latest_commit_date
local commit_json
commit_json=$(gh api "repos/${slug}/commits?per_page=1" --jq '.[0]' 2>/dev/null) || commit_json=""

if [[ -n "$commit_json" ]]; then
  latest_commit=$(echo "$commit_json" | jq -r '.sha // ""')
  latest_commit_date=$(echo "$commit_json" | jq -r '.commit.committer.date // ""')
else
  latest_commit=""
  latest_commit_date=""
fi
```
Don't treat probe failures as "up to date" — log API errors and propagate failure.
When gh api calls fail (lines 362, 382), the code silently collapses to empty strings and line 427 reports the repo as current. Meanwhile, lines 431–434 still update last_checked, last_commit_seen, and updates_pending, advancing timestamps that mask repeated failures. A deleted/private repo or transient GitHub/auth failure becomes a false clean bill of health instead of surfacing a check error.
Add logging for API failures and return a non-zero exit code from cmd_check() when any repo's probes fail (or at minimum, when all repos fail). This aligns with the automation script guidelines: "Reliability and robustness", "Clear logging and feedback", and "Proper exit codes".
Also applies to: 426–434
This PR is from an external contributor (@johnwaldo). Auto-merge is disabled for external PRs — a maintainer must review and approve manually. To approve or decline, comment on this PR:
- Only advance 24h gate on successful check (auto-update-helper.sh)
- Use AIDEVOPS_AGENTS_DIR instead of hardcoded path (auto-update-helper.sh)
- Don't advance last_commit_seen on check — only on ack (upstream-watch-helper.sh)
- Capture gh api and jq errors instead of suppressing with 2>/dev/null
- Add schema example to upstream-watch.json comment
- Align t1424 TODO description with shipped scope
Actionable comments posted: 2
🧹 Nitpick comments (1)
.agents/scripts/upstream-watch-helper.sh (1)

517-523: Combine redundant API calls for each release tag. Lines 519-520 make two separate API calls to fetch `.name` and `.published_at` for the same release. This doubles API requests in the release diff loop, which can be slow for repos with many releases between versions.

♻️ Combine into single API call
```diff
 if [[ "$in_range" == true ]]; then
   release_count=$((release_count + 1))
   # Get one-line summary for each release
-  local rel_name rel_date
-  rel_name=$(gh api "repos/${slug}/releases/tags/${tag}" --jq '.name // .tag_name' 2>/dev/null) || rel_name="$tag"
-  rel_date=$(gh api "repos/${slug}/releases/tags/${tag}" --jq '.published_at // ""' 2>/dev/null) || rel_date=""
+  local rel_info rel_name rel_date
+  rel_info=$(gh api "repos/${slug}/releases/tags/${tag}" --jq '{name: (.name // .tag_name), date: (.published_at // "")}' 2>/dev/null) || rel_info='{"name":"'"$tag"'","date":""}'
+  rel_name=$(echo "$rel_info" | jq -r '.name')
+  rel_date=$(echo "$rel_info" | jq -r '.date')
   local date_short="${rel_date:0:10}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/upstream-watch-helper.sh around lines 517 - 523, The two separate gh api calls that set rel_name and rel_date (see variables rel_name, rel_date and the gh api invocations) should be replaced with a single gh api request that returns both .name (or .tag_name) and .published_at and then parse those values into rel_name and rel_date; update the block that currently runs rel_name=$(gh api "repos/${slug}/releases/tags/${tag}" --jq '.name // .tag_name' ...) and rel_date=$(gh api "repos/${slug}/releases/tags/${tag}" --jq '.published_at // ""' ...) to one gh api call and split its output into the existing date_short and echo logic so you only hit the API once per tag.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.agents/scripts/upstream-watch-helper.sh:
- Around line 603-616: The script currently silences failures of the gh API
calls by defaulting latest_tag and latest_commit to empty strings, which then
overwrites last_release_seen/last_commit_seen; change the logic in the block
that calls gh api (the assignments to latest_tag and latest_commit) to detect
failure (empty value) and emit a warning (e.g., via stderr or a logger)
indicating the API call failed for $slug, and when updating state (the jq update
that sets .repos[$slug].last_release_seen and .last_commit_seen before
_write_state) only overwrite those fields if the corresponding latest_tag or
latest_commit are non-empty—otherwise keep the existing values from the current
state variable and still update last_checked and updates_pending as before.
- Around line 456-476: cmd_check currently always returns 0 even when probe
failures occur; update it to track probe failures (e.g., a failure counter
variable incremented whenever probe_failed is set inside the per-repo loop) and
then return a non-zero exit code if any probes failed before the final return.
Locate the probe_failed flag usage in cmd_check and increment a failure counter
there, then replace the unconditional "return 0" at the end of cmd_check with
logic that returns a non-zero code when the failure counter is >0 (while
preserving existing behavior for updates_found logging and _write_state).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 0c9a5e4f-c2e5-4674-bf44-68d0a1fba718
📒 Files selected for processing (4)
- .agents/configs/upstream-watch.json
- .agents/scripts/auto-update-helper.sh
- .agents/scripts/upstream-watch-helper.sh
- TODO.md
🚧 Files skipped from review as they are similar to previous changes (2)
- .agents/scripts/auto-update-helper.sh
- .agents/configs/upstream-watch.json
```shell
  if [[ "$probe_failed" != true ]]; then
    state=$(echo "$state" | jq --arg slug "$slug" --arg now "$now" \
      --argjson pending "$([[ "$has_new_release" == true ]] && echo 1 || echo 0)" \
      '.repos[$slug].last_checked = $now | .repos[$slug].updates_pending = $pending')
  fi

done <<<"$slugs"

# Update global last_check
state=$(echo "$state" | jq --arg now "$now" '.last_check = $now')
_write_state "$state"

echo ""
if [[ "$updates_found" -gt 0 ]]; then
  echo -e "${YELLOW}${updates_found} repo(s) have updates to review.${NC}"
else
  echo -e "${GREEN}All watched repos are up to date.${NC}"
fi

_log_info "Check complete: ${updates_found} updates found"
return 0
```
Propagate probe failures to the exit code for integration reliability.
The probe_failed flag correctly skips state updates per-repo (addressing prior feedback), but cmd_check() always returns 0 at line 476. If API probes fail—especially for all repos—the calling auto-update integration receives a false success signal.
Consider tracking a failure count and returning non-zero when any (or all, per policy) probes fail:
🔧 Suggested fix

```diff
 local updates_found=0
+local probes_failed=0
 local now
 now=$(_now_iso)
```

Inside the while loop, after `probe_failed=true` is set, increment:

```diff
+[[ "$probe_failed" == true ]] && probes_failed=$((probes_failed + 1))
```

At the end of `cmd_check()`:

```diff
 _log_info "Check complete: ${updates_found} updates found"
-return 0
+if [[ "$probes_failed" -gt 0 ]]; then
+  _log_warn "Check completed with ${probes_failed} probe failure(s)"
+  return 1
+fi
+return 0
 }
```

As per coding guidelines, "Automation scripts - focus on: Proper exit codes".
```shell
# Get current latest release
local latest_tag
latest_tag=$(gh api "repos/${slug}/releases/latest" --jq '.tag_name' 2>/dev/null) || latest_tag=""

local latest_commit
latest_commit=$(gh api "repos/${slug}/commits?per_page=1" --jq '.[0].sha // empty' 2>/dev/null) || latest_commit=""

local now
now=$(_now_iso)

state=$(echo "$state" | jq --arg slug "$slug" --arg tag "$latest_tag" \
  --arg commit "${latest_commit:0:7}" --arg now "$now" \
  '.repos[$slug].last_release_seen = $tag | .repos[$slug].last_commit_seen = $commit | .repos[$slug].last_checked = $now | .repos[$slug].updates_pending = 0')
_write_state "$state"
```
Warn when API calls fail during acknowledge to prevent silent baseline reset.
Lines 605 and 608 silently fall back to empty strings if the API calls fail. This causes lines 613-616 to set last_release_seen and last_commit_seen to empty values, effectively resetting the baseline. The next check will flag all releases as new.
🛡️ Add warning on API failure
```diff
 # Get current latest release
-local latest_tag
-latest_tag=$(gh api "repos/${slug}/releases/latest" --jq '.tag_name' 2>/dev/null) || latest_tag=""
+local latest_tag api_failed=false
+latest_tag=$(gh api "repos/${slug}/releases/latest" --jq '.tag_name' 2>/dev/null) || {
+  latest_tag=""
+  # 404 is OK (no releases), but log other failures
+}

-local latest_commit
-latest_commit=$(gh api "repos/${slug}/commits?per_page=1" --jq '.[0].sha // empty' 2>/dev/null) || latest_commit=""
+local latest_commit
+latest_commit=$(gh api "repos/${slug}/commits?per_page=1" --jq '.[0].sha // empty' 2>/dev/null) || {
+  latest_commit=""
+  api_failed=true
+}
+
+if [[ "$api_failed" == true && -z "$latest_tag" && -z "$latest_commit" ]]; then
+  echo -e "${YELLOW}Warning: Could not fetch current state from API. Ack may be incomplete.${NC}" >&2
+  _log_warn "ack for ${slug} proceeding with incomplete API data"
+fi
```
Closing: rebased onto main, resolved task ID collision (t1424→t1426), and addressed all CodeRabbit findings. Reopening as new PR from main repo branch. |
Completed via PR #4003. Task t1426 merged to main. |
Pulse recovery: dispatched a salvage worker for this high-risk closed-unmerged PR (branch deleted).
Pulse salvage action: dispatching a recovery worker because this is high-risk closed-unmerged work (+923/-1, branch deleted). Worker will reconstruct the diff onto a fresh branch and open a replacement PR linked to the original intent.
Audit cross-link: recovery was completed in PR #4003 (t1426), which superseded this closed-unmerged PR after task-ID collision resolution and review-finding fixes.
Pulse dispatching salvage worker for this closed-unmerged high-risk PR.
- Model: default headless rotation (sonnet/codex via headless-runtime-helper)
- Scope: Assess whether PR can be safely recovered (reopen/cherry-pick) and execute the best recovery path
- Direction: Verify against current main and avoid duplicate/superseded recovery
Starting salvage recovery dispatch for closed-unmerged high-risk PR.
- Model: default headless rotation (anthropic/claude-sonnet-4-6 or openai/gpt-5.3-codex)
- Branch: recreate from diff (source branch deleted)
- Scope: recover valuable upstream-watch code from PR #3994 and reopen as a fresh implementation PR
- Attempt: 1 of 1
- Direction: recreate only validated, non-superseded changes and link resulting PR back to #3994
Summary
- `upstream-watch-helper.sh` to maintain a watchlist of external repos we've borrowed ideas/code from
- Setting: `AIDEVOPS_UPSTREAM_WATCH`

Why
We pull ideas and code patterns from early-release repos (portless, plankton, fractals, etc.) but had no mechanism to track their future releases. Skill imports track code we pulled in; contribution-watch tracks repos we filed issues on. This fills the gap: passive monitoring of repos we're watching for improvements relevant to our implementation.
What's included
- `upstream-watch-helper.sh`: `add`, `remove`, `check`, `ack`, `status`
- `configs/upstream-watch.json`
- `auto-update-helper.sh`: `check_upstream_watch()` in daily cycle
- Docs: `AGENTS.md`, `reference/services.md`, `reference/settings.md`

Workflow
Testing
- `vercel-labs/portless` (v0.5.2)

Closes #TBD
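For illustration, a watchlist entry consistent with the `add` command's `slug` and `relevance` arguments might look like the following; the exact field names are an assumption inferred from the helper's commands, not the committed schema.

```json
{
  "$comment": "Upstream repos to watch for releases and significant changes. Managed by upstream-watch-helper.sh (t1424).",
  "repos": [
    {
      "slug": "vercel-labs/portless",
      "relevance": "local dev hosting comparison"
    }
  ]
}
```

Per-repo runtime fields (`last_release_seen`, `last_commit_seen`, `last_checked`, `updates_pending`) live in the gitignored state file, not in this committed template.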
Summary by CodeRabbit
New Features
Documentation