
t1264: Daily repo sync — auto-pull latest for git repos in configured parent directories#1989

Merged
marcusquinn merged 1 commit into main from feature/t1264 on Feb 20, 2026
Conversation

marcusquinn (Owner) commented Feb 20, 2026

Summary

  • New repo-sync-helper.sh script (follows auto-update-helper.sh pattern) that runs daily via launchd/cron
  • Scans configured parent directories (default: ~/Git/) for git repos and runs git pull --ff-only on repos where the working tree is clean and on the default branch (main/master)
  • aidevops repo-sync [enable|disable|status|check|config|logs] CLI subcommand
  • setup.sh offers to enable during install (like auto-update and supervisor pulse)
  • Onboarding wizard asks user to specify parent directories

Changes

.agents/scripts/repo-sync-helper.sh (new)

  • enable — installs launchd LaunchAgent (macOS) or cron entry (Linux, daily at 3am)
  • disable — removes scheduler
  • status — shows scheduler state, configured dirs, last sync results
  • check — one-shot sync of all configured repos
  • config — shows configuration and how to edit repos.json
  • logs [--tail N] [--follow] — view sync logs

aidevops.sh

  • Added repo-sync | reposync subcommand dispatch
  • Added help text and examples
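The dispatch wiring above can be sketched as follows. This is an illustrative fragment, not the actual aidevops.sh code; only the repo-sync | reposync aliases and the helper script path are taken from the PR description, and the function name is hypothetical:

```shell
# Hypothetical dispatcher sketch: maps a CLI subcommand to its helper script.
# The real aidevops.sh dispatch (and its fallback path resolution) may differ.
dispatch_helper() {
	case "$1" in
		repo-sync | reposync)
			echo ".agents/scripts/repo-sync-helper.sh"
			;;
		*)
			echo "unknown" >&2
			return 1
			;;
	esac
}
```

A call like `aidevops repo-sync status` would then resolve to the helper script and forward the remaining arguments to it.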

setup.sh

  • Added repo-sync enable offer after supervisor pulse section

.agents/aidevops/onboarding.md

  • Added "Repo Sync Configuration" section with interactive flow to ask for parent directories

Safety

  • Only git pull --ff-only — never creates merge commits
  • Skips repos with dirty working trees
  • Skips repos not on their default branch
  • Skips repos with no remote configured
  • Logs failures without stopping (other repos still sync)
  • Worktrees ignored — only main checkouts matter
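The guard sequence implied by this safety list can be sketched as a single predicate. This is a minimal illustration of the checks, not the helper's actual code; the function name and the exact git invocations are assumptions:

```shell
# Illustrative guard: returns 0 only when a repo looks safe to fast-forward.
# Mirrors the safety list above; not the actual repo-sync-helper.sh logic.
is_safe_to_pull() {
	local repo="$1"
	# Main checkout only: in a worktree, .git is a file, not a directory
	[ -d "$repo/.git" ] || return 1
	# Must have at least one remote configured
	[ -n "$(git -C "$repo" remote)" ] || return 1
	# Must be on the default branch (main/master)
	local branch
	branch=$(git -C "$repo" symbolic-ref --short -q HEAD) || return 1
	case "$branch" in main | master) ;; *) return 1 ;; esac
	# Working tree and index must both be clean
	git -C "$repo" diff --quiet && git -C "$repo" diff --cached --quiet
}
```

A repo passing every guard would then get `git pull --ff-only`; any repo failing a guard is skipped and logged, and the loop moves on.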

Configuration

Add git_parent_dirs array to ~/.config/aidevops/repos.json:

{"git_parent_dirs": ["~/Git", "~/Projects"]}

Default: ~/Git
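One way to create this file safely is to write to a mktemp file and move it into place atomically. A sketch, not the helper's own config-writing code; the function name is hypothetical and the hardcoded directory list is just the example above:

```shell
# Illustrative: atomically write git_parent_dirs into a repos.json file.
# $1 = destination path, e.g. ~/.config/aidevops/repos.json
write_repo_sync_config() {
	local config="$1"
	mkdir -p "$(dirname "$config")"
	local tmp
	tmp=$(mktemp)
	printf '{"git_parent_dirs": ["~/Git", "~/Projects"]}\n' > "$tmp"
	mv "$tmp" "$config"   # mv within the same filesystem is atomic
}
```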

Ref #1984

Summary by CodeRabbit

Release Notes

  • New Features
    • Added automated daily repository synchronization across multiple Git repositories
    • New sync scheduler with enable/disable/status/check/logs commands
    • Configuration support for specifying Git repository parent directories
    • Safe syncing with checks for clean working trees and proper branch tracking
    • New onboarding guidance for initial sync setup

New repo-sync-helper.sh script (follows auto-update-helper.sh pattern) that
runs daily via launchd/cron. Scans configured parent directories (default:
~/Git/) for git repos and runs git pull --ff-only on repos where the working
tree is clean and on the default branch (main/master).

- repo-sync-helper.sh: enable/disable/status/check/config/logs commands
- aidevops.sh: adds 'repo-sync' subcommand and help text
- setup.sh: offers to enable repo-sync during install (like auto-update)
- onboarding.md: asks user to specify parent directories

Config: add 'git_parent_dirs' array to ~/.config/aidevops/repos.json
Safety: ff-only pulls only, skip dirty trees, skip non-default branches,
log failures without stopping. Worktrees ignored (main checkouts only).

Chose launchd on macOS / cron on Linux — matches auto-update-helper.sh pattern.
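On the Linux side, that choice amounts to a daily 3am crontab entry. A sketch of what the installed line might look like; the helper path and log location here are illustrative assumptions, not the script's real values:

```shell
# Illustrative: build the daily-3am cron line the helper might install on Linux.
cron_line() {
	printf '0 3 * * * %s check >> %s 2>&1\n' \
		"$HOME/.agents/scripts/repo-sync-helper.sh" \
		"$HOME/.aidevops/logs/repo-sync.log"
}
```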
coderabbitai bot (Contributor) commented Feb 20, 2026

Walkthrough

This PR introduces a comprehensive repository synchronization feature that enables automatic daily git repository updates across configured parent directories. It includes a new daemon-like helper script with scheduler integration, CLI wiring, onboarding prompts, and setup automation to orchestrate the feature.

Changes

Cohort / File(s) Summary
Documentation & Onboarding
.agents/aidevops/onboarding.md
Adds two instances of "Repo Sync Configuration" subsections guiding users to set git_parent_dirs and enable daily repo-sync scheduling, with fallback instructions for skipping configuration.
Repo Sync Helper Script
.agents/scripts/repo-sync-helper.sh
New 881-line Bash utility orchestrating daily Git repository synchronization. Implements platform-aware scheduling (launchd on macOS, cron on Linux), configuration discovery from repos.json, safety checks (clean worktree, default branch verification, configured remote), fs-based locking, JSON state tracking, and comprehensive logging.
CLI Integration & Setup
aidevops.sh, setup.sh
Wires repo-sync command to CLI dispatcher with fallback path resolution. Adds conditional scheduler enablement in setup.sh supporting both interactive (with user prompt) and non-interactive modes, integrating with existing supervisor-pulse flow.

Sequence Diagram

sequenceDiagram
    actor Scheduler as Scheduler<br/>(launchd/cron)
    participant Helper as repo-sync-helper.sh
    participant FS as Filesystem<br/>(Lock, Config)
    participant Git as Git Repos
    participant Log as State & Logs
    
    Scheduler->>Helper: Trigger daily sync
    Helper->>FS: Acquire filesystem lock
    FS-->>Helper: Lock acquired (or wait/fail)
    Helper->>FS: Read repos.json config
    FS-->>Helper: git_parent_dirs list
    Helper->>Helper: Discover subdirectories as repos
    
    loop For each repo
        Helper->>Git: Check if valid Git repo
        Git-->>Helper: Remote URL present?
        Helper->>Git: Determine default branch
        Git-->>Helper: main/master/trunk/develop
        Helper->>Git: Verify working tree clean
        Git-->>Helper: Status (clean/dirty)
        
        alt Conditions met
            Helper->>Git: git fetch & compare HEAD
            Git-->>Helper: Upstream changes available?
            alt Pull needed
                Helper->>Git: git pull --ff-only
                Git-->>Helper: Success/Fast-forward only
                Helper->>Log: Log PULLED
            else Up-to-date
                Helper->>Log: Log OK
            end
        else Conditions not met
            Helper->>Log: Log SKIP or FAIL
        end
    end
    
    Helper->>FS: Release lock
    Helper->>Log: Update JSON state (timestamps, counts)
    Log-->>Helper: State persisted
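The fs-based locking step in the walkthrough can be sketched with the classic mkdir trick, since mkdir is atomic. The lock path below is illustrative; the helper's actual lock file location and strategy may differ:

```shell
# Illustrative mkdir-based lock: only one process can create the
# lock directory at a time, so acquisition is atomic.
LOCK="${TMPDIR:-/tmp}/repo-sync-demo.lock"
acquire_lock() { mkdir "$LOCK" 2>/dev/null; }
release_lock() { rmdir "$LOCK" 2>/dev/null || true; }
```

A second invocation that fails to acquire the lock would simply exit, leaving the running sync undisturbed.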

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes


Poem

🔄 Repos now sync with morning's light,
While launchd hums through macOS night,
Cron keeps Linux repos up-to-date,
Git branches merge at a steady state,
Zero debt maintained—automation's might! ✨

🚥 Pre-merge checks | ✅ 3 passed

  • Description Check — ✅ Passed — Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed — The title directly summarizes the main change: adding daily automatic git repository synchronization across configured parent directories.
  • Docstring Coverage — ✅ Passed — No functions found in the changed files; docstring coverage check skipped.


gemini-code-assist bot commented:

Summary of Changes

Hello @marcusquinn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new automated daily Git repository synchronization feature. It aims to keep local Git repositories up-to-date by performing safe, fast-forward-only pulls, thereby reducing the need for manual updates and ensuring developers are always working with the latest code. The feature is integrated into the existing CLI and setup process, providing a seamless user experience for configuration and management.

Highlights

  • New Daily Repo Sync Feature: Introduced a new repo-sync-helper.sh script that automatically performs git pull --ff-only on local Git repositories. This ensures your local repos stay up-to-date without manual intervention.
  • Intelligent Sync Logic: The sync process is designed with safety in mind: it only pulls if the working tree is clean, the current branch is the default branch (main/master), and a remote is configured. It also gracefully handles failures and ignores Git worktrees.
  • Platform-Aware Scheduling: The helper script integrates with native system schedulers, using launchd on macOS and cron on Linux, to run daily. Users can enable, disable, check status, and view logs via the aidevops repo-sync CLI.
  • Enhanced User Onboarding and Configuration: The setup.sh script now offers to enable repo sync during installation, and the onboarding documentation (onboarding.md) guides users through configuring their Git parent directories, which are stored in ~/.config/aidevops/repos.json.
Changelog
  • .agents/aidevops/onboarding.md
    • Added a new 'Repo Sync Configuration' section.
    • Included instructions for users to specify Git parent directories during onboarding.
    • Provided examples for configuring repos.json and enabling the sync.
  • .agents/scripts/repo-sync-helper.sh
    • Added a new shell script for daily Git repository synchronization.
    • Implemented commands for enabling, disabling, checking status, performing one-shot syncs, configuring, and viewing logs.
    • Incorporated logic for platform-specific scheduling (launchd for macOS, cron for Linux).
    • Included robust safety checks to ensure pulls are fast-forward only, working trees are clean, and repos are on their default branch.
    • Added lock management to prevent concurrent syncs and state tracking for sync results.
  • aidevops.sh
    • Added repo-sync and reposync as new subcommands to the main aidevops CLI.
    • Updated the help text to include documentation and examples for the new repo-sync command.
    • Implemented dispatch logic to execute the repo-sync-helper.sh script when the repo-sync command is invoked.
  • setup.sh
    • Integrated an interactive prompt during setup to offer enabling the daily repo sync.
    • Added logic to automatically enable repo sync in non-interactive setups, if not explicitly disabled by an environment variable.

github-actions bot commented:

🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report

[INFO] Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 30 code smells

[INFO] Recent monitoring activity:
Fri Feb 20 00:30:57 UTC 2026: Code review monitoring started
Fri Feb 20 00:30:57 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 30

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 30
  • VULNERABILITIES: 0

Generated on: Fri Feb 20 00:31:00 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring

gemini-code-assist bot left a comment

Code Review

This pull request introduces a new repo-sync-helper.sh script to automatically keep local git repositories updated. The changes include the helper script itself, a new aidevops repo-sync command, and integration into the setup.sh and onboarding documentation. The implementation is robust, with good use of locking for safety and platform-aware schedulers (launchd/cron).

My review focuses on improving robustness and adherence to the repository's shell scripting style guide and established rules. I've identified a few areas for improvement:

  • A high-severity issue where the script relies on parsing log files to determine sync status, which is a fragile and racy pattern. I've suggested a more robust method using function output.
  • Several instances of 2>/dev/null that violate the style guide and can mask important errors, aligning with the rule against suppressing errors on file operations.
  • A recommendation to improve an example command in the documentation to use mktemp for safer temporary file handling.

Overall, this is a well-designed and useful feature. Addressing these points will make it even more reliable.

Comment on lines +466 to +479
if sync_repo "$repo_dir"; then
    # Determine if it was pulled or skipped based on log
    local last_log
    last_log=$(tail -1 "$LOG_FILE" 2>/dev/null || true)
    if [[ "$last_log" == *"PULLED"* ]]; then
        synced=$((synced + 1))
    elif [[ "$last_log" == *"OK"* ]]; then
        : # already up to date — not counted as synced or skipped
    else
        skipped=$((skipped + 1))
    fi
else
    failed=$((failed + 1))
fi


Severity: high

The logic to determine if a repository was "pulled" or "skipped" relies on parsing the last line of the log file using tail -1 "$LOG_FILE". This is a fragile and potentially racy approach. If any other process writes to the log file, or if logging from different sync_repo calls gets interleaved, the status could be misidentified, leading to incorrect sync statistics.

A more robust approach is for sync_repo to communicate its result directly to the caller via stdout.

Please modify sync_repo to echo a status string before returning. For example:

  • log_info "PULLED ..."; echo "PULLED"; return 0
  • log_info "SKIP ..."; echo "SKIPPED"; return 0
  • log_info "OK ..."; echo "OK"; return 0

Then, you can update this loop to capture and check the status directly:

Suggested change

Remove:

    if sync_repo "$repo_dir"; then
        # Determine if it was pulled or skipped based on log
        local last_log
        last_log=$(tail -1 "$LOG_FILE" 2>/dev/null || true)
        if [[ "$last_log" == *"PULLED"* ]]; then
            synced=$((synced + 1))
        elif [[ "$last_log" == *"OK"* ]]; then
            : # already up to date — not counted as synced or skipped
        else
            skipped=$((skipped + 1))
        fi
    else
        failed=$((failed + 1))
    fi

Replace with:

    local status
    status=$(sync_repo "$repo_dir")
    local rc=$?
    if [[ $rc -eq 0 ]]; then
        case "$status" in
            "PULLED")
                synced=$((synced + 1))
                ;;
            "SKIPPED")
                skipped=$((skipped + 1))
                ;;
            "OK")
                : # Already up to date, not counted.
                ;;
        esac
    else
        failed=$((failed + 1))
    fi

Comment on lines +1101 to +1104
jq --argjson dirs '["~/Git", "~/Projects"]' \
'. + {git_parent_dirs: $dirs}' \
~/.config/aidevops/repos.json > /tmp/repos.json && \
mv /tmp/repos.json ~/.config/aidevops/repos.json


Severity: medium

The example jq command for updating repos.json uses a hardcoded temporary file path (/tmp/repos.json). This is not a safe pattern as it can lead to race conditions or conflicts if multiple processes attempt this operation simultaneously. The scripts in this repository correctly use mktemp for creating temporary files, and the documentation should reflect this best practice for consistency and safety.

Suggested change

Remove:

    jq --argjson dirs '["~/Git", "~/Projects"]' \
        '. + {git_parent_dirs: $dirs}' \
        ~/.config/aidevops/repos.json > /tmp/repos.json && \
    mv /tmp/repos.json ~/.config/aidevops/repos.json

Replace with:

    tmp_json=$(mktemp)
    jq --argjson dirs '["~/Git", "~/Projects"]' \
        '. + {git_parent_dirs: $dirs}' \
        ~/.config/aidevops/repos.json > "$tmp_json" && \
    mv "$tmp_json" ~/.config/aidevops/repos.json

Comment on lines +271 to +272
git -C "$repo_path" diff --quiet 2>/dev/null &&
git -C "$repo_path" diff --cached --quiet 2>/dev/null


Severity: medium

The git diff commands suppress stderr using 2>/dev/null. This violates the repository style guide (line 50), which prohibits blanket error suppression. If git fails for reasons other than a dirty working tree (e.g., not a git repository, permissions issue), the error will be silenced, which can make debugging difficult. It's better to redirect stderr to the log file to capture any unexpected errors.

Suggested change

Remove:

    git -C "$repo_path" diff --quiet 2>/dev/null &&
        git -C "$repo_path" diff --cached --quiet 2>/dev/null

Replace with:

    git -C "$repo_path" diff --quiet 2>>"$LOG_FILE" &&
        git -C "$repo_path" diff --cached --quiet 2>>"$LOG_FILE"
References
  1. Blanket suppression of stderr with 2>/dev/null is not allowed. It is only acceptable when redirecting to log files. (link)
  2. Avoid using 2>/dev/null to suppress errors on file operations if the file's existence has already been verified by a preceding check (e.g., [[ -f "$file" ]] or an early return). This practice is redundant for 'file not found' errors and can mask other important issues like permissions problems.

    else
        failed=$((failed + 1))
    fi
done < <(find "$parent_dir" -maxdepth 1 -mindepth 1 -type d -print0 2>/dev/null)


Severity: medium

The find command uses 2>/dev/null to suppress errors. This violates the repository style guide (line 50), which disallows blanket error suppression. This can hide important errors, such as permission denied when scanning a directory. Errors should be allowed to go to stderr, where they will be captured by the scheduler's logging mechanism.

Suggested change

Remove:

    done < <(find "$parent_dir" -maxdepth 1 -mindepth 1 -type d -print0 2>/dev/null)

Replace with:

    done < <(find "$parent_dir" -maxdepth 1 -mindepth 1 -type d -print0)
References
  1. Blanket suppression of stderr with 2>/dev/null is not allowed. It is only acceptable when redirecting to log files. (link)
  2. Avoid using 2>/dev/null to suppress errors on file operations if the file's existence has already been verified by a preceding check (e.g., [[ -f "$file" ]] or an early return). This practice is redundant for 'file not found' errors and can mask other important issues like permissions problems.

dir="${dir/#\~/$HOME}"
if [[ -d "$dir" ]]; then
    local count
    count=$(find "$dir" -maxdepth 1 -mindepth 1 -type d 2>/dev/null | wc -l | tr -d ' ')


Severity: medium

The find command uses 2>/dev/null to suppress errors. This violates the repository style guide (line 50), which disallows blanket error suppression. This can hide important errors, such as permission denied when scanning a directory. Since this command is for status reporting, showing an error to the user would be more informative than silently failing to count the directories.

Suggested change

Remove:

    count=$(find "$dir" -maxdepth 1 -mindepth 1 -type d 2>/dev/null | wc -l | tr -d ' ')

Replace with:

    count=$(find "$dir" -maxdepth 1 -mindepth 1 -type d | wc -l | tr -d ' ')
References
  1. Blanket suppression of stderr with 2>/dev/null is not allowed. It is only acceptable when redirecting to log files. (link)
  2. Avoid using 2>/dev/null to suppress errors on file operations if the file's existence has already been verified by a preceding check (e.g., [[ -f "$file" ]] or an early return). This practice is redundant for 'file not found' errors and can mask other important issues like permissions problems.

coderabbitai bot (Contributor) previously requested changes Feb 20, 2026

Actionable comments posted: 4

🧹 Nitpick comments (5)
.agents/scripts/repo-sync-helper.sh (5)

218-232: get_parent_dirs doesn't expand ~ in paths read from JSON.

The jq output from repos.json will contain literal ~ characters (e.g., ~/Git). The tilde expansion happens later in cmd_check (line 440) and cmd_status (line 691), but cmd_config (line 743) displays them without expansion. More importantly, if another consumer calls get_parent_dirs and forgets to expand ~, paths will silently fail. Consider centralizing the expansion inside get_parent_dirs.

Expand ~ inside get_parent_dirs
 get_parent_dirs() {
 	if [[ -f "$CONFIG_FILE" ]] && command -v jq &>/dev/null; then
 		local dirs
 		dirs=$(jq -r '.git_parent_dirs[]? // empty' "$CONFIG_FILE" 2>/dev/null || true)
 		if [[ -n "$dirs" ]]; then
-			echo "$dirs"
+			while IFS= read -r d; do
+				echo "${d/#\~/$HOME}"
+			done <<< "$dirs"
 			return 0
 		fi
 	fi
 	# Fall back to defaults
 	for dir in "${DEFAULT_PARENT_DIRS[@]}"; do
 		echo "$dir"
 	done
 	return 0
 }

Then remove the duplicate expansion in cmd_check (line 440) and cmd_status (line 691).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/scripts/repo-sync-helper.sh around lines 218 - 232, get_parent_dirs
currently returns paths from jq with literal '~' which isn't expanded;
centralize tilde expansion in get_parent_dirs by detecting entries that start
with '~/' and replacing the leading '~' with the value of $HOME (or using shell
expansion) before echoing each dir so callers always receive absolute paths;
then remove the redundant tilde-expansion logic from cmd_check and cmd_status
and keep cmd_config showing the expanded paths (update references to
get_parent_dirs in those functions accordingly).

45-51: init_log_file result is unused — LOG_FILE is hardcoded separately.

init_log_file (line 45) sets AIDEVOPS_LOG_FILE, but all logging in this script uses the separately declared LOG_FILE (line 51). The init_log_file call is effectively dead. Either remove it or use AIDEVOPS_LOG_FILE in your log() function.

Option A: Remove the unused call
-init_log_file
-
 # Configuration
 readonly CONFIG_FILE="$HOME/.config/aidevops/repos.json"
Option B: Use init_log_file output instead of hardcoding
 init_log_file
 
 # Configuration
 readonly CONFIG_FILE="$HOME/.config/aidevops/repos.json"
 readonly LOCK_DIR="$HOME/.aidevops/locks"
 readonly LOCK_FILE="$LOCK_DIR/repo-sync.lock"
-readonly LOG_FILE="$HOME/.aidevops/logs/repo-sync.log"
+readonly LOG_FILE="$AIDEVOPS_LOG_FILE"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/scripts/repo-sync-helper.sh around lines 45 - 51, The call to
init_log_file is unused because the script hardcodes LOG_FILE; either remove the
dead init_log_file call or wire the init to the rest of the script: ensure
init_log_file sets AIDEVOPS_LOG_FILE and then replace the hardcoded readonly
LOG_FILE with a reference to AIDEVOPS_LOG_FILE (and update the log() function to
use AIDEVOPS_LOG_FILE instead of LOG_FILE) so logging consistently uses the
initialized path; search for init_log_file, AIDEVOPS_LOG_FILE, LOG_FILE, and
log() to make the change.

414-428: EXIT trap in cmd_check overwrites any previously set EXIT trap.

Line 428 sets trap 'release_lock' EXIT, which will replace any existing EXIT trap. If the script later calls update_state (line 484) which sets a RETURN trap (line 378, safe), that's fine — but if cmd_check is invoked as part of a larger flow that already has an EXIT trap, the lock-release trap silently replaces it.

Consider stacking the trap or using a cleanup function pattern consistent with the rest of the codebase.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/scripts/repo-sync-helper.sh around lines 414 - 428, The EXIT trap
set in cmd_check currently overwrites any existing EXIT trap; change cmd_check
to preserve and chain existing EXIT handlers instead of replacing them by
capturing the current EXIT trap (via trap -p EXIT or similar) and installing a
new trap that calls release_lock and then invokes the previous handler, or use
the project’s established cleanup helper (e.g., an add_exit_trap function) if
available; update references to acquire_lock/release_lock and ensure
release_lock is only called once when chaining so other parts of the script
(like update_state and its RETURN trap) are not overridden.

459-464: Worktree skip on line 464 is unreachable dead code.

Line 461 checks [[ -d "$repo_dir/.git" ]] || continue — if .git is a file (as in a worktree), the -d test is false and continue fires, so execution never reaches line 464's -f check. The comment on line 463 is accurate about worktree behavior, but the guard is redundant.

Remove the unreachable check
 			# Skip if not a git repo
 			[[ -d "$repo_dir/.git" ]] || continue
 
-			# Skip git worktrees (they have .git as a file, not a directory)
-			[[ -f "$repo_dir/.git" ]] && continue
-
 			if sync_repo "$repo_dir"; then
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/scripts/repo-sync-helper.sh around lines 459 - 464, The second guard
that checks [[ -f "$repo_dir/.git" ]] is dead code because the earlier [[ -d
"$repo_dir/.git" ]] || continue already continues when .git is a file; update
the logic in the while loop that iterates over repo_dir so worktrees are skipped
correctly and without unreachable checks: either remove the redundant [[ -f
"$repo_dir/.git" ]] check, or invert to check for a file first (if [[ -f
"$repo_dir/.git" ]]; then continue) followed by the directory check (elif [[ -d
"$repo_dir/.git" ]]; then …) so the intent to skip worktrees is preserved and no
branch is unreachable.

535-535: Replace deprecated launchctl load -w with launchctl bootstrap using proper domain.

Apple labels launchctl load/unload as legacy subcommands. Use launchctl bootstrap gui/$(id -u) "$LAUNCHD_PLIST" instead (for user LaunchAgents) or sudo launchctl bootstrap system (for system daemons). The bootstrap command implicitly persists the job, so no -w equivalent is needed.

Recommended replacement pattern
-		if launchctl load -w "$LAUNCHD_PLIST" 2>/dev/null; then
+		if launchctl bootstrap gui/$(id -u) "$LAUNCHD_PLIST" 2>/dev/null; then
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/scripts/repo-sync-helper.sh at line 535, The script currently uses
the deprecated launchctl subcommand in the conditional that runs launchctl load
-w "$LAUNCHD_PLIST"; replace that invocation with the modern bootstrap form: for
user LaunchAgents call launchctl bootstrap gui/$(id -u) "$LAUNCHD_PLIST" (no -w)
and for system daemons use sudo launchctl bootstrap system "$LAUNCHD_PLIST" as
appropriate; update the conditional and any corresponding unload/failure
branches to use bootstrap/bootout semantics and preserve the LAUNCHD_PLIST and
user-id resolution (id -u) references so the script still targets the same job
domain.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.agents/aidevops/onboarding.md:
- Around line 1099-1104: The example uses a predictable /tmp/repos.json which is
insecure; update the snippet that writes to ~/.config/aidevops/repos.json to use
a safe temporary file (mktemp) or the redirect-to-same-file pattern with sponge
(or write into the target directory) instead of /tmp/repos.json; specifically
modify the jq + mv pipeline in the code block so it creates a unique temp file
via mktemp (or pipes to sponge) and then atomically replaces
~/.config/aidevops/repos.json, ensuring the commands that reference
~/.config/aidevops/repos.json and /tmp/repos.json are replaced with the safer
mktemp or sponge variants.

In @.agents/scripts/repo-sync-helper.sh:
- Around line 240-261: The get_default_branch function's fallback loop currently
checks main, master, trunk, and develop which can cause unintended auto-pulls
from non-standard branches; update the fallback in get_default_branch to only
check "main" and "master" (leave the remote HEAD logic intact) so only those two
branch names are considered when remote HEAD isn't available.
- Around line 459-476: The counting is brittle because it parses LOG_FILE;
change sync_repo to explicitly communicate its outcome (e.g., echo a single
token like "PULLED", "OK", "SKIP", or "ERROR" to stdout or use distinct exit
codes) and update the caller in the while loop to capture that output
(status=$(sync_repo "$repo_dir")) and branch on status to increment synced or
skipped instead of tailing LOG_FILE; keep logging to LOG_FILE as-is inside
sync_repo but do not rely on it for control flow, and ensure any redirected logs
from sync_repo are preserved so its stdout token remains reliable.
- Around line 127-158: The plist generator is inserting raw variable values
(${script_path}, ${env_path}, ${LAUNCHD_LABEL}, ${LOG_FILE}) into XML which can
break the plist if they contain &, <, or >; update the plist emission in the
function that prints this heredoc in .agents/scripts/repo-sync-helper.sh to
XML-escape those variables before interpolation (e.g. transform & to &amp;, < to
&lt;, > to &gt/) — you can do this by escaping each variable with parameter
expansion (e.g. replace occurrences of the variables with their escaped
equivalents) or by piping the heredoc through sed to perform s/&/\&amp;/g;
s/</\&lt;/g; s/>/\&gt;/g; and apply the same fix to the corresponding plist
generator in auto-update-helper.sh (refer to the heredoc that outputs the plist
and the variable names ${script_path}, ${env_path}, ${LAUNCHD_LABEL},
${LOG_FILE} to locate the changes).

launchctl bootstrap system "$LAUNCHD_PLIST" as appropriate; update the
conditional and any corresponding unload/failure branches to use
bootstrap/bootout semantics and preserve the LAUNCHD_PLIST and user-id
resolution (id -u) references so the script still targets the same job domain.
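For the tilde-expansion nitpick above, a minimal sketch of centralizing the expansion (the `expand_tilde` helper name is hypothetical):

```bash
# Expand a leading '~' to $HOME so callers always receive absolute paths.
expand_tilde() {
	local dir="$1"
	case "$dir" in
		"~") printf '%s\n' "$HOME" ;;
		"~/"*) printf '%s\n' "${HOME}/${dir#\~/}" ;;
		*) printf '%s\n' "$dir" ;;
	esac
}

# get_parent_dirs would pipe each jq result through this before echoing.
expand_tilde "~/Git"
```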
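For the EXIT-trap chaining nitpick, one sketch of a chaining helper (the `add_exit_trap` name is hypothetical; this simple version breaks if an existing handler itself contains single quotes):

```bash
# Install a new EXIT handler that runs before any previously installed one.
add_exit_trap() {
	local new_cmd="$1"
	local prev
	prev=$(trap -p EXIT | sed -n "s/^trap -- '\(.*\)' EXIT$/\1/p")
	if [[ -n "$prev" ]]; then
		trap "${new_cmd}; ${prev}" EXIT
	else
		trap "${new_cmd}" EXIT
	fi
}

# cmd_check would then register release_lock without clobbering other handlers:
# add_exit_trap release_lock
```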
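The worktree distinction the redundant check is trying to draw can be demonstrated against a throwaway fixture (assumes `git` ≥ 2.5 is available):

```bash
# In a linked worktree, .git is a *file* pointing at the shared gitdir;
# in a primary checkout, .git is a directory.
tmp=$(mktemp -d)
git init -q "$tmp/main"
git -C "$tmp/main" -c user.email=a@b -c user.name=a \
	commit -q --allow-empty -m init
git -C "$tmp/main" worktree add "$tmp/wt" >/dev/null 2>&1

[[ -f "$tmp/wt/.git" ]] && echo "wt: skip (worktree)"
[[ -d "$tmp/main/.git" ]] && echo "main: sync (primary checkout)"
```

So a single file-first check is enough: `-f` catches worktrees, `-d` confirms a real repo.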
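For the launchctl migration, the modern equivalents look roughly like this (the plist path shown is illustrative — the script's own `LAUNCHD_PLIST` value would be used):

```bash
# Modern launchd management for a per-user LaunchAgent.
LAUNCHD_PLIST="${HOME}/Library/LaunchAgents/example.repo-sync.plist"

enable_agent() {
	# replaces: launchctl load -w "$LAUNCHD_PLIST"
	launchctl bootstrap "gui/$(id -u)" "$LAUNCHD_PLIST"
}

disable_agent() {
	# replaces: launchctl unload "$LAUNCHD_PLIST"
	launchctl bootout "gui/$(id -u)" "$LAUNCHD_PLIST"
}
```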

Comment on lines +1099 to +1104
```bash
# Update repos.json with git_parent_dirs
jq --argjson dirs '["~/Git", "~/Projects"]' \
'. + {git_parent_dirs: $dirs}' \
~/.config/aidevops/repos.json > /tmp/repos.json && \
mv /tmp/repos.json ~/.config/aidevops/repos.json
```

⚠️ Potential issue | 🟡 Minor

Example uses /tmp/repos.json — insecure temp file in user-facing documentation.

Users will copy-paste this. Using a predictable path in /tmp/ is a classic symlink-attack vector (CWE-377). Use a user-scoped temp path or the redirect-to-same-file pattern with sponge/mktemp.

Safer example
```diff
 # Update repos.json with git_parent_dirs
 jq --argjson dirs '["~/Git", "~/Projects"]' \
   '. + {git_parent_dirs: $dirs}' \
-  ~/.config/aidevops/repos.json > /tmp/repos.json && \
-  mv /tmp/repos.json ~/.config/aidevops/repos.json
+  ~/.config/aidevops/repos.json > ~/.config/aidevops/repos.json.tmp && \
+  mv ~/.config/aidevops/repos.json.tmp ~/.config/aidevops/repos.json
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/aidevops/onboarding.md around lines 1099 - 1104, The example uses a
predictable /tmp/repos.json which is insecure; update the snippet that writes to
~/.config/aidevops/repos.json to use a safe temporary file (mktemp) or the
redirect-to-same-file pattern with sponge (or write into the target directory)
instead of /tmp/repos.json; specifically modify the jq + mv pipeline in the code
block so it creates a unique temp file via mktemp (or pipes to sponge) and then
atomically replaces ~/.config/aidevops/repos.json, ensuring the commands that
reference ~/.config/aidevops/repos.json and /tmp/repos.json are replaced with
the safer mktemp or sponge variants.
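A runnable version of the safer pattern, exercised here against a throwaway config so nothing real is touched (assumes `jq` is installed; the demo skips itself otherwise):

```bash
# Atomic update: unique temp file in the *same directory* as the target,
# never a predictable /tmp path.
command -v jq >/dev/null 2>&1 || exit 0  # skip demo when jq is absent

workdir=$(mktemp -d)
config="$workdir/repos.json"
echo '{}' > "$config"

tmp=$(mktemp "${config}.XXXXXX")
jq --argjson dirs '["~/Git", "~/Projects"]' \
	'. + {git_parent_dirs: $dirs}' "$config" > "$tmp" \
	&& mv "$tmp" "$config"

jq -r '.git_parent_dirs[0]' "$config"   # -> ~/Git
```

Because the temp file lives beside the target, `mv` stays on one filesystem and the replacement is atomic.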

Comment on lines +127 to +158
```bash
	cat <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>Label</key>
	<string>${LAUNCHD_LABEL}</string>
	<key>ProgramArguments</key>
	<array>
		<string>${script_path}</string>
		<string>check</string>
	</array>
	<key>StartInterval</key>
	<integer>${interval_seconds}</integer>
	<key>StandardOutPath</key>
	<string>${LOG_FILE}</string>
	<key>StandardErrorPath</key>
	<string>${LOG_FILE}</string>
	<key>EnvironmentVariables</key>
	<dict>
		<key>PATH</key>
		<string>${env_path}</string>
	</dict>
	<key>RunAtLoad</key>
	<false/>
	<key>KeepAlive</key>
	<false/>
</dict>
</plist>
EOF
	return 0
}
```

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

```bash
# First, let's view the actual file context around the snippet
cat -n .agents/scripts/repo-sync-helper.sh | head -170 | tail -50
```

🏁 Script executed:

```bash
# Search for XML escaping functions and patterns in shell scripts
rg -n 'xml_escape|sed.*&amp;|gsub.*&.*&amp;' --type=sh -i
```

🏁 Script executed:

```bash
# Check if auto-update-helper.sh exists and examine its plist generation
fd 'auto-update-helper.sh' --exec cat {} \;
```

Repository: marcusquinn/aidevops


XML variables in plist should be escaped to prevent malformed output.

${script_path}, ${env_path} (which is $PATH), ${LAUNCHD_LABEL}, and ${LOG_FILE} are interpolated directly into the plist XML. While the primary variables are application-controlled, ${env_path} reflects the system PATH, which could theoretically contain special XML characters (&, <, >). Standard practice for plist generation is to XML-escape these values.

The same issue exists in auto-update-helper.sh (_generate_auto_update_plist).

Recommended fix: Use sed to escape special characters before interpolation, e.g., ${script_path//&/\&amp;} in the heredoc, or pipe the heredoc through sed -e 's/&/\&amp;/g; s/</\&lt;/g; s/>/\&gt;/g'.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/scripts/repo-sync-helper.sh around lines 127 - 158, The plist
generator is inserting raw variable values (${script_path}, ${env_path},
${LAUNCHD_LABEL}, ${LOG_FILE}) into XML which can break the plist if they
contain &, <, or >; update the plist emission in the function that prints this
heredoc in .agents/scripts/repo-sync-helper.sh to XML-escape those variables
before interpolation (e.g. transform & to &amp;, < to &lt;, > to &gt;) — you can
do this by escaping each variable with parameter expansion (e.g. replace
occurrences of the variables with their escaped equivalents) or by piping the
heredoc through sed to perform s/&/\&amp;/g; s/</\&lt;/g; s/>/\&gt;/g; and apply
the same fix to the corresponding plist generator in auto-update-helper.sh
(refer to the heredoc that outputs the plist and the variable names
${script_path}, ${env_path}, ${LAUNCHD_LABEL}, ${LOG_FILE} to locate the
changes).

Comment on lines +240 to +261
```bash
get_default_branch() {
	local repo_path="$1"

	# Try to get from remote HEAD reference
	local remote_head
	remote_head=$(git -C "$repo_path" symbolic-ref refs/remotes/origin/HEAD 2>/dev/null || true)
	if [[ -n "$remote_head" ]]; then
		echo "${remote_head##*/}"
		return 0
	fi

	# Fall back to checking common default branch names
	for branch in main master trunk develop; do
		if git -C "$repo_path" show-ref --verify --quiet "refs/heads/$branch" 2>/dev/null; then
			echo "$branch"
			return 0
		fi
	done

	echo ""
	return 0
}
```

⚠️ Potential issue | 🟡 Minor

Default branch detection fallback includes develop and trunk — potential unintended auto-pulls.

The fallback loop (line 252) checks main, master, trunk, and develop. Auto-pulling develop or trunk may surprise users, since these aren't universally considered stable branches. The PR description specifically mentions "main or master" as the intended targets.

Restrict fallback to main/master only
```diff
 	# Fall back to checking common default branch names
-	for branch in main master trunk develop; do
+	for branch in main master; do
 		if git -C "$repo_path" show-ref --verify --quiet "refs/heads/$branch" 2>/dev/null; then
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```bash
get_default_branch() {
	local repo_path="$1"

	# Try to get from remote HEAD reference
	local remote_head
	remote_head=$(git -C "$repo_path" symbolic-ref refs/remotes/origin/HEAD 2>/dev/null || true)
	if [[ -n "$remote_head" ]]; then
		echo "${remote_head##*/}"
		return 0
	fi

	# Fall back to checking common default branch names
	for branch in main master; do
		if git -C "$repo_path" show-ref --verify --quiet "refs/heads/$branch" 2>/dev/null; then
			echo "$branch"
			return 0
		fi
	done

	echo ""
	return 0
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/scripts/repo-sync-helper.sh around lines 240 - 261, The
get_default_branch function's fallback loop currently checks main, master,
trunk, and develop which can cause unintended auto-pulls from non-standard
branches; update the fallback in get_default_branch to only check "main" and
"master" (leave the remote HEAD logic intact) so only those two branch names are
considered when remote HEAD isn't available.
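The `symbolic-ref` path only succeeds when `refs/remotes/origin/HEAD` exists locally — `git clone` records it, and `git remote set-head origin --auto` can repopulate it later. A small fixture demonstrating the detection (assumes git ≥ 2.28 for `init -b`):

```bash
# Build a local upstream + clone, then read the recorded default branch.
tmp=$(mktemp -d)
git init -q -b main "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=a@b -c user.name=a \
	commit -q --allow-empty -m init
git clone -q "$tmp/upstream" "$tmp/clone"

head_ref=$(git -C "$tmp/clone" symbolic-ref refs/remotes/origin/HEAD)
echo "${head_ref##*/}"   # -> main
```

When that ref is set, the narrowed main/master fallback is rarely reached at all.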

Comment on lines +459 to +476
```bash
	while IFS= read -r -d '' repo_dir; do
		# Skip if not a git repo
		[[ -d "$repo_dir/.git" ]] || continue

		# Skip git worktrees (they have .git as a file, not a directory)
		[[ -f "$repo_dir/.git" ]] && continue

		if sync_repo "$repo_dir"; then
			# Determine if it was pulled or skipped based on log
			local last_log
			last_log=$(tail -1 "$LOG_FILE" 2>/dev/null || true)
			if [[ "$last_log" == *"PULLED"* ]]; then
				synced=$((synced + 1))
			elif [[ "$last_log" == *"OK"* ]]; then
				: # already up to date — not counted as synced or skipped
			else
				skipped=$((skipped + 1))
			fi
```

⚠️ Potential issue | 🟡 Minor

Fragile counting logic — parsing last log line is racy and brittle.

The sync result counting reads the last line of LOG_FILE to determine whether a repo was PULLED, OK, or SKIP. This is unreliable because:

  1. Other background processes or concurrent writes could append lines between sync_repo returning and tail -1 reading.
  2. If sync_repo returns 0 without logging (e.g., the [[ ! -d "$repo_path/.git" ]] early return at line 290), the last log line belongs to a previous repo.

A more robust approach: have sync_repo communicate its outcome via a return code or stdout convention rather than parsing the shared log file.

Proposed fix: use stdout from sync_repo to report status
```diff
 sync_repo() {
 	local repo_path="$1"
 	local repo_name
 	repo_name=$(basename "$repo_path")

 	# Must be a git repo with a remote
 	if [[ ! -d "$repo_path/.git" ]]; then
+		echo "NOTREPO"
 		return 0
 	fi

 	# Must have at least one remote configured
 	local remote
 	remote=$(git -C "$repo_path" remote 2>/dev/null | head -1 || true)
 	if [[ -z "$remote" ]]; then
 		log_info "SKIP $repo_name: no remote configured"
+		echo "SKIP"
 		return 0
 	fi
 	# ... (similar for other SKIP paths)

 	# After successful pull:
 		log_info "PULLED $repo_name: updated to $new_sha"
+		echo "PULLED"
 		return 0
 	# After already up to date:
 		log_info "OK $repo_name: already up to date"
+		echo "OK"
 		return 0
```

Then in cmd_check:

```diff
-			if sync_repo "$repo_dir"; then
-				# Determine if it was pulled or skipped based on log
-				local last_log
-				last_log=$(tail -1 "$LOG_FILE" 2>/dev/null || true)
-				if [[ "$last_log" == *"PULLED"* ]]; then
-					synced=$((synced + 1))
-				elif [[ "$last_log" == *"OK"* ]]; then
-					: # already up to date
-				else
-					skipped=$((skipped + 1))
-				fi
+			local result
+			result=$(sync_repo "$repo_dir") && rc=0 || rc=$?
+			if [[ $rc -eq 0 ]]; then
+				case "$result" in
+					PULLED) synced=$((synced + 1)) ;;
+					SKIP) skipped=$((skipped + 1)) ;;
+					*) ;; # OK / NOTREPO
+				esac
 			else
 				failed=$((failed + 1))
 			fi
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/scripts/repo-sync-helper.sh around lines 459 - 476, The counting is
brittle because it parses LOG_FILE; change sync_repo to explicitly communicate
its outcome (e.g., echo a single token like "PULLED", "OK", "SKIP", or "ERROR"
to stdout or use distinct exit codes) and update the caller in the while loop to
capture that output (status=$(sync_repo "$repo_dir")) and branch on status to
increment synced or skipped instead of tailing LOG_FILE; keep logging to
LOG_FILE as-is inside sync_repo but do not rely on it for control flow, and
ensure any redirected logs from sync_repo are preserved so its stdout token
remains reliable.
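The stdout-token convention the fix describes can be sketched in isolation (`sync_repo_stub` and this `log_info` are stand-ins, not the script's real functions):

```bash
# Outcome travels on stdout; human-readable logging stays in the log file.
LOG_FILE=$(mktemp)
log_info() { printf '%s\n' "$*" >> "$LOG_FILE"; }

sync_repo_stub() {
	log_info "PULLED $1: updated"
	echo "PULLED"   # the single token the caller captures
}

status=$(sync_repo_stub demo-repo)
case "$status" in
	PULLED) echo "counted as synced" ;;
	SKIP)   echo "counted as skipped" ;;
	*)      echo "not counted" ;;
esac
```

Because `log_info` writes to the file rather than stdout, the command substitution captures only the token, so counting no longer races against other writers of the shared log.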
