t1264: Daily repo sync — auto-pull latest for git repos in configured parent directories #1989

marcusquinn merged 1 commit into `main`
Conversation
New repo-sync-helper.sh script (follows the auto-update-helper.sh pattern) that runs daily via launchd/cron. It scans configured parent directories (default: ~/Git/) for git repos and runs `git pull --ff-only` on repos where the working tree is clean and the checkout is on the default branch (main/master).

- repo-sync-helper.sh: enable/disable/status/check/config/logs commands
- aidevops.sh: adds a 'repo-sync' subcommand and help text
- setup.sh: offers to enable repo-sync during install (like auto-update)
- onboarding.md: asks the user to specify parent directories

Config: adds a 'git_parent_dirs' array to ~/.config/aidevops/repos.json.

Safety: ff-only pulls only, skip dirty trees, skip non-default branches, log failures without stopping. Worktrees are ignored (main checkouts only).

Chose launchd on macOS / cron on Linux — matches the auto-update-helper.sh pattern.
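The per-repo safety gate described above can be sketched roughly as follows. This is a minimal illustration, not the actual repo-sync-helper.sh code; the function name `sync_one_repo` and the status tokens are hypothetical:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Sketch: only clean checkouts on main/master get a fast-forward pull;
# everything else is skipped and reported via a status token.
: "${LOG_FILE:=/dev/null}"

sync_one_repo() {
  local repo="$1"
  [[ -d "$repo/.git" ]] || { echo "SKIP not-a-repo"; return 0; }

  local branch
  branch=$(git -C "$repo" symbolic-ref --short HEAD 2>>"$LOG_FILE") ||
    { echo "SKIP detached"; return 0; }
  case "$branch" in
    main|master) ;;
    *) echo "SKIP branch=$branch"; return 0 ;;
  esac

  # Dirty working tree or staged changes: leave the repo alone.
  if ! git -C "$repo" diff --quiet || ! git -C "$repo" diff --cached --quiet; then
    echo "SKIP dirty"
    return 0
  fi

  # --ff-only guarantees local history is never rewritten or merged.
  if git -C "$repo" pull --ff-only --quiet 2>>"$LOG_FILE"; then
    echo "PULLED"
  else
    echo "FAIL"
    return 1
  fi
}
```

The ff-only flag is the key safety property: a pull either fast-forwards cleanly or fails, so the script can never create merge commits or rewrite local work.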
Walkthrough

This PR introduces a repository synchronization feature that enables automatic daily git repository updates across configured parent directories. It includes a new daemon-like helper script with scheduler integration, CLI wiring, onboarding prompts, and setup automation to orchestrate the feature.
Sequence Diagram

```mermaid
sequenceDiagram
    actor Scheduler as Scheduler<br/>(launchd/cron)
    participant Helper as repo-sync-helper.sh
    participant FS as Filesystem<br/>(Lock, Config)
    participant Git as Git Repos
    participant Log as State & Logs

    Scheduler->>Helper: Trigger daily sync
    Helper->>FS: Acquire filesystem lock
    FS-->>Helper: Lock acquired (or wait/fail)
    Helper->>FS: Read repos.json config
    FS-->>Helper: git_parent_dirs list
    Helper->>Helper: Discover subdirectories as repos

    loop For each repo
        Helper->>Git: Check if valid Git repo
        Git-->>Helper: Remote URL present?
        Helper->>Git: Determine default branch
        Git-->>Helper: main/master/trunk/develop
        Helper->>Git: Verify working tree clean
        Git-->>Helper: Status (clean/dirty)
        alt Conditions met
            Helper->>Git: git fetch & compare HEAD
            Git-->>Helper: Upstream changes available?
            alt Pull needed
                Helper->>Git: git pull --ff-only
                Git-->>Helper: Success/Fast-forward only
                Helper->>Log: Log PULLED
            else Up-to-date
                Helper->>Log: Log OK
            end
        else Conditions not met
            Helper->>Log: Log SKIP or FAIL
        end
    end

    Helper->>FS: Release lock
    Helper->>Log: Update JSON state (timestamps, counts)
    Log-->>Helper: State persisted
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed
Summary of Changes

This pull request introduces a new automated daily Git repository synchronization feature. It keeps local Git repositories up to date by performing safe, fast-forward-only pulls, reducing the need for manual updates and ensuring developers are always working with the latest code. The feature is integrated into the existing CLI and setup process, providing a seamless user experience for configuration and management.
🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report
[INFO] Latest Quality Status:
[INFO] Recent monitoring activity:

📈 Current Quality Metrics
Generated on: Fri Feb 20 00:31:00 UTC 2026
Generated by AI DevOps Framework Code Review Monitoring
Code Review
This pull request introduces a new repo-sync-helper.sh script to automatically keep local git repositories updated. The changes include the helper script itself, a new aidevops repo-sync command, and integration into the setup.sh and onboarding documentation. The implementation is robust, with good use of locking for safety and platform-aware schedulers (launchd/cron).
My review focuses on improving robustness and adherence to the repository's shell scripting style guide and established rules. I've identified a few areas for improvement:
- A high-severity issue where the script relies on parsing log files to determine sync status, which is a fragile and racy pattern. I've suggested a more robust method using function output.
- Several instances of `2>/dev/null` that violate the style guide and can mask important errors, contrary to the rule against suppressing errors on file operations.
- A recommendation to improve an example command in the documentation to use `mktemp` for safer temporary file handling.
Overall, this is a well-designed and useful feature. Addressing these points will make it even more reliable.
```bash
if sync_repo "$repo_dir"; then
    # Determine if it was pulled or skipped based on log
    local last_log
    last_log=$(tail -1 "$LOG_FILE" 2>/dev/null || true)
    if [[ "$last_log" == *"PULLED"* ]]; then
        synced=$((synced + 1))
    elif [[ "$last_log" == *"OK"* ]]; then
        : # already up to date — not counted as synced or skipped
    else
        skipped=$((skipped + 1))
    fi
else
    failed=$((failed + 1))
fi
```
The logic to determine if a repository was "pulled" or "skipped" relies on parsing the last line of the log file using tail -1 "$LOG_FILE". This is a fragile and potentially racy approach. If any other process writes to the log file, or if logging from different sync_repo calls gets interleaved, the status could be misidentified, leading to incorrect sync statistics.
A more robust approach is for sync_repo to communicate its result directly to the caller via stdout.
Please modify sync_repo to echo a status string before returning. For example:
- `log_info "PULLED ..."; echo "PULLED"; return 0`
- `log_info "SKIP ..."; echo "SKIPPED"; return 0`
- `log_info "OK ..."; echo "OK"; return 0`
Then, you can update this loop to capture and check the status directly:
Suggested change:

```bash
local status
status=$(sync_repo "$repo_dir")
local rc=$?
if [[ $rc -eq 0 ]]; then
    case "$status" in
        "PULLED")
            synced=$((synced + 1))
            ;;
        "SKIPPED")
            skipped=$((skipped + 1))
            ;;
        "OK")
            : # Already up to date, not counted.
            ;;
    esac
else
    failed=$((failed + 1))
fi
```
```bash
jq --argjson dirs '["~/Git", "~/Projects"]' \
  '. + {git_parent_dirs: $dirs}' \
  ~/.config/aidevops/repos.json > /tmp/repos.json && \
  mv /tmp/repos.json ~/.config/aidevops/repos.json
```
The example jq command for updating repos.json uses a hardcoded temporary file path (/tmp/repos.json). This is not a safe pattern as it can lead to race conditions or conflicts if multiple processes attempt this operation simultaneously. The scripts in this repository correctly use mktemp for creating temporary files, and the documentation should reflect this best practice for consistency and safety.
Suggested change:

```bash
tmp_json=$(mktemp)
jq --argjson dirs '["~/Git", "~/Projects"]' \
  '. + {git_parent_dirs: $dirs}' \
  ~/.config/aidevops/repos.json > "$tmp_json" && \
  mv "$tmp_json" ~/.config/aidevops/repos.json
```
```bash
git -C "$repo_path" diff --quiet 2>/dev/null &&
git -C "$repo_path" diff --cached --quiet 2>/dev/null
```
The git diff commands suppress stderr using 2>/dev/null. This violates the repository style guide (line 50), which prohibits blanket error suppression. If git fails for reasons other than a dirty working tree (e.g., not a git repository, permissions issue), the error will be silenced, which can make debugging difficult. It's better to redirect stderr to the log file to capture any unexpected errors.
Suggested change:

```bash
git -C "$repo_path" diff --quiet 2>>"$LOG_FILE" &&
git -C "$repo_path" diff --cached --quiet 2>>"$LOG_FILE"
```
References

- Blanket suppression of stderr with `2>/dev/null` is not allowed. It is only acceptable when redirecting to log files. (link)
- Avoid using `2>/dev/null` to suppress errors on file operations if the file's existence has already been verified by a preceding check (e.g., `[[ -f "$file" ]]` or an early return). This practice is redundant for 'file not found' errors and can mask other important issues like permissions problems.
```bash
    else
        failed=$((failed + 1))
    fi
done < <(find "$parent_dir" -maxdepth 1 -mindepth 1 -type d -print0 2>/dev/null)
```
The find command uses 2>/dev/null to suppress errors. This violates the repository style guide (line 50), which disallows blanket error suppression. This can hide important errors, such as permission denied when scanning a directory. Errors should be allowed to go to stderr, where they will be captured by the scheduler's logging mechanism.
Suggested change:

```bash
done < <(find "$parent_dir" -maxdepth 1 -mindepth 1 -type d -print0)
```
References

- Blanket suppression of stderr with `2>/dev/null` is not allowed. It is only acceptable when redirecting to log files. (link)
- Avoid using `2>/dev/null` to suppress errors on file operations if the file's existence has already been verified by a preceding check (e.g., `[[ -f "$file" ]]` or an early return). This practice is redundant for 'file not found' errors and can mask other important issues like permissions problems.
```bash
dir="${dir/#\~/$HOME}"
if [[ -d "$dir" ]]; then
    local count
    count=$(find "$dir" -maxdepth 1 -mindepth 1 -type d 2>/dev/null | wc -l | tr -d ' ')
```
The find command uses 2>/dev/null to suppress errors. This violates the repository style guide (line 50), which disallows blanket error suppression. This can hide important errors, such as permission denied when scanning a directory. Since this command is for status reporting, showing an error to the user would be more informative than silently failing to count the directories.
Suggested change:

```bash
count=$(find "$dir" -maxdepth 1 -mindepth 1 -type d | wc -l | tr -d ' ')
```
References

- Blanket suppression of stderr with `2>/dev/null` is not allowed. It is only acceptable when redirecting to log files. (link)
- Avoid using `2>/dev/null` to suppress errors on file operations if the file's existence has already been verified by a preceding check (e.g., `[[ -f "$file" ]]` or an early return). This practice is redundant for 'file not found' errors and can mask other important issues like permissions problems.
Actionable comments posted: 4
🧹 Nitpick comments (5)
.agents/scripts/repo-sync-helper.sh (5)
218-232: `get_parent_dirs` doesn't expand `~` in paths read from JSON.

The jq output from `repos.json` will contain literal `~` characters (e.g., `~/Git`). The tilde expansion happens later in `cmd_check` (line 440) and `cmd_status` (line 691), but `cmd_config` (line 743) displays them without expansion. More importantly, if another consumer calls `get_parent_dirs` and forgets to expand `~`, paths will silently fail. Consider centralizing the expansion inside `get_parent_dirs`.

Expand ~ inside get_parent_dirs
```diff
 get_parent_dirs() {
     if [[ -f "$CONFIG_FILE" ]] && command -v jq &>/dev/null; then
         local dirs
         dirs=$(jq -r '.git_parent_dirs[]? // empty' "$CONFIG_FILE" 2>/dev/null || true)
         if [[ -n "$dirs" ]]; then
-            echo "$dirs"
+            while IFS= read -r d; do
+                echo "${d/#\~/$HOME}"
+            done <<< "$dirs"
             return 0
         fi
     fi
     # Fall back to defaults
     for dir in "${DEFAULT_PARENT_DIRS[@]}"; do
         echo "$dir"
     done
     return 0
 }
```

Then remove the duplicate expansion in `cmd_check` (line 440) and `cmd_status` (line 691).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/repo-sync-helper.sh around lines 218 - 232, get_parent_dirs currently returns paths from jq with literal '~' which isn't expanded; centralize tilde expansion in get_parent_dirs by detecting entries that start with '~/' and replacing the leading '~' with the value of $HOME (or using shell expansion) before echoing each dir so callers always receive absolute paths; then remove the redundant tilde-expansion logic from cmd_check and cmd_status and keep cmd_config showing the expanded paths (update references to get_parent_dirs in those functions accordingly).
45-51: `init_log_file` result is unused — `LOG_FILE` is hardcoded separately.

`init_log_file` (line 45) sets `AIDEVOPS_LOG_FILE`, but all logging in this script uses the separately declared `LOG_FILE` (line 51). The `init_log_file` call is effectively dead. Either remove it or use `AIDEVOPS_LOG_FILE` in your `log()` function.

Option A: Remove the unused call
```diff
-init_log_file
-
 # Configuration
 readonly CONFIG_FILE="$HOME/.config/aidevops/repos.json"
```

Option B: Use init_log_file output instead of hardcoding
```diff
 init_log_file

 # Configuration
 readonly CONFIG_FILE="$HOME/.config/aidevops/repos.json"
 readonly LOCK_DIR="$HOME/.aidevops/locks"
 readonly LOCK_FILE="$LOCK_DIR/repo-sync.lock"
-readonly LOG_FILE="$HOME/.aidevops/logs/repo-sync.log"
+readonly LOG_FILE="$AIDEVOPS_LOG_FILE"
```
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/repo-sync-helper.sh around lines 45 - 51, The call to init_log_file is unused because the script hardcodes LOG_FILE; either remove the dead init_log_file call or wire the init to the rest of the script: ensure init_log_file sets AIDEVOPS_LOG_FILE and then replace the hardcoded readonly LOG_FILE with a reference to AIDEVOPS_LOG_FILE (and update the log() function to use AIDEVOPS_LOG_FILE instead of LOG_FILE) so logging consistently uses the initialized path; search for init_log_file, AIDEVOPS_LOG_FILE, LOG_FILE, and log() to make the change.
414-428: EXIT trap in `cmd_check` overwrites any previously set EXIT trap.

Line 428 sets `trap 'release_lock' EXIT`, which will replace any existing EXIT trap. If the script later calls `update_state` (line 484), which sets a `RETURN` trap (line 378, safe), that's fine — but if `cmd_check` is invoked as part of a larger flow that already has an EXIT trap, the lock-release trap silently replaces it.

Consider stacking the trap or using a cleanup function pattern consistent with the rest of the codebase.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/repo-sync-helper.sh around lines 414 - 428, The EXIT trap set in cmd_check currently overwrites any existing EXIT trap; change cmd_check to preserve and chain existing EXIT handlers instead of replacing them by capturing the current EXIT trap (via trap -p EXIT or similar) and installing a new trap that calls release_lock and then invokes the previous handler, or use the project’s established cleanup helper (e.g., an add_exit_trap function) if available; update references to acquire_lock/release_lock and ensure release_lock is only called once when chaining so other parts of the script (like update_state and its RETURN trap) are not overridden.
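One way to stack handlers, sketched here as a hypothetical `add_exit_trap` helper (not part of this PR), is to read the current EXIT trap with `trap -p` and prepend the new command. This sketch assumes handler strings contain no embedded single quotes; `trap -p` output would need fuller parsing otherwise:

```bash
#!/usr/bin/env bash

# Hypothetical helper: chain a new command onto the existing EXIT trap
# instead of replacing it.
add_exit_trap() {
  local new_cmd="$1" prev_cmd
  prev_cmd=$(trap -p EXIT)            # e.g. trap -- 'release_lock' EXIT
  prev_cmd=${prev_cmd#"trap -- '"}    # strip leading:  trap -- '
  prev_cmd=${prev_cmd%"' EXIT"}       # strip trailing: ' EXIT
  if [[ -n "$prev_cmd" ]]; then
    trap "${new_cmd}; ${prev_cmd}" EXIT
  else
    trap "$new_cmd" EXIT
  fi
}
```

With this, `add_exit_trap 'release_lock'` preserves any handler installed earlier in the flow; new handlers run first, in last-in-first-out order.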
459-464: Worktree skip on line 464 is unreachable dead code.

Line 461 checks `[[ -d "$repo_dir/.git" ]] || continue` — if `.git` is a file (as in a worktree), the `-d` test is false and `continue` fires, so execution never reaches line 464's `-f` check. The comment on line 463 is accurate about worktree behavior, but the guard is redundant.

Remove the unreachable check
```diff
     # Skip if not a git repo
     [[ -d "$repo_dir/.git" ]] || continue

-    # Skip git worktrees (they have .git as a file, not a directory)
-    [[ -f "$repo_dir/.git" ]] && continue
-
     if sync_repo "$repo_dir"; then
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/repo-sync-helper.sh around lines 459 - 464, The second guard that checks [[ -f "$repo_dir/.git" ]] is dead code because the earlier [[ -d "$repo_dir/.git" ]] || continue already continues when .git is a file; update the logic in the while loop that iterates over repo_dir so worktrees are skipped correctly and without unreachable checks: either remove the redundant [[ -f "$repo_dir/.git" ]] check, or invert to check for a file first (if [[ -f "$repo_dir/.git" ]]; then continue) followed by the directory check (elif [[ -d "$repo_dir/.git" ]]; then …) so the intent to skip worktrees is preserved and no branch is unreachable.
535: Replace deprecated `launchctl load -w` with `launchctl bootstrap` using the proper domain.

Apple labels `launchctl load/unload` as legacy subcommands. Use `launchctl bootstrap gui/$(id -u) "$LAUNCHD_PLIST"` instead (for user LaunchAgents) or `sudo launchctl bootstrap system` (for system daemons). The bootstrap command implicitly persists the job, so no `-w` equivalent is needed.

Recommended replacement pattern

```diff
-    if launchctl load -w "$LAUNCHD_PLIST" 2>/dev/null; then
+    if launchctl bootstrap gui/$(id -u) "$LAUNCHD_PLIST" 2>/dev/null; then
```
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/repo-sync-helper.sh at line 535, The script currently uses the deprecated launchctl subcommand in the conditional that runs launchctl load -w "$LAUNCHD_PLIST"; replace that invocation with the modern bootstrap form: for user LaunchAgents call launchctl bootstrap gui/$(id -u) "$LAUNCHD_PLIST" (no -w) and for system daemons use sudo launchctl bootstrap system "$LAUNCHD_PLIST" as appropriate; update the conditional and any corresponding unload/failure branches to use bootstrap/bootout semantics and preserve the LAUNCHD_PLIST and user-id resolution (id -u) references so the script still targets the same job domain.
```bash
# Update repos.json with git_parent_dirs
jq --argjson dirs '["~/Git", "~/Projects"]' \
  '. + {git_parent_dirs: $dirs}' \
  ~/.config/aidevops/repos.json > /tmp/repos.json && \
  mv /tmp/repos.json ~/.config/aidevops/repos.json
```
Example uses /tmp/repos.json — insecure temp file in user-facing documentation.
Users will copy-paste this. Using a predictable path in /tmp/ is a classic symlink-attack vector (CWE-377). Use a user-scoped temp path or the redirect-to-same-file pattern with sponge/mktemp.
Safer example

```diff
 # Update repos.json with git_parent_dirs
 jq --argjson dirs '["~/Git", "~/Projects"]' \
   '. + {git_parent_dirs: $dirs}' \
-  ~/.config/aidevops/repos.json > /tmp/repos.json && \
-  mv /tmp/repos.json ~/.config/aidevops/repos.json
+  ~/.config/aidevops/repos.json > ~/.config/aidevops/repos.json.tmp && \
+  mv ~/.config/aidevops/repos.json.tmp ~/.config/aidevops/repos.json
```
+ mv ~/.config/aidevops/repos.json.tmp ~/.config/aidevops/repos.json🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.agents/aidevops/onboarding.md around lines 1099 - 1104, The example uses a
predictable /tmp/repos.json which is insecure; update the snippet that writes to
~/.config/aidevops/repos.json to use a safe temporary file (mktemp) or the
redirect-to-same-file pattern with sponge (or write into the target directory)
instead of /tmp/repos.json; specifically modify the jq + mv pipeline in the code
block so it creates a unique temp file via mktemp (or pipes to sponge) and then
atomically replaces ~/.config/aidevops/repos.json, ensuring the commands that
reference ~/.config/aidevops/repos.json and /tmp/repos.json are replaced with
the safer mktemp or sponge variants.
```bash
cat <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>${LAUNCHD_LABEL}</string>
    <key>ProgramArguments</key>
    <array>
        <string>${script_path}</string>
        <string>check</string>
    </array>
    <key>StartInterval</key>
    <integer>${interval_seconds}</integer>
    <key>StandardOutPath</key>
    <string>${LOG_FILE}</string>
    <key>StandardErrorPath</key>
    <string>${LOG_FILE}</string>
    <key>EnvironmentVariables</key>
    <dict>
        <key>PATH</key>
        <string>${env_path}</string>
    </dict>
    <key>RunAtLoad</key>
    <false/>
    <key>KeepAlive</key>
    <false/>
</dict>
</plist>
EOF
return 0
}
```
🧩 Analysis chain

🏁 Scripts executed:

```bash
# View the file context around the snippet
cat -n .agents/scripts/repo-sync-helper.sh | head -170 | tail -50

# Search for XML escaping functions and patterns in shell scripts
rg -n 'xml_escape|sed.*&|gsub.*&.*&' --type=sh -i

# Check if auto-update-helper.sh exists and examine its plist generation
fd 'auto-update-helper.sh' --exec cat {} \;
```
XML variables in plist should be escaped to prevent malformed output.
${script_path}, ${env_path} (which is $PATH), ${LAUNCHD_LABEL}, and ${LOG_FILE} are interpolated directly into the plist XML. While the primary variables are application-controlled, ${env_path} reflects the system PATH, which could theoretically contain special XML characters (&, <, >). Standard practice for plist generation is to XML-escape these values.
The same issue exists in auto-update-helper.sh (_generate_auto_update_plist).
Recommended fix: Escape special characters before interpolation, e.g., `${script_path//&/&amp;}` in the heredoc, or pipe the heredoc through `sed -e 's/&/\&amp;/g; s/</\&lt;/g; s/>/\&gt;/g'`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.agents/scripts/repo-sync-helper.sh around lines 127 - 158, The plist
generator is inserting raw variable values (${script_path}, ${env_path},
${LAUNCHD_LABEL}, ${LOG_FILE}) into XML which can break the plist if they
contain &, <, or >; update the plist emission in the function that prints this
heredoc in .agents/scripts/repo-sync-helper.sh to XML-escape those variables
before interpolation (e.g. transform & to &amp;, < to &lt;, > to &gt;) — you can
do this by escaping each variable with parameter expansion (e.g. replace
occurrences of the variables with their escaped equivalents) or by piping the
heredoc through sed to perform s/&/\&amp;/g; s/</\&lt;/g; s/>/\&gt;/g; and apply
the same fix to the corresponding plist generator in auto-update-helper.sh
(refer to the heredoc that outputs the plist and the variable names
${script_path}, ${env_path}, ${LAUNCHD_LABEL}, ${LOG_FILE} to locate the
changes).
```bash
get_default_branch() {
    local repo_path="$1"

    # Try to get from remote HEAD reference
    local remote_head
    remote_head=$(git -C "$repo_path" symbolic-ref refs/remotes/origin/HEAD 2>/dev/null || true)
    if [[ -n "$remote_head" ]]; then
        echo "${remote_head##*/}"
        return 0
    fi

    # Fall back to checking common default branch names
    for branch in main master trunk develop; do
        if git -C "$repo_path" show-ref --verify --quiet "refs/heads/$branch" 2>/dev/null; then
            echo "$branch"
            return 0
        fi
    done

    echo ""
    return 0
}
```
Default branch detection fallback includes develop and trunk — potential unintended auto-pulls.
The fallback loop (line 252) checks main, master, trunk, and develop. Auto-pulling develop or trunk may surprise users, since these aren't universally considered stable branches. The PR description specifically mentions "main or master" as the intended targets.
Restrict fallback to main/master only

```diff
 # Fall back to checking common default branch names
-for branch in main master trunk develop; do
+for branch in main master; do
     if git -C "$repo_path" show-ref --verify --quiet "refs/heads/$branch" 2>/dev/null; then
```
if git -C "$repo_path" show-ref --verify --quiet "refs/heads/$branch" 2>/dev/null; then📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```shell
get_default_branch() {
    local repo_path="$1"

    # Try to get from remote HEAD reference
    local remote_head
    remote_head=$(git -C "$repo_path" symbolic-ref refs/remotes/origin/HEAD 2>/dev/null || true)
    if [[ -n "$remote_head" ]]; then
        echo "${remote_head##*/}"
        return 0
    fi

    # Fall back to checking common default branch names
    for branch in main master; do
        if git -C "$repo_path" show-ref --verify --quiet "refs/heads/$branch" 2>/dev/null; then
            echo "$branch"
            return 0
        fi
    done

    echo ""
    return 0
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.agents/scripts/repo-sync-helper.sh around lines 240 - 261, the
get_default_branch function's fallback loop currently checks main, master,
trunk, and develop, which can cause unintended auto-pulls from non-standard
branches; update the fallback in get_default_branch to only check "main" and
"master" (leave the remote HEAD logic intact) so only those two branch names are
considered when remote HEAD isn't available.
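Worth noting for the remote-HEAD path above: `refs/remotes/origin/HEAD` is only populated at clone time, so older clones may fall through to the branch-name loop. The sketch below (temporary throwaway repos, "main" assumed as the upstream default) shows how `git remote set-head origin --auto` repopulates the reference so the primary detection path works.

```shell
set -eu
tmp=$(mktemp -d)
# Upstream repo whose default branch is "main"
git init -q -b main "$tmp/upstream"
git -C "$tmp/upstream" -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m init
git clone -q "$tmp/upstream" "$tmp/clone"

# Simulate an older clone that never recorded origin's default branch
git -C "$tmp/clone" symbolic-ref --delete refs/remotes/origin/HEAD

# Ask the remote for its default branch and record it locally
git -C "$tmp/clone" remote set-head origin --auto >/dev/null
git -C "$tmp/clone" symbolic-ref refs/remotes/origin/HEAD   # → refs/remotes/origin/main
```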
```shell
while IFS= read -r -d '' repo_dir; do
    # Skip if not a git repo
    [[ -d "$repo_dir/.git" ]] || continue

    # Skip git worktrees (they have .git as a file, not a directory)
    [[ -f "$repo_dir/.git" ]] && continue

    if sync_repo "$repo_dir"; then
        # Determine if it was pulled or skipped based on log
        local last_log
        last_log=$(tail -1 "$LOG_FILE" 2>/dev/null || true)
        if [[ "$last_log" == *"PULLED"* ]]; then
            synced=$((synced + 1))
        elif [[ "$last_log" == *"OK"* ]]; then
            :   # already up to date — not counted as synced or skipped
        else
            skipped=$((skipped + 1))
        fi
```
Fragile counting logic — parsing the last log line is racy and brittle.

The sync result counting reads the last line of LOG_FILE to determine whether a repo was PULLED, OK, or SKIP. This is unreliable because:

- Other background processes or concurrent writes could append lines between `sync_repo` returning and `tail -1` reading.
- If `sync_repo` returns 0 without logging (e.g., the `[[ ! -d "$repo_path/.git" ]]` early return at line 290), the last log line belongs to a previous repo.

A more robust approach: have `sync_repo` communicate its outcome via a return code or stdout convention rather than parsing the shared log file.
Proposed fix: use stdout from `sync_repo` to report status

```diff
 sync_repo() {
     local repo_path="$1"
     local repo_name
     repo_name=$(basename "$repo_path")

     # Must be a git repo with a remote
     if [[ ! -d "$repo_path/.git" ]]; then
+        echo "NOTREPO"
         return 0
     fi

     # Must have at least one remote configured
     local remote
     remote=$(git -C "$repo_path" remote 2>/dev/null | head -1 || true)
     if [[ -z "$remote" ]]; then
         log_info "SKIP $repo_name: no remote configured"
+        echo "SKIP"
         return 0
     fi

     # ... (similar for other SKIP paths)

     # After successful pull:
     log_info "PULLED $repo_name: updated to $new_sha"
+    echo "PULLED"
     return 0

     # After already up to date:
     log_info "OK $repo_name: already up to date"
+    echo "OK"
     return 0
```

Then in `cmd_check`:
```diff
-            if sync_repo "$repo_dir"; then
-                # Determine if it was pulled or skipped based on log
-                local last_log
-                last_log=$(tail -1 "$LOG_FILE" 2>/dev/null || true)
-                if [[ "$last_log" == *"PULLED"* ]]; then
-                    synced=$((synced + 1))
-                elif [[ "$last_log" == *"OK"* ]]; then
-                    :   # already up to date
-                else
-                    skipped=$((skipped + 1))
-                fi
+            local result
+            result=$(sync_repo "$repo_dir") && rc=0 || rc=$?
+            if [[ $rc -eq 0 ]]; then
+                case "$result" in
+                    PULLED) synced=$((synced + 1)) ;;
+                    SKIP)   skipped=$((skipped + 1)) ;;
+                    *) ;;   # OK / NOTREPO
+                esac
             else
                 failed=$((failed + 1))
             fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.agents/scripts/repo-sync-helper.sh around lines 459 - 476, the counting is
brittle because it parses LOG_FILE; change sync_repo to explicitly communicate
its outcome (e.g., echo a single token like "PULLED", "OK", "SKIP", or "ERROR"
to stdout, or use distinct exit codes) and update the caller in the while loop to
capture that output (status=$(sync_repo "$repo_dir")) and branch on status to
increment synced or skipped instead of tailing LOG_FILE; keep logging to
LOG_FILE as-is inside sync_repo but do not rely on it for control flow, and
ensure any redirected logs from sync_repo are preserved so its stdout token
remains reliable.
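A minimal, self-contained sketch of the stdout-token convention the prompt describes; the function and token names here are assumptions for illustration, not the script's final API. Human-readable lines go to the log file, while stdout carries exactly one machine-readable token the caller can branch on.

```shell
set -eu
LOG_FILE=$(mktemp)
log_info() { printf '%s\n' "$*" >> "$LOG_FILE"; }   # logging stays file-only

sync_repo() {
  local repo_path="$1"
  if [[ ! -d "$repo_path/.git" ]]; then
    echo "NOTREPO"          # token on stdout, no log entry needed
    return 0
  fi
  log_info "OK $(basename "$repo_path"): already up to date"
  echo "OK"
}

status=$(sync_repo /no/such/repo)   # stdout carries only the token
echo "$status"                      # → NOTREPO
```

Because `log_info` never writes to stdout, the captured `$status` cannot be polluted by concurrent log appends, which removes the race the review identifies.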
Auto-dismissed: bot review does not block autonomous pipeline
Summary

- New `repo-sync-helper.sh` script (follows `auto-update-helper.sh` pattern) that runs daily via launchd/cron
- Scans configured parent directories (default: `~/Git/`) for git repos and runs `git pull --ff-only` on repos where the working tree is clean and on the default branch (main/master)
- `aidevops repo-sync [enable|disable|status|check|config|logs]` CLI subcommand
- `setup.sh` offers to enable during install (like auto-update and supervisor pulse)

Changes

- `.agents/scripts/repo-sync-helper.sh` (new)
  - `enable` — installs launchd LaunchAgent (macOS) or cron entry (Linux, daily at 3am)
  - `disable` — removes scheduler
  - `status` — shows scheduler state, configured dirs, last sync results
  - `check` — one-shot sync of all configured repos
  - `config` — shows configuration and how to edit `repos.json`
  - `logs [--tail N] [--follow]` — view sync logs
- `aidevops.sh` — `repo-sync | reposync` subcommand dispatch
- `setup.sh` — offers to enable repo-sync during install
- `.agents/aidevops/onboarding.md` — asks user to specify parent directories
Safety

- `git pull --ff-only` — never creates merge commits
- Skips dirty working trees and non-default branches
- Logs failures without stopping; worktrees ignored (main checkouts only)
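The first two safety gates can be sketched as below, using a throwaway repo; variable names are illustrative and "main" is assumed as the default branch. An untracked file is enough to make `git status --porcelain` non-empty, so the sync is skipped.

```shell
set -eu
tmp=$(mktemp -d)
git init -q -b main "$tmp/r"
git -C "$tmp/r" -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m init
touch "$tmp/r/untracked.txt"   # dirty the tree so the sync is skipped

# Pull only when the tree is clean AND HEAD is on the default branch
if [[ -z "$(git -C "$tmp/r" status --porcelain)" ]] \
   && [[ "$(git -C "$tmp/r" rev-parse --abbrev-ref HEAD)" == "main" ]]; then
  echo "would run: git pull --ff-only"
else
  echo "SKIP: dirty tree or non-default branch"   # → this branch prints
fi
```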
Configuration

Add a `git_parent_dirs` array to `~/.config/aidevops/repos.json`: `{"git_parent_dirs": ["~/Git", "~/Projects"]}`

Default: `~/Git`

Ref #1984
Summary by CodeRabbit
Release Notes