fix: harden 14 files across 17 quality-debt issues#3916
fix: harden 14 files across 17 quality-debt issues#3916johnwaldo wants to merge 17 commits intomarcusquinn:mainfrom
Conversation
content-scanner-helper.sh (GH#3897, 6 findings): - Reorder scan_content() to normalize before pre-filter (prevents Unicode bypass) - Pre-filter uses regex matching for keywords with metacharacters - Combine $RANDOM+$$+$SECONDS for boundary ID fallback - Source shared-constants.sh with || true instead of 2>/dev/null - Propagate perl/python3 normalization stderr and exit codes - Remove od stderr suppression, use tr -d '[:space:]' deploy-agents-on-merge.sh (GH#3909, 4 findings): - Validate since_commit rejects options and verifies via rev-parse - Guard find cleanup with TARGET_DIR empty check - Add -- separator to cp, chmod, rm in deploy_changed_files - Remove 2>/dev/null from find -delete fallback
Summary of ChangesHello, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the security and robustness of two critical shell scripts: Highlights
Changelog
Activity
Using Gemini Code AssistThe full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips. Invoking Gemini You can request assistance from Gemini at any point by creating a comment using either
Customization To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a Limitations & Feedback Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension. Footnotes
|
|
Caution Review failedThe pull request is closed. ℹ️ Recent review info⚙️ Run configurationConfiguration used: Path: .coderabbit.yaml Review profile: CHILL Plan: Pro Run ID: 📒 Files selected for processing (38)
WalkthroughReorders normalization before pre-filtering and strengthens multiple shell scripts: fixes Unicode bypasses and regex handling in the content scanner, tightens input validation and argument-safety in deploy and various helpers, removes broad stderr suppression, and adds several robustness and parsing improvements across scripts and tests. Changes
Estimated code review effort🎯 4 (Complex) | ⏱️ ~45 minutes Possibly related PRs
Suggested labels
🚥 Pre-merge checks | ✅ 4 | ❌ 1❌ Failed checks (1 inconclusive)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing Touches🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
There was a problem hiding this comment.
Code Review
This pull request introduces a valuable set of security and robustness enhancements to content-scanner-helper.sh and deploy-agents-on-merge.sh. The changes are well-documented and effectively address important issues like potential command injection, Unicode bypass vulnerabilities, and unsafe file operations. The hardening measures are solid. I have one suggestion to further improve the robustness of the keyword pre-filtering logic in content-scanner-helper.sh.
| return 0 | ||
| # Keywords containing regex metacharacters (.*+?|) use regex matching; | ||
| # plain keywords use faster literal substring matching. | ||
| if [[ "$keyword" =~ [.*+?\|] ]]; then |
There was a problem hiding this comment.
The character class [.*+?|] used to detect if a keyword should be treated as a regex is incomplete. It misses other common ERE metacharacters like (, ), ^, and $. This could lead to keywords intended as regular expressions being incorrectly treated as literal strings, which might cause them to not match as expected. For example, a keyword like (foo|bar) would be treated as a literal string. Consider expanding the character class to be more comprehensive for better accuracy.
| if [[ "$keyword" =~ [.*+?\|] ]]; then | |
| if [[ "$keyword" =~ [.*+?|()^$] ]]; then |
There was a problem hiding this comment.
Applied in commit e34ea8c — the character class was expanded to [.*+?|()^$] to include ()^$. Also cleaned up an unnecessary backslash escape before | in the latest push.
…ation (GH#3899, GH#3904, GH#3896) terminal-title-setup.sh (GH#3899, 1 finding): - Propagate check_and_fix_tabby exit code in cmd_install via || return 1 - Add || return 1 to tabby_enable_dynamic_titles call within check_and_fix_tabby - Add explicit return 0 at end of check_and_fix_tabby session-manager.md (GH#3904, 3 findings): - Escape $dir via printf %q in spawn_terminal_tab and spawn_iterm_tab to prevent osascript command injection from special characters in paths - Remove 2>/dev/null from grep, gh, VERSION read, and git describe examples response-scoring-helper.sh (GH#3896, 2 findings): - Deduplicate scores to one scorer per criterion (latest by rowid) via LEFT JOIN subquery, ensuring per-criterion MAX(CASE) and weighted avg use the same row set — fixes inconsistency when multiple scorers exist - Inline weighted avg computation instead of WEIGHTED_AVG_SQL correlated subquery so both aggregations operate on the deduplicated join
…ll, dedup docs (GH#3898, GH#3905, GH#3912, GH#3901, GH#3913) credential-helper.sh (GH#3898, 7 findings): - Remove 2>/dev/null from 5 file reads where existence is already checked - Remove 2>/dev/null from 2 validate_tenant_name calls for consistency cloudron.md (GH#3905, 3 findings): - Use MYSQL_PWD env var instead of -p flag to avoid password in process list - Remove obsolete security notes about -p flag exposure opencode-gitlab-ci.yml (GH#3912, 1 finding): - Download glab archive to file before extracting (enables checksum verification) - Add comment with instructions for pinning SHA-256 checksum - Clean up archive after extraction video.md (GH#3901): already fixed in current codebase — no changes needed AGENTS.md (GH#3913, 1 finding): - Deduplicate claim-task-id.sh guidance in Self-Improvement section by referencing the cross-repo task creation workflow instead of repeating it
|
@coderabbitai review |
✅ Actions performedReview triggered.
|
…y docs (GH#3225, GH#3220, GH#3319, GH#3153, GH#3297, GH#3199) settings-helper.sh (GH#3225, 1 critical finding): - Use --argjson for boolean and number types instead of string interpolation into jq filter, preventing potential injection if validation is relaxed - Number validation now uses jq tonumber (supports floats) instead of regex prompt-guard-helper.sh (GH#3220, 3 critical findings): - Sanitize matched_text in pipe-delimited output: replace | with fullwidth pipe and newlines with spaces to prevent format corruption and log injection - Limit stdin input to 10MB in cmd_scan_stdin to prevent DoS via memory exhaustion from untrusted content - Harden YAML pattern parser regexes: use negated character classes ([^"]+ / [^']+) instead of greedy .+ to handle trailing comments and edge cases plugins.sh (GH#3319, 1 critical finding): - Remove 2>/dev/null from uv python install to surface installation errors - Add missing failure feedback when uv installs but verification fails pulse-wrapper.sh (GH#3153, partial — 2 of 8 findings): - Remove 2>/dev/null from grep calls on sonar-project.properties where file existence is already checked local-hosting.md (GH#3297, 1 critical finding): - Replace 2>/dev/null with || true on xargs kill command google-chat.md (GH#3199, 1 critical finding): - Add WARNING about never disabling verifyGoogleTokens in production
…ession, arg guards (GH#3234, GH#3355, GH#3147, GH#3252, GH#3239, GH#3400, GH#3322, GH#3317, GH#3434, GH#3157, GH#3627, GH#3728, GH#3200)
✅ Actions performedReview triggered.
|
There was a problem hiding this comment.
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
.agents/scripts/skills-helper.sh (1)
197-205:⚠️ Potential issue | 🟡 MinorDon't collapse registry failures into “no results.”
With
|| true, a brokennpxinvocation now falls through to the empty-results branch after printing the real error, which gives conflicting feedback and hides the failure from callers.♻️ Suggested change
local raw_output - # Strip ANSI escape codes for parsing, but capture raw for display - raw_output=$(npx --yes skills find "$query" || true) + if ! raw_output=$(npx --yes skills find "$query"); then + log_error "Registry search failed for '$query'" + return 1 + fi if [[ -z "$raw_output" ]]; then log_warning "No results from skills.sh registry for '$query'"As per coding guidelines,
.agents/scripts/*.shautomation scripts should provide clear logging and feedback.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/skills-helper.sh around lines 197 - 205, The current raw_output assignment uses "npx --yes skills find \"$query\" || true" which hides failures by always succeeding and then treating errors as "no results"; change this so the npx invocation's exit status is checked: run the npx command into raw_output (raw_output=$(npx --yes skills find "$query")) and if it fails (non-zero exit) call log_error with the command's stderr (or a descriptive message) and return a non-zero status instead of falling through to the empty-results branch; keep the existing empty-result handling (the [[ -z "$raw_output" ]] branch and log_warning) for genuine empty responses from the skills find call..agents/scripts/settings-helper.sh (1)
227-238:⚠️ Potential issue | 🔴 CriticalConsolidate key validation logic using
default_typeto prevent false-valued settings and section corruption.Lines 227–238 contain two critical issues:
False-valued defaults are rejected as missing: The check
getpath($k | split(".")) // "__MISSING__"treatsfalsethe same asnull, so valid settings likeui.verboseandonboarding.completedfail validation despite existing in defaults with typeboolean.Section keys can be corrupted to scalars: Object-valued keys like
auto_updatepass validation and then get overwritten with a string via--argat line 276, replacing{"enabled":true,"interval_minutes":10}with"false".Replace the validation and type-detection logic to use
default_typeas the single source of truth, checking fornull(missing key) andobject(non-leaf section):Suggested fix
- # Validate the key exists in defaults (use getpath with key as data, not code) - local default_check - default_check=$(_generate_defaults | jq -r --arg k "$key" 'getpath($k | split(".")) // "__MISSING__"' 2>/dev/null || echo "__MISSING__") - if [[ "$default_check" == "__MISSING__" ]]; then + # Determine the default type once, then validate against it. + local default_type + default_type=$(_generate_defaults | jq -r --arg k "$key" 'getpath($k | split(".")) | type' 2>/dev/null || echo "null") + if [[ "$default_type" == "null" ]]; then print_error "Unknown setting: $key" print_info "Run 'settings-helper.sh list' to see available settings" return 1 fi - - # Determine value type from defaults and coerce accordingly - local default_type - default_type=$(_generate_defaults | jq -r --arg k "$key" 'getpath($k | split(".")) | type' 2>/dev/null || echo "string") + if [[ "$default_type" == "object" ]]; then + print_error "Setting '$key' is a section, not a writable leaf setting" + print_info "Run 'settings-helper.sh list' to see available leaf settings" + return 1 + fi🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/settings-helper.sh around lines 227 - 238, Compute the key's type once via default_type (using _generate_defaults | jq -r --arg k "$key" 'getpath($k|split(".")) | type' 2>/dev/null || echo "null") and use that as the single source of truth: if default_type == "null" call print_error "Unknown setting: $key" and print_info "Run 'settings-helper.sh list'…" and return 1; if default_type == "object" treat it as a section key (print_error about invalid section key and return 1) so object-valued keys are not overwritten by scalar --arg later; remove the prior existence check that relied on getpath ... // "__MISSING__" (which treats false as missing) and ensure downstream code that writes the value uses default_type to choose --argjson for booleans/numbers/objects or --arg for strings.
♻️ Duplicate comments (5)
.github/workflows/issue-sync.yml (1)
315-315:⚠️ Potential issue | 🟠 MajorTighten and align the closing-keyword regexes.
Line 315 now accepts singular forms, but Line 568 still rejects them, so
Fix#123will be processed on merge while still triggering the warning flow on open/edit. Both patterns also match inside larger words like `bugfixes `#123, which can pull unrelated issues intoALL_ISSUESand applystatus:donehygiene to the wrong ticket.🔧 Suggested fix
- LINKED_ISSUES=$(echo "$PR_BODY" | grep -oiE '(closes?|fixes?|resolves?)[[:space:]]*#[0-9]+' | grep -oE '[0-9]+' | sort -u | tr '\n' ' ' || true) + LINKED_ISSUES=$(printf '%s\n' "$PR_BODY" | grep -oiE '(^|[^[:alnum:]_])(close|closes|fix|fixes|resolve|resolves)[[:space:]]*#[0-9]+([^[:alnum:]_]|$)' | grep -oE '[0-9]+' | sort -u | tr '\n' ' ' || true) - if echo "$PR_BODY" | grep -qiE '(closes|fixes|resolves)[[:space:]]*#[0-9]+'; then + if printf '%s\n' "$PR_BODY" | grep -qiE '(^|[^[:alnum:]_])(close|closes|fix|fixes|resolve|resolves)[[:space:]]*#[0-9]+([^[:alnum:]_]|$)'; thenAlso applies to: 568-568
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/issue-sync.yml at line 315, The closing-keyword regex used to populate LINKED_ISSUES is too permissive and inconsistent with the one at the other location (and can match inside larger words); update both occurrences (the grep that sets LINKED_ISSUES and the grep that builds ALL_ISSUES/warning flow) to use a single, consistent regex that enforces word boundaries and allows singular or plural forms (e.g. use a pattern equivalent to \b(?:close|closes|fix|fixes|resolve|resolves)\b[[:space:]]*#[0-9]+ with grep -E/ -P as appropriate) so “Fix `#123`” and “Fixes `#123`” are accepted consistently and strings like “bugfixes `#123`” are not matched..agents/scripts/memory/recall.sh (1)
136-145:⚠️ Potential issue | 🟠 MajorUse exact token matching for
--typevalidation.
[[ ... =~ ... ]]still treatstype_filteras a regex, so values like.*can pass and malformed patterns can fail before your error path runs. A simple exact-match loop avoids both cases.Proposed fix
local type_where="" if [[ -n "$type_filter" ]]; then - local type_pattern=" $type_filter " - if [[ ! " $VALID_TYPES " =~ $type_pattern ]]; then + local valid_type=false + local candidate + for candidate in $VALID_TYPES; do + if [[ "$candidate" == "$type_filter" ]]; then + valid_type=true + break + fi + done + if [[ "$valid_type" != "true" ]]; then log_error "Invalid type: $type_filter" log_error "Valid types: $VALID_TYPES" return 1 fi type_where="AND l.type = '$type_filter'" fi#!/bin/bash # Reproduce the current regex behavior and show the affected code. valid='WORKING_SOLUTION FAILED_APPROACH CODEBASE_PATTERN USER_PREFERENCE TOOL_CONFIG DECISION CONTEXT ARCHITECTURAL_DECISION ERROR_FIX OPEN_THREAD SUCCESS_PATTERN FAILURE_PATTERN' for candidate in 'SUCCESS_PATTERN' '.*' '['; do printf 'candidate=%q -> ' "$candidate" bash -lc ' valid="$1" candidate="$2" type_pattern=" $candidate " if [[ " $valid " =~ $type_pattern ]]; then echo accepted else echo rejected fi ' _ "$valid" "$candidate" done echo sed -n '136,145p' .agents/scripts/memory/recall.sh🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/memory/recall.sh around lines 136 - 145, The type validation currently uses a regex match via [[ " $VALID_TYPES " =~ $type_pattern ]] which treats type_filter as a regex; replace this with an exact-match check by iterating over the VALID_TYPES tokens (or using a case statement) and comparing each token to the variable type_filter; if no token equals type_filter call log_error twice as before and return 1, then set type_where="AND l.type = '$type_filter'"; reference the variables/type names type_filter, VALID_TYPES, type_where and the log_error calls when making the change..agents/scripts/email-thread-reconstruction.py (1)
239-249:⚠️ Potential issue | 🟠 MajorBuild thread-index links relative to
output_file.
Path(email["file"]).nameonly works when the index is written beside the emails. If--outputpoints somewhere else, every markdown link in the generated index breaks.🛠️ Proposed fix
+import os import sys import re from pathlib import Path @@ - file_path = Path(email["file"]).name + file_path = Path( + os.path.relpath(email["file"], start=Path(output_file).parent) + ).as_posix() email_subject = email.get("subject", "No Subject") from_addr = email.get("from", "Unknown") date_sent = email.get("date_sent", "Unknown")🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/email-thread-reconstruction.py around lines 239 - 249, The markdown links are built using Path(email["file"]).name which breaks when --output points to a different directory; instead compute the link path relative to the output file location (use output_file or its Path parent) so links point correctly from the index to each email. Update the file_path calculation that uses email["file"] to produce a relative path (e.g., via os.path.relpath or Path(...).relative_to with a safe fallback) before appending the line in the block that builds lines (the code that sets file_path and appends f"{indent}{i}. [{email_subject}]({file_path}) - {from_addr} - {date_sent}")..agents/scripts/response-scoring-helper.sh (1)
345-369:⚠️ Potential issue | 🟠 MajorThe scorer-dedup fix still is not the single source of truth.
These changes fix
WEIGHTED_AVG_SQLand_sync_score_to_patterns(), butcmd_leaderboard()still averages rawscores, andcmd_compare()/cmd_export()still surface per-criterion data from raw rows. With multiple scorers on one criterion, leaderboard/export output can still disagree with compare/sync.Also applies to: 398-433
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/response-scoring-helper.sh around lines 345 - 369, The leaderboard/compare/export paths still read raw scores causing inconsistencies; update cmd_leaderboard(), cmd_compare(), and cmd_export() to use the same deduped/latest-per-criterion logic as WEIGHTED_AVG_SQL (the inner subquery that selects s1.* joined to max(rowid) per response_id,criterion) so they compute averages and per-criterion values from the deduplicated set, and ensure _sync_score_to_patterns() remains the canonical sync path used by these commands; in short, replace raw references to scores in cmd_leaderboard/cmd_compare/cmd_export with queries or helpers that select from the deduped "s" derived set (the same pattern used inside WEIGHTED_AVG_SQL) so all outputs agree..agents/scripts/pulse-wrapper.sh (1)
2508-2510:⚠️ Potential issue | 🟡 Minor
sonar-project.propertiesparsing is still too brittle.Exact
^key=matching misses validkey = valueentries, andcut -d= -f2preserves trailing\rfrom CRLF files. That can leaveproject_key/org_keyempty or malformed and make SonarCloud checks get skipped for otherwise valid configs.🛠️ Proposed fix
- project_key=$(grep '^sonar.projectKey=' "${repo_path}/sonar-project.properties" | cut -d= -f2 || true) + project_key=$(sed -nE 's/^[[:space:]]*sonar\.projectKey[[:space:]]*=[[:space:]]*//p' "${repo_path}/sonar-project.properties" | head -n1 | tr -d '\r') local org_key - org_key=$(grep '^sonar.organization=' "${repo_path}/sonar-project.properties" | cut -d= -f2 || true) + org_key=$(sed -nE 's/^[[:space:]]*sonar\.organization[[:space:]]*=[[:space:]]*//p' "${repo_path}/sonar-project.properties" | head -n1 | tr -d '\r')🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/pulse-wrapper.sh around lines 2508 - 2510, The sonar-project.properties parsing is brittle: update how project_key and org_key are extracted from "${repo_path}/sonar-project.properties" so keys with optional surrounding whitespace like "sonar.projectKey = value" are matched and any trailing CRLF/whitespace is removed; replace the current grep+cut lines that set project_key and org_key with commands (e.g., using awk/sed/grep+sed) that match /^[[:space:]]*sonar.projectKey[[:space:]]*=/ and /^[[:space:]]*sonar.organization[[:space:]]*=/, extract the right-hand side, trim leading/trailing whitespace and strip trailing '\r' characters before assigning to the variables project_key and org_key (preserve the "|| true" behavior).
🧹 Nitpick comments (6)
.agents/AGENTS.md (1)
60-60: Avoid duplicating theclaim-task-id.shinvocation here.This line now hardcodes command syntax that is already defined in the authoritative cross-repo workflow at Lines 162-169. Keeping the example in both places will drift the next time that workflow changes; keep this bullet focused on ownership/routing and point readers to the workflow section for the exact command.
Based on learnings,
AGENTS.mdshould use progressive disclosure with pointers rather than inline duplicated guidance.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/AGENTS.md at line 60, Remove the duplicated hardcoded claim-task-id.sh example in the "Framework-level" bullet: delete the inline command invocation (claim-task-id.sh --repo-path ...) and instead replace it with a short pointer that directs readers to the cross-repo task creation workflow section (the workflow that defines the authoritative claim-task-id.sh usage) while keeping the bullet focused on ownership/routing and noting that the exact command is documented in the workflow section..agents/services/hosting/cloudron.md (2)
293-293: Expand the security note to cover shell history exposure.The security note currently mentions only
docker inspectrevealing credentials, but the command at line 290 also exposes the password through shell history and the host's process list (via command substitution). Consider expanding the note to provide complete guidance.📝 Suggested security note expansion
-> **Security note**: The `docker inspect` command above reveals database credentials. Redact passwords before pasting output into forum posts, tickets, or chat. +> **Security note**: Both the `docker inspect` command and the root-access command expose database credentials. The password file is stored at `/home/yellowtent/platformdata/mysql/root_password`. When using command substitution to read passwords, be aware they will appear in your shell history. Always redact credentials before sharing commands, output, or logs.Based on learnings: Use placeholders in code examples and note secure storage locations for sensitive configuration.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/services/hosting/cloudron.md at line 293, Expand the existing security note that references `docker inspect` to also warn that running commands with embedded passwords or command substitution can expose secrets in shell history and the host process list; instruct readers to avoid inline passwords (use placeholders in examples), clear or redact shell history (or use a secure shell session), and store credentials in secure locations (env variables, secret managers, or mounted files) rather than embedding them in commands so that both `docker inspect` output and process/shell history leaks are prevented.
289-290: Consider a two-step approach to avoid password exposure in host shell history.While
MYSQL_PWDcorrectly prevents the password from appearing in the container's MySQL process list (an improvement over-p), the command substitution$(cat /home/yellowtent/platformdata/mysql/root_password)still exposes the password in the host's shell history and process list.For documentation intended as a reference pattern, consider showing a safer two-step approach:
🔐 Safer alternative using a two-step process
# Or use root access -docker exec -it -e MYSQL_PWD="$(cat /home/yellowtent/platformdata/mysql/root_password)" mysql mysql -uroot +# Step 1: Read password into a variable (use 'read' to avoid shell history) +read -r -s MYSQL_ROOT_PWD < /home/yellowtent/platformdata/mysql/root_password +# Step 2: Pass to container via environment +docker exec -it -e MYSQL_PWD="$MYSQL_ROOT_PWD" mysql mysql -uroot +unset MYSQL_ROOT_PWDAlternatively, for interactive use where shell history is less of a concern, document the current command with an explicit note about the trade-offs.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/services/hosting/cloudron.md around lines 289 - 290, Update the documentation to avoid exposing the password via command substitution: replace the single-line example that uses MYSQL_PWD="$(cat /home/yellowtent/platformdata/mysql/root_password)" with a safer two-step pattern—briefly describe reading the password into a temporary environment (or using a silent prompt), running docker exec mysql -uroot with that temp credential, then immediately unsetting/clearing it; reference the original symbols MYSQL_PWD, docker exec, /home/yellowtent/platformdata/mysql/root_password and mysql -uroot so readers can locate the exact example, and also add a short note explaining the interactive trade-offs if they prefer the original one-liner..agents/scripts/worktree-helper.sh (1)
718-720: Consider usingcutfor consistency with existing code.This file already computes
main_worktreein two other places (lines 76 and 571) usingcut -d' ' -f2-. Usingsedhere works correctly but creates minor inconsistency.♻️ Align with existing pattern
# Identify the main worktree path (the one with the actual .git directory) local main_worktree - main_worktree="$(git worktree list --porcelain | head -1 | sed 's/^worktree //')" + main_worktree="$(git worktree list --porcelain | head -1 | cut -d' ' -f2-)"🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/worktree-helper.sh around lines 718 - 720, Replace the sed extraction for main_worktree with the same cut pattern used elsewhere for consistency: when assigning main_worktree (the variable set from "git worktree list --porcelain"), pipe the output through cut -d' ' -f2- instead of sed 's/^worktree //' so it matches the other occurrences (lines computing main_worktree) and keeps the parsing approach consistent across the script..agents/scripts/prompt-guard-helper.sh (2)
1650-1662: Consider aligning error handling withscan-stdinpattern.The
check-stdinpath uses direct assignmentcontent=$(head -c "$max_bytes")and relies onset -efor error handling, whilecmd_scan_stdin(line 719) explicitly checksif ! content=$(head -c "$max_bytes"); then.Both work correctly due to
set -e, but the explicit error check incmd_scan_stdinprovides clearer diagnostics ("Failed to read from stdin"). Consider aligning for consistency:♻️ Optional: Add explicit error check for consistency
check-stdin) local max_bytes=10485760 local content - content=$(head -c "$max_bytes") + if ! content=$(head -c "$max_bytes"); then + _pg_log_error "Failed to read from stdin" + return 1 + fi if [[ -z "$content" ]]; then _pg_log_error "No input received on stdin" return 1🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/prompt-guard-helper.sh around lines 1650 - 1662, The check-stdin block uses content=$(head -c "$max_bytes") without an explicit failure check; update it to mirror cmd_scan_stdin's pattern by performing an if ! content=$(head -c "$max_bytes"); then _pg_log_error "Failed to read from stdin" and return 1; keep the existing empty-content check, byte_count calculation, warning and cmd_check "$content" logic intact so error handling is consistent with cmd_scan_stdin.
1665-1677: Same error handling inconsistency ascheck-stdin.For consistency with
cmd_scan_stdin's explicit error handling pattern, this path could also benefit from theif ! content=$(...)guard. Same optional refactor applies here.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/prompt-guard-helper.sh around lines 1665 - 1677, The stdin-read path should use the same guarded assignment pattern as cmd_scan_stdin/check-stdin to catch read errors; change the plain content=$(head -c "$max_bytes") to an if ! content=$(head -c "$max_bytes"); then _pg_log_error "Failed to read stdin" && return 1; fi form, then keep the existing empty-check, byte_count calculation, truncation warning, and final call to cmd_sanitize "$content" so that cmd_sanitize and the surrounding function handle both empty and failed reads consistently.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.agents/scripts/memory/recall.sh:
- Around line 148-151: The --recent branch (when recent_mode is true) is missing
project_filter and max_age_days so it ignores --project and --max-age-days;
update the recent query construction in recall.sh to reuse the same filter
builder used for non-recent queries (include project_filter and max_age_days
conditions or the combined extra_filters variable) so the SELECT uses
entity_where, project_filter, max_age_days (or extra_filters), auto_filter and
type_where before ORDER BY; alternatively, explicitly detect and error out when
recent_mode is combined with --project or --max-age-days (check recent_mode,
project_filter, and max_age_days) to prevent silent unscoped results.
- Around line 136-145: The validation rejects mission-related memory types
because VALID_TYPES does not include the constants used elsewhere; update
VALID_TYPES to include "MISSION_PATTERN", "MISSION_AGENT", and "MISSION_SCRIPT"
(the values of MEMORY_TYPE_MISSION_PATTERN, MEMORY_TYPE_MISSION_AGENT,
MEMORY_TYPE_MISSION_SCRIPT) so the type_filter check in the recall type_where
logic accepts these values, or alternatively change all references to use only
values currently listed in VALID_TYPES; ensure the added strings match the exact
constants used in mission-skill-learner.sh and the recall type_where code that
builds type_where="AND l.type = '$type_filter'".
In @.agents/scripts/pre-commit-hook.sh:
- Line 84: The current grep pipeline that builds violations_output is excluding
patterns matching '\$[1-9]\s*\|' which hides valid shell positional-parameter
violations like cmd "$1" | jq ...; update the pipeline used in
validate_positional_parameters (the violations_output assignment) to remove the
grep -vE '\$[1-9]\s*\|' exclusion so that pipelines containing positional
parameters are reported, keeping the other existing exclusions intact.
In @.agents/workflows/session-manager.md:
- Line 57: Replace the fragile grep invocation that produces "0\n0" with a
pipeline that counts matches reliably and uses portable whitespace: set
incomplete=$(grep -E '^[[:space:]]*- \[ \]' TODO.md | wc -l) (reference: the
variable incomplete and the grep line in the snippet and the completion check
later on). This avoids relying on grep's exit code, makes the whitespace class
portable across macOS/Linux, and yields a single numeric value for the
completion check.
In `@configs/mcp-templates/opencode-gitlab-ci.yml`:
- Around line 39-45: The download step currently fetches and extracts an
unverified artifact and hard-codes linux_amd64; change the logic around
GLAB_VERSION/GLAB_ARCHIVE to detect runner architecture (map uname -m to
amd64/arm64/etc.) instead of using linux_amd64, then download the release
checksums (checksums.txt) for that GLAB_VERSION and verify the downloaded
GLAB_ARCHIVE with sha256sum --check (or comparable verification) before running
tar xzf; ensure curl failures or checksum mismatches cause a non-zero exit and
do not proceed to tar.
- Around line 42-43: The GLAB_ARCHIVE variable hardcodes amd64; detect the
runner architecture and choose the proper suffix before setting GLAB_ARCHIVE.
Add logic that inspects `uname -m` (or an ARCH variable) and maps architectures
like x86_64 -> amd64 and aarch64/arm64 -> arm64, then construct GLAB_ARCHIVE
using that mapped arch and GLAB_VERSION (e.g.,
glab_${GLAB_VERSION#v}_linux_${MAPPED_ARCH}.tar.gz). Update any downstream
references to use the new GLAB_ARCHIVE and ensure GLAB_ARCHIVE is set after the
architecture mapping.
In `@tests/entity-extraction-test-fixtures/date-formats.md`:
- Line 15: The test fixture line "Board review: Monday, 6 April 2026" was
corrected from the previous incorrect weekday; update the test commit/PR
description or the changelog to explicitly mention this incidental date-fix so
reviewers know it’s an accuracy fix unrelated to the security hardening, and add
a brief note in the test file (e.g., a one-line comment above the fixture)
indicating the change corrects a day-of-week mismatch for clarity.
---
Outside diff comments:
In @.agents/scripts/settings-helper.sh:
- Around line 227-238: Compute the key's type once via default_type (using
_generate_defaults | jq -r --arg k "$key" 'getpath($k|split(".")) | type'
2>/dev/null || echo "null") and use that as the single source of truth: if
default_type == "null" call print_error "Unknown setting: $key" and print_info
"Run 'settings-helper.sh list'…" and return 1; if default_type == "object" treat
it as a section key (print_error about invalid section key and return 1) so
object-valued keys are not overwritten by scalar --arg later; remove the prior
existence check that relied on getpath ... // "__MISSING__" (which treats false
as missing) and ensure downstream code that writes the value uses default_type
to choose --argjson for booleans/numbers/objects or --arg for strings.
In @.agents/scripts/skills-helper.sh:
- Around line 197-205: The current raw_output assignment uses "npx --yes skills
find \"$query\" || true" which hides failures by always succeeding and then
treating errors as "no results"; change this so the npx invocation's exit status
is checked: run the npx command into raw_output (raw_output=$(npx --yes skills
find "$query")) and if it fails (non-zero exit) call log_error with the
command's stderr (or a descriptive message) and return a non-zero status instead
of falling through to the empty-results branch; keep the existing empty-result
handling (the [[ -z "$raw_output" ]] branch and log_warning) for genuine empty
responses from the skills find call.
---
Duplicate comments:
In @.agents/scripts/email-thread-reconstruction.py:
- Around line 239-249: The markdown links are built using
Path(email["file"]).name which breaks when --output points to a different
directory; instead compute the link path relative to the output file location
(use output_file or its Path parent) so links point correctly from the index to
each email. Update the file_path calculation that uses email["file"] to produce
a relative path (e.g., via os.path.relpath or Path(...).relative_to with a safe
fallback) before appending the line in the block that builds lines (the code
that sets file_path and appends f"{indent}{i}. [{email_subject}]({file_path}) -
{from_addr} - {date_sent}").
In @.agents/scripts/memory/recall.sh:
- Around line 136-145: The type validation currently uses a regex match via [[ "
$VALID_TYPES " =~ $type_pattern ]] which treats type_filter as a regex; replace
this with an exact-match check by iterating over the VALID_TYPES tokens (or
using a case statement) and comparing each token to the variable type_filter; if
no token equals type_filter call log_error twice as before and return 1, then
set type_where="AND l.type = '$type_filter'"; reference the variables/type names
type_filter, VALID_TYPES, type_where and the log_error calls when making the
change.
In @.agents/scripts/pulse-wrapper.sh:
- Around line 2508-2510: The sonar-project.properties parsing is brittle: update
how project_key and org_key are extracted from
"${repo_path}/sonar-project.properties" so keys with optional surrounding
whitespace like "sonar.projectKey = value" are matched and any trailing
CRLF/whitespace is removed; replace the current grep+cut lines that set
project_key and org_key with commands (e.g., using awk/sed/grep+sed) that match
/^[[:space:]]*sonar.projectKey[[:space:]]*=/ and
/^[[:space:]]*sonar.organization[[:space:]]*=/, extract the right-hand side,
trim leading/trailing whitespace and strip trailing '\r' characters before
assigning to the variables project_key and org_key (preserve the "|| true"
behavior).
In @.agents/scripts/response-scoring-helper.sh:
- Around line 345-369: The leaderboard/compare/export paths still read raw
scores causing inconsistencies; update cmd_leaderboard(), cmd_compare(), and
cmd_export() to use the same deduped/latest-per-criterion logic as
WEIGHTED_AVG_SQL (the inner subquery that selects s1.* joined to max(rowid) per
response_id,criterion) so they compute averages and per-criterion values from
the deduplicated set, and ensure _sync_score_to_patterns() remains the canonical
sync path used by these commands; in short, replace raw references to scores in
cmd_leaderboard/cmd_compare/cmd_export with queries or helpers that select from
the deduped "s" derived set (the same pattern used inside WEIGHTED_AVG_SQL) so
all outputs agree.
In @.github/workflows/issue-sync.yml:
- Line 315: The closing-keyword regex used to populate LINKED_ISSUES is too
permissive and inconsistent with the one at the other location (and can match
inside larger words); update both occurrences (the grep that sets LINKED_ISSUES
and the grep that builds ALL_ISSUES/warning flow) to use a single, consistent
regex that enforces word boundaries and allows singular or plural forms (e.g.
use a pattern equivalent to
\b(?:close|closes|fix|fixes|resolve|resolves)\b[[:space:]]*#[0-9]+ with grep -E/
-P as appropriate) so “Fix `#123`” and “Fixes `#123`” are accepted consistently and
strings like “bugfixes `#123`” are not matched.
---
Nitpick comments:
In @.agents/AGENTS.md:
- Line 60: Remove the duplicated hardcoded claim-task-id.sh example in the
"Framework-level" bullet: delete the inline command invocation (claim-task-id.sh
--repo-path ...) and instead replace it with a short pointer that directs
readers to the cross-repo task creation workflow section (the workflow that
defines the authoritative claim-task-id.sh usage) while keeping the bullet
focused on ownership/routing and noting that the exact command is documented in
the workflow section.
In @.agents/scripts/prompt-guard-helper.sh:
- Around line 1650-1662: The check-stdin block uses content=$(head -c
"$max_bytes") without an explicit failure check; update it to mirror
cmd_scan_stdin's pattern by performing an if ! content=$(head -c "$max_bytes");
then _pg_log_error "Failed to read from stdin" and return 1; keep the existing
empty-content check, byte_count calculation, warning and cmd_check "$content"
logic intact so error handling is consistent with cmd_scan_stdin.
- Around line 1665-1677: The stdin-read path should use the same guarded
assignment pattern as cmd_scan_stdin/check-stdin to catch read errors; change
the plain content=$(head -c "$max_bytes") to an if ! content=$(head -c
"$max_bytes"); then _pg_log_error "Failed to read stdin" && return 1; fi form,
then keep the existing empty-check, byte_count calculation, truncation warning,
and final call to cmd_sanitize "$content" so that cmd_sanitize and the
surrounding function handle both empty and failed reads consistently.
In @.agents/scripts/worktree-helper.sh:
- Around line 718-720: Replace the sed extraction for main_worktree with the
same cut pattern used elsewhere for consistency: when assigning main_worktree
(the variable set from "git worktree list --porcelain"), pipe the output through
cut -d' ' -f2- instead of sed 's/^worktree //' so it matches the other
occurrences (lines computing main_worktree) and keeps the parsing approach
consistent across the script.
In @.agents/services/hosting/cloudron.md:
- Line 293: Expand the existing security note that references `docker inspect`
to also warn that running commands with embedded passwords or command
substitution can expose secrets in shell history and the host process list;
instruct readers to avoid inline passwords (use placeholders in examples), clear
or redact shell history (or use a secure shell session), and store credentials
in secure locations (env variables, secret managers, or mounted files) rather
than embedding them in commands so that both `docker inspect` output and
process/shell history leaks are prevented.
- Around line 289-290: Update the documentation to avoid exposing the password
via command substitution: replace the single-line example that uses
MYSQL_PWD="$(cat /home/yellowtent/platformdata/mysql/root_password)" with a
safer two-step pattern—briefly describe reading the password into a temporary
environment (or using a silent prompt), running docker exec mysql -uroot with
that temp credential, then immediately unsetting/clearing it; reference the
original symbols MYSQL_PWD, docker exec,
/home/yellowtent/platformdata/mysql/root_password and mysql -uroot so readers
can locate the exact example, and also add a short note explaining the
interactive trade-offs if they prefer the original one-liner.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 91816086-14e4-4316-ab4e-d63067c22ea1
📒 Files selected for processing (36)
.agents/AGENTS.md.agents/plugins/opencode-aidevops/tools.mjs.agents/scripts/circuit-breaker-helper.sh.agents/scripts/commands/patterns.md.agents/scripts/commands/route.md.agents/scripts/content-scanner-helper.sh.agents/scripts/credential-helper.sh.agents/scripts/document-creation-helper.sh.agents/scripts/email-thread-reconstruction.py.agents/scripts/memory/recall.sh.agents/scripts/pre-commit-hook.sh.agents/scripts/prompt-guard-helper.sh.agents/scripts/pulse-wrapper.sh.agents/scripts/response-scoring-helper.sh.agents/scripts/session-miner-pulse.sh.agents/scripts/settings-helper.sh.agents/scripts/skill-update-helper.sh.agents/scripts/skills-helper.sh.agents/scripts/task-complete-helper.sh.agents/scripts/task-decompose-helper.sh.agents/scripts/terminal-title-setup.sh.agents/scripts/test-ocr-extraction-pipeline.sh.agents/scripts/worktree-helper.sh.agents/services/communications/google-chat.md.agents/services/communications/msteams.md.agents/services/hosting/cloudron.md.agents/services/hosting/local-hosting.md.agents/workflows/session-manager.md.github/workflows/issue-sync.ymlconfigs/mcp-templates/opencode-gitlab-ci.ymlsetup-modules/plugins.shtests/entity-extraction-test-fixtures/date-formats.mdtests/test-audit-e2e.shtests/test-email-thread-reconstruction.shtests/test-memory-mail.shtests/test-smoke-help.sh
✅ Files skipped from review due to trivial changes (1)
- .agents/services/communications/google-chat.md
🚧 Files skipped from review as they are similar to previous changes (1)
- .agents/scripts/content-scanner-helper.sh
| # To pin a checksum: download the checksums.txt from the release page | ||
| # https://gitlab.com/gitlab-org/cli/-/releases and verify with sha256sum --check | ||
| - | | ||
| GLAB_VERSION="v1.89.0" | ||
| curl -sL "https://gitlab.com/gitlab-org/cli/-/releases/${GLAB_VERSION}/downloads/glab_${GLAB_VERSION#v}_linux_amd64.tar.gz" | tar xz | ||
| GLAB_ARCHIVE="glab_${GLAB_VERSION#v}_linux_amd64.tar.gz" | ||
| curl -fsLO "https://gitlab.com/gitlab-org/cli/-/releases/${GLAB_VERSION}/downloads/${GLAB_ARCHIVE}" | ||
| tar xzf "${GLAB_ARCHIVE}" |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
# First, let's examine the file and see the full context
cat -n configs/mcp-templates/opencode-gitlab-ci.yml | head -60Repository: marcusquinn/aidevops
Length of output: 2388
Enforce checksum verification before extracting the glab release artifact and remove hard-coded architecture.
The comments document checksum pinning practices but the script never validates them. Line 44 downloads an unverified binary that will be executed as root in the CI environment—this is a supply chain vulnerability even from official sources. Additionally, the hard-coded linux_amd64 will fail on arm64 GitLab runners.
Suggested approach
GLAB_VERSION="v1.89.0"
+ GLAB_SHA256="...pin official checksum..."
GLAB_ARCHIVE="glab_${GLAB_VERSION#v}_linux_amd64.tar.gz"
- curl -fsLO "https://gitlab.com/gitlab-org/cli/-/releases/${GLAB_VERSION}/downloads/${GLAB_ARCHIVE}"
+ curl -fsSLo "${GLAB_ARCHIVE}" "https://gitlab.com/gitlab-org/cli/-/releases/${GLAB_VERSION}/downloads/${GLAB_ARCHIVE}"
+ printf '%s %s\n' "${GLAB_SHA256}" "${GLAB_ARCHIVE}" | sha256sum --check --status
tar xzf "${GLAB_ARCHIVE}"Also detect the runner architecture instead of hard-coding amd64.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@configs/mcp-templates/opencode-gitlab-ci.yml` around lines 39 - 45, The
download step currently fetches and extracts an unverified artifact and
hard-codes linux_amd64; change the logic around GLAB_VERSION/GLAB_ARCHIVE to
detect runner architecture (map uname -m to amd64/arm64/etc.) instead of using
linux_amd64, then download the release checksums (checksums.txt) for that
GLAB_VERSION and verify the downloaded GLAB_ARCHIVE with sha256sum --check (or
comparable verification) before running tar xzf; ensure curl failures or
checksum mismatches cause a non-zero exit and do not proceed to tar.
|
Worker killed after 4h04m with 2 commits (struggle_ratio: 19996).
|
Use sentinel character pattern to prevent bash command substitution from stripping trailing newlines, which caused false 'changed' detection and unnecessary double-scans in content-scanner.
Wrap file reads in explicit error handling so permission/read failures degrade gracefully to fallback paths instead of aborting under set -e.
Under set -euo pipefail, jq failures exit before reaching the check, so $? is always 0. Only the file-size check matters.
The $(<VERSION) || echo pattern doesn't trigger the fallback because the subshell succeeds with empty output. Use cat with proper fallback.
Command substitution $() strips trailing newlines, causing the normalized vs original comparison to always differ when content has trailing newlines. This triggered unnecessary double-scans. Fix: append sentinel character 'x' inside python3/perl output, then strip it after capture. Exit code propagation preserved since the sentinel is part of the normalizer process itself.
|
@coderabbitai review All review comments from the latest review have been addressed:
|
|
🧠 Learnings used✅ Actions performedReview triggered.
|
marcusquinn's commit 222a550 reverted to the broken pattern: version=$(cat VERSION 2>/dev/null) || version="unknown" Under set -e, if cat fails the script exits before || runs. Correct pattern (matching latest_tag on line 66): version=$(cat VERSION 2>/dev/null || echo "unknown")
|
@coderabbitai review Commit 10 (5aafcca): Re-fixed session-manager.md version fallback — marcusquinn's commit 222a550 reverted our fix back to the broken All review comments from rounds 2 and 3 are now replied to with fix references. |
|
Triggering the incremental review now to cover commit 🧠 Learnings used✅ Actions performedReview triggered.
|
- memory-helper.sh: add MISSION_PATTERN/AGENT/SCRIPT to VALID_TYPES (prevents rejection of mission-skill-learner.sh store/recall calls) - recall.sh: --recent mode now applies --project and --max-age-days filters (previously silently ignored); refactor to share filter construction between --recent and regular query paths - store.sh: use exact-match loop for type validation (consistent with recall.sh, prevents regex metacharacter bypass) - pre-commit-hook.sh: remove pipe exclusion from positional parameter check (only runs on .sh files where $1|... is a real violation) - session-manager.md: fix grep -c fallback that produced '0\n0' on zero matches; use portable [[:space:]] class - opencode-gitlab-ci.yml: add arch detection for arm64 runners, add curl -f flag for HTTP error handling, add checksum verification guidance
|
Closing — this PR touches too many files (blast radius issue, see t1422 / GH#3955). Quality-debt PRs are now capped at 5 files max. The underlying fixes will be re-created as smaller, atomic PRs. |
|
@coderabbitai review All 8 outstanding review comments have been replied to — all were already addressed in earlier commits (9b62dd5–abd61ec, 0e1b29b). No new code changes needed. |
|
A couple of notes given the context:
🧠 Learnings used✅ Actions performedReview triggered.
|
Fixes lost when batch PRs (marcusquinn#3916, marcusquinn#3849) were closed for blast radius (t1422 cap at 5 files). Re-verified against current main. - accessibility/package.json: Playwright ^1.58.2 → ^1.44.0 (invalid version) - postflight.yml: check cancelled/timed_out/action_required conclusions - test-matterbridge-helper.sh: remove 2>/dev/null from cross-platform stat - test-audit-e2e.sh: add PRAGMA busy_timeout=5000 to both SQLite inits - test-email-thread-reconstruction.sh: fix '1 messages' → '1 message' Closes marcusquinn#3540 Closes marcusquinn#3327 Closes marcusquinn#3392 Closes marcusquinn#3708 Closes marcusquinn#3699
Fixes lost when batch PRs (marcusquinn#3916, marcusquinn#3843) were closed for blast radius. - settings-helper.sh: prevent jq injection via --arg instead of interpolation - pre-commit-hook.sh: deduplicate 5-stage grep pipeline (run once, reuse) - patterns.md: add required search query to recall invocations - tools.mjs: remove 2>/dev/null from memory recall/store (masks errors) - test-smoke-help.sh: remove || true that masked timeout exit code Closes marcusquinn#3225 Closes marcusquinn#3317 Closes marcusquinn#3400 Closes marcusquinn#3434 Closes marcusquinn#3728
…3975) * fix: resubmit 5 quality-debt fixes from closed batch PRs (batch 2) Fixes lost when batch PRs (#3916, #3843) were closed for blast radius. - settings-helper.sh: prevent jq injection via --arg instead of interpolation - pre-commit-hook.sh: deduplicate 5-stage grep pipeline (run once, reuse) - patterns.md: add required search query to recall invocations - tools.mjs: remove 2>/dev/null from memory recall/store (masks errors) - test-smoke-help.sh: remove || true that masked timeout exit code Closes #3225 Closes #3317 Closes #3400 Closes #3434 Closes #3728 * fix: prevent shell injection in memory tool commands Add shellEscape() helper that wraps values in single quotes with internal quote escaping. Apply to args.query, args.limit, content, and args.confidence before interpolating into shell command strings. Addresses Gemini security-high review finding on PR #3975. --------- Co-authored-by: AI DevOps <backup@customwater.com>
* fix: resubmit 5 quality-debt fixes from closed batch PRs Fixes lost when batch PRs (#3916, #3849) were closed for blast radius (t1422 cap at 5 files). Re-verified against current main. - accessibility/package.json: Playwright ^1.58.2 → ^1.44.0 (invalid version) - postflight.yml: check cancelled/timed_out/action_required conclusions - test-matterbridge-helper.sh: remove 2>/dev/null from cross-platform stat - test-audit-e2e.sh: add PRAGMA busy_timeout=5000 to both SQLite inits - test-email-thread-reconstruction.sh: fix '1 messages' → '1 message' Closes #3540 Closes #3327 Closes #3392 Closes #3708 Closes #3699 * fix: wrap all sqlite3 calls with busy_timeout in test-audit-e2e.sh Address CodeRabbit review: PRAGMA busy_timeout is connection-scoped, so only the two sessions that set it were protected. Add sqlite_with_timeout() wrapper that applies .timeout 5000 to every sqlite3 invocation. Also fix pre-existing unbound $pulse_script variable in test_checkpoint_8 that caused a crash under set -u. * fix: use uname-based stat flavor detection instead of fallback chain Avoids printing expected 'stat -c' error on macOS before the BSD fallback succeeds. Uses case/uname to select the correct stat flag per platform. Addresses CodeRabbit nitpick on PR #3974. * fix: guard stat calls in matterbridge test against set -e abort The stat invocations inside the case block at line 155-157 could fail and abort the entire test script under set -e. Add 2>/dev/null and || echo "unknown" fallback to each stat call so failures are captured as 'unknown' rather than causing a fatal exit. Addresses CodeRabbit CHANGES_REQUESTED on PR #3974. * fix: guard stat permission probe against set -e abort in matterbridge test Move the || fallback outside the command substitution so stat failures are caught at the case-branch level rather than inside $(), preventing set -e from aborting the test. Empty perms now routes through print_result with a clear diagnostic instead of silently falling back. Addresses CodeRabbit CHANGES_REQUESTED on PR #3974. * fix: pin playwright to exact version 1.44.1 to resolve Codacy variant-version warning --------- Co-authored-by: AI DevOps <backup@customwater.com> Co-authored-by: marcusquinn <6428977+marcusquinn@users.noreply.github.com>
|
Pulse recovery: dispatched a salvage worker for this high-risk closed-unmerged PR (branch deleted).
|
Salvage high-value security fixes without reverting newer framework behavior by restoring only the stable script hardening changes for content scanning, deploy safety, and settings updates.
|
Dispatching worker.\n- Model: default headless alternating (anthropic/claude-sonnet-4-6 or openai/gpt-5.3-codex)\n- Branch: bugfix/recover-pr-3916-salvage\n- Scope: address CHANGES_REQUESTED feedback on PR #4126 and move it to mergeable state\n- Attempt: 1 of 2\n- Direction: keep PR #4126 open, apply only requested fixes, run targeted verification before push |
Summary
Addresses 31 security and robustness findings from Gemini, CodeRabbit, and Augment reviews across 14 files.
Commit 1: content-scanner + deploy-agents (CRITICAL/HIGH)
--separators, stderrCommit 2: exit codes, osascript injection, scorer aggregation (CRITICAL/HIGH)
Commit 3: error suppression, mysql security, glab hardening, docs dedup (MEDIUM)
Commit 4: backlog critical fixes (CRITICAL/HIGH)
Already fixed (recommend closing)
Also recommend closing (already triaged with comments)
Verification
Summary by CodeRabbit
Bug Fixes
New Features
Improvements
Documentation