t3657: address PR #1688 review feedback — path resolution, sqlite busy_timeout, shared token extraction #4706
Conversation
t3657: address PR #1688 review feedback — path resolution, sqlite busy_timeout, shared token extraction
- Fix pattern-tracker-helper.sh: add .timeout 5000 to sqlite3 INSERT in cmd_record
to prevent SQLITE_BUSY failures under concurrent access (CodeRabbit finding)
- Fix evaluate.sh: use parent-relative path ${SCRIPT_DIR}/../pattern-tracker-helper.sh
as primary lookup before falling back to $HOME path — SCRIPT_DIR in supervisor
context points to supervisor-archived/ subdir, not the scripts root (Gemini finding)
- Fix memory-integration.sh: same parent-relative path fix as evaluate.sh
- Add extract_tokens_from_log() to supervisor-archived/_common.sh to eliminate
duplicated camelCase/snake_case token extraction logic across evaluate.sh and
memory-integration.sh; uses module-level globals for bash 3.2 compatibility
- Note: awk variable rename (or→outr) was already applied in the original PR
Closes #3657
Summary of Changes (Gemini Code Assist)

This pull request addresses previously unactioned review feedback from a prior feature (t1114 ROI cost tracking). The changes focus on improving the robustness of SQLite operations, ensuring correct script path resolution across execution environments, and centralizing duplicated token-extraction logic to improve maintainability and reduce redundancy across scripts.
Code Review
The pull request effectively addresses feedback from a prior review by fixing script path resolution, improving SQLite write robustness with a busy timeout, and refactoring duplicated code into a shared function. These are all positive changes. However, my review identified a critical SQL injection vulnerability in a modified line within pattern-tracker-helper.sh. The suggested fix emphasizes parameterized queries for sqlite3 as the robust protection, while providing an immediate escaping solution. I've also suggested a more comprehensive performance improvement for the newly created extract_tokens_from_log function, consolidating commands and using awk to reduce I/O and parsing overhead.
pattern-tracker-helper.sh, cmd_record:

```diff
 [[ -n "$estimated_cost" ]] && sql_estimated_cost="$estimated_cost"

-sqlite3 "$MEMORY_DB" "INSERT OR REPLACE INTO pattern_metadata (id, strategy, quality, failure_mode, tokens_in, tokens_out, estimated_cost) VALUES ('$mem_id', '$sql_strategy', $sql_quality, $sql_failure_mode, $sql_tokens_in, $sql_tokens_out, $sql_estimated_cost);" 2>/dev/null || log_warn "Failed to store pattern metadata for $mem_id"
+sqlite3 -cmd ".timeout 5000" "$MEMORY_DB" "INSERT OR REPLACE INTO pattern_metadata (id, strategy, quality, failure_mode, tokens_in, tokens_out, estimated_cost) VALUES ('$mem_id', '$sql_strategy', $sql_quality, $sql_failure_mode, $sql_tokens_in, $sql_tokens_out, $sql_estimated_cost);" 2>/dev/null || log_warn "Failed to store pattern metadata for $mem_id"
```
This line is vulnerable to SQL injection. The variables `$mem_id` and `$sql_strategy` are embedded directly into the SQL query string. If either value contains a single quote, an attacker could break out of the string literal and execute arbitrary SQL, which is a critical security risk.

For `sqlite3` in shell scripts, the most robust way to prevent SQL injection is to use parameterized queries (`.param set` inside a helper function), which separates the SQL command from the data and eliminates injection risk. As an immediate fix for this line, escape single quotes in these variables before they are interpolated: SQL escapes a single quote by doubling it (`''`), which you can do in bash with a parameter expansion that replaces each `'` with `''`.
Suggested change (with the quote-escaping written out):

```diff
-sqlite3 -cmd ".timeout 5000" "$MEMORY_DB" "INSERT OR REPLACE INTO pattern_metadata (id, strategy, quality, failure_mode, tokens_in, tokens_out, estimated_cost) VALUES ('$mem_id', '$sql_strategy', $sql_quality, $sql_failure_mode, $sql_tokens_in, $sql_tokens_out, $sql_estimated_cost);" 2>/dev/null || log_warn "Failed to store pattern metadata for $mem_id"
+local q="'"
+sqlite3 -cmd ".timeout 5000" "$MEMORY_DB" "INSERT OR REPLACE INTO pattern_metadata (id, strategy, quality, failure_mode, tokens_in, tokens_out, estimated_cost) VALUES ('${mem_id//$q/$q$q}', '${sql_strategy//$q/$q$q}', $sql_quality, $sql_failure_mode, $sql_tokens_in, $sql_tokens_out, $sql_estimated_cost);" 2>/dev/null || log_warn "Failed to store pattern metadata for $mem_id"
```
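If the escaping is needed in more than one place, a small helper keeps each call site readable. A minimal sketch under assumed names (`sql_quote` and `memory_db_exec` are illustrative, not from the PR; the column list is abbreviated):

```bash
sql_quote() {
    # Double embedded single quotes ('' is SQL's escape for ') and wrap the
    # result in quotes so it can be spliced into a statement safely.
    local s="$1" q="'"
    printf "'%s'" "${s//$q/$q$q}"
}

memory_db_exec() {
    # One place to apply the busy timeout to every write.
    sqlite3 -cmd ".timeout 5000" "$MEMORY_DB" "$1"
}

memory_db_exec "INSERT OR REPLACE INTO pattern_metadata (id, strategy) VALUES ($(sql_quote "$mem_id"), $(sql_quote "$sql_strategy"));" 2>/dev/null \
    || log_warn "Failed to store pattern metadata for $mem_id"
```

Escaping covers the immediate risk; `.param set` bindings remain the stronger option because the data never appears inside the SQL text at all.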
References

- Employ a defense-in-depth strategy for handling user-provided input. Sanitize input at the entry point using a strict allowlist, and also apply context-specific escaping or safe handling mechanisms (e.g., parameterized queries for SQL, `--arg` for `jq`) at each point of use.
- To prevent SQL injection in shell scripts using `sqlite3`, create a helper function that uses `.param set` for safe parameterized bindings instead of direct string interpolation.
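For the `jq` point above, a one-line illustration (the file name and filter are hypothetical):

```bash
# Bind untrusted input as a jq variable instead of splicing it into the filter text.
jq --arg id "$mem_id" '.patterns[] | select(.id == $id)' patterns.json
```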
supervisor-archived/_common.sh, extract_tokens_from_log():

```bash
local raw_in raw_out
raw_in=$(grep -oE '"inputTokens":[0-9]+' "$log_file" 2>/dev/null | tail -1 | grep -oE '[0-9]+' || true)
raw_out=$(grep -oE '"outputTokens":[0-9]+' "$log_file" 2>/dev/null | tail -1 | grep -oE '[0-9]+' || true)
if [[ -z "$raw_in" ]]; then
    raw_in=$(grep -oE '"input_tokens":[0-9]+' "$log_file" 2>/dev/null | tail -1 | grep -oE '[0-9]+' || true)
fi
if [[ -z "$raw_out" ]]; then
    raw_out=$(grep -oE '"output_tokens":[0-9]+' "$log_file" 2>/dev/null | tail -1 | grep -oE '[0-9]+' || true)
fi
```
The current implementation for extracting token counts can be inefficient: it may read the log file from disk up to four separate times. For large log files this can cause noticeable slowdown.

You can improve efficiency by reading the file only once to gather all potential matches. Instead of repeated grep pipelines, a single sed command can pull out every token field, and a short awk program (or a `while IFS=... read` loop) can then keep the last input/output values seen.
Suggested change:

```diff
-local raw_in raw_out
-raw_in=$(grep -oE '"inputTokens":[0-9]+' "$log_file" 2>/dev/null | tail -1 | grep -oE '[0-9]+' || true)
-raw_out=$(grep -oE '"outputTokens":[0-9]+' "$log_file" 2>/dev/null | tail -1 | grep -oE '[0-9]+' || true)
-if [[ -z "$raw_in" ]]; then
-    raw_in=$(grep -oE '"input_tokens":[0-9]+' "$log_file" 2>/dev/null | tail -1 | grep -oE '[0-9]+' || true)
-fi
-if [[ -z "$raw_out" ]]; then
-    raw_out=$(grep -oE '"output_tokens":[0-9]+' "$log_file" 2>/dev/null | tail -1 | grep -oE '[0-9]+' || true)
-fi
+local matches raw_in raw_out
+matches=$(sed -nE 's/.*"(inputTokens|input_tokens|outputTokens|output_tokens)":([0-9]+).*/\1:\2/p' "$log_file" 2>/dev/null || true)
+if [[ -n "$matches" ]]; then
+    read -r raw_in raw_out <<< "$(echo "$matches" | awk -F: '
+        /^(inputTokens|input_tokens):/  { last_in=$2 }
+        /^(outputTokens|output_tokens):/ { last_out=$2 }
+        END { print last_in, last_out }
+    ')"
+fi
```
References

- Optimize shell script pipelines by replacing `grep | sed` combinations with a single, more efficient `sed` command where possible.
- When parsing multiple key-value pairs from a single source, use a single `while IFS='=' read -r key value` loop instead of repeated `grep | cut` pipelines to improve efficiency and readability.
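For reference, the `while ... read` variant described in the second bullet could process the same `key:value` lines the `sed` command emits; a sketch, not part of the suggestion above:

```bash
# Last occurrence wins, matching the tail -1 behaviour of the original code.
while IFS=':' read -r key value; do
    case "$key" in
        inputTokens|input_tokens)   raw_in="$value" ;;
        outputTokens|output_tokens) raw_out="$value" ;;
    esac
done <<< "$matches"
```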



Summary
Addresses unactioned review feedback from PR #1688 (t1114 ROI cost tracking feature). All findings verified against current code before applying.
Fixes applied
CodeRabbit — `pattern-tracker-helper.sh` L353-368

Added `.timeout 5000` to the raw `sqlite3` INSERT in `cmd_record` to prevent `SQLITE_BUSY` failures under concurrent access. Other `sqlite3` calls in this file use the same pattern; this brings the new INSERT into line.
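For context, the same timeout can also be set inside the SQL batch itself via PRAGMA; a minimal illustration with placeholder values, not taken from the PR:

```bash
# PRAGMA busy_timeout is the SQL-level equivalent of the .timeout dot-command.
sqlite3 "$MEMORY_DB" "PRAGMA busy_timeout = 5000; INSERT OR REPLACE INTO pattern_metadata (id, strategy) VALUES ('demo-id', 'single-shot');"
```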
Gemini — `evaluate.sh` L793-796 and `memory-integration.sh` L254-258

Both used `pattern_helper="${SCRIPT_DIR}/pattern-tracker-helper.sh"` as the primary lookup. In the supervisor context, `SCRIPT_DIR` resolves to `.agents/scripts/supervisor-archived/`, so that path never exists. Fixed by trying `${SCRIPT_DIR}/../pattern-tracker-helper.sh` first (sibling directory), then the original `${SCRIPT_DIR}/` path, then the `$HOME` fallback; a sketch of the lookup order follows.
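A minimal sketch of that lookup order (variable names follow the description above):

```bash
pattern_helper=""
# Try the sibling directory first, then SCRIPT_DIR itself, then a home-relative
# fallback (the exact $HOME path below is a placeholder, not from the PR).
for candidate in \
    "${SCRIPT_DIR}/../pattern-tracker-helper.sh" \
    "${SCRIPT_DIR}/pattern-tracker-helper.sh" \
    "${HOME}/.agents/scripts/pattern-tracker-helper.sh"; do
    if [[ -f "$candidate" ]]; then
        pattern_helper="$candidate"
        break
    fi
done
```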
CodeRabbit — `memory-integration.sh` L228-243 (duplicated token extraction)

The camelCase/snake_case token extraction logic was duplicated in `evaluate.sh` and `memory-integration.sh`. Extracted into `extract_tokens_from_log()` in `supervisor-archived/_common.sh` (which is sourced before both modules). Uses module-level globals `_EXTRACT_TOKENS_IN`/`_EXTRACT_TOKENS_OUT` for bash 3.2 compatibility (macOS ships bash 3.2; `local -n` namerefs require bash 4.3+). Verified with unit tests covering empty path, non-existent file, camelCase, and snake_case formats. A sketch of the helper's shape appears below.
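A minimal sketch of the helper's shape using the global names above (illustrative only; the real implementation lives in `supervisor-archived/_common.sh`):

```bash
_EXTRACT_TOKENS_IN=""
_EXTRACT_TOKENS_OUT=""

extract_tokens_from_log() {
    # Results land in module-level globals because bash 3.2 has no namerefs.
    local log_file="$1"
    _EXTRACT_TOKENS_IN=""
    _EXTRACT_TOKENS_OUT=""
    [[ -n "$log_file" && -f "$log_file" ]] || return 0

    # Accept both camelCase and snake_case keys; the last occurrence wins.
    _EXTRACT_TOKENS_IN=$(grep -oE '"(inputTokens|input_tokens)":[0-9]+' "$log_file" 2>/dev/null \
        | tail -1 | grep -oE '[0-9]+' || true)
    _EXTRACT_TOKENS_OUT=$(grep -oE '"(outputTokens|output_tokens)":[0-9]+' "$log_file" 2>/dev/null \
        | tail -1 | grep -oE '[0-9]+' || true)
}
```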
Note: the awk variable rename (`or` → `outr`) flagged by CodeRabbit was already applied in the original PR commit — confirmed at `pattern-tracker-helper.sh:69`.

Closes #3657