feat: add session time tracking to health dashboard (#3920)
Conversation
Add per-repo and cross-repo session time stats from the OpenCode SQLite DB. Sessions are classified as interactive (human) vs. workers/runners by title pattern. Duration is computed from message timestamps using `sqlite3 -json` mode to avoid pipe-separator issues in session titles.

- `session_time()` and `cross_repo_session_time()` in contributor-activity-helper.sh
- Integrated into pulse-wrapper.sh health issue body assembly
- Cross-repo session time pre-computed once in `update_health_issues()`
- Passed via `$4` parameter to `_update_health_issue_for_repo()`
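The title-pattern classification described above could look roughly like this. This is an illustrative sketch only: the actual patterns used by `session_time()` are not shown in this PR description, so the patterns and function name below are assumptions.

```shell
#!/usr/bin/env bash
# Illustrative sketch: classify a session title as worker vs interactive.
# The real patterns in contributor-activity-helper.sh may differ.
classify_session() {
  case "$1" in
    worker-*|runner-*|*"[worker]"*) echo "worker" ;;
    *) echo "interactive" ;;
  esac
}

classify_session "worker-pulse-health"
classify_session "Fix session time bug"
```

A session whose title matches one of the worker prefixes is counted toward worker hours; everything else is assumed to be a human-driven session.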
Warning: Rate limit exceeded

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered again. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source, and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

ℹ️ Review info — ⚙️ Run configuration: Path: .coderabbit.yaml | Review profile: CHILL | Plan: Pro | Run ID:

📒 Files selected for processing (2)
Walkthrough

This PR adds session-time analytics capabilities to track interactive vs. worker sessions within individual repositories and across multiple repositories. Two new Bash functions query SQLite session databases, classify sessions by type, compute durations, and output aggregated metrics as JSON or Markdown tables. The functionality is integrated into the CLI and wired into health issue updates with dedicated display sections.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    actor User
    participant CLI as CLI Handler
    participant SessionTime as session_time()
    participant SQLite as SQLite DB
    participant CrossRepo as cross_repo_session_time()
    participant Python as Python Aggregator
    participant Output as JSON/Markdown
    User->>CLI: cross-repo-session-time [paths] [options]
    CLI->>CrossRepo: Invoke with paths
    loop For each repo
        CrossRepo->>SessionTime: Call session_time(repo)
        SessionTime->>SQLite: Query sessions for repo
        SQLite-->>SessionTime: Raw session records
        SessionTime->>SessionTime: Classify by title pattern<br/>(interactive vs. worker)
        SessionTime->>SessionTime: Compute durations
        SessionTime-->>CrossRepo: Return JSON per-repo
    end
    CrossRepo->>Python: Aggregate all repo JSONs
    Python->>Python: Sum hours/sessions<br/>Deduplicate active days
    Python-->>CrossRepo: Aggregated data
    CrossRepo->>Output: Format as JSON or Markdown
    Output-->>User: Return formatted result
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested labels
Poem
🚥 Pre-merge checks: ✅ Passed checks (3 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report
[INFO] Latest Quality Status:
[INFO] Recent monitoring activity:

📈 Current Quality Metrics

Generated on: Sun Mar 8 19:47:11 UTC 2026
Generated by AI DevOps Framework Code Review Monitoring
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request enhances the health dashboard by integrating comprehensive session time tracking. It provides valuable insights into how time is spent across repositories, differentiating between human-driven interactive sessions and automated worker processes. This new feature allows for a clearer understanding of operational efficiency and resource allocation by presenting both individual repository and aggregated cross-repository session metrics directly within the health issues.

Highlights
Changelog
Code Review
This pull request introduces session time tracking to the health dashboard by querying the AI assistant's SQLite database. The changes add new functions to contributor-activity-helper.sh for calculating and aggregating session times, and integrate this data into pulse-wrapper.sh. My review focuses on improving the security, robustness, and performance of the new code, in line with established repository rules. I've identified a significant SQL injection vulnerability, fragile JSON construction logic, and several instances of error suppression that could mask underlying issues, and I provide suggestions for performance optimization and safer error handling.
Note: Security Review did not run due to the size of the PR.
```shell
query_result=$(sqlite3 -json "$db_path" "
    SELECT
        s.title,
        (max(m.time_created) - min(m.time_created)) as duration_ms
    FROM session s
    JOIN message m ON m.session_id = s.id
    WHERE s.parent_id IS NULL
        AND s.time_created > ${since_ms}
        AND (s.directory = '${abs_repo_path}'
            OR s.directory LIKE '${abs_repo_path}.%'
            OR s.directory LIKE '${abs_repo_path}-%')
    GROUP BY s.id
    HAVING count(m.id) >= 2
        AND duration_ms > 5000
" 2>/dev/null) || query_result="[]"
```
The SQLite query is constructed using string interpolation with `abs_repo_path` and `since_ms`. This is vulnerable to SQL injection if the path contains special characters such as a single quote. Values should be passed to the query safely, either via parameter binding (e.g. `.param set` in the sqlite3 shell) or by escaping single quotes before embedding.

Additionally, suppressing stderr with `2>/dev/null` hides important error messages from sqlite3, making debugging difficult. The `|| query_result="[]"` already provides a fallback for command failure, so the error suppression is unnecessary and can be harmful.
Here's a suggestion that escapes the path for safe SQL embedding and removes the stderr suppression:

```diff
-query_result=$(sqlite3 -json "$db_path" "
-    SELECT
-        s.title,
-        (max(m.time_created) - min(m.time_created)) as duration_ms
-    FROM session s
-    JOIN message m ON m.session_id = s.id
-    WHERE s.parent_id IS NULL
-        AND s.time_created > ${since_ms}
-        AND (s.directory = '${abs_repo_path}'
-            OR s.directory LIKE '${abs_repo_path}.%'
-            OR s.directory LIKE '${abs_repo_path}-%')
-    GROUP BY s.id
-    HAVING count(m.id) >= 2
-        AND duration_ms > 5000
-" 2>/dev/null) || query_result="[]"
+# Escape embedded single quotes so the path is a safe SQL literal
+local sq="'"
+local safe_repo_path="${abs_repo_path//${sq}/${sq}${sq}}"
+query_result=$(sqlite3 -json "$db_path" "
+    SELECT
+        s.title,
+        (max(m.time_created) - min(m.time_created)) as duration_ms
+    FROM session s
+    JOIN message m ON m.session_id = s.id
+    WHERE s.parent_id IS NULL
+        AND s.time_created > ${since_ms}
+        AND (s.directory = '${safe_repo_path}'
+            OR s.directory LIKE '${safe_repo_path}.%'
+            OR s.directory LIKE '${safe_repo_path}-%')
+    GROUP BY s.id
+    HAVING count(m.id) >= 2
+        AND duration_ms > 5000
+") || query_result="[]"
```
References

- To prevent SQL injection in shell scripts using `sqlite3`, create a helper function that uses `.param set` for safe parameterized bindings instead of direct string interpolation.
- Avoid using `2>/dev/null` for blanket suppression of command errors in shell scripts, so that authentication, syntax, or system issues remain visible for debugging.
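As a lighter-weight alternative to `.param set`, single quotes can be doubled before a value is embedded in a SQL string literal. A minimal sketch (the helper name is illustrative, not part of the PR):

```shell
#!/usr/bin/env bash
# Quote a value as a SQL string literal by doubling embedded single quotes.
sql_quote() {
  local q="'"
  printf "'%s'" "${1//$q/$q$q}"
}

sql_quote "/home/user/O'Brien/repo"
```

This covers injection via quotes but, unlike true parameter binding, still relies on the caller escaping every interpolated value.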
```shell
local all_json="["
local first="true"
local repo_count=0
for rp in "${repo_paths[@]}"; do
    local repo_json
    repo_json=$(session_time "$rp" --period "$period" --format json) || repo_json="{}"
    if [[ "$first" == "true" ]]; then
        first="false"
    else
        all_json="${all_json},"
    fi
    all_json="${all_json}${repo_json}"
    repo_count=$((repo_count + 1))
done
all_json="${all_json}]"
```
The `all_json` variable is constructed by string concatenation. This is fragile and can lead to invalid JSON if `session_time` returns a non-JSON string (like an error message) but exits with code 0. For example, if `session_time` returns `_Session database not found._`, the resulting string would be invalid JSON.

A more robust approach is to collect the JSON outputs and then use `jq` to assemble them into a valid JSON array. This ensures the final output is always well-formed.
```diff
-local all_json="["
-local first="true"
-local repo_count=0
-for rp in "${repo_paths[@]}"; do
-    local repo_json
-    repo_json=$(session_time "$rp" --period "$period" --format json) || repo_json="{}"
-    if [[ "$first" == "true" ]]; then
-        first="false"
-    else
-        all_json="${all_json},"
-    fi
-    all_json="${all_json}${repo_json}"
-    repo_count=$((repo_count + 1))
-done
-all_json="${all_json}]"
+local all_json=""
+local repo_count=0
+for rp in "${repo_paths[@]}"; do
+    repo_count=$((repo_count + 1))
+    local repo_json
+    repo_json=$(session_time "$rp" --period "$period" --format json) || repo_json="{}"
+    # Filter out non-JSON responses to avoid breaking the array
+    if jq -e . >/dev/null 2>&1 <<<"$repo_json"; then
+        all_json+="${repo_json}"$'\n'
+    fi
+done
+all_json=$(echo -n "$all_json" | jq -s '.')
```
References

- In shell scripts, use `jq --argjson` to safely pass JSON content from variables when merging JSON arrays, instead of direct string interpolation, to prevent parsing errors. The suggested approach of collecting JSON outputs and using `jq -s '.'` is a robust way to achieve this.
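To see why the `jq -s` assembly is robust, consider feeding it a mix of valid objects and a stray error string (the values below are illustrative):

```shell
#!/usr/bin/env bash
# Collect only the inputs jq can parse, then slurp them into one JSON array.
inputs=('{"repo":"a","hours":1.5}' '_Session database not found._' '{"repo":"b","hours":2}')
valid=""
for j in "${inputs[@]}"; do
  # jq -e exits non-zero on a parse error, so non-JSON lines are skipped
  if jq -e . >/dev/null 2>&1 <<<"$j"; then
    valid+="$j"$'\n'
  fi
done
printf '%s' "$valid" | jq -s 'length'
```

The invalid entry is silently dropped and the final array is always well-formed, which is exactly the failure mode the plain string concatenation could not handle.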
```shell
local since_ms
case "$period" in
    day)
        since_ms=$(python3 -c "import time; print(int((time.time() - 86400) * 1000))")
        ;;
    week)
        since_ms=$(python3 -c "import time; print(int((time.time() - 604800) * 1000))")
        ;;
    month)
        since_ms=$(python3 -c "import time; print(int((time.time() - 2592000) * 1000))")
        ;;
    year)
        since_ms=$(python3 -c "import time; print(int((time.time() - 31536000) * 1000))")
        ;;
    *)
        since_ms=$(python3 -c "import time; print(int((time.time() - 2592000) * 1000))")
        ;;
esac
```
The `case` statement makes multiple calls to `python3 -c` to calculate the `since_ms` timestamp. This can be optimized for performance and readability by computing the number of seconds in bash and making a single call to Python.
```diff
-local since_ms
-case "$period" in
-    day)
-        since_ms=$(python3 -c "import time; print(int((time.time() - 86400) * 1000))")
-        ;;
-    week)
-        since_ms=$(python3 -c "import time; print(int((time.time() - 604800) * 1000))")
-        ;;
-    month)
-        since_ms=$(python3 -c "import time; print(int((time.time() - 2592000) * 1000))")
-        ;;
-    year)
-        since_ms=$(python3 -c "import time; print(int((time.time() - 31536000) * 1000))")
-        ;;
-    *)
-        since_ms=$(python3 -c "import time; print(int((time.time() - 2592000) * 1000))")
-        ;;
-esac
+local since_ms
+local seconds
+case "$period" in
+    day) seconds=86400 ;;
+    week) seconds=604800 ;;
+    year) seconds=31536000 ;;
+    *) seconds=2592000 ;; # month or default
+esac
+since_ms=$(python3 -c "import time; print(int((time.time() - ${seconds}) * 1000))")
```
References
- In shell scripts, move the calculation of loop-invariant variables outside of loops to improve efficiency. This principle extends to pre-calculating values in bash to reduce the number of external calls, thereby improving overall script performance.
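Going one step further, the Python dependency can be dropped entirely: `date +%s` (supported by both GNU and BSD `date`) plus shell arithmetic yields the same millisecond cutoff. A sketch with a hypothetical function name:

```shell
#!/usr/bin/env bash
# Compute the since_ms cutoff without Python, using epoch seconds from date.
period_to_since_ms() {
  local seconds
  case "$1" in
    day)  seconds=86400 ;;
    week) seconds=604800 ;;
    year) seconds=31536000 ;;
    *)    seconds=2592000 ;;  # month or default
  esac
  echo $(( ($(date +%s) - seconds) * 1000 ))
}

period_to_since_ms week
```

This trades sub-second precision for zero external interpreter startups, which is fine here since the cutoff spans days.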
.agents/scripts/pulse-wrapper.sh (outdated)
```shell
local cross_repo_session_time_md=""
if [[ -x "$activity_helper" ]]; then
    local all_repo_paths_st
    all_repo_paths_st=$(jq -r '.initialized_repos[] | select(.pulse == true and (.local_only // false) == false) | .path' "$repos_json" 2>/dev/null || echo "")
```
Suppressing stderr with `2>/dev/null` is not recommended, as it can hide important error messages from jq, such as a malformed repos.json file, making debugging more difficult. Please remove the redirection so errors remain visible; the `|| echo ""` already provides a fallback for command failure.
```diff
-all_repo_paths_st=$(jq -r '.initialized_repos[] | select(.pulse == true and (.local_only // false) == false) | .path' "$repos_json" 2>/dev/null || echo "")
+all_repo_paths_st=$(jq -r '.initialized_repos[] | select(.pulse == true and (.local_only // false) == false) | .path' "$repos_json" || echo "")
```
References
- Avoid using '2>/dev/null' for blanket suppression of command errors in shell scripts to ensure that authentication, syntax, or system issues remain visible for debugging.
- In shell scripts with 'set -e' enabled, use '|| true' to prevent the script from exiting when a command like 'jq' fails on an optional lookup. Do not suppress stderr with '2>/dev/null' so that actual syntax or system errors remain visible for debugging.
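The interplay between `set -e` and `|| true` described above can be seen in a two-line example:

```shell
#!/usr/bin/env bash
set -e
# Under set -e, a failing command would normally abort the script.
# '|| true' neutralizes the exit status for an optional lookup,
# while any stderr output the command produced remains visible.
value=$(false) || true
echo "still running; value='${value}'"
```

Without the `|| true`, the script would exit at the assignment; with it, execution continues and stderr is still available for debugging, which is the behavior the guideline asks for.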
- Escape single quotes in repo paths for safe SQL embedding (prevents injection if a path contains apostrophes)
- Remove blanket `2>/dev/null` on sqlite3 and jq calls (errors now visible for debugging; fallback via `||` still handles failures)
- Optimize `since_ms` calculation: pre-compute seconds in bash, single Python call instead of one per case branch
- Use `jq -s` to assemble the cross-repo JSON array (robust against non-JSON responses from `session_time`)
Actionable comments posted: 4
🧹 Nitpick comments (1)
.agents/scripts/pulse-wrapper.sh (1)
2194-2210: Consider reusing `all_repo_paths` to eliminate a redundant jq query.

Lines 2198-2204 duplicate the repo path collection logic from lines 2181-2187. The `all_repo_paths_st` variable queries the same data as `all_repo_paths` above. Since both use identical jq filters and loop logic, reusing the existing variable would reduce redundancy.

♻️ Proposed refactor to reuse existing repo paths
```diff
 # Pre-compute cross-repo session time ONCE for all health issues.
-# Uses the same repo paths already collected above.
+# Reuses the same repo paths already collected above.
 local cross_repo_session_time_md=""
 if [[ -x "$activity_helper" ]]; then
-    local all_repo_paths_st
-    all_repo_paths_st=$(jq -r '.initialized_repos[] | select(.pulse == true and (.local_only // false) == false) | .path' "$repos_json" 2>/dev/null || echo "")
-    if [[ -n "$all_repo_paths_st" ]]; then
-        local -a st_args=()
-        while IFS= read -r rp; do
-            [[ -n "$rp" ]] && st_args+=("$rp")
-        done <<<"$all_repo_paths_st"
-        if [[ ${#st_args[@]} -gt 1 ]]; then
-            cross_repo_session_time_md=$(bash "$activity_helper" cross-repo-session-time "${st_args[@]}" --period month --format markdown || echo "_Cross-repo session data unavailable._")
-        fi
+    # Reuse cross_args array from above (same repo paths)
+    if [[ ${#cross_args[@]} -gt 1 ]]; then
+        cross_repo_session_time_md=$(bash "$activity_helper" cross-repo-session-time "${cross_args[@]}" --period month --format markdown || echo "_Cross-repo session data unavailable._")
     fi
 fi
```

This requires moving the `cross_args` array declaration outside the inner `if` block (around line 2184) so it remains in scope.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/scripts/pulse-wrapper.sh around lines 2194 - 2210, The block recomputes repo paths into all_repo_paths_st; instead reuse the already-populated all_repo_paths by moving the cross-repo args array declaration (st_args / cross_args) to an outer scope where all_repo_paths is visible, populate it from all_repo_paths instead of running the duplicate jq query, and only call bash "$activity_helper" cross-repo-session-time when the reused args array length is >1; update references to all_repo_paths (and remove all_repo_paths_st/jq duplication) and keep the existing fallback markdown string unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.agents/scripts/contributor-activity-helper.sh:
- Around line 526-545: The SQL query built for query_result interpolates
abs_repo_path directly, which breaks on single quotes and lets %/_ act as LIKE
wildcards; update the session query in contributor-activity-helper.sh to use
sqlite parameter binding (prepare/execute with ? placeholders against
"$db_path") or, if binding is not feasible, sanitize abs_repo_path before
interpolation by escaping single quotes and escaping/quoting % and _ (e.g.,
replace ' with '' and prefix % and _ with an escape char and add ESCAPE clause)
so the WHERE clauses comparing s.directory and the LIKE patterns use a safe,
literal path; adjust the variables used in the sqlite3 invocation (db_path,
since_ms, abs_repo_path) accordingly.
- Around line 488-503: The early-exit branches that currently echo "[]" for JSON
success (when session DB missing or sqlite3 not found) should instead emit a
stable zero-object (e.g., "{}" or a named empty object) to represent "no data"
and leave exit code 0, while actual failures must print an error to stderr and
exit with a non-zero code; update the blocks around the checks for the session
DB and the sqlite3 availability (the branches that check "$format" and echo
"[]") to output a stable empty JSON object on success and to call >&2 with a
clear error message and return/exit non-zero on real failures so
cross_repo_session_time() and downstream .get(...) can distinguish no-data from
failure—apply the same fix to the other occurrences referenced (around lines
535-549 and 667).
- Around line 538-542: The SQL WHERE clause is filtering on s.time_created but
the duration uses message timestamps (max(m.time_created) -
min(m.time_created)), so change the filter to use message timestamps instead:
update the WHERE condition to use m.time_created > ${since_ms} (keeping the join
and s.parent_id IS NULL check) so sessions with messages in the reporting window
are included; locate this in the query that computes duration_ms using
max(m.time_created) and min(m.time_created).
- Around line 665-675: The loop over repo_paths increments repo_count even when
session_time fails for an invalid repo, causing incorrect "Across N managed
repos" claims; update the loop in contributor-activity-helper.sh to validate
each $rp by checking the exit of session_time (repo_json) and only append to
all_json and increment repo_count when session_time succeeds, otherwise log a
warning and continue (referencing repo_paths, session_time, repo_json, all_json,
and repo_count), mirroring cross_repo_summary's validation behavior so invalid
repos are skipped and not counted.
---
Nitpick comments:
In @.agents/scripts/pulse-wrapper.sh:
- Around line 2194-2210: The block recomputes repo paths into all_repo_paths_st;
instead reuse the already-populated all_repo_paths by moving the cross-repo args
array declaration (st_args / cross_args) to an outer scope where all_repo_paths
is visible, populate it from all_repo_paths instead of running the duplicate jq
query, and only call bash "$activity_helper" cross-repo-session-time when the
reused args array length is >1; update references to all_repo_paths (and remove
all_repo_paths_st/jq duplication) and keep the existing fallback markdown string
unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: af3cadc4-231d-467e-bc70-d20c2970b96f
📒 Files selected for processing (2)
- .agents/scripts/contributor-activity-helper.sh
- .agents/scripts/pulse-wrapper.sh
```shell
    if [[ "$format" == "json" ]]; then
        echo "[]"
    else
        echo "_Session database not found._"
    fi
    return 0
    fi
fi
```

```shell
if ! command -v sqlite3 &>/dev/null; then
    if [[ "$format" == "json" ]]; then
        echo "[]"
    else
        echo "_sqlite3 not available._"
    fi
    return 0
```
Differentiate empty data from failed collection.
In JSON mode the early exits emit `[]`, the sqlite query path also collapses any failure to `[]`, and `cross_repo_session_time()` then masks non-zero exits with `{}`. That makes "DB unavailable", "query failed", and "no sessions" indistinguishable, and the `[]` shape can still crash the downstream `.get(...)` calls. Keep successful no-data responses as a stable zero-object, and propagate real sqlite failures with a non-zero exit. As per coding guidelines, automation scripts should focus on reliability and robustness, clear logging and feedback, proper exit codes, and error recovery mechanisms.

Also applies to: lines 535-549, 667.
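The suggested contract can be sketched as follows; the function and JSON field names are illustrative, not the PR's actual names. "No data" yields a stable zero-object with exit 0, while real failures write to stderr and return non-zero:

```shell
#!/usr/bin/env bash
# Illustrative contract: distinguish "no data" (exit 0, stable zero-object)
# from real failures (stderr message, non-zero exit).
session_time_or_empty() {
  local db="$1"
  if [[ ! -f "$db" ]]; then
    # No data is a successful outcome, not an error
    echo '{"interactive_hours":0,"worker_hours":0,"sessions":0}'
    return 0
  fi
  if ! command -v sqlite3 >/dev/null 2>&1; then
    # A missing tool is a real failure the caller must be able to see
    echo "error: sqlite3 not installed" >&2
    return 2
  fi
  # ... real query would go here ...
  echo '{"interactive_hours":0,"worker_hours":0,"sessions":0}'
}

session_time_or_empty /no/such/db
```

Callers can then branch on the exit code while downstream `.get(...)`-style lookups always see an object with the expected keys.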
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.agents/scripts/contributor-activity-helper.sh around lines 488 - 503, The
early-exit branches that currently echo "[]" for JSON success (when session DB
missing or sqlite3 not found) should instead emit a stable zero-object (e.g.,
"{}" or a named empty object) to represent "no data" and leave exit code 0,
while actual failures must print an error to stderr and exit with a non-zero
code; update the blocks around the checks for the session DB and the sqlite3
availability (the branches that check "$format" and echo "[]") to output a
stable empty JSON object on success and to call >&2 with a clear error message
and return/exit non-zero on real failures so cross_repo_session_time() and
downstream .get(...) can distinguish no-data from failure—apply the same fix to
the other occurrences referenced (around lines 535-549 and 667).
```shell
for rp in "${repo_paths[@]}"; do
    local repo_json
    repo_json=$(session_time "$rp" --period "$period" --format json) || repo_json="{}"
    if [[ "$first" == "true" ]]; then
        first="false"
    else
        all_json="${all_json},"
    fi
    all_json="${all_json}${repo_json}"
    repo_count=$((repo_count + 1))
done
```
Skip invalid repos instead of counting them.
Unlike `cross_repo_summary()`, this loop never validates `$rp`. A typo still increments `repo_count`, so the markdown can claim "Across N managed repos" while silently dropping one repo's data. As per coding guidelines, automation scripts should focus on reliability and robustness, and clear logging and feedback.
💡 Suggested change

```diff
 for rp in "${repo_paths[@]}"; do
+    if [[ ! -d "$rp/.git" && ! -f "$rp/.git" ]]; then
+        echo "Warning: $rp is not a git repository, skipping" >&2
+        continue
+    fi
     local repo_json
     repo_json=$(session_time "$rp" --period "$period" --format json) || repo_json="{}"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```shell
for rp in "${repo_paths[@]}"; do
    if [[ ! -d "$rp/.git" && ! -f "$rp/.git" ]]; then
        echo "Warning: $rp is not a git repository, skipping" >&2
        continue
    fi
    local repo_json
    repo_json=$(session_time "$rp" --period "$period" --format json) || repo_json="{}"
    if [[ "$first" == "true" ]]; then
        first="false"
    else
        all_json="${all_json},"
    fi
    all_json="${all_json}${repo_json}"
    repo_count=$((repo_count + 1))
done
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.agents/scripts/contributor-activity-helper.sh around lines 665 - 675, The
loop over repo_paths increments repo_count even when session_time fails for an
invalid repo, causing incorrect "Across N managed repos" claims; update the loop
in contributor-activity-helper.sh to validate each $rp by checking the exit of
session_time (repo_json) and only append to all_json and increment repo_count
when session_time succeeds, otherwise log a warning and continue (referencing
repo_paths, session_time, repo_json, all_json, and repo_count), mirroring
cross_repo_summary's validation behavior so invalid repos are skipped and not
counted.
- Return stable empty JSON object (not `[]`) for no-data cases so downstream `.get()` calls work correctly
- Escape `%` and `_` in LIKE patterns with an ESCAPE clause to prevent wildcard matching on paths containing these characters
- Filter on `m.time_created` instead of `s.time_created` so sessions with recent messages are included even if started before the cutoff
- Skip invalid repo paths in `cross_repo_session_time` to avoid inflating the repo count
- Reuse the `cross_args` array in pulse-wrapper.sh instead of duplicating the jq query for session time pre-computation
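The LIKE-wildcard escaping mentioned above might look like the sketch below. The helper name is illustrative, and `!` is used as the escape character purely to sidestep backslash quoting in bash; the actual commit may use a different character:

```shell
#!/usr/bin/env bash
# Escape LIKE wildcards so a literal path can be used in a pattern,
# to be paired with an ESCAPE '!' clause in the SQL, e.g.:
#   s.directory LIKE '<escaped>.%' ESCAPE '!'
like_escape() {
  local s="$1"
  s=${s//'!'/'!!'}   # escape the escape character itself first
  s=${s//'%'/'!%'}
  s=${s//'_'/'!_'}
  printf '%s' "$s"
}

like_escape "/repos/my_project-100%"
```

Without this, a repo directory containing `_` or `%` would silently match unrelated paths in the `s.directory LIKE '${path}.%'` clauses.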
All review findings addressed in subsequent commits



Summary
- Pre-computed in `update_health_issues()` and passed via `$4` to avoid redundant DB queries

Changes
contributor-activity-helper.sh (+318 lines)

- `session_time()` — queries OpenCode SQLite DB for sessions matching a repo path, classifies by title pattern (worker vs interactive), computes hours from message timestamps
- `cross_repo_session_time()` — aggregates session time across all repos
- Uses `sqlite3 -json` mode to avoid pipe-separator issues (session titles contain `|`)
- Excludes subagent sessions (`parent_id IS NOT NULL`) and sessions < 5 seconds
- Worktree-aware path matching (`~/Git/aidevops.feature-foo` matches `~/Git/aidevops`)

pulse-wrapper.sh (+33 lines)

- `_update_health_issue_for_repo()` accepts `$4` for cross-repo session time markdown
- Adds `### Session Time (last 30 days)` and `### Cross-Repo Session Time (last 30 days)` sections
- `update_health_issues()` pre-computes cross-repo session time once for all repos
Testing

- `session-time` command: 1,434 interactive sessions (907.1h) + 1,681 worker sessions (88.5h) for aidevops
- `cross-repo-session-time` command: 1,717 interactive (1,433.7h) + 2,132 workers (145.0h) across 8 repos

Summary by CodeRabbit