
feat: add session time tracking to health dashboard#3920

Merged
marcusquinn merged 3 commits into main from feature/session-time-stats
Mar 8, 2026

Conversation

@marcusquinn marcusquinn commented Mar 8, 2026

Summary

  • Adds session time tracking to the health dashboard, showing interactive (human) vs worker/runner hours from the OpenCode SQLite database
  • Per-repo session time table in each repo's health issue, plus cross-repo totals aggregated across all 8 managed repos
  • Cross-repo session time pre-computed once in update_health_issues() and passed via $4 to avoid redundant DB queries

Changes

contributor-activity-helper.sh (+318 lines)

  • session_time() — queries OpenCode SQLite DB for sessions matching a repo path, classifies by title pattern (worker vs interactive), computes hours from message timestamps
  • cross_repo_session_time() — aggregates session time across all repos
  • Uses sqlite3 -json mode to avoid pipe-separator issues (session titles contain |)
  • Excludes subagent sessions (parent_id IS NOT NULL) and sessions < 5 seconds
  • Worktree directories matched by prefix (e.g., ~/Git/aidevops.feature-foo matches ~/Git/aidevops)
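The worker-vs-interactive split above hinges on a title-pattern check. The exact patterns live in `session_time()`, so the following is only an illustrative sketch with hypothetical prefixes:

```shell
#!/usr/bin/env bash
# Hypothetical classifier: the real title patterns live in session_time().
classify_session() {
  case "$1" in
    worker:* | runner:*) echo "worker" ;;
    *) echo "interactive" ;;
  esac
}

classify_session "worker: nightly lint run"   # worker
classify_session "explore dashboard bug"      # interactive
```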

pulse-wrapper.sh (+33 lines)

  • _update_health_issue_for_repo() accepts $4 for cross-repo session time markdown
  • Body assembly includes ### Session Time (last 30 days) and ### Cross-Repo Session Time (last 30 days) sections
  • update_health_issues() pre-computes cross-repo session time once for all repos
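The precompute-once flow described above can be sketched in miniature; the function bodies here are stand-ins, not the real pulse-wrapper code:

```shell
#!/usr/bin/env bash
# Stand-in for the expensive cross-repo aggregation.
compute_cross_repo_md() { echo "### Cross-Repo Session Time (last 30 days)"; }

# $4 carries the precomputed markdown, mirroring the real signature.
_update_health_issue_for_repo() {
  local repo="$1" cross_md="$4"
  printf '%s <- %s\n' "$repo" "$cross_md"
}

update_health_issues() {
  local cross_md
  cross_md=$(compute_cross_repo_md)  # computed once, not once per repo
  local repo
  for repo in "$@"; do
    _update_health_issue_for_repo "$repo" "" "" "$cross_md"
  done
}

update_health_issues repo-a repo-b
```

Hoisting the aggregation out of the per-repo loop is what saves the redundant DB queries mentioned in the summary.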

Testing

  • session-time command: 1,434 interactive sessions (907.1h) + 1,681 worker sessions (88.5h) for aidevops
  • cross-repo-session-time command: 1,717 interactive (1,433.7h) + 2,132 workers (145.0h) across 8 repos
  • ShellCheck clean on both files

Summary by CodeRabbit

  • New Features
    • Added session-time analytics to measure interactive and worker sessions per repository with JSON and Markdown output options.
    • Added cross-repository session-time aggregation to track session metrics across multiple repositories.
    • Integrated new session-time data into health issue reports with dedicated sections for per-repo and cross-repo analytics.

Add per-repo and cross-repo session time stats from OpenCode SQLite DB.
Sessions classified as interactive (human) vs workers/runners by title
pattern. Duration computed from message timestamps using sqlite3 -json
mode to avoid pipe-separator issues in session titles.

- session_time() and cross_repo_session_time() in contributor-activity-helper.sh
- Integrated into pulse-wrapper.sh health issue body assembly
- Cross-repo session time pre-computed once in update_health_issues()
- Passed via $4 parameter to _update_health_issue_for_repo()
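The `-json` rationale is easy to demonstrate: sqlite3's default list mode separates columns with `|`, which is ambiguous when a title itself contains `|`, while `-json` (sqlite3 3.33+) keeps fields unambiguous:

```shell
#!/usr/bin/env bash
# Default list mode: the column separator collides with '|' in the data.
sqlite3 :memory: "SELECT 'fix | deploy' AS title, 42 AS duration;"
# prints: fix | deploy|42  (separator vs. content is ambiguous)

# JSON mode: fields stay unambiguous and machine-parseable.
sqlite3 -json :memory: "SELECT 'fix | deploy' AS title, 42 AS duration;"
```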
@github-actions github-actions bot added the enhancement Auto-created from TODO.md tag label Mar 8, 2026
coderabbitai bot commented Mar 8, 2026

Warning

Rate limit exceeded

@marcusquinn has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 8 minutes and 52 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: d1a09f14-bac4-4ba6-834e-d53df95e2ad5

📥 Commits

Reviewing files that changed from the base of the PR and between edce2dd and 0625a33.

📒 Files selected for processing (2)
  • .agents/scripts/contributor-activity-helper.sh
  • .agents/scripts/pulse-wrapper.sh

Walkthrough

This PR adds session-time analytics capabilities to track interactive vs. worker sessions within individual repositories and across multiple repositories. Two new Bash functions query SQLite session databases, classify sessions by type, compute durations, and output aggregated metrics as JSON or Markdown tables. The functionality is integrated into the CLI and wired into health issue updates with dedicated display sections.

Changes

  • Session Time Analytics (.agents/scripts/contributor-activity-helper.sh): Added session_time() function to query the SQLite DB, classify sessions by title patterns (interactive vs. worker), compute durations, and output formatted results. Added cross_repo_session_time() to aggregate per-repo session data via Python with deduplication. Integrated both commands into the main CLI dispatcher, with help text updates describing session types and analytics purpose.
  • Health Issue Integration (.agents/scripts/pulse-wrapper.sh): Updated _update_health_issue_for_repo() signature to accept a cross_repo_session_time_md parameter. Modified update_health_issues() to precompute cross-repo session-time markdown and pass it to the function. Added "Session Time (last 30 days)" and "Cross-Repo Session Time (last 30 days)" sections to health issue body assembly.

Sequence Diagram

sequenceDiagram
    actor User
    participant CLI as CLI Handler
    participant SessionTime as session_time()
    participant SQLite as SQLite DB
    participant CrossRepo as cross_repo_session_time()
    participant Python as Python Aggregator
    participant Output as JSON/Markdown

    User->>CLI: cross-repo-session-time [paths] [options]
    CLI->>CrossRepo: Invoke with paths
    loop For each repo
        CrossRepo->>SessionTime: Call session_time(repo)
        SessionTime->>SQLite: Query sessions for repo
        SQLite-->>SessionTime: Raw session records
        SessionTime->>SessionTime: Classify by title pattern<br/>(interactive vs. worker)
        SessionTime->>SessionTime: Compute durations
        SessionTime-->>CrossRepo: Return JSON per-repo
    end
    CrossRepo->>Python: Aggregate all repo JSONs
    Python->>Python: Sum hours/sessions<br/>Deduplicate active days
    Python-->>CrossRepo: Aggregated data
    CrossRepo->>Output: Format as JSON or Markdown
    Output-->>User: Return formatted result

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • PR #2624: Modifies pulse-wrapper.sh with _update_health_issue_for_repo() function changes that are extended by this PR's signature update to include cross-repo session time data.
  • PR #3893: Previously extended contributor-activity helper and pulse-wrapper integrations; these changes build directly upon that foundation with session-time analytics additions.

Suggested labels

enhancement

Poem

🕐 Sessions now counted, patterns aligned,
Worker and interactive both defined,
Across the repos, aggregations flow,
Health issues brighten with insight's glow,
Time analytics shine in DevOps's wake. ✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'feat: add session time tracking to health dashboard' directly and concisely summarizes the main changes: addition of session time tracking functionality to the health dashboard component.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.



github-actions bot commented Mar 8, 2026

🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report

[INFO] Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 367 code smells

[INFO] Recent monitoring activity:
Sun Mar 8 19:47:08 UTC 2026: Code review monitoring started
Sun Mar 8 19:47:09 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 367

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 367
  • VULNERABILITIES: 0

Generated on: Sun Mar 8 19:47:11 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the health dashboard by integrating comprehensive session time tracking. It provides valuable insights into how time is spent across repositories, differentiating between human-driven interactive sessions and automated worker processes. This new feature allows for a clearer understanding of operational efficiency and resource allocation by presenting both individual repository and aggregated cross-repository session metrics directly within the health issues.

Highlights

  • Session Time Tracking: Introduced session time tracking to the health dashboard, distinguishing between interactive (human) and worker/runner hours from the OpenCode SQLite database.
  • Per-Repo and Cross-Repo Metrics: Implemented per-repository session time tables within each repo's health issue, alongside aggregated cross-repository totals across all managed repositories.
  • Performance Optimization: Optimized performance by pre-computing cross-repository session time once in update_health_issues() and passing it as an argument, avoiding redundant database queries.
Changelog
  • .agents/scripts/contributor-activity-helper.sh
    • Added session_time() function to query and classify session data from the AI assistant database.
    • Added cross_repo_session_time() function to aggregate session data across multiple repositories.
    • Updated the script's usage and help documentation to include the new session time commands.
    • Integrated the new session-time and cross-repo-session-time commands into the main command dispatch logic.
  • .agents/scripts/pulse-wrapper.sh
    • Modified _update_health_issue_for_repo to accept and display cross-repo session time data.
    • Incorporated calls to the contributor-activity-helper.sh script to fetch per-repo session time.
    • Updated the health issue markdown body to include dedicated sections for per-repo and cross-repo session time.
    • Implemented pre-computation of cross-repo session time in update_health_issues to improve efficiency.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces session time tracking to the health dashboard by querying the AI assistant's SQLite database. The changes add new functions to contributor-activity-helper.sh for calculating and aggregating session times, and integrate this data into pulse-wrapper.sh. My review focuses on improving the security, robustness, and performance of the new code, in line with established repository rules. I identified a significant SQL injection vulnerability, fragile JSON construction logic, and several instances of error suppression that could mask underlying issues, and I have provided suggestions for performance optimization and safer error handling.

Note: Security Review did not run due to the size of the PR.

Comment on lines +535 to +549
query_result=$(sqlite3 -json "$db_path" "
SELECT
s.title,
(max(m.time_created) - min(m.time_created)) as duration_ms
FROM session s
JOIN message m ON m.session_id = s.id
WHERE s.parent_id IS NULL
AND s.time_created > ${since_ms}
AND (s.directory = '${abs_repo_path}'
OR s.directory LIKE '${abs_repo_path}.%'
OR s.directory LIKE '${abs_repo_path}-%')
GROUP BY s.id
HAVING count(m.id) >= 2
AND duration_ms > 5000
" 2>/dev/null) || query_result="[]"

high

The SQLite query is constructed using string interpolation with abs_repo_path and since_ms. This is vulnerable to SQL injection if the path contains special characters like a single quote. Please use parameter binding to safely pass values to the query.

Additionally, suppressing stderr with 2>/dev/null hides important error messages from sqlite3, making debugging difficult. The || query_result="[]" already provides a fallback for command failure, so the error suppression is not necessary and can be harmful.

Here's a suggestion that uses parameter binding and removes the stderr suppression:

Suggested change
query_result=$(sqlite3 -json "$db_path" "
SELECT
s.title,
(max(m.time_created) - min(m.time_created)) as duration_ms
FROM session s
JOIN message m ON m.session_id = s.id
WHERE s.parent_id IS NULL
AND s.time_created > ${since_ms}
AND (s.directory = '${abs_repo_path}'
OR s.directory LIKE '${abs_repo_path}.%'
OR s.directory LIKE '${abs_repo_path}-%')
GROUP BY s.id
HAVING count(m.id) >= 2
AND duration_ms > 5000
" 2>/dev/null) || query_result="[]"
query_result=$(sqlite3 -json "$db_path" \
"SELECT
s.title,
(max(m.time_created) - min(m.time_created)) as duration_ms
FROM session s
JOIN message m ON m.session_id = s.id
WHERE s.parent_id IS NULL
AND s.time_created > ?1
AND (s.directory = ?2
OR s.directory LIKE ?3
OR s.directory LIKE ?4)
GROUP BY s.id
HAVING count(m.id) >= 2
AND duration_ms > 5000" \
"$since_ms" \
"$abs_repo_path" \
"${abs_repo_path}.%" \
"${abs_repo_path}-%") || query_result="[]"
References
  1. To prevent SQL injection in shell scripts using sqlite3, create a helper function that uses .param set for safe parameterized bindings instead of direct string interpolation.
  2. Avoid using '2>/dev/null' for blanket suppression of command errors in shell scripts to ensure that authentication, syntax, or system issues remain visible for debugging.

Comment on lines +662 to +676
local all_json="["
local first="true"
local repo_count=0
for rp in "${repo_paths[@]}"; do
local repo_json
repo_json=$(session_time "$rp" --period "$period" --format json) || repo_json="{}"
if [[ "$first" == "true" ]]; then
first="false"
else
all_json="${all_json},"
fi
all_json="${all_json}${repo_json}"
repo_count=$((repo_count + 1))
done
all_json="${all_json}]"

high

The all_json variable is constructed by string concatenation. This is fragile and can lead to invalid JSON if session_time returns a non-JSON string (like an error message) but exits with code 0. For example, if session_time returns _Session database not found._, the resulting string would be invalid JSON.

A more robust approach is to collect the JSON outputs in a bash array and then use jq to assemble them into a valid JSON array. This ensures the final output is always well-formed.

Suggested change
local all_json="["
local first="true"
local repo_count=0
for rp in "${repo_paths[@]}"; do
local repo_json
repo_json=$(session_time "$rp" --period "$period" --format json) || repo_json="{}"
if [[ "$first" == "true" ]]; then
first="false"
else
all_json="${all_json},"
fi
all_json="${all_json}${repo_json}"
repo_count=$((repo_count + 1))
done
all_json="${all_json}]"
local all_json=""
local repo_count=0
for rp in "${repo_paths[@]}"; do
repo_count=$((repo_count + 1))
local repo_json
repo_json=$(session_time "$rp" --period "$period" --format json) || repo_json="{}"
# Filter out non-JSON responses to avoid breaking the array
if jq -e . >/dev/null 2>&1 <<<"$repo_json"; then
all_json+="${repo_json}"$'\n'
fi
done
all_json=$(echo -n "$all_json" | jq -s '.')
References
  1. In shell scripts, use jq --argjson to safely pass JSON content from variables when merging JSON arrays, instead of direct string interpolation, to prevent parsing errors. The suggested approach of collecting JSON outputs and using jq -s '.' is a robust way to achieve this.
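The `jq -s` (slurp) behavior the reference relies on: it reads a stream of JSON values and wraps them into one valid array, so individually validated objects can be assembled without manual comma bookkeeping:

```shell
#!/usr/bin/env bash
# Slurp newline-delimited JSON objects into a single valid array.
printf '%s\n' '{"h":1.5}' '{"h":2.0}' | jq -s '.'

# Aggregating a field across the slurped array:
printf '%s\n' '{"h":1.5}' '{"h":2.0}' | jq -s 'map(.h) | add'
# prints: 3.5
```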

Comment on lines +507 to +524
local since_ms
case "$period" in
day)
since_ms=$(python3 -c "import time; print(int((time.time() - 86400) * 1000))")
;;
week)
since_ms=$(python3 -c "import time; print(int((time.time() - 604800) * 1000))")
;;
month)
since_ms=$(python3 -c "import time; print(int((time.time() - 2592000) * 1000))")
;;
year)
since_ms=$(python3 -c "import time; print(int((time.time() - 31536000) * 1000))")
;;
*)
since_ms=$(python3 -c "import time; print(int((time.time() - 2592000) * 1000))")
;;
esac

medium

The case statement makes multiple calls to python3 -c to calculate the since_ms timestamp. This can be optimized for performance and readability by calculating the number of seconds in bash and making a single call to Python.

Suggested change
local since_ms
case "$period" in
day)
since_ms=$(python3 -c "import time; print(int((time.time() - 86400) * 1000))")
;;
week)
since_ms=$(python3 -c "import time; print(int((time.time() - 604800) * 1000))")
;;
month)
since_ms=$(python3 -c "import time; print(int((time.time() - 2592000) * 1000))")
;;
year)
since_ms=$(python3 -c "import time; print(int((time.time() - 31536000) * 1000))")
;;
*)
since_ms=$(python3 -c "import time; print(int((time.time() - 2592000) * 1000))")
;;
esac
local since_ms
local seconds
case "$period" in
day) seconds=86400 ;;
week) seconds=604800 ;;
year) seconds=31536000 ;;
*) seconds=2592000 ;; # month or default
esac
since_ms=$(python3 -c "import time; print(int((time.time() - ${seconds}) * 1000))")
References
  1. In shell scripts, move the calculation of loop-invariant variables outside of loops to improve efficiency. This principle extends to pre-calculating values in bash to reduce the number of external calls, thereby improving overall script performance.
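As a further variant (not part of the suggestion above), the Python call can be dropped entirely where `date +%s` is available, as it is on both GNU and BSD userlands:

```shell
#!/usr/bin/env bash
# Millisecond cutoff computed in pure shell arithmetic; 'seconds' would
# come from the same case statement as in the suggested change.
seconds=2592000  # month
since_ms=$(( ( $(date +%s) - seconds ) * 1000 ))
echo "$since_ms"
```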

local cross_repo_session_time_md=""
if [[ -x "$activity_helper" ]]; then
local all_repo_paths_st
all_repo_paths_st=$(jq -r '.initialized_repos[] | select(.pulse == true and (.local_only // false) == false) | .path' "$repos_json" 2>/dev/null || echo "")

medium

Suppressing stderr with 2>/dev/null is not recommended as it can hide important error messages from jq, such as a malformed repos.json file. This makes debugging more difficult. Please remove the redirection to allow errors to be visible. The || echo "" already provides a fallback for command failure.

Suggested change
all_repo_paths_st=$(jq -r '.initialized_repos[] | select(.pulse == true and (.local_only // false) == false) | .path' "$repos_json" 2>/dev/null || echo "")
all_repo_paths_st=$(jq -r '.initialized_repos[] | select(.pulse == true and (.local_only // false) == false) | .path' "$repos_json" || echo "")
References
  1. Avoid using '2>/dev/null' for blanket suppression of command errors in shell scripts to ensure that authentication, syntax, or system issues remain visible for debugging.
  2. In shell scripts with 'set -e' enabled, use '|| true' to prevent the script from exiting when a command like 'jq' fails on an optional lookup. Do not suppress stderr with '2>/dev/null' so that actual syntax or system errors remain visible for debugging.

- Escape single quotes in repo paths for safe SQL embedding (prevents
  injection if path contains apostrophes)
- Remove blanket 2>/dev/null on sqlite3 and jq calls (errors now visible
  for debugging, fallback via || still handles failures)
- Optimize since_ms calculation: pre-compute seconds in bash, single
  Python call instead of one per case branch
- Use jq -s to assemble cross-repo JSON array (robust against non-JSON
  responses from session_time)
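The quote-escaping in the first bullet follows the standard SQL rule of doubling single quotes; a minimal sketch (the function name is illustrative, not from the PR):

```shell
#!/usr/bin/env bash
# SQL string literals escape ' by doubling it: ' -> ''.
# Note: LIKE wildcards (% and _) are NOT handled here and would need a
# separate ESCAPE clause if they must be treated literally.
escape_sql() {
  printf '%s' "$1" | sed "s/'/''/g"
}

escape_sql "$HOME/Git/o'brien-repo"  # apostrophe becomes ''
```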
github-actions bot commented Mar 8, 2026

🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report

[INFO] Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 367 code smells

[INFO] Recent monitoring activity:
Sun Mar 8 19:54:32 UTC 2026: Code review monitoring started
Sun Mar 8 19:54:33 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 367

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 367
  • VULNERABILITIES: 0

Generated on: Sun Mar 8 19:54:35 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring

coderabbitai bot previously requested changes Mar 8, 2026
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (1)
.agents/scripts/pulse-wrapper.sh (1)

2194-2210: Consider reusing all_repo_paths to eliminate redundant jq query.

Lines 2198-2204 duplicate the repo path collection logic from lines 2181-2187. The all_repo_paths_st variable queries the same data as all_repo_paths above. Since both use identical jq filters and loop logic, reusing the existing variable would reduce redundancy.

♻️ Proposed refactor to reuse existing repo paths
 	# Pre-compute cross-repo session time ONCE for all health issues.
-	# Uses the same repo paths already collected above.
+	# Reuses the same repo paths already collected above.
 	local cross_repo_session_time_md=""
 	if [[ -x "$activity_helper" ]]; then
-		local all_repo_paths_st
-		all_repo_paths_st=$(jq -r '.initialized_repos[] | select(.pulse == true and (.local_only // false) == false) | .path' "$repos_json" 2>/dev/null || echo "")
-		if [[ -n "$all_repo_paths_st" ]]; then
-			local -a st_args=()
-			while IFS= read -r rp; do
-				[[ -n "$rp" ]] && st_args+=("$rp")
-			done <<<"$all_repo_paths_st"
-			if [[ ${#st_args[@]} -gt 1 ]]; then
-				cross_repo_session_time_md=$(bash "$activity_helper" cross-repo-session-time "${st_args[@]}" --period month --format markdown || echo "_Cross-repo session data unavailable._")
+		# Reuse cross_args array from above (same repo paths)
+		if [[ ${#cross_args[@]} -gt 1 ]]; then
+			cross_repo_session_time_md=$(bash "$activity_helper" cross-repo-session-time "${cross_args[@]}" --period month --format markdown || echo "_Cross-repo session data unavailable._")
-			fi
 		fi
 	fi

This requires moving the cross_args array declaration outside the inner if block (around line 2184) so it remains in scope.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/scripts/pulse-wrapper.sh around lines 2194 - 2210, The block
recomputes repo paths into all_repo_paths_st; instead reuse the
already-populated all_repo_paths by moving the cross-repo args array declaration
(st_args / cross_args) to an outer scope where all_repo_paths is visible,
populate it from all_repo_paths instead of running the duplicate jq query, and
only call bash "$activity_helper" cross-repo-session-time when the reused args
array length is >1; update references to all_repo_paths (and remove
all_repo_paths_st/jq duplication) and keep the existing fallback markdown string
unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.agents/scripts/contributor-activity-helper.sh:
- Around line 526-545: The SQL query built for query_result interpolates
abs_repo_path directly, which breaks on single quotes and lets %/_ act as LIKE
wildcards; update the session query in contributor-activity-helper.sh to use
sqlite parameter binding (prepare/execute with ? placeholders against
"$db_path") or, if binding is not feasible, sanitize abs_repo_path before
interpolation by escaping single quotes and escaping/quoting % and _ (e.g.,
replace ' with '' and prefix % and _ with an escape char and add ESCAPE clause)
so the WHERE clauses comparing s.directory and the LIKE patterns use a safe,
literal path; adjust the variables used in the sqlite3 invocation (db_path,
since_ms, abs_repo_path) accordingly.
- Around line 488-503: The early-exit branches that currently echo "[]" for JSON
success (when session DB missing or sqlite3 not found) should instead emit a
stable zero-object (e.g., "{}" or a named empty object) to represent "no data"
and leave exit code 0, while actual failures must print an error to stderr and
exit with a non-zero code; update the blocks around the checks for the session
DB and the sqlite3 availability (the branches that check "$format" and echo
"[]") to output a stable empty JSON object on success and to call >&2 with a
clear error message and return/exit non-zero on real failures so
cross_repo_session_time() and downstream .get(...) can distinguish no-data from
failure—apply the same fix to the other occurrences referenced (around lines
535-549 and 667).
- Around line 538-542: The SQL WHERE clause is filtering on s.time_created but
the duration uses message timestamps (max(m.time_created) -
min(m.time_created)), so change the filter to use message timestamps instead:
update the WHERE condition to use m.time_created > ${since_ms} (keeping the join
and s.parent_id IS NULL check) so sessions with messages in the reporting window
are included; locate this in the query that computes duration_ms using
max(m.time_created) and min(m.time_created).
- Around line 665-675: The loop over repo_paths increments repo_count even when
session_time fails for an invalid repo, causing incorrect "Across N managed
repos" claims; update the loop in contributor-activity-helper.sh to validate
each $rp by checking the exit of session_time (repo_json) and only append to
all_json and increment repo_count when session_time succeeds, otherwise log a
warning and continue (referencing repo_paths, session_time, repo_json, all_json,
and repo_count), mirroring cross_repo_summary's validation behavior so invalid
repos are skipped and not counted.

---

Nitpick comments:
In @.agents/scripts/pulse-wrapper.sh:
- Around line 2194-2210: The block recomputes repo paths into all_repo_paths_st;
instead reuse the already-populated all_repo_paths by moving the cross-repo args
array declaration (st_args / cross_args) to an outer scope where all_repo_paths
is visible, populate it from all_repo_paths instead of running the duplicate jq
query, and only call bash "$activity_helper" cross-repo-session-time when the
reused args array length is >1; update references to all_repo_paths (and remove
all_repo_paths_st/jq duplication) and keep the existing fallback markdown string
unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: af3cadc4-231d-467e-bc70-d20c2970b96f

📥 Commits

Reviewing files that changed from the base of the PR and between c536f68 and edce2dd.

📒 Files selected for processing (2)
  • .agents/scripts/contributor-activity-helper.sh
  • .agents/scripts/pulse-wrapper.sh

Comment on lines +488 to +503
if [[ "$format" == "json" ]]; then
echo "[]"
else
echo "_Session database not found._"
fi
return 0
fi
fi

if ! command -v sqlite3 &>/dev/null; then
if [[ "$format" == "json" ]]; then
echo "[]"
else
echo "_sqlite3 not available._"
fi
return 0

⚠️ Potential issue | 🟠 Major

Differentiate empty data from failed collection.

In JSON mode the early exits emit [], the sqlite query path also collapses any failure to [], and cross_repo_session_time() then masks non-zero exits with {}. That makes “DB unavailable”, “query failed”, and “no sessions” indistinguishable, and the [] shape can still crash the downstream .get(...) calls. Keep successful no-data responses as a stable zero-object, and propagate real sqlite failures with a non-zero exit. As per coding guidelines, Automation scripts - focus on: Reliability and robustness; Clear logging and feedback; Proper exit codes; Error recovery mechanisms.

Also applies to: 535-549, 667-667

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/scripts/contributor-activity-helper.sh around lines 488 - 503, The
early-exit branches that currently echo "[]" for JSON success (when session DB
missing or sqlite3 not found) should instead emit a stable zero-object (e.g.,
"{}" or a named empty object) to represent "no data" and leave exit code 0,
while actual failures must print an error to stderr and exit with a non-zero
code; update the blocks around the checks for the session DB and the sqlite3
availability (the branches that check "$format" and echo "[]") to output a
stable empty JSON object on success and to call >&2 with a clear error message
and return/exit non-zero on real failures so cross_repo_session_time() and
downstream .get(...) can distinguish no-data from failure—apply the same fix to
the other occurrences referenced (around lines 535-549 and 667).

Comment on lines +665 to +675
for rp in "${repo_paths[@]}"; do
local repo_json
repo_json=$(session_time "$rp" --period "$period" --format json) || repo_json="{}"
if [[ "$first" == "true" ]]; then
first="false"
else
all_json="${all_json},"
fi
all_json="${all_json}${repo_json}"
repo_count=$((repo_count + 1))
done

⚠️ Potential issue | 🟡 Minor

Skip invalid repos instead of counting them.

Unlike cross_repo_summary(), this loop never validates $rp. A typo still increments repo_count, so the markdown can claim “Across N managed repos” while silently dropping one repo’s data. As per coding guidelines, Automation scripts - focus on: Reliability and robustness; Clear logging and feedback.

💡 Suggested change
 	for rp in "${repo_paths[@]}"; do
+		if [[ ! -d "$rp/.git" && ! -f "$rp/.git" ]]; then
+			echo "Warning: $rp is not a git repository, skipping" >&2
+			continue
+		fi
 		local repo_json
 		repo_json=$(session_time "$rp" --period "$period" --format json) || repo_json="{}"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
for rp in "${repo_paths[@]}"; do
local repo_json
repo_json=$(session_time "$rp" --period "$period" --format json) || repo_json="{}"
if [[ "$first" == "true" ]]; then
first="false"
else
all_json="${all_json},"
fi
all_json="${all_json}${repo_json}"
repo_count=$((repo_count + 1))
done
for rp in "${repo_paths[@]}"; do
if [[ ! -d "$rp/.git" && ! -f "$rp/.git" ]]; then
echo "Warning: $rp is not a git repository, skipping" >&2
continue
fi
local repo_json
repo_json=$(session_time "$rp" --period "$period" --format json) || repo_json="{}"
if [[ "$first" == "true" ]]; then
first="false"
else
all_json="${all_json},"
fi
all_json="${all_json}${repo_json}"
repo_count=$((repo_count + 1))
done
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/scripts/contributor-activity-helper.sh around lines 665 - 675, The
loop over repo_paths increments repo_count even when session_time fails for an
invalid repo, causing incorrect "Across N managed repos" claims; update the loop
in contributor-activity-helper.sh to validate each $rp by checking the exit of
session_time (repo_json) and only append to all_json and increment repo_count
when session_time succeeds, otherwise log a warning and continue (referencing
repo_paths, session_time, repo_json, all_json, and repo_count), mirroring
cross_repo_summary's validation behavior so invalid repos are skipped and not
counted.

- Return stable empty JSON object (not []) for no-data cases so
  downstream .get() calls work correctly
- Escape % and _ in LIKE patterns with ESCAPE clause to prevent
  wildcard matching on paths containing these characters
- Filter on m.time_created instead of s.time_created so sessions
  with recent messages are included even if started before cutoff
- Skip invalid repo paths in cross_repo_session_time to avoid
  inflating the repo count
- Reuse cross_args array in pulse-wrapper.sh instead of duplicating
  the jq query for session time pre-computation
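The LIKE-escaping fix in the commit above can be sketched as follows. This is a minimal illustration under assumptions: the `escape_like` helper name is hypothetical, and the real query's column names may differ.

```shell
#!/usr/bin/env bash
# Sketch: escape SQLite LIKE wildcards in a repo path before building
# a prefix-match pattern, so paths containing % or _ match literally.
escape_like() {
	local s="$1"
	s="${s//\\/\\\\}"   # escape the escape character itself first
	s="${s//%/\\%}"     # literal percent, not "match anything"
	s="${s//_/\\_}"     # literal underscore, not "match one char"
	printf '%s' "$s"
}

repo_path="$HOME/Git/my_repo"
pattern="$(escape_like "$repo_path")%"
# The query would then use something like:
#   ... WHERE s.directory LIKE '$pattern' ESCAPE '\'
echo "$pattern"
```

Without the `ESCAPE` clause, a path such as `~/Git/my_repo` would let `_` match any single character, so sessions from an unrelated `~/Git/my-repo` could be counted.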
@github-actions

github-actions bot commented Mar 8, 2026

🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report

[INFO] Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 367 code smells

[INFO] Recent monitoring activity:
Sun Mar 8 19:58:05 UTC 2026: Code review monitoring started
Sun Mar 8 19:58:05 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 367

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 367
  • VULNERABILITIES: 0

Generated on: Sun Mar 8 19:58:08 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring

@sonarqubecloud

sonarqubecloud bot commented Mar 8, 2026

Labels

enhancement Auto-created from TODO.md tag

1 participant