t1114: Track opus vs sonnet token cost ratio in pattern tracker for ROI analysis #1688
marcusquinn merged 2 commits into main
Conversation
- Add `estimated_cost REAL` column to `pattern_metadata` table (schema + migration)
- Add `calc_estimated_cost()` to pattern-tracker-helper.sh with tier pricing table (haiku $0.80/$4.00, flash $0.15/$0.60, sonnet $3.00/$15.00, opus $15.00/$75.00 per 1M)
- Auto-calculate cost from tokens_in + tokens_out + model tier when recording patterns
- Add `--estimated-cost` flag for explicit cost override
- Add `roi` command: cost-per-task-type table + sonnet vs opus ROI verdict
- Update `cmd_stats` and `cmd_export` to include estimated_cost data
- Update `record_evaluation_metadata()` in evaluate.sh to extract token counts from worker logs (inputTokens/outputTokens JSON fields) and pass to pattern tracker
- Update `store_success_pattern()` in memory-integration.sh to use pattern-tracker directly for richer metadata including token counts and auto-calculated cost
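The tier pricing table above implies a simple per-pattern cost formula. A minimal sketch, with rates copied from the bullet list and example inputs (variable names are illustrative, not the script's own):

```shell
# Estimated cost in USD for one recorded pattern, using the per-1M-token
# rates from the PR description. Tier and token values are example inputs.
tier="sonnet" tokens_in=12000 tokens_out=3500
case "$tier" in
    haiku)  in_rate=0.80;  out_rate=4.00  ;;
    flash)  in_rate=0.15;  out_rate=0.60  ;;
    sonnet) in_rate=3.00;  out_rate=15.00 ;;
    opus)   in_rate=15.00; out_rate=75.00 ;;
    *)      in_rate=0;     out_rate=0     ;;
esac
awk -v ti="$tokens_in" -v to="$tokens_out" -v ir="$in_rate" -v outr="$out_rate" \
    'BEGIN { printf "%.6f\n", (ti * ir + to * outr) / 1000000 }'
```

For the sonnet example this prints 0.088500, i.e. (12000 * $3.00 + 3500 * $15.00) / 1,000,000.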
Summary of Changes

Hello @marcusquinn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces comprehensive cost tracking and Return on Investment (ROI) analysis capabilities for patterns generated by different AI model tiers. By integrating model pricing, automatically calculating estimated costs based on token usage, and providing a dedicated ROI analysis command, it enables users to evaluate the cost-effectiveness of various models for different task types. This enhancement provides crucial insights into resource allocation and model selection, helping to optimize operational expenses.
Walkthrough

Introduces cost-tracking and ROI analysis across the pattern-tracking system by adding an `estimated_cost` field to `pattern_metadata`.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Worker as Worker Process
    participant Eval as supervisor/evaluate.sh
    participant Memory as supervisor/memory-integration.sh
    participant Tracker as pattern-tracker-helper.sh
    participant DB as pattern_metadata (DB)
    Worker->>Eval: Complete task execution<br/>(write logs with tokens)
    Eval->>Eval: Extract tokens_in, tokens_out<br/>from worker log
    Eval->>Tracker: record command with<br/>tokens and model info
    Tracker->>Tracker: calc_estimated_cost()<br/>(if model + tokens present)
    Tracker->>DB: INSERT/UPDATE pattern_metadata<br/>with estimated_cost
    Memory->>Memory: Extract token counts<br/>from worker log
    Memory->>Tracker: Invoke pattern-helper<br/>with rich args<br/>(tokens_in, tokens_out)
    Tracker->>DB: Store SUCCESS_PATTERN<br/>with cost data
    Tracker->>Tracker: cmd_roi: Calculate<br/>cost-per-task-type ROI
    Tracker->>Tracker: Compare Sonnet vs Opus<br/>verdicts
    Tracker-->>User: Display ROI analysis<br/>with cost breakdown
```
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report
[INFO] Latest Quality Status:
[INFO] Recent monitoring activity:
📈 Current Quality Metrics

Generated on: Wed Feb 18 16:34:17 UTC 2026 by AI DevOps Framework Code Review Monitoring
Code Review
This pull request introduces a valuable feature for tracking model costs and analyzing return on investment. The implementation is comprehensive, adding schema migrations, cost calculation logic, and a new roi command. Overall, the changes are well-structured. My review focuses on a few key areas: improving database query performance in the new roi command by suggesting a more efficient SQLite aggregation pattern, correcting an inconsistency in the ROI metric calculation to ensure accurate reporting, refactoring duplicated code to enhance maintainability, and adhering more closely to the repository's style guide and rules for SQL query construction.
```shell
successes=$(sqlite3 "$MEMORY_DB" "
    SELECT COUNT(*) FROM learnings l
    WHERE l.type IN ('SUCCESS_PATTERN', 'WORKING_SOLUTION')
    $tier_filter $type_filter;" 2>/dev/null || echo "0")
failures=$(sqlite3 "$MEMORY_DB" "
    SELECT COUNT(*) FROM learnings l
    WHERE l.type IN ('FAILURE_PATTERN', 'FAILED_APPROACH', 'ERROR_FIX')
    $tier_filter $type_filter;" 2>/dev/null || echo "0")
avg_cost=$(sqlite3 "$MEMORY_DB" "
    SELECT COALESCE(AVG(pm.estimated_cost), 0)
    FROM learnings l
    LEFT JOIN pattern_metadata pm ON l.id = pm.id
    WHERE l.type IN ('SUCCESS_PATTERN','WORKING_SOLUTION','FAILURE_PATTERN','FAILED_APPROACH','ERROR_FIX')
    AND pm.estimated_cost IS NOT NULL
    $tier_filter $type_filter;" 2>/dev/null || echo "0")
total_cost=$(sqlite3 "$MEMORY_DB" "
    SELECT COALESCE(SUM(pm.estimated_cost), 0)
    FROM learnings l
    LEFT JOIN pattern_metadata pm ON l.id = pm.id
    WHERE l.type IN ('SUCCESS_PATTERN','WORKING_SOLUTION','FAILURE_PATTERN','FAILED_APPROACH','ERROR_FIX')
    AND pm.estimated_cost IS NOT NULL
    $tier_filter $type_filter;" 2>/dev/null || echo "0")
```
This loop executes four separate database queries for each model tier, which is inefficient and can lead to poor performance. To efficiently fetch these multiple aggregate statistics from SQLite in a shell script, please consolidate them into a single query using subselects that returns a delimited string, and parse it using IFS='|' read. This aligns with the repository's guidelines for efficient SQLite aggregation in shell scripts.
References:
- To efficiently fetch multiple aggregate statistics from SQLite in a shell script, use a single query with subselects that returns a delimited string, and parse it using `IFS='|' read`.
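A sketch of the consolidated form the guideline describes. The table, column, and filter-variable names are assumed to match the snippet above:

```shell
# One round trip: four aggregates returned as a single 'a|b|c|d' row.
# $MEMORY_DB, $tier_filter and $type_filter are assumed from the code above.
row=$(sqlite3 "$MEMORY_DB" "
    SELECT
      (SELECT COUNT(*) FROM learnings l
         WHERE l.type IN ('SUCCESS_PATTERN','WORKING_SOLUTION') $tier_filter $type_filter),
      (SELECT COUNT(*) FROM learnings l
         WHERE l.type IN ('FAILURE_PATTERN','FAILED_APPROACH','ERROR_FIX') $tier_filter $type_filter),
      (SELECT COALESCE(AVG(pm.estimated_cost), 0) FROM learnings l
         LEFT JOIN pattern_metadata pm ON l.id = pm.id
         WHERE pm.estimated_cost IS NOT NULL $tier_filter $type_filter),
      (SELECT COALESCE(SUM(pm.estimated_cost), 0) FROM learnings l
         LEFT JOIN pattern_metadata pm ON l.id = pm.id
         WHERE pm.estimated_cost IS NOT NULL $tier_filter $type_filter);" 2>/dev/null \
    || echo "0|0|0|0")
IFS='|' read -r successes failures avg_cost total_cost <<EOF
$row
EOF
```

The heredoc keeps the `read` in the current shell, so the four variables survive after parsing (a pipe into `read` would set them in a subshell).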
```shell
sonnet_cps=$(sqlite3 "$MEMORY_DB" "
    SELECT CASE WHEN COUNT(*) > 0 AND SUM(pm.estimated_cost) > 0
           THEN SUM(pm.estimated_cost) / COUNT(*)
           ELSE 0 END
    FROM learnings l
    LEFT JOIN pattern_metadata pm ON l.id = pm.id
    WHERE l.type IN ('SUCCESS_PATTERN', 'WORKING_SOLUTION')
    AND pm.estimated_cost IS NOT NULL
    AND (l.tags LIKE '%model:sonnet%' OR l.content LIKE '%model:sonnet%')
    $type_filter;" 2>/dev/null || echo "0")
opus_cps=$(sqlite3 "$MEMORY_DB" "
    SELECT CASE WHEN COUNT(*) > 0 AND SUM(pm.estimated_cost) > 0
           THEN SUM(pm.estimated_cost) / COUNT(*)
           ELSE 0 END
    FROM learnings l
    LEFT JOIN pattern_metadata pm ON l.id = pm.id
    WHERE l.type IN ('SUCCESS_PATTERN', 'WORKING_SOLUTION')
    AND pm.estimated_cost IS NOT NULL
    AND (l.tags LIKE '%model:opus%' OR l.content LIKE '%model:opus%')
    $type_filter;" 2>/dev/null || echo "0")
```
There is a logical inconsistency in how "cost per success" is calculated between the main summary table and the final 'Sonnet vs Opus ROI Verdict'.
The table correctly calculates it as (total cost of all attempts) / (number of successes), which reflects the true cost to achieve a success. However, the sonnet_cps and opus_cps variables for the verdict are calculated as the average cost of only successful attempts, ignoring the cost of failures.
This makes the final verdict misleading, as it's comparing a different and less representative metric. The verdict's calculations should be updated to use the same total_cost / successes logic as the table to ensure an accurate, apples-to-apples comparison.
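A sketch of the apples-to-apples metric the review asks for, with illustrative values rather than code from the PR:

```shell
# Cost-per-success should include failed attempts' cost:
# (total cost of ALL attempts for the model) / (number of successes).
total_cost=1.20    # example: sum of estimated_cost over every sonnet attempt
successes=4        # example: count of sonnet SUCCESS_PATTERN rows
if [ "$successes" -gt 0 ]; then
    cost_per_success=$(awk -v c="$total_cost" -v n="$successes" \
        'BEGIN { printf "%.6f", c / n }')
else
    cost_per_success=0
fi
echo "$cost_per_success"
```

With these example values the metric is 1.20 / 4 = 0.300000, whereas averaging only successful attempts would understate the true spend per success.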
```diff
  [[ -n "$estimated_cost" ]] && sql_estimated_cost="$estimated_cost"

- sqlite3 "$MEMORY_DB" "INSERT OR REPLACE INTO pattern_metadata (id, strategy, quality, failure_mode, tokens_in, tokens_out) VALUES ('$mem_id', '$sql_strategy', $sql_quality, $sql_failure_mode, $sql_tokens_in, $sql_tokens_out);" 2>/dev/null || log_warn "Failed to store pattern metadata for $mem_id"
+ sqlite3 "$MEMORY_DB" "INSERT OR REPLACE INTO pattern_metadata (id, strategy, quality, failure_mode, tokens_in, tokens_out, estimated_cost) VALUES ('$mem_id', '$sql_strategy', $sql_quality, $sql_failure_mode, $sql_tokens_in, $sql_tokens_out, $sql_estimated_cost);" 2>/dev/null || log_warn "Failed to store pattern metadata for $mem_id"
```
While the variables seem to be sanitized, constructing SQL queries by directly interpolating variables is generally unsafe and can lead to SQL injection vulnerabilities. This also deviates from the repository's style guide, which recommends using parameterized queries.
To improve security and adhere to best practices, please refactor this to use a method that properly separates the query from the data, such as using a here-doc with .parameter bindings if your sqlite3 version supports it.
References:
- Use parameterized queries where possible. (link)
- To prevent SQL injection in shell scripts using `sqlite3`, create a helper function that uses `.param set` for safe parameterized bindings instead of direct string interpolation.
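A hedged sketch of what the `.param set` approach could look like (requires a sqlite3 CLI recent enough to support `.parameter`; the column list is trimmed to two columns for brevity):

```shell
# Bind values as parameters so the SQL parser never treats data as SQL.
# $MEMORY_DB, $mem_id and $sql_estimated_cost are assumed from the code above.
sqlite3 "$MEMORY_DB" <<EOF
.param set :id '$mem_id'
.param set :cost $sql_estimated_cost
INSERT OR REPLACE INTO pattern_metadata (id, estimated_cost) VALUES (:id, :cost);
EOF
```

Note that the shell still interpolates into the heredoc, so the existing pre-validation of `mem_id` and the cost value remains important; the binding only stops the SQL parser from interpreting data as query text.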
```shell
# Extract token counts from worker log for cost tracking (t1114)
# opencode/claude --format json logs emit usage stats in the final JSON entry.
local tokens_in="" tokens_out=""
if [[ -n "$log_file" && -f "$log_file" ]]; then
    local raw_in raw_out
    raw_in=$(grep -oE '"inputTokens":[0-9]+' "$log_file" 2>/dev/null | tail -1 | grep -oE '[0-9]+' || true)
    raw_out=$(grep -oE '"outputTokens":[0-9]+' "$log_file" 2>/dev/null | tail -1 | grep -oE '[0-9]+' || true)
    if [[ -z "$raw_in" ]]; then
        raw_in=$(grep -oE '"input_tokens":[0-9]+' "$log_file" 2>/dev/null | tail -1 | grep -oE '[0-9]+' || true)
    fi
    if [[ -z "$raw_out" ]]; then
        raw_out=$(grep -oE '"output_tokens":[0-9]+' "$log_file" 2>/dev/null | tail -1 | grep -oE '[0-9]+' || true)
    fi
    [[ -n "$raw_in" ]] && tokens_in="$raw_in"
    [[ -n "$raw_out" ]] && tokens_out="$raw_out"
fi
```
This block of code for extracting token counts from log files is duplicated in .agents/scripts/supervisor/evaluate.sh (lines 835-857). To improve maintainability and adhere to the DRY (Don't Repeat Yourself) principle, this logic should be extracted into a shared helper function.
You could create a common script file within the .agents/scripts/supervisor/ directory that both this script and evaluate.sh can source.
References:
- In shell scripts, extract repeated logic into an internal helper function to improve maintainability. This applies even for standalone scripts where external `source` dependencies are avoided.
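A sketch of what such a shared helper might look like; the function name and the combined-pattern simplification are illustrative, not from the PR:

```shell
# Shared helper: prints "tokens_in|tokens_out" extracted from a worker log.
# Matches both camelCase (inputTokens) and snake_case (input_tokens) keys,
# taking the last occurrence in the file; fields are empty when absent.
extract_tokens_from_log() {
    log_file="$1"
    raw_in=$(grep -oE '"(inputTokens|input_tokens)":[0-9]+' "$log_file" 2>/dev/null \
        | tail -1 | grep -oE '[0-9]+$' || true)
    raw_out=$(grep -oE '"(outputTokens|output_tokens)":[0-9]+' "$log_file" 2>/dev/null \
        | tail -1 | grep -oE '[0-9]+$' || true)
    printf '%s|%s\n' "$raw_in" "$raw_out"
}
```

Callers would split the result with `IFS='|' read -r tokens_in tokens_out`. The alternation collapses the original two-pass camelCase-then-snake_case fallback into one pattern, which changes precedence only when a single log mixes both key styles.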
🧹 Nitpick comments (5)
.agents/scripts/supervisor/memory-integration.sh (2)
254-258: Primary path for `pattern_helper` will never resolve in the supervisor context.

`SCRIPT_DIR` in the supervisor context points to `.agents/scripts/supervisor/`, so `${SCRIPT_DIR}/pattern-tracker-helper.sh` looks for it inside the supervisor directory where it doesn't exist. The fallback to `$HOME/.aidevops/agents/scripts/` works for installed deployments, but in a development/checkout layout, neither path may resolve. A sibling-directory reference would be more robust as the primary attempt:

Use parent-relative path as the primary lookup

```diff
- local pattern_helper="${SCRIPT_DIR}/pattern-tracker-helper.sh"
- if [[ ! -x "$pattern_helper" ]]; then
-     pattern_helper="$HOME/.aidevops/agents/scripts/pattern-tracker-helper.sh"
- fi
+ local pattern_helper="${SCRIPT_DIR}/../pattern-tracker-helper.sh"
+ if [[ ! -x "$pattern_helper" ]]; then
+     pattern_helper="${SCRIPT_DIR}/pattern-tracker-helper.sh"
+ fi
+ if [[ ! -x "$pattern_helper" ]]; then
+     pattern_helper="$HOME/.aidevops/agents/scripts/pattern-tracker-helper.sh"
+ fi
```

Note: the same pattern exists in `evaluate.sh` at lines 793-796 — apply the same fix there for consistency.

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In .agents/scripts/supervisor/memory-integration.sh around lines 254-258, the primary lookup for pattern-tracker-helper.sh uses SCRIPT_DIR, which in supervisor context points into the supervisor subdir and thus never finds the sibling helper; update the resolution logic so it first tries a parent-relative sibling path (e.g. "${SCRIPT_DIR}/../pattern-tracker-helper.sh") before falling back to the existing $HOME path, and apply the same change in the evaluate.sh occurrence.
228-243: Token extraction logic is correct but duplicated across two files.

This camelCase/snake_case extraction block (lines 228-243) is nearly identical to `evaluate.sh` lines 835-857. Both grep for `inputTokens`/`outputTokens` with a `tail -1` fallback to `input_tokens`/`output_tokens`. Consider extracting this into a shared function (e.g., in `_common.sh` or a shared helper) to keep them in sync — especially since token format patterns may evolve as worker tooling changes.

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. Create a shared helper (e.g., extract_tokens_from_log) in a common script like _common.sh that accepts a log file path and returns tokens_in and tokens_out (handling both "inputTokens"/"outputTokens" and "input_tokens"/"output_tokens", using tail -1 and safe defaults), then update memory-integration.sh and evaluate.sh to source the shared helper instead of duplicating the grep logic.

.agents/scripts/pattern-tracker-helper.sh (2)
353-368: SQL insertion is safe — all dynamic values are pre-validated.

`mem_id` is regex-matched to `^mem_[0-9]{14}_[0-9a-f]+$`, strategy is enum-validated, token counts are `^[0-9]+$`, and `estimated_cost` passes the decimal regex. No injection risk here.

One note: this uses raw `sqlite3` instead of the `db()` wrapper (which sets `busy_timeout`). Other parts of this script do the same (e.g., `cmd_analyze`, `cmd_stats`), so this is a pre-existing pattern, but under concurrent access this INSERT could fail with `SQLITE_BUSY`.

Consider using the db() wrapper for busy_timeout protection

The `db()` function in `_common.sh` sets `.timeout 5000` to handle concurrent access. This script bypasses it by calling `sqlite3` directly. While this is a pre-existing pattern throughout the file, it's worth noting for future hardening — especially as more automation feeds into the pattern tracker concurrently.

```diff
- sqlite3 "$MEMORY_DB" "INSERT OR REPLACE INTO pattern_metadata (id, strategy, quality, failure_mode, tokens_in, tokens_out, estimated_cost) VALUES ('$mem_id', '$sql_strategy', $sql_quality, $sql_failure_mode, $sql_tokens_in, $sql_tokens_out, $sql_estimated_cost);" 2>/dev/null || log_warn "Failed to store pattern metadata for $mem_id"
+ sqlite3 -cmd ".timeout 5000" "$MEMORY_DB" "INSERT OR REPLACE INTO pattern_metadata (id, strategy, quality, failure_mode, tokens_in, tokens_out, estimated_cost) VALUES ('$mem_id', '$sql_strategy', $sql_quality, $sql_failure_mode, $sql_tokens_in, $sql_tokens_out, $sql_estimated_cost);" 2>/dev/null || log_warn "Failed to store pattern metadata for $mem_id"
```

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. Replace the direct sqlite3 invocation with the db() wrapper used elsewhere (the db function in _common.sh that sets ".timeout 5000") so the same busy_timeout is applied, preserving the exact SQL string, the 2>/dev/null redirection, and the || log_warn error handling.
29-72: Solid cost-calculation function with one small awk naming concern.

`calc_estimated_cost()` handles edge cases well — empty tier, zero tokens, unknown tier all return cleanly. The pricing table is clear and maintainable.

One thing: line 70 uses `or` as an awk variable name. In GNU awk (gawk), `or()` is a built-in bitwise function. While `-v or=...` will shadow it without error in practice, it could confuse maintainers or cause subtle issues if the awk body is later extended to use bitwise operations.

Rename the awk variable to avoid shadowing the gawk built-in

```diff
- awk -v ti="$tokens_in" -v to="$tokens_out" -v ir="$input_rate" -v or="$output_rate" \
-     'BEGIN { printf "%.6f", (ti * ir + to * or) / 1000000 }'
+ awk -v ti="$tokens_in" -v to="$tokens_out" -v ir="$input_rate" -v outr="$output_rate" \
+     'BEGIN { printf "%.6f", (ti * ir + to * outr) / 1000000 }'
```

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. Rename the awk variable "or" in calc_estimated_cost to a non-reserved name (e.g. out_rate), updating both the -v assignment and its use inside the awk expression, leaving the rest of the arithmetic unchanged.

.agents/scripts/supervisor/evaluate.sh (1)
793-796: Same `SCRIPT_DIR` path issue as memory-integration.sh.

`SCRIPT_DIR` in the supervisor context is `.agents/scripts/supervisor/`, so the primary lookup for `pattern-tracker-helper.sh` at `${SCRIPT_DIR}/pattern-tracker-helper.sh` will never resolve. The fix suggested in the memory-integration.sh review (using `${SCRIPT_DIR}/../pattern-tracker-helper.sh` as the primary path) applies here too.

Use parent-relative path as the primary lookup

```diff
  local pattern_helper="${SCRIPT_DIR}/pattern-tracker-helper.sh"
+ if [[ ! -x "$pattern_helper" ]]; then
+     pattern_helper="${SCRIPT_DIR}/../pattern-tracker-helper.sh"
+ fi
  if [[ ! -x "$pattern_helper" ]]; then
      pattern_helper="$HOME/.aidevops/agents/scripts/pattern-tracker-helper.sh"
  fi
```

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. Update the pattern_helper resolution in evaluate.sh to first try "${SCRIPT_DIR}/../pattern-tracker-helper.sh", keeping the existing $HOME fallback, so the parent-relative path is preferred when the -x check on the primary path fails.
Add estimated_cost field to pattern_metadata for ROI analysis across model tiers.
Changes
- `estimated_cost REAL` column added to `pattern_metadata` table, with migration for existing DBs
- `calc_estimated_cost()` auto-computes cost from `tokens_in + tokens_out` + model tier when recording patterns
- `roi` command: `pattern-tracker-helper.sh roi` shows a cost-per-task-type table + sonnet vs opus ROI verdict
- `record_evaluation_metadata()` in evaluate.sh now extracts `inputTokens`/`outputTokens` from worker JSON logs
- `store_success_pattern()` in memory-integration.sh routes through pattern-tracker for richer metadata
- `cmd_export` and `cmd_stats` include `estimated_cost` in output

Usage
ROI Analysis Output
The `roi` command answers: does opus's higher success rate justify its 10-15x cost for chore tasks?

Ref #1663