Conversation

marcusquinn (Owner) commented Feb 5, 2026

Summary

Selectively adopts high-value concepts from Claude-Flow v3 while maintaining aidevops' lightweight, shell-script-based philosophy. Cherry-picks concepts, not implementation.

Changes

Phase 1: Cost-Aware Model Routing (t102.1)

  • New tools/context/model-routing.md subagent with 5-tier guidance (haiku/flash/sonnet/pro/opus)
  • New /route command to suggest optimal model tier for a task
  • Routing rules, cost estimation table, and decision flowchart
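
The routing rules above boil down to "use the cheapest tier that can handle the task." As a rough sketch of that decision logic (the keyword-to-tier mapping here is illustrative only, not the actual rules in model-routing.md):

```shell
#!/usr/bin/env bash
# Illustrative tier heuristic -- the real rules live in
# tools/context/model-routing.md; these keywords are made-up examples.
route_tier() {
    local task="$1"
    case "$task" in
        *typo*|*rename*|*format*)   echo "haiku" ;;   # trivial mechanical edits
        *summarize*|*translate*)    echo "flash" ;;   # light transformations
        *architecture*|*security*)  echo "opus" ;;    # deep reasoning
        *)                          echo "sonnet" ;;  # standard default
    esac
}

route_tier "fix typo in README"      # haiku
route_tier "review security model"   # opus
```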

Phase 2: Semantic Memory with Embeddings (t102.2)

  • New memory-embeddings-helper.sh for opt-in vector similarity search
  • Uses all-MiniLM-L6-v2 (~90MB) via sentence-transformers
  • Added --semantic/--similar flags to memory-helper.sh recall
  • FTS5 keyword search remains the default; embeddings are opt-in
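
The opt-in design can be sketched as a simple delegation: keyword search is the default path, and the `--semantic`/`--similar` flags hand off to the embeddings engine. Both backends are stubbed below; this is a pattern sketch, not the actual memory-helper.sh code:

```shell
# Pattern sketch only -- fts_search and semantic_search are stand-ins for
# the real FTS5 query and the memory-embeddings-helper.sh delegation.
fts_search()      { echo "fts5:$1"; }
semantic_search() { echo "embeddings:$1"; }

recall() {
    local semantic=false query=""
    while [[ $# -gt 0 ]]; do
        case "$1" in
            --semantic|--similar) semantic=true; shift ;;
            *) query="$1"; shift ;;
        esac
    done
    if [[ "$semantic" == true ]]; then
        semantic_search "$query"   # opt-in vector path
    else
        fts_search "$query"        # FTS5 remains the default
    fi
}

recall "deploy checklist"             # fts5:deploy checklist
recall --semantic "deploy checklist"  # embeddings:deploy checklist
```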

Phase 3: Success Pattern Tracking (t102.3)

  • New pattern-tracker-helper.sh for recording and analyzing task outcomes
  • New /patterns command for querying patterns
  • Added SUCCESS_PATTERN and FAILURE_PATTERN memory types
  • Patterns tagged with task type and model tier for routing optimization
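
To illustrate the tagging, a pattern record might be assembled as below; the `task:<type>,model:<tier>` tag format and field layout are assumptions for the sketch, not the actual pattern-tracker-helper.sh schema:

```shell
# Hypothetical sketch of pattern tagging; not the real helper's schema.
record_pattern() {
    local outcome="$1" task_type="$2" model="$3" description="$4"
    local type
    if [[ "$outcome" == "success" ]]; then
        type="SUCCESS_PATTERN"
    else
        type="FAILURE_PATTERN"
    fi
    # Tags carry task type and model tier so routing can query them later.
    echo "${type}|task:${task_type},model:${model}|${description}"
}

record_pattern success bugfix sonnet "bisect found root cause quickly"
# SUCCESS_PATTERN|task:bugfix,model:sonnet|bisect found root cause quickly
```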

Phase 4: Documentation & Integration (t102.4)

  • New aidevops/claude-flow-comparison.md documenting adoption decisions
  • Updated memory/README.md with semantic search and pattern tracking docs
  • Updated subagent-index.toon with new scripts and subagents
  • Updated README.md counts (614+ subagents, 163 scripts, 28 commands)

What Was Skipped (and Why)

  • Swarm consensus: Async TOON mailbox is sufficient for aidevops scale
  • WASM transforms: Edit tool is already fast enough

Files Changed

  • 7 new files, 3 modified files
  • 1,340 lines added, 13 removed
  • All scripts pass ShellCheck, all markdown passes markdownlint

Summary by CodeRabbit

  • New Features

    • Added semantic search capability for memory and context recall using vector embeddings
    • Introduced pattern tracking to record and analyze success/failure patterns across tasks
    • Added cost-aware model routing guidance for selecting optimal model tiers based on task complexity
  • Documentation

    • New guides for semantic memory search, pattern analysis, and model tier recommendations
    • Added feature comparison documentation and routing decision flowcharts

Add model-routing.md subagent with tier guidance (haiku/flash/sonnet/pro/opus),
routing rules, cost estimation, and decision flowchart. Add /route command
for suggesting optimal model tier for a given task description.

Part of t102: Claude-Flow Inspirations - Selective Feature Adoption.
Create memory-embeddings-helper.sh for opt-in vector similarity search using
all-MiniLM-L6-v2 (~90MB). Supports setup, index, search, add, status, rebuild.
Add --semantic/--similar flags to memory-helper.sh recall command to delegate
to embeddings engine. FTS5 remains the default; embeddings are opt-in.

Part of t102: Claude-Flow Inspirations - Selective Feature Adoption.
Create pattern-tracker-helper.sh for recording and analyzing task outcome
patterns (record, analyze, suggest, stats). Add /patterns command for
querying patterns. Extend memory-helper.sh with SUCCESS_PATTERN and
FAILURE_PATTERN types. Patterns are tagged with task type and model tier
for routing optimization.

Part of t102: Claude-Flow Inspirations - Selective Feature Adoption.
Create aidevops/claude-flow-comparison.md documenting feature adoption
decisions. Update memory/README.md with semantic search and pattern tracking
docs. Update subagent-index.toon with new scripts and subagents. Update
README.md counts (614+ subagents, 163 scripts, 28 commands).

Part of t102: Claude-Flow Inspirations - Selective Feature Adoption.
coderabbitai bot commented Feb 5, 2026

Walkthrough

This PR introduces semantic memory search via embeddings, cost-aware model routing, and success/failure pattern tracking. It adds documentation for architectural comparisons, new Bash helper scripts for embeddings and pattern tracking, command reference files, and updates existing memory utilities to support optional semantic search alongside existing full-text search capabilities.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Documentation & Comparison**: `.agent/aidevops/claude-flow-comparison.md`, `.agent/tools/context/model-routing.md` | Architectural comparison between the Claude-Flow and aidevops systems; cost-aware model routing scheme with tier recommendations (haiku/flash/sonnet/pro/opus) based on task characteristics and cost analysis. |
| **Command Reference**: `.agent/scripts/commands/route.md`, `.agent/scripts/commands/patterns.md` | Subagent workflows for model tier recommendations and pattern analysis with structured YAML frontmatter and templated output formatting. |
| **Semantic Memory Embeddings**: `.agent/scripts/memory-embeddings-helper.sh` | End-to-end embeddings orchestration: dependency checks, Python embedding engine (all-MiniLM-L6-v2), SQLite indexing, semantic search with cosine similarity ranking, and setup/rebuild workflows. |
| **Pattern Tracking**: `.agent/scripts/pattern-tracker-helper.sh` | Pattern recorder and analyzer for success/failure outcomes; tracks task type, model, descriptions, and tags; provides statistical summaries and pattern suggestions with optional jq-based formatting. |
| **Existing Memory Integration**: `.agent/scripts/memory-helper.sh` | Added semantic recall via the `--semantic` flag; extended type validation to include SUCCESS_PATTERN and FAILURE_PATTERN; delegates to the embeddings helper when semantic mode is enabled. |
| **Index & Manifest**: `.agent/subagent-index.toon`, `README.md` | Updated subagent counts (572→614+), helper script counts (157→163), and slash command counts (22→28) to reflect new documentation and helpers. |
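
The walkthrough mentions cosine-similarity ranking in the embeddings helper. As a minimal illustration of that ranking math (the real engine uses numpy inside an embedded Python script; this awk sketch only shows the formula):

```shell
# Cosine similarity of two space-separated vectors; illustration only.
cosine() {
    awk -v a="$1" -v b="$2" 'BEGIN {
        n = split(a, x, " "); split(b, y, " ")
        for (i = 1; i <= n; i++) { dot += x[i]*y[i]; na += x[i]^2; nb += y[i]^2 }
        printf "%.4f", dot / (sqrt(na) * sqrt(nb))
    }'
}

cosine "1 2 3" "4 5 6"   # 0.9746 (nearly parallel)
cosine "1 0" "0 1"       # 0.0000 (orthogonal)
```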

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🧠✨ Embeddings flow through SQLite dreams,
Patterns tracked in success streams,
Models routed by cost and task,
Semantic search—no more to ask.
Memory evolved with wisdom's embrace. 🚀

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 70.37%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately summarizes the main changes: introducing Claude-Flow inspired features (model routing, semantic memory, pattern tracking) with a clear reference ticket. |


@gemini-code-assist

Summary of Changes

Hello @marcusquinn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the aidevops framework by selectively integrating advanced features inspired by Claude-Flow v3. It introduces intelligent model routing to optimize cost and performance, enriches memory capabilities with optional semantic search using vector embeddings, and enables self-improvement through systematic tracking of successful and failed task patterns. These additions aim to make the agent more efficient and adaptive, providing better guidance for task execution and learning from past experiences, all while maintaining the project's lightweight design philosophy.

Highlights

  • Cost-Aware Model Routing: Introduces a new subagent (tools/context/model-routing.md) and a /route command to suggest optimal model tiers (haiku/flash/sonnet/pro/opus) based on task complexity and cost, aiming to use the cheapest model that can handle a task effectively.
  • Semantic Memory with Embeddings: Adds an opt-in memory-embeddings-helper.sh for vector similarity search using the all-MiniLM-L6-v2 model. This integrates --semantic and --similar flags into memory-helper.sh recall for meaning-based memory retrieval, complementing the existing FTS5 keyword search.
  • Success Pattern Tracking: Implements pattern-tracker-helper.sh and a /patterns command to record and analyze SUCCESS_PATTERN and FAILURE_PATTERN memories. These patterns are tagged with task type and model tier to provide guidance for future tasks.
  • Documentation & Integration: Includes a new aidevops/claude-flow-comparison.md detailing feature adoption decisions from Claude-Flow, updates memory/README.md with semantic search and pattern tracking docs, and revises subagent-index.toon and README.md to reflect the new scripts and subagents.
Changelog
  • .agent/aidevops/claude-flow-comparison.md
    • New file detailing a comparison between Claude-Flow and aidevops, explaining which features were adopted and why.
  • .agent/memory/README.md
    • Updated to document the new semantic search functionality and pattern tracking, including usage examples and the addition of embeddings.db to storage location.
    • New memory types SUCCESS_PATTERN and FAILURE_PATTERN are added to the valid types list.
  • .agent/scripts/commands/patterns.md
    • New command definition for /patterns, which analyzes and displays success/failure patterns from memory to guide task approaches.
  • .agent/scripts/commands/route.md
    • New command definition for /route, designed to suggest the optimal model tier for a given task description based on predefined routing rules.
  • .agent/scripts/memory-embeddings-helper.sh
    • New shell script implementing the core logic for semantic memory, including setup, indexing, searching, and status checks for vector embeddings using Python and sentence-transformers.
  • .agent/scripts/memory-helper.sh
    • Modified to incorporate SUCCESS_PATTERN and FAILURE_PATTERN into its list of valid learning types.
    • Added --semantic and --similar flags for delegating to the new memory-embeddings-helper.sh for semantic recall.
  • .agent/scripts/pattern-tracker-helper.sh
    • New shell script for recording, analyzing, and suggesting success/failure patterns, integrating with memory-helper.sh to store pattern data.
  • .agent/subagent-index.toon
    • Updated to register the new aidevops/claude-flow-comparison.md subagent.
    • Updated to include the tools/context/model-routing.md context tool.
    • Updated to include the new memory-embeddings-helper.sh and pattern-tracker-helper.sh scripts.
  • .agent/tools/context/model-routing.md
    • New subagent providing comprehensive guidance on cost-aware model routing, including model tiers, routing rules, cost estimations, and a decision flowchart.
  • README.md
    • Updated the statistics for subagents, helper scripts, and slash commands to reflect the new additions, increasing the counts from 572+ to 614+ subagents, 157 to 163 helper scripts, and 22 to 28 slash commands.
Activity
  • The pull request introduces 7 new files and modifies 3 existing ones.
  • A total of 1,340 lines were added, and 13 lines were removed across the changes.
  • All new and modified scripts have passed ShellCheck linting, and all markdown files adhere to markdownlint standards.

@github-actions

github-actions bot commented Feb 5, 2026

🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report

[INFO] Latest Quality Status:
SonarCloud: 0 bugs, 0 vulnerabilities, 2 code smells

[INFO] Recent monitoring activity:
Thu Feb 5 19:39:18 UTC 2026: Code review monitoring started
Thu Feb 5 19:39:19 UTC 2026: SonarCloud - Bugs: 0, Vulnerabilities: 0, Code Smells: 2
Thu Feb 5 19:39:19 UTC 2026: Qlty - 0 issues found, auto-formatting applied
Thu Feb 5 19:39:21 UTC 2026: Codacy analysis completed with auto-fixes
Thu Feb 5 19:39:22 UTC 2026: Applied 1 automatic fixes

📈 Current Quality Metrics

  • BUGS: 0
  • CODE SMELLS: 2
  • VULNERABILITIES: 0

Generated on: Thu Feb 5 19:39:24 UTC 2026


Generated by AI DevOps Framework Code Review Monitoring

@sonarqubecloud

sonarqubecloud bot commented Feb 5, 2026


gemini-code-assist bot left a comment


Code Review

This pull request introduces several significant features inspired by Claude-Flow, including cost-aware model routing, semantic memory with embeddings, and success/failure pattern tracking. The changes are well-structured, adding new helper scripts, commands, and comprehensive documentation. The implementation is generally robust, with thoughtful additions like fallbacks for missing dependencies. My review focuses on a few areas for improvement in the new helper scripts, primarily concerning performance optimization and enhanced usability.

Comment on lines +175 to +191
mem_conn = sqlite3.connect(memory_db)
output = []
for memory_id, score in top_results:
row = mem_conn.execute(
"SELECT content, type, tags, confidence, created_at FROM learnings WHERE id = ?",
(memory_id,)
).fetchone()
if row:
output.append({
"id": memory_id,
"content": row[0],
"type": row[1],
"tags": row[2],
"confidence": row[3],
"created_at": row[4],
"score": round(score, 4),
})


Severity: high

The current implementation to fetch memory content for top results in cmd_search uses a loop to execute a SQL query for each memory ID. This is an N+1 query problem and can be inefficient, especially as the number of memories grows. It's better to fetch all the required memories in a single query using IN (...).

    mem_conn = sqlite3.connect(memory_db)
    output = []
    if top_results:
        memory_ids = [r[0] for r in top_results]
        placeholders = ",".join("?" for _ in memory_ids)
        rows = mem_conn.execute(
            f"SELECT id, content, type, tags, confidence, created_at FROM learnings WHERE id IN ({placeholders})",
            memory_ids
        ).fetchall()

        rows_by_id = {row[0]: row for row in rows}

        for memory_id, score in top_results:
            row = rows_by_id.get(memory_id)
            if row:
                output.append({
                    "id": memory_id,
                    "content": row[1],
                    "type": row[2],
                    "tags": row[3],
                    "confidence": row[4],
                    "created_at": row[5],
                    "score": round(score, 4),
                })

# Create the Python embedding engine
# Kept as a single file for simplicity
#######################################
create_python_engine() {


Severity: medium

In the embedded Python script, import hashlib is called inside cmd_index (line 212) and cmd_add (line 257). For better performance and to follow Python best practices, this import should be moved to the top of the script with the other global imports (e.g., around line 104).

Comment on lines +167 to +174
else
if [[ "$success_results" != "[]" && -n "$success_results" ]]; then
echo "$success_results"
else
echo " (none recorded)"
success_count=0
fi
fi


Severity: medium

If jq is not available, the script currently prints the raw JSON output, which is not user-friendly. It would be better to provide a fallback using Python to parse and format the output, similar to how it's done in memory-embeddings-helper.sh. This would improve usability for users who don't have jq installed. This same improvement should be applied to the failure patterns section below (lines 195-202) and in the cmd_suggest function.

Suggested change
else
if [[ "$success_results" != "[]" && -n "$success_results" ]]; then
echo "$success_results"
else
echo " (none recorded)"
success_count=0
fi
fi
else
# Python fallback if jq is not available
success_count=$(echo "$success_results" | python3 -c "import json, sys; print(len(json.load(sys.stdin)))" 2>/dev/null || echo "0")
if [[ "$success_count" -gt 0 ]]; then
python3 -c "
import json, sys
results = json.load(sys.stdin)
for r in results:
print(f' + {r[\"content\"]}')
" <<< "$success_results"
else
echo " (none recorded)"
fi
fi

Comment on lines +248 to +250
else
echo " (install jq for formatted output)"
fi


Severity: medium

Similar to the cmd_analyze function, the fallback for when jq is not available is not very helpful. A Python-based fallback should be implemented to parse and format the JSON output for better usability. This also applies to the failure patterns section below (lines 267-269).

Suggested change
else
echo " (install jq for formatted output)"
fi
else
# Python fallback if jq is not available
success_count=$(echo "$success_results" | python3 -c "import json, sys; print(len(json.load(sys.stdin)))" 2>/dev/null || echo "0")
if [[ "$success_count" -gt 0 ]]; then
python3 -c "
import json, sys
results = json.load(sys.stdin)
for r in results:
score = r.get('score', 'N/A')
print(f' + {r[\"content\"]} (score: {score})')
" <<< "$success_results"
else
echo " (no matching success patterns)"
fi
fi

marcusquinn merged commit 8c29a97 into main Feb 5, 2026
10 of 11 checks passed

coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
.agent/scripts/memory-helper.sh (1)

370-437: ⚠️ Potential issue | 🟡 Minor

Guard against silent filter misbehavior in semantic search mode.

The embeddings helper supports only --limit and --json; it ignores --type, --max-age-days, and --project entirely. Additionally, semantic search does not update access counters. Reject these unsupported filters with a clear error message to prevent misleading results and maintain the reliability standards outlined in automation script guidelines.

Suggested guard
     # Handle --semantic mode (delegate to embeddings helper)
     if [[ "$semantic_mode" == true ]]; then
+        if [[ -n "$type_filter" || -n "$max_age_days" || -n "$project_filter" ]]; then
+            log_error "--semantic does not currently support --type/--max-age-days/--project filters"
+            return 1
+        fi
         local embeddings_script
         embeddings_script="$(dirname "$0")/memory-embeddings-helper.sh"
         if [[ ! -x "$embeddings_script" ]]; then
             log_error "Semantic search not available. Run: memory-embeddings-helper.sh setup"
             return 1
         fi
🤖 Fix all issues with AI agents
In @.agent/scripts/memory-embeddings-helper.sh:
- Around line 506-511: The script sets total_memories from sqlite3 which can be
"?" on error and then uses arithmetic $(( total_memories - count )), causing a
syntax error; update the block around MEMORY_DB to validate total_memories
(e.g., [[ "$total_memories" =~ ^[0-9]+$ ]]) before performing arithmetic,
compute unindexed only when numeric, otherwise set unindexed to "?" (or a safe
fallback) and call log_info "Total memories: $total_memories ($unindexed
unindexed)"; reference total_memories, count, MEMORY_DB, and log_info when
making the change.

In @.agent/scripts/pattern-tracker-helper.sh:
- Around line 278-306: The cmd_stats function silently returns zeros when jq is
missing because it pipes MEMORY_HELPER output to jq 'length' with a fallback
that masks errors; update cmd_stats (and the loop over VALID_TASK_TYPES) to
either (A) enforce jq presence at the start of the function and print a clear
warning/error if not found, or (B) implement a Python fallback that reads the
JSON returned by MEMORY_HELPER and computes lengths (mirroring cmd_search's
approach) for success_count, failure_count, and each task_type; reference the
cmd_stats function, the MEMORY_HELPER recall invocations, and VALID_TASK_TYPES
loop to locate and replace the jq 'length' usage so the function reports
accurate counts or fails loudly with a diagnostic message.

In `@README.md`:
- Around line 94-97: The README has mismatched helper-script counts; update the
Architecture section's helper/script count to match the Quick Reference (change
the lower number to "163 helper scripts in `.agent/scripts/`") so both sections
consistently read "163 helper scripts in `.agent/scripts/`"; locate the
Architecture block that lists subagent/script counts and update the numeric
value and any adjacent wording to match the Quick Reference exact phrasing.
🧹 Nitpick comments (7)
.agent/scripts/commands/patterns.md (1)

12-41: Replace inline command snippets with file:line references.

This command doc embeds executable examples directly; please point to the authoritative script locations instead to keep disclosure progressive and examples sourced.

🔧 Suggested doc tweak
-```bash
-~/.aidevops/agents/scripts/pattern-tracker-helper.sh suggest "$ARGUMENTS"
-```
+See file:.agent/scripts/pattern-tracker-helper.sh:<line> (cmd_suggest usage).

-```bash
-~/.aidevops/agents/scripts/pattern-tracker-helper.sh stats
-~/.aidevops/agents/scripts/pattern-tracker-helper.sh analyze --limit 5
-```
+See file:.agent/scripts/pattern-tracker-helper.sh:<line> (cmd_stats) and file:.agent/scripts/pattern-tracker-helper.sh:<line> (cmd_analyze).

-```text
-No patterns recorded yet. Patterns are recorded automatically during
-development loops, or manually with:
-
-  pattern-tracker-helper.sh record --outcome success \
-      --task-type bugfix --model sonnet \
-      --description "Structured debugging approach found root cause quickly"
-```
+No patterns recorded yet. For manual recording, see file:.agent/scripts/pattern-tracker-helper.sh:<line> (cmd_record usage).

As per coding guidelines, Apply progressive disclosure pattern by using pointers to subagents rather than including inline content in agent documentation; Include code examples only when authoritative; use file:line references to point to actual implementation instead of inline code snippets.

.agent/tools/context/model-routing.md (1)

75-92: Use an authoritative frontmatter reference instead of the inline YAML example.

Replace the example block with a pointer to a real subagent frontmatter so the doc stays authoritative.

🔧 Suggested doc tweak
-```yaml
----
-description: Simple text formatting utility
-mode: subagent
-model: haiku
-tools:
-  read: true
----
-```
+See file:.agent/scripts/commands/route.md:1-6 for an authoritative frontmatter example that includes `model:`.

As per coding guidelines, Apply progressive disclosure pattern by using pointers to subagents rather than including inline content in agent documentation; Include code examples only when authoritative; use file:line references to point to actual implementation instead of inline code snippets.

.agent/scripts/pattern-tracker-helper.sh (2)

51-65: Unknown arguments silently consumed in positional fallback.

When an unrecognized flag is passed (e.g., --typo), it falls through to the * case and may be assigned to description if empty, or silently skipped. This could mask user typos.

Consider logging a warning for arguments starting with -- that aren't recognized:

🛡️ Optional: warn on unknown flags
             *)
-                if [[ -z "$description" ]]; then
+                if [[ "$1" == --* ]]; then
+                    log_warn "Unknown option ignored: $1"
+                elif [[ -z "$description" ]]; then
                     description="$1"
                 fi
                 shift
                 ;;

240-269: cmd_suggest provides no useful output without jq.

When jq is unavailable, lines 248-250 and 267-269 only print "(install jq for formatted output)" without showing any results. Unlike cmd_analyze which at least dumps raw JSON, this leaves the user with no actionable information.

Consider adding a Python fallback (since Python is likely available given the embeddings helper dependency) or at minimum echo the raw JSON:

♻️ Suggested fallback to show raw results
     else
-        echo "  (install jq for formatted output)"
+        if [[ "$success_results" != "[]" && -n "$success_results" ]]; then
+            echo "$success_results"
+        else
+            echo "  (no matching success patterns)"
+        fi
     fi
.agent/scripts/memory-embeddings-helper.sh (3)

326-328: pip install may require elevated privileges or fail in restricted environments.

Line 327 runs pip install globally which could fail without sudo or in managed Python environments. Consider suggesting --user flag or checking for a virtual environment:

🛡️ Safer pip install with --user fallback
     log_info "Installing Python dependencies..."
-    pip install --quiet sentence-transformers numpy
+    if ! pip install --quiet sentence-transformers numpy 2>/dev/null; then
+        log_info "Retrying with --user flag..."
+        pip install --quiet --user sentence-transformers numpy
+    fi

154-172: Linear scan of all embeddings for each search query.

The Python cmd_search loads all embeddings into memory and computes cosine similarity against each one. This is fine for small indices but will degrade with thousands of memories.

For now this is acceptable given the opt-in nature and expected scale, but consider documenting the performance characteristics or adding a TODO for approximate nearest neighbor (ANN) indexing if usage grows.


211-213: Import inside loop is suboptimal but Python caches it.

The import hashlib on line 212 is inside the loop. Python's import mechanism caches modules so this won't cause repeated disk reads, but moving it to the top of the function would be cleaner.

♻️ Move import to function top
 def cmd_index(memory_db: str, embeddings_db: str):
+    import hashlib
     mem_conn = sqlite3.connect(memory_db)
     rows = mem_conn.execute("SELECT id, content, type, tags FROM learnings").fetchall()
     mem_conn.close()
     ...
     for memory_id, content, mem_type, tags in rows:
-        import hashlib
         content_hash = hashlib.md5(content.encode()).hexdigest()

Same applies to line 257 in cmd_add.

Comment on lines +506 to +511
# Compare with memory DB
if [[ -f "$MEMORY_DB" ]]; then
local total_memories
total_memories=$(sqlite3 "$MEMORY_DB" "SELECT COUNT(*) FROM learnings;" 2>/dev/null || echo "?")
log_info "Total memories: $total_memories ($(( total_memories - count )) unindexed)"
fi

⚠️ Potential issue | 🟡 Minor

Arithmetic operation fails if total_memories is "?".

Line 509 sets total_memories to "?" on error, but line 510 attempts arithmetic $(( total_memories - count )) which will fail with a syntax error.

🐛 Fix: guard arithmetic with numeric check
     if [[ -f "$MEMORY_DB" ]]; then
         local total_memories
         total_memories=$(sqlite3 "$MEMORY_DB" "SELECT COUNT(*) FROM learnings;" 2>/dev/null || echo "?")
-        log_info "Total memories: $total_memories ($(( total_memories - count )) unindexed)"
+        if [[ "$total_memories" =~ ^[0-9]+$ ]]; then
+            log_info "Total memories: $total_memories ($(( total_memories - count )) unindexed)"
+        else
+            log_info "Total memories: unknown"
+        fi
     fi

Comment on lines +278 to +306
cmd_stats() {
echo ""
echo -e "${CYAN}=== Pattern Statistics ===${NC}"
echo ""

# Count by type
local success_count failure_count
success_count=$("$MEMORY_HELPER" recall --query "SUCCESS_PATTERN" --type SUCCESS_PATTERN --limit 100 --json 2>/dev/null | jq 'length' 2>/dev/null || echo "0")
failure_count=$("$MEMORY_HELPER" recall --query "FAILURE_PATTERN" --type FAILURE_PATTERN --limit 100 --json 2>/dev/null | jq 'length' 2>/dev/null || echo "0")

echo " Success patterns: $success_count"
echo " Failure patterns: $failure_count"
echo " Total patterns: $(( success_count + failure_count ))"
echo ""

# Show task type breakdown if jq available
if command -v jq &>/dev/null; then
echo " Task types with patterns:"
for task_type in $VALID_TASK_TYPES; do
local type_count
type_count=$("$MEMORY_HELPER" recall --query "task:$task_type" --limit 100 --json 2>/dev/null | jq 'length' 2>/dev/null || echo "0")
if [[ "$type_count" -gt 0 ]]; then
echo " $task_type: $type_count"
fi
done
echo ""
fi
return 0
}

⚠️ Potential issue | 🟡 Minor

cmd_stats fails silently when jq is unavailable.

Lines 285-286 use jq 'length' with || echo "0" fallback, which masks the actual error and reports zero patterns regardless of actual content. Additionally, the task-type breakdown loop (lines 296-302) only runs when jq is available, so the output is inconsistent.

For a script aiming for zero technical debt, consider either requiring jq for this command or providing a Python-based fallback similar to cmd_search.

🛠️ Suggested fix: warn user or use Python fallback
 cmd_stats() {
     echo ""
     echo -e "${CYAN}=== Pattern Statistics ===${NC}"
     echo ""

+    if ! command -v jq &>/dev/null; then
+        log_warn "jq not installed - stats may be incomplete. Install jq for full functionality."
+    fi
+
     # Count by type
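
The fallback the review asks for can be sketched as a small helper that prefers jq and falls back to python3 for counting JSON array length (`json_length` is a hypothetical helper name, not part of the PR):

```shell
# Count elements of a JSON array read from stdin; prefers jq, falls back
# to python3 when jq is absent. json_length is a hypothetical helper name.
json_length() {
    if command -v jq >/dev/null 2>&1; then
        jq 'length'
    else
        python3 -c 'import json, sys; print(len(json.load(sys.stdin)))'
    fi
}

echo '[{"id":1},{"id":2}]' | json_length   # 2
```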

Comment on lines 94 to +97
     - Primary agents (Build+, SEO, Marketing, etc.) with @plan-plus subagent for planning-only mode
-    - 572+ subagent markdown files organized by domain
-    - 157 helper scripts in `.agent/scripts/`
-    - 22 slash commands for common workflows
+    - 614+ subagent markdown files organized by domain
+    - 163 helper scripts in `.agent/scripts/`
+    - 28 slash commands for common workflows

⚠️ Potential issue | 🟡 Minor

Sync updated helper/script counts across README sections.

The Quick Reference now says 163 helper scripts, but the Architecture section later still shows a lower helper-script count; please align those so readers don’t see conflicting numbers.

