feat: Claude-Flow inspired features - model routing, semantic memory, pattern tracking (t102) #341
Conversation
Add model-routing.md subagent with tier guidance (haiku/flash/sonnet/pro/opus), routing rules, cost estimation, and decision flowchart. Add /route command for suggesting optimal model tier for a given task description. Part of t102: Claude-Flow Inspirations - Selective Feature Adoption.
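As a rough sketch of the routing idea (the keyword heuristics and the `suggest_tier` function below are invented for illustration and are not the shipped model-routing.md rules or the /route implementation), tier selection can be pictured as a keyword-to-tier mapping:

```bash
#!/usr/bin/env bash
# Hypothetical illustration only: the real routing rules live in
# tools/context/model-routing.md and the /route command.
suggest_tier() {
  local task="${1,,}"   # lowercase for matching
  case "$task" in
    *typo*|*rename*|*"format"*)          echo "haiku"  ;;  # trivial, mechanical edits
    *summar*|*changelog*)                echo "flash"  ;;  # light summarisation
    *plan*|*"multi-step"*)               echo "pro"    ;;  # broader planning work
    *architecture*|*"security review"*)  echo "opus"   ;;  # deep reasoning, high stakes
    *)                                   echo "sonnet" ;;  # day-to-day coding default
  esac
}

suggest_tier "Fix typo in README"             # -> haiku
suggest_tier "Plan multi-step data migration" # -> pro
```

Per the commit message, the actual subagent also covers cost estimation and a decision flowchart, which this sketch does not attempt.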
Create memory-embeddings-helper.sh for opt-in vector similarity search using all-MiniLM-L6-v2 (~90MB). Supports setup, index, search, add, status, rebuild. Add --semantic/--similar flags to memory-helper.sh recall command to delegate to embeddings engine. FTS5 remains the default; embeddings are opt-in. Part of t102: Claude-Flow Inspirations - Selective Feature Adoption.
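A possible command-line walkthrough of the opt-in workflow (the subcommand names and the `--semantic`/`--limit`/`--json` flags come from the descriptions in this PR; the install path, query strings, and any results are illustrative assumptions):

```bash
# One-time, opt-in setup: downloads all-MiniLM-L6-v2 (~90MB) and Python deps
~/.aidevops/agents/scripts/memory-embeddings-helper.sh setup

# Build the vector index over existing memories, then verify coverage
~/.aidevops/agents/scripts/memory-embeddings-helper.sh index
~/.aidevops/agents/scripts/memory-embeddings-helper.sh status

# Search by meaning rather than exact keywords
~/.aidevops/agents/scripts/memory-embeddings-helper.sh search "flaky CI from network timeouts" --limit 5

# Or stay on the existing recall interface: FTS5 remains the default,
# and --semantic delegates to the embeddings engine
~/.aidevops/agents/scripts/memory-helper.sh recall --query "intermittent test failures" --semantic --limit 5
```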
Create pattern-tracker-helper.sh for recording and analyzing task outcome patterns (record, analyze, suggest, stats). Add /patterns command for querying patterns. Extend memory-helper.sh with SUCCESS_PATTERN and FAILURE_PATTERN types. Patterns are tagged with task type and model tier for routing optimization. Part of t102: Claude-Flow Inspirations - Selective Feature Adoption.
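A hypothetical session showing how these pieces could fit together (the subcommands and flags are named in this PR; the concrete values, and the assumption that "refactor" is among the valid task types, are illustrative only):

```bash
# Record outcomes as tasks finish, tagged with task type and model tier
~/.aidevops/agents/scripts/pattern-tracker-helper.sh record --outcome success \
  --task-type bugfix --model sonnet \
  --description "Structured debugging approach found root cause quickly"

~/.aidevops/agents/scripts/pattern-tracker-helper.sh record --outcome failure \
  --task-type refactor --model haiku \
  --description "Tier too small for a multi-file refactor; escalate to sonnet"

# Review what has been learned so far
~/.aidevops/agents/scripts/pattern-tracker-helper.sh stats
~/.aidevops/agents/scripts/pattern-tracker-helper.sh analyze --limit 5

# Ask for guidance before starting a similar task (also surfaced via /patterns)
~/.aidevops/agents/scripts/pattern-tracker-helper.sh suggest "fix failing integration test"
```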
Create aidevops/claude-flow-comparison.md documenting feature adoption decisions. Update memory/README.md with semantic search and pattern tracking docs. Update subagent-index.toon with new scripts and subagents. Update README.md counts (614+ subagents, 163 scripts, 28 commands). Part of t102: Claude-Flow Inspirations - Selective Feature Adoption.
Walkthrough

This PR introduces semantic memory search via embeddings, cost-aware model routing, and success/failure pattern tracking. It adds documentation for architectural comparisons, new Bash helper scripts for embeddings and pattern tracking, command reference files, and updates to existing memory utilities to support optional semantic search alongside existing full-text search capabilities.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Summary of Changes

Hello @marcusquinn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the aidevops framework by selectively integrating advanced features inspired by Claude-Flow v3. It introduces intelligent model routing to optimize cost and performance, enriches memory capabilities with optional semantic search using vector embeddings, and enables self-improvement through systematic tracking of successful and failed task patterns. These additions aim to make the agent more efficient and adaptive, providing better guidance for task execution and learning from past experiences, all while maintaining the project's lightweight design philosophy.
🔍 Code Quality Report

[MONITOR] Code Review Monitoring Report
[INFO] Latest Quality Status:
[INFO] Recent monitoring activity:

📈 Current Quality Metrics

Generated on: Thu Feb 5 19:39:24 UTC 2026
Generated by AI DevOps Framework Code Review Monitoring
Code Review
This pull request introduces several significant features inspired by Claude-Flow, including cost-aware model routing, semantic memory with embeddings, and success/failure pattern tracking. The changes are well-structured, adding new helper scripts, commands, and comprehensive documentation. The implementation is generally robust, with thoughtful additions like fallbacks for missing dependencies. My review focuses on a few areas for improvement in the new helper scripts, primarily concerning performance optimization and enhanced usability.
```python
mem_conn = sqlite3.connect(memory_db)
output = []
for memory_id, score in top_results:
    row = mem_conn.execute(
        "SELECT content, type, tags, confidence, created_at FROM learnings WHERE id = ?",
        (memory_id,)
    ).fetchone()
    if row:
        output.append({
            "id": memory_id,
            "content": row[0],
            "type": row[1],
            "tags": row[2],
            "confidence": row[3],
            "created_at": row[4],
            "score": round(score, 4),
        })
```
The current implementation fetches memory content for the top results in cmd_search by executing a SQL query for each memory ID in a loop. This is an N+1 query problem and becomes inefficient as the number of memories grows. It is better to fetch all the required memories in a single query using IN (...).
```python
mem_conn = sqlite3.connect(memory_db)
output = []
if top_results:
    memory_ids = [r[0] for r in top_results]
    placeholders = ",".join("?" for _ in memory_ids)
    rows = mem_conn.execute(
        f"SELECT id, content, type, tags, confidence, created_at FROM learnings WHERE id IN ({placeholders})",
        memory_ids
    ).fetchall()
    rows_by_id = {row[0]: row for row in rows}
    for memory_id, score in top_results:
        row = rows_by_id.get(memory_id)
        if row:
            output.append({
                "id": memory_id,
                "content": row[1],
                "type": row[2],
                "tags": row[3],
                "confidence": row[4],
                "created_at": row[5],
                "score": round(score, 4),
            })
```
```bash
    else
        if [[ "$success_results" != "[]" && -n "$success_results" ]]; then
            echo "$success_results"
        else
            echo " (none recorded)"
            success_count=0
        fi
    fi
```
If jq is not available, the script currently prints the raw JSON output, which is not user-friendly. It would be better to provide a fallback using Python to parse and format the output, similar to how it's done in memory-embeddings-helper.sh. This would improve usability for users who don't have jq installed. This same improvement should be applied to the failure patterns section below (lines 195-202) and in the cmd_suggest function.
```diff
-    else
-        if [[ "$success_results" != "[]" && -n "$success_results" ]]; then
-            echo "$success_results"
-        else
-            echo " (none recorded)"
-            success_count=0
-        fi
-    fi
+    else
+        # Python fallback if jq is not available
+        success_count=$(echo "$success_results" | python3 -c "import json, sys; print(len(json.load(sys.stdin)))" 2>/dev/null || echo "0")
+        if [[ "$success_count" -gt 0 ]]; then
+            python3 -c "
+import json, sys
+results = json.load(sys.stdin)
+for r in results:
+    print(f' + {r[\"content\"]}')
+" <<< "$success_results"
+        else
+            echo " (none recorded)"
+        fi
+    fi
```
```bash
    else
        echo " (install jq for formatted output)"
    fi
```
Similar to the cmd_analyze function, the fallback for when jq is not available is not very helpful. A Python-based fallback should be implemented to parse and format the JSON output for better usability. This also applies to the failure patterns section below (lines 267-269).
```diff
-    else
-        echo " (install jq for formatted output)"
-    fi
+    else
+        # Python fallback if jq is not available
+        success_count=$(echo "$success_results" | python3 -c "import json, sys; print(len(json.load(sys.stdin)))" 2>/dev/null || echo "0")
+        if [[ "$success_count" -gt 0 ]]; then
+            python3 -c "
+import json, sys
+results = json.load(sys.stdin)
+for r in results:
+    score = r.get('score', 'N/A')
+    print(f' + {r[\"content\"]} (score: {score})')
+" <<< "$success_results"
+        else
+            echo " (no matching success patterns)"
+        fi
+    fi
```
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.agent/scripts/memory-helper.sh (1)
370-437: ⚠️ Potential issue | 🟡 Minor: Guard against silent filter misbehavior in semantic search mode.

The embeddings helper supports only `--limit` and `--json`; it ignores `--type`, `--max-age-days`, and `--project` entirely. Additionally, semantic search does not update access counters. Reject these unsupported filters with a clear error message to prevent misleading results and maintain the reliability standards outlined in the automation script guidelines.

Suggested guard

```diff
     # Handle --semantic mode (delegate to embeddings helper)
     if [[ "$semantic_mode" == true ]]; then
+        if [[ -n "$type_filter" || -n "$max_age_days" || -n "$project_filter" ]]; then
+            log_error "--semantic does not currently support --type/--max-age-days/--project filters"
+            return 1
+        fi
         local embeddings_script
         embeddings_script="$(dirname "$0")/memory-embeddings-helper.sh"
         if [[ ! -x "$embeddings_script" ]]; then
             log_error "Semantic search not available. Run: memory-embeddings-helper.sh setup"
             return 1
         fi
```
🧹 Nitpick comments (7)
.agent/scripts/commands/patterns.md (1)

12-41: Replace inline command snippets with file:line references.

This command doc embeds executable examples directly; please point to the authoritative script locations instead to keep disclosure progressive and examples sourced.

🔧 Suggested doc tweak

````diff
-```bash
-~/.aidevops/agents/scripts/pattern-tracker-helper.sh suggest "$ARGUMENTS"
-```
+See file:.agent/scripts/pattern-tracker-helper.sh:<line> (cmd_suggest usage).
-```bash
-~/.aidevops/agents/scripts/pattern-tracker-helper.sh stats
-~/.aidevops/agents/scripts/pattern-tracker-helper.sh analyze --limit 5
-```
+See file:.agent/scripts/pattern-tracker-helper.sh:<line> (cmd_stats) and file:.agent/scripts/pattern-tracker-helper.sh:<line> (cmd_analyze).
-```text
-No patterns recorded yet. Patterns are recorded automatically during
-development loops, or manually with:
-
-  pattern-tracker-helper.sh record --outcome success \
-    --task-type bugfix --model sonnet \
-    --description "Structured debugging approach found root cause quickly"
-```
+No patterns recorded yet. For manual recording, see file:.agent/scripts/pattern-tracker-helper.sh:<line> (cmd_record usage).
````

As per coding guidelines: apply the progressive disclosure pattern by using pointers to subagents rather than including inline content in agent documentation; include code examples only when authoritative; use file:line references to point to actual implementation instead of inline code snippets.

.agent/tools/context/model-routing.md (1)

75-92: Use an authoritative frontmatter reference instead of the inline YAML example.

Replace the example block with a pointer to a real subagent frontmatter so the doc stays authoritative.

🔧 Suggested doc tweak

````diff
-```yaml
----
-description: Simple text formatting utility
-mode: subagent
-model: haiku
-tools:
-  read: true
----
-```
+See file:.agent/scripts/commands/route.md:1-6 for an authoritative frontmatter example that includes `model:`.
````

As per coding guidelines: apply the progressive disclosure pattern by using pointers to subagents rather than including inline content in agent documentation; include code examples only when authoritative; use file:line references to point to actual implementation instead of inline code snippets.

.agent/scripts/pattern-tracker-helper.sh (2)

51-65: Unknown arguments silently consumed in positional fallback.

When an unrecognized flag is passed (e.g., `--typo`), it falls through to the `*` case and may be assigned to `description` if empty, or silently skipped. This could mask user typos. Consider logging a warning for arguments starting with `--` that aren't recognized:

🛡️ Optional: warn on unknown flags

```diff
         *)
-            if [[ -z "$description" ]]; then
+            if [[ "$1" == --* ]]; then
+                log_warn "Unknown option ignored: $1"
+            elif [[ -z "$description" ]]; then
                 description="$1"
             fi
             shift
             ;;
```

240-269: `cmd_suggest` provides no useful output without jq.

When jq is unavailable, lines 248-250 and 267-269 only print "(install jq for formatted output)" without showing any results. Unlike `cmd_analyze`, which at least dumps raw JSON, this leaves the user with no actionable information. Consider adding a Python fallback (since Python is likely available given the embeddings helper dependency) or at minimum echoing the raw JSON:

♻️ Suggested fallback to show raw results

```diff
     else
-        echo " (install jq for formatted output)"
+        if [[ "$success_results" != "[]" && -n "$success_results" ]]; then
+            echo "$success_results"
+        else
+            echo " (no matching success patterns)"
+        fi
     fi
```

.agent/scripts/memory-embeddings-helper.sh (3)

326-328: `pip install` may require elevated privileges or fail in restricted environments.

Line 327 runs `pip install` globally, which could fail without sudo or in managed Python environments. Consider suggesting the `--user` flag or checking for a virtual environment:

🛡️ Safer pip install with --user fallback

```diff
     log_info "Installing Python dependencies..."
-    pip install --quiet sentence-transformers numpy
+    if ! pip install --quiet sentence-transformers numpy 2>/dev/null; then
+        log_info "Retrying with --user flag..."
+        pip install --quiet --user sentence-transformers numpy
+    fi
```

154-172: Linear scan of all embeddings for each search query.

The Python `cmd_search` loads all embeddings into memory and computes cosine similarity against each one. This is fine for small indices but will degrade with thousands of memories. For now this is acceptable given the opt-in nature and expected scale, but consider documenting the performance characteristics or adding a TODO for approximate nearest neighbor (ANN) indexing if usage grows.

211-213: Import inside loop is suboptimal, but Python caches it.

The `import hashlib` on line 212 is inside the loop. Python's import mechanism caches modules, so this won't cause repeated disk reads, but moving it to the top of the function would be cleaner.

♻️ Move import to function top

```diff
 def cmd_index(memory_db: str, embeddings_db: str):
+    import hashlib
     mem_conn = sqlite3.connect(memory_db)
     rows = mem_conn.execute("SELECT id, content, type, tags FROM learnings").fetchall()
     mem_conn.close()
     ...
     for memory_id, content, mem_type, tags in rows:
-        import hashlib
         content_hash = hashlib.md5(content.encode()).hexdigest()
```

Same applies to line 257 in `cmd_add`.
```bash
    # Compare with memory DB
    if [[ -f "$MEMORY_DB" ]]; then
        local total_memories
        total_memories=$(sqlite3 "$MEMORY_DB" "SELECT COUNT(*) FROM learnings;" 2>/dev/null || echo "?")
        log_info "Total memories: $total_memories ($(( total_memories - count )) unindexed)"
    fi
```
Arithmetic operation fails if total_memories is "?".
Line 509 sets total_memories to "?" on error, but line 510 attempts arithmetic $(( total_memories - count )) which will fail with a syntax error.
🐛 Fix: guard arithmetic with numeric check
```diff
     if [[ -f "$MEMORY_DB" ]]; then
         local total_memories
         total_memories=$(sqlite3 "$MEMORY_DB" "SELECT COUNT(*) FROM learnings;" 2>/dev/null || echo "?")
-        log_info "Total memories: $total_memories ($(( total_memories - count )) unindexed)"
+        if [[ "$total_memories" =~ ^[0-9]+$ ]]; then
+            log_info "Total memories: $total_memories ($(( total_memories - count )) unindexed)"
+        else
+            log_info "Total memories: unknown"
+        fi
     fi
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```bash
    # Compare with memory DB
    if [[ -f "$MEMORY_DB" ]]; then
        local total_memories
        total_memories=$(sqlite3 "$MEMORY_DB" "SELECT COUNT(*) FROM learnings;" 2>/dev/null || echo "?")
        if [[ "$total_memories" =~ ^[0-9]+$ ]]; then
            log_info "Total memories: $total_memories ($(( total_memories - count )) unindexed)"
        else
            log_info "Total memories: unknown"
        fi
    fi
```
🤖 Prompt for AI Agents
In @.agent/scripts/memory-embeddings-helper.sh around lines 506-511: the script sets total_memories from sqlite3, which can be "?" on error, and then uses arithmetic $(( total_memories - count )), causing a syntax error. Update the block around MEMORY_DB to validate total_memories (e.g., [[ "$total_memories" =~ ^[0-9]+$ ]]) before performing arithmetic, compute unindexed only when numeric, otherwise set unindexed to "?" (or a safe fallback), and call log_info "Total memories: $total_memories ($unindexed unindexed)"; reference total_memories, count, MEMORY_DB, and log_info when making the change.
```bash
cmd_stats() {
    echo ""
    echo -e "${CYAN}=== Pattern Statistics ===${NC}"
    echo ""

    # Count by type
    local success_count failure_count
    success_count=$("$MEMORY_HELPER" recall --query "SUCCESS_PATTERN" --type SUCCESS_PATTERN --limit 100 --json 2>/dev/null | jq 'length' 2>/dev/null || echo "0")
    failure_count=$("$MEMORY_HELPER" recall --query "FAILURE_PATTERN" --type FAILURE_PATTERN --limit 100 --json 2>/dev/null | jq 'length' 2>/dev/null || echo "0")

    echo " Success patterns: $success_count"
    echo " Failure patterns: $failure_count"
    echo " Total patterns: $(( success_count + failure_count ))"
    echo ""

    # Show task type breakdown if jq available
    if command -v jq &>/dev/null; then
        echo " Task types with patterns:"
        for task_type in $VALID_TASK_TYPES; do
            local type_count
            type_count=$("$MEMORY_HELPER" recall --query "task:$task_type" --limit 100 --json 2>/dev/null | jq 'length' 2>/dev/null || echo "0")
            if [[ "$type_count" -gt 0 ]]; then
                echo "   $task_type: $type_count"
            fi
        done
        echo ""
    fi
    return 0
}
```
cmd_stats fails silently when jq is unavailable.
Lines 285-286 use jq 'length' with || echo "0" fallback, which masks the actual error and reports zero patterns regardless of actual content. Additionally, the task-type breakdown loop (lines 296-302) only runs when jq is available, so the output is inconsistent.
For a script aiming for zero technical debt, consider either requiring jq for this command or providing a Python-based fallback similar to cmd_search.
🛠️ Suggested fix: warn user or use Python fallback
```diff
 cmd_stats() {
     echo ""
     echo -e "${CYAN}=== Pattern Statistics ===${NC}"
     echo ""
+    if ! command -v jq &>/dev/null; then
+        log_warn "jq not installed - stats may be incomplete. Install jq for full functionality."
+    fi
+
     # Count by type
```

🤖 Prompt for AI Agents
In @.agent/scripts/pattern-tracker-helper.sh around lines 278-306: the cmd_stats function silently returns zeros when jq is missing because it pipes MEMORY_HELPER output to jq 'length' with a fallback that masks errors. Update cmd_stats (and the loop over VALID_TASK_TYPES) to either (A) enforce jq presence at the start of the function and print a clear warning/error if not found, or (B) implement a Python fallback that reads the JSON returned by MEMORY_HELPER and computes lengths (mirroring cmd_search's approach) for success_count, failure_count, and each task_type. Reference the cmd_stats function, the MEMORY_HELPER recall invocations, and the VALID_TASK_TYPES loop to locate and replace the jq 'length' usage so the function reports accurate counts or fails loudly with a diagnostic message.
```diff
 - Primary agents (Build+, SEO, Marketing, etc.) with @plan-plus subagent for planning-only mode
-- 572+ subagent markdown files organized by domain
-- 157 helper scripts in `.agent/scripts/`
-- 22 slash commands for common workflows
+- 614+ subagent markdown files organized by domain
+- 163 helper scripts in `.agent/scripts/`
+- 28 slash commands for common workflows
```
Sync updated helper/script counts across README sections.
The Quick Reference now says 163 helper scripts, but the Architecture section later still shows a lower helper-script count; please align those so readers don’t see conflicting numbers.
🤖 Prompt for AI Agents
In `@README.md` around lines 94-97: the README has mismatched helper-script counts. Update the Architecture section's helper-script count to match the Quick Reference (change the lower number to "163 helper scripts in `.agent/scripts/`") so both sections consistently read "163 helper scripts in `.agent/scripts/`". Locate the Architecture block that lists subagent/script counts and update the numeric value and any adjacent wording to match the Quick Reference's exact phrasing.
Summary
Selectively adopts high-value concepts from Claude-Flow v3 while maintaining aidevops' lightweight, shell-script-based philosophy. Cherry-picks concepts, not implementation.
Changes
Phase 1: Cost-Aware Model Routing (t102.1)
- `tools/context/model-routing.md` subagent with 5-tier guidance (haiku/flash/sonnet/pro/opus)
- `/route` command to suggest optimal model tier for a task

Phase 2: Semantic Memory with Embeddings (t102.2)

- `memory-embeddings-helper.sh` for opt-in vector similarity search
- `--semantic`/`--similar` flags to `memory-helper.sh recall`

Phase 3: Success Pattern Tracking (t102.3)

- `pattern-tracker-helper.sh` for recording and analyzing task outcomes
- `/patterns` command for querying patterns
- `SUCCESS_PATTERN` and `FAILURE_PATTERN` memory types

Phase 4: Documentation & Integration (t102.4)

- `aidevops/claude-flow-comparison.md` documenting adoption decisions
- `memory/README.md` with semantic search and pattern tracking docs
- `subagent-index.toon` with new scripts and subagents
- `README.md` counts (614+ subagents, 163 scripts, 28 commands)

What Was Skipped (and Why)
Files Changed
Summary by CodeRabbit
New Features
Documentation