docs: update README with recent PR features #193
Conversation
- Add WaterCrawl to browser automation tools (7 tools now)
- Add LibPDF and Unstract to document processing section
- Add Cloudron app packaging enhancement note
- Add multi-tenant credential storage documentation
- Update MCP count to 19 (added Unstract)
- Update subagent count to 560+ and scripts to 146+
- Document MCP lazy-loading optimization (12-24s startup savings)
- Add WaterCrawl to tool selection guide

Based on PRs #178-#192
Walkthrough
README.md documentation expanded to reflect framework enhancements: MCP server count increased from 18 to 19, new document processing tools (LibPDF, Unstract) and a browser automation tool (WaterCrawl) integrated, a Multi-Tenant Credential Storage section added, and the performance optimization strategy documented.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
🚥 Pre-merge checks: ✅ Passed (3 of 3)
Code Review
This pull request does a great job of updating the README with features from recent pull requests. The documentation for new tools like WaterCrawl, LibPDF, and Unstract is clear, and the new section on Multi-Tenant Credential Storage is a valuable addition. All the counters for MCPs, subagents, and scripts have been updated accordingly. I found one minor inconsistency in the MCP counts in the performance optimization section, which I've commented on. Otherwise, the changes look good.
> - **Global** - Tools always available (loaded into every session)
> - **Per-agent** - Tools disabled globally, enabled per-agent via config (zero context overhead when unused)
>
> **Performance optimization:** MCP packages are installed globally via `bun install -g` for instant startup (~0.1s vs 2-3s with `npx`). The framework uses a three-tier loading strategy: 8 MCPs load eagerly at startup, 12 MCPs load on-demand when their subagent is invoked. This reduces OpenCode startup time by 12-24 seconds.
There's a discrepancy in the MCP counts. This note mentions a total of 20 MCPs (8 eager + 12 on-demand). However, the badge, heading, and table above all indicate a total of 19 MCPs. Additionally, the table lists 3 'Global' (eager) and 16 'Per-agent' (on-demand) MCPs, which doesn't align with the 8/12 split mentioned here. Please correct these numbers for consistency.
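As an aside, the global vs per-agent split quoted in the diff above could be sketched as a hypothetical OpenCode-style config. The server names, commands, and the `enabled` flag below are illustrative assumptions, not taken from this repository's actual configuration:

```json
{
  "mcp": {
    "context7": {
      "type": "local",
      "command": ["bunx", "context7-mcp"],
      "enabled": true
    },
    "watercrawl": {
      "type": "local",
      "command": ["bunx", "watercrawl-mcp"],
      "enabled": false
    }
  }
}
```

In this sketch, `context7` loads into every session (global), while `watercrawl` stays disabled until an individual agent's config enables it (per-agent), which is what keeps unused tools at zero context overhead.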
🔍 Code Quality Report
[MONITOR] Code Review Monitoring Report
[INFO] Latest Quality Status:
[INFO] Recent monitoring activity:
📈 Current Quality Metrics
Generated on: Sun Jan 25 02:39:51 UTC 2026 by AI DevOps Framework Code Review Monitoring
🤖 Augment PR Summary
Summary: Updates
> **Performance optimization:** MCP packages are installed globally via `bun install -g` for instant startup (~0.1s vs 2-3s with `npx`). The framework uses a three-tier loading strategy: 8 MCPs load eagerly at startup, 12 MCPs load on-demand when their subagent is invoked. This reduces OpenCode startup time by 12-24 seconds.
This sentence says “three-tier loading strategy” but only describes eager vs on-demand, and the “12 MCPs load on-demand” count appears to be out of sync with the current loading policy in .agent/scripts/generate-opencode-agents.sh (8 eager / 13 lazy). Consider reconciling the wording/numbers so readers don’t rely on stale counts when configuring MCP loading.
| echo "client-acme" > .aidevops-tenant | ||
|
|
||
| # Export for scripts | ||
| eval $(credential-helper.sh export) |
In this snippet, eval $(credential-helper.sh export) collapses newlines into spaces, which can change how the multiple export ... lines are interpreted (e.g., potentially exporting a stray variable named export). Consider ensuring the README example preserves the intended multi-line export semantics for reliability.
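The hazard described above is easy to reproduce. The helper function below is a stand-in for `credential-helper.sh export` (its output format is an assumption); the point is the behavioral difference between an unquoted and a quoted command substitution:

```shell
# Stand-in for `credential-helper.sh export` (assumed output format).
emit_exports() {
  printf 'export TENANT_NAME="client-acme"\nexport TENANT_KEY="abc123"\n'
}

# Unquoted: word splitting collapses the newlines, so both lines are
# joined into one command. The second `export` word is then treated as a
# variable name to export rather than a new statement, so a stray
# variable named `export` gets marked for export as a side effect.
eval $(emit_exports)

# Quoted: the newlines are preserved, so each export line is evaluated
# as its own statement, which is the intended multi-line semantics.
eval "$(emit_exports)"
```

Quoting the substitution, i.e. `eval "$(credential-helper.sh export)"`, is the usual fix for the README example.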
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@README.md`:
- Line 709: The numbers in README.md are inconsistent: the line stating "8 MCPs
load eagerly at startup, 12 MCPs load on-demand" totals 20 but the badge/header
claims "19 active MCPs"; verify the actual count in the MCP table and update to
be consistent by either changing the badge/header to "20 active MCPs" or
adjusting the loading strategy to "8 + 11" or "7 + 12" as appropriate; ensure
the strings "8 MCPs", "12 MCPs", "19 active MCPs" and the MCP table entries are
all updated to match the verified total.
Math inconsistency: 8 + 12 ≠ 19.
The performance optimization note states "8 MCPs load eagerly at startup, 12 MCPs load on-demand" which totals 20, but the badge and header claim 19 active MCPs. Either the loading strategy numbers are wrong, or the total count is wrong.
🔢 Verify and correct the counts
One of these needs correction:
- Badge/header should say 20, not 19
- Loading strategy should be 8 + 11 = 19 or 7 + 12 = 19
Check the actual MCP table count and update accordingly.
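One low-tech way to verify the total is to count the table rows mechanically. The snippet below is a sketch that assumes each MCP occupies one markdown table row beginning with `| **`; both the toy table and the pattern are illustrative and would need adjusting to the real README format:

```shell
# Build a toy MCP table (stand-in for the real README section); the
# "| **name**" row format is an assumption, not verified against README.md.
cat > /tmp/mcp-table.md <<'EOF'
| **context7** | Global |
| **watercrawl** | Per-agent |
| **unstract** | Per-agent |
EOF

# grep -c prints the number of matching lines, i.e. the MCP row count.
grep -c '^| \*\*' /tmp/mcp-table.md
```

Comparing that row count against the badge number and the eager/on-demand split catches drift like the 8 + 12 vs 19 mismatch flagged here.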



Summary
Updates README.md to reflect features added in PRs #178-#192.
Changes
New Features Documented
Updated Counts
Performance Documentation
Tool Selection Guide
Testing