feat: add web performance subagent and /performance command #209
marcusquinn merged 2 commits into main from
Conversation
Inspired by @elithrar's web-perf agent skill. Provides comprehensive web performance analysis using Chrome DevTools MCP:
- Core Web Vitals monitoring (FCP, LCP, CLS, FID, TTFB)
- Network dependency analysis (third-party scripts, request chains)
- Accessibility auditing (WCAG compliance)
- Actionable output format with file:line references

New files:
- tools/performance/performance.md - Main subagent
- scripts/commands/performance.md - /performance command
- Updated subagent-index.toon

Closes #t032
Summary of Changes — Hello @marcusquinn, I'm Gemini Code Assist! This pull request enhances the agent's capabilities by introducing a dedicated web performance analysis subagent and a new /performance command.
Code Review
This pull request introduces a new web performance subagent and a corresponding /performance command, which is a valuable addition for performance analysis. The documentation is comprehensive, providing useful examples and potential fixes for common performance issues. My review focuses on improving the clarity and correctness of the provided documentation and examples. I've identified a few minor issues, including an inconsistency in command arguments, a potential for suppressed error messages in a prerequisite check, and a bug in a shell script example. Overall, this is a strong contribution that will be even better with these small adjustments.
```text
Default: Full audit (performance + accessibility + network)
Options:
--categories=perf,a11y,network Specific categories
```
There's an inconsistency in the values for the --categories argument. Here you've defined them with short names (perf, a11y), but the example on line 102 uses the full names (performance, accessibility). Using the full, more descriptive names in both places would improve clarity and consistency for the user.
```diff
- --categories=perf,a11y,network Specific categories
+ --categories=performance,accessibility,network Specific categories
```
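For illustration, one way the full names would read in an invocation — the exact flag syntax is assumed from the command doc, and example.com is a placeholder:

```text
/performance https://example.com --categories=performance,accessibility,network
```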
```bash
# Verify Chrome DevTools MCP is available
which npx && npx chrome-devtools-mcp@latest --version 2>/dev/null || echo "Install: npm i -g chrome-devtools-mcp"
```
The prerequisite check redirects stderr to /dev/null (2>/dev/null). This can hide useful error messages from npx if it fails to fetch or run chrome-devtools-mcp (e.g., due to network issues or other problems). Removing this redirection would provide better diagnostics for users trying to troubleshoot setup issues.
```diff
- which npx && npx chrome-devtools-mcp@latest --version 2>/dev/null || echo "Install: npm i -g chrome-devtools-mcp"
+ which npx && npx chrome-devtools-mcp@latest --version || echo "Install: npm i -g chrome-devtools-mcp"
```
```bash
--chrome-flags="--headless"

# Check performance score
SCORE=$(jq '.categories.performance.score * 100' lighthouse.json)
```
The shell script in this CI/CD example has a potential bug. The jq command can produce a floating-point number for the score, but the if condition on the next line (if [ "$SCORE" -lt 90 ]) performs an integer comparison. This will cause an "integer expression expected" error in bash if the score is not a whole number.
To fix this, you can use jq's round function to convert the score to an integer, ensuring the comparison works correctly.
```diff
- SCORE=$(jq '.categories.performance.score * 100' lighthouse.json)
+ SCORE=$(jq '.categories.performance.score * 100 | round' lighthouse.json)
```
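Putting the fix together, a minimal runnable sketch of the CI gate — the sample lighthouse.json stands in for real `lighthouse --output=json` output, and the threshold of 90 follows the review comment:

```shell
# Sample report standing in for real Lighthouse JSON output (assumed shape)
cat > lighthouse.json <<'EOF'
{"categories":{"performance":{"score":0.876}}}
EOF

# `round` turns 87.6 into the integer 88, so bash's -lt comparison is valid
SCORE=$(jq '.categories.performance.score * 100 | round' lighthouse.json)
if [ "$SCORE" -lt 90 ]; then
  echo "Performance score $SCORE is below threshold 90"
fi
```

In a real pipeline the `if` branch would also `exit 1` to fail the build; it is left out here so the sketch runs cleanly.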
🔍 Code Quality Report
[MONITOR] Code Review Monitoring Report
[INFO] Latest Quality Status:
[INFO] Recent monitoring activity:
📈 Current Quality Metrics
Generated on: Sun Jan 25 15:04:22 UTC 2026
Generated by AI DevOps Framework Code Review Monitoring
- Use full category names (performance, accessibility) for consistency
- Remove stderr suppression in prerequisite check for better diagnostics
- Add jq round function to CI/CD example to fix integer comparison
🤖 Augment PR Summary
Summary: This PR introduces a new web performance "subagent" and a matching /performance command.
Changes:
Technical Notes: The docs emphasize actionable output (metric tables + file:line references), optional baseline comparisons, and using chrome-devtools-mcp for analysis.
```yaml
grep: true
webfetch: true
task: true
mcp:
```
The aidevops/OpenCode frontmatter pattern enables MCPs via the tools: map (e.g. fluentcrm_*: true), and I don’t see other agents using an mcp: key; if mcp: is ignored by the loader, this subagent may not actually have Chrome DevTools tools available when invoked.
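If the loader does only read the `tools:` map, a hedged sketch of the alternative frontmatter — the `chrome-devtools_*` key is a guess mirroring the `fluentcrm_*` wildcard pattern mentioned above, not a verified tool prefix:

```yaml
---
description: Web performance analysis subagent
tools:
  grep: true
  webfetch: true
  task: true
  chrome-devtools_*: true  # hypothetical key, mirroring the fluentcrm_* pattern
---
```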
New features documented:
- /performance command for web performance audits (PR #209)
- /email-health-check command for deliverability checks (PR #213)
- @debug-opengraph and @debug-favicon SEO subagents (PR #206)
- Auto-task completion from commit messages (PR #208)

Updated Core Capabilities section with new monitoring features.
Performance subagent and /performance command implemented in PR #209.



Summary
Adds a comprehensive web performance analysis subagent inspired by @elithrar's web-perf agent skill.
Changes
- tools/performance/performance.md - Core Web Vitals, network dependencies, accessibility analysis
- /performance - Quick performance audits from CLI
- subagent-index.toon - Added performance subagent entry

Features
- Uses chrome-devtools-mcp for deep analysis

Usage
Related
tools/browser/pagespeed.md and tools/browser/chrome-devtools.md