chore(agents): Broaden perf-tuning prompt to encourage high-impact changes#1242
Conversation
Rewrite the Goal section to emphasize broad thinking — algorithm changes, architectural restructuring, parallelization, caching, dependency replacements, and I/O reduction — rather than minor logic tweaks. Explicitly discourage micro-optimizations that yield only marginal gains.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~2 minutes

🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Code Review
This pull request updates the prompt for the performance tuning agent to encourage more significant, high-impact changes rather than minor micro-optimizations. The new prompt is more descriptive and provides better guidance. I've suggested a small formatting change to further improve the clarity of the prompt by using a bulleted list for examples, which should be easier for both humans and the AI agent to parse.
.agents/commands/code/perf-tuning.md
Outdated
```
* Performance tuning
* Memory usage reduction
Think broadly — algorithm changes, architectural restructuring, parallelization, caching strategies, dependency replacements, and I/O reduction are all fair game. Small logic tweaks that only shave a few milliseconds on a 1000-file run are not worth pursuing. Aim for changes with meaningful, measurable impact.
```
The new instructions are a great improvement. To make them even clearer and more structured for the agent, consider breaking out the examples of high-impact changes into a bulleted list. This improves readability and can help the agent better parse the different types of encouraged changes.
```suggestion
Think broadly and aim for changes with meaningful, measurable impact. Avoid small logic tweaks that only shave a few milliseconds on a 1000-file run.
Consider the following types of high-impact changes:
* Algorithm changes
* Architectural restructuring
* Parallelization
* Caching strategies
* Dependency replacements
* I/O reduction
```
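As a hedged illustration of one item on the list above, a caching strategy in its simplest form can be sketched in generic Python (this example is not code from the repository):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    # Without the cache this recursion is exponential; with it, each
    # distinct n is computed once, so repeated subcalls are cheap.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # → 832040
```

The same principle, memoizing an expensive computation instead of redoing it, is what makes caching a high-impact change rather than a micro-tweak.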
Broaden target from src-only to src and related code (tests, configs, dependencies). Add library replacements and dependency upgrades as explicit optimization strategies. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Deploying repomix with Cloudflare Pages

| | |
| --- | --- |
| Latest commit: | eb65b8d |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://e2c3cf6f.repomix.pages.dev |
| Branch Preview URL: | https://chore-improve-perf-tuning-pr.repomix.pages.dev |
Code Review — Claude

Overall: Approve ✅

This is a clean, focused documentation-only change to the perf-tuning agent prompt. The diff is small (+2/-3 lines) and well-scoped.

What's good

Minor observations

Details

Premortem

No blocking issues. LGTM.

🤖 Generated with Claude Code
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files

```
@@           Coverage Diff           @@
##             main    #1242   +/-   ##
=======================================
  Coverage   87.18%   87.18%
=======================================
  Files         115      115
  Lines        4324     4324
  Branches     1002     1002
=======================================
  Hits         3770     3770
  Misses        554      554
```

☔ View full report in Codecov by Sentry.
Code Review — Claude

Overall: Approve ✅

This is a small, well-scoped documentation change (+2/-3 lines) to

Changes across both commits

Assessment

Premortem

Details

No blocking issues. LGTM.

🤖 Generated with Claude Code
🧹 Nitpick comments (1)
.agents/commands/code/perf-tuning.md (1)
4-4: Define “meaningful, measurable impact” more concretely.

The intent is strong, but this line is still subjective. Add a brief benchmark criterion (for example: baseline workload, minimum improvement threshold, and variance handling) so agents make consistent go/no-go decisions.
Suggested wording tweak
```diff
-Think broadly — algorithm changes, architectural restructuring, parallelization, caching strategies, library replacements, dependency upgrades, and I/O reduction are all fair game. Small logic tweaks that only shave a few milliseconds on a 1000-file run are not worth pursuing. Aim for changes with meaningful, measurable impact.
+Think broadly — algorithm changes, architectural restructuring, parallelization, caching strategies, library replacements, dependency upgrades, and I/O reduction are all fair game. Small logic tweaks that only shave a few milliseconds on a 1000-file run are not worth pursuing. Aim for changes with meaningful, measurable impact, defined by a reproducible benchmark setup (same workload/env, repeated runs) and a clearly stated minimum improvement threshold.
```

Based on learnings: "Check the rules written in `.agents/rules/base.md` as they contain important project-specific guidelines and instructions."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/commands/code/perf-tuning.md at line 4, Clarify "meaningful, measurable impact" by adding a concrete benchmark criterion: specify a baseline workload (e.g., representative dataset/number of files or standard test harness), a minimum improvement threshold (e.g., ≥10% wall-clock reduction or ≥2x throughput) and how to handle variance (e.g., run N=5–10 trials, report median and confidence interval, require improvement to exceed measurement noise). Mention that agents should consult the project-specific guidelines in `.agents/rules/base.md` when selecting baselines and metrics to ensure consistency.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In @.agents/commands/code/perf-tuning.md:
- Line 4: Clarify "meaningful, measurable impact" by adding a concrete benchmark
criterion: specify a baseline workload (e.g., representative dataset/number of
files or standard test harness), a minimum improvement threshold (e.g., ≥10%
wall-clock reduction or ≥2x throughput) and how to handle variance (e.g., run
N=5–10 trials, report median and confidence interval, require improvement to
exceed measurement noise). Mention that agents should consult the
project-specific guidelines in `.agents/rules/base.md` when selecting baselines
and metrics to ensure consistency.
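The variance handling described in the prompt above (repeated trials, a median, and improvement that exceeds measurement noise) can be sketched as a small go/no-go harness. The function names and the 10% default threshold here are illustrative assumptions, not part of the repository:

```python
import statistics
import time

def measure(fn, trials=7):
    """Median and sample stdev of wall-clock seconds over repeated runs."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples), statistics.stdev(samples)

def is_meaningful_improvement(baseline_fn, candidate_fn, min_gain=0.10):
    """Go/no-go: the median speedup must clear the threshold and
    exceed the baseline's measurement noise."""
    base_med, base_sd = measure(baseline_fn)
    cand_med, _ = measure(candidate_fn)
    gain = (base_med - cand_med) / base_med
    return gain >= min_gain and (base_med - cand_med) > base_sd
```

A candidate optimization would then be kept only when `is_meaningful_improvement(old, new)` returns True under the project's chosen baseline workload.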
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 5cbc1363-c03e-4e47-8fa4-a19844cb65d0
📒 Files selected for processing (1)
.agents/commands/code/perf-tuning.md
Rewrite the perf-tuning command's Goal section to encourage broader thinking about performance improvements.
Before: Listed "performance tuning" and "memory usage reduction" as bullet points — tended to produce narrow, micro-optimization focused changes.
After: Explicitly encourages algorithm changes, architectural restructuring, parallelization, caching strategies, dependency replacements, and I/O reduction. Discourages small logic tweaks that only yield marginal gains (e.g., a few ms on 1000 files).
Checklist
* `npm run test` (N/A — documentation-only change)
* `npm run lint` (N/A — documentation-only change)

🤖 Generated with Claude Code