
chore(agents): Broaden perf-tuning prompt to encourage high-impact changes#1242

Merged
yamadashy merged 2 commits into main from chore/improve-perf-tuning-prompt
Mar 19, 2026
Conversation

@yamadashy
Owner

@yamadashy yamadashy commented Mar 19, 2026

Rewrite the perf-tuning command's Goal section to encourage broader thinking about performance improvements.

Before: Listed "performance tuning" and "memory usage reduction" as bullet points — tended to produce narrow, micro-optimization focused changes.

After: Explicitly encourages algorithm changes, architectural restructuring, parallelization, caching strategies, dependency replacements, and I/O reduction. Discourages small logic tweaks that only yield marginal gains (e.g., a few ms on 1000 files).

Checklist

  • Run npm run test (N/A — documentation-only change)
  • Run npm run lint (N/A — documentation-only change)

🤖 Generated with Claude Code



…anges

Rewrite the Goal section to emphasize broad thinking — algorithm changes,
architectural restructuring, parallelization, caching, dependency
replacements, and I/O reduction — rather than minor logic tweaks.
Explicitly discourage micro-optimizations that yield only marginal gains.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refines the guidance for the perf-tuning command to promote more significant and effective performance enhancements. By broadening the scope of suggested improvements and discouraging minor, localized tweaks, the change aims to direct efforts towards substantial optimizations such as architectural changes and parallelization, ensuring that performance work delivers tangible benefits.

Highlights

  • Prompt Refinement: The perf-tuning command's prompt has been updated to encourage a broader and more impactful approach to performance improvements, moving away from narrow micro-optimizations.
  • Guidance Expansion: The revised prompt explicitly suggests considering algorithm changes, architectural restructuring, parallelization, caching strategies, dependency replacements, and I/O reduction.
  • Discouraging Micro-optimizations: Small logic tweaks yielding only marginal gains are now explicitly discouraged, emphasizing the pursuit of changes with meaningful, measurable impact.



@coderabbitai
Contributor

coderabbitai bot commented Mar 19, 2026

📝 Walkthrough

Walkthrough

Updated the .agents/commands/code/perf-tuning.md guidance file to broaden the scope of performance tuning from the src folder to include tests, configs, and dependencies. Replaced specific bullet points with expanded guidance covering algorithm changes, parallelization, caching, library upgrades, and I/O reduction while deprioritizing micro-optimizations.

Changes

Cohort / File(s) Summary
Performance Tuning Guidance
.agents/commands/code/perf-tuning.md
Updated goal statement to expand scope beyond src to include related code (tests, configs, dependencies); replaced narrow bullet points with broader optimization guidance (algorithms, parallelization, caching, dependencies) and explicit de-prioritization of negligible micro-optimizations.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Title check ✅ Passed The title accurately reflects the main change: broadening the perf-tuning prompt to encourage higher-impact performance improvements rather than narrow micro-optimizations.
Description check ✅ Passed The description provides clear context on the before/after changes and explicitly lists what was changed. It follows the repository template structure with a checklist section, though testing is marked N/A due to the documentation-only nature.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request updates the prompt for the performance tuning agent to encourage more significant, high-impact changes rather than minor micro-optimizations. The new prompt is more descriptive and provides better guidance. I've suggested a small formatting change to further improve the clarity of the prompt by using a bulleted list for examples, which should be easier for both humans and the AI agent to parse.


The changed lines in .agents/commands/code/perf-tuning.md:

- * Performance tuning
- * Memory usage reduction
+ Think broadly — algorithm changes, architectural restructuring, parallelization, caching strategies, dependency replacements, and I/O reduction are all fair game. Small logic tweaks that only shave a few milliseconds on a 1000-file run are not worth pursuing. Aim for changes with meaningful, measurable impact.
Contributor


Severity: medium

The new instructions are a great improvement. To make them even clearer and more structured for the agent, consider breaking out the examples of high-impact changes into a bulleted list. This improves readability and can help the agent better parse the different types of encouraged changes.

Suggested change
Think broadly — algorithm changes, architectural restructuring, parallelization, caching strategies, dependency replacements, and I/O reduction are all fair game. Small logic tweaks that only shave a few milliseconds on a 1000-file run are not worth pursuing. Aim for changes with meaningful, measurable impact.
Think broadly and aim for changes with meaningful, measurable impact. Avoid small logic tweaks that only shave a few milliseconds on a 1000-file run.
Consider the following types of high-impact changes:
* Algorithm changes
* Architectural restructuring
* Parallelization
* Caching strategies
* Dependency replacements
* I/O reduction

Contributor

@devin-ai-integration devin-ai-integration bot left a comment


✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no bugs or issues to report.


Broaden target from src-only to src and related code (tests, configs,
dependencies). Add library replacements and dependency upgrades as
explicit optimization strategies.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@cloudflare-workers-and-pages

cloudflare-workers-and-pages bot commented Mar 19, 2026

Deploying repomix with Cloudflare Pages

Latest commit: eb65b8d
Status: ✅  Deploy successful!
Preview URL: https://e2c3cf6f.repomix.pages.dev
Branch Preview URL: https://chore-improve-perf-tuning-pr.repomix.pages.dev


@claude
Contributor

claude bot commented Mar 19, 2026

Code Review — Claude

Overall: Approve

This is a clean, focused documentation-only change to the perf-tuning agent prompt. The diff is small (+2/-3 lines) and well-scoped.

What's good

  • The new Goal section is significantly more actionable — explicitly listing categories of high-impact changes (algorithm, architecture, parallelization, caching, dependency replacement, I/O reduction) gives the agent much better direction than the previous generic bullet points.
  • The explicit discouragement of micro-optimizations ("a few milliseconds on a 1000-file run") sets a clear threshold, which should reduce low-value PRs from the agent.
  • The commit message and PR description are thorough and follow project conventions.

Minor observations

Details
  • The threshold "a few milliseconds on a 1000-file run" is a useful heuristic but could vary by operation. For hot paths called millions of times, a few ms per call could compound. That said, for the typical Repomix use case (repo packing), this threshold is reasonable.
  • The rest of the file (Steps, Rules sections) already complements this change well — the "always run benchmarks" and "do not create a PR if the benefit is marginal" rules reinforce the new Goal section's intent.

Premortem

  • Risk: Minimal. This is a prompt change for an agent command, not production code. No runtime behavior is affected.
  • Edge case: An agent might interpret "small logic tweaks" too broadly and skip valid targeted optimizations (e.g., replacing an O(n²) loop with a Set lookup — small diff, big impact). However, the "meaningful, measurable impact" qualifier should mitigate this.
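For concreteness, the kind of small-diff, big-impact change that the "meaningful, measurable impact" qualifier is meant to preserve might look like the following hypothetical sketch. It is not taken from the Repomix codebase; the function names and inputs are illustrative only.

```typescript
// Hypothetical example: filtering ignored paths.
// A small diff (swap an array scan for a Set) changes the complexity class.

// O(n * m): for each of n paths, scan the whole m-entry ignore list.
function filterIgnoredQuadratic(paths: string[], ignored: string[]): string[] {
  return paths.filter((p) => !ignored.includes(p));
}

// O(n + m): build the Set once, then use constant-time lookups per path.
function filterIgnoredLinear(paths: string[], ignored: string[]): string[] {
  const ignoredSet = new Set(ignored);
  return paths.filter((p) => !ignoredSet.has(p));
}
```

Both versions return the same result; only the second scales to large repos, which is exactly why a "small logic tweak" heuristic should not filter it out.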

No blocking issues. LGTM.

🤖 Generated with Claude Code

@codecov

codecov bot commented Mar 19, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 87.18%. Comparing base (a8cff7d) to head (eb65b8d).
⚠️ Report is 5 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #1242   +/-   ##
=======================================
  Coverage   87.18%   87.18%           
=======================================
  Files         115      115           
  Lines        4324     4324           
  Branches     1002     1002           
=======================================
  Hits         3770     3770           
  Misses        554      554           


@claude
Contributor

claude bot commented Mar 19, 2026

Code Review — Claude

Overall: Approve

This is a small, well-scoped documentation change (+2/-3 lines) to .agents/commands/code/perf-tuning.md. No runtime code is affected.

Note: A previous Claude review already exists on this PR. This is an updated review after the second commit was added.

Changes across both commits

  1. Commit 1 (8eba852): Rewrote the Goal section to encourage broad thinking over micro-optimizations.
  2. Commit 2 (eb65b8d): Expanded scope from src-only to src and related code (tests, configs, dependencies), and added library replacements/dependency upgrades as strategies.

Assessment

  • Positive: The new prompt is substantially more actionable. Listing specific categories (algorithm changes, architectural restructuring, parallelization, caching, library replacements, dependency upgrades, I/O reduction) gives the agent clear direction. The explicit discouragement of marginal gains ("a few milliseconds on a 1000-file run") is a useful guardrail.
  • Scope expansion is appropriate: Including tests, configs, and dependencies in scope makes sense — a dependency upgrade or library swap can yield bigger gains than any src-level tweak.
  • Well-reinforced by existing Rules section: The existing "always run benchmarks" and "do not create a PR if the benefit is marginal" rules complement the new Goal section well.

Premortem

Details
  • Low risk: Documentation-only change to an agent prompt. No functional code, no deployment risk.
  • Potential over-filtering: An agent might interpret "small logic tweaks" too broadly and skip valid targeted fixes (e.g., O(n²) → O(n) via a Set). The "meaningful, measurable impact" qualifier should mitigate this, and the benchmark requirement in Rules provides a safety net.
  • No concerns with the scope expansion to tests/configs/dependencies — this aligns with the project's holistic approach to performance.

No blocking issues. LGTM.

🤖 Generated with Claude Code

Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
.agents/commands/code/perf-tuning.md (1)

4-4: Define “meaningful, measurable impact” more concretely.

The intent is strong, but this line is still subjective. Add a brief benchmark criterion (for example: baseline workload, minimum improvement threshold, and variance handling) so agents make consistent go/no-go decisions.

Suggested wording tweak
-Think broadly — algorithm changes, architectural restructuring, parallelization, caching strategies, library replacements, dependency upgrades, and I/O reduction are all fair game. Small logic tweaks that only shave a few milliseconds on a 1000-file run are not worth pursuing. Aim for changes with meaningful, measurable impact.
+Think broadly — algorithm changes, architectural restructuring, parallelization, caching strategies, library replacements, dependency upgrades, and I/O reduction are all fair game. Small logic tweaks that only shave a few milliseconds on a 1000-file run are not worth pursuing. Aim for changes with meaningful, measurable impact, defined by a reproducible benchmark setup (same workload/env, repeated runs) and a clearly stated minimum improvement threshold.

Based on learnings: "Check the rules written in .agents/rules/base.md as they contain important project-specific guidelines and instructions."
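The reviewer's criterion (repeated runs, a median statistic, and an explicit improvement threshold) could be sketched as follows. This is a minimal illustration, assuming a Node.js environment; `runBenchmark`, `median`, and `isMeaningful` are made-up names, not part of any existing Repomix benchmark harness.

```typescript
// Return the median of a list of timings, which is more robust to
// outlier runs (GC pauses, cold caches) than the mean.
function median(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Time a workload over several trials and report the median wall-clock ms.
function runBenchmark(workload: () => void, trials = 7): number {
  const times: number[] = [];
  for (let i = 0; i < trials; i++) {
    const start = performance.now();
    workload();
    times.push(performance.now() - start);
  }
  return median(times);
}

// Go/no-go decision: require the candidate's median to beat the baseline
// by at least a stated threshold (here 10% wall-clock reduction).
function isMeaningful(
  baselineMs: number,
  candidateMs: number,
  threshold = 0.1,
): boolean {
  return (baselineMs - candidateMs) / baselineMs >= threshold;
}
```

With such a helper, an agent compares `runBenchmark(before)` against `runBenchmark(after)` on the same workload and only opens a PR when `isMeaningful` returns true, making the go/no-go decision reproducible rather than subjective.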

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/commands/code/perf-tuning.md at line 4, Clarify "meaningful,
measurable impact" by adding a concrete benchmark criterion: specify a baseline
workload (e.g., representative dataset/number of files or standard test
harness), a minimum improvement threshold (e.g., ≥10% wall-clock reduction or
≥2x throughput) and how to handle variance (e.g., run N=5–10 trials, report
median and confidence interval, require improvement to exceed measurement
noise). Mention that agents should consult the project-specific guidelines in
`.agents/rules/base.md` when selecting baselines and metrics to ensure
consistency.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 5cbc1363-c03e-4e47-8fa4-a19844cb65d0

📥 Commits

Reviewing files that changed from the base of the PR and between b17c0e9 and eb65b8d.

📒 Files selected for processing (1)
  • .agents/commands/code/perf-tuning.md

@yamadashy yamadashy merged commit b3458e0 into main Mar 19, 2026
58 checks passed
@yamadashy yamadashy deleted the chore/improve-perf-tuning-prompt branch March 19, 2026 16:44
