
ci(perf): Track perf/auto-perf-tuning benchmarks on separate gh-pages page#1449

Merged
yamadashy merged 1 commit into main from ci/perf-benchmark-auto-perf-tuning on Apr 11, 2026

Conversation

yamadashy (Owner) commented Apr 11, 2026

Summary

  • Trigger the performance benchmark history workflow on pushes to perf/auto-perf-tuning in addition to main.
  • Publish perf/auto-perf-tuning results to a dedicated dev/bench/auto-perf-tuning/ directory on gh-pages so it renders as an independent page — main's existing dashboard at dev/bench/ stays untouched.
  • Branch on github.ref for both name and benchmark-data-dir-path passed to benchmark-action/github-action-benchmark.
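
Sketched in workflow terms, the branching described above could look like this. This is an illustrative sketch only: the step name, action version, and surrounding fields are assumptions, while the branch list and the two conditional values come from this PR's description.

```yaml
# Sketch; step name and action version are assumptions.
on:
  push:
    branches:
      - main
      - perf/auto-perf-tuning

# ... later, among the job's steps:
      - name: Store benchmark result
        uses: benchmark-action/github-action-benchmark@v1
        with:
          # main keeps its existing dataset and directory; the experimental
          # branch gets its own name and data dir, so the two charts never mix.
          name: ${{ github.ref == 'refs/heads/main' && 'Repomix Performance' || 'Repomix Performance (auto-perf-tuning)' }}
          benchmark-data-dir-path: ${{ github.ref == 'refs/heads/main' && 'dev/bench' || 'dev/bench/auto-perf-tuning' }}
```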

Resulting URLs

Why a separate page

perf/auto-perf-tuning is an experimental branch that gets force-pushed, so mixing its commits into main's historical chart would make the timeline hard to read. Isolating it into its own data directory keeps the two histories completely independent — the data files (data.js) do not share state, so neither branch's runs can disturb the other's chart.

Checklist

  • Run `npm run test` (1102 tests passed)
  • Run `npm run lint` (no new warnings/errors from this change)

🤖 Generated with Claude Code



gemini-code-assist bot (Contributor) commented

Note

Gemini is unable to generate a review for this pull request due to the file types involved not being currently supported.


coderabbitai bot commented Apr 11, 2026

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: fdd3c21f-2aca-4d32-a688-c7c9fa86b4ac

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough

Walkthrough

The performance benchmark history workflow is now configured to trigger on pushes to the perf/auto-perf-tuning branch in addition to main. Benchmark dataset names and directories are conditionally set based on the triggering branch to maintain separate performance tracking.

Changes

Cohort / File(s) Summary
GitHub Actions Workflow Configuration
.github/workflows/perf-benchmark-history.yml
Added perf/auto-perf-tuning branch to workflow triggers. Conditionally set benchmark dataset name and benchmark-data-dir-path based on active branch: main uses Repomix Performance/dev/bench, while perf/auto-perf-tuning uses Repomix Performance (auto-perf-tuning)/dev/bench/auto-perf-tuning.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~5 minutes

Possibly related PRs

  • #1318: Modifies the same perf-benchmark-history workflow to extend triggers and conditionally change benchmark dataset naming and paths.
  • #1348: Updates the same workflow file with changes to benchmark run handling and dataset configuration.
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Title check ✅ Passed The title clearly and specifically describes the main change: enabling perf/auto-perf-tuning branch benchmarks to be tracked on a separate gh-pages page, which directly matches the workflow modification in the changeset.
Description check ✅ Passed The description is comprehensive and well-structured with clear summary, rationale, resulting URLs, and completed checklist items. All required template sections are addressed with detailed context.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



github-actions bot commented Apr 11, 2026

⚡ Performance Benchmark

Latest commit: a0b6397 ci(perf): Track perf/auto-perf-tuning benchmarks on separate gh-pages page
Status: ✅ Benchmark complete!
Ubuntu: 1.44s (±0.04s) → 1.43s (±0.03s) · -0.01s (-0.8%)
macOS: 0.86s (±0.03s) → 0.87s (±0.05s) · +0.01s (+0.7%)
Windows: 1.79s (±0.06s) → 1.79s (±0.13s) · +0.00s (+0.1%)
Details
  • Packing the repomix repository with node bin/repomix.cjs
  • Warmup: 2 runs (discarded), interleaved execution
  • Measurement: 20 runs / 30 on macOS (median ± IQR)
  • Workflow run
History

7d39815 ci(perf): Track perf/auto-perf-tuning benchmarks on separate gh-pages page

Ubuntu: 1.45s (±0.02s) → 1.47s (±0.04s) · +0.02s (+1.4%)
macOS: 1.32s (±0.17s) → 1.24s (±0.12s) · -0.08s (-6.4%)
Windows: 1.38s (±0.03s) → 1.39s (±0.02s) · +0.01s (+0.4%)

Contributor

devin-ai-integration bot left a comment


✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no potential bugs to report.

View in Devin Review to see 1 additional finding.



codecov bot commented Apr 11, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 87.21%. Comparing base (9ebeb11) to head (a0b6397).
⚠️ Report is 2 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #1449   +/-   ##
=======================================
  Coverage   87.21%   87.21%           
=======================================
  Files         117      117           
  Lines        4435     4435           
  Branches     1022     1022           
=======================================
  Hits         3868     3868           
  Misses        567      567           

☔ View full report in Codecov by Sentry.

Contributor

coderabbitai bot left a comment


🧹 Nitpick comments (1)
.github/workflows/perf-benchmark-history.yml (1)

5-13: Scope concurrency by branch to avoid serializing runs across branches

The static concurrency.group at Line 12 will serialize all workflow runs across both main and perf/auto-perf-tuning branches. This can block main branch updates when the experimental branch is active. Scope the concurrency group by branch reference instead:

Suggested change
 concurrency:
-  group: perf-benchmark-history
+  group: perf-benchmark-history-${{ github.ref }}
   cancel-in-progress: false
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/perf-benchmark-history.yml around lines 5 - 13, The static
concurrency.group "perf-benchmark-history" serializes runs across branches;
update the concurrency.group to include the branch reference so runs are scoped
per branch (e.g., change concurrency.group to perf-benchmark-history-${{
github.ref_name }} or similar) and keep cancel-in-progress: false; edit the
concurrency block (the concurrency.group symbol) so it uses the GitHub context
variable (github.ref_name or github.ref) to avoid cross-branch serialization.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 26ecbc0c-96c4-489f-9de0-c35e99ccd6a8

📥 Commits

Reviewing files that changed from the base of the PR and between a6f8885 and 7d39815.

📒 Files selected for processing (1)
  • .github/workflows/perf-benchmark-history.yml


claude bot commented Apr 11, 2026

Code Review

Verdict: Approve

Clean, minimal CI change that correctly isolates perf/auto-perf-tuning benchmark history into its own gh-pages directory. The conditional expressions are idiomatic and correct, data isolation is well-structured, and no security concerns were found.

One suggestion: scope the concurrency group by branch

The concurrency group perf-benchmark-history is shared across both main and perf/auto-perf-tuning with cancel-in-progress: false. Since perf/auto-perf-tuning is force-pushed frequently, a burst of pushes there could queue up and delay main's benchmark runs.

Consider scoping the group by branch:

concurrency:
  group: perf-benchmark-history-${{ github.ref_name }}
  cancel-in-progress: false

This lets each branch run independently without blocking the other. Low severity since benchmarks aren't on the critical path, but a nice improvement.

Detailed review notes
  • Code quality: Expressions are correct. Minor readability improvement possible by extracting into env variables, but not blocking for a two-line diff.
  • Security: No issues. Only trusted GitHub context variables used, no injection risk. Permissions correctly scoped, actions pinned to SHAs.
  • Performance: Adding the branch doubles CI trigger surface. Worth monitoring Actions minutes if the experimental branch sees heavy activity.
  • Test coverage: CI-only change, no tests needed.
  • Conventions: Commit message and PR body follow project conventions correctly.

🤖 Generated with Claude Code


cloudflare-workers-and-pages bot commented Apr 11, 2026

Deploying repomix with Cloudflare Pages

Latest commit: a0b6397
Status: ✅  Deploy successful!
Preview URL: https://25fb312a.repomix.pages.dev
Branch Preview URL: https://ci-perf-benchmark-auto-perf.repomix.pages.dev

View logs

ci(perf): Track perf/auto-perf-tuning benchmarks on separate gh-pages page

Trigger the performance benchmark history workflow on pushes to
perf/auto-perf-tuning in addition to main, and publish its results to
a dedicated gh-pages directory so the chart lives independently from
main's history.

Why: perf/auto-perf-tuning is an experimental branch that gets
force-pushed, and we want to observe its performance over time without
disturbing the main performance dashboard at dev/bench/.

How to apply: the name and benchmark-data-dir-path passed to
github-action-benchmark are now branched on github.ref:
- main -> "Repomix Performance" under dev/bench/ (unchanged)
- perf/auto-perf-tuning -> "Repomix Performance (auto-perf-tuning)"
  under dev/bench/auto-perf-tuning/

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
yamadashy force-pushed the ci/perf-benchmark-auto-perf-tuning branch from 7d39815 to a0b6397 on April 11, 2026 at 09:10
yamadashy (Owner, Author) commented

Thanks for the suggestion from both @claude and @coderabbitai re: scoping the concurrency group by branch.

Leaving this as-is intentionally. The serialization is by design: both branches push to the same gh-pages branch, and scoping by github.ref would let runs race on the git push to gh-pages. While benchmark-action/github-action-benchmark has retry logic, serializing is the simpler and safer default. perf/auto-perf-tuning activity is expected to be infrequent enough that queuing behind main is acceptable. If this becomes a real bottleneck later, we can revisit. 🤖
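
For reference, the shared group being kept can be reconstructed from the snippet quoted in the review comments above; treat this as a sketch of the relevant block, not the full workflow file.

```yaml
# Intentionally branch-agnostic: runs from main and perf/auto-perf-tuning
# queue in the same group, so concurrent `git push` operations to the
# shared gh-pages branch cannot race.
concurrency:
  group: perf-benchmark-history
  cancel-in-progress: false
```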


claude bot commented Apr 11, 2026

Claude finished @yamadashy's task in 37s (View job)


That makes perfect sense — I hadn't considered the gh-pages push contention. Serializing across branches is the right call when they share a deployment target. Thanks for the explanation! 👍


coderabbitai bot commented Apr 11, 2026


@yamadashy That makes perfect sense — serializing at the workflow level is the right call when both branches share the same gh-pages target, since even with retry logic in benchmark-action/github-action-benchmark, concurrent git push operations would still risk conflicts. Thanks for the clear explanation!


✏️ Learnings added
Learnt from: yamadashy
URL: https://github.com/yamadashy/repomix/pull/1449

Timestamp: 2026-04-11T09:20:19.067Z
Learning: In the `perf-benchmark-history.yml` GitHub Actions workflow for yamadashy/repomix, the concurrency group is intentionally NOT scoped by branch (`github.ref`). Both `main` and `perf/auto-perf-tuning` branches push benchmark results to the same `gh-pages` branch, so serializing all runs (global concurrency group) is a deliberate design choice to prevent `git push` race conditions. Do not suggest scoping concurrency by branch for this workflow.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: chenrui333
Repo: yamadashy/repomix PR: 151
File: .github/workflows/release.yml:9-10
Timestamp: 2024-11-02T17:45:05.140Z
Learning: The maintainer prefers not to add job controls like timeout limits or concurrency controls to the release workflow.

yamadashy merged commit 247646d into main on Apr 11, 2026
56 checks passed
yamadashy deleted the ci/perf-benchmark-auto-perf-tuning branch on April 11, 2026 at 09:22
