
perf(metrics): Warm up all metrics worker threads in parallel#1374

Merged
yamadashy merged 5 commits into main from perf/warm-up-all-metrics-worker-threads on Apr 3, 2026

Conversation

Owner

@yamadashy yamadashy commented Apr 3, 2026

Previously, the metrics warmup fired only a single task, so only one worker thread had gpt-tokenizer pre-loaded. Additional threads had to cold-start the expensive gpt-tokenizer dynamic import when real metrics tasks arrived.

Now createMetricsTaskRunner fires one warmup task per maxThreads, so every worker thread has gpt-tokenizer loaded before metrics calculation begins. The warmup logic is encapsulated inside createMetricsTaskRunner, keeping the packager clean — it only receives a taskRunner and warmupPromise.
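The per-thread warmup described above can be sketched as follows. This is a hypothetical shape, not the repository's actual code: the real implementation computes maxThreads via getWorkerThreadCount internally, while here initTaskRunner and maxThreads are passed as parameters for illustration.

```typescript
// Sketch of the warmup pattern described above. TaskRunner, TokenEncoding,
// and the parameter list are assumptions made for this example.
type TokenEncoding = string;

interface TaskRunner {
  run(task: { content: string; encoding: TokenEncoding }): Promise<number>;
  cleanup(): Promise<void>;
}

interface MetricsTaskRunnerWithWarmup {
  taskRunner: TaskRunner;
  warmupPromise: Promise<unknown>;
}

const createMetricsTaskRunner = (
  initTaskRunner: (numOfTasks: number) => TaskRunner,
  numOfTasks: number,
  encoding: TokenEncoding,
  maxThreads: number,
): MetricsTaskRunnerWithWarmup => {
  const taskRunner = initTaskRunner(numOfTasks);
  // Fire one warmup task per thread so every worker pre-loads gpt-tokenizer.
  // Errors are swallowed: warmup is best-effort and must not abort packing.
  const warmupPromise = Promise.all(
    Array.from({ length: maxThreads }, () =>
      taskRunner.run({ content: '', encoding }).catch(() => 0),
    ),
  );
  return { taskRunner, warmupPromise };
};
```

The caller destructures both values, lets the warmup run in the background, and awaits it only when metrics work is imminent.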

Checklist

  • Run npm run test
  • Run npm run lint


Contributor

coderabbitai bot commented Apr 3, 2026

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 4c46f011-89e8-49a3-b2fb-864bc83ac955


📝 Walkthrough

This PR refactors metrics worker pool initialization by modifying createMetricsTaskRunner to accept a TokenEncoding parameter and return both a TaskRunner and warmupPromise that pre-warms worker threads in parallel. The packager and all test mocks are updated to destructure and handle the new return structure.

Changes

Core Metrics Implementation (src/core/metrics/calculateMetrics.ts)
Updated the createMetricsTaskRunner signature to accept TokenEncoding, compute maxThreads via getWorkerThreadCount, and return MetricsTaskRunnerWithWarmup containing both the initialized taskRunner and a warmupPromise that pre-runs tasks on all worker threads.

Packager Wiring (src/core/packager.ts)
Modified to destructure taskRunner and warmupPromise from the new createMetricsTaskRunner return value and await the warmup promise before proceeding with packaging.

Test Mocks (tests/core/packager.test.ts, tests/core/packager/diffsFunctionality.test.ts, tests/core/packager/splitOutput.test.ts, tests/integration-tests/packager.test.ts)
Updated createMetricsTaskRunner mock implementations to return a nested taskRunner object containing run and cleanup methods, plus an immediately resolved warmupPromise.
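The mock shape described above can be hand-rolled as a minimal sketch. The names and concrete values here are illustrative, not the repository's actual test code (which uses vitest spies); only the nested structure and the already-resolved warmupPromise mirror the PR's contract.

```typescript
// Minimal stand-in for the createMetricsTaskRunner mock shape described
// above: a nested taskRunner plus an immediately resolved warmupPromise.
interface MetricsTaskRunnerMock {
  taskRunner: {
    run: (task: { content: string; encoding: string }) => Promise<number>;
    cleanup: () => Promise<void>;
  };
  warmupPromise: Promise<number>;
}

const createMetricsTaskRunnerMock = (): MetricsTaskRunnerMock => ({
  taskRunner: {
    run: async (_task) => 0,         // pretend every task counts 0 tokens
    cleanup: async () => {},         // no real worker pool to tear down
  },
  warmupPromise: Promise.resolve(0), // warmup settles immediately in tests
});
```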

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

  • PR #1302: Modifies the same metrics initialization flow and worker pool warm-up pattern in the packager.
  • PR #746: Changes getWorkerThreadCount implementation which is now imported and used in the metrics warmup logic.
  • PR #1350: Introduces TokenEncoding types and token-counter async changes that this PR now depends on and propagates through the metrics system.
🚥 Pre-merge checks: ✅ 3 passed
  • Title check ✅ Passed: The title accurately and concisely summarizes the main objective of warming up all metrics worker threads in parallel to improve performance.
  • Description check ✅ Passed: The description explains the problem, solution, and refactoring approach; it includes a completed checklist with both test and lint steps marked as done.
  • Docstring Coverage ✅ Passed: No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.


Contributor

github-actions bot commented Apr 3, 2026

⚡ Performance Benchmark

Latest commit: acbd2c9 fix(metrics): Ensure worker pool cleanup on all error paths
Status: ✅ Benchmark complete!
Ubuntu: 1.56s (±0.02s) → 1.54s (±0.02s) · -0.02s (-1.4%)
macOS: 1.23s (±0.38s) → 1.14s (±0.36s) · -0.09s (-7.6%)
Windows: 2.35s (±0.09s) → 2.36s (±0.46s) · +0.01s (+0.4%)
Details
  • Packing the repomix repository with node bin/repomix.cjs
  • Warmup: 2 runs (discarded), interleaved execution
  • Measurement: 20 runs / 30 on macOS (median ± IQR)
  • Workflow run
History

55215e9 perf(metrics): Warm up all worker threads instead of just one

Ubuntu: 1.55s (±0.01s) → 1.53s (±0.02s) · -0.02s (-1.4%)
macOS: 1.33s (±0.25s) → 1.33s (±0.18s) · -0.01s (-0.4%)
Windows: 1.82s (±0.03s) → 1.81s (±0.02s) · -0.01s (-0.6%)

97a2a41 perf(metrics): Restore early warmup to overlap with pipeline stages

Ubuntu: 1.66s (±0.06s) → 1.65s (±0.07s) · -0.01s (-0.6%)
macOS: 1.45s (±0.13s) → 1.46s (±0.16s) · +0.02s (+1.1%)
Windows: 2.38s (±0.48s) → 2.34s (±0.47s) · -0.04s (-1.6%)

8f5ef2c test(metrics): Add unit tests for createMetricsTaskRunner

Ubuntu: 1.58s (±0.01s) → 1.69s (±0.02s) · +0.11s (+7.0%)
macOS: 1.04s (±0.17s) → 1.15s (±0.15s) · +0.11s (+10.4%)
Windows: 1.88s (±0.03s) → 1.98s (±0.03s) · +0.10s (+5.4%)

cfbab61 refactor(metrics): Encapsulate warmup logic in createMetricsTaskRunner

Ubuntu: 1.55s (±0.02s) → 1.66s (±0.03s) · +0.11s (+6.9%)
macOS: 0.87s (±0.09s) → 0.93s (±0.09s) · +0.07s (+7.7%)
Windows: 1.94s (±0.03s) → 2.06s (±0.04s) · +0.12s (+6.0%)

022050e fix(metrics): Address review feedback on warmup and packager structure

Ubuntu: 1.45s (±0.01s) → 1.56s (±0.02s) · +0.11s (+7.4%)
macOS: 1.13s (±0.67s) → 1.36s (±0.74s) · +0.23s (+20.4%)
Windows: 1.87s (±0.01s) → 1.98s (±0.02s) · +0.11s (+5.7%)

1b9c2a0 refactor(metrics): Encapsulate warmup logic in createMetricsTaskRunner

Ubuntu: 1.53s (±0.02s) → 1.52s (±0.02s) · -0.01s (-0.8%)
macOS: 0.85s (±0.06s) → 0.88s (±0.17s) · +0.03s (+3.2%)
Windows: 1.93s (±0.53s) → 1.91s (±0.54s) · -0.02s (-1.1%)

23abf8a refactor(metrics): Encapsulate warmup logic in createMetricsTaskRunner

Ubuntu: 1.47s (±0.02s) → 1.53s (±0.03s) · +0.06s (+4.0%)
macOS: 1.08s (±0.17s) → 1.13s (±0.08s) · +0.05s (+4.8%)
Windows: 1.86s (±0.01s) → 1.90s (±0.04s) · +0.03s (+1.8%)


codecov bot commented Apr 3, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 87.37%. Comparing base (d7a8979) to head (acbd2c9).
⚠️ Report is 6 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1374      +/-   ##
==========================================
+ Coverage   87.29%   87.37%   +0.07%     
==========================================
  Files         115      115              
  Lines        4369     4371       +2     
  Branches     1015     1015              
==========================================
+ Hits         3814     3819       +5     
+ Misses        555      552       -3     

☔ View full report in Codecov by Sentry.

Contributor

@devin-ai-integration devin-ai-integration bot left a comment


✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no potential bugs to report.

View in Devin Review to see 3 additional findings.


@claude

This comment has been minimized.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the metrics task runner initialization to support warming up all worker threads in parallel, improving performance by overlapping module loading with other pipeline stages. It introduces the MetricsTaskRunnerWithWarmup interface and updates createMetricsTaskRunner to return both the task runner and a warmup promise. Corresponding updates were made to the packager and test suites to accommodate this new structure. I have no feedback to provide as the changes are consistent and well-tested.

coderabbitai[bot]

This comment was marked as resolved.

@yamadashy yamadashy force-pushed the perf/warm-up-all-metrics-worker-threads branch from 23abf8a to 1b9c2a0 on April 3, 2026 at 05:41

cloudflare-workers-and-pages bot commented Apr 3, 2026

Deploying repomix with Cloudflare Pages

Latest commit: acbd2c9
Status: ✅  Deploy successful!
Preview URL: https://5e7cbe2a.repomix.pages.dev
Branch Preview URL: https://perf-warm-up-all-metrics-wor.repomix.pages.dev


@claude

This comment has been minimized.

@yamadashy yamadashy force-pushed the perf/warm-up-all-metrics-worker-threads branch from 022050e to 490b1bd on April 3, 2026 at 05:56
Move worker thread warmup from packager into createMetricsTaskRunner,
which now returns both a taskRunner and warmupPromise. This keeps the
packager clean — it no longer needs to know warmup implementation details.

Also:
- Skip metrics worker pool creation on skill-generation path where
  it is unused
- Await warmupPromise in finally block before cleanup to prevent
  tearing down workers during initialization

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@yamadashy yamadashy force-pushed the perf/warm-up-all-metrics-worker-threads branch from 490b1bd to cfbab61 on April 3, 2026 at 05:57
Contributor

claude bot commented Apr 3, 2026

Code Review — PR #1374

Overall this is a solid refactoring that improves encapsulation and correctly avoids unnecessary worker pool creation on the skill path. A few items worth discussing:

1. PR title is misleading (minor)

The title says "Warm up all metrics worker threads in parallel" but the final implementation warms up only one thread (multi-thread warmup was reverted in commit 3 due to IPC overhead regression). Consider updating the title to reflect the actual change, e.g. perf(metrics): Encapsulate metrics warmup and skip worker pool for skill path.

2. Warmup overlap window reduced compared to before (performance)

Details

Before this PR, the worker pool was created and warmup fired early — before security check and file processing. The warmup ran concurrently with those stages, giving it a large window to complete before metrics were needed.

After this PR, the worker pool is created after security check + file processing (line 168), and the warmup is awaited at line 183 before output generation starts. The warmup now only overlaps with the trivial synchronous filePathsByRoot construction (lines 177-180).

This means the gpt-tokenizer loading no longer gets the "free" overlap with security check + file processing that it had before. On repos where security check takes significant time, this could be a measurable regression.

The comment on line 167 says "overlap gpt-tokenizer loading with output generation" but the warmup is awaited at line 183 before produceOutput starts at line 188, so there's no overlap with output generation either.

Suggestion: If the goal is to keep warmup encapsulated in createMetricsTaskRunner while maintaining the original overlap, consider creating the metrics runner earlier (before security+processing) but still short-circuit it for the skill path. Or remove the await metricsWarmupPromise at line 183 since calculateMetrics will naturally wait for the warmup task to complete as part of the task queue.

3. Skipping worker pool on skill path is a clear win

The early return at line 146 that avoids creating the metrics worker pool for skill generation is a good optimization — no resources wasted on unused threads.

4. Cleanup pattern in finally is correct

await metricsWarmupPromise.catch(() => {});
await metricsTaskRunner.cleanup();

Awaiting the warmup before cleanup prevents tearing down workers during initialization. The .catch(() => {}) correctly prevents unhandled rejection if warmup failed.

5. Test mocks updated but no new test scenarios

The test mocks are updated to match the new return shape, which is correct. However, there's no test coverage for:

  • The skill generation early-return path (verifying createMetricsTaskRunner is NOT called)
  • Warmup promise rejection propagation
  • The finally block cleanup ordering

These aren't blockers but would strengthen confidence in the new flow.


Summary: The encapsulation and skill-path optimization are good. The main concern is the reduced warmup overlap window (item 2) which may negate the original performance benefit. Worth verifying with a benchmark on a larger repo.

🤖 Generated with Claude Code

Contributor

claude bot commented Apr 3, 2026

Code Review — Round 3

This review builds on two prior rounds. The structural improvements are solid — the encapsulation and skill-path optimization are good changes. However, there are a few remaining concerns worth addressing.

1. PR title/description does not match the implementation

The PR title says "Warm up all metrics worker threads in parallel" and the description says "fires one warmup task per maxThreads." The actual code fires one warmup task:

const warmupPromise = taskRunner.run({ content: '', encoding }).catch(() => 0);

There is no loop, no getWorkerThreadCount call, no Promise.all over multiple tasks. Only one worker thread gets pre-warmed. The JSDoc comment correctly says "trigger a single warmup task" — this contradicts the PR body. Either update the title/description to reflect a single-task warmup, or implement the multi-task warmup described.

2. The warmup no longer overlaps with any meaningful pipeline stage

Details

Before this PR: The warmup was fired early — before collectFiles, getGitDiffs, getGitLogs, validateFileSafety, and processFiles. It ran in the background during all of those stages.

After this PR: The warmup is created at line 168 (after all those stages complete), and then immediately awaited at line 183 — before produceOutput starts. The comment says "overlap gpt-tokenizer loading with output generation," but the await prevents that overlap. The warmup is effectively sequential.

This is likely why benchmarks showed regression (+4% Ubuntu, +4.8% macOS). If the intent is to overlap warmup with output generation, remove the await metricsWarmupPromise at line 183 — calculateMetrics already receives the pre-warmed taskRunner and the warmup will have settled by the time real tasks are dispatched. If the await is intentional (to guarantee warmup completion), then the "overlap" framing in the comment is misleading.

3. Promise<unknown> should be more precise

Details

warmupPromise: Promise<unknown> in MetricsTaskRunnerWithWarmup could be Promise<number> (since .catch(() => 0) guarantees a number resolution) or Promise<void> to signal callers shouldn't use the resolved value.

4. Test coverage gaps

Details
  • No test asserts createMetricsTaskRunner is called with the correct encoding argument. A regression that drops or corrupts the encoding would pass all tests.
  • No unit test for createMetricsTaskRunner itself (accounts for Codecov's 5 missing lines / 28.57% patch coverage).
  • No test verifies the skill-path early return skips worker pool creation (expect(mockDeps.createMetricsTaskRunner).not.toHaveBeenCalled()).
  • No test exercises the warmup rejection path (warmupPromise: Promise.reject(...) → verify pack() still resolves).

The encapsulation refactor and skill-path optimization are genuine improvements and the code structure is clean. The main concern is item #2 — the warmup timing regression — which should be investigated given the benchmark results show this is measurably slower than before.

🤖 Generated with Claude Code

yamadashy and others added 2 commits April 3, 2026 16:06
Cover the new factory function: return shape validation, warmup task
payload verification, and error swallowing behavior.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Move createMetricsTaskRunner back before collectFiles/securityCheck so
gpt-tokenizer initialization overlaps with those stages. The previous
position (after skill-path check) put warmup on the critical path,
causing benchmark regression. On the skill path, the worker pool is
cleaned up before packSkill runs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
devin-ai-integration[bot]

This comment was marked as resolved.

Contributor

claude bot commented Apr 3, 2026

Code Review

Thanks for the encapsulation improvement — moving warmup into createMetricsTaskRunner and returning { taskRunner, warmupPromise } is a clean refactor that keeps the packager focused. The new tests and the encoding parameter addition are good improvements.

However, I found two issues that should be addressed before merging:

1. PR title/description doesn't match implementation

The PR title says "Warm up all metrics worker threads in parallel" and the body claims "fires one warmup task per maxThreads, so every worker thread has gpt-tokenizer loaded". However, the implementation fires exactly one warmup task:

const warmupPromise = taskRunner.run({ content: '', encoding }).catch(() => 0);

The JSDoc correctly says "trigger a single warmup task", which contradicts the PR title. The behavior is functionally equivalent to the old code — just relocated. Either the implementation should be updated to actually warm all threads (e.g. firing maxThreads parallel warmup tasks), or the title/description should be corrected to reflect the actual change (encapsulation refactor).

2. Worker pool leak when pre-try stages throw

The restructuring moved collectFiles, getGitDiffs, getGitLogs, validateFileSafety, and processFiles outside the try/finally block that guarantees metricsTaskRunner.cleanup(). If any of these throw, the worker pool is never cleaned up:

createMetricsTaskRunner()        ← pool created
await collectFiles(...)          ← outside try — can throw
await validateFileSafety(...)    ← outside try — can throw
await processFiles(...)          ← outside try — can throw
// skill path has its own cleanup ✓
try {
  // output + metrics only
} finally {
  await metricsTaskRunner.cleanup()  ← only reached if try is entered
}

The previous code had all these stages inside the same try/finally, so cleanup was always guaranteed. The simplest fix is to move the try back to right after createMetricsTaskRunner.
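The guaranteed-cleanup shape the fix suggests can be sketched as a small helper. withMetricsRunner is a hypothetical name invented for this example, not code from the PR; it only illustrates entering the try/finally immediately after pool creation.

```typescript
// Hypothetical helper illustrating the suggested fix: enter the try/finally
// right after the pool is created, so cleanup runs even when an early
// pipeline stage throws.
interface CleanableRunner {
  cleanup: () => Promise<void>;
}

async function withMetricsRunner<T>(
  runner: CleanableRunner,
  warmupPromise: Promise<unknown>,
  body: () => Promise<T>,
): Promise<T> {
  try {
    return await body(); // collectFiles, validateFileSafety, processFiles, ...
  } finally {
    // Settle the warmup before teardown so workers are never destroyed
    // mid-initialization; warmup errors are best-effort and swallowed.
    await warmupPromise.catch(() => {});
    await runner.cleanup();
  }
}
```

With this shape, a throw inside body still propagates to the caller, but cleanup is always reached first.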

Minor observations
  • Redundant .catch() on warmupPromise: Since warmupPromise is already .catch(() => 0)-guarded inside createMetricsTaskRunner, it can never reject. The additional .catch(() => {}) calls in packager.ts (finally block and skill path) are unnecessary.
  • Double-await of warmupPromise: In the normal path, warmupPromise is awaited at line 187 inside try and again at line 239 in finally. The second is a harmless no-op but adds noise.
  • Skill path regression: The skill path now awaits the warmup promise before cleanup, adding latency for a warmup that's never used. Previously, cleanup would just terminate the worker.
  • Promise<unknown> type: MetricsTaskRunnerWithWarmup.warmupPromise could be Promise<number> (always resolves to 0 on error) or Promise<void> for clearer intent.
  • Test gap: No packager test asserts that cleanup() is actually called, so removing the finally cleanup would go undetected.

🤖 Generated with Claude Code

Fire maxThreads warmup tasks so every worker thread has gpt-tokenizer
loaded before metrics calculation begins. Combined with the early
warmup position (before collectFiles/securityCheck), this eliminates
cold-start latency on all threads without adding to the critical path.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Contributor

claude bot commented Apr 3, 2026

Code Review — PR #1374

perf(metrics): Warm up all metrics worker threads in parallel

Overall this is a well-designed performance optimization. The encapsulation of warmup logic into createMetricsTaskRunner is clean, the interface change is well-justified, and the approach directly solves the stated problem. No blocking issues found.

Noteworthy Findings

1. Test gap: warmup task count not verified

The core behavioral change is firing maxThreads warmup tasks instead of 1. However, the test at tests/core/metrics/calculateMetrics.test.ts only checks that run was called with the correct arguments — it doesn't verify toHaveBeenCalledTimes(maxThreads). Consider adding:

await result.warmupPromise;
const { maxThreads } = getWorkerThreadCount(50);
expect(result.taskRunner.run).toHaveBeenCalledTimes(maxThreads);

Also, with TASKS_PER_THREAD = 100, the test cases using 50 and 10 tasks both yield maxThreads = 1, so the multi-thread warmup path isn't exercised. A test with 200+ tasks would better validate the parallel warmup behavior.

2. CodeRabbit suggestion about surfacing warmup errors — Not needed

CodeRabbit suggested removing the .catch(() => 0) on warmup tasks to surface failures. I disagree — warmup is best-effort optimization. If a warmup task fails, the actual metrics tasks will still execute (they'll just cold-start that thread). Swallowing warmup errors is the correct design choice here since:

  • Warmup failure shouldn't abort the entire pack operation
  • The real metrics tasks will naturally retry the initialization
  • The .catch(() => 0) pattern is consistent and well-tested
3. CodeRabbit suggestion about moving task runner creation after skill check — Not needed

CodeRabbit suggested creating the task runner only on the non-skill path to avoid "wasting" warmups. This would defeat the purpose — the warmup is intentionally fired early to overlap with file collection/security/processing. The skill path's explicit cleanup (await metricsWarmupPromise.catch(() => {}) + cleanup()) handles this correctly and the cost is negligible.

4. CodeRabbit suggestion about asserting encoding argument in packager test — Recommended

Adding expect(mockDeps.createMetricsTaskRunner).toHaveBeenCalledWith(mockFilePaths.length, mockConfig.tokenCount.encoding) to the packager test would protect the new contract. This is a reasonable improvement.

5. Minor: getWorkerThreadCount called redundantly

getWorkerThreadCount(numOfTasks) is called once in createMetricsTaskRunner (to determine warmup count) and again inside initTaskRunner → createWorkerPool (to configure the pool). The cost is negligible (~1μs for os.availableParallelism()), but could be eliminated by passing the computed thread config through. Not worth changing now.

Summary

  • Design & encapsulation: Excellent — warmup logic properly encapsulated
  • Security: No issues
  • Performance: Achieves stated goal effectively
  • Error handling: Robust — warmup errors properly swallowed
  • Resource cleanup: Improved — explicit cleanup in skill path
  • Test coverage: Good, but missing warmup task count assertion
  • Conventions: Fully compliant

LGTM with the minor test suggestion above. Nice optimization!


Reviewed with Claude

Restore try/finally to wrap all code after createMetricsTaskRunner,
matching the original scope. Previously, errors in collectFiles,
getGitDiffs, or validateFileSafety could leak worker threads.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Contributor

claude bot commented Apr 3, 2026

PR Review

Clean, well-scoped performance improvement. The encapsulation of warmup logic inside createMetricsTaskRunner is a clear win — the packager no longer needs to know warmup internals. The finally-based shutdown is correct for all three code paths (normal, skill, error).

Test Gap: Core Contract Untested

The main assertion gap is that no test verifies run is called exactly maxThreads times. The test at calculateMetrics.test.ts:116 uses toHaveBeenCalledWith(...) which passes if run was called at least once — it would still pass if the implementation regressed to a single warmup task.

Consider mocking getWorkerThreadCount to return a deterministic maxThreads and asserting toHaveBeenCalledTimes(maxThreads).

Minor Observations

Details

warmupPromise: Promise<unknown> type — Since callers never consume the resolved value, Promise<void> would communicate intent more clearly.

Redundant getWorkerThreadCount call: createMetricsTaskRunner calls getWorkerThreadCount(numOfTasks) to determine the warmup count, but initTaskRunner → createWorkerPool already calls it internally with the same argument. Not a bug (deterministic), but a duplication of knowledge about pool sizing. Exposing maxThreads from TaskRunner would be cleaner.

Double error suppression in finally — Each warmup task already has .catch(() => 0), so Promise.all cannot reject. The outer .catch(() => {}) in the finally block is dead code. Harmless as belt-and-suspenders, but may confuse future readers into thinking the promise can reject.

Lazy thread spawning caveat — Tinypool spawns workers lazily. Firing maxThreads tasks simultaneously gives the pool the opportunity to saturate all slots, but for very fast tasks (empty string tokenization), a worker could complete and pick up a second task before a new worker is spawned. In practice this is a probabilistic improvement, not a hard guarantee — but it's a clear net positive over the single-task warmup.

Overall: solid change, well-structured. The test gap is the only actionable item.


🤖 Generated with Claude Code

@yamadashy yamadashy merged commit a579381 into main Apr 3, 2026
63 checks passed
@yamadashy yamadashy deleted the perf/warm-up-all-metrics-worker-threads branch on April 3, 2026 at 07:47
yamadashy pushed a commit that referenced this pull request Apr 3, 2026
Merge main's MetricsTaskRunnerWithWarmup encapsulation (PR #1374)
with the branch's batch-only worker approach. Updated warmup calls
to use batch task format and fixed all test mocks.

https://claude.ai/code/session_01FD8WB4DVPvj7tg7Yq5sUvg