feat: add performance testing infrastructure with CDP metrics #9170
christian-byrne merged 10 commits into main from perf/testing-infrastructure
Conversation
Add a permanent, non-failing performance regression detection system:

- PerformanceHelper fixture class using Chrome DevTools Protocol (Performance.getMetrics) to collect RecalcStyleCount, LayoutCount, LayoutDuration, TaskDuration, and JSHeapUsedSize
- @perf Playwright project (Chromium-only, single-threaded, 60s timeout)
- 4 baseline perf tests: canvas idle, mouse sweep, cursor mutations, DOM widget clipping
- perfReporter module that writes test-results/perf-metrics.json
- CI workflow (ci-perf-report.yaml) that runs perf tests, uploads artifacts, downloads baseline from target branch, and posts a sticky PR comment with color-coded deltas
- perf-report.js script for generating markdown comparison tables

The system never fails CI — it posts informational PR comments only. Follows the same patterns as pr-size-report.yaml.

Amp-Thread-ID: https://ampcode.com/threads/T-019c8ed0-59ad-720b-bc4f-6f52dc452844
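At its core, a measurement like this is the delta between two Performance.getMetrics snapshots taken before and after a test interaction. A minimal sketch of that computation in plain TypeScript (the snapshot and result shapes here are illustrative assumptions, not the fixture's exact API; the metric names are the real CDP counter names, whose durations CDP reports in seconds):

```typescript
// Shape of one entry in a CDP Performance.getMetrics response.
interface CdpMetric {
  name: string
  value: number
}

// Pull a named metric out of a getMetrics response, defaulting to 0
// when the browser does not report it.
function metricValue(metrics: CdpMetric[], name: string): number {
  return metrics.find((m) => m.name === name)?.value ?? 0
}

// Difference between a "start" and "stop" snapshot for the counters
// this PR collects. CDP reports durations in seconds, hence the * 1000.
function diffSnapshots(start: CdpMetric[], stop: CdpMetric[]) {
  const delta = (name: string) =>
    metricValue(stop, name) - metricValue(start, name)
  return {
    styleRecalcs: delta('RecalcStyleCount'),
    layouts: delta('LayoutCount'),
    layoutDurationMs: delta('LayoutDuration') * 1000,
    taskDurationMs: delta('TaskDuration') * 1000,
    heapDeltaBytes: delta('JSHeapUsedSize'),
  }
}

// Example: two hypothetical snapshots taken around a test interaction.
const startSnap: CdpMetric[] = [
  { name: 'RecalcStyleCount', value: 100 },
  { name: 'LayoutCount', value: 5 },
  { name: 'TaskDuration', value: 1.0 },
  { name: 'JSHeapUsedSize', value: 1_000_000 },
]
const stopSnap: CdpMetric[] = [
  { name: 'RecalcStyleCount', value: 150 },
  { name: 'LayoutCount', value: 7 },
  { name: 'TaskDuration', value: 1.5 },
  { name: 'JSHeapUsedSize', value: 900_000 },
]
const result = diffSnapshots(startSnap, stopSnap)
```

In the real helper, the snapshots would come from a CDP session opened via Playwright (`page.context().newCDPSession(page)` plus `Performance.enable`); only the arithmetic is shown here.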
🎨 Storybook: ✅ Built — View Storybook
🎭 Playwright: ✅ 547 passed, 0 failed · 5 flaky · 📊 Browser Reports
📝 Walkthrough
Sequence Diagram

sequenceDiagram
participant GH as GH Actions
participant PT as Playwright Tests
participant PM as PerformanceHelper
participant PR as perfReporter
participant Artifact as Artifact Storage
participant Script as perf-report.ts
participant Comment as PR Comment
GH->>PT: start perf-tests job (container)
PT->>PM: init() (open CDP session)
PT->>PM: startMeasuring()
Note over PT,PM: execute test interactions (frames, mouse, clicks)
PT->>PM: stopMeasuring(label)
PM-->>PR: recordMeasurement (write JSON to test-results/perf-temp)
PT->>PR: writePerfReport() → create test-results/perf-metrics.json
PT-->>Artifact: upload perf-metrics artifact
GH->>GH: report job downloads current & baseline artifacts
GH->>Script: run scripts/perf-report.ts (current, baseline)
Script->>Script: compute deltas & render Markdown
Script-->>GH: Markdown report
GH->>Comment: create/update PR comment with sentinel and report
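The "compute deltas & render Markdown" step in the diagram boils down to a percent comparison per metric plus a color-coded table cell. A hedged sketch (the function names, the ±10% emoji thresholds, and the null sentinel for a zero baseline are illustrative choices for this sketch, not the script's actual values):

```typescript
// Percent change from baseline. null is a sentinel for "new activity on a
// zero baseline", which a plain 0 would hide.
function percentDelta(base: number, current: number): number | null {
  if (base === 0) return current === 0 ? 0 : null
  return ((current - base) / base) * 100
}

// Render one color-coded Markdown cell for the sticky PR comment.
// The ±10% thresholds are arbitrary for this sketch.
function formatDeltaCell(pct: number | null): string {
  if (pct === null) return '🔴 new'
  if (pct > 10) return `🔴 +${pct.toFixed(1)}%`
  if (pct < -10) return `🟢 ${pct.toFixed(1)}%`
  return `⚪ ${pct >= 0 ? '+' : ''}${pct.toFixed(1)}%`
}

// One Markdown table row per metric, baseline vs. current.
function deltaRow(name: string, base: number, current: number): string {
  return `| ${name} | ${base} | ${current} | ${formatDeltaCell(percentDelta(base, current))} |`
}
```

The sticky comment is then just these rows joined under a table header, posted with a sentinel marker so the workflow can find and update its own comment on later runs.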
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
📦 Bundle: 4.43 MB gzip ⚪ 0 B
| Category | Size (baseline) | Δ | Description |
|---|---|---|---|
| App Entry Points | 17.9 kB (baseline 17.9 kB) | ⚪ 0 B | Main entry bundles and manifests |
| Graph Workspace | 986 kB (baseline 986 kB) | ⚪ 0 B | Graph editor runtime, canvas, workflow orchestration |
| Views & Navigation | 72.1 kB (baseline 72.1 kB) | ⚪ 0 B | Top-level views, pages, and routed surfaces |
| Panels & Settings | 435 kB (baseline 435 kB) | ⚪ 0 B | Configuration panels, inspectors, and settings screens |
| User & Accounts | 16 kB (baseline 16 kB) | ⚪ 0 B | Authentication, profile, and account management bundles |
| Editors & Dialogs | 736 B (baseline 736 B) | ⚪ 0 B | Modals, dialogs, drawers, and in-app editors |
| UI Components | 47.1 kB (baseline 47.1 kB) | ⚪ 0 B | Reusable component library chunks |
| Data & Services | 2.54 MB (baseline 2.54 MB) | ⚪ 0 B | Stores, services, APIs, and repositories |
| Utilities & Hooks | 55.5 kB (baseline 55.5 kB) | ⚪ 0 B | Helpers, composables, and utility bundles |
| Vendor & Third-Party | 8.84 MB (baseline 8.84 MB) | ⚪ 0 B | External libraries and shared vendor chunks |
| Other | 7.74 MB (baseline 7.74 MB) | ⚪ 0 B | Bundles that do not match a named category |
Actionable comments posted: 8
🧹 Nitpick comments (2)
browser_tests/tests/performance.spec.ts (1)
22-66: "canvas mouse interaction" and "cursor style mutations" tests have identical interaction loops — one is redundant. Both tests perform the exact same 100-step mouse.move sweep with the same trajectory formula, producing identical measurements under different labels. If the intent is to measure distinct phenomena, the interaction pattern needs to differ (e.g., hover-only vs. move-with-clicks, or different trajectories).
♻️ Suggested approach

- test('cursor style mutations during mouse sweep', async ({ comfyPage }) => {
-   await comfyPage.workflow.loadWorkflow('default')
-   await comfyPage.perf.startMeasuring()
-
-   const canvas = comfyPage.canvas
-   const box = await canvas.boundingBox()
-   if (box) {
-     // Sweep mouse across entire canvas — crosses nodes, empty space, slots
-     for (let i = 0; i < 100; i++) {
-       await comfyPage.page.mouse.move(
-         box.x + (box.width * i) / 100,
-         box.y + (box.height * (i % 3)) / 3
-       )
-     }
-   }
-
-   const m = await comfyPage.perf.stopMeasuring('cursor-sweep')
-   recordMeasurement(m)
-   console.log(`Cursor sweep: ${m.styleRecalcs} style recalcs`)
- })

If this scenario is intentionally distinct, document what makes it different from canvas-mouse-sweep.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@browser_tests/tests/performance.spec.ts` around lines 22 - 66, Two tests ("canvas mouse interaction style recalculations" using comfyPage.perf.stopMeasuring('canvas-mouse-sweep') and "cursor style mutations during mouse sweep" using comfyPage.perf.stopMeasuring('cursor-sweep')) currently run the exact same 100-step comfyPage.page.mouse.move trajectory, making the second test redundant; either modify the second test's interaction pattern (e.g., change comfyPage.page.mouse.move trajectory, add hover-only checks, include clicks/drag events, or vary step count/timing) to measure a different phenomenon, or explicitly document in the "cursor style mutations during mouse sweep" test why it is intentionally identical to the 'canvas-mouse-sweep' case (update test name/comment) so reviewers understand the distinction.

browser_tests/fixtures/helpers/PerformanceHelper.ts (1)
3-11: PerfSnapshot is exported but has no external consumers — consider keeping it internal. Only PerfMeasurement is imported by perfReporter.ts. PerfSnapshot is only used within this file.
♻️ Proposed change

-export interface PerfSnapshot {
+interface PerfSnapshot {

Based on learnings: "Do not export declarations unless they are actually used elsewhere in the codebase. Keep the public API surface minimal."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@browser_tests/fixtures/helpers/PerformanceHelper.ts` around lines 3 - 11, The PerfSnapshot interface is exported but not used outside this file (only PerfMeasurement is imported by perfReporter.ts); remove the export from PerfSnapshot to keep the public API minimal and internalize it. Edit the declaration of PerfSnapshot in PerformanceHelper.ts (the interface named PerfSnapshot) to be non-exported, ensure any internal references in this file still compile, and keep exporting only PerfMeasurement for external use.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/ci-perf-report.yaml:
- Around line 50-55: The Upload perf metrics step (uses:
actions/upload-artifact@v6, name: "Upload perf metrics") should explicitly set
if-no-files-found: warn and run unconditionally by adding if: always(), and the
corresponding download step (uses: actions/download-artifact@v7, likely named
"Download PR perf metrics") should tolerate a missing artifact by adding
continue-on-error: true (or if-no-files-found: warn if supported); update those
two steps so a missing test-results/perf-metrics.json does not cause the report
job to fail with an unhelpful error.
- Around line 69-71: Replace the floating action reference "uses:
actions/setup-node@v4" with a pinned commit SHA for the setup-node action (i.e.,
change the uses line to the exact "actions/setup-node@<commit-sha>"
corresponding to the desired v4 release) so the workflow is immutably pinned;
keep the existing node-version: 22 input and update only the uses value to the
full commit SHA obtained from the actions/setup-node v4 repo tags/releases.
In `@browser_tests/globalTeardown.ts`:
- Around line 10-11: writePerfReport() in globalTeardown is a no-op because
worker processes recordMeasurement() into a different module instance; instead,
change the measurement pipeline so main process collects them: either implement
a Playwright custom reporter (use onTestEnd in a reporter to receive per-test
results and aggregate measurements from testInfo.attach) or have workers write
per-test temp files (from tests calling recordMeasurement or using
testInfo.attach) and modify globalTeardown to read and aggregate those temp
files before calling writePerfReport/aggregation logic; update references in
perfReporter.ts, globalTeardown, and test code so measurements are produced in
workers and consumed/aggregated in the main process.
In `@browser_tests/helpers/perfReporter.ts`:
- Around line 13-32: The module-level measurements array (measurements) is
populated in worker processes via recordMeasurement but writePerfReport runs in
the main process so measurements is always empty; replace this pattern with a
Playwright Reporter that collects per-test attachments: implement a PerfReporter
class implementing Reporter with a private measurements array, implement
onTestEnd to read JSON from result.attachments (name e.g. 'perf-measurement')
and push parsed PerfMeasurement into this.measurements, and implement onEnd to
mkdirSync and writeFileSync perf-metrics.json if this.measurements is non-empty;
update tests to call comfyPage.perf.stopMeasuring(...) and use
testInfo.attach('perf-measurement', { body: JSON.stringify(m), contentType:
'application/json' }) so measurements flow from workers to the reporter.
In `@browser_tests/tests/performance.spec.ts`:
- Around line 29-38: The loop that performs mouse moves is skipped when
canvas.boundingBox() returns null, causing stopMeasuring() to record a
misleading zero-interaction measurement; update each occurrence (where
canvas.boundingBox() is used before the mouse.move loop) to explicitly handle
null by either awaiting the canvas to be visible (e.g., retry/wait for selector
or bounding box) or throwing/asserting with a clear error so the test fails
instead of recording empty metrics; reference the canvas.boundingBox() call, the
comfyPage.page.mouse.move loop, and stopMeasuring() so you add the null-check
and an explicit fail/wait before attempting the interactions.
In `@scripts/perf-report.js`:
- Line 1: This new JS file violates the "no new JavaScript files" rule—rename
scripts/perf-report.js to scripts/perf-report.ts, remove the top-line "//
`@ts-check`" and convert any JSDoc-typed variables into proper TypeScript types
(update function signatures / variable declarations in this file), and ensure
the project's run/test scripts or README invoke it via tsx or ts-node (or your
project's TypeScript runner). Locate the file by name (scripts/perf-report.js →
scripts/perf-report.ts) and update imports/exports inside the file to use
TypeScript syntax so it compiles cleanly with the project's TypeScript
toolchain.
- Line 44: ESLint `no-console` is causing CI failures due to console.log usage
in scripts/perf-report.js; either add a file-level disable (/* eslint-disable
no-console */) at the top of scripts/perf-report.js or replace each console.log
call with process.stdout.write (ensuring you append "\n" where needed) — update
the console.log occurrences referenced in the diff and the other similar call
(the second console.log in the file) accordingly.
- Around line 78-93: The current percent calculations (recalcPct, layoutPct,
taskPct) silently return 0 when the baseline is zero, hiding new regressions;
update each calculation (e.g., recalcPct for base.styleRecalcs, layoutPct for
base.layouts) to detect base === 0 and m.* > 0 and set a sentinel (e.g., null or
Infinity) instead of 0; then update formatDelta to recognize that sentinel and
render a clear "new/regression" marker (or arrow) so lines.push uses
formatDelta(pct) and reports new non-zero measurements when baseline was zero.
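The measurement-pipeline fix the prompts above describe (workers attach per-test JSON; a reporter aggregates in the main process) can be sketched without depending on Playwright itself. Here TestResultLike is an assumed stand-in for Playwright's TestResult, and PerfReporterCore is illustrative, not the repo's actual class:

```typescript
import { Buffer } from 'node:buffer'

interface PerfMeasurement {
  name: string
  styleRecalcs: number
  layouts: number
  taskDurationMs: number
}

// Minimal stand-in for the attachment entries Playwright hands to
// Reporter.onTestEnd via result.attachments (shape assumed here).
interface TestResultLike {
  attachments: { name: string; contentType: string; body?: Buffer }[]
}

// Aggregates 'perf-measurement' attachments across tests. A reporter runs
// in the main process, so this array actually accumulates, unlike a
// module-level array populated inside worker processes.
class PerfReporterCore {
  private measurements: PerfMeasurement[] = []

  onTestEnd(result: TestResultLike): void {
    for (const a of result.attachments) {
      if (a.name !== 'perf-measurement' || !a.body) continue
      this.measurements.push(JSON.parse(a.body.toString('utf-8')))
    }
  }

  // The real reporter's onEnd would write test-results/perf-metrics.json;
  // returning the report keeps this sketch side-effect free.
  onEnd(): { measurements: PerfMeasurement[] } {
    return { measurements: this.measurements }
  }
}

// Demo: one test attaches a measurement, another attaches only a screenshot.
const reporter = new PerfReporterCore()
reporter.onTestEnd({
  attachments: [
    {
      name: 'perf-measurement',
      contentType: 'application/json',
      body: Buffer.from(
        JSON.stringify({ name: 'canvas-idle', styleRecalcs: 122, layouts: 0, taskDurationMs: 403 })
      ),
    },
  ],
})
reporter.onTestEnd({ attachments: [{ name: 'screenshot', contentType: 'image/png' }] })
const report = reporter.onEnd()
```

On the worker side, a test would produce the attachment with testInfo.attach('perf-measurement', { body: JSON.stringify(m), contentType: 'application/json' }), matching the prompt above.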
---
Nitpick comments:
In `@browser_tests/fixtures/helpers/PerformanceHelper.ts`:
- Around line 3-11: The PerfSnapshot interface is exported but not used outside
this file (only PerfMeasurement is imported by perfReporter.ts); remove the
export from PerfSnapshot to keep the public API minimal and internalize it. Edit
the declaration of PerfSnapshot in PerformanceHelper.ts (the interface named
PerfSnapshot) to be non-exported, ensure any internal references in this file
still compile, and keep exporting only PerfMeasurement for external use.
In `@browser_tests/tests/performance.spec.ts`:
- Around line 22-66: Two tests ("canvas mouse interaction style recalculations"
using comfyPage.perf.stopMeasuring('canvas-mouse-sweep') and "cursor style
mutations during mouse sweep" using
comfyPage.perf.stopMeasuring('cursor-sweep')) currently run the exact same
100-step comfyPage.page.mouse.move trajectory, making the second test redundant;
either modify the second test's interaction pattern (e.g., change
comfyPage.page.mouse.move trajectory, add hover-only checks, include clicks/drag
events, or vary step count/timing) to measure a different phenomenon, or
explicitly document in the "cursor style mutations during mouse sweep" test why
it is intentionally identical to the 'canvas-mouse-sweep' case (update test
name/comment) so reviewers understand the distinction.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
.github/workflows/ci-perf-report.yaml
browser_tests/fixtures/ComfyPage.ts
browser_tests/fixtures/helpers/PerformanceHelper.ts
browser_tests/globalTeardown.ts
browser_tests/helpers/perfReporter.ts
browser_tests/tests/performance.spec.ts
playwright.config.ts
scripts/perf-report.js
- Replace console.log with process.stdout.write in perf-report.js (oxlint no-console)
- Pin actions/setup-node to v6 to match .pinact.yaml ignore list
- Add if: always() and if-no-files-found: warn to artifact upload
- Add continue-on-error: true to PR metrics download
- Fix measurement pipeline: write temp files per-test, aggregate in globalTeardown
- Remove duplicate cursor-sweep test (identical to canvas-mouse-sweep)
- Make PerfSnapshot interface non-exported (internal only)
- Throw on null boundingBox() instead of silently recording empty metrics

Amp-Thread-ID: https://ampcode.com/threads/T-019c8ef7-b4c9-70cc-b58f-99c036dee788
♻️ Duplicate comments (1)
.github/workflows/ci-perf-report.yaml (1)
34-36: ⚠️ Potential issue | 🟠 Major
Pin actions/checkout and actions/setup-node to immutable SHAs.
Floating tags are mutable and can change without review; the rest of the workflow already uses pinned SHAs. Please pin both to the exact commit SHA for the chosen v6 release.

#!/bin/bash
set -euo pipefail
for repo in actions/checkout actions/setup-node; do
  echo "== $repo v6 tags =="
  curl -s "https://api.github.com/repos/$repo/tags" \
    | jq -r '.[] | select(.name | test("^v6")) | "\(.name) \(.commit.sha)"' \
    | head -n 5
done

Also applies to: 70-72
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci-perf-report.yaml around lines 34 - 36, Replace the floating tags for both actions with immutable commit SHAs: change uses: actions/checkout@v6 to uses: actions/checkout@<commit-sha-for-v6> and change uses: actions/setup-node@v6 to uses: actions/setup-node@<commit-sha-for-v6>; update both occurrences (the existing uses entries for actions/checkout and actions/setup-node) to the exact commit SHAs you looked up so the workflow pins the specific v6 commits rather than a mutable tag.
🧹 Nitpick comments (1)
browser_tests/fixtures/helpers/PerformanceHelper.ts (1)
46-47: Prefer a nested function declaration for get.
This aligns with the repo preference for pure function declarations and keeps helpers consistent.
♻️ Suggested tweak

- const get = (name: string) =>
-   metrics.find((m) => m.name === name)?.value ?? 0
+ function get(name: string) {
+   return metrics.find((m) => m.name === name)?.value ?? 0
+ }

Based on learnings: Prefer pure function declarations over function expressions (e.g., use function foo() { ... } instead of const foo = () => { ... }) for pure functions in the repository. Function declarations are more functional-leaning, offer better hoisting clarity, and can improve readability and tooling consistency.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@browser_tests/fixtures/helpers/PerformanceHelper.ts` around lines 46 - 47, Replace the arrow-function helper with a nested function declaration: change the const get = (name: string) => metrics.find(...)?.value ?? 0 into a function get(name: string): number { ... } that returns metrics.find((m) => m.name === name)?.value ?? 0; keep it nested where metrics is in scope and preserve the return type and behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In @.github/workflows/ci-perf-report.yaml:
- Around line 34-36: Replace the floating tags for both actions with immutable
commit SHAs: change uses: actions/checkout@v6 to uses:
actions/checkout@<commit-sha-for-v6> and change uses: actions/setup-node@v6 to
uses: actions/setup-node@<commit-sha-for-v6>; update both occurrences (the
existing uses entries for actions/checkout and actions/setup-node) to the exact
commit SHAs you looked up so the workflow pins the specific v6 commits rather
than a mutable tag.
---
Nitpick comments:
In `@browser_tests/fixtures/helpers/PerformanceHelper.ts`:
- Around line 46-47: Replace the arrow-function helper with a nested function
declaration: change the const get = (name: string) => metrics.find(...)?.value
?? 0 into a function get(name: string): number { ... } that returns
metrics.find((m) => m.name === name)?.value ?? 0; keep it nested where metrics
is in scope and preserve the return type and behavior.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
.github/workflows/ci-perf-report.yaml
browser_tests/fixtures/helpers/PerformanceHelper.ts
browser_tests/helpers/perfReporter.ts
browser_tests/tests/performance.spec.ts
scripts/perf-report.js
🚧 Files skipped from review as they are similar to previous changes (3)
- browser_tests/tests/performance.spec.ts
- browser_tests/helpers/perfReporter.ts
- scripts/perf-report.js
⚡ Performance Report
No baseline found — showing absolute values.
Raw data
{
"timestamp": "2026-02-26T03:32:32.197Z",
"gitSha": "d8b1d4ac8db129e261a5f35de9aea33b69ab0967",
"branch": "perf/testing-infrastructure",
"measurements": [
{
"name": "canvas-idle",
"durationMs": 2034.9340000000211,
"styleRecalcs": 122,
"styleRecalcDurationMs": 16.698999999999998,
"layouts": 0,
"layoutDurationMs": 0,
"taskDurationMs": 403.1600000000001,
"heapDeltaBytes": -3781536
},
{
"name": "canvas-mouse-sweep",
"durationMs": 1806.8290000000218,
"styleRecalcs": 172,
"styleRecalcDurationMs": 41.969,
"layouts": 12,
"layoutDurationMs": 3.207,
"taskDurationMs": 771.304,
"heapDeltaBytes": -2803328
},
{
"name": "dom-widget-clipping",
"durationMs": 577.5359999999807,
"styleRecalcs": 43,
"styleRecalcDurationMs": 13.268,
"layouts": 0,
"layoutDurationMs": 0,
"taskDurationMs": 351.864,
"heapDeltaBytes": 6665424
}
]
}
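The raw data above pins down the report schema that a later review asks to centralize in a shared module (the path types/perf.ts is the reviewer's example). A sketch of what such a module could contain, derived directly from the JSON fields above:

```typescript
// One labeled measurement window; field names match the perf-metrics.json
// payload shown above.
export interface PerfMeasurement {
  name: string
  durationMs: number
  styleRecalcs: number
  styleRecalcDurationMs: number
  layouts: number
  layoutDurationMs: number
  taskDurationMs: number
  heapDeltaBytes: number
}

// Top-level report written by writePerfReport() and read by perf-report.ts.
export interface PerfReport {
  timestamp: string
  gitSha: string
  branch: string
  measurements: PerfMeasurement[]
}

// Example value typed against the schema, taken from the report above.
export const example: PerfReport = {
  timestamp: '2026-02-26T03:32:32.197Z',
  gitSha: 'd8b1d4ac8db129e261a5f35de9aea33b69ab0967',
  branch: 'perf/testing-infrastructure',
  measurements: [
    {
      name: 'canvas-idle',
      durationMs: 2034.934,
      styleRecalcs: 122,
      styleRecalcDurationMs: 16.699,
      layouts: 0,
      layoutDurationMs: 0,
      taskDurationMs: 403.16,
      heapDeltaBytes: -3781536,
    },
  ],
}
```

With the interfaces in one place, both the browser_tests helpers and scripts/perf-report.ts can import them, removing the type-drift risk the review flags.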
Actionable comments posted: 1
🧹 Nitpick comments (1)
browser_tests/fixtures/helpers/PerformanceHelper.ts (1)
46-47: Use a function declaration for the pure helper.
This helper is pure and doesn’t require closure semantics, so prefer a function declaration.
♻️ Suggested refactor
- const get = (name: string) =>
-   metrics.find((m) => m.name === name)?.value ?? 0
+ function get(name: string): number {
+   return metrics.find((m) => m.name === name)?.value ?? 0
+ }

Based on learnings: Prefer pure function declarations over function expressions (e.g., use function foo() { ... } instead of const foo = () => { ... }) for pure functions in the repository. Function declarations are more functional-leaning, offer better hoisting clarity, and can improve readability and tooling consistency. Apply this guideline across TypeScript files in Comfy-Org/ComfyUI_frontend, including story and UI component code, except where a function expression is semantically required (e.g., callbacks, higher-order functions with closures).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@browser_tests/fixtures/helpers/PerformanceHelper.ts` around lines 46 - 47, Replace the const arrow helper with a pure function declaration: rename the variable-style helper "get" (currently declared as const get = (name: string) => ...) to a function declaration function get(name: string) { ... } that returns metrics.find((m) => m.name === name)?.value ?? 0; update any local references if needed so the semantics stay identical but use the function declaration form for clarity and hoisting.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@browser_tests/fixtures/helpers/PerformanceHelper.ts`:
- Line 45: The call to this.cdp.send('Performance.getMetrics') is untyped so
metrics becomes implicit any; define a TypeScript response type for the CDP
Performance.getMetrics result (e.g., an interface with metrics: Array<{name:
string; value: number; [key: string]: unknown}>) and use that type as the
generic/return type on the CDPSession.send call in PerformanceHelper (the same
method where this.cdp.send(...) is invoked) so the result is strongly typed
instead of any.
---
Nitpick comments:
In `@browser_tests/fixtures/helpers/PerformanceHelper.ts`:
- Around line 46-47: Replace the const arrow helper with a pure function
declaration: rename the variable-style helper "get" (currently declared as const
get = (name: string) => ...) to a function declaration function get(name:
string) { ... } that returns metrics.find((m) => m.name === name)?.value ?? 0;
update any local references if needed so the semantics stay identical but use
the function declaration form for clarity and hoisting.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
browser_tests/fixtures/helpers/PerformanceHelper.ts
scripts/perf-report.js
🚧 Files skipped from review as they are similar to previous changes (1)
- scripts/perf-report.js
- Convert perf-report.js to TypeScript (no new JS files rule)
- Fix zero-baseline delta logic to flag regressions when baseline is 0
- Type CDP Performance.getMetrics response to avoid implicit any
- Convert arrow function to function declaration in PerformanceHelper

Amp-Thread-ID: https://ampcode.com/threads/T-019c9204-b28e-7739-83ab-e1737a1fd581
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/ci-perf-report.yaml:
- Around line 63-68: The job-level permissions for the report job only set
pull-requests: write and are missing contents: read required by the
actions/checkout@v6 step; update the report job's permissions block to include
contents: read alongside pull-requests: write so the checkout action has the
necessary read access.
In `@scripts/perf-report.ts`:
- Around line 76-79: The code path where base is falsy pushes only styleRecalcs
and layouts rows and omits the task-duration row, so update the block that
checks if (!base) to also push a task-duration line into lines (matching the
same markdown format used elsewhere) using the measurement's duration property
(e.g., m.taskDuration) so the new-measurement case emits rows for task duration,
styleRecalcs, and layouts; reference the variables base, m, and lines to locate
and modify the code.
- Around line 59-63: JSON.parse of the metrics files can throw and currently
crashes the script; wrap the file-read + JSON.parse for CURRENT_PATH and
BASELINE_PATH in try/catch (or extract a small safeParse function) so malformed
JSON is caught, log a warning including the file path and error, and set the
corresponding variable (current or baseline) to a safe default (e.g., an empty
PerfReport object or null for baseline) so the report step continues without
throwing; reference CURRENT_PATH, BASELINE_PATH, PerfReport, current, baseline,
readFileSync, existsSync, and JSON.parse when locating where to add the
try/catch.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
.github/workflows/ci-perf-report.yaml
browser_tests/fixtures/helpers/PerformanceHelper.ts
scripts/perf-report.ts
…dd task-duration for new measurements

Amp-Thread-ID: https://ampcode.com/threads/T-019c93c1-d7c5-77c4-8b87-a72e747c4b2e
Actionable comments posted: 1
♻️ Duplicate comments (1)
scripts/perf-report.ts (1)
59-63: ⚠️ Potential issue | 🟠 Major
JSON.parse on unvalidated file content can crash the report step. Both parse calls (line 59 for current, lines 61-63 for baseline) throw on malformed JSON, which would fail the CI reporting step and break the system's never-fail-CI design.
This was flagged in a previous review and remains unaddressed in the current code.
🛡️ Proposed fix — wrap reads in a safe helper

+function readReport(path: string): PerfReport | null {
+  if (!existsSync(path)) return null
+  try {
+    return JSON.parse(readFileSync(path, 'utf-8')) as PerfReport
+  } catch {
+    return null
+  }
+}
+
 function main() {
-  if (!existsSync(CURRENT_PATH)) {
+  const current = readReport(CURRENT_PATH)
+  if (!current) {
     process.stdout.write(
       '## ⚡ Performance Report\n\nNo perf metrics found. Perf tests may not have run.\n'
     )
     process.exit(0)
   }
-
-  const current: PerfReport = JSON.parse(readFileSync(CURRENT_PATH, 'utf-8'))
-
-  const baseline: PerfReport | null = existsSync(BASELINE_PATH)
-    ? JSON.parse(readFileSync(BASELINE_PATH, 'utf-8'))
-    : null
+  const baseline = readReport(BASELINE_PATH)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/perf-report.ts` around lines 59 - 63, Wrap the raw file reads/JSON.parse calls in a safe helper (e.g., parseJsonFile or safeReadJson) and use it for the current and baseline reads so malformed JSON won't throw; implement parseJsonFile(path: string): T | null that reads with readFileSync and returns parsed JSON on success and null on failure (catching and optionally logging the error), then replace the inline JSON.parse(readFileSync(CURRENT_PATH, 'utf-8')) with parseJsonFile<PerfReport>(CURRENT_PATH) and the baseline branch to parseJsonFile(BASELINE_PATH) (keeping existsSync check) and ensure variables current and baseline handle null results appropriately.
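The safe-read helper described above can be exercised end to end against throwaway files. This sketch is illustrative: parseJsonFile is the prompt's suggested name, and the temp-file demo is not part of the repo.

```typescript
import { existsSync, mkdtempSync, readFileSync, writeFileSync } from 'node:fs'
import { tmpdir } from 'node:os'
import { join } from 'node:path'

interface PerfReport {
  measurements: { name: string }[]
}

// Returns the parsed report, or null when the file is missing or
// malformed, so the CI report step degrades instead of throwing.
function parseJsonFile<T>(path: string): T | null {
  if (!existsSync(path)) return null
  try {
    return JSON.parse(readFileSync(path, 'utf-8')) as T
  } catch {
    return null
  }
}

// Demo against throwaway files: one valid report, one malformed file.
const dir = mkdtempSync(join(tmpdir(), 'perf-report-'))
const goodPath = join(dir, 'current.json')
writeFileSync(goodPath, JSON.stringify({ measurements: [{ name: 'canvas-idle' }] }))
const badPath = join(dir, 'baseline.json')
writeFileSync(badPath, '{ not json')

const current = parseJsonFile<PerfReport>(goodPath)
const baseline = parseJsonFile<PerfReport>(badPath)
const missing = parseJsonFile<PerfReport>(join(dir, 'absent.json'))
```

The caller then only has to branch on null: exit early with the "no metrics" message when current is null, and fall back to absolute values when baseline is null.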
🧹 Nitpick comments (2)
scripts/perf-report.ts (2)
74-75: Measurements removed from PR are not surfaced in the report. The loop iterates only current.measurements. A test renamed or deleted in the PR will disappear silently; the baseline entry that no longer has a PR counterpart is never reported.
♻️ Proposed addition — report missing (removed) measurements

 for (const m of current.measurements) {
   // ... existing per-measurement block ...
 }
+
+// Surface measurements present in baseline but absent from current PR
+for (const b of baseline.measurements) {
+  const found = current.measurements.find((m) => m.name === b.name)
+  if (!found) {
+    lines.push(`| ${b.name}: style recalcs | ${b.styleRecalcs} | — | removed |`)
+    lines.push(`| ${b.name}: layouts | ${b.layouts} | — | removed |`)
+    lines.push(`| ${b.name}: task duration | ${b.taskDurationMs.toFixed(0)}ms | — | removed |`)
+  }
+}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/perf-report.ts` around lines 74 - 75, The report currently iterates only current.measurements (for const m of current.measurements) so baseline.measurements entries that were renamed or removed in the PR are never reported; add logic to detect and surface "removed" measurements by iterating baseline.measurements (or computing the set difference between baseline.measurements and current.measurements) and emit a report entry for each baseline-only measurement (e.g., mark them as removed/missing). Update the reporting/output code that consumes m/base to handle a removed flag or missing base so removed tests appear in the final perf report.
3-19: Extract duplicated perf interfaces to a shared types file.
PerfMeasurement (lines 3-12) and PerfReport (lines 14-19) are exact copies of the interfaces defined in browser_tests/fixtures/helpers/PerformanceHelper.ts and browser_tests/helpers/perfReporter.ts respectively. This duplication creates a risk of type drift—future field additions to the browser_tests originals will silently diverge from this script unless manually replicated.
Move both interfaces to a shared location (e.g., types/perf.ts) and import from there in both scripts/perf-report.ts and the browser_tests modules.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/perf-report.ts` around lines 3 - 19, The PerfMeasurement and PerfReport interfaces are duplicated; extract these interfaces into a shared module (e.g., create and export them from types/perf.ts) and replace the local definitions in scripts/perf-report.ts with imports from that shared file; then update the browser_tests copies (browser_tests/fixtures/helpers/PerformanceHelper.ts and browser_tests/helpers/perfReporter.ts) to import the same types from types/perf.ts so all three modules use the single exported PerfMeasurement and PerfReport definitions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/perf-report.ts`:
- Around line 68-98: The baseline comparison loop in scripts/perf-report.ts is
missing the heap delta row for each measurement (you iterate
current.measurements and compare base via base =
baseline.measurements.find(...)), so add a heap-delta output just like the other
metrics: in the !base early-return block emit a row for `${m.name}: heap delta`
using `m.heapDeltaBytes`, and in the comparison branch compute `const heapDelta
= calcDelta(base.heapDeltaBytes, m.heapDeltaBytes)` and push a row using
`${base.heapDeltaBytes}` vs `${m.heapDeltaBytes}` with
`formatDeltaCell(heapDelta)` (format numbers similarly to taskDurationMs using
toFixed(0) or plain bytes as used elsewhere) so heap regressions are shown in
both new-measurement and baseline-comparison cases.
---
Duplicate comments:
In `@scripts/perf-report.ts`:
- Around line 59-63: Wrap the raw file reads/JSON.parse calls in a safe helper
(e.g., parseJsonFile or safeReadJson) and use it for the current and baseline
reads so malformed JSON won't throw; implement parseJsonFile(path: string): T |
null that reads with readFileSync and returns parsed JSON on success and null on
failure (catching and optionally logging the error), then replace the inline
JSON.parse(readFileSync(CURRENT_PATH, 'utf-8')) with
parseJsonFile<PerfReport>(CURRENT_PATH) and the baseline branch to
parseJsonFile(BASELINE_PATH) (keeping existsSync check) and ensure variables
current and baseline handle null results appropriately.
---
Nitpick comments:
In `@scripts/perf-report.ts`:
- Around line 74-75: The report currently iterates only current.measurements
(for const m of current.measurements) so baseline.measurements entries that were
renamed or removed in the PR are never reported; add logic to detect and surface
"removed" measurements by iterating baseline.measurements (or computing the set
difference between baseline.measurements and current.measurements) and emit a
report entry for each baseline-only measurement (e.g., mark them as
removed/missing). Update the reporting/output code that consumes m/base to
handle a removed flag or missing base so removed tests appear in the final perf
report.
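The set-difference step described above can be sketched in a few lines; only the `name` field is assumed here, the real measurement objects carry more fields.

```typescript
interface Named {
  name: string
}

// Measurements present in the baseline but absent from the current run:
// surface these as "removed" in the report instead of silently dropping them.
function findRemoved(baseline: Named[], current: Named[]): string[] {
  const currentNames = new Set(current.map((m) => m.name))
  return baseline.filter((m) => !currentNames.has(m.name)).map((m) => m.name)
}
```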
- Around line 3-19: The PerfMeasurement and PerfReport interfaces are
duplicated; extract these interfaces into a shared module (e.g., create and
export them from types/perf.ts) and replace the local definitions in
scripts/perf-report.ts with imports from that shared file; then update the
browser_tests copies (browser_tests/fixtures/helpers/PerformanceHelper.ts and
browser_tests/helpers/perfReporter.ts) to import the same types from
types/perf.ts so all three modules use the single exported PerfMeasurement and
PerfReport definitions.
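A sketch of what the shared `types/perf.ts` module could export. Field names are inferred from the metrics named in the PR description and may not match the real interfaces exactly.

```typescript
// types/perf.ts — single source of truth imported by perf-report.ts,
// PerformanceHelper.ts, and perfReporter.ts (assumed layout).
export interface PerfMeasurement {
  name: string
  recalcStyleCount: number
  layoutCount: number
  layoutDurationMs: number
  taskDurationMs: number
  heapDeltaBytes: number
}

export interface PerfReport {
  gitSha: string
  branch: string
  measurements: PerfMeasurement[]
}
```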
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- .github/workflows/ci-perf-report.yaml
- scripts/perf-report.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- .github/workflows/ci-perf-report.yaml
- Add gitSha/branch params to writePerfReport() for callable reuse
- Add delta() helper in stopMeasuring() to reduce repetition

Amp-Thread-ID: https://ampcode.com/threads/T-019c977c-d73c-70ec-81a9-cae4eb81465a
Actionable comments posted: 1
🧹 Nitpick comments (1)
browser_tests/fixtures/helpers/PerformanceHelper.ts (1)
35-40: Make `dispose()` idempotent and always clear internal state. Consider clearing `snapshot` and nulling the session handle before cleanup so helper state stays consistent even if cleanup throws.

Suggested refactor:

```diff
 async dispose(): Promise<void> {
-  if (this.cdp) {
-    await this.cdp.send('Performance.disable')
-    await this.cdp.detach()
-    this.cdp = null
-  }
+  const cdp = this.cdp
+  this.cdp = null
+  this.snapshot = null
+  if (!cdp) return
+  try {
+    await cdp.send('Performance.disable')
+  } finally {
+    await cdp.detach()
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@browser_tests/fixtures/helpers/PerformanceHelper.ts` around lines 35 - 40, Make dispose() idempotent by clearing internal state (e.g., this.snapshot and the session handle this.cdp) before/independently of the cleanup RPCs and ensure failing RPCs don't leave the helper in a dirty state: set this.snapshot = null and capture the current this.cdp into a local variable then set this.cdp = null early, and wrap the calls to localCdp.send('Performance.disable') and localCdp.detach() in a try/catch so exceptions don't prevent state reset for subsequent calls to dispose or other methods.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@browser_tests/fixtures/helpers/PerformanceHelper.ts`:
- Around line 62-64: startMeasuring currently overwrites an existing baseline
without warning; add a guard at the top of startMeasuring() that checks
this.snapshot and throws (or rejects) if a measurement is already in progress to
prevent nested measurements from silently replacing the baseline, e.g. check
this.snapshot !== undefined/null and raise an error advising to call
stopMeasuring() first; ensure stopMeasuring() clears this.snapshot so subsequent
measurements can start cleanly and keep the existing getSnapshot() usage
unchanged.
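A minimal sketch of the guard described above. The real `PerformanceHelper` snapshots CDP metrics; here the snapshot is just a placeholder object so the state machine is visible on its own.

```typescript
class PerformanceHelperSketch {
  private snapshot: Record<string, number> | null = null

  startMeasuring(): void {
    // Refuse nested measurements rather than silently replacing the baseline.
    if (this.snapshot !== null) {
      throw new Error('Measurement already in progress; call stopMeasuring() first')
    }
    this.snapshot = { startedAt: Date.now() } // stand-in for Performance.getMetrics
  }

  stopMeasuring(): Record<string, number> {
    if (this.snapshot === null) {
      throw new Error('No measurement in progress')
    }
    const baseline = this.snapshot
    this.snapshot = null // clear so the next startMeasuring() succeeds
    return baseline
  }
}
```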
---
Nitpick comments:
In `@browser_tests/fixtures/helpers/PerformanceHelper.ts`:
- Around line 35-40: Make dispose() idempotent by clearing internal state (e.g.,
this.snapshot and the session handle this.cdp) before/independently of the
cleanup RPCs and ensure failing RPCs don't leave the helper in a dirty state:
set this.snapshot = null and capture the current this.cdp into a local variable
then set this.cdp = null early, and wrap the calls to
localCdp.send('Performance.disable') and localCdp.detach() in a try/catch so
exceptions don't prevent state reset for subsequent calls to dispose or other
methods.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- browser_tests/fixtures/helpers/PerformanceHelper.ts
- browser_tests/helpers/perfReporter.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- browser_tests/helpers/perfReporter.ts
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@browser_tests/fixtures/helpers/PerformanceHelper.ts`:
- Around line 35-40: The dispose() method must always clear in-progress
measurement state to prevent startMeasuring() from failing due to a left-over
this.snapshot; update dispose() (in PerformanceHelper) to use a try/finally
around CDP calls so cleanup runs even if Performance.disable or detach throws,
and in the finally block set this.snapshot = null and this.cdp = null (and any
other measurement flags) to ensure stopMeasuring()/startMeasuring() see a clean
state.
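The three dispose-related comments above converge on one pattern: reset state first, then run the cleanup RPCs under try/finally. A sketch of that shape, with `CdpLike` standing in for Playwright's `CDPSession`:

```typescript
interface CdpLike {
  send(method: string): Promise<void>
  detach(): Promise<void>
}

class HelperSketch {
  snapshot: object | null = {}
  private cdp: CdpLike | null

  constructor(cdp: CdpLike) {
    this.cdp = cdp
  }

  async dispose(): Promise<void> {
    // Capture and clear state first so a second dispose() (or a later init())
    // sees a clean helper even if the RPCs below throw.
    const cdp = this.cdp
    this.cdp = null
    this.snapshot = null
    if (!cdp) return
    try {
      await cdp.send('Performance.disable')
    } finally {
      await cdp.detach()
    }
  }
}
```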
Actionable comments posted: 1
🧹 Nitpick comments (1)
browser_tests/fixtures/helpers/PerformanceHelper.ts (1)
30-33: Guard `init()` against double-initialization. Line 31 overwrites an existing `CDPSession` if `init()` is called twice, which can leave the prior session undisposed.

Suggested patch:

```diff
 async init(): Promise<void> {
+  if (this.cdp) {
+    throw new Error('PerformanceHelper already initialized')
+  }
   this.cdp = await this.page.context().newCDPSession(this.page)
   await this.cdp.send('Performance.enable')
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@browser_tests/fixtures/helpers/PerformanceHelper.ts` around lines 30 - 33, init() currently overwrites this.cdp on repeated calls; add a guard at the top of the init method to no-op if this.cdp is already set (or alternatively explicitly dispose/detach the existing session before creating a new one) so you don't leak the previous CDPSession; locate init(), the this.cdp assignment (newCDPSession(this.page)) and the this.cdp.send('Performance.enable') call and either return early when this.cdp is truthy or call the appropriate dispose/detach on the existing session before reassigning.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@browser_tests/fixtures/helpers/PerformanceHelper.ts`:
- Around line 38-43: In PerformanceHelper's cleanup block (the try/finally
around this.cdp.send('Performance.disable')), avoid clearing stale this.cdp only
after awaiting detach; instead capture the current CDP instance to a local
(e.g., const cdp = this.cdp), set this.cdp = null immediately, then if (cdp)
await cdp.detach() so the helper's state is reset even if detach() throws;
update the method that contains this.cdp.send / this.cdp.detach to follow this
pattern (use the local variable and null-assign this.cdp before awaiting
detach).
---
Nitpick comments:
In `@browser_tests/fixtures/helpers/PerformanceHelper.ts`:
- Around line 30-33: init() currently overwrites this.cdp on repeated calls; add
a guard at the top of the init method to no-op if this.cdp is already set (or
alternatively explicitly dispose/detach the existing session before creating a
new one) so you don't leak the previous CDPSession; locate init(), the this.cdp
assignment (newCDPSession(this.page)) and the
this.cdp.send('Performance.enable') call and either return early when this.cdp
is truthy or call the appropriate dispose/detach on the existing session before
reassigning.
## Summary Batch `getBoundingClientRect()` calls in `updateClipPath` via `requestAnimationFrame` to avoid forced synchronous layout. ## Changes - **What**: Wrap the layout-reading portion of `updateClipPath` in `requestAnimationFrame()` with cancellation. Multiple rapid calls within the same frame are coalesced into a single layout read. Eliminates ~1,053 forced synchronous layouts per profiling session. ## Review Focus - `getBoundingClientRect()` forces synchronous layout. When interleaved with style mutations (from PrimeVue `useStyle`, cursor writes, Vue VDOM patching), this creates layout thrashing — especially in Firefox where Stylo aggressively invalidates the entire style cache. - The RAF wrapper coalesces all calls within a frame into one, reading layout only once per frame. The `cancelAnimationFrame` ensures only the latest parameters are used. - `willChange: 'clip-path'` is included to hint the browser to optimize clip-path animations. ## Stack 4 of 4 in Firefox perf fix stack. Depends on #9170. <!-- Fixes #ISSUE_NUMBER --> ┆Issue is synchronized with this [Notion page](https://www.notion.so/PR-9173-fix-batch-updateClipPath-via-requestAnimationFrame-3116d73d3650810392f7fba7ea5ceb6f) by [Unito](https://www.unito.io)
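The coalescing pattern described above can be sketched generically. The real code uses `requestAnimationFrame`/`cancelAnimationFrame` directly; here the scheduler is injected so the idea is testable outside a browser — an assumption of this sketch, not the PR's actual shape.

```typescript
type Schedule = (cb: () => void) => number
type Cancel = (id: number) => void

// Wraps a layout-reading callback so that multiple calls within one "frame"
// collapse into a single read using the latest arguments.
function makeCoalesced<T>(
  read: (arg: T) => void,
  schedule: Schedule,
  cancel: Cancel
): (arg: T) => void {
  let pending: number | null = null
  return (arg: T) => {
    // Drop any queued read so only the latest parameters are used.
    if (pending !== null) cancel(pending)
    pending = schedule(() => {
      pending = null
      read(arg) // one layout read per frame (e.g. getBoundingClientRect)
    })
  }
}
```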
## Summary
Pre-rasterize the SubgraphNode SVG icon to a bitmap canvas to eliminate
Firefox's per-frame SVG style processing.
## Changes
- **What**: Add `getWorkflowBitmap()` that lazily rasterizes the
`data:image/svg+xml` workflow icon to an `HTMLCanvasElement` (16×16) on
first use. `SubgraphNode.drawTitleBox()` draws the cached bitmap instead
of the raw SVG.
## Review Focus
- Firefox re-processes SVG internal stylesheets (`stroke`,
`stroke-linecap`, `stroke-width`) every time `ctx.drawImage(svgImage)`
is called. Chrome caches the rasterization. This happens on every frame
for every visible SubgraphNode.
- Reporter confirmed strong subgraph correlation: "it may be happening
in the default workflow with subgraph" / "didn't seem to happen just
using manually wired up diffusion loader, clip, sampler, etc."
- Falls back to the raw SVG Image if not yet loaded or if
`getContext('2d')` returns null.
## Stack
3 of 4 in Firefox perf fix stack. Depends on #9170.
<!-- Fixes #ISSUE_NUMBER -->
┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-9172-fix-pre-rasterize-SubgraphNode-SVG-icon-to-bitmap-canvas-3116d73d365081babf17cf0848d37269)
by [Unito](https://www.unito.io)
---------
Co-authored-by: GitHub Action <action@github.com>
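The pre-rasterization above is at heart a lazy, compute-once cache: the expensive conversion (SVG to bitmap) runs on first use and every later frame reuses the result. A sketch of that pattern with the browser canvas APIs stripped out so it runs anywhere:

```typescript
// Returns a getter that computes the value once and caches it thereafter.
function lazyOnce<T>(create: () => T): () => T {
  let cached: T | null = null
  return () => {
    if (cached === null) cached = create() // first call pays the cost
    return cached
  }
}

// In the real getWorkflowBitmap(), `create` would draw the SVG Image onto a
// 16×16 HTMLCanvasElement and return it, falling back to the raw Image if
// getContext('2d') returns null or the SVG has not finished loading.
```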
## Summary Cache `canvas.style.cursor` to avoid redundant DOM writes that dirty Firefox's style tree. ## Changes - **What**: Add `_lastCursor` field to `LGraphCanvas._updateCursorStyle()` — only writes `canvas.style.cursor` when the value changes. Eliminates ~347 redundant style mutations per profiling session. ## Review Focus - The fix is 2 lines (cache field + comparison). The unit test validates the caching pattern without requiring full LGraphCanvas instantiation. - This is one of several contributors to Firefox's cascading style recalculation freeze. Each `canvas.style.cursor` write dirties the style tree, which is flushed during the next paint in the canvas render loop. ## Stack 2 of 4 in Firefox perf fix stack. Depends on #9170. <!-- Fixes #ISSUE_NUMBER --> ┆Issue is synchronized with this [Notion page](https://www.notion.so/PR-9171-fix-cache-canvas-cursor-style-to-avoid-redundant-DOM-writes-3116d73d36508139827fe1d644fa1bd0) by [Unito](https://www.unito.io)
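The two-line caching pattern from this PR, sketched against a plain object so it runs outside the DOM; in `LGraphCanvas` the write target is `canvas.style.cursor`.

```typescript
class CursorCacheSketch {
  private _lastCursor: string | null = null

  // `writes` counts actual style mutations, for illustration only.
  constructor(private style: { cursor: string }, public writes = 0) {}

  updateCursorStyle(cursor: string): void {
    if (cursor === this._lastCursor) return // skip the redundant DOM write
    this._lastCursor = cursor
    this.style.cursor = cursor
    this.writes++
  }
}
```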
Summary
Add a permanent, non-failing performance regression detection system using Chrome DevTools Protocol metrics, with automatic PR commenting.
Changes
- `PerformanceHelper` fixture class using CDP `Performance.getMetrics` to collect `RecalcStyleCount`, `LayoutCount`, `LayoutDuration`, `TaskDuration`, and `JSHeapUsedSize`. Adds a `@perf` Playwright project (Chromium-only, single-threaded, 60s timeout), 4 baseline perf tests, a CI workflow with sticky PR comment reporting, and a `perf-report.js` script for generating markdown comparison tables.

Review Focus
- `PerformanceHelper` uses `page.context().newCDPSession(page)` — CDP is Chromium-only, so perf metrics are not collected on Firefox. This is intentional since CDP gives us browser-level style recalc/layout counts that `performance.mark`/`measure` cannot capture.
- The workflow uses `continue-on-error: true` so perf tests never block merging.
- It uses `dawidd6/action-download-artifact` to download metrics from the target branch, following the same pattern as `pr-size-report.yaml`.

Stack
This is the foundation PR for the Firefox performance fix stack:
- `perf/fix-cursor-cache` — cursor style caching (depends on this)
- `perf/fix-subgraph-svg` — SVG pre-rasterization (depends on this)
- `perf/fix-clippath-raf` — RAF batching for clip-path (depends on this)

PRs 2-4 are independent of each other.
┆Issue is synchronized with this Notion page by Unito