
Add frontend rendering metrics in the performance tests. #33645

Closed · 4 of 10 tasks
youknowriad opened this issue Jul 23, 2021 · 6 comments · Fixed by #47037
Labels
[Type] Performance Related to performance efforts

Comments

@youknowriad
Contributor

youknowriad commented Jul 23, 2021

Our performance tests currently track editor-centric metrics such as editor loading time, typing time, time to select a block, time to open the inserter, and time to hover an item in the inserter. While we have iterated on and improved the frontend rendering of Gutenberg a number of times, we don't have a metric to track it. One would be a great addition to our performance numbers.

Tasks

V1: the MVP is having visibility into what happens in both the server and the browser.

While tracking the time spent in the server is useful, it doesn't directly correlate with user-perceived performance, so we need to track client metrics as well. For example, WordPress processes the post being rendered so it can identify the blocks in use, and with that information it enqueues only the CSS of those blocks instead of the CSS of the whole block library. This certainly takes more time in the server, though we expect it to improve the performance perceived by the user. If we only tracked server metrics, they'd report this behavior as a regression, and we'd lack the tools to understand how it affected actual user-perceived performance.

  • #47442 Log front-end metrics in codehealth.
  • #47037 Add support for Time To First Byte (TTFB). Tells us how long the server takes to send the first byte of the response.
  • #47938 Add support for Largest Contentful Paint (LCP). Tells us how long it takes to render the most meaningful part of the page in the client.
  • #48288 Add support for a derived metric, LCP-TTFB, so we have a better sense of how certain changes impact rendering in the browser. (A sketch of how these metrics can be collected follows this list.)
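For illustration, here's a minimal sketch of how TTFB and LCP can be read from a page with Puppeteer and the standard browser performance APIs. This is not the implementation in the PRs above, and the URL is a placeholder:

const puppeteer = require('puppeteer');

async function measureFrontend(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle0' });

  const metrics = await page.evaluate(() => {
    return new Promise((resolve) => {
      // TTFB comes from the Navigation Timing API.
      const [navigation] = performance.getEntriesByType('navigation');
      const ttfb = navigation.responseStart;

      // LCP is reported through a PerformanceObserver; the last buffered
      // entry is the current LCP candidate.
      new PerformanceObserver((list) => {
        const entries = list.getEntries();
        const lcp = entries[entries.length - 1].startTime;
        resolve({ ttfb, lcp, lcpMinusTtfb: lcp - ttfb });
      }).observe({ type: 'largest-contentful-paint', buffered: true });
    });
  });

  await browser.close();
  return metrics;
}

// Placeholder URL; point it at the site under test.
measureFrontend('http://localhost:8889/').then(console.log);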

Later:

  • Content: add images and other meaningful elements to the homepage template, so the LCP metric can catch regressions in those areas. For awareness, core is in the process of tracking these front-end metrics as well (see https://core.trac.wordpress.org/ticket/57687). They plan to use the theme test data from https://github.com/WPTT/theme-test-data
  • Content: in addition to tracking the homepage, track a single large post template as well.
  • Metrics: use the Server-Timing API to measure WordPress-specific metrics. See example. (A client-side sketch follows this list.)
  • Metrics: add support for First Contentful Paint (FCP). Tells us how long it takes to render the first piece of content in the client.
  • Runtime: consider using the latest WordPress (nightly?) so we report against the latest code. The runtime is currently the oldest WordPress version that Gutenberg supports, so it lacks many improvements.
  • Runtime: consider improving how some Gutenberg code hooks into core, so it's faster. For example, add short-circuit hooks to certain global styles code (e.g. wp_get_global_settings) so it's "hookable" by Gutenberg. With this sort of change, the logic would run only once.
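On the Server-Timing idea, a small client-side sketch: the browser exposes whatever the server emits in its Server-Timing response header on the navigation timing entry, so a test could read it from the page context like this (the wp-total metric name is hypothetical; it depends on what WordPress would actually emit):

// Run in the browser context, e.g. inside page.evaluate().
const [navigation] = performance.getEntriesByType('navigation');
for (const { name, duration } of navigation.serverTiming) {
  // Prints e.g. "wp-total: 123ms" if the server sent `Server-Timing: wp-total;dur=123`.
  console.log(`${name}: ${duration}ms`);
}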
youknowriad added the [Type] Performance label Jul 23, 2021
@gziolo
Member

gziolo commented Jul 23, 2021

We can experiment with integrating Lighthouse with Puppeteer for the frontend. Here is a recipe @addyosmani shared on his blog:

https://addyosmani.com/blog/puppeteer-recipes/#lighthouse-metrics

Lighthouse exposes a number of user-centric performance metrics. It's possible to pluck these metric values out of the JSON report, as demonstrated below.

const lighthouse = require('lighthouse');
const puppeteer = require('puppeteer');

const chromeLauncher = require('chrome-launcher');
const reportGenerator = require('lighthouse/lighthouse-core/report/report-generator');
const request = require('request'); // used only to query Chrome's /json/version endpoint
const util = require('util');

const options = {
  logLevel: 'info',
  disableDeviceEmulation: true,
  chromeFlags: ['--disable-mobile-emulation']
};

async function lighthouseFromPuppeteer(url, options, config = null) {
  // Launch chrome using chrome-launcher
  const chrome = await chromeLauncher.launch(options);
  options.port = chrome.port;

  // Connect chrome-launcher to puppeteer
  const resp = await util.promisify(request)(`http://localhost:${options.port}/json/version`);
  const { webSocketDebuggerUrl } = JSON.parse(resp.body);
  const browser = await puppeteer.connect({ browserWSEndpoint: webSocketDebuggerUrl });

  // Run Lighthouse
  const { lhr } = await lighthouse(url, options, config);
  await browser.disconnect();
  await chrome.kill();

  const json = reportGenerator.generateReport(lhr, 'json');

  const audits = JSON.parse(json).audits; // Lighthouse audits
  const first_contentful_paint = audits['first-contentful-paint'].displayValue;
  const total_blocking_time = audits['total-blocking-time'].displayValue;
  const time_to_interactive = audits['interactive'].displayValue;

  console.log(`\n
     Lighthouse metrics: 
     🎨 First Contentful Paint: ${first_contentful_paint}, 
     ⌛️ Total Blocking Time: ${total_blocking_time},
     👆 Time To Interactive: ${time_to_interactive}`);
}

lighthouseFromPuppeteer("https://bbc.com", options);
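A side note on the recipe (an observation about the report format, not part of the original post): the audits in the JSON report also expose raw numericValue fields in milliseconds, which are easier to track over time than parsing the localized displayValue strings, e.g.:

const lcp_ms = JSON.parse(json).audits['largest-contentful-paint'].numericValue; // milliseconds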

@oandregal
Member

Started working on this in #47037.

oandregal reopened this Jan 27, 2023
oandregal changed the title from "Add a frontend rendering metric to the performance tests." to "Add frontend rendering metrics in the performance tests." Jan 27, 2023
@oandregal
Member

Updated the issue description with some follow-ups to the work that shipped recently.

@oandregal
Member

#47938 aims to add support for Largest Contentful Paint.

@oandregal
Member

Codehealth is now also tracking LCP. I consider the V1 of this task done: we now have visibility into what happens in both the server and the client, so we have a balanced perspective on user-perceived performance. It's not fully "done" in that we still have follow-up tasks to improve things; I've logged them in the issue description.

@youknowriad
Contributor Author

I think we've made decent progress here. We can always do better and test more things, but it seems this issue is no longer helpful.
