
feat: benchmark #1029

Merged
merged 66 commits
Sep 4, 2022
Conversation

@poyoho (Member) commented Mar 25, 2022

Fixes #917.

Native benchmark support:

  • Create a new bench command to start Vitest and distinguish its current running mode (unit test or benchmark).
  • Reuse Vitest's suite collection for benchmarks; add a bench API to distinguish unit tests from benchmarks and to collect benchmark functions.
  • When running, a unit test takes the normal unit-test branch and executes its function once, while a benchmark uses [email protected] as the engine to execute the function asynchronously multiple times.
  • Provide verbose and json reporters to show benchmark results.
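To illustrate the points above, a minimal sketch of what a benchmark file might look like under this PR, based on the xxxx.bench.js naming and the bench API mentioned later in the thread (the sort body mirrors the snippet quoted below). This is an illustrative fragment only; it assumes Vitest's `bench`/`describe` exports and the `vitest bench` command and is not runnable on its own:

```
// sort.bench.js — illustrative sketch; assumes the `bench`/`describe`
// exports and the `vitest bench` command introduced by this PR.
import { bench, describe } from 'vitest'

describe('sort', () => {
  // Collected as a benchmark (run many times by the engine),
  // not as a unit test (run once).
  bench('Array.prototype.sort', () => {
    const x = [1, 5, 4, 2, 3]
    x.sort((a, b) => a - b)
  })
})
```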

@netlify bot commented Mar 25, 2022

Deploy Preview for vitest-dev ready!

Built without sensitive environment variables

🔨 Latest commit: facaf89
🔍 Latest deploy log: https://app.netlify.com/sites/vitest-dev/deploys/629f5bc82c6ad70007dd3853
😎 Deploy Preview: https://deploy-preview-1029--vitest-dev.netlify.app

Review thread on packages/vitest/package.json (outdated, resolved)
@antfu (Member) commented Mar 26, 2022

Let's discuss the design in #917 first.

About benchmark.js: if we really end up using it, I could try to contact the author and see if our team could be granted access and help maintain it.

antfu added the "on hold" label Mar 26, 2022
antfu added this to the Next milestone May 3, 2022
poyoho marked this pull request as draft May 6, 2022
@poyoho (Member, Author) commented May 6, 2022

It seems we need to put benchmarks in separate files (xxxx.bench.js) and add a new command to run them 👏

@sheremet-va (Member) commented
I think we can now use tinybench by @Uzlopak?

@poyoho (Member, Author) commented May 9, 2022

Yes, it seems to have the same API as benchmark.js.

@Uzlopak commented May 17, 2022

Published version 1.0.2 of tinybench.

poyoho marked this pull request as ready for review May 31, 2022
@antfu (Member) commented Aug 30, 2022

We already support the reporter interface to access the full test result.

@poyoho (Member, Author) commented Aug 31, 2022

In fact, we have the full results, but this output style can't fit that much data 😁

@poyoho (Member, Author) commented Aug 31, 2022

(screenshot: benchmark output)

> @poyoho the ops/sec are identical, there might be something wrong

The values are rounded to two decimal places; the real results are 0.006637481844709122, 0.005769233822295151...
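The rounding can be reproduced directly in plain Node (a standalone sketch, not Vitest code): two distinct raw values become indistinguishable once formatted to two decimal places.

```javascript
// Two distinct raw results from the benchmark run above.
const a = 0.006637481844709122
const b = 0.005769233822295151

// Formatting to two decimal places collapses both to the same string,
// which is why the reported values looked identical.
console.log(a.toFixed(2)) // "0.01"
console.log(b.toFixed(2)) // "0.01"
console.log(a === b)      // false — the underlying values differ
```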

@poyoho (Member, Author) commented Aug 31, 2022

(screenshot: updated benchmark output)

@poyoho (Member, Author) commented Aug 31, 2022

I think we can natively support Deno-style benchmark output.

@antfu (Member) commented Aug 31, 2022

For ops/sec <1, should we display the duration of one op instead?

@poyoho (Member, Author) commented Aug 31, 2022

Maybe we can change ops/sec to ops/ms.
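To make the trade-off under discussion concrete, here is a hypothetical formatting helper (not code from this PR; the name and thresholds are illustrative): for rates of at least 1 op/sec it reports ops/sec, and for slower operations it inverts the rate into the duration of a single operation, as suggested above.

```javascript
// Hypothetical helper: report fast operations as a rate, and slow
// operations (under 1 op/sec) as the duration of one operation.
function formatRate(opsPerSec) {
  if (opsPerSec >= 1)
    return `${opsPerSec.toFixed(2)} ops/sec`
  // One op takes 1 / opsPerSec seconds; report it in milliseconds.
  const msPerOp = 1000 / opsPerSec
  return `${msPerOp.toFixed(0)} ms/op`
}

console.log(formatRate(1234.5)) // "1234.50 ops/sec"
console.log(formatRate(0.5))    // "2000 ms/op" (one op every 2 seconds)
```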

@Uzlopak commented Aug 31, 2022

I don't think ops/ms is that relevant. I personally run benchmarks multiple times because the result also depends on which core the benchmark runs on and whether a heavier process is currently running on it. So benchmarks really only indicate whether you have some heavy performance degradation. I think it's totally fine to keep ops/s. You also get the percentage by which they differ, so you can see which one is faster.

Is it possible to pass options to the benchmark like

describe('sort', { minSample: 100 }, () => {
  bench('normal', () => {
    const x = [1, 5, 4, 2, 3]
    x.sort((a, b) => {
      return a - b
    })
  })
})

@poyoho (Member, Author) commented Aug 31, 2022

(screenshot: JSON report output)

Added a JSON-format report to show the full results.

@sheremet-va (Member) commented
What do we need to release this?

@antfu (Member) commented Sep 4, 2022

I would like to improve the default output to a table, but that's not blocking since we can improve it later. Since this is experimental, I think we can ship it in the next minor.

Review thread on packages/vitest/src/node/core.ts (outdated, resolved)
Review thread on packages/vitest/src/runtime/run.ts (outdated, resolved)
Successfully merging this pull request may close: Built-in benchmarking support
8 participants