Running benchmarks during release #479

We recently ran into a significant performance drop caused by changes that landed between releases. Some of our current microbenchmarks take a very long time to run, but we could start evaluating which of them to run before releasing a new version, so that significant performance drops are detected early on.

Comments
Potentially related: nodejs/benchmarking#293. Also, on what day did the changes that caused the regression land? I'm curious whether we could have caught it on benchmarking.nodejs.org; throughput does look ~5% lower on 12.x than 10.x in the graphs.
The benchmarks we saw regressions on at Google Cloud were run under heavy CPU load. I have a way to reproduce it, but I am unsure whether our benchmark suite checks this specific case.
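The reproduction itself is not shared in the thread. Purely as an illustration of what running a microbenchmark under heavy CPU load could look like locally, here is a sketch; stress-ng, the chosen benchmark file, and the timeout are assumptions, not the commenter's actual setup:

```sh
# Saturate all cores with background load for the duration of the run.
# stress-ng is an assumed external tool; any CPU-burning loop would do.
stress-ng --cpu "$(nproc)" --timeout 120s &
LOAD_PID=$!

# Run one microbenchmark from a node core checkout while the load is active.
# The file path is illustrative; pick whichever benchmark showed the regression.
node benchmark/http/simple.js

kill "$LOAD_PID" 2>/dev/null
```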
As discussed during the last meeting, the next step is to make sure we have a CI job that can run in a reasonable amount of time so we can use it for releases.
The CI job that we can use for our microbenchmarks is https://ci.nodejs.org/view/Node.js%20benchmark/job/benchmark-node-micro-benchmarks. The only problem with the job is that we have to explicitly name a module we want to check; we cannot just run all of them. I am not sure, but I believe our microbenchmark setup does not currently provide a "run all" mode, so we would have to fix that first (I will check on that again). We also likely have to trim the run time of some of our benchmarks, as they sometimes run for a very long time.
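For context, comparisons like the one this job performs can also be run locally with node core's benchmark tooling. A minimal sketch, assuming a node core checkout and two prebuilt binaries to compare; the binary paths and the category name are placeholders:

```sh
# Compare two builds on a single benchmark category (here "http").
node benchmark/compare.js --old ./node-old/out/Release/node \
  --new ./node-new/out/Release/node --runs 10 http > compare-http.csv

# Summarize the CSV with the R script shipped in the repo (requires Rscript).
cat compare-http.csv | Rscript benchmark/compare.R
```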
I removed it from the agenda, as we already discussed it properly; we just have to improve the way we do this now.
@BridgeAR from what I understand, a full run would take days, so we would very much need to strip it down to a subset.
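No subset has been agreed on in this thread; as a sketch of what scripting one could look like, with placeholder categories and an arbitrary reduced iteration count via --set:

```sh
# Hypothetical subset of benchmark categories for a release smoke run;
# the category list is a placeholder, not a vetted choice.
for category in buffers http streams url; do
  # n=1000 is an arbitrary example value to shorten each run.
  node benchmark/run.js --set n=1000 "$category" >> release-bench.txt
done
```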