Re-run test in v23 is slower than v22 on Windows #6783
Comments
Would anyone like to help find which commit might be responsible for the slowdown?
Yes, I experienced this as well. Also, console.log messages are not showing up consistently compared to v22. I suspect they are being overwritten by Jest's messages.
I can confirm the problem on this test suite, though not as pronounced. The problem seems to be down to startup; for me, on a large test suite I didn't notice a significant change. Jest 23 - 1.6s (real 3.7s), 1.25s (real 3.074s), 1.24s (real 3.06s)
I wasn't running in watch mode. With --watchAll, after it has already warmed up: Jest 23 - 0.5s pre-warmed first run, 0.8s after changing a file.
I did a bisect and it looks like the problem was introduced with a deps upgrade in d1ce3cd. The upgrade to …
Are you sure? Besides micromatch, the only deps that changed were dev ones.
Actually, I think there might be something else involved too. The deps upgrade definitely increased the watch-mode runs on file change from ~0.46s to ~1-1.1s. However, reverting micromatch (e930c70) didn't resolve the problem. In fact it's actually slower than when micromatch was upgraded. I think some other change in between is contributing to the problem. I'll do another bisect to see if I can find it.
Ah, it looks like a significant contributor is 664681a. I don't think it's the only issue but it certainly hurts this particular repo, as it only has one test. There's overhead spinning up a worker. Reverting that commit on master brings it down from 2.5-3.3s to 0.7-0.8s. That's still more than 0.46s, but it's better. Not sure what the impact would be on a larger battery of tests.
I tested on a larger (but by no means large) codebase and found that a6aa304 decreased performance measurably on my machine as well.
For reference: 22.4.2: … master: …
I haven't been able to determine what else is slowing down test runs between 22.4.2 and master. Performance goes up and down between the two revisions so it's hard to really find a cause. I don't think it's any single commit, but rather the cumulative effect of multiple changes.
Can confirm 23 is much slower than 22 on re-running tests on Windows using TypeScript. 23.4.1 adds ~3 seconds to a 3ms noop test; 22.4.4 re-runs tests as fast as expected.
I confirm.
It applies to macOS too: jest 23.4.2 is slower than jest 22.1.4 by up to 80%.
Is there any update on this? It seems like the slowness is on Linux as well. My team has been experiencing the same issues, to the point where a dev is looking into porting back to Mocha. Would love not to have to switch from Jest ❤️ if this is something that may be addressed in a later release.
This may help: #6925.
I tested 23.6.0 and run times are identical to 23.5.0.
We noticed a significant slow down (~12s to ~19s) in our test suite when upgrading from jest 22.4.4 to 23.5.0. The latest version as of this post, 23.6.0, did not improve the speed. We're using node 8.2, yarn 1.9.4, and macOS Sierra 10.12.6. There are 371 test suites and 1790 tests. While doing a git bisect to identify when the slowdown happened, I noted how long jest took to finish. The following table lists the median duration of the runs for each jest version.
Edit: the fast version of Jest was 22.4.4.
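For anyone repeating this kind of measurement, here is a minimal sketch of a timing helper (a hypothetical script, not the one used in the comment above; it assumes Jest is installed locally and runnable via yarn jest):

```js
// time-jest.js — hypothetical helper for collecting median run times while bisecting.
const { execSync } = require('child_process');

const RUNS = 5;
const durations = [];

for (let i = 0; i < RUNS; i++) {
  const start = Date.now();
  execSync('yarn jest', { stdio: 'ignore' }); // run the full suite, discard output
  durations.push((Date.now() - start) / 1000);
}

durations.sort((a, b) => a - b);
console.log(`median of ${RUNS} runs: ${durations[Math.floor(RUNS / 2)].toFixed(2)}s`);
```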
It would be awesome if someone would take the time to bisect Jest itself, building it and testing against a repo showing the slowdown. I don't have Windows readily available (it would have to be through VirtualBox, which I think might give misleading results).
@SimenB The slowdown happens on OS X too.
This issue is for Windows; if you can put together a reproduction that reproduces on more OSes, that would be great as a separate issue! 🙂
I did that earlier with a private repo and attributed most of the performance impact to a6aa304. I have not bisected with a public repo other than the one in the issue description, which was also impacted by 664681a, as the overhead of spinning up a worker is significantly longer than the time to run the tests.
Ok, thanks. Too bad #6925 didn't help, then :(
Similar experience when trying to upgrade jest on OS X.
This may not be at all surprising, but running Jest in WSL is significantly faster than in "bare metal" Windows, despite the atrocious I/O performance of WSL. I tested with Jest 22 (which all of my team's projects are using if they use Jest at all) and found it to be roughly 3-4x faster on average. I haven't tested Jest 23 yet but I imagine it will be at least that much faster as well.
This looks like it might be cache related, as running @Isaddo's test23 repo with: …
Only if you have it installed. We should probably specify that in the docs... And you can see in your stack trace that it uses the node crawler.
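For anyone comparing the Watchman crawler against the node crawler mentioned here, Jest's config exposes a watchman boolean; a minimal sketch (the surrounding options are only illustrative):

```js
// jest.config.js — sketch for A/B-testing the file crawler.
// With watchman: false, Jest uses its node crawler even if Watchman is installed.
module.exports = {
  testEnvironment: 'node',
  watchman: false,
};
```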
Note: I have the same problem (Windows 10).
The test: …
With this setup: … I get: …
With this setup: … I get: …
So it is mainly a cache problem :) but probably something else too. We need to find the commit between 22.4.4 and the 23.0.0 release that causes the problem.
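For anyone reproducing the cache comparison above, a minimal sketch of how cold-cache vs. warm-cache timings could be collected (a hypothetical script; jest --clearCache and --no-cache are the relevant CLI options, and yarn jest assumes a local install):

```js
// compare-cache.js — hypothetical sketch: cold-cache vs. warm-cache timings.
const { execSync } = require('child_process');

function timeRun(label, cmd) {
  const start = Date.now();
  execSync(cmd, { stdio: 'ignore' });
  console.log(`${label}: ${((Date.now() - start) / 1000).toFixed(2)}s`);
}

execSync('yarn jest --clearCache', { stdio: 'ignore' }); // wipe the on-disk cache
timeRun('cold cache', 'yarn jest');
timeRun('warm cache', 'yarn jest');
```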
The slowdown with a single test file is primarily caused by 664681a. I assume …
@gsteacy
Ah yes, sorry, that was in 23.4.0. a6aa304 was in 23.0.0 and definitely hurts performance as well, though I'm not sure why discarding the cache would help.
@thymikee TEST 1 (what I did to find out) TEST 2 (I don't replace the whole folder but just try to roll back) *If that's not clear enough you can use …
@gsteacy 👍
Edit: solved in #7110 (comment).
I am on vacation. I started working (slowly) on the original issue of this thread: #6783 (comment)
I did the test with the repos from the person who opened this issue. When I tested there was no problem on my machine, because my first run was … Explanation: running in band in the case of this issue makes the tests run as fast as wanted. In Jest v22 the condition to run in band was: … and in Jest v23 the condition to run in band became: … The condition was changed to fulfill the need of #6599. Note: …

Possible solutions
I need your input about which direction we want to take for a fix. I see multiple options:
Solution 1: … (that's the best option in my opinion)
Solution 2: … (not a robust solution in my opinion, but it may work to close this issue)
Solution 3: … (not ideal, since the dev needs to set it manually as opposed to now, where it's automatic)
@thymikee waiting for feedback :) If you see other solutions, let me know.
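To make the contrast concrete, here is a rough sketch of the shape of the two conditions being described; the function names, thresholds, and exact clauses are illustrative, not Jest's actual source (the real logic lives in Jest's scheduler code in the respective releases):

```js
// Illustrative only — not Jest's actual source; names and thresholds are made up.

// v22-style condition (sketch): decided mostly from the number of tests,
// so a single-test run stayed in the main process.
function shouldRunInBandV22(tests) {
  return tests.length <= 1;
}

// v23-style condition (sketch): also consults recorded timings, so whether a
// worker process is spawned depends on the timing/cache state rather than just
// the test count — and spawning a worker dominates the run time of a tiny suite.
function shouldRunInBandV23(tests, timings) {
  const SLOW_TEST_TIME = 1000; // ms, illustrative threshold
  return (
    tests.length <= 20 &&
    timings.length > 0 &&
    timings.every(timing => timing < SLOW_TEST_TIME)
  );
}

// e.g. with one test and no recorded timings:
// shouldRunInBandV22([testFile])     -> true  (stays in-process)
// shouldRunInBandV23([testFile], []) -> false (spawns a worker)
```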
The problem with …
@thymikee I need to visualize what you have in mind :). Can you provide more details? (maybe an example)
Ah yeah, my answer was pretty generic and not really relevant for this use case, but still valid :D. Since the problem is with Node being slow at spawning workers, what do you think about the experimental threads added in #7408?
o_O I see no change in the resulting timings on re-run.
So it's not working. I followed the worker threads PR; they said it falls back to worker threads by default if they're available, and there is no config option to enable it. So how could experimental threads help with this issue? Do you mean that if the total worker time goes under 1s it will fall back to …
You need to run node with --experimental-worker:
$ node --version
v10.14.1
$ node -p "require('worker_threads')"
internal/modules/cjs/loader.js:582
throw err;
^
Error: Cannot find module 'worker_threads'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:580:15)
at Function.Module._load (internal/modules/cjs/loader.js:506:25)
at Module.require (internal/modules/cjs/loader.js:636:17)
at require (internal/modules/cjs/helpers.js:20:18)
at [eval]:1:1
at Script.runInThisContext (vm.js:96:20)
at Object.runInThisContext (vm.js:303:38)
at Object.<anonymous> ([eval]-wrapper:6:22)
at Module._compile (internal/modules/cjs/loader.js:688:30)
at evalScript (internal/bootstrap/node.js:582:27)
$ node --experimental-worker -p "require('worker_threads')"
{ isMainThread: true,
MessagePort: [Function: MessagePort],
MessageChannel: [Function: MessageChannel],
threadId: 0,
Worker: [Function: Worker],
parentPort: null }
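As a side note on the "Node being slow at spawning workers" point above, here is a hypothetical micro-benchmark (not Jest's code, and not the test later referenced in this thread) comparing the startup cost of a child process and a worker thread; it needs Node 10.5+ and must be run with --experimental-worker on Node 10/11:

```js
// spawn-overhead.js — hypothetical micro-benchmark of process vs. thread startup.
// Run with: node --experimental-worker spawn-overhead.js
const { spawn } = require('child_process');
const { Worker } = require('worker_threads');

function timeProcess() {
  return new Promise(resolve => {
    const start = Date.now();
    const child = spawn(process.execPath, ['-e', '']); // node process that does nothing
    child.on('exit', () => resolve(Date.now() - start));
  });
}

function timeThread() {
  return new Promise(resolve => {
    const start = Date.now();
    const worker = new Worker('', { eval: true }); // worker thread that does nothing
    worker.on('exit', () => resolve(Date.now() - start));
  });
}

(async () => {
  console.log('child process startup:', await timeProcess(), 'ms');
  console.log('worker thread startup:', await timeThread(), 'ms');
})();
```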
It might make sense for us to use …
If someone hits this issue: I edited my previous comment, #6783 (comment). I am really busy these days, so when I have time I'll continue with …
Follow-up: @jeysal did the test with workers and it's not the cause.
This issue is stale because it has been open for 1 year with no activity. Remove stale label or comment or this will be closed in 14 days.
This issue was closed because it has been stalled for 7 days with no activity. Please open a new issue if the issue is still relevant, linking to this one.
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
💥 Regression Report
Re-run test in version 23 is significantly slower than version 22 on Windows
Last working version
Worked up to version: 22.4.4
Stopped working in version: 23.0.0
To Reproduce
repo: testjest22
repo: testjest23
These two repos only contain a test file, jest.test.js, and install different Jest versions. Run
yarn test --watchAll --env=node
then press Enter to trigger a test run.
In v22 it takes about 0.05s, but over 2s in v23. When editing the test file to trigger a re-run, v22 can finish in 0.5s but v23 usually takes close to 3s.
Run npx envinfo --preset jest: …
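For reference, the test file in those repos is a trivial one; something along these lines (a guess at its contents, the actual repos may differ slightly):

```js
// jest.test.js — a noop-style test, small enough that Jest's own overhead dominates.
test('adds numbers', () => {
  expect(1 + 1).toBe(2);
});
```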
Update: it is an issue only on Windows; testing on Linux, it is just a little bit slower, but acceptable.