module: NativeModule._cache turned to a Map #8576
Conversation
Would be interesting to see the performance impact of this first. /cc @nodejs/benchmarking

@mscdex How can I check that? Are there any performance tests I can run?
I don't think there is a native module loading benchmark in this repo, at least so far. I am not sure if there are any elsewhere. The benchmarking team might know.

LGTM, since I believe it is always outside of hot paths. The performance regression shouldn't be massive, really.
$ # in master (a89c23f8):
$ make -j8
$ ./node benchmark/module/module-loader.js
module/module-loader.js fullPath="true" thousands=50: 1.5335868438348847
module/module-loader.js fullPath="false" thousands=50: 1.435314624271457
$ git checkout NativeModule-cache-map
$ make -j8
$ ./node benchmark/module/module-loader.js
module/module-loader.js fullPath="true" thousands=50: 1.5309753684627516
module/module-loader.js fullPath="false" thousands=50: 1.3562813178146016

And what if I want to run ./node benchmark/compare.js --old out/Release/node-master --new ./node X?
@eljefedelrodeodeljefe IMHO it's a hot path since

@danyshaanan That benchmark mostly exercises third-party module loading, which goes through a different mechanism than the built-in ("native") modules.
I agree with @mscdex

@mscdex Does that mean that that benchmark does not reflect the real use and effect of the code change? If so, how can it be properly checked?
@danyshaanan Correct. If the benchmarking team doesn't already have one, a new benchmark would have to be added to test the native module loading mechanism. That would be kind of tricky to implement since I believe some modules are loaded during startup, so startup time would also have to be measured (I think there is already a benchmark for startup time).
I’m not sure the startup time is noticeably affected by this (I also think that’s already benchmarked?), but it would probably be nice to see benchmarks for re-requiring builtin modules over and over again.
@nodejs/benchmarking I know you run require cache benchmarks for https://benchmarking.nodejs.org/, could you run those on this?
ping @nodejs/benchmarking again, because that doesn’t seem to work from review comments?
But what's the point of this change, really? Is it supposed to be faster?
afaik that was basically the outcome of the |
Well, the same logic is not applicable here, because we don't use untrusted strings as keys |
I am not sure how that relates…? I mean, yes, the original reasons for the querystring/http discussions were untrusted keys, but the microbenchmarking result that (See #7581 for what that speed information is based on. I realize it would probably be good to benchmark this separately.)
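For illustration only, the two cache shapes under discussion look like this. The module ids and the shape of the cached entries below are made up for the example, not the loader's real internals:

```javascript
// Illustrative comparison of a plain-object cache vs. a Map cache,
// keyed by module id. Entry contents here are invented.
const objCache = Object.create(null); // null prototype avoids inherited keys like 'hasOwnProperty'
const mapCache = new Map();

function getCachedObj(id) {
  if (!(id in objCache)) objCache[id] = { id, loaded: true };
  return objCache[id];
}

function getCachedMap(id) {
  if (!mapCache.has(id)) mapCache.set(id, { id, loaded: true });
  return mapCache.get(id);
}

// Both return the same entry on repeated lookups:
console.log(getCachedObj('fs') === getCachedObj('fs')); // true
console.log(getCachedMap('fs') === getCachedMap('fs')); // true
```

With trusted, fixed keys (builtin module names), the two behave the same functionally; the question raised here is purely about lookup speed.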
I'm not so sure about
@danyshaanan ... I cannot remember if we spoke at the code-and-learn about running the benchmark comparisons. It would be great to get some comparisons on the performance impact of this change. If you need some help collecting those, definitely let us know. As @mscdex points out, it's unclear if using

As for the rest, I'm +1 on this change.
@jasnell - We have, a bit. I ran what I could (#8576 (comment)). From Brian's comment (#8576 (comment)) I understand that a new benchmark is required, so if benchmark/README.md is up to date, I could look into it and give it a shot when I'm back home... I guess the correct thing would be to open an issue and move the benchmark-related discussion there.
I've had a go at running the require benchmarks locally. The require test only requires a fairly basic .js file, not one with native code. I might see if I can write a basic native module and add that as one of these require tests too. In the meantime, here are the results of the require tests as run on benchmarking.nodejs.org. The scores are in operations per second, so higher is better.

We want to compare pr-require.new with master-require.new, and pr-require.cached with master-require.cached. We can see that we get a lower average ops/second for this PR compared to the base build.

Caveats: This data was not collected on the benchmarking machine, so it can't be compared to the results displayed on benchmarking.nodejs.org. Looking at the data on benchmarking.nodejs.org, we can see there's quite a bit of variability in this benchmark already. Early comments in this PR suggest that this may not even be testing what this PR changes, in which case either the differences we see above are due to variability, or to some unexpected side effect.

I'll run some more measures (the above data was a set of 10, interleaving pr, master, pr, etc.).
Just to be clear, in this context
@gareth-ellis with the results from the benchmarks being so close, I would recommend running it quite a few times, say 30, on a benchmark machine with as little noise as possible, and getting the t-value. A micro-benchmark that close worries me :) Second, I don't know how much time has been spent optimizing
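For reference, the t-value computation being suggested could be sketched like this (Welch's t-statistic over two samples; the sample numbers below are invented, not real benchmark output):

```javascript
// Sketch of a t-value computation (Welch's t-statistic) for comparing
// two sets of benchmark runs. The sample values are made up.
function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function sampleVariance(xs) {
  const m = mean(xs);
  return xs.reduce((a, x) => a + (x - m) ** 2, 0) / (xs.length - 1);
}

function welchT(a, b) {
  return (mean(a) - mean(b)) /
    Math.sqrt(sampleVariance(a) / a.length + sampleVariance(b) / b.length);
}

// Invented samples standing in for "master" and "pr" builds:
const master = [1.53, 1.52, 1.54, 1.53, 1.55];
const pr = [1.53, 1.51, 1.52, 1.54, 1.52];

console.log(welchT(master, pr).toFixed(2)); // small |t| suggests the difference may be noise
```

With more runs (the 30 suggested above), a small t-value would indicate the observed difference is within the benchmark's noise.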
c133999 to 83c7a88
Closing in favor of #10789 which touches on the same bits. |
Checklist
- make -j4 test (UNIX), or vcbuild test nosign (Windows) passes

Description of change
Changed NativeModule._cache in lib/internal/bootstrap_node.js to be a Map, in accordance with nodejs/code-and-learn#56 (comment)
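A minimal sketch of what this kind of change looks like follows. This is illustrative code only, not the actual contents of lib/internal/bootstrap_node.js; the method names mirror the style of the real loader but the logic is simplified.

```javascript
// Sketch of a NativeModule-like loader cache moving from a plain
// object to a Map (simplified for illustration).
function NativeModule(id) {
  this.id = id;
  this.exports = {};
}

// Before (roughly): NativeModule._cache = {};
NativeModule._cache = new Map();

NativeModule.getCached = function (id) {
  return NativeModule._cache.get(id); // was: NativeModule._cache[id]
};

NativeModule.prototype.cache = function () {
  NativeModule._cache.set(this.id, this); // was: NativeModule._cache[this.id] = this
};

const mod = new NativeModule('fs');
mod.cache();
console.log(NativeModule.getCached('fs') === mod); // true
```

The behavior is unchanged for fixed, trusted keys; only the lookup and insertion syntax differ between the object and Map forms.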