This issue is for tracking and exploring creating a suite of benchmarks for the resolver so that we can better evaluate changes.
There are several different things worth benchmarking. Some rough ideas:
- Creating a `Cargo.lock` from scratch.
- Re-running the resolver when a `Cargo.lock` already exists. (This is probably the most important one.)
- Pathological edge cases, including those that result in errors. I'm not sure what good candidates would be; maybe @Eh2406 has some ideas?
- Covering `resolve_ws_with_opts`, which runs the resolver and does a bunch of other work.
It may be useful to test against some real-world projects and some synthesized ones. The benchmarking suite can probably grab a snapshot of the crates.io index at a specific commit (by cloning https://github.com/rust-lang/crates.io-index and checking out that commit; a rough sketch follows the project list below). It probably shouldn't be too hard to capture the `Cargo.toml` files from some real-world projects so we can create some lightweight tests. Some real-world projects that I have used are (along with the approximate number of dependencies):
- empty project: 0
- toml: 14
- cargo: 130
- rust: 518
- tikv: 552
- firefox: 577
- diem: 653
- servo: 658
- paritytech/substrate: 896
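As a rough sketch of the snapshot idea above (assuming nothing beyond `git` being available; the commit hash and destination path are placeholders, not anything decided here), capturing the index at a fixed commit could look like:

```rust
use std::process::Command;

// Placeholder values; the actual pinned commit would be chosen when the
// benchmark fixtures are set up.
const INDEX_URL: &str = "https://github.com/rust-lang/crates.io-index";
const INDEX_COMMIT: &str = "<pinned-commit-sha>";

fn fetch_index_snapshot(dest: &str) -> std::io::Result<()> {
    // Clone the index, then check out the pinned commit so every benchmark run
    // resolves against exactly the same view of crates.io.
    let status = Command::new("git").args(["clone", INDEX_URL, dest]).status()?;
    assert!(status.success(), "git clone failed");

    let status = Command::new("git")
        .args(["-C", dest, "checkout", INDEX_COMMIT])
        .status()?;
    assert!(status.success(), "git checkout failed");
    Ok(())
}

fn main() {
    fetch_index_snapshot("crates.io-index-snapshot").expect("fetching index snapshot failed");
}
```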
I'm not sure which benchmarking libraries would be good to use. Criterion seems nice, but others may have better suggestions.
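For example, a Criterion harness covering the first two cases above (lock file from scratch, and re-resolving with an existing lock) might look roughly like this. The fixture path is hypothetical, and shelling out to `cargo generate-lockfile` is just one way to drive the resolver end-to-end; it also assumes the registry index is already cached locally so `--offline` works.

```rust
use std::path::Path;
use std::process::Command;
use criterion::{criterion_group, criterion_main, Criterion};

// Hypothetical path to a captured fixture workspace (not something that exists yet).
const FIXTURE: &str = "benches/fixtures/example-workspace";

fn run_resolver() {
    // `cargo generate-lockfile` exercises the resolver end-to-end without building.
    let status = Command::new("cargo")
        .args(["generate-lockfile", "--offline"])
        .current_dir(FIXTURE)
        .status()
        .expect("failed to spawn cargo");
    assert!(status.success());
}

fn bench_resolver(c: &mut Criterion) {
    // Case 1: create Cargo.lock from scratch (delete it before every iteration).
    c.bench_function("resolve_from_scratch", |b| {
        b.iter(|| {
            let _ = std::fs::remove_file(Path::new(FIXTURE).join("Cargo.lock"));
            run_resolver();
        })
    });

    // Case 2: re-run the resolver with an existing Cargo.lock.
    c.bench_function("re_resolve_with_existing_lock", |b| b.iter(run_resolver));
}

criterion_group!(benches, bench_resolver);
criterion_main!(benches);
```

Note that shelling out like this measures the whole `cargo` process (startup included), which overlaps with the "overall overhead" benchmarks discussed below; benchmarking the resolver in isolation would instead need hooks into Cargo's internals.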
We may also want to create benchmarks for overall overhead (time to run `cargo build` on a project that is in a "fresh" state where no builds are necessary). This would cover several parts:
- Process startup.
- Config loading.
- Workspace loading.
- Resolver.
- New feature resolver.
- Generating units.
- Scanning fingerprints.
I often do this with `hyperfine` on the projects listed above. It might be nice to make this easier to do, so it's something to keep in mind when designing the benchmarks above.
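If we want something self-contained alongside `hyperfine`, a small driver along these lines could time repeated no-op builds; the project path is a placeholder:

```rust
use std::process::Command;
use std::time::{Duration, Instant};

// Placeholder path to one of the captured projects listed above.
const PROJECT: &str = "benches/fixtures/example-workspace";

fn build() -> Duration {
    let start = Instant::now();
    let status = Command::new("cargo")
        .arg("build")
        .current_dir(PROJECT)
        .status()
        .expect("failed to spawn cargo");
    assert!(status.success());
    start.elapsed()
}

fn main() {
    // One warm-up build so everything is compiled and later runs are "fresh" no-ops.
    build();

    // Each timed run then covers process startup, config and workspace loading,
    // both resolvers, unit generation, and fingerprint scanning.
    for run in 0..10 {
        println!("run {run}: {:?}", build());
    }
}
```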
@Eh2406 and @alexcrichton, if you have any other ideas or thoughts about what would be good to do, please include them here.
I expect this to be implemented in incremental steps. That is, we don't need a perfect benchmarking suite that covers everything all at once. Just automating a few real-world tests would be a good first step. I also expect these to be manually run by Cargo developers in an ad-hoc fashion at first. I don't think we have the facilities to do anything automated.