For many types of changes (such as #76 and #78, recently), it would be great to have a set of benchmarking scripts for assessing performance impacts.
spritezero currently has some unit tests that function somewhat like benchmarks (see here), but it is not ideal to have these mixed in with the unit tests.
Benchmarking script guidelines (a rough sketch of such a script follows the list):
- Should live in the bench/ directory, following the convention of other repos (e.g. https://github.com/mourner/rbush/tree/master/bench)
- Can be run locally to assess whether changes to spritezero deps or code have an impact on performance or memory usage
- Should do no network I/O, only local I/O
- Should be primarily designed to be run locally and used for collaborative testing
- Should nevertheless be run on CI (with explicit lines added to the .travis.yml to run them) to ensure they don't break, since they will not run as unit tests. Their timing results should be ignored on CI; the only purpose there is to make sure they don't break under refactors.
- Should dump their output in an easy-to-share format that can be posted on tickets and referred back to
- Input data should be stable over time so results can be compared across commits with git bisect
- Should have configurable concurrency and should not default to using os.cpus(); otherwise results will not be comparable across machines.
(h/t @springmeyer for this list)
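
To make the guidelines above more concrete, here is a rough sketch (not part of the repo) of what a bench script could look like, using only Node core modules. The file name, fixture directory, iteration count, and the runOnce() placeholder are all hypothetical; a real script would call into spritezero at that point. It takes concurrency from an env var or CLI argument, reads only local fixture data, and prints a JSON summary that can be pasted into a ticket.

```js
// bench/sprite.bench.js — sketch only. No network access, local I/O only.
var fs = require('fs');
var path = require('path');

// Concurrency is explicit (env var or CLI arg), never os.cpus(),
// so results stay comparable across machines.
var concurrency = parseInt(process.env.BENCH_CONCURRENCY || process.argv[2] || '1', 10);

// Stable, checked-in fixture data so results can be compared across commits.
var fixtureDir = path.join(__dirname, 'fixtures'); // hypothetical path

// runOnce() stands in for whatever spritezero operation is being measured;
// the real call (e.g. layout generation over the fixtures) would go here.
function runOnce(callback) {
  fs.readdir(fixtureDir, function (err, files) {
    if (err) return callback(err);
    // ... do the spritezero work on `files` here ...
    callback(null);
  });
}

var iterations = 20; // hypothetical; keep fixed so runs are comparable
var started = 0;
var finished = 0;
var startTime = process.hrtime();

function done() {
  var diff = process.hrtime(startTime);
  var elapsedMs = diff[0] * 1e3 + diff[1] / 1e6;
  // Dump results as JSON: easy to share on a ticket and refer back to.
  console.log(JSON.stringify({
    date: new Date().toISOString(),
    node: process.version,
    concurrency: concurrency,
    iterations: iterations,
    totalMs: +elapsedMs.toFixed(2),
    msPerIteration: +(elapsedMs / iterations).toFixed(2),
    rssMb: +(process.memoryUsage().rss / 1048576).toFixed(1)
  }, null, 2));
}

function next() {
  if (started >= iterations) return;
  started++;
  runOnce(function (err) {
    if (err) throw err;
    finished++;
    if (finished === iterations) return done();
    next();
  });
}

// Kick off `concurrency` parallel chains of work.
for (var i = 0; i < Math.min(concurrency, iterations); i++) next();
```

On CI this would just be an explicit line in .travis.yml (e.g. `node bench/sprite.bench.js`), with the timing numbers themselves ignored there.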