Implement tooling to ensure that pull requests do not inadvertently cause significant regressions in memory usage or runtime.
At a hack day discussion, we targeted initial implementation and testing in stcal, since it's a smaller repository but still has non-trivial memory- and runtime-hungry functions (e.g., the outlier detection median calculation). The goal is very quick adoption by JWST, followed by adoption by Roman once other outstanding regression test issues have been resolved.
The preferred workflow would be to have decorators that could be added to tests to flag them as memory- and/or runtime-checked. At the time of writing, codspeed is being considered for runtime, and pytest-memray is being considered for memory.
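pytest-memray exposes this pattern as a `@pytest.mark.limit_memory("100 MB")` marker. To illustrate the intended workflow without depending on either plugin, here is a stdlib-only sketch of such a decorator; the `memory_limited` name and the byte thresholds are hypothetical, not part of any plugin's API:

```python
import functools
import tracemalloc

def memory_limited(max_bytes):
    """Fail the wrapped test if its peak traced allocation exceeds max_bytes.

    Hypothetical stand-in for a plugin-provided marker such as
    pytest-memray's @pytest.mark.limit_memory("100 MB").
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            tracemalloc.start()
            try:
                result = func(*args, **kwargs)
                # Peak is the high-water mark since tracemalloc.start().
                _, peak = tracemalloc.get_traced_memory()
            finally:
                tracemalloc.stop()
            assert peak <= max_bytes, (
                f"peak memory {peak} B exceeded limit {max_bytes} B"
            )
            return result
        return wrapper
    return decorator

@memory_limited(10 * 1024 * 1024)  # hypothetical 10 MiB budget
def test_small_allocation():
    data = bytearray(1024 * 1024)  # ~1 MiB, well under the limit
    assert len(data) == 1024 * 1024

test_small_allocation()
```

A real implementation would live in the plugin rather than in each repository; the sketch only shows the shape of the decorator-based workflow described above.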
Task list for now:

- Add memory usage unit tests for methods where memory optimization has been attempted, e.g., the median calculator in outlier detection.
- Test out codspeed by starting to use it in stcal.
- Figure out a good general way to report memory usage on PR branches.
- Decide whether memory usage regression tests are needed, or if benchmarking and showing the results to PR developers is good enough.
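On the runtime side, codspeed measures instrumented benchmarks in CI rather than enforcing local wall-clock limits. As a rough illustration of the kind of check under discussion, here is a stdlib-only timing sketch; the `time_limited` helper, the budget, and the toy workload are all hypothetical, and wall-clock budgets like this are noisy in CI, which is one reason an instrumented tool such as codspeed is being considered instead:

```python
import time

def time_limited(func, max_seconds, repeats=3):
    """Run func several times and assert the best wall-clock time fits a budget.

    Hypothetical helper: taking the minimum of a few repeats reduces noise,
    but CI wall-clock timing remains unreliable compared to instrumented
    benchmarking tools.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        func()
        best = min(best, time.perf_counter() - start)
    assert best <= max_seconds, f"runtime {best:.3f}s exceeded {max_seconds}s"
    return best

def toy_workload():
    # Placeholder for a runtime-sensitive function under test.
    values = sorted(range(100_000))
    return values[len(values) // 2]

time_limited(toy_workload, max_seconds=5.0)
```

Benchmark-and-report (the codspeed model) avoids flaky hard limits like this one, which bears directly on the last task above.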
A PR in stcal adds a single unit test to the median calculator ensuring its memory usage does not go substantially above the expected amount. The structure of this test, using tracemalloc and comparing the tracemalloc.get_traced_memory() high-water mark against an expected bound, should be reused in cases where the expected memory usage can be calculated straightforwardly from the input size.
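A minimal sketch of that test structure, using a toy per-position median over a stack of arrays in place of the real stcal function; the workload sizes and the 4x tolerance factor are placeholders, not the values chosen in the actual PR:

```python
from array import array
import statistics
import tracemalloc

def stacked_median(frames):
    # Toy stand-in for the outlier-detection median calculator:
    # per-position median across a stack of equal-length arrays.
    return array("d", (statistics.median(col) for col in zip(*frames)))

def test_stacked_median_memory():
    n_frames, n_values = 8, 50_000
    # Inputs are allocated before tracing starts, so only the
    # function's own allocations count toward the peak.
    frames = [array("d", range(n_values)) for _ in range(n_frames)]

    # Expected peak: the output array (8 bytes per double) plus small
    # transient per-column tuples. The 4x factor is a placeholder
    # tolerance, not the bound used in the actual stcal PR.
    expected = 8 * n_values
    budget = 4 * expected

    tracemalloc.start()
    try:
        stacked_median(frames)
        _, peak = tracemalloc.get_traced_memory()
    finally:
        tracemalloc.stop()
    assert peak <= budget, f"peak {peak} B exceeded budget {budget} B"

test_stacked_median_memory()
```

The key point is starting the trace after the inputs exist and comparing the peak against a bound derived from the input size, which is exactly the structure described above.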
Issue JP-3775 was created on JIRA by Ned Molter with the same description as above.