Merged
Codecov Report: All modified and coverable lines are covered by tests ✅

```diff
@@            Coverage Diff             @@
##             main      #94      +/-   ##
==========================================
+ Coverage   79.38%   84.45%   +5.06%
==========================================
  Files          18       17       -1
  Lines         917      862      -55
==========================================
  Hits          728      728
+ Misses        189      134      -55
```
This was referenced on Apr 22, 2024.
alessandrofelder (Member) approved these changes on Apr 25, 2024:
Happy with this - just one tiny suggestion.
Co-authored-by: Alessandro Felder <alessandrofelder@users.noreply.github.com>
Branch updated from fdc29fa to f800053.
Description
What is this PR
Why is this PR needed?
We are exploring a systematic way to benchmark `brainglobe-workflows` using `asv`. This PR fixes some issues running the `cellfinder` workflow benchmarks (1) on a small GIN dataset and (2) on data available locally.

What does this PR do?
This PR involves:
setup_cachefunction of the benchmarks,To run the benchmarks locally on small dataset from GIN
1. Check out this branch to get the latest version of the benchmarks locally.
2. Create a conda environment and `pip install asv`. Note that to run the benchmarks you do not need to install a development version of `brainglobe-workflows`, since `asv` will create a separate Python virtual environment to run the benchmarks in. However, for convenience we do include `asv` as part of the `dev` dependencies, so you can use a `dev` environment to run benchmarks.
3. For a quick check, run one iteration per benchmark, adding `-v --show-stderr` for more verbose output. This will install, in the `asv` virtual environment, the `brainglobe-workflows` package from the tip of the currently checked-out local branch, and run the (locally defined) benchmarks on it.

To run the benchmarks (locally) on a locally available dataset
1. Define a config file for the workflow to benchmark. You can use the default one at `brainglobe_workflows/configs/cellfinder.json` for reference. It should include an `input_data_dir` field pointing to the data of interest. By default, the signal and background data are expected in `signal` and `background` subdirectories under `input_data_dir`, but these defaults can be overwritten with the `signal_subdir` and `background_subdir` fields.
2. Create and activate an environment with `asv` (follow steps 1 and 2 from above).
3. Run the benchmarks in "quick mode", passing the path to your config file as the environment variable `CONFIG_PATH` (on Unix systems).

Troubleshooting
You may find that the conda environment creation fails because of this issue. This seems to be because `asv` assumes a conda syntax that changed with the latest release (in conda 24.3.0, `--force` became `--yes`). A PR is on the way; as a temporary workaround, you can run `conda install -y "conda<24.3"` from the `base` environment.

References
See issue #9.
Also related is issue #98, which I am currently investigating.
Further context
We currently have `asv` benchmarks for the three main steps involved in the `cellfinder` workflow. We also have a benchmark for the full workflow.
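For illustration, an `asv` time benchmark for a single workflow step follows the shape below. This is a minimal sketch using asv's naming conventions (`setup_cache`, `setup`, and `time_*` prefixes), with a fabricated stand-in payload; it is not the actual benchmark code from this repository.

```python
# Minimal sketch of an asv-style benchmark class (illustrative only).
# asv discovers methods by prefix: `time_*` methods are timed, while
# `setup_cache` runs once and its return value is passed as the first
# argument to `setup` and to each `time_*` method.

class TimeDetectCells:
    def setup_cache(self):
        # Expensive one-off preparation, e.g. fetching the small GIN
        # dataset; here we just fabricate a stand-in payload.
        return {"signal": list(range(1000)), "background": list(range(1000))}

    def setup(self, cached_data):
        # Runs before each timing round; receives setup_cache's return value.
        self.data = cached_data

    def time_detect(self, cached_data):
        # The code being timed (a placeholder computation here).
        sum(x * x for x in self.data["signal"])
```

Because `setup_cache` runs only once, putting data download and preparation there keeps the timed section focused on the workflow step itself.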
We envisioned the benchmarks being useful to developers in three main ways (using, for example, the default config at `brainglobe_workflows/configs/cellfinder.json`). This is all explained in the README.
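As a concrete illustration of the config-driven setup, the sketch below builds a minimal config with the field names mentioned in this PR (`input_data_dir`, `signal_subdir`, `background_subdir`) and passes its path via the `CONFIG_PATH` environment variable. The exact schema of `cellfinder.json` may differ; treat the field values and file name here as placeholders.

```python
import json
import os
import tempfile

# Minimal benchmark config using the fields described above.
# `signal_subdir`/`background_subdir` override the default subdirectory names.
config = {
    "input_data_dir": "/path/to/local/data",
    "signal_subdir": "signal",
    "background_subdir": "background",
}

# Write it to disk and point the benchmarks at it via CONFIG_PATH,
# mirroring how the environment variable is used in quick mode.
config_path = os.path.join(tempfile.mkdtemp(), "my_cellfinder_config.json")
with open(config_path, "w") as f:
    json.dump(config, f, indent=2)

os.environ["CONFIG_PATH"] = config_path

# A benchmark can then load the config back like this:
with open(os.environ["CONFIG_PATH"]) as f:
    loaded = json.load(f)
print(loaded["input_data_dir"])  # → /path/to/local/data
```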
A reminder of how `asv` works:

- `asv` creates a virtual environment in which it installs the package to be benchmarked (in our case, `brainglobe-workflows`). This virtual environment is defined in the asv config file (`asv.conf.json`).
- We configure `asv` so that the version of `brainglobe-workflows` installed in the asv-managed virtual environment is the one at the tip of the currently checked-out branch (i.e., the version at `HEAD`). This way developers can check whether their local branch introduces regressions. Alternatively, we can choose to install a version of `brainglobe-workflows` fetched from GitHub (for example, the tip of the remote `main` branch).
- `asv` looks for benchmarks under the `benchmarks` folder (at the same level as the `asv.conf.json` file) and runs them.

How has this PR been tested?
The benchmarks are checked with a CI job, rather than with explicit tests. This follows the general approach in the field - see #96 for more details.
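A CI smoke-test job of this kind could be sketched roughly as follows. This is a hypothetical workflow: the job name, action versions, Python version, and working directory are assumptions for illustration, not the repository's actual configuration.

```yaml
# Hypothetical CI smoke test for the benchmarks (illustrative only).
name: benchmarks
on: [pull_request]
jobs:
  benchmark-smoke-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - run: pip install asv
      # Record machine info non-interactively, then run each benchmark once.
      - run: asv machine --yes && asv run --quick
```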
Since we don't plan to test the benchmarks with pytest, I omitted the benchmarks from coverage.
Is this a breaking change?
No.
Does this PR require an update to the documentation?
The README has been updated to better reflect the current status.
Checklist: