Slurm-scripts #337
Conversation
These can also be transposed given a matrix
Codecov Report

@@           Coverage Diff            @@
##            main    #337     +/-   ##
========================================
- Coverage   2.06%   1.75%   -0.32%
========================================
  Files         21      20       -1
  Lines       4888    4841      -47
========================================
- Hits         101      85      -16
+ Misses      4787    4756      -31

See 5 files with indirect coverage changes.
Preview page for your plugin is ready here:
I think I need to add documentation on how to run the code.
Thanks for these utilities @edyoshikun. Yes, a quick how-to on staging the data will be useful. These can be comments in specific scripts or a markdown guide linked in the docs. Have you already implemented a utility that reconstructs data in parallel across positions, preceded or followed by the creation of metadata at all positions within the zarr store? Is
I think this file does this (although it uses iohub internals that are now wrapped in the public API, see following comment).
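As an aside for readers of this thread, a minimal sketch of such a utility using iohub's public `open_ome_zarr` API might look like the following; the paths, channel name, shapes, and chunking are illustrative assumptions rather than the actual code referenced above:

```python
# Sketch (not the PR's implementation): pre-create an empty HCS OME-Zarr store
# mirroring the positions of the input, so that parallel jobs can each write
# their reconstructed position without racing on metadata creation.
from iohub import open_ome_zarr

INPUT = "converted.zarr"       # assumed path to the converted input store
OUTPUT = "reconstructed.zarr"  # assumed path for the output store

with open_ome_zarr(INPUT, mode="r") as src, open_ome_zarr(
    OUTPUT, layout="hcs", mode="w", channel_names=["Phase"]
) as dst:
    for name, src_pos in src.positions():
        row, col, fov = name.split("/")
        dst_pos = dst.create_position(row, col, fov)
        t, _, z, y, x = src_pos.data.shape
        # Allocate the output array up front; workers only fill in pixel data.
        dst_pos.create_zeros(
            name="0",
            shape=(t, 1, z, y, x),
            dtype="float32",
            chunks=(1, 1, 1, y, x),
        )
```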
@edyoshikun this PR no longer needs a review, correct? Can you change it to a draft or close it?
Deprecated by recent work on mantis.
This PR adds examples of how to use Slurm for parallel birefringence and phase reconstructions.
Both pipelines convert the data to zarr, create an empty zarr store for the output, and parallelize the reconstructions per position so that they write to the output zarr simultaneously.
These scripts run on our Zenodo dataset, but larger datasets benefit far more from HPC.
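As context for how the parallelization works, here is a hedged sketch of the per-position fan-out; the `reconstruct.py` entry point, its flags, and the Slurm resource requests are hypothetical placeholders, not the scripts added in this PR:

```python
# Sketch (hypothetical entry point and flags): submit one Slurm job per
# position; every job writes its result into the pre-created output store.
import subprocess
from iohub import open_ome_zarr

INPUT = "converted.zarr"       # assumed path to the converted input store
OUTPUT = "reconstructed.zarr"  # assumed path to the pre-created output store

with open_ome_zarr(INPUT, mode="r") as src:
    position_keys = [name for name, _ in src.positions()]  # e.g. "A/1/0"

for key in position_keys:
    job = f"python reconstruct.py --input {INPUT} --output {OUTPUT} --position {key}"
    subprocess.run(
        [
            "sbatch",
            "--job-name", f"recon-{key.replace('/', '-')}",
            "--cpus-per-task", "8",
            "--mem", "32G",
            "--time", "0-01:00:00",
            "--wrap", job,
        ],
        check=True,
    )
```

Because each position is an independent array in the output store, the submitted jobs can write concurrently without coordinating with one another.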