CVPR Workshop on Computer Vision for Microscopy Image Analysis (CVMI) 2024
Website / arXiv / MLiNS Lab / OpenSRH / Model checkpoint / Sample volumes
- Clone the MSDSR github repo:
  ```
  git clone [email protected]:MLNeurosurg/msdsr.git
  ```
- Install miniconda: follow instructions here
- Create a conda environment:
  ```
  conda create -n msdsr python=3.11
  ```
- Activate the conda environment:
  ```
  conda activate msdsr
  ```
- Install the package and dependencies:
  ```
  pip install -e .
  ```
We release our pretrained DDPM model checkpoint and sample 3D volumes. They are available at the links below:
Model checkpoint / Sample volumes
The codebase is written using PyTorch Lightning, with custom networks and datasets for OpenSRH.
To train MSDSR on the OpenSRH dataset:
- Download OpenSRH - request data here.
- Update the sample config file in `train/config/train_msdsr.yaml` with desired configurations.
- Change directory to `train` and activate the conda virtual environment.
- Use `train/train_msdsr.py` to start training:
  ```
  python train_msdsr.py -c=config/train_msdsr.yaml
  ```
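The exact schema of `train/config/train_msdsr.yaml` is defined by the repo; as a purely hypothetical illustration of the kinds of fields a Lightning training config of this sort typically holds (every key and value below is an assumption, not the file's real contents), it might look like:

```yaml
# Hypothetical sketch only — consult the sample file in train/config/train_msdsr.yaml
# for the actual keys and values used by MSDSR.
data:
  db_root: /path/to/opensrh   # where the downloaded OpenSRH data lives
training:
  batch_size: 16
  learning_rate: 1.0e-4
  num_epochs: 100
infra:
  log_dir: /path/to/logs
  exp_name: msdsr_experiment
```

Checking the sample file before training is the reliable way to see which options are available.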
To evaluate with your trained model:
- Update the sample config files in `eval/config/*.yaml` with the checkpoint path and other desired configurations per file. If you are using the released checkpoint, place the checkpoint at the path `$log_dir/$exp_name/msdsr_cvmi24/models/d17986ac.ckpt`.
- Change directory to `eval` and activate the conda virtual environment.
- Use the evaluation scripts in `eval/*.py` for evaluation. For example:
  ```
  # paired 2D evaluation
  python eval_paired.py -c=config/eval_paired.yaml

  # generate 3D volumes for evaluation
  python generate_volumes.py -c=config/generate_volumes.yaml

  # compute metrics for 3D volumes [requires paired 2D evaluation and generated 3D volumes]
  python compute_volume_metrics.py -c=config/compute_metrics.yaml
  ```
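The metrics the repo actually reports come from `compute_volume_metrics.py`; as an illustrative, dependency-free sketch of one common image-quality metric used in super-resolution evaluation (PSNR), not MSDSR's own implementation, a minimal version for small 3D volumes stored as nested lists could look like:

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized 3D volumes.

    Illustrative only: real evaluation pipelines operate on tensors/arrays,
    not nested Python lists.
    """
    # Flatten both depth x height x width volumes into 1D lists of voxels.
    flat_ref = [v for plane in ref for row in plane for v in row]
    flat_test = [v for plane in test for row in plane for v in row]
    # Mean squared error over all voxels.
    mse = sum((a - b) ** 2 for a, b in zip(flat_ref, flat_test)) / len(flat_ref)
    if mse == 0:
        return float("inf")  # identical volumes
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy 2x2x2 volumes differing in two voxels.
a = [[[10.0, 12.0], [14.0, 16.0]], [[18.0, 20.0], [22.0, 24.0]]]
b = [[[11.0, 12.0], [14.0, 16.0]], [[18.0, 20.0], [22.0, 25.0]]]
print(round(psnr(a, b), 2))  # → 54.15
```

Higher PSNR indicates a closer match to the reference volume; the released scripts should be treated as the authoritative metric definitions.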