# Installation instructions

## Installation of Anima metrics

Install the Anima binaries and the Anima scripts:

```
cd ~
mkdir anima/
cd anima/
wget -q https://github.com/Inria-Empenn/Anima-Public/releases/download/v4.2/Anima-macOS-4.2.zip # for macOS
unzip Anima-macOS-4.2.zip
rm Anima-macOS-4.2.zip
git lfs install
git clone --depth 1 https://github.com/Inria-Visages/Anima-Scripts-Public.git
git clone --depth 1 https://github.com/Inria-Visages/Anima-Scripts-Data-Public.git
```

Configure the Anima directories:

```
cd ~
mkdir .anima/
touch .anima/config.txt

echo "[anima-scripts]" >> .anima/config.txt
echo "anima = ${HOME}/anima/Anima-Binaries-4.2/" >> .anima/config.txt
echo "anima-scripts-public-root = ${HOME}/anima/Anima-Scripts-Public/" >> .anima/config.txt
echo "extra-data-root = ${HOME}/anima/Anima-Scripts-Data-Public/" >> .anima/config.txt
```
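After these commands, `~/.anima/config.txt` should contain the following (with `${HOME}` expanded to your home directory):

```
[anima-scripts]
anima = /home/<user>/anima/Anima-Binaries-4.2/
anima-scripts-public-root = /home/<user>/anima/Anima-Scripts-Public/
extra-data-root = /home/<user>/anima/Anima-Scripts-Data-Public/
```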
## Installation of required libraries

Create a conda virtual environment:
~~~
conda create -n venv_nnunet python=3.9
~~~

Activate the environment with the following command:
~~~
conda activate venv_nnunet
~~~

To install the libraries required to train an nnUNet v2 model:

```
pip install -r requirements_nnunet.txt
```
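Assuming `requirements_nnunet.txt` installs the `nnunetv2` package, you can check that the nnU-Net v2 command-line entry points are available:
~~~
nnUNetv2_train -h
~~~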
Install SpinalCordToolbox 6.0 by following the installation instructions: https://spinalcordtoolbox.com/user_section/installation.html

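
Once SCT is installed, you can verify the installation with its built-in dependency check:
~~~
sct_check_dependencies
~~~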
# Data preparation

Create the following folders:

~~~
mkdir nnUNet_raw
mkdir nnUNet_preprocessed
mkdir nnUNet_results
~~~

We train a region-based nnUNet that takes an image from the PSIR or STIR contrast and produces a mask with 0 = background, 1 = spinal cord and 2 = MS lesion.
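
For region-based training, nnU-Net v2 declares overlapping structures as regions in the dataset's `dataset.json`. A hypothetical `labels` section for this setup (the actual file is generated by the conversion script below) could look like:

~~~
"labels": {
    "background": 0,
    "spinal cord": [1, 2],
    "lesion": [2]
},
"regions_class_order": [1, 2]
~~~

Here the spinal cord region includes the lesion label, since lesions lie within the cord.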
Convert the data to the nnUNet format:

~~~
python convert_BIDS_to_nnunet.py --path-data /path/to/BIDS/dataset --path-out /path/to/nnUNet_raw --taskname TASK-NAME --tasknumber DATASET-ID --contrasts PSIR,STIR --test-ratio XX --time-point ses-XX --type training --exclude-file /path/to/exclude_file.yml
~~~

> **Note**
> A test ratio of 0.2 gives a 20% test split (and therefore an 80% train split). For M0 images, the time point is `ses-M0`.

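
For example, a conversion with a 20% test split on the M0 session (paths and task name are placeholders):

~~~
python convert_BIDS_to_nnunet.py --path-data /path/to/BIDS/dataset --path-out /path/to/nnUNet_raw --taskname TASK-NAME --tasknumber 101 --contrasts PSIR,STIR --test-ratio 0.2 --time-point ses-M0 --type training --exclude-file /path/to/exclude_file.yml
~~~
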
To multiply PSIR images by -1 before training and convert the data to the nnUNet format:

~~~
python convert_BIDS_to_nnunet_with_mul_PSIR.py --path-data /path/to/BIDS/dataset --path-out /path/to/nnUNet_raw --taskname TASK-NAME --tasknumber DATASET-ID --contrasts PSIR,STIR --test-ratio XX --time-point ses-XX --type training --exclude-file /path/to/exclude_file.yml
~~~

# Model training

Before training the model, nnU-Net performs data preprocessing and checks the integrity of the dataset:

~~~
export nnUNet_raw="/path/to/nnUNet_raw"
export nnUNet_preprocessed="/path/to/nnUNet_preprocessed"
export nnUNet_results="/path/to/nnUNet_results"

nnUNetv2_plan_and_preprocess -d DATASET-ID --verify_dataset_integrity
~~~

You will get the configuration plans for all four configurations (2d, 3d_fullres, 3d_lowres, 3d_cascade_fullres).
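
Assuming the default plans identifier, the generated plans can be inspected in the preprocessed dataset folder (here for dataset 101; adjust to your dataset ID and task name):
~~~
cat $nnUNet_preprocessed/Dataset101_TASK-NAME/nnUNetPlans.json
~~~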

To train the model, use the following command:
~~~
CUDA_VISIBLE_DEVICES=XXX nnUNetv2_train DATASET-ID CONFIG FOLD --npz
~~~

> **Note**
> Example for dataset 101, 2d configuration, fold 0: `CUDA_VISIBLE_DEVICES=2 nnUNetv2_train 101 2d 0 --npz`

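
nnU-Net uses 5-fold cross-validation by default, so a complete run trains folds 0 to 4. A minimal sketch for training all five folds of one configuration sequentially:
~~~
for fold in 0 1 2 3 4; do
    CUDA_VISIBLE_DEVICES=XXX nnUNetv2_train DATASET-ID CONFIG $fold --npz
done
~~~
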
# Model inference

Convert data to nnUNet format for inference using `convert_BIDS_to_nnunet.py` or `convert_BIDS_to_nnunet_with_mul_PSIR.py` with `--type=inference`.

Then perform inference:
~~~
CUDA_VISIBLE_DEVICES=XXX nnUNetv2_predict -i /path/to/image/folder -o /path/to/predictions -d DATASET-ID -c CONFIG --save_probabilities -chk checkpoint_best.pth -f FOLD
~~~

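If several configurations or folds were trained, nnU-Net can also suggest the best-performing one (this relies on the probabilities saved through the `--npz` training flag); a sketch:
~~~
nnUNetv2_find_best_configuration DATASET-ID -c CONFIG
~~~
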
# Inference evaluation

First, convert the predictions back to the BIDS format; this keeps only the lesion segmentation and discards the spinal cord segmentation:

~~~
python convert_predictions_to_BIDS.py --pred-folder /path/to/predictions --out-folder /path/to/output/folder --conversion-dict /path/to/conversion/dict
~~~

If you are converting predictions that are not from the nnUNet test set (`imagesTs`), use the flag `--not-imageTs`.

Then, evaluate the lesion predictions with the Anima metrics:

~~~
python evaluate_lesion_seg_prediction.py --pred-folder /path/to/predictions --dataset /path/to/dataset --animaPath /path/to/animaSegPerfAnalyzer --output-folder /path/to/output_folder
~~~

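For example, pointing `--animaPath` at the binaries installed at the beginning of this document (illustrative paths):
~~~
python evaluate_lesion_seg_prediction.py --pred-folder /path/to/predictions --dataset /path/to/dataset --animaPath ${HOME}/anima/Anima-Binaries-4.2/animaSegPerfAnalyzer --output-folder /path/to/output_folder
~~~
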
# Evaluation analysis

The notebook `nnUNet_inference_analysis.ipynb` was used to analyze the nnUNet segmentations.
To use it, change the paths to the CSV files.