
DWI mean SC seg #19

Closed
valosekj opened this issue Jul 18, 2024 · 9 comments

@valosekj (Owner) commented Jul 18, 2024:

Currently, I use the contrast-agnostic model to segment the mean DWI image:

https://github.com/valosekj/dcm-brno/blob/main/02_processing_scripts/02_process_data.sh#L347-L348

This SC segmentation is then used to bring the template to DWI space:

https://github.com/valosekj/dcm-brno/blob/main/02_processing_scripts/02_process_data.sh#L353-L359

However, when debugging #18, I noticed that the contrast-agnostic segmentation is slightly shifted. So I tried sct_deepseg_sc and found that it might actually provide a better segmentation:

[GIF: Kapture 2024-07-18 at 14 57 00]

TODO: compare sct_deepseg_sc and contrast-agnostic on more subjects
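One way to run this comparison quantitatively is a per-slice Dice score between the two segmentations; a drop in Dice near the superior slices would make the shift visible in a plot. The sketch below is a minimal NumPy illustration (hypothetical helper name, binary masks assumed to be on the same voxel grid, e.g. loaded with nibabel), not code from this repository:

```python
import numpy as np

def slicewise_dice(seg_a: np.ndarray, seg_b: np.ndarray) -> np.ndarray:
    """Dice score per axial (last-axis) slice of two binary masks."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    inter = (a & b).sum(axis=(0, 1))                  # overlap voxels per slice
    sizes = a.sum(axis=(0, 1)) + b.sum(axis=(0, 1))   # total voxels per slice
    # Empty-vs-empty slices get Dice 1.0; np.maximum avoids division by zero
    return np.where(sizes > 0, 2.0 * inter / np.maximum(sizes, 1), 1.0)
```

Plotting the returned array against slice index would show whether one model's agreement with a reference mask degrades only at the top of the FOV.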

@valosekj (Owner, author) commented Jul 18, 2024:

> TODO: compare sct_deepseg_sc and contrast-agnostic on more subjects

Okay, tested. The contrast-agnostic seg is shifted at the superior part of the FOV for almost all subjects. --> switching to sct_deepseg_sc in 6787d06

[GIF: Kapture 2024-07-18 at 17 49 05]

@valosekj (Owner, author):

Resolved in 6787d06

@valosekj (Owner, author) commented Jul 20, 2024:

I also tried contrast-agnostic v2.3, based on the recommendation from @naga-karthik:

> If possible, could you try using the v2.3 model instead? The rationale is that between models v2.3 and v2.4, I added the canproco dataset, and I remember that canproco had a lot of shifted segmentations in the GT.

Link to the data used.

```shell
conda activate monai
cd ~/code/contrast-agnostic-softseg-spinalcord

# v2.3
python monai/run_inference_single_image.py \
    --path-img /Users/valosek/Downloads/sub-1860B6472B/dwi/sub-1860B6472B_ses-1860B_acq-ZOOMit_dir-AP_dwi_crop_crop_moco_dwi_mean.nii.gz \
    --path-out /Users/valosek/Downloads/sub-1860B6472B/dwi/contrast_agnostic_v2.3 \
    --chkp-path /Users/valosek/Downloads/model_soft_bin_20240410-1136

# v2.4
python monai/run_inference_single_image.py \
    --path-img /Users/valosek/Downloads/sub-1860B6472B/dwi/sub-1860B6472B_ses-1860B_acq-ZOOMit_dir-AP_dwi_crop_crop_moco_dwi_mean.nii.gz \
    --path-out /Users/valosek/Downloads/sub-1860B6472B/dwi/contrast_agnostic_v2.4 \
    --chkp-path /Users/valosek/Downloads/nnunet_seed=50_ndata=7_ncont=9_pad=zero_nf=32_opt=adam_lr=0.001_AdapW_bs=2_20240425-170840/
```

The shift in the top slices is present for all versions (v2.3, v2.4, SCT):

[GIF: Kapture 2024-07-20 at 06 48 31]

@naga-karthik commented:

Hey Jan! Before I start debugging this issue a bit deeper, could you try the edge-padding option when running inference with model v2.3/v2.4 downloaded from the contrast-agnostic repository (i.e., not using SCT for inference)?

Basically from the command you posted above, the change would be:

```shell
python monai/run_inference_single_image.py \
    --path-img /Users/valosek/Downloads/sub-1860B6472B/dwi/sub-1860B6472B_ses-1860B_acq-ZOOMit_dir-AP_dwi_crop_crop_moco_dwi_mean.nii.gz \
    --path-out /Users/valosek/Downloads/sub-1860B6472B/dwi/contrast_agnostic_v2.4 \
    --chkp-path /Users/valosek/Downloads/nnunet_seed=50_ndata=7_ncont=9_pad=zero_nf=32_opt=adam_lr=0.001_AdapW_bs=2_20240425-170840/ \
    --pad-mode edge
```

Instead of zero padding, this option uses edge padding, and in my internal experiments I have seen that it works slightly better for the top/bottom slices. Let me know how it goes!
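The difference between the two padding modes can be illustrated with plain NumPy (this only demonstrates `np.pad` semantics, not the model's actual preprocessing code): zero padding introduces an artificial dark border at the volume boundary, while edge padding repeats the border intensities so the transition stays smooth.

```python
import numpy as np

# A 1-D intensity profile near the image border
row = np.array([80, 90, 100])

zero_pad = np.pad(row, 2, mode="constant")  # pads with 0: artificial dark border
edge_pad = np.pad(row, 2, mode="edge")      # repeats the border value

print(zero_pad)  # [  0   0  80  90 100   0   0]
print(edge_pad)  # [ 80  80  80  90 100 100 100]
```

The sharp 100-to-0 discontinuity created by zero padding is a pattern the network rarely sees inside the body, which plausibly biases predictions in the first/last slices.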

@valosekj (Owner, author):

Hey Naga! Thanks for the tip! I tried --pad-mode edge, but the predictions are pretty much the same (the shift is still there).

v2.3: [GIF: Kapture 2024-07-22 at 11 35 56]

v2.4: [GIF: Kapture 2024-07-22 at 11 37 41]

joshuacwnewton added a commit to spinalcordtoolbox/spinalcordtoolbox that referenced this issue Jul 29, 2024
## Description

Currently, the test-time preprocessing transforms for the MONAI models (which, at the moment, means only the contrast-agnostic model) use zero-padding during cropping and padding, as in [these lines](https://github.com/spinalcordtoolbox/spinalcordtoolbox/blob/master/spinalcordtoolbox/deepseg/monai.py#L176-L177).
However, @valosekj observed that zero-padding DWI images from the
`dcm-brno` dataset resulted in [shifted
predictions](valosekj/dcm-brno#19 (comment)).
Interestingly, the shifts were only observed in the initial and final
slices, suggesting a sub-optimal padding issue.
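One quick way to confirm that the shift is confined to the boundary slices is to track the segmentation's in-plane centre of mass per slice and compare it between two masks. A minimal NumPy sketch (hypothetical helper names, binary masks assumed co-registered; not code from this PR):

```python
import numpy as np

def slice_centroids(mask: np.ndarray) -> np.ndarray:
    """(n_slices, 2) array of in-plane (row, col) centroids; NaN for empty slices."""
    n_slices = mask.shape[2]
    out = np.full((n_slices, 2), np.nan)
    for z in range(n_slices):
        rows, cols = np.nonzero(mask[:, :, z])
        if rows.size:
            out[z] = rows.mean(), cols.mean()
    return out

def centroid_shift(seg_a: np.ndarray, seg_b: np.ndarray) -> np.ndarray:
    """Euclidean in-plane centroid distance per slice between two masks."""
    return np.linalg.norm(slice_centroids(seg_a) - slice_centroids(seg_b), axis=1)
```

A centroid-shift curve that is near zero mid-cord but jumps at the first/last slices would match the padding-artifact hypothesis.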

I experimented with different test-time padding options
[here](sct-pipeline/contrast-agnostic-softseg-spinalcord#113 (comment))
and found that `edge` padding fixed the issue with shifted predictions.

Hence, this PR updates the default padding from zero padding to
`edge` padding. This change should not break anything, as `edge` padding
is at least as good as zero-padding (so making it the default is not
risky). Moreover, I noticed that nnUNet has also (subtly) started making
`edge` padding the default, which supports the change in this PR.

---------

Co-authored-by: Joshua Newton <[email protected]>
@valosekj (Owner, author):

Testing the contrast-agnostic model v2.4 on mean DWI images after the edge-padding fix shipped in SCT v6.4. I would say that the segmentations look relatively reasonable now! (Of course, some minor manual corrections, especially at the compression levels, would be appropriate.)

@naga-karthik, @sandrinebedard, @jcohenadad, what do you think?

[GIF: Kapture 2024-08-13 at 16 03 54]

@naga-karthik commented:

It seems that there are no major issues with the contrast-agnostic segmentations, but I notice that some slices are very slightly undersegmented. I am not sure whether that is expected with DWI images, since the cord/CSF boundary is not that clear.

When you tried sct_deepseg_sc, was it convincingly better than the contrast-agnostic model?

@valosekj (Owner, author):

> but I notice that some slices are very slightly undersegmented

Yes, you're right! I have the same feeling. I will perform manual corrections of these segmentations. Then we could use them for another iteration of the contrast-agnostic training. @naga-karthik, what do you think?

> When you tried sct_deepseg_sc, was it convincingly better than the contrast-agnostic model?

Here is sct_deepseg_sc on the same subjects. Notice that sct_deepseg_sc undersegments even more and struggles to capture the compressed cord. I would say that the fixed contrast-agnostic model is now consistently better!

[GIF: Kapture 2024-08-13 at 18 22 51]

@naga-karthik commented:

Indeed! sct_deepseg_sc is worse than contrast-agnostic for these subjects!

> I will perform manual corrections of these segmentations. Then we could use them for another iteration of the contrast-agnostic training. @naga-karthik, what do you think?

Agreed, good plan!
