Dataset use: How to be efficient with BIDS datasets #30

Open
NathanMolinier opened this issue Sep 18, 2023 · 1 comment

NathanMolinier commented Sep 18, 2023

Description

Being able to use BIDS datasets correctly and efficiently is essential to improve the quality and the speed of our AI training.

However, there is currently no common approach regarding how to:

  • deal with missing data (missing contrasts or missing labels for some subjects, etc.)
  • deal with data aggregation (multi-dataset training, etc.)
  • deal with special image or label suffixes such as acq_sag or echo that only appear in the file name for some datasets (not in the folder path)
  • deal with the data split for training

The objective of this issue is to share the different strategies currently used by students in the lab, and to make it easier for future students to learn them.

If you encounter issues with specific images, please create an issue here (Neuropoly/data-management)


NathanMolinier commented Sep 18, 2023

Nathan's strategy

1 - Gather the data paths into a simple text file using commands such as:

find ~+ -type f -name "*_label*.nii.gz" | grep -v MTS | sort > ../../data_config/spinegeneric_vert.txt

By running this inside a BIDS-compliant repository, we gather all the paths to the relevant ground-truth files that contain _label in their name.

Note: By gathering the ground truth directly instead of the images, we avoid complications with label suffixes (e.g. for disc labels, several suffixes are currently in use: _labels-disc, _labels-disc-manual, _labels-manual)

The created text file should look like this:

/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu01/anat/sub-amu01_T1w_labels-disc-manual.nii.gz
/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu01/anat/sub-amu01_T2w_labels-disc-manual.nii.gz
/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu02/anat/sub-amu02_T1w_labels-disc-manual.nii.gz
/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu02/anat/sub-amu02_T2w_labels-disc-manual.nii.gz
/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu03/anat/sub-amu03_T1w_labels-disc-manual.nii.gz
/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu03/anat/sub-amu03_T2w_labels-disc-manual.nii.gz
/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu04/anat/sub-amu04_T1w_labels-disc-manual.nii.gz
/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu04/anat/sub-amu04_T2w_labels-disc-manual.nii.gz
/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu05/anat/sub-amu05_T1w_labels-disc-manual.nii.gz
/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu05/anat/sub-amu05_T2w_labels-disc-manual.nii.gz
/home/data/datasets/data-multi-subject/derivatives/labels/sub-balgrist01/anat/sub-balgrist01_T1w_labels-disc-manual.nii.gz
/home/data/datasets/data-multi-subject/derivatives/labels/sub-balgrist01/anat/sub-balgrist01_T2w_labels-disc-manual.nii.gz
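
Since the text file lists ground-truth paths rather than images, the corresponding image paths can be recovered later by stripping the derivatives prefix and the label suffix. Below is a minimal Python sketch of that idea; the suffix list, the derivatives layout, and the label_to_image_path helper are assumptions for illustration, not part of the actual pipeline.

# Assumed label suffixes, based on the note above; extend as needed.
LABEL_SUFFIXES = ["_labels-disc-manual", "_labels-disc", "_labels-manual"]

def label_to_image_path(label_path):
    """Map a ground-truth path back to its raw image path (assumes the
    'derivatives/labels' layout shown in the example above)."""
    image_path = label_path.replace("/derivatives/labels/", "/")
    for suffix in LABEL_SUFFIXES:
        image_path = image_path.replace(suffix, "")
    return image_path

with open("../../data_config/spinegeneric_vert.txt") as f:
    label_paths = [line.strip() for line in f if line.strip()]

image_paths = [label_to_image_path(p) for p in label_paths]
# e.g. /home/data/datasets/data-multi-subject/sub-amu01/anat/sub-amu01_T1w.nii.gz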

2 - Run the script init_data_config.py (see spinalcordtoolbox/disc-labeling-hourglass#26)

python src/dlh/data_management/init_data_config.py --txt CONFIG_DATA --type LABEL --split-validation SPLIT_VAL --split-test SPLIT_TEST

With

  • CONFIG_DATA the path to the text file created in step 1
  • the type LABEL to specify that the text file contains only ground truth, not images
  • SPLIT_VAL the fraction of data used for validation (default=0.1)
  • SPLIT_TEST the fraction of data used for testing (default=0.1)

Note: The training, validation, and testing fractions should sum to 1 (the training set receives the remaining data).
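
For intuition, a split along these lines can be reproduced with a short Python sketch. This is not the actual logic of init_data_config.py; the seeding, rounding, and output fields such as CONTRASTS may differ.

import json
import random

def split_paths(paths, split_val=0.1, split_test=0.1, seed=42):
    """Shuffle the paths and cut them into training/validation/testing;
    training gets whatever remains after validation and testing."""
    paths = sorted(paths)
    random.Random(seed).shuffle(paths)
    n_val = round(len(paths) * split_val)
    n_test = round(len(paths) * split_test)
    return {
        "TYPE": "LABEL",
        "TRAINING": paths[n_val + n_test:],
        "VALIDATION": paths[:n_val],
        "TESTING": paths[n_val:n_val + n_test],
    }

with open("../../data_config/spinegeneric_vert.txt") as f:
    paths = [line.strip() for line in f if line.strip()]

with open("config_data.json", "w") as f:
    json.dump(split_paths(paths), f, indent=4)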

The created JSON file should look like this:

{
    "TYPE": "LABEL",
    "CONTRASTS": "t1_t2",
    "TRAINING": [
        "/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu01/anat/sub-amu01_T1w_labels-disc-manual.nii.gz",
        "/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu01/anat/sub-amu01_T2w_labels-disc-manual.nii.gz",
        "/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu02/anat/sub-amu02_T1w_labels-disc-manual.nii.gz",
        "/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu02/anat/sub-amu02_T2w_labels-disc-manual.nii.gz",
        "/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu03/anat/sub-amu03_T1w_labels-disc-manual.nii.gz",
        "/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu03/anat/sub-amu03_T2w_labels-disc-manual.nii.gz",
        "/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu04/anat/sub-amu04_T1w_labels-disc-manual.nii.gz"
    ],
    "VALIDATION": [
        "/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu04/anat/sub-amu04_T2w_labels-disc-manual.nii.gz",
        "/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu05/anat/sub-amu05_T1w_labels-disc-manual.nii.gz"
    ],
    "TESTING": [
        "/home/data/datasets/data-multi-subject/derivatives/labels/sub-amu05/anat/sub-amu05_T2w_labels-disc-manual.nii.gz",
        "/home/data/datasets/data-multi-subject/derivatives/labels/sub-balgrist01/anat/sub-balgrist01_T1w_labels-disc-manual.nii.gz",
        "/home/data/datasets/data-multi-subject/derivatives/labels/sub-balgrist01/anat/sub-balgrist01_T2w_labels-disc-manual.nii.gz"
    ]
}
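
At training time, the config only needs to be loaded and each ground-truth path paired with its raw image. A minimal sketch, reusing the hypothetical label_to_image_path helper from step 1:

import json

with open("config_data.json") as f:
    config = json.load(f)

# Pair each ground-truth path with its raw image for the training loop
# (label_to_image_path is the illustrative helper sketched in step 1).
train_pairs = [(label_to_image_path(p), p) for p in config["TRAINING"]]
val_pairs = [(label_to_image_path(p), p) for p in config["VALIDATION"]]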
