Code for "Benchmarking Geometric Deep Learning for Cortical Segmentation and Neurodevelopmental Phenotype Prediction"
In this benchmarking paper, we utilise and/or reproduce code from a number of different sources and models, for which we are very grateful. We thank the authors of the following papers and/or codebases:
Pytorch Geometric
Spherical CNNs (S2CNN) https://github.com/jonkhler/s2cnn
Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering (ChebNet)
Semi-Supervised Classification with Graph Convolutional Networks (GCNConvNet)
Spherical CNNs on Unstructured Grids (UG-SCNN) https://github.com/maxjiang93/ugscnn
Spherical UNet https://github.com/zhaofenqiang/Spherical_U-Net
We provide a processed and curated dataset from the Developing Human Connectome Project (dHCP), available here, subject to the dHCP data sharing agreement. Note that the data repository is private and visible only to those who have access. To gain access, please email emma.robinson(at)kcl.ac.uk, forwarding confirmation that the data sharing agreement has been signed.
Provided at the above link are the template and native cortical surface features and segmentation labels. The cortical surface features included are those used in the benchmarking: myelination, curvature, sulcal depth and corrected cortical thickness.
To generate the warps, the processing shell script is provided at example_post_processing.sh and can be run from the terminal. First, the Human Connectome Workbench software is required to run the resampling; it can be downloaded from https://www.humanconnectome.org/software/get-connectome-workbench
The script must then be modified with the relevant file paths. To do this, first download the following and set their precise paths in the script:
- the symmetric template, downloaded from https://brain-development.org/brain-atlases/atlases-from-the-dhcp-project/cortical-surface-template/
- the sixth-level icosahedron warps, downloaded from the OneDrive repository here
- the sixth-level icosahedron, found in the icosahedrons folder of this repository under the file ico-6.surf.gii
- the location of the dHCP dataset, as downloaded above
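A quick sanity check when working with ico-6.surf.gii: a sixth-level icosphere should have 40962 vertices. This follows from the subdivision rule (each level splits every edge, quadrupling the face count), as the short sketch below shows:

```python
# Counts for an icosphere at a given subdivision level.
# Level 0 is the regular icosahedron: 12 vertices, 30 edges, 20 faces.
def icosphere_counts(level):
    faces = 20 * 4 ** level
    edges = 30 * 4 ** level
    vertices = edges - faces + 2  # Euler's formula: V - E + F = 2
    return vertices, edges, faces

print(icosphere_counts(6))  # -> (40962, 122880, 81920)
```

If a downloaded surface or warp file does not have 40962 vertices, it is not at the sixth icosahedral level.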
After downloading all the required files and setting their paths in the script, running the shell script will produce the complete warped dataset for training.
To use the code, first create a conda environment using the environment.yml file by running:
conda env create -f environment.yml
Please note that this may cause issues for non-Windows users, or for those running different versions of CUDA or Python. In that case, please install the required packages from their respective sources.
To run the code, all file locations must be changed as appropriate.
For all regression experiments excluding UG-SCNN, experimental parameters must be set in the text files found in the params folder; instructions on how to do this are in the README-params file found in the folder. This includes both the warped and unwarped data directories. Some example experiments are provided for simplicity.
To run a graph model experiment with the parameters in the file ChebNet_NoPool_BirthAgeConfounded_Rotated_Native, from the terminal input:
python train_graph.py @params/ChebNet_NoPool_BirthAgeConfounded_Rotated_Native
For non-graph models, e.g. Spherical UNet, use:
python train.py @params/SphericalUNet_ScanAge_Rotated_Native
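The `@params/...` syntax is argparse's from-file prefix: each line of the named file is read as one command-line argument. A minimal sketch of the mechanism (the parameter names below are illustrative, not the repo's actual ones):

```python
import argparse

def make_parser():
    # fromfile_prefix_chars lets "@somefile" expand to the arguments
    # listed in that file, one per line
    parser = argparse.ArgumentParser(fromfile_prefix_chars="@")
    parser.add_argument("--model", default="ChebNet")        # hypothetical name
    parser.add_argument("--rotations", action="store_true")  # hypothetical name
    return parser

if __name__ == "__main__":
    import os, tempfile
    # Simulate a params file like those in the params folder
    with tempfile.NamedTemporaryFile("w", delete=False) as f:
        f.write("--model\nMoNet\n--rotations\n")
        path = f.name
    args = make_parser().parse_args(["@" + path])
    print(args.model, args.rotations)  # -> MoNet True
    os.remove(path)
```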
The graph models are MoNet, ChebNet and GConvNet (with and without TopK pooling), and all use train_graph.py. All other models are non-graph models and are executed with train.py. Results will be automatically saved in the appropriate results folder.
To run a segmentation or UG-SCNN regression experiment, go to the relevant folder within the Segmentation_UGSCNN directory. The appropriate experimental parameters must be changed manually.
Within the dataloader, the unwarped and warped file directories must be specified manually (specifying either the native or template data paths depending on which experiment is being run).
The dataloaders are found in the following locations:
- for projected resnet: Projected_ResNet/MyDataLoader.py
- for Spherical UNet: Spherical_UNet/Spherical_UNet_Dataloader.py
- for UGSCNN: meshcnn/utils.py
- for all Graph Models (MoNet, ChebNet, GConvNet): GraphMethods/graph_dataloader.py
After setting the correct data locations, each experiment can be run from its corresponding executable python file.
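Since each dataloader hard-codes its directories, a small helper like the following can reduce copy-paste mistakes when switching between native/template and warped/unwarped runs. This is only a sketch; the directory layout shown is an assumption, not the repo's actual structure:

```python
import os

def data_dir(root, space, warped):
    # space: "native" or "template"; warped selects the warp-augmented copies.
    # The "<space>_warped" naming is hypothetical.
    assert space in ("native", "template")
    sub = space + ("_warped" if warped else "")
    return os.path.join(root, sub)

print(data_dir("/data/dHCP", "native", warped=True))  # -> /data/dHCP/native_warped
```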
Training rotations must be manually set to either True or False in the train loader, depending on the choice of experiment, e.g.:
train_loader = My_Projected_dHCP_Data_Segmentation(train_set, number_of_warps = 99, rotations= <#CHANGE HERE#>, smoothing = False, normalisation='std', parity_choice='both', output_as_torch=True)
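Rather than editing the flag by hand each time, one option is to derive it from the experiment name, since the provided params files already encode it (a convenience sketch, not part of the repo's code):

```python
def use_rotations(experiment_name):
    # Experiment names such as "SphericalUNet_ScanAge_Rotated_Native"
    # carry a "Rotated" token when rotations are wanted
    return "Rotated" in experiment_name

print(use_rotations("SphericalUNet_ScanAge_Rotated_Native"))  # -> True
```

Then `rotations=use_rotations(name)` can replace the manual edit.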
For the Graph Methods, two separate executables exist: segmentation.py and segmentation_TopK.py, for experiments without and with TopK models respectively. Within these, the correct model (MoNet, ChebNet or GConvNet) must be loaded manually.
In segmentation.py load the following:
# For MoNet:
model = monet_segmentation(num_features=[32,64,128,256,512,1024])
# For ChebNet:
model = GraphUNet_modded(conv_style=ChebConv,activation_function=nn.ReLU(), in_channels = 4, device=device)
# For GConvNet:
model = GraphUNet_modded(conv_style=GCNConv,activation_function=nn.ReLU(), in_channels = 4, device=device)
In segmentation_TopK.py each model can be loaded separately:
# For GConvNet with TopK:
model = GraphUNet_TopK_GCN(4,37,4,0.5,False,act=F.relu)
# For ChebNet with TopK:
# model = GraphUNet_TopK_Cheb(4,37,4,0.5,False,act=F.relu)
To save the corresponding models and results, the save directories must be manually changed.
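When changing the save directories, it is worth creating them up front so a long training run cannot fail at the final save. A small defensive sketch (the directory names are hypothetical):

```python
import os

def ensure_save_dir(results_root, experiment_name):
    save_dir = os.path.join(results_root, experiment_name)
    os.makedirs(save_dir, exist_ok=True)  # no error if it already exists
    return save_dir

checkpoint_dir = ensure_save_dir("results", "ChebNet_TopK_Segmentation")
```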