
Technical documentation: detect_cmb

abhivanth edited this page Aug 25, 2020 · 7 revisions

# detect_cmb

## dataloader

### savepatches.py

Saves the ground truth and corresponding input patches in a separate directory.


### hdf5_convert.py

Converts the input and ground truth patches into a single HDF5 file containing the subgroups 'input_images' and 'output_images'.
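A minimal sketch of what this conversion might look like, assuming the patches arrive as NumPy arrays; the dataset names follow the description above, but the function name and signature are illustrative, not the repo's actual code:

```python
import numpy as np
import h5py

def patches_to_hdf5(input_patches, output_patches, path):
    """Write input and ground-truth patches into one HDF5 file
    under the datasets 'input_images' and 'output_images'."""
    with h5py.File(path, "w") as f:
        f.create_dataset("input_images", data=np.asarray(input_patches))
        f.create_dataset("output_images", data=np.asarray(output_patches))
```

Storing both arrays in one file keeps the input/ground-truth pairing implicit in the shared index.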


### main_hdf5.py

Main method for accessing the HDF5 file and executing screening stages 1 and 2.


### balanced_dataset.py

`def balanced_dataset(subjects):`

subjects = 3D patches of the input and ground truth images

  • Prepare the dataset with an equal number of positive and negative patches
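A sketch of the balancing step, assuming a patch pair counts as positive when its ground-truth patch contains at least one CMB voxel, and that negatives are subsampled to match (the function name and the subsampling strategy are assumptions, not the repo's actual code):

```python
import random

def balance(subjects, seed=0):
    """Keep all positive (CMB-containing) patch pairs and randomly
    subsample the negatives so both classes are equally represented."""
    positives = [(x, y) for x, y in subjects if y.any()]
    negatives = [(x, y) for x, y in subjects if not y.any()]
    random.Random(seed).shuffle(negatives)
    return positives + negatives[:len(positives)]
```

Since CMB voxels are rare, discarding surplus negatives is a common way to stop the screening network from collapsing to the majority class.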

### create_dataloader.py

`def trainTestSplit(self, dataset, val_share):`

dataset = complete data set after preprocessing

val_share = validation data set size

  • Return the training and validation sets
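The split described above might look like the following sketch, assuming `val_share` is a fraction of the data set and that the split is done by shuffling (both are assumptions; names are illustrative):

```python
import random

def train_test_split(dataset, val_share, seed=0):
    """Shuffle the dataset and hold out a val_share fraction for
    validation; the remainder becomes the training set."""
    items = list(dataset)
    random.Random(seed).shuffle(items)
    n_val = int(len(items) * val_share)
    return items[n_val:], items[:n_val]
```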

`def create_dset_screening_stage1(self, dset):`

  • Convert the data set into torch format

### dataloader.py

Splits the complete data set into train, validation and test sets based on the number of CMB patches in each subject.

`def get_split(self, input_dir):`

input_dir = file location of the home directory

  • Read the ground truth of each subject
  • If the ground truth contains a CMB, add the subject to a dictionary mapping the subject to the number of CMBs in that subject
  • Split the dictionary into train, validation and test sets using a fixed percentage ratio
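The last step could be sketched as follows; the 70/15/15 ratio and the function name are assumptions for illustration, not values taken from the repo:

```python
def split_subjects(cmb_counts, train_frac=0.7, val_frac=0.15):
    """cmb_counts: dict mapping subject id -> number of CMBs.
    Returns (train, val, test) subject lists cut at fixed ratios."""
    subjects = sorted(cmb_counts)  # sort for a deterministic order
    n = len(subjects)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (subjects[:n_train],
            subjects[n_train:n_train + n_val],
            subjects[n_train + n_val:])
```

Splitting at the subject level (rather than the patch level) keeps all patches from one subject in the same set, avoiding leakage between train and test.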

### discrimination_dataset.py

`def prepare_datset_with_mimics(subjects, checkpoint):`

subjects = list of train subjects

checkpoint = trained model from screening stage-2

  • Evaluate the train subjects with the trained model from screening stage-2
  • Identify positives and false positives (mimics) based on this evaluation
  • This dataset is used for training the discrimination stage
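A minimal sketch of sorting model detections into positives and mimics; the threshold and the `predict` callable (standing in for the screening-stage-2 model) are assumptions for illustration:

```python
def collect_positives_and_mimics(patches, labels, predict, thresh=0.5):
    """Score each training patch with the screening-stage-2 model
    (`predict`) and split its detections into true positives and
    mimics (false positives) using the ground-truth labels."""
    true_pos, mimics = [], []
    for patch, label in zip(patches, labels):
        if predict(patch) >= thresh:
            (true_pos if label else mimics).append(patch)
    return true_pos, mimics
```

Training the discrimination stage on exactly the cases the screening stage gets wrong is what lets it reduce false positives.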

### generate_patches.py

`def cut_3d_image(self, index_x, index_y, index_z, size_x, size_y, size_z, x_stride, y_stride, z_stride, img, image, gt):`

  • Cut the 3D volumetric data into 3D patches of size 16 x 16 x 10
  • Return the patches; these patches are used for training

`def create_3dpatches(self, subjects):`

subjects = preprocessed (x, y) pairs where x = input and y = ground truth

  • Define the size of the 3D patch here
  • Invoke cut_3d_image() to create the 3D patches
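The patch-cutting loop above can be sketched as a sliding window over the volume; the 16 x 16 x 10 size comes from the description, while the non-overlapping stride and the function name are assumptions:

```python
import numpy as np

def cut_3d_patches(img, gt, size=(16, 16, 10), stride=(16, 16, 10)):
    """Slide a size_x x size_y x size_z window over the volume and
    collect matching (input, ground-truth) patch pairs."""
    sx, sy, sz = size
    dx, dy, dz = stride
    patches = []
    for x in range(0, img.shape[0] - sx + 1, dx):
        for y in range(0, img.shape[1] - sy + 1, dy):
            for z in range(0, img.shape[2] - sz + 1, dz):
                patches.append((img[x:x+sx, y:y+sy, z:z+sz],
                                gt[x:x+sx, y:y+sy, z:z+sz]))
    return patches
```

Cutting input and ground truth with identical indices keeps each patch pair spatially aligned.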

## executor

### screening_stage.py

  • Train and validate screening stage-1; a checkpoint is saved after each epoch

### screening_stage_2.py

  • Train and validate screening stage-2; a checkpoint is saved after each epoch

## model

### screening.py

  • Defines the network for the screening stage

### discrimination.py

  • Defines the network for the discrimination stage

## ops

### dicom_to_nifti.py

  • Convert DICOM images to NIfTI

### extract_mimics.py

  • Find false positives (mimics) based on a model trained in screening stage-1

### generateGroundTruth.py

  • Create the ground truth based on the CMB co-ordinates in the meta-file
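Building a ground-truth mask from listed coordinates might look like this sketch; marking a small spherical neighbourhood around each coordinate (and the radius of 1 voxel) is an assumption, not taken from the repo:

```python
import numpy as np

def make_ground_truth(shape, cmb_coords, radius=1):
    """Build a binary mask of the volume: voxels within `radius` of a
    listed CMB coordinate are set to 1."""
    mask = np.zeros(shape, dtype=np.uint8)
    grids = np.indices(shape)
    for cx, cy, cz in cmb_coords:
        dist2 = ((grids[0] - cx) ** 2 + (grids[1] - cy) ** 2
                 + (grids[2] - cz) ** 2)
        mask[dist2 <= radius ** 2] = 1
    return mask
```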

### normalize_images.py

  • Normalize the intensity of the input NIfTI images
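One common choice for intensity normalization is zero-mean, unit-variance scaling, sketched below; the repo may instead use min-max or another scheme, so treat this as an assumption:

```python
import numpy as np

def normalize_intensity(img):
    """Scale a volume to zero mean and unit variance (z-score
    normalization); guards against a constant (zero-std) image."""
    img = img.astype(np.float32)
    std = img.std()
    return (img - img.mean()) / (std if std > 0 else 1.0)
```

Normalizing per volume removes scanner-dependent intensity offsets before patches are cut.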

### score_map.py

  • Find CMB candidates based on a model trained in screening stage-2
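Candidate finding can be sketched as scoring every patch position and keeping the ones above a threshold; the `score_fn` callable (standing in for the screening-stage-2 model), the non-overlapping stride and the threshold are all assumptions for illustration:

```python
import numpy as np

def candidate_score_map(volume, score_fn, size=(16, 16, 10), thresh=0.5):
    """Score each patch position with the screening-stage-2 model
    (`score_fn`) and return (position, score) pairs above the
    threshold as CMB candidates."""
    sx, sy, sz = size
    candidates = []
    for x in range(0, volume.shape[0] - sx + 1, sx):
        for y in range(0, volume.shape[1] - sy + 1, sy):
            for z in range(0, volume.shape[2] - sz + 1, sz):
                s = score_fn(volume[x:x+sx, y:y+sy, z:z+sz])
                if s >= thresh:
                    candidates.append(((x, y, z), s))
    return candidates
```

The surviving candidates are then passed to the discrimination stage to filter out mimics.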