necstlab Damage Segmentation in Multiscale Heterogeneous Materials

This repository contains the documentation and code needed to train and use a model for damage segmentation of micro-computed tomography (micro-CT) slices of advanced composite laminates. For example, here is a correctly segmented 3-class slice:

[image: 3-class annotated slice]
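Concretely, "damage segmentation" here means assigning every pixel of a grayscale micro-CT slice to one of the damage classes. The snippet below is a minimal, hypothetical sketch of that operation with a generic Keras model; it is not the repository's inference code, and the preprocessing (scaling 8-bit grayscale to [0, 1], adding batch and channel dimensions) is an assumption for illustration only.

```python
# Hypothetical sketch, not this repository's inference script.
import numpy as np
import tensorflow as tf

def segment_slice(model: tf.keras.Model, slice_2d: np.ndarray) -> np.ndarray:
    """Return a per-pixel class map (e.g., values 0-2 for 3 classes) for one grayscale slice."""
    # Assumed preprocessing: scale 8-bit grayscale to [0, 1], add batch and channel dims.
    x = slice_2d.astype("float32")[np.newaxis, ..., np.newaxis] / 255.0  # (1, H, W, 1)
    class_probs = model.predict(x)                                       # (1, H, W, n_classes)
    return np.argmax(class_probs[0], axis=-1).astype(np.uint8)           # (H, W)
```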

Workflows

To accomplish this, the repository supports six workflows. Before completing any of them, you'll need to set up your local machine and your GCP bucket. You may also want to look through the assumed knowledge.

  1. data ingestion: copying the raw data into a cloud bucket and logically structuring it
  2. dataset preparation: preparing a dataset for use in training and testing
  3. training: training a damage segmentation model on a dataset
  4. training prediction thresholds: training the prediction threshold(s) of a pretrained damage segmentation model on a dataset (see the sketch after this list)
  5. testing: testing the performance of a pretrained damage segmentation model on a dataset
  6. inference: segmenting the damage of an image stack or stack subset
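To make workflow 4 concrete: "training" a prediction threshold typically means sweeping candidate thresholds over held-out per-pixel class probabilities and keeping the value that maximizes some score. The sketch below (F1 score, one class, assumed `probs`/`labels` inputs) is written for this README and is not the repository's implementation.

```python
# Illustrative only: pick the probability threshold for one class that
# maximizes F1 on held-out predictions.
import numpy as np

def train_threshold(probs: np.ndarray, labels: np.ndarray) -> float:
    """probs: predicted per-pixel probabilities for one class; labels: 0/1 ground truth."""
    best_threshold, best_f1 = 0.5, -1.0
    for threshold in np.linspace(0.05, 0.95, 19):
        preds = probs >= threshold
        tp = np.sum(preds & (labels == 1))
        fp = np.sum(preds & (labels == 0))
        fn = np.sum(~preds & (labels == 1))
        f1 = 2 * tp / (2 * tp + fp + fn + 1e-9)
        if f1 > best_f1:
            best_threshold, best_f1 = float(threshold), f1
    return best_threshold
```

The chosen threshold(s) would then be reused by the testing and inference workflows when converting class probabilities into class labels.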

Policy for Code Base Modifications

For significant code changes to any files except configs and .md files, users must:

  1. Create a new branch in the GitHub web interface
  2. Refresh the local desktop client and switch to the new branch
  3. Make the significant change in the new branch
  4. Commit the change in the local desktop client
  5. Push the commit to the remote (i.e., GitHub) using the local client
  6. Create a pull request in the web interface with Josh Joseph as reviewer
  7. Once approved, complete the merge and then delete the branch

Known gotchas (in-process)

  • Only a single workflow can be run on a VM at a time, since concurrent runs could step on each other through the shared temp directory. #27 will address this.
  • Multi-GPU runs cannot reach maximum efficiency because the computation graph of the Python objects (the custom metrics) is rebuilt for each batch. #35 will address this; a sketch of the kind of refactor this implies follows below.
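For context on the second gotcha, the sketch below shows the general kind of refactor it points at: implementing a custom metric as a `tf.keras.metrics.Metric` subclass, so its update logic lives in metric state variables and TensorFlow ops that are traced once, rather than in Python objects rebuilt every batch. This is an illustrative IoU metric written for this README, not the repository's metric code and not necessarily the change planned in #35.

```python
# Illustrative per-class IoU metric as a tf.keras.metrics.Metric subclass.
import tensorflow as tf

class ClassIoU(tf.keras.metrics.Metric):
    """Intersection-over-union for a single segmentation class (example only)."""

    def __init__(self, class_id: int, name: str = "class_iou", **kwargs):
        super().__init__(name=name, **kwargs)
        self.class_id = class_id
        self.intersection = self.add_weight(name="intersection", initializer="zeros")
        self.union = self.add_weight(name="union", initializer="zeros")

    def update_state(self, y_true, y_pred, sample_weight=None):
        # Assumes one-hot ground truth and softmax predictions of shape (..., n_classes).
        true_c = tf.cast(tf.equal(tf.argmax(y_true, axis=-1), self.class_id), tf.float32)
        pred_c = tf.cast(tf.equal(tf.argmax(y_pred, axis=-1), self.class_id), tf.float32)
        inter = tf.reduce_sum(true_c * pred_c)
        self.intersection.assign_add(inter)
        self.union.assign_add(tf.reduce_sum(true_c) + tf.reduce_sum(pred_c) - inter)

    def result(self):
        return self.intersection / (self.union + tf.keras.backend.epsilon())
```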
