This is the official repository for MedSAM: Segment Anything in Medical Images.
Join our mailing list to get updates.
- 2024.08.06: MedSAM2 - Segment Anything in Medical Images and Videos: Benchmark and Deployment [Paper] [Code] [Online Demo] [Gradio API] [3D Slicer Plugin] [Fine-tune SAM2]
- 2024.01.15: Join the CVPR 2024 Challenge: MedSAM on Laptop
- 2024.01.15: Released LiteMedSAM and the 3D Slicer Plugin, 10x faster than MedSAM!
- Create a virtual environment: `conda create -n medsam python=3.10 -y`, then activate it: `conda activate medsam`
- Install PyTorch 2.0
- Clone the repository: `git clone https://github.com/bowang-lab/MedSAM`
- Enter the MedSAM folder: `cd MedSAM`, then run `pip install -e .`
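A quick, optional sanity check that the installation worked; this assumes the editable install above registered the `segment_anything` package bundled in this repository:

```python
# Optional sanity check after installation.
import torch
from segment_anything import sam_model_registry  # provided by `pip install -e .`

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("SAM variants:", list(sam_model_registry.keys()))  # expect vit_b among them
```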
Download the model checkpoint and place it at, e.g., `work_dir/MedSAM/medsam_vit_b`.
We provide three ways to quickly test the model on your images:
- Command line: `python MedSAM_Inference.py` segments the demo image. Segment other images with the following flags:
  - `-i`: path to the input image
  - `-o`: output path
  - `--box`: bounding box of the segmentation target
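  For example, a hypothetical invocation with placeholder values (the exact box format is defined in MedSAM_Inference.py):
  `python MedSAM_Inference.py -i <input_img> -o <output_dir> --box <bounding_box>`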
- Jupyter notebook: we provide a step-by-step tutorial on Colab. You can also run it locally with `tutorial_quickstart.ipynb`. (A minimal programmatic sketch of the same inference follows this list.)
- GUI: install PyQt5 with pip (`pip install PyQt5`) or conda (`conda install -c anaconda pyqt`), then run `python gui.py`. Load the image into the GUI and specify segmentation targets by drawing bounding boxes.
Demo video: MedSAM-Demo.mp4
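All three entry points run the same box-prompted inference. Below is a minimal sketch of that pipeline, assuming the checkpoint path from above, a 3-channel input image, and a placeholder box; MedSAM_Inference.py is the reference implementation, so treat this as an outline rather than the canonical code.

```python
# Minimal sketch of box-prompted MedSAM inference.
# Assumptions: checkpoint at work_dir/MedSAM/medsam_vit_b.pth, an RGB image of
# shape (H, W, 3), and a box prompt in original-image pixel coordinates.
import numpy as np
import torch
import torch.nn.functional as F
from skimage import io, transform
from segment_anything import sam_model_registry

device = "cuda" if torch.cuda.is_available() else "cpu"
medsam = sam_model_registry["vit_b"](checkpoint="work_dir/MedSAM/medsam_vit_b.pth")
medsam = medsam.to(device).eval()

img = io.imread("path/to/image.png")  # placeholder path
H, W = img.shape[:2]
box = np.array([[50, 50, 200, 200]], dtype=np.float32)  # placeholder x_min, y_min, x_max, y_max

# Resize to the 1024x1024 input the ViT-B image encoder expects, then max-min normalize.
img_1024 = transform.resize(img, (1024, 1024), order=3, preserve_range=True, anti_aliasing=True)
img_1024 = (img_1024 - img_1024.min()) / np.clip(img_1024.max() - img_1024.min(), 1e-8, None)
img_tensor = torch.tensor(img_1024).float().permute(2, 0, 1).unsqueeze(0).to(device)

with torch.no_grad():
    embedding = medsam.image_encoder(img_tensor)  # (1, 256, 64, 64)
    # Scale the box into the 1024x1024 frame and encode it as a sparse prompt.
    box_1024 = torch.as_tensor(box / np.array([W, H, W, H]) * 1024, dtype=torch.float, device=device)
    sparse, dense = medsam.prompt_encoder(points=None, boxes=box_1024[:, None, :], masks=None)
    logits, _ = medsam.mask_decoder(
        image_embeddings=embedding,
        image_pe=medsam.prompt_encoder.get_dense_pe(),
        sparse_prompt_embeddings=sparse,
        dense_prompt_embeddings=dense,
        multimask_output=False,
    )
    # Upsample the low-resolution logits back to the original image size.
    prob = torch.sigmoid(F.interpolate(logits, size=(H, W), mode="bilinear", align_corners=False))

mask = (prob.squeeze().cpu().numpy() > 0.5).astype(np.uint8)
```

The key design point is that the image embedding is computed once; different box prompts can then be decoded cheaply against the same embedding.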
Download the SAM checkpoint and place it at `work_dir/SAM/sam_vit_b_01ec64.pth`.
Download the demo dataset and unzip it to `data/FLARE22Train/`.
This dataset contains 50 abdominal CT scans, and each scan has an annotation mask with 13 organs. The organ label names are available at MICCAI FLARE2022.
Run pre-processing: install cc3d (`pip install connected-components-3d`), then run `python pre_CT_MR.py`. This will:
- split the dataset: 80% for training and 20% for testing
- adjust CT scans to the soft-tissue window: level 40, width 400 (sketched below)
- apply max-min normalization
- resample the image size to 1024x1024
- save the pre-processed images and labels as npy files
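A minimal sketch of the windowing and normalization steps above, assuming Hounsfield-unit input; the function name is illustrative, and pre_CT_MR.py remains the reference implementation:

```python
import numpy as np

def window_and_normalize(ct_hu: np.ndarray, level: float = 40.0, width: float = 400.0) -> np.ndarray:
    """Clip a CT volume to the soft-tissue window, then max-min normalize to [0, 1]."""
    lower, upper = level - width / 2.0, level + width / 2.0  # [-160, 240] HU for level 40, width 400
    clipped = np.clip(ct_hu, lower, upper)
    return (clipped - clipped.min()) / np.clip(clipped.max() - clipped.min(), 1e-8, None)
```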
The model was trained on five A100 nodes, each with four 80 GB GPUs (20 A100 GPUs in total). Please use the Slurm script to start the training process:
sbatch train_multi_gpus.sh
When training is done, convert the checkpoint to SAM's format for convenient inference:
python utils/ckpt_convert.py # set the corresponding checkpoint paths first
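The idea behind the conversion, sketched under assumptions (the key name and paths below are guesses; utils/ckpt_convert.py is authoritative): a training checkpoint bundles the model weights with training state, while a SAM-format checkpoint is the bare model state_dict.

```python
import torch

# Assumed paths and key name, for illustration only; see utils/ckpt_convert.py.
train_ckpt = torch.load("work_dir/medsam_train/checkpoint_latest.pth", map_location="cpu")
torch.save(train_ckpt["model"], "work_dir/MedSAM/medsam_vit_b.pth")  # bare state_dict
```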
Alternatively, the model can be trained on a single GPU:
python train_one_gpu.py
- We thank all the challenge organizers and dataset owners for providing public datasets to the community.
- We thank Meta AI for making the source code of segment anything publicly available.
- We also thank Alexandre Bonnet for sharing this great blog post.
@article{MedSAM,
title={Segment Anything in Medical Images},
author={Ma, Jun and He, Yuting and Li, Feifei and Han, Lin and You, Chenyu and Wang, Bo},
journal={Nature Communications},
volume={15},
pages={654},
year={2024}
}