
This project showcases the deployment of the HQ-SAM encoder and decoder models using ONNX Runtime in Python.

Segment Anything in High Quality

NeurIPS 2023
ETH Zurich & HKUST

ONNX export

HQ-SAM's heavy image encoder and lightweight mask decoder can be exported to ONNX format so that they can be run in any environment that supports ONNX Runtime. Export the models with run.sh.
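After running run.sh, a quick way to sanity-check the export is to load the resulting files with ONNX Runtime and print their input/output signatures. The sketch below uses placeholder file names (sam_hq_encoder.onnx, sam_hq_decoder.onnx); substitute the paths produced by your export.

```python
import onnxruntime as ort

# Placeholder file names -- use the paths produced by your run.sh export.
MODEL_PATHS = ["sam_hq_encoder.onnx", "sam_hq_decoder.onnx"]

for path in MODEL_PATHS:
    session = ort.InferenceSession(path, providers=["CPUExecutionProvider"])
    print(f"--- {path} ---")
    for inp in session.get_inputs():
        print("input :", inp.name, inp.shape, inp.type)
    for out in session.get_outputs():
        print("output:", out.name, out.shape, out.type)
```

The printed names and shapes are what the inference code in the options below has to match.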

[Option-1]

See the example notebook for details on how to combine image preprocessing via HQ-SAM's backbone with mask prediction using the ONNX model, as sketched below. Using the latest stable version of PyTorch for the ONNX export is recommended.
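As a rough illustration of that hybrid flow (not a substitute for the notebook), the sketch below computes the image embedding with the PyTorch backbone and then runs an exported decoder through ONNX Runtime. The checkpoint path, ONNX file name, and decoder input names are assumptions modeled on the SAM-style ONNX example; the HQ-SAM decoder export may expose additional inputs (e.g. intermediate embeddings), so check the notebook for the exact signature.

```python
import cv2
import numpy as np
import onnxruntime as ort
from segment_anything import sam_model_registry, SamPredictor  # HQ-SAM fork of segment-anything

# Hypothetical checkpoint and model type -- adjust to your setup.
sam = sam_model_registry["vit_l"](checkpoint="sam_hq_vit_l.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # heavy PyTorch image encoder
embedding = predictor.get_image_embedding().cpu().numpy()

decoder = ort.InferenceSession("sam_hq_decoder.onnx")  # hypothetical file name

# One foreground point prompt (label 1 = foreground), mapped into the
# resized coordinate frame the model expects.
coords = np.array([[[500, 375]]], dtype=np.float32)
labels = np.array([[1]], dtype=np.float32)
coords = predictor.transform.apply_coords(coords, image.shape[:2]).astype(np.float32)

# Input names follow the SAM-style ONNX decoder; your export may differ.
ort_inputs = {
    "image_embeddings": embedding,
    "point_coords": coords,
    "point_labels": labels,
    "mask_input": np.zeros((1, 1, 256, 256), dtype=np.float32),
    "has_mask_input": np.zeros(1, dtype=np.float32),
    "orig_im_size": np.array(image.shape[:2], dtype=np.float32),
}
masks, scores, low_res_logits = decoder.run(None, ort_inputs)
masks = masks > 0.0  # threshold mask logits to a boolean mask
```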

[Option-2]

If you are targeting a deployment scenario, refer to the example in scripts/main.py; the corresponding execution command can likewise be found in the run.sh script. A pure-ONNX sketch of such a pipeline follows.
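For reference, here is a minimal, encoder-side sketch of a pure-ONNX pipeline: NumPy preprocessing, the heavy encoder run through ONNX Runtime, and embeddings ready to be passed to the decoder together with prompts as in Option-1. The file names, normalization constants, and the assumption that preprocessing happens outside the graph are based on standard SAM-style deployment; scripts/main.py is the authoritative reference for this repository.

```python
import cv2
import numpy as np
import onnxruntime as ort

# Placeholder file name -- use the encoder exported by run.sh.
encoder = ort.InferenceSession("sam_hq_encoder.onnx", providers=["CPUExecutionProvider"])

def preprocess(image, target_size=1024):
    """SAM-style preprocessing (verify against scripts/main.py): resize the
    longest side to target_size, normalize, pad to a square NCHW tensor."""
    h, w = image.shape[:2]
    scale = target_size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    resized = cv2.resize(image, (nw, nh)).astype(np.float32)
    mean = np.array([123.675, 116.28, 103.53], dtype=np.float32)
    std = np.array([58.395, 57.12, 57.375], dtype=np.float32)
    resized = (resized - mean) / std
    padded = np.zeros((target_size, target_size, 3), dtype=np.float32)
    padded[:nh, :nw] = resized
    return padded.transpose(2, 0, 1)[None], scale  # (1, 3, 1024, 1024)

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
tensor, scale = preprocess(image)

# Tensor names depend on how the encoder was exported; inspect
# encoder.get_inputs()/get_outputs() or scripts/main.py for the real names.
input_name = encoder.get_inputs()[0].name
embeddings = encoder.run(None, {input_name: tensor})
print([e.shape for e in embeddings])  # feed these to the decoder along with prompts
```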

ONNX Runtime demo results (see repository images: example4_box, example4_point1, example4_point2)

Citation

If you find HQ-SAM useful in your research or refer to the provided baseline results, please star ⭐ this repository and consider citing 📝:

@article{sam_hq,
    title={Segment Anything in High Quality},
    author={Ke, Lei and Ye, Mingqiao and Danelljan, Martin and Liu, Yifan and Tai, Yu-Wing and Tang, Chi-Keung and Yu, Fisher},
    journal={arXiv:2306.01567},
    year={2023}
}
