Segment Anything in High Quality
NeurIPS 2023
ETH Zurich & HKUST
HQ-SAM's heavy encoder and lightweight mask decoder can be exported to ONNX format so that they can be run in any environment that supports ONNX Runtime. Export the model with run.sh.
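For orientation, here is a minimal sketch of what the export step amounts to, assuming the encoder is available as a torch.nn.Module. The placeholder module, output path, and opset below are illustrative only; run.sh wraps the repository's actual export command.

```python
# Illustrative ONNX export sketch; the DummyEncoder placeholder, the output
# file name, and the opset are assumptions, not the repository's export code.
import torch

class DummyEncoder(torch.nn.Module):
    """Stand-in for HQ-SAM's heavy image encoder."""
    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # A real encoder maps a (1, 3, 1024, 1024) image to an embedding map.
        return torch.nn.functional.adaptive_avg_pool2d(image, (64, 64))

encoder = DummyEncoder().eval()
dummy_image = torch.randn(1, 3, 1024, 1024)

torch.onnx.export(
    encoder,                   # module to trace
    dummy_image,               # example input used for tracing
    "hq_sam_encoder.onnx",     # illustrative output path
    input_names=["image"],
    output_names=["image_embeddings"],
    opset_version=17,
)
```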
[Option-1]
See the example notebook for details on how to combine image preprocessing via HQ-SAM's backbone with mask prediction using the ONNX model. It is recommended to use the latest stable version of PyTorch for ONNX export.
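As a rough sketch of that pipeline (not the notebook itself): embeddings are computed once per image, then the ONNX mask decoder is driven with point prompts through onnxruntime. The file paths and the SAM-style decoder input names below are assumptions; check get_inputs() on your exported model for the exact interface.

```python
# Hedged ONNX inference sketch; file paths and decoder input names are
# assumptions (SAM-style convention), not guaranteed by the repository.
import numpy as np
import onnxruntime as ort

encoder = ort.InferenceSession("hq_sam_encoder.onnx")
decoder = ort.InferenceSession("hq_sam_decoder.onnx")

# A preprocessed image: resized and normalized to the encoder's layout.
image = np.random.rand(1, 3, 1024, 1024).astype(np.float32)

# 1) Encoder: image -> image embeddings (run once per image).
embeddings = encoder.run(None, {encoder.get_inputs()[0].name: image})[0]

# 2) Decoder: embeddings + a single foreground click -> mask predictions.
#    Verify the exact names with [i.name for i in decoder.get_inputs()].
outputs = decoder.run(None, {
    "image_embeddings": embeddings,
    "point_coords": np.array([[[500.0, 400.0]]], dtype=np.float32),
    "point_labels": np.array([[1.0]], dtype=np.float32),
    "mask_input": np.zeros((1, 1, 256, 256), dtype=np.float32),
    "has_mask_input": np.zeros(1, dtype=np.float32),
    "orig_im_size": np.array([1024, 1024], dtype=np.float32),
})
masks = outputs[0]  # mask logits; threshold (e.g. > 0) for a binary mask
print(masks.shape)
```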
[Option-2]
If you are targeting a deployment scenario, it is recommended to refer to the example in scripts/main.py. The corresponding execution command can again be found in the run.sh script.
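This is not a substitute for scripts/main.py, but as a deployment-oriented sketch, the onnxruntime session can be configured with an explicit execution-provider order and graph optimizations; the model path below is again illustrative.

```python
# Deployment-oriented session setup sketch; the model path is illustrative
# and this is not taken from scripts/main.py.
import onnxruntime as ort

opts = ort.SessionOptions()
opts.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL

# Prefer GPU when the CUDA build of onnxruntime is installed, else fall
# back to CPU.
session = ort.InferenceSession(
    "hq_sam_decoder.onnx",
    sess_options=opts,
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print("Active providers:", session.get_providers())
```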
ONNX Runtime Demo Results
If you find HQ-SAM useful in your research or refer to the provided baseline results, please star ⭐ this repository and consider citing 📝:
@article{sam_hq,
  title={Segment Anything in High Quality},
  author={Ke, Lei and Ye, Mingqiao and Danelljan, Martin and Liu, Yifan and Tai, Yu-Wing and Tang, Chi-Keung and Yu, Fisher},
  journal={arXiv:2306.01567},
  year={2023}
}