```shell
pip install 'onnx>=1.10.0'
```
```shell
python ./deploy/ONNX/export_onnx.py \
    --weights yolov6s.pt \
    --img 640 \
    --batch 1 \
    --simplify
```
- `--weights` : The path of the YOLOv6 model weights.
- `--img` : Image size of the model inputs.
- `--batch` : Batch size of the model inputs.
- `--half` : Whether to export a half-precision (FP16) model.
- `--inplace` : Whether to set `Detect()` inplace.
- `--simplify` : Whether to simplify the ONNX graph. Not supported in end-to-end export.
- `--end2end` : Whether to export an end-to-end ONNX model. Only supported for onnxruntime and TensorRT >= 8.0.0.
- `--trt-version` : TensorRT version to export the ONNX model for. Supported: 7 or 8.
- `--with-preprocess` : Whether to export preprocessing (BGR-to-RGB conversion and normalization by 1/255).
- `--max-wh` : Default is None for the TensorRT backend; set an integer for the onnxruntime backend.
- `--topk-all` : Top-k objects kept for every image.
- `--iou-thres` : IoU threshold for the NMS algorithm.
- `--conf-thres` : Confidence threshold for the NMS algorithm.
- `--device` : Export device. CUDA device: `0` or `0,1,2,3` ...; CPU: `cpu`.
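After exporting, a quick way to confirm the ONNX file is well-formed is to load it and run a dummy tensor through it. This is a minimal sketch, assuming `onnx` and `onnxruntime` are installed and that the model was exported with `--img 640 --batch 1`; the input name is read from the session rather than hard-coded.

```python
import numpy as np
import onnx
import onnxruntime as ort

# Structural validation of the exported graph; raises if it is malformed.
model = onnx.load("yolov6s.onnx")
onnx.checker.check_model(model)

# Run a dummy NCHW tensor through the model (shape matches --img 640 --batch 1).
sess = ort.InferenceSession("yolov6s.onnx", providers=["CPUExecutionProvider"])
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
outputs = sess.run(None, {sess.get_inputs()[0].name: dummy})
print([o.shape for o in outputs])
```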
YOLOv6 now supports end-to-end detection for both onnxruntime and TensorRT!
If you want to deploy with TensorRT, make sure you have TensorRT installed!
```shell
python ./deploy/ONNX/export_onnx.py \
    --weights yolov6s.pt \
    --img 640 \
    --batch 1 \
    --end2end \
    --max-wh 7680
```
You will get an ONNX model with the NonMaxSuppression operator.
The ONNX output has shape `nums x 7`, where `nums` is the number of objects detected and the 7 columns are `[batch_index, x0, y0, x1, y1, classid, score]`.
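As an illustration, here is a hedged sketch of running the end-to-end model with onnxruntime and unpacking the `nums x 7` output. The image path is a placeholder, the plain resize stands in for YOLOv6's letterbox preprocessing, and the BGR-to-RGB conversion plus division by 255 should be dropped if you exported with `--with-preprocess`.

```python
import cv2
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("yolov6s.onnx", providers=["CPUExecutionProvider"])

img = cv2.imread("sample.jpg")             # placeholder image path
img = cv2.resize(img, (640, 640))          # real code should letterbox-resize
blob = img[:, :, ::-1].transpose(2, 0, 1)  # BGR -> RGB, HWC -> CHW
blob = np.ascontiguousarray(blob[None], dtype=np.float32) / 255.0  # skip if --with-preprocess

dets = sess.run(None, {sess.get_inputs()[0].name: blob})[0]  # shape: (nums, 7)
for batch_index, x0, y0, x1, y1, classid, score in dets:
    print(int(batch_index), int(classid), float(score), (x0, y0, x1, y1))
```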
```shell
python ./deploy/ONNX/export_onnx.py \
    --weights yolov6s.pt \
    --img 640 \
    --batch 1 \
    --end2end \
    --trt-version 7
```
You will get an ONNX model with the BatchedNMSDynamic_TRT plugin.
```shell
python ./deploy/ONNX/export_onnx.py \
    --weights yolov6s.pt \
    --img 640 \
    --batch 1 \
    --end2end \
    --trt-version 8
```
You will get an ONNX model with the EfficientNMS_TRT plugin.
The ONNX outputs are as follows:

- `num_dets` : the number of objects detected in each image of the batch.
- `det_boxes` : the `[x0, y0, x1, y1]` coordinates of the top-k (100) objects.
- `det_scores` : the confidence score of each of the top-k (100) objects.
- `det_classes` : the category of each of the top-k (100) objects.
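When you run the engine, these four outputs arrive as fixed-size top-k arrays, and only the first `num_dets` entries per image are valid. A small sketch of slicing them (the helper name `unpack_trt_outputs` is ours, not part of YOLOv6):

```python
import numpy as np

def unpack_trt_outputs(num_dets, det_boxes, det_scores, det_classes):
    """Hypothetical helper. Expected shapes: num_dets (batch, 1),
    det_boxes (batch, topk, 4), det_scores (batch, topk), det_classes (batch, topk)."""
    results = []
    for i in range(num_dets.shape[0]):
        n = int(num_dets[i, 0])  # number of valid detections for image i
        results.append((det_boxes[i, :n], det_scores[i, :n], det_classes[i, :n]))
    return results
```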
You can build a TensorRT engine with the trtexec tool, which is available for both TensorRT 7 and TensorRT 8.
```shell
# --workspace is in MiB (8192 = 8 GB); omit --fp16 for a full-precision engine
trtexec --onnx=yolov6s.onnx \
        --saveEngine=yolov6s.engine \
        --workspace=8192 \
        --fp16
```
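To confirm the serialized engine loads under your TensorRT install, a minimal sketch (assuming the `tensorrt` Python bindings match the version of trtexec you used):

```python
import tensorrt as trt

# Deserialize the engine built by trtexec above.
logger = trt.Logger(trt.Logger.WARNING)
with open("yolov6s.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

assert engine is not None, "failed to deserialize yolov6s.engine"
```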
Once you have the TensorRT engine, you can evaluate its performance by running:
```shell
python deploy/ONNX/eval_trt.py --weights yolov6s.engine --batch-size=1 --data data/coco.yaml
```