Performance YOLOv7 vs YOLOv9 Series using TensorRT engine #143
@levipereira The converted weights are provided here.
https://github.com/levipereira/yolov9/blob/main/models/experimental.py#L140
Performance test using an RTX 2080Ti 2GB GPU on an AMD Ryzen 7 5700X 8-Core / 128GB RAM.

All models were converted to ONNX with the EfficientNMS plugin. The conversion was done using the TensorRT-YOLO tool.

Model Export and Performance Testing
Use the following commands to export the model and run the performance test:

trtyolo export -v yolov9 -w yolov9-converted.pt --imgsz 640 -o ./
trtexec --onnx=yolov9-converted.onnx --saveEngine=yolov9-converted.engine --fp16
trtexec --fp16 --avgRuns=1000 --useSpinWait --loadEngine=yolov9-converted.engine

Performance testing was conducted using TensorRT-YOLO inference on the coco128 dataset.

YOLOv9 Series
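As a convenience, the throughput and mean latency that trtexec reports can be pulled out of its log programmatically. The snippet below is an illustrative helper, not part of TensorRT-YOLO; the exact log format varies between TensorRT versions, so the regular expressions may need adjusting:

```python
import re

def parse_trtexec_log(log_text: str) -> dict:
    """Extract throughput (qps) and mean latency (ms) from trtexec output.

    Hypothetical helper for post-processing benchmark logs; the patterns
    assume the summary lines printed by recent trtexec versions.
    """
    metrics = {}
    m = re.search(r"Throughput:\s*([\d.]+)\s*qps", log_text)
    if m:
        metrics["throughput_qps"] = float(m.group(1))
    m = re.search(r"Latency:.*?mean\s*=\s*([\d.]+)\s*ms", log_text)
    if m:
        metrics["mean_latency_ms"] = float(m.group(1))
    return metrics

# Sample log lines in the shape trtexec typically prints.
sample = (
    "[I] Throughput: 412.7 qps\n"
    "[I] Latency: min = 2.10 ms, max = 3.40 ms, mean = 2.42 ms\n"
)
print(parse_trtexec_log(sample))
```

This keeps the raw logs around for auditing while making it easy to tabulate results across many engines.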
YOLOv8 Series
Hi @WongKinYiu, the original post included results affected by variables that should not have been part of measuring the model's performance. That is why I made changes to the original post.
Could you help test the speed of yolov9-t-converted.pt, yolov9-s-converted.pt, and yolov9-m-converted.pt? Thanks.
@WongKinYiu Should we use trtexec or TensorRT-YOLO to test the model speed with the NMS plugin?
Same testing method as the table in #143 (comment).
@WongKinYiu Yes, these results were tested with the NMS plugin. In #143 (comment), we ran the performance test using the Python code of TensorRT-YOLO. We noticed that the results from the Python code were slightly lower than those from the C++ code and the trtexec tool. To provide a more comprehensive comparison, we will run separate performance tests using the TensorRT-YOLO Python API, the TensorRT-YOLO C++ API, and the trtexec tool.
If it is not too much trouble, performance tests using different protocols would be nice.
@WongKinYiu Update at #143 (comment)
Thanks. It seems you have the same results as @levipereira.
Could you help test gelan-s2.pt too?
@WongKinYiu Update at #143 (comment)
Thanks. By the way, gelan-s2.pt is different from gelan-s.pt.
@WongKinYiu Thank you very much for the reminder. I overlooked gelan-s2.pt and will update it shortly. Thanks again for the correction!
@WongKinYiu Update at #143 (comment)
Thanks.
Hi, I was able to run at ~36 fps on an NVIDIA Xavier AGX using yolov9-c-converted exported to a TensorRT engine with FP16 inference and onnxsim. Very impressive.
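For context, frame rate and per-frame latency are reciprocals, so ~36 fps corresponds to roughly 28 ms per frame end-to-end. A trivial conversion helper (illustrative only, not from the thread):

```python
def fps_to_latency_ms(fps: float) -> float:
    """Convert frames per second to per-frame latency in milliseconds."""
    return 1000.0 / fps

def latency_ms_to_fps(latency_ms: float) -> float:
    """Convert per-frame latency in milliseconds to frames per second."""
    return 1000.0 / latency_ms

print(round(fps_to_latency_ms(36), 1))  # 27.8
```

This makes it easy to compare fps figures like the one above against the latency-based tables elsewhere in this thread.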
Performance Test using GPU RTX 4090 on AMD Ryzen 7 3700X 8-Core / 16GB RAM
Model Performance using TensorRT engine
All models were sourced from the original repository and subsequently converted to ONNX format with dynamic batching enabled. Profiling was conducted using TensorRT Engine Explorer (TREx).
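With dynamic batching enabled, per-image throughput follows from per-batch latency as throughput = batch_size / latency. A small illustrative helper (the batch latencies below are placeholders for demonstration, not measured values from these tests):

```python
def images_per_second(batch_size: int, mean_latency_ms: float) -> float:
    """Per-image throughput for a given batch size and mean batch latency."""
    return batch_size * 1000.0 / mean_latency_ms

# Placeholder latencies (ms) for illustration only -- not measured values.
for batch, latency in [(1, 2.5), (4, 7.0), (8, 12.0)]:
    print(batch, round(images_per_second(batch, latency), 1))
```

Larger batches typically raise per-image throughput at the cost of per-batch latency, which is why profiling across batch sizes (as TREx reports do) is informative.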
Detailed reports will be made available in the coming days, providing comprehensive insights into the performance metrics and optimizations achieved.
All models were converted (re-parameterized) and optimized for inference.
TensorRT version: 8.6.1
Device Properties:
YOLO v7 vs v9 Series Models Performance Results
Performance Summary Tables
Throughput and Average Time
Latency Summary
Full Report
https://github.com/levipereira/triton-server-yolo/tree/master/perfomance