incorrect output shape #595

The exported engine reports:

```
0 INPUT  kFLOAT input  3x640x640
1 OUTPUT kFLOAT output 25200x6
```

instead of:

```
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT  kFLOAT input   3x640x640
1 OUTPUT kFLOAT boxes   25200x4
2 OUTPUT kINT32 classes 25200x1
3 OUTPUT kFLOAT scores  25200x1
```
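For anyone stuck with the single concatenated output, here is a minimal client-side sketch of splitting it back into the three tensors. It assumes each of the 25200 rows is `[x, y, w, h, confidence, class_id]`; the actual row layout depends on the export script version, so verify it against your model before relying on this.

```python
import numpy as np

# Hypothetical parser for the single concatenated 25200x6 output.
# Assumption: each row is [x, y, w, h, confidence, class_id].
def split_output(pred):
    boxes = pred[:, :4]                      # 25200x4
    scores = pred[:, 4:5]                    # 25200x1
    classes = pred[:, 5:6].astype(np.int32)  # 25200x1, kINT32 like the old engine
    return boxes, scores, classes

pred = np.zeros((25200, 6), dtype=np.float32)
boxes, scores, classes = split_output(pred)
print(boxes.shape, scores.shape, classes.shape)  # (25200, 4) (25200, 1) (25200, 1)
```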
There's no issue. This repo now expects 1 output. The |
It doesn't work and is not compatible with my DeepStream pipelines. Is it possible to specify the commit for the version that still outputs 3x640x640, 25200x4, 25200x1, 25200x1? |
I did modify export_yoloV5 to output 3x640x640 for the input.

```python
import os

class DeepStreamOutput(nn.Module):
    ...

def parse_tensorrt_output(output):
    ...

def yolov5_export(weights, device, inplace=True, fuse=True):
    ...

def suppress_warnings():
    ...

def validate_onnx(onnx_file):
    ...

def main(args):
    ...

def parse_args():
    ...

if __name__ == '__main__':
    ...
```
|
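A rough sketch of what a split output head could look like. This is not the repo's actual `DeepStreamOutput` implementation; it assumes the raw YOLOv5 prediction tensor is `(batch, 25200, 85)` with `[x, y, w, h, objectness, 80 class scores]` per row, which is the standard yolov5s head layout.

```python
import torch
import torch.nn as nn

# Hypothetical head that emits three tensors (boxes, classes, scores)
# instead of one concatenated output. Assumes raw YOLOv5 predictions:
# (batch, 25200, 85) = 4 box coords + 1 objectness + 80 class scores.
class DeepStreamOutputSplit(nn.Module):
    def forward(self, x):
        boxes = x[..., :4]
        objectness = x[..., 4:5]
        cls_scores, classes = torch.max(x[..., 5:], dim=-1, keepdim=True)
        scores = objectness * cls_scores
        return boxes, classes.to(torch.int32), scores

model = DeepStreamOutputSplit()
b, c, s = model(torch.zeros(1, 25200, 85))
print(b.shape, c.shape, s.shape)
```

Wrapping the detector in a module like this before `torch.onnx.export` is what produces three named ONNX outputs rather than one.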
Use the …
Remove …
Change … to …
Change … to …
Change … to …
|
I’m encountering a similar behavior with the export of YOLOv8 models. Could you provide a more detailed explanation of why these changes have been implemented in the export process? |
https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/utils/export_yoloV5.py still results in WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1. |
@Sanelembuli98 Please add to the file the changes I mentioned in #595 (comment). @valentin-phoenix TensorRT sometimes doesn't keep the output order of the layers, causing a bug in the output (this is more related to Paddle models). |
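Since output order can change between builds, one defensive pattern is to resolve outputs by name rather than by position. A small illustration (the names match the engine info above; the arrays are dummy stand-ins for whatever your runtime returns):

```python
import numpy as np

# Hypothetical outputs as a runtime might return them, in arbitrary order.
raw = [('scores', np.zeros((25200, 1), dtype=np.float32)),
       ('boxes', np.zeros((25200, 4), dtype=np.float32)),
       ('classes', np.zeros((25200, 1), dtype=np.int32))]

# Look up by name instead of index so reordering can't silently swap tensors.
by_name = dict(raw)
boxes = by_name['boxes']
classes = by_name['classes']
scores = by_name['scores']
print(boxes.shape, classes.shape, scores.shape)  # (25200, 4) (25200, 1) (25200, 1)
```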
```python
import os

class DeepStreamOutput(nn.Module):
    ...

def parse_tensorrt_output(output):
    ...

def yolov5_export(weights, device, inplace=True, fuse=True):
    ...

def suppress_warnings():
    ...

def validate_onnx(onnx_file):
    ...

def main(args):
    ...

def parse_args():
    ...

if __name__ == '__main__':
    ...
```
|
Are you using the updated nvdsinfer_custom_impl_Yolo or the old plugin? |
Most likely the old one. I will update it and give feedback, thank you. |
So I updated to the new nvdsinfer_custom_impl_Yolo. I am able to generate the engine file and run `deepstream-app -c deepstream_app_config.txt`, but it does not detect/infer on the video/stream. For my weights I used the default yolov5s.pt. |
Additional info: I am able to run detection/inference using yolov5s.pt, but after I export it to ONNX I can't. This is the command I am running to export: `python3 export_yoloV5.py --weights yolov5s.pt --size 640 --simplify --dynamic --opset 17` |
Maybe I am failing to properly describe my issue. Please try testing deepstream-app and let me know whether you are able to run detection/inference, as I am not. Hopefully you can replicate my issue or point me in the right direction. |
The issue described involves TensorRT and YOLOv5 ONNX model export. Specifically:
Incorrect TensorRT Output Shape:
The expected output for a YOLOv5 ONNX model is:
3x640x640 for input.
25200x4, 25200x1, and 25200x1 for boxes, classes, and scores respectively.
Instead, the output is concatenated as a single tensor: 25200x6.
This indicates the exported ONNX model is not properly handling YOLOv5's expected separate outputs (boxes, scores, classes).