
tensorrt does not generate the correct yolo11 model #592

Open
Mshir0 opened this issue Nov 22, 2024 · 4 comments

Mshir0 commented Nov 22, 2024

The device I'm using is a Jetson Xavier NX.
Ubuntu version is 20.04
The jetpack version is 5.1.4
cuda version 11.4
cudnn version is 8.6.0.166
tensorrt version 8.5.2.2
Related libraries for generating the .onnx:
onnx 1.17.0
onnxsim 0.4.36
onnxslim 0.1.39
onnxruntime-gpu 1.18.0
Then the following are the runtime warnings:

nvidia@nvidia:~/DeepStream-Yolo$ deepstream-app -c deepstream_app_config.txt 
WARNING: Deserialize engine failed because file path: /home/nvidia/DeepStream-Yolo/yolo11n.pt.onnx_b1_gpu0_fp16.plan open error
0:00:04.696507296 1030812 0xaaaad9b3eab0 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 1]: deserialize engine from file :/home/nvidia/DeepStream-Yolo/yolo11n.pt.onnx_b1_gpu0_fp16.plan failed
0:00:04.777985167 1030812 0xaaaad9b3eab0 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 1]: deserialize backend context from engine from file :/home/nvidia/DeepStream-Yolo/yolo11n.pt.onnx_b1_gpu0_fp16.plan failed, try rebuild
0:00:04.778568899 1030812 0xaaaad9b3eab0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.

Building the TensorRT Engine

WARNING: [TRT]: TensorRT encountered issues when converting weights between types and that could affect accuracy.
WARNING: [TRT]: If this is not the desired behavior, please modify the weights or retrain with regularization to adjust the magnitude of the weights.
WARNING: [TRT]: Check verbose logs for the list of affected weights.
WARNING: [TRT]: - 2 weights are affected by this issue: Detected NaN values and converted them to corresponding FP16 NaN.
WARNING: [TRT]: - 75 weights are affected by this issue: Detected subnormal FP16 values.
WARNING: [TRT]: - 1 weights are affected by this issue: Detected values less than smallest positive FP16 subnormal value and converted them to the FP16 minimum subnormalized value.
Building complete

0:23:06.223452350 1030812 0xaaaad9b3eab0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2034> [UID = 1]: serialize cuda engine to file: /home/nvidia/DeepStream-Yolo/model_b1_gpu0_fp16.engine successfully
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input           3x640x640       
1   OUTPUT kFLOAT output          8400x6          

0:23:06.362116421 1030812 0xaaaad9b3eab0 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/nvidia/DeepStream-Yolo/config_infer_primary_yoloV8.txt sucessfully

I can see the video stream, but the .engine model doesn't detect anything.
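For reference, the engine's single fused output above has shape 8400x6, i.e. 8400 candidate detections with 6 values each. A minimal numpy sketch of post-processing such a tensor, assuming a [x, y, w, h, confidence, class_id] column layout (the actual layout is defined by the exporter and parsed by the custom bounding-box parser, so this is an illustration, not the library's code):

```python
import numpy as np

CONF_THRESHOLD = 0.25  # typical default; DeepStream uses pre-cluster-threshold

def decode(output, conf_threshold=CONF_THRESHOLD):
    """Filter raw (8400, 6) detections by confidence score.

    Columns assumed: [x, y, w, h, confidence, class_id].
    """
    keep = output[:, 4] >= conf_threshold
    return output[keep]

# Dummy tensor: two rows above threshold, the rest zero-confidence.
raw = np.zeros((8400, 6), dtype=np.float32)
raw[0] = [320, 320, 50, 80, 0.9, 0]
raw[1] = [100, 200, 30, 40, 0.6, 2]
dets = decode(raw)
```

If every row comes out below the threshold (or the column layout the parser expects doesn't match the export), the pipeline runs but draws nothing, which matches the symptom here.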


Mshir0 commented Nov 22, 2024

I am using export_yoloV8.py to generate the ONNX model, and here is how I created the model.

python3 export_yoloV8.py -w yolo11n.pt --simplify
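The FP16 build warnings in the log above (NaN weights, subnormal FP16 values, values below the smallest FP16 subnormal) can be reproduced offline before building the engine. A minimal numpy sketch, assuming the weights are available as float32 arrays (the function name and categories are illustrative, not TensorRT's API):

```python
import numpy as np

FP16_MAX = np.finfo(np.float16).max    # 65504.0
FP16_TINY = np.finfo(np.float16).tiny  # smallest normal FP16, ~6.1e-05

def audit_fp16(weights):
    """Classify float32 weights by how they survive a cast to FP16."""
    w = np.asarray(weights, dtype=np.float32)
    h = w.astype(np.float16)
    return {
        "nan": int(np.isnan(w).sum()),                          # NaN stays NaN
        "overflow": int((np.abs(w) > FP16_MAX).sum()),          # becomes inf
        "subnormal": int(((h != 0) & (np.abs(h) < FP16_TINY)).sum()),
        "flushed_to_zero": int(((w != 0) & (h == 0)).sum()),    # too small for FP16
    }

report = audit_fp16([1.0, np.nan, 1e6, 1e-6, 1e-9])
```

A handful of affected weights (as in the log: 2 NaN, 75 subnormal, 1 below-subnormal) can degrade FP16 accuracy, but usually doesn't explain zero detections on its own; a quick sanity check is rebuilding the engine in FP32 mode and comparing.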


Mshir0 commented Nov 22, 2024

To add, the DeepStream version I'm using is 6.3, and I can run deepstream-test1-app successfully.

marcoslucianops (Owner) commented
@Sanelembuli98 it's not the same issue.

Sanelembuli98 commented
@marcoslucianops it sounds like the same problem to me. What would be the solution to this one?
