
Yolov5: ERROR: Failed to get cuda engine from custom library API #547

Open

flmello opened this issue Jun 10, 2024 · 1 comment

Comments

flmello commented Jun 10, 2024

• Hardware Platform (Jetson / GPU): Jetson Nano Devkit
• DeepStream Version: 6.0.0
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: 8.2.1.8

I have a script running on a Jetson Xavier AGX (DS 6.3.0, JetPack 5.1, TensorRT 8.5.2.2). But when I transfer this script to a Nano devkit (specs above), I get an error during the ONNX model conversion:

gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
ERROR: Deserialize engine failed because file path: /home/ubuntu/EdgeServer/model_b4_gpu0_fp32.engine open error
0:00:05.945769334 8805 0x2fd0a8f0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/home/ubuntu/EdgeServer/model_b4_gpu0_fp32.engine failed
0:00:05.946900129 8805 0x2fd0a8f0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/home/ubuntu/EdgeServer/model_b4_gpu0_fp32.engine failed, try rebuild
0:00:05.946948932 8805 0x2fd0a8f0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 217 [Range -> "349"]:
ERROR: [TRT]: ModelImporter.cpp:774: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:775: input: "347"
input: "346"
input: "348"
output: "349"
name: "Range_217"
op_type: "Range"

ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:3352 In function importRange:
[8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"

Could not parse the ONNX model

Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:07.080241677 8805 0x2fd0a8f0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:07.081381742 8805 0x2fd0a8f0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:07.081462525 8805 0x2fd0a8f0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:07.081547838 8805 0x2fd0a8f0 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:07.081580286 8805 0x2fd0a8f0 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Config file path: /home/ubuntu/EdgeServer/config/dstest4_pgie_nvinfer_yolov5_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
[NvMultiObjectTracker] De-initialized
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-gpu-inference-engine:
Config file path: /home/ubuntu/EdgeServer/config/dstest4_pgie_nvinfer_yolov5_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app

--- 0.013864755630493164 seconds ---

Note that I successfully compiled nvdsinfer_custom_impl_Yolo with the correct CUDA version:
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
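
(For reference, JetPack 4.6 ships CUDA 10.2, so CUDA_VER=10.2 is the right value here; on that CUDA release the installed version can be confirmed with:

cat /usr/local/cuda/version.txt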

The path to libnvdsinfer_custom_impl_Yolo.so is correct in my config file dstest4_pgie_nvinfer_yolov5_config.txt.
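
For context, the relevant keys in that config follow the usual DeepStream-Yolo layout. This is a sketch based on the repository's sample config_infer_primary_yoloV5.txt, not the exact file contents; the ONNX file name is illustrative, while the engine path matches the log above:

[property]
onnx-file=/home/ubuntu/EdgeServer/yolov5.onnx
model-engine-file=/home/ubuntu/EdgeServer/model_b4_gpu0_fp32.engine
batch-size=4
network-mode=0
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=/home/ubuntu/EdgeServer/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet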

There is something tricky going on here, but I couldn't figure out what. Can anyone give me a tip about what is happening?

marcoslucianops (Owner) commented
Export the ONNX model without --dynamic.
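
Concretely, that means re-running the export step with a fixed batch size instead of --dynamic (a sketch assuming the export_yoloV5.py utility from this repository's utils/ folder; the weights file name is illustrative and flag names may vary by version):

# instead of: python3 export_yoloV5.py -w yolov5s.pt --dynamic
python3 export_yoloV5.py -w yolov5s.pt --batch 4

A static batch of 4 matches the model_b4 engine name in the log. The reason the dynamic export fails here is visible in the log itself: the dynamic-shape path emits a Range node whose inputs are not INT32, and TensorRT 8.2 (JetPack 4.6) only supports INT32 inputs for a Range operator with dynamic inputs.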
