The device I'm using is a Jetson Xavier NX:

- Ubuntu 20.04
- JetPack 5.1.4
- CUDA 11.4
- cuDNN 8.6.0.166
- TensorRT 8.5.2.2

Related libraries used for generating the .onnx:

- onnx 1.17.0
- onnxsim 0.4.36
- onnxslim 0.1.39
- onnxruntime-gpu 1.18.0
The following runtime warnings appear:

```
nvidia@nvidia:~/DeepStream-Yolo$ deepstream-app -c deepstream_app_config.txt
WARNING: Deserialize engine failed because file path: /home/nvidia/DeepStream-Yolo/yolo11n.pt.onnx_b1_gpu0_fp16.plan open error
0:00:04.696507296 1030812 0xaaaad9b3eab0 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 1]: deserialize engine from file :/home/nvidia/DeepStream-Yolo/yolo11n.pt.onnx_b1_gpu0_fp16.plan failed
0:00:04.777985167 1030812 0xaaaad9b3eab0 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 1]: deserialize backend context from engine from file :/home/nvidia/DeepStream-Yolo/yolo11n.pt.onnx_b1_gpu0_fp16.plan failed, try rebuild
0:00:04.778568899 1030812 0xaaaad9b3eab0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
Building the TensorRT Engine
WARNING: [TRT]: TensorRT encountered issues when converting weights between types and that could affect accuracy.
WARNING: [TRT]: If this is not the desired behavior, please modify the weights or retrain with regularization to adjust the magnitude of the weights.
WARNING: [TRT]: Check verbose logs for the list of affected weights.
WARNING: [TRT]: - 2 weights are affected by this issue: Detected NaN values and converted them to corresponding FP16 NaN.
WARNING: [TRT]: - 75 weights are affected by this issue: Detected subnormal FP16 values.
WARNING: [TRT]: - 1 weights are affected by this issue: Detected values less than smallest positive FP16 subnormal value and converted them to the FP16 minimum subnormalized value.
Building complete
0:23:06.223452350 1030812 0xaaaad9b3eab0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2034> [UID = 1]: serialize cuda engine to file: /home/nvidia/DeepStream-Yolo/model_b1_gpu0_fp16.engine successfully
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input 3x640x640
1 OUTPUT kFLOAT output 8400x6
0:23:06.362116421 1030812 0xaaaad9b3eab0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/nvidia/DeepStream-Yolo/config_infer_primary_yoloV8.txt sucessfully
```
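The engine reports a single output of shape 8400x6. As a minimal sketch of what a post-processing step has to do with that tensor, here is a pure-Python decoder. The row layout `[x1, y1, x2, y2, confidence, class_id]` is an assumption on my part, not something the log confirms; if the bbox parser in the custom library expects a different layout or a different confidence scale, every candidate can end up below threshold and the pipeline shows no detections even though the engine runs fine.

```python
# Sketch: decoding a YOLO output tensor of shape (8400, 6).
# ASSUMED row layout (not confirmed by the log above):
#   [x1, y1, x2, y2, confidence, class_id]

def decode_detections(rows, conf_threshold=0.25):
    """Keep rows whose confidence meets or exceeds the threshold."""
    detections = []
    for x1, y1, x2, y2, conf, cls in rows:
        if conf >= conf_threshold:
            detections.append({
                "box": (x1, y1, x2, y2),
                "score": conf,
                "class_id": int(cls),
            })
    return detections

# Synthetic rows: one confident detection, one below the threshold.
rows = [
    (10.0, 20.0, 110.0, 220.0, 0.90, 0.0),
    (5.0, 5.0, 50.0, 50.0, 0.10, 2.0),
]
print(decode_detections(rows))  # only the first row survives
```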
I can see the video stream, but the .engine model doesn't detect anything.
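For reference, this is roughly the shape of the `[property]` section in `config_infer_primary_yoloV8.txt` as documented in the DeepStream-Yolo README; the property names come from that README and the paths here are illustrative, not copied from my setup. The warnings above show the app looking for `yolo11n.pt.onnx_b1_gpu0_fp16.plan` but serializing `model_b1_gpu0_fp16.engine`, so the `onnx-file` / `model-engine-file` pair seems worth double-checking.

```ini
[property]
# Illustrative paths; adjust to the actual files.
onnx-file=yolo11n.pt.onnx
model-engine-file=model_b1_gpu0_fp16.engine
network-mode=2            # 2 = FP16
num-detected-classes=80
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```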