use deepstream to run yolov5s ERROR: <main:707>: Failed to set pipeline to PAUSED;ERROR: Failed to create network using custom network creation function #583

eljinwei opened this issue Nov 9, 2024 · 5 comments
eljinwei commented Nov 9, 2024

I am trying to run YOLOv5 on a Jetson Nano. I converted the yolov5_last.pt file to ONNX format and updated config_infer_primary_yoloV5.txt (the config files are posted in the comments below).

But when I run it with deepstream-app -c deepstream_app_config.txt, I get the following error:
Using winsys: x11
ERROR: Deserialize engine failed because file path: /home/jetson/DeepStream-Yolo-new/model_b1_gpu0_fp32.engine open error
0:00:02.752670502 10643 0x7f140022a0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/home/jetson/DeepStream-Yolo-new/model_b1_gpu0_fp32.engine failed
0:00:02.753780996 10643 0x7f140022a0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/home/jetson/DeepStream-Yolo-new/model_b1_gpu0_fp32.engine failed, try rebuild
0:00:02.753828601 10643 0x7f140022a0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: [TRT]: [graph.cpp::computeInputExecutionUses::549] Error Code 9: Internal Error (/0/model.11/Floor_1: IUnaryLayer cannot be used to compute a shape tensor)
ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 145 [Resize -> "/0/model.11/Resize_output_0"]:
ERROR: [TRT]: ModelImporter.cpp:774: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:775: input: "/0/model.10/act/Mul_output_0"
input: ""
input: ""
input: "/0/model.11/Concat_1_output_0"
output: "/0/model.11/Resize_output_0"
name: "/0/model.11/Resize"
op_type: "Resize"
attribute {
name: "coordinate_transformation_mode"
s: "asymmetric"
type: STRING
}
attribute {
name: "cubic_coeff_a"
f: -0.75
type: FLOAT
}
attribute {
name: "mode"
s: "nearest"
type: STRING
}
attribute {
name: "nearest_mode"
s: "floor"
type: STRING
}

ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: ModelImporter.cpp:179 In function parseGraph:
[6] Invalid Node - /0/model.11/Resize
[graph.cpp::computeInputExecutionUses::549] Error Code 9: Internal Error (/0/model.11/Floor_1: IUnaryLayer cannot be used to compute a shape tensor)

Could not parse the ONNX file

Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:03.694147612 10643 0x7f140022a0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:03.695261701 10643 0x7f140022a0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:03.695322327 10643 0x7f140022a0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:03.695384360 10643 0x7f140022a0 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:03.695414204 10643 0x7f140022a0 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Config file path: /home/jetson/DeepStream-Yolo-new/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: main:707: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /home/jetson/DeepStream-Yolo-new/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed

Could anyone suggest what the problem is? I am following the instructions exactly but still get this error. Thanks!
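
For reference, the Resize / IUnaryLayer failure in the log above is often a sign that the ONNX graph computes the upsample size at runtime, i.e. the model was exported with dynamic axes, which this TensorRT release cannot fold into a static shape tensor. A quick way to check is to print the input dimensions of the exported model; a minimal sketch, assuming the onnx Python package is installed and the file name matches the onnx-file entry in the config posted below:

# Minimal check for dynamic axes in the exported model (assumes `pip install onnx`
# and that last.pt.onnx is the file referenced by onnx-file= in the config below).
import onnx

model = onnx.load("last.pt.onnx")
for inp in model.graph.input:
    # dim_value is a concrete integer for static axes; dim_param is a symbolic
    # name (e.g. "batch") when the axis was exported as dynamic.
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)

# A static export prints only integers, e.g. images [1, 3, 640, 640];
# any string in the list means that axis is dynamic and the model should be
# re-exported with static shapes for this TensorRT version.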

eljinwei (Author) commented Nov 9, 2024

deepstream_app_config.txt:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
type=3
uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
type=2
sync=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV5.txt

[tests]
file-loop=0

config_infer_primary_yoloV5.txt:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=last.pt.onnx
model-engine-file=model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels_wheat.txt
batch-size=1
network-mode=0
num-detected-classes=2
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#workspace-size=2000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300


eljinwei (Author) commented Nov 9, 2024

Ubuntu 18.04
CUDA: 10.2
cuDNN: 8.2.1.32
TensorRT: 8.2.1.8
OpenCV: 4.1.1
JetPack: 4.6.1
DeepStream: 6.0

eljinwei (Author) commented:

@marcoslucianops can you help me ? thanks!

marcoslucianops (Owner) commented:

You need to export the ONNX file without the --dynamic flag, and you need to set --opset 12 or lower for the old Jetson Nano board.
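
For concreteness, a minimal sketch of what a static, opset-12 export looks like at the PyTorch level is below. It is illustrative only: the weight path and input size are assumptions, and the parser configured above (NvDsInferParseYolo) expects the output layout produced by the repo's own exporter, so in practice the fix is to re-run that exporter with --opset 12 and without --dynamic, then let deepstream-app rebuild model_b1_gpu0_fp32.engine.

# Illustrative static export with opset 12 -- paths and input size are assumptions.
# In practice, re-run the DeepStream-Yolo exporter with the same settings so the
# output head matches NvDsInferParseYolo.
import torch

# autoshape=False returns the raw detection model instead of the inference wrapper.
model = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5_last.pt", autoshape=False)
model.eval()

dummy = torch.zeros(1, 3, 640, 640)  # fixed batch=1 and 640x640 input
torch.onnx.export(
    model,
    dummy,
    "last.pt.onnx",
    opset_version=12,        # opset 12 or lower for TensorRT 8.2 on the Nano
    input_names=["images"],
    output_names=["output"],
    # no dynamic_axes argument -> every dimension stays static,
    # which is the effect of dropping --dynamic from the exporter.
)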

eljinwei (Author) commented Nov 21, 2024 via email
