use deepstream to run yolov5s ERROR: <main:707>: Failed to set pipeline to PAUSED;ERROR: Failed to create network using custom network creation function #583
[application] [tiled-display] [source0] [sink0] [osd] [streammux] [primary-gie] [tests] [property] [class-attrs-all]
Ubuntu 18.04
@marcoslucianops, can you help me? Thanks!
You need to export the ONNX file without the --dynamic and you need to set --opset 12 or lower for the old Jetson Nano board.
Hi Marcos,
I am a newcomer and am learning to deploy my model using your open-source project DeepStream-Yolo. I would be grateful for your help! I have read the code in `nvdsinfer_custom_impl_Yolo`: `nvdsparsebbox_Yolo.cpp` parses the output of the YOLO model to generate bounding boxes and labels, but it does not draw the bounding boxes onto the original video. I would like to understand how this pipeline is organized, but I have not found the relevant code in the project files. Could you provide some guidance? The official examples, such as `deepstream-test1`, include a `deepstream_test1_app.c` file that shows how the pipeline is designed, but in the DeepStream-Yolo project I have not found the corresponding code, nor where `nvosd` is used afterwards. I would appreciate it if you could clarify my doubts. Thanks!
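For context on the question above: DeepStream-Yolo does not ship its own pipeline source file; the reference `deepstream-app` binary builds the pipeline at runtime from the `[source*]`, `[streammux]`, `[primary-gie]`, `[osd]`, and `[sink*]` groups in `deepstream_app_config.txt`. As a rough illustration only (element properties and paths below are placeholders, not taken from this thread), the assembled pipeline is approximately equivalent to:

```shell
# Illustrative sketch of what deepstream-app assembles from the config groups.
# Paths, resolutions, and the sink element are placeholders.
gst-launch-1.0 \
  uridecodebin uri=file:///path/to/video.mp4 ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=config_infer_primary_yoloV5.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink
```

The custom parser library only plugs into the `nvinfer` element; drawing happens downstream in `nvdsosd`, which reads the object metadata that `nvinfer` attached to each buffer.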
What I would like to do is to save the video frames with detection boxes drawn after detection. My idea is to locate the OSD pad and add a callback function (I'm not sure if this approach is feasible). However, I am currently facing the issues mentioned above. The code in the project is all about constructing the TRT (TensorRT) engine and parsing the output layers. I am not sure where the parsed data goes after that, so I am writing to you for assistance(I am currently able to run the engine using deepstream-yolo project and invoke the camera for detection.). I wonder if you could provide me with some guidance, which I would greatly appreciate!
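On the callback idea: attaching a buffer probe to the sink pad of the `nvdsosd` element is the standard DeepStream pattern for reading per-frame detections (it is the same pattern as `osd_sink_pad_buffer_probe` in `deepstream-test1`). A minimal sketch, assuming the DeepStream SDK headers are available and that `osd` has been fetched from the pipeline (the element and probe names here are illustrative):

```c
/* Sketch only: requires the DeepStream SDK and GStreamer to compile. */
#include <gst/gst.h>
#include "gstnvdsmeta.h"

static GstPadProbeReturn
osd_sink_pad_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL;
         l_obj = l_obj->next) {
      NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
      /* The boxes parsed by nvdsinfer_custom_impl_Yolo arrive here as
       * object metadata; nvdsosd draws them after this pad. */
      g_print ("frame %d: class %d at (%.0f, %.0f)\n",
          frame_meta->frame_num, obj->class_id,
          obj->rect_params.left, obj->rect_params.top);
    }
  }
  return GST_PAD_PROBE_OK;
}

/* Attach once after building the pipeline (osd = the nvdsosd element):
 *   GstPad *pad = gst_element_get_static_pad (osd, "sink");
 *   gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
 *                      osd_sink_pad_probe, NULL, NULL);
 *   gst_object_unref (pad);
 */
```

Saving annotated frames from inside such a probe additionally requires mapping the NvBufSurface, which is more involved; the probe above only demonstrates where the parsed data becomes reachable.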
…------------------ Original message ------------------
From: "marcoslucianops/DeepStream-Yolo" ***@***.***>;
Sent: Thursday, November 14, 2024, 10:58 PM
***@***.***>;
***@***.******@***.***>;
主题: Re: [marcoslucianops/DeepStream-Yolo] use deepstream to run yolov5s ERROR: <main:707>: Failed to set pipeline to PAUSED;ERROR: Failed to create network using custom network creation function (Issue #583)
You need to export the ONNX file without the --dynamic and you need to set --opset 12 or lower for the old Jetson Nano board.
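As a concrete example of that advice (the script path and flags below reflect my understanding of the repo's `utils` directory; verify them against your copy):

```shell
# Sketch: export YOLOv5 weights to ONNX for the old Jetson Nano.
# Note: no --dynamic flag, and the opset pinned to 12.
python3 utils/export_yoloV5.py -w yolov5s.pt --opset 12
```

The `--dynamic` flag produces dynamic input shapes, and opsets above 12 emit ops (such as the Resize pattern in the log below) that the older TensorRT on the original Jetson Nano cannot parse.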
I am trying to run YOLOv5 on a Jetson Nano. I have converted the yolov5_last.pt file into ONNX format, then updated config_infer_primary_yoloV5.txt with the following settings:
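(The actual settings were not captured in this thread. For illustration only, a typical config for this repo points `nvinfer` at the ONNX file and the custom parser along these lines; every value below is a placeholder, not what the reporter used:)

```
[property]
onnx-file=yolov5_last.onnx
model-engine-file=model_b1_gpu0_fp32.engine
network-mode=0
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```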
But when I run it using deepstream-app -c deepstream_app_config.txt, it gives me the following error:
Using winsys: x11
ERROR: Deserialize engine failed because file path: /home/jetson/DeepStream-Yolo-new/model_b1_gpu0_fp32.engine open error
0:00:02.752670502 10643 0x7f140022a0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/home/jetson/DeepStream-Yolo-new/model_b1_gpu0_fp32.engine failed
0:00:02.753780996 10643 0x7f140022a0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/home/jetson/DeepStream-Yolo-new/model_b1_gpu0_fp32.engine failed, try rebuild
0:00:02.753828601 10643 0x7f140022a0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: [TRT]: [graph.cpp::computeInputExecutionUses::549] Error Code 9: Internal Error (/0/model.11/Floor_1: IUnaryLayer cannot be used to compute a shape tensor)
ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 145 [Resize -> "/0/model.11/Resize_output_0"]:
ERROR: [TRT]: ModelImporter.cpp:774: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:775: input: "/0/model.10/act/Mul_output_0"
input: ""
input: ""
input: "/0/model.11/Concat_1_output_0"
output: "/0/model.11/Resize_output_0"
name: "/0/model.11/Resize"
op_type: "Resize"
attribute {
name: "coordinate_transformation_mode"
s: "asymmetric"
type: STRING
}
attribute {
name: "cubic_coeff_a"
f: -0.75
type: FLOAT
}
attribute {
name: "mode"
s: "nearest"
type: STRING
}
attribute {
name: "nearest_mode"
s: "floor"
type: STRING
}
ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: ModelImporter.cpp:179 In function parseGraph:
[6] Invalid Node - /0/model.11/Resize
[graph.cpp::computeInputExecutionUses::549] Error Code 9: Internal Error (/0/model.11/Floor_1: IUnaryLayer cannot be used to compute a shape tensor)
Could not parse the ONNX file
Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:03.694147612 10643 0x7f140022a0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:03.695261701 10643 0x7f140022a0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:03.695322327 10643 0x7f140022a0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:03.695384360 10643 0x7f140022a0 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:03.695414204 10643 0x7f140022a0 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Config file path: /home/jetson/DeepStream-Yolo-new/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: main:707: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /home/jetson/DeepStream-Yolo-new/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
Could anyone suggest what the problem is? I am following the instructions exactly but still getting the error. Thanks!