This repository has been archived by the owner on Nov 9, 2023. It is now read-only.
Failed to read the onnx model #23
Comments
UNeedCryDear changed the title from the raw error log (reproduced in full at the bottom of this page) to "Failed to read the onnx model" on Jan 16, 2023.
UNeedCryDear: Export the ONNX model with the flag --opset 12.
十三:
(yolov5-6) D:\work\yolov5-6.0>python export.py --weights ./weights/yolov5s.pt --img [640,640] --opset 12 --include onnx
usage: export.py [-h] [--data DATA] [--weights WEIGHTS]
[--imgsz IMGSZ [IMGSZ ...]] [--batch-size BATCH_SIZE]
[--device DEVICE] [--half] [--inplace] [--train] [--optimize]
[--int8] [--dynamic] [--simplify] [--opset OPSET]
[--topk-per-class TOPK_PER_CLASS] [--topk-all TOPK_ALL]
[--iou-thres IOU_THRES] [--conf-thres CONF_THRES]
[--include INCLUDE [INCLUDE ...]]
export.py: error: argument --imgsz/--img/--img-size: invalid int value: '[640,640]'
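The "invalid int value" error happens because --imgsz takes space-separated integers, not a bracketed list; the intended invocation is --img 640 640. A minimal sketch of the argument declaration (an assumption inferred from the usage text above, not export.py's actual source) reproduces both behaviors:

```python
import argparse

# Assumed declaration: one or more integers, as suggested by
# "[--imgsz IMGSZ [IMGSZ ...]]" in the usage message above.
parser = argparse.ArgumentParser(prog="export.py")
parser.add_argument("--imgsz", "--img", "--img-size",
                    nargs="+", type=int, default=[640, 640])

# Space-separated integers parse fine:
print(parser.parse_args(["--img", "640", "640"]).imgsz)  # [640, 640]

# "[640,640]" is one token that int() cannot parse, which reproduces
# the "invalid int value: '[640,640]'" error (argparse exits):
try:
    parser.parse_args(["--img", "[640,640]"])
except SystemExit:
    print("invalid int value")
```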
十三: Can you add me on WeChat? I need your help, boss.
UNeedCryDear: Modify these parameters in your export.py, and then run:
python export.py

十三: Yeah, I did it following your lead. Thank you!
UNeedCryDear closed this as completed on Feb 9, 2023.

十三: Hi, if I want to speed up inference, how should I modify your deployment code?
UNeedCryDear: Use GPU acceleration. If you are running inference with OpenCV it is more involved: you need to build the opencv_contrib modules with the CUDA option enabled and recompile OpenCV before OpenCV-CUDA acceleration works. If you are using onnxruntime, make sure CUDA and cuDNN are installed correctly, pick the GPU build of onnxruntime, and set the use-CUDA flag and the CUDA device id correctly when you read the model; the rest of the processing is already written for you.
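As a sketch of the onnxruntime route in Python (the repository itself is C++, where the equivalent switch lives in its model-reading code; the onnxruntime GPU build and the model filename below are assumptions), provider selection with a CPU fallback might look like:

```python
import importlib.util

def pick_providers():
    """Prefer the CUDA execution provider when it is actually available.

    Sketch only: assumes onnxruntime-gpu plus matching CUDA/cuDNN when
    present; degrades gracefully to CPU otherwise.
    """
    if importlib.util.find_spec("onnxruntime") is None:
        return ["CPUExecutionProvider"]  # onnxruntime not installed at all
    import onnxruntime as ort
    if "CUDAExecutionProvider" in ort.get_available_providers():
        # The CUDA device id can be set via provider options, e.g.
        # [("CUDAExecutionProvider", {"device_id": 0}), "CPUExecutionProvider"]
        return ["CUDAExecutionProvider", "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]

providers = pick_providers()
# ort.InferenceSession("yolov5s.onnx", providers=providers) would then
# run on the GPU when the CUDA provider is first in the list.
print(providers[-1])  # CPU is always kept as the last-resort provider
```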
十三: I compressed the model with pruning, but the inference time barely decreased.
UNeedCryDear: Didn't you read any tutorials before pruning? And after reading them, didn't you scroll down through the comments? It is explained clearly there: pruning a model does not change the network structure.

UNeedCryDear: If your goal is acceleration, then apart from CUDA the mainstream route at the moment is model quantization. For example, quantizing the model to int8 will be somewhat faster than FP32, and if your GPU supports FP16, FP16 is also faster than FP32.

十三: And if I am only using the CPU...
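The storage sizes behind that quantization advice can be illustrated with Python's standard struct module alone (the "e" format is IEEE half precision); smaller elements mean less memory traffic per inference, at the cost of precision:

```python
import struct

# Bytes per element for datatypes commonly used in inference:
print("INT8:", len(struct.pack("b", 1)))    # 1 byte
print("FP16:", len(struct.pack("e", 1.0)))  # 2 bytes
print("FP32:", len(struct.pack("f", 1.0)))  # 4 bytes

# The speed/precision trade-off: FP16 cannot represent 0.1 as
# closely as FP32 does.
fp16 = struct.unpack("e", struct.pack("e", 0.1))[0]
fp32 = struct.unpack("f", struct.pack("f", 0.1))[0]
print(abs(fp16 - 0.1) > abs(fp32 - 0.1))  # True: FP16 error is larger
```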
UNeedCryDear: For CPU there is not much you can do; you can check whether OpenVINO has any acceleration tricks, but as far as I know that currently only works on Intel CPUs, not AMD. Beyond that there is shrinking the model, for example swapping the S model for an N one, or editing it yourself: if your dataset is simple you can try removing some convolution layers, and otherwise switch to a different network. As things stand, probably only yolov8 will be somewhat faster. Those are about all the options.

十三: I am planning to convert the model from s to n. For yolov8 you would also need to modify your local cpp code, right, boss?
UNeedCryDear: What would need modifying for v8? Where does my code throw an error when you run it?

十三: I'll try v8 then. Thanks, boss!
十三: Boss, I have a yololite question I'd like to ask you.
[ INFO:[email protected]] global c:\build\master_winpack-build-win64-vc15\opencv\modules\dnn\src\onnx\onnx_importer.cpp (797) cv::dnn::dnn4_v20220524::ONNXImporter::populateNet DNN/ONNX: loading ONNX v8 model produced by 'pytorch':1.14.0. Number of nodes = 263, initializers = 120, inputs = 1, outputs = 1
[ INFO:[email protected]] global c:\build\master_winpack-build-win64-vc15\opencv\modules\dnn\src\onnx\onnx_importer.cpp (713) cv::dnn::dnn4_v20220524::ONNXImporter::parseOperatorSet DNN/ONNX: ONNX opset version = 17
OpenCV(4.6.0) Error: Unsupported format or combination of formats (Unsupported data type: FLOAT16) in cv::dnn::dnn4_v20220524::getMatFromTensor, file c:\build\master_winpack-build-win64-vc15\opencv\modules\dnn\src\onnx\onnx_graph_simplifier.cpp, line 842