
ONNX model and MNN model inference results are inconsistent #2276

Closed
kitterive opened this issue Mar 13, 2023 · 21 comments

@kitterive

Platform (Include target platform as well if cross-compiling):

ubuntu20.04

GitHub Version:

2.4.0

MNN installation method (Compiling Method):

pip install MNN  # installed version is 2.4.0

Test code and model (pasted below):

Inference code:

# ONNX part:

import onnxruntime
import cv2

def onnx_inference(img_input):
    img_input = img_input.astype("float32")

    session = onnxruntime.InferenceSession("models/anime_bg.onnx")
    in_name = [input.name for input in session.get_inputs()]
    out_name = [output.name for output in session.get_outputs()]

    print("inputs name:", in_name, "outputs name:", out_name)

    data_output = session.run(out_name, {in_name[0]: img_input})

    output = data_output[0]
    return output

# MNN part

import cv2
import MNN
import numpy as np


def mnn_inference(img_input):
    img_input = img_input.astype("float32")

    interpreter = MNN.Interpreter("models/anime_bg.mnn")
    session = interpreter.createSession()

    input_tensor = interpreter.getSessionInput(session)
    interpreter.resizeTensor(input_tensor, (512, 512, 3))
    interpreter.resizeSession(session)

    tmp_input = MNN.Tensor((512, 512, 3), MNN.Halide_Type_Float, img_input, MNN.Tensor_DimensionType_Tensorflow)

    input_tensor.copyFrom(tmp_input)
    interpreter.runSession(session)
    output_tensor = interpreter.getSessionOutput(session)

    tmp_output = MNN.Tensor((512, 512, 3), MNN.Halide_Type_Uint8, np.ones([512, 512, 3]).astype(np.uint8),
                            MNN.Tensor_DimensionType_Tensorflow)
    output_tensor.copyToHostTensor(tmp_output)

    outimg = np.array(tmp_output.getData())

    outimg = outimg.reshape(512, 512, 3)

    # print(outimg)

    return outimg

# main part
import cv2

# Press the green button in the gutter to run the script.
import mnn_inference
import onnx_inference

if __name__ == '__main__':
    img = cv2.imread('input.png')
    # out = onnx_inference.onnx_inference(img)
    out = mnn_inference.mnn_inference(img)
    # print(out.shape)

    # cv2.imwrite('onnx_result.png', out)
    cv2.imwrite('mnn_result.png', out)
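
For a quick numerical comparison of the two pipelines (a sketch only, assuming the mnn_inference and onnx_inference modules above and that both functions return arrays of the same shape once a possible batch dimension is squeezed out):

# compare_outputs.py -- report the maximum absolute difference between the two pipelines
import cv2
import numpy as np

import mnn_inference
import onnx_inference

img = cv2.imread('input.png')
# squeeze() drops a possible leading batch dimension so the shapes line up
onnx_out = np.asarray(onnx_inference.onnx_inference(img), dtype=np.float32).squeeze()
mnn_out = np.asarray(mnn_inference.mnn_inference(img), dtype=np.float32).squeeze()
print('max abs diff:', np.abs(onnx_out - mnn_out).max())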


wangzhaode self-assigned this Mar 13, 2023
wangzhaode added the question (Further information is requested) and Converter labels Mar 13, 2023
@wangzhaode
Collaborator

Try testing it with testMNNFromONNX.py and see what happens.

@alexander2618

Try testing it with testMNNFromONNX.py and see what happens.

Using testMNNFromONNX.py with the 2.4 codebase fails:

Dir exist
onnx/test.onnx
tensor(float)
['output']
inputs:
input
onnx/
outputs:
onnx/output.txt (1, 1536)
onnx/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[11:06:34] /home/MNN_20211029_PadChannel_Static/authen_mnn_2.4/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 7
Start to Optimize the MNN Net...
233 op name is empty or dup, set to Const233
250 op name is empty or dup, set to BinaryOp250
348 op name is empty or dup, set to Unsqueeze348
578 op name is empty or dup, set to BinaryOp578
590 op name is empty or dup, set to Unsqueeze590
602 op name is empty or dup, set to BinaryOp602
623 op name is empty or dup, set to BinaryOp623
640 op name is empty or dup, set to BinaryOp640
817 op name is empty or dup, set to Const817
973 op name is empty or dup, set to BinaryOp973
1024 op name is empty or dup, set to BinaryOp1024
1105 op name is empty or dup, set to BinaryOp1105
1110 op name is empty or dup, set to Const1110
1200 op name is empty or dup, set to BinaryOp1200
1317 op name is empty or dup, set to Shape1317
1325 op name is empty or dup, set to BinaryOp1325
1334 op name is empty or dup, set to Unsqueeze1334
1536 op name is empty or dup, set to BinaryOp1536
1537 op name is empty or dup, set to Unsqueeze1537
1686 op name is empty or dup, set to Unsqueeze1686
1688 op name is empty or dup, set to BinaryOp1688
1799 op name is empty or dup, set to Const1799
1938 op name is empty or dup, set to BinaryOp1938
2084 op name is empty or dup, set to BinaryOp2084
2086 op name is empty or dup, set to BinaryOp2086
2087 op name is empty or dup, set to Unsqueeze2087
2088 op name is empty or dup, set to Const2088
2089 op name is empty or dup, set to StridedSlice2089
2091 op name is empty or dup, set to BinaryOp2091
2118 op name is empty or dup, set to StridedSlice2118
2120 op name is empty or dup, set to BinaryOp2120
2262 op name is empty or dup, set to BinaryOp2262
2343 op name is empty or dup, set to Unsqueeze2343
2452 op name is empty or dup, set to BinaryOp2452
2486 op name is empty or dup, set to BinaryOp2486
2715 op name is empty or dup, set to Unsqueeze2715
2727 op name is empty or dup, set to BinaryOp2727
2744 op name is empty or dup, set to BinaryOp2744
2770 op name is empty or dup, set to BinaryOp2770
2856 op name is empty or dup, set to Unsqueeze2856
2863 op name is empty or dup, set to BinaryOp2863
2917 op name is empty or dup, set to BinaryOp2917
2921 op name is empty or dup, set to Unsqueeze2921
2939 op name is empty or dup, set to Unsqueeze2939
2943 op name is empty or dup, set to BinaryOp2943
2980 op name is empty or dup, set to Unsqueeze2980
3084 op name is empty or dup, set to BinaryOp3084
3085 op name is empty or dup, set to Unsqueeze3085
3086 op name is empty or dup, set to Const3086
3087 op name is empty or dup, set to StridedSlice3087
3089 op name is empty or dup, set to BinaryOp3089
3129 op name is empty or dup, set to Unsqueeze3129
3175 op name is empty or dup, set to Unsqueeze3175
3282 op name is empty or dup, set to BinaryOp3282
3292 op name is empty or dup, set to BinaryOp3292
3340 op name is empty or dup, set to Unsqueeze3340
3456 op name is empty or dup, set to BinaryOp3456
3506 op name is empty or dup, set to BinaryOp3506
3930 op name is empty or dup, set to Unsqueeze3930
3941 op name is empty or dup, set to BinaryOp3941
3947 op name is empty or dup, set to Unsqueeze3947
3965 op name is empty or dup, set to Unsqueeze3965
3967 op name is empty or dup, set to BinaryOp3967
4025 op name is empty or dup, set to BinaryOp4025
4109 op name is empty or dup, set to Unsqueeze4109
4289 op name is empty or dup, set to Unsqueeze4289
4291 op name is empty or dup, set to BinaryOp4291
4353 op name is empty or dup, set to Unsqueeze4353
4364 op name is empty or dup, set to Unsqueeze4364
4366 op name is empty or dup, set to Unsqueeze4366
4367 op name is empty or dup, set to Const4367
4374 op name is empty or dup, set to BinaryOp4374
4379 op name is empty or dup, set to Const4379
4431 op name is empty or dup, set to Unsqueeze4431
4432 op name is empty or dup, set to Const4432
4469 op name is empty or dup, set to BinaryOp4469
4481 op name is empty or dup, set to Shape4481
4489 op name is empty or dup, set to BinaryOp4489
4506 op name is empty or dup, set to BinaryOp4506
4776 op name is empty or dup, set to BinaryOp4776
4877 op name is empty or dup, set to Unsqueeze4877
4888 op name is empty or dup, set to BinaryOp4888
4889 op name is empty or dup, set to Unsqueeze4889
4984 op name is empty or dup, set to BinaryOp4984
5135 op name is empty or dup, set to Unsqueeze5135
5136 op name is empty or dup, set to BinaryOp5136
5146 op name is empty or dup, set to BinaryOp5146
5147 op name is empty or dup, set to Unsqueeze5147
5148 op name is empty or dup, set to BinaryOp5148
5170 op name is empty or dup, set to BinaryOp5170
5324 op name is empty or dup, set to BinaryOp5324
5341 op name is empty or dup, set to BinaryOp5341
5676 op name is empty or dup, set to Unsqueeze5676
5678 op name is empty or dup, set to Unsqueeze5678
5679 op name is empty or dup, set to Const5679
5762 op name is empty or dup, set to Unsqueeze5762
5764 op name is empty or dup, set to Unsqueeze5764
5806 op name is empty or dup, set to BinaryOp5806
5810 op name is empty or dup, set to Unsqueeze5810
5905 op name is empty or dup, set to Unsqueeze5905
5906 op name is empty or dup, set to Const5906
5914 op name is empty or dup, set to BinaryOp5914
5915 op name is empty or dup, set to Unsqueeze5915
5922 op name is empty or dup, set to BinaryOp5922
5934 op name is empty or dup, set to Unsqueeze5934
6159 op name is empty or dup, set to BinaryOp6159
6230 op name is empty or dup, set to BinaryOp6230
6252 op name is empty or dup, set to BinaryOp6252
inputTensors : [ input, ]
outputTensors: [ output, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
input
output: output
output: (1, 1536, )
TESTERROR output value error : absMaxV:0.038338 - DiffMax 0.000622
Error for output output
Save mnn result to  .error director

@wangzhaode
Collaborator

Could you provide the model?

@kitterive
Author

@wangzhaode
How do I use testMNNFromOnnx.py?

python testMNNFromOnnx.py models/anime_bg.mnn models
or python testMNNFromOnnx.py models/anime_bg.mnn models/anime_bg.onnx
Neither of these runs.

@alexander2618

I'll send it to you by email.

@wangzhaode
Collaborator

How do I use testMNNFromOnnx.py?

See the documentation: https://mnn-docs.readthedocs.io/en/latest/tools/convert.html#id3
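
For reference, the script takes the path to the ONNX model (not the converted .mnn file) as its argument and, per the documentation, is run from the directory where the converter was built. A rough example, with illustrative paths:

cd build  # directory where MNNConvert was built
python ../tools/script/testMNNFromOnnx.py /path/to/anime_bg.onnx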

@wangzhaode
Collaborator

I'll send it to you by email.

OK.

@kitterive
Author

@wangzhaode
The test result is correct, but the image produced by my code above is different. Could you take a look at my Python code above and see where I'm using it incorrectly?
Dir exist
onnx/test.onnx
tensor(float)
['out_image']
inputs:
input_image
onnx/
outputs:
onnx/out_image.txt (4, 4, 3)
onnx/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[12:55:39] /root/project/mydev/external_lib/mnn/MNN_source/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 7
Start to Optimize the MNN Net...
inputTensors : [ input_image, ]
outputTensors: [ out_image, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
input_image
output: out_image
out_image: (4, 4, 3, )
TEST_SUCCESS

@kitterive
Author

onnx_result (image)
mnn_result (image)
The first image above is the correct onnxruntime output; the second is the MNN model's output.

@wangzhaode
Collaborator

@kitterive I can't spot the problem from the code alone; the main issue is that I don't know your model's input and output formats.

@wangzhaode
Collaborator

Could you upload the model file?

@kitterive
Author

Thanks!
It's on Google Drive:
https://drive.google.com/file/d/1EslOEH6RxRgYveSg7TJaI4Grx-knbUxu/view?usp=sharing
You should be able to access it, right?

@kitterive
Author

The model is in the models directory; the input is float (-1, -1, 3) and the output is uint8 (-1, -1, 3).

@wangzhaode
Collaborator

    tmp_input = MNN.Tensor((512, 512, 3), MNN.Halide_Type_Float, img_input, MNN.Tensor_DimensionType_Tensorflow)
    tmp_output = MNN.Tensor((512, 512, 3), MNN.Halide_Type_Uint8, np.ones([512, 512, 3]).astype(np.uint8),
                            MNN.Tensor_DimensionType_Tensorflow)

Change the DimensionType in both of these places to MNN.Tensor_DimensionType_Caffe.
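
Applied to the snippet above, only the last argument changes; the corrected construction would look like this:

    tmp_input = MNN.Tensor((512, 512, 3), MNN.Halide_Type_Float, img_input, MNN.Tensor_DimensionType_Caffe)
    tmp_output = MNN.Tensor((512, 512, 3), MNN.Halide_Type_Uint8, np.ones([512, 512, 3]).astype(np.uint8),
                            MNN.Tensor_DimensionType_Caffe)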

@wangzhaode
Collaborator

There is still some error in the output, though; I'll take a deeper look.

@kitterive
Author

mnn_result (image)

@kitterive
Author

Thanks. Yes, this is what I get after making the change.

@wangzhaode
Collaborator

An optimization pass, ConstDivToMul, goes wrong during model conversion, causing values that should be computed from the input shape to be folded into constants. Disable this pass for now and re-convert the model, and inference will be correct. Please make the following change to tools/converter/source/optimizer/merge/FuseTemplateOp.cpp:

diff --git a/tools/converter/source/optimizer/merge/FuseTemplateOp.cpp b/tools/converter/source/optimizer/merge/FuseTemplateOp.cpp
index 60a4ab7e..c00fff14 100644
--- a/tools/converter/source/optimizer/merge/FuseTemplateOp.cpp
+++ b/tools/converter/source/optimizer/merge/FuseTemplateOp.cpp
@@ -119,6 +119,7 @@ static auto gRegister = []() {
     {
         // Turn DIV Const to Multi
         auto match = [](EXPRP expr) {
+            return false;
             if (expr->get() == nullptr) {
                 return false;
             }
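
After rebuilding the converter with this change, re-convert the model. A typical MNNConvert invocation (file paths here are illustrative) looks roughly like:

./MNNConvert -f ONNX --modelFile anime_bg.onnx --MNNModel anime_bg.mnn --bizCode biz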

@wangzhaode
Collaborator

Alternatively, you can simply use the model I have already converted:
anime_bg.mnn.zip

@kitterive
Author

The problem is solved. Thank you very much!
