Problem description
When converting a YOLOv8 model from ONNX to MNN, the consistency check run with MNN's testMNNFromOnnx tool fails: the MNN outputs do not match the ONNX outputs.
Build version:
tag 2.4.2
Build steps:
mkdir build
cd build
cmake .. -DMNN_BUILD_CONVERTER=true && make -j4
Run:
python ../tools/script/testMNNFromOnnx.py test.onnx
log:
onnx/test.onnx
tensor(float)
['output0']
inputs:
images
onnx/
outputs:
onnx/output0.txt (1, 2520, 6)
onnx/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:29:50] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:29:50] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:29:50] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ output0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: output0
output0: (1, 2520, 6, )
TESTERROR output0 value error : absMaxV:637.293518 - DiffMax 153.856781
Error for output output0
Save mnn result to .error director
Run (debug mode, to bisect the failing node):
python ../tools/script/testMNNFromOnnx.py test.onnx DEBUG
log:
Debug Mode: True
onnx/test.onnx
tensor(float)
['/model.3/conv/Conv_output_0']
inputs:
images
onnx/
outputs:
onnx//model.3/conv/Conv_output_0.txt (1, 64, 24, 80)
onnx//model.3/conv/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:30:58] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:30:58] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:30:58] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.3/conv/Conv_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.3/conv/Conv_output_0
/model.3/conv/Conv_output_0: (1, 64, 24, 80, )
TESTERROR /model.3/conv/Conv_output_0 value error : absMaxV:11.974254 - DiffMax 7.639077
Error for output /model.3/conv/Conv_output_0
Save mnn result to .error director
Test Node : /model.3/conv/Conv False
onnx/test.onnx
tensor(float)
['/model.2/cv1/act/Mul_output_0']
inputs:
images
onnx/
outputs:
onnx//model.2/cv1/act/Mul_output_0.txt (1, 32, 48, 160)
onnx//model.2/cv1/act/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:01] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:01] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:01] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/cv1/act/Mul_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/cv1/act/Mul_output_0
/model.2/cv1/act/Mul_output_0: (1, 32, 48, 160, )
TEST_SUCCESS
Test Node : /model.2/cv1/act/Mul True
onnx/test.onnx
tensor(float)
['/model.2/Concat_output_0']
inputs:
images
onnx/
outputs:
onnx//model.2/Concat_output_0.txt (1, 48, 48, 160)
onnx//model.2/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:05] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:05] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:05] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/Concat_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/Concat_output_0
/model.2/Concat_output_0: (1, 48, 48, 160, )
TESTERROR /model.2/Concat_output_0 value error : absMaxV:27.603397 - DiffMax 25.806040
Error for output /model.2/Concat_output_0
Save mnn result to .error director
Test Node : /model.2/Concat False
onnx/test.onnx
tensor(float)
['/model.2/Split_output_0']
inputs:
images
onnx/
outputs:
onnx//model.2/Split_output_0.txt (1, 16, 48, 160)
onnx//model.2/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:08] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:08] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:08] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/Split_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/Split_output_0
/model.2/Split_output_0: (1, 16, 48, 160, )
TEST_SUCCESS
Test Node : /model.2/Split True
Error is between /model.2/Split and /model.2/Concat
onnx/test.onnx
tensor(float)
['/model.2/Split_output_0']
inputs:
images
onnx/
outputs:
onnx//model.2/Split_output_0.txt (1, 16, 48, 160)
onnx//model.2/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:11] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:11] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:11] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/Split_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/Split_output_0
/model.2/Split_output_0: (1, 16, 48, 160, )
TEST_SUCCESS
Test Node : /model.2/Split True
onnx/test.onnx
tensor(float)
['/model.2/Split_output_1']
inputs:
images
onnx/
outputs:
onnx//model.2/Split_output_1.txt (1, 16, 48, 160)
onnx//model.2/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:13] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:13] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:13] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/Split_output_1, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/Split_output_1
/model.2/Split_output_1: (1, 16, 48, 160, )
TEST_SUCCESS
Test Node : /model.2/Split True
onnx/test.onnx
tensor(float)
['/model.2/m.0/Add_output_0']
inputs:
images
onnx/
outputs:
onnx//model.2/m.0/Add_output_0.txt (1, 16, 48, 160)
onnx//model.2/m.0/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:16] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:16] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:16] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/m.0/Add_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/m.0/Add_output_0
/model.2/m.0/Add_output_0: (1, 16, 48, 160, )
TESTERROR /model.2/m.0/Add_output_0 value error : absMaxV:25.796280 - DiffMax 25.911701
Error for output /model.2/m.0/Add_output_0
Save mnn result to .error director
Test Node : /model.2/m.0/Add False
onnx/test.onnx
tensor(float)
['/model.2/Split_output_1']
inputs:
images
onnx/
outputs:
onnx//model.2/Split_output_1.txt (1, 16, 48, 160)
onnx//model.2/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:19] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:19] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:19] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/Split_output_1, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/Split_output_1
/model.2/Split_output_1: (1, 16, 48, 160, )
TEST_SUCCESS
Test Node : /model.2/Split True
onnx/test.onnx
tensor(float)
['/model.2/m.0/cv2/act/Mul_output_0']
inputs:
images
onnx/
outputs:
onnx//model.2/m.0/cv2/act/Mul_output_0.txt (1, 16, 48, 160)
onnx//model.2/m.0/cv2/act/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:22] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:22] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:22] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/m.0/cv2/act/Mul_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/m.0/cv2/act/Mul_output_0
/model.2/m.0/cv2/act/Mul_output_0: (1, 16, 48, 160, )
TESTERROR /model.2/m.0/cv2/act/Mul_output_0 value error : absMaxV:21.914549 - DiffMax 25.643703
Error for output /model.2/m.0/cv2/act/Mul_output_0
Save mnn result to .error director
Test Node : /model.2/m.0/cv2/act/Mul False
onnx/test.onnx
tensor(float)
['/model.2/m.0/cv1/act/Mul_output_0']
inputs:
images
onnx/
outputs:
onnx//model.2/m.0/cv1/act/Mul_output_0.txt (1, 16, 48, 160)
onnx//model.2/m.0/cv1/act/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:25] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:25] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:25] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/m.0/cv1/act/Mul_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/m.0/cv1/act/Mul_output_0
/model.2/m.0/cv1/act/Mul_output_0: (1, 16, 48, 160, )
TESTERROR /model.2/m.0/cv1/act/Mul_output_0 value error : absMaxV:19.549574 - DiffMax 19.519550
Error for output /model.2/m.0/cv1/act/Mul_output_0
Save mnn result to .error director
Test Node : /model.2/m.0/cv1/act/Mul False
onnx/test.onnx
tensor(float)
['/model.2/m.0/cv1/conv/Conv_output_0']
inputs:
images
onnx/
outputs:
onnx//model.2/m.0/cv1/conv/Conv_output_0.txt (1, 16, 48, 160)
onnx//model.2/m.0/cv1/conv/
The device support dot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model...
[10:31:28] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:40: ONNX Model ir version: 8
[10:31:28] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.10/Resize_output_0 has empty input, the index is 1
[10:31:28] /mnt/code/MNN/tools/converter/source/onnx/onnxConverter.cpp:108: Check it out ==> /model.13/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.2/m.0/cv1/conv/Conv_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.2/m.0/cv1/conv/Conv_output_0
/model.2/m.0/cv1/conv/Conv_output_0: (1, 16, 48, 160, )
TESTERROR /model.2/m.0/cv1/conv/Conv_output_0 value error : absMaxV:32.350361 - DiffMax 37.784775
Error for output /model.2/m.0/cv1/conv/Conv_output_0
Save mnn result to .error director
Test Node : /model.2/m.0/cv1/conv/Conv False
Error is between /model.2/Split and /model.2/m.0/cv1/conv/Conv
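For reference, each TESTERROR line above compares an MNN output tensor against the ONNX reference, reporting absMaxV (largest absolute reference value) and DiffMax (largest absolute elementwise difference) under a threshold of 0.01. A minimal numpy sketch of that style of check; the exact formula testMNNFromOnnx uses is an assumption here, and the names `check_outputs`, `ref`, and `got` are hypothetical:

```python
import numpy as np

def check_outputs(onnx_out, mnn_out, threshold=0.01):
    """Compare two tensors the way the log reports them:
    absMaxV is the largest absolute reference value,
    DiffMax is the largest absolute elementwise difference.
    The check fails when DiffMax exceeds threshold * absMaxV."""
    abs_max_v = float(np.max(np.abs(onnx_out)))
    diff_max = float(np.max(np.abs(onnx_out - mnn_out)))
    ok = diff_max <= threshold * abs_max_v
    return ok, abs_max_v, diff_max

# A mismatch on the scale seen in the log (DiffMax far above
# 1% of absMaxV) fails the check.
ref = np.array([1.0, 2.0, 637.0])
got = np.array([1.0, 2.0, 483.0])
ok, abs_max_v, diff_max = check_outputs(ref, got)
```

With differences this large relative to the tensor's magnitude, the failure points at a genuine conversion bug rather than floating-point noise, which is why the debug bisection above narrows it to the span between /model.2/Split and /model.2/m.0/cv1/conv/Conv.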
test.zip