
Bug in vgg19 model #90

Closed
ghost opened this issue Aug 9, 2018 · 7 comments


ghost commented Aug 9, 2018

Revision : https://s3.amazonaws.com/download.onnx/models/opset_8/vgg19.tar.gz

The model contains the following two operators:

%vgg0_pool4_fwd = MaxPool[kernel_shape = [2, 2], pads = [0, 0, 0, 0], strides = [2, 2]]
%vgg0_dense0_fwd = Gemm[alpha = 1, beta = 1, transA = 0, transB = 1](%vgg0_pool4_fwd, %vgg0_dense0_weight, %vgg0_dense0_bias)

The tensor inputs to the Gemm operator have incompatible shapes:
%vgg0_pool4_fwd (output of MaxPool) - shape: [1L, 512L, 7L, 7L]
%vgg0_dense0_weight (initializer) - shape: [4096L, 25088L]

The above two shapes are not compatible for matrix multiplication.

Note: The shape inference was done using the ONNX infer_shapes() utility.
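
For reference, a minimal sketch of how these shapes can be reproduced with ONNX shape inference (the local path to the extracted model is an assumption):

```python
import onnx
from onnx import shape_inference

# Assumed local path to the model extracted from the tarball linked above.
model = onnx.load("vgg19/vgg19.onnx")

# Run ONNX shape inference and print each inferred value shape; this is where
# the MaxPool output ([1, 512, 7, 7]) vs. Gemm weight ([4096, 25088]) mismatch shows up.
inferred = shape_inference.infer_shapes(model)
for value_info in inferred.graph.value_info:
    dims = [d.dim_value for d in value_info.type.tensor_type.shape.dim]
    print(value_info.name, dims)
```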


ghost commented Aug 16, 2018

Ping!

@ankkhedia
Contributor

@ranjith-jr All ONNX models here have been converted from different frameworks, and a lot depends on how the native framework's operators were mapped to ONNX operators.

The above model was converted from Caffe, so I would look at how the ONNX operators mentioned above map to Caffe operators. Since ONNX does not yet cover every operator, converters usually map to the closest operator available, and taking a look at the converter might help answer your question.


ghost commented Aug 17, 2018

@ankkhedia Thanks for the reply.

Even though the models come from different frameworks, isn't the philosophy behind ONNX to provide a universal exchange format? The model in question here does not conform to the ONNX standard.

Shouldn't this be considered a bug in the model generation and fixed accordingly?

@ankkhedia
Contributor

@ranjith-jr
I do think this is a bug in the converter, but some operators are not present in ONNX at all, and operators from the base framework get mapped to the nearest ONNX operator. The above problem can be alleviated as operator coverage in ONNX increases.

@Flamefire

I noticed the same, and this IS a bug in the model (and therefore in the converter): it is missing a "Flatten" operator between the MaxPool and the Gemm.

See also onnx/onnx#1101
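
For anyone stuck on an older copy of the model, a minimal sketch of this workaround, assuming the node names from the shape dump above and an assumed local file path (insert a Flatten between the MaxPool output and the Gemm):

```python
import onnx
from onnx import helper

model = onnx.load("vgg19/vgg19.onnx")  # assumed local path
graph = model.graph

# Create a Flatten node that collapses [1, 512, 7, 7] into [1, 25088],
# which matches the Gemm weight [4096, 25088] with transB = 1.
flatten = helper.make_node(
    "Flatten",
    inputs=["vgg0_pool4_fwd"],
    outputs=["vgg0_pool4_fwd_flat"],
    axis=1,
)

# Rewire the Gemm to consume the flattened tensor and insert the new node
# just before it so the graph stays topologically sorted.
for i, node in enumerate(graph.node):
    if node.op_type == "Gemm" and node.input[0] == "vgg0_pool4_fwd":
        node.input[0] = "vgg0_pool4_fwd_flat"
        graph.node.insert(i, flatten)
        break

onnx.save(model, "vgg19_fixed.onnx")
```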

@ankkhedia
Contributor

The issue has been fixed and a new model has been uploaded to the model zoo, so I'm closing this issue.
Please feel free to re-open if it was closed in error.

@luan1412167

@ankkhedia I'm facing the same error. My model was converted from MXNet to ONNX. Can you give me some suggestions on how to fix it?
2019-10-08 14:14:34.879614128 [E:onnxruntime:, sequential_executor.cc:165 Execute] Non-zero status code returned while running PRelu node. Name:'relu0' Status Message: /home/luandd/project_company/face_rec/onnxruntime/onnxruntime/core/providers/cpu/math/element_wise_ops.h:329 void onnxruntime::BroadcastIterator::Init(int64_t, int64_t) axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1. 64 by 112
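
This particular PRelu failure isn't diagnosed in this thread, but one common cause in MXNet-exported models is a 1-D slope initializer of shape [C] that onnxruntime cannot broadcast against NCHW input. A hedged sketch of reshaping the slope to [C, 1, 1] (node name 'relu0' taken from the log above; the file path is an assumption):

```python
import onnx
from onnx import numpy_helper

model = onnx.load("model.onnx")  # assumed path to the MXNet-exported model
graph = model.graph

# Locate the PRelu node named in the error log and its slope tensor (second input).
prelu = next(n for n in graph.node if n.op_type == "PRelu" and n.name == "relu0")
slope_name = prelu.input[1]

for init in graph.initializer:
    if init.name == slope_name:
        slope = numpy_helper.to_array(init)
        if slope.ndim == 1:
            # Reshape [C] -> [C, 1, 1] so it broadcasts over NCHW activations.
            init.CopyFrom(numpy_helper.from_array(slope.reshape(-1, 1, 1), slope_name))
        break

onnx.save(model, "model_fixed.onnx")
```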
