Hi, I ran most of the CNN models in the ONNX model zoo and achieved similar, reasonable accuracy on the ImageNet test data.
However, for densenet and shufflenet the accuracy is very low. For ShuffleNet in particular, I observe almost the same result labels regardless of which input images I provide. I tested this under Caffe2 (and under MXNet and TVM where feasible), and they all show the same issue. Is some preprocessing step missing? The documentation doesn't say any preprocessing is needed. Or is this due to a problem with the models themselves?
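For what it's worth, torchvision-exported models usually expect the standard ImageNet normalization. Whether the densenet/shufflenet models in the zoo actually expect this is exactly the open question here, so treat the constants below as an assumption, not documented behavior. A minimal sketch of that preprocessing (resize/crop assumed done upstream):

```python
import numpy as np

# Usual torchvision ImageNet normalization constants (an assumption for
# these particular zoo models -- their docs don't specify preprocessing).
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(img_hwc_uint8):
    """Convert an HxWx3 uint8 RGB image to a 1x3xHxW float32 NCHW tensor."""
    x = img_hwc_uint8.astype(np.float32) / 255.0   # scale to [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD         # per-channel normalize
    x = np.transpose(x, (2, 0, 1))                 # HWC -> CHW
    return x[np.newaxis, ...]                      # add batch dimension

# Example with a dummy 224x224 gray image
dummy = np.full((224, 224, 3), 128, dtype=np.uint8)
batch = preprocess(dummy)
print(batch.shape)  # (1, 3, 224, 224)
```

If the zoo models were exported with this normalization baked into their expectations, skipping it would produce exactly the symptom described above: near-constant predictions regardless of input.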
As reported in previous issues (Resnet50, SqueezeNet and VGG19 represented twice from different source frameworks #82), models such as resnet under onnx/models are not as accurate as those under onnx/models/models/image_classification/. What is the cause of that? In general, I don't see any preprocessing specified for the models under onnx/models. I also observe the following behavior in PyTorch, which could be a hint:
In PyTorch's torchvision there are base models (such as resnet, densenet, squeezenet) and related specific models (such as resnet18, densenet121, etc.); I believe these are the ones exported to onnx/models and /models/models/image_classification/ respectively. Does anyone know the background on why there are separate models in torchvision? Are the 'base' models also ready for use, or only the specific ones? In my tests, most models under https://github.com/onnx/models/tree/master/models/image_classification reach a good level of accuracy.
Just to recap, the core question is how to deploy the densenet and shufflenet models under onnx/models/ with reasonable accuracy. Please help!
Thank you!