
Conversion of an ONNX model exported from a torch.fx-quantized model fails #2548

Closed
WenmuZhou opened this issue Aug 20, 2023 · 4 comments

Comments

@WenmuZhou

WenmuZhou commented Aug 20, 2023

Hi, I quantized a model with torch.fx and then converted the quantized model to ONNX. When converting that ONNX model with mnnconvert I get the following error:

[19:20:42] :93: These Op Not Support: ONNX::Conv | ONNX::DequantizeLinear | ONNX::QuantizeLinear 
Converted Failed!
Traceback (most recent call last):
  File "/root/miniconda3/bin/mnnconvert", line 8, in <module>
    sys.exit(main())
  File "/root/miniconda3/lib/python3.8/site-packages/MNN/tools/mnnconvert.py", line 49, in main
    dst_model_size = os.path.getsize(arg_dict["MNNModel"]) / 1024.0 / 1024.0
  File "/root/miniconda3/lib/python3.8/genericpath.py", line 50, in getsize
    return os.stat(filename).st_size
FileNotFoundError: [Errno 2] No such file or directory: 'qat_int8.mnn'

Below is a quantized MobileNetV2 model:

qat_int8.zip
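For context, here is a minimal sketch (not the attached model's actual code) of the torch.fx quantization and ONNX export flow described above, assuming torchvision's mobilenet_v2 and a recent torch version; whether the final export succeeds, and which quantized ops end up in the graph, depends on the torch release.

```python
# Minimal sketch of the torch.fx quantization + ONNX export flow (assumed,
# not the attached model's exact code). Model choice and file names are
# illustrative only.
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx
from torchvision.models import mobilenet_v2

model = mobilenet_v2(weights=None).eval()
example_inputs = (torch.randn(1, 3, 224, 224),)

# Insert observers, run a calibration pass, then convert to a quantized module.
qconfig_mapping = get_default_qconfig_mapping("fbgemm")
prepared = prepare_fx(model, qconfig_mapping, example_inputs)
with torch.no_grad():
    prepared(*example_inputs)  # use real calibration data in practice
quantized = convert_fx(prepared)

# The exported graph contains QuantizeLinear/DequantizeLinear nodes, which is
# exactly what the MNNConvert log above reports as unsupported.
torch.onnx.export(quantized, example_inputs, "qat_int8.onnx", opset_version=13)
```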

@zhyy2345

In my experience, you should first convert the torch .pth model to ONNX and then apply fp16 or int8 quantization during the ONNX-to-MNN conversion, rather than quantizing the torch model first and converting afterwards.
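A minimal sketch of this suggested flow (float ONNX export first, quantization during ONNX-to-MNN conversion). The torchvision model, file names, and the --weightQuantBits option are assumptions; check which quantization flags your MNNConvert build actually supports.

```python
# Sketch of the suggested flow: export the *float* model to ONNX, then let
# MNNConvert quantize during ONNX -> MNN conversion. Flags other than the ones
# shown in this thread (-f/--modelFile/--MNNModel/--bizCode) are assumptions.
import subprocess
import torch
from torchvision.models import mobilenet_v2

model = mobilenet_v2(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, (dummy,), "mv2_fp32.onnx", opset_version=13)

subprocess.run(
    ["MNNConvert", "-f", "ONNX",
     "--modelFile", "mv2_fp32.onnx",
     "--MNNModel", "mv2_int8.mnn",
     "--bizCode", "mnn",
     "--weightQuantBits", "8"],   # assumed weight-quantization option
    check=True,
)
```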

@WenmuZhou
Author

In my experience, you should first convert the torch .pth model to ONNX and then apply fp16 or int8 quantization during the ONNX-to-MNN conversion, rather than quantizing the torch model first and converting afterwards.

But then quantization could only be done inside the MNN ecosystem, and I would also have to migrate my training code, which is too much hassle.

@gaoshuzhendanten

gaoshuzhendanten commented Dec 27, 2023

I see that this problem is listed as fixed in the MNN 2.8.0 release. After building that version of MNN, running
./MNNConvert -f ONNX --modelFile xxx.onnx --MNNModel xxx.mnn --bizCode mnn
does convert the corresponding ONNX model to MNN format. However, running inference with an MNN session then produces a segmentation fault, for example:

[screenshot of the segmentation fault]

How can this be resolved?
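Since the screenshot was not preserved, here is a minimal sketch of the MNN session-API inference path being described, to make the failing step concrete. The model path, input shape (1, 3, 224, 224), and output shape (1, 1000) are assumptions for a MobileNetV2-style classifier.

```python
# Sketch of MNN session inference (Python API); shapes and the model path are
# assumptions. The segmentation fault described above reportedly occurs when
# the session is run on the converted quantized model.
import numpy as np
import MNN

interpreter = MNN.Interpreter("xxx.mnn")
session = interpreter.createSession()
input_tensor = interpreter.getSessionInput(session)

data = np.random.rand(1, 3, 224, 224).astype(np.float32)
tmp_input = MNN.Tensor((1, 3, 224, 224), MNN.Halide_Type_Float,
                       data, MNN.Tensor_DimensionType_Caffe)
input_tensor.copyFrom(tmp_input)

interpreter.runSession(session)                  # crash reported around here
output_tensor = interpreter.getSessionOutput(session)

# Copy the output to a host tensor before reading it.
tmp_output = MNN.Tensor((1, 1000), MNN.Halide_Type_Float,
                        np.zeros((1, 1000), dtype=np.float32),
                        MNN.Tensor_DimensionType_Caffe)
output_tensor.copyToHostTensor(tmp_output)
print(np.array(tmp_output.getData()).shape)
```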


Marking as stale. No activity in 60 days.
