Picodet INT8 is slower than FP32 when running inference with MKLDNN #44075
Comments
Hi! We've received your issue; please be patient while we respond. We will arrange for technicians to answer your questions as soon as possible. Please check again that you have provided a clear problem description, reproduction code, environment & version details, and error messages. You may also look for answers in the official API documentation, FAQ, historical issues, and the AI community. Have a nice day!
Hi @yeliang2258, I am working on improving performance for the INT8 model. But as I mentioned before, this is a very difficult case: the convolutions have very small filters, so avx512_vnni INT8 will not give us much speedup. The performance is worse because of the INT8 conversion overhead.
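To make the overhead argument above concrete, here is a hedged illustration (not Paddle/oneDNN internals): INT8 inference inserts quantize/dequantize steps around operators, and when the operator itself is cheap (e.g. convolutions with very small filters), this fixed overhead can outweigh the INT8 compute savings. The helper names below are hypothetical.

```python
# Minimal sketch of per-tensor symmetric INT8 quantization, the kind of
# conversion that wraps each INT8 operator and adds overhead.

def quantize(x, scale):
    """Map float values to signed int8 codes with saturation."""
    return [max(-128, min(127, round(v / scale))) for v in x]

def dequantize(q, scale):
    """Map int8 codes back to approximate float values."""
    return [v * scale for v in q]

data = [0.5, -1.25, 3.0, -3.0]
scale = max(abs(v) for v in data) / 127.0  # per-tensor symmetric scale

q = quantize(data, scale)        # extra work before the INT8 op
restored = dequantize(q, scale)  # extra work after the INT8 op
print(q)         # [21, -53, 127, -127]
print(restored)  # approximately the original floats
```

Every such round trip costs time regardless of how small the wrapped convolution is, which is why a model dominated by tiny filters can end up slower in INT8 than in FP32.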
Hi @wozna,
Then:
@yeliang2258, this accuracy bug is related to this new quantization method.
@wozna No, the accuracy of the quantized model in the old format is also incorrect.
This PR should fix the issue: #46378. The problem was that even when the output was uint8, we used the int8 data type, which caused a loss of accuracy.
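The accuracy loss described above can be sketched in a few lines (a minimal illustration, not the actual PaddlePaddle code): an unsigned activation, e.g. post-ReLU values quantized into [0, 255], cannot be represented by a signed int8 tensor, whose range is [-128, 127], so everything above 127 saturates.

```python
def saturate(value, lo, hi):
    """Clamp a quantized value to the representable range of the data type."""
    return max(lo, min(hi, value))

# Post-ReLU activations quantized on an unsigned scale: codes in 0..255.
activations = [0, 40, 127, 128, 200, 255]

as_uint8 = [saturate(v, 0, 255) for v in activations]     # lossless here
as_int8 = [saturate(v, -128, 127) for v in activations]   # clips above 127

print(as_uint8)  # [0, 40, 127, 128, 200, 255]
print(as_int8)   # [0, 40, 127, 127, 127, 127]
```

Half of the representable range is lost, which matches the reported symptom: using int8 for a uint8 output degrades accuracy.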
Describe the Bug
Picodet INT8 is slower than FP32 when running inference with MKLDNN.
CPU: Intel(R) Xeon(R) Gold 6271C CPU @ 2.60GHz
Thread num: 8
FP32: 3.09 s
INT8: 3.13 s
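A quick sanity check on the reported timings (a minimal sketch using only the numbers above):

```python
# End-to-end latencies reported in this issue.
fp32_s = 3.09
int8_s = 3.13

speedup = fp32_s / int8_s  # values below 1.0 mean INT8 is slower
slowdown_pct = (int8_s / fp32_s - 1) * 100

print(f"INT8 speedup over FP32: {speedup:.3f}x")   # ~0.987x
print(f"INT8 is {slowdown_pct:.1f}% slower")       # ~1.3% slower
```

So INT8 is roughly 1.3% slower than FP32 here, instead of the speedup one would normally expect from INT8 inference.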
My model and script:
picobug.tar.gz
Additional Supplementary Information
No response