What is the feature?

The accuracy of the RTMPose model drops significantly when quantized to INT8 for inference, while accuracy with INT16 inference is acceptable. Are there any solutions?

Any other context?

No response
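A common mitigation for this kind of INT8 accuracy loss is post-training static quantization with a representative calibration set and per-channel weight quantization. Below is a minimal sketch using ONNX Runtime's quantization API, assuming the model has already been exported to ONNX; the file paths, input name, input size, and preprocessing here are illustrative assumptions, not details taken from this issue.

```python
import glob

import cv2
import numpy as np
from onnxruntime.quantization import (
    CalibrationDataReader,
    QuantType,
    quantize_static,
)


class RTMPoseCalibReader(CalibrationDataReader):
    """Feeds preprocessed calibration images to the quantizer one at a time."""

    def __init__(self, image_dir, input_name="input", size=(256, 192)):
        # "input" and 256x192 are assumptions; check your exported ONNX model.
        self.input_name = input_name
        self.height, self.width = size
        self.paths = iter(glob.glob(f"{image_dir}/*.jpg"))

    def get_next(self):
        path = next(self.paths, None)
        if path is None:
            return None  # signals that calibration data is exhausted
        img = cv2.imread(path)
        img = cv2.resize(img, (self.width, self.height))
        # NCHW float32 blob; mean/std normalization is omitted for brevity,
        # but it must match the preprocessing used at training time.
        blob = img.astype(np.float32).transpose(2, 0, 1)[None]
        return {self.input_name: blob}


# Per-channel weight quantization often recovers much of the accuracy
# that per-tensor INT8 quantization loses.
quantize_static(
    "rtmpose_fp32.onnx",  # assumed FP32 ONNX export
    "rtmpose_int8.onnx",
    RTMPoseCalibReader("calib_images"),
    per_channel=True,
    weight_type=QuantType.QInt8,
    activation_type=QuantType.QInt8,
)
```

If calibration with per-channel weights is not enough, another option worth trying is leaving the most accuracy-sensitive layers (for example, the final head) un-quantized via `quantize_static`'s `nodes_to_exclude` argument, so only the robust parts of the network run in INT8.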
I was unable to run the quantized RTMPose because mmdeploy was giving an error. I tried both FP16 and INT8, but neither would run. How were you able to do that?