MMPose Model Quantization #2424
gusmcarreira asked this question in Q&A
Hi there,
I am trying to speed up inference of my MMPose model, and for that I am using PyTorch's quantization. The code is as follows:
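A minimal sketch along these lines, assuming the MMPose 0.x API (`init_pose_model`, `forward_dummy`), dynamic quantization of the linear layers, and placeholder config/checkpoint paths:

```python
import torch
from mmpose.apis import init_pose_model  # 0.x-style API; MMPose 1.x renames this to init_model

# Placeholder config/checkpoint for a top-down HRNet model (not the actual paths used).
config_file = 'configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w32_coco_256x192.py'
checkpoint_file = 'hrnet_w32_coco_256x192.pth'

# PyTorch's built-in int8 kernels run on CPU, so build the model there.
model = init_pose_model(config_file, checkpoint_file, device='cpu')
model.eval()

# Dynamic quantization: nn.Linear weights are stored as int8 and
# activations are quantized on the fly at inference time.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)

# Quick sanity check: compare raw heatmaps of both models on the same input.
# forward_dummy() is the 0.x TopDown helper normally used for FLOPs counting.
dummy = torch.randn(1, 3, 256, 192)
with torch.no_grad():
    heatmaps_fp32 = model.forward_dummy(dummy)
    heatmaps_int8 = quantized_model.forward_dummy(dummy)
print('max abs diff:', (heatmaps_fp32 - heatmaps_int8).abs().max().item())
```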
But when I then run inference with the quantized model, the results are very different from the original. Any help would be appreciated.
Kind regards