What's the inference speed of MobileNet v2 SSD on CPU, approximately? The paper reports ~200 ms, but I measured around 1 s per image. Is there any way to speed up the inference?
The Caffe2 runtime is faster than PyTorch because it fuses BatchNorm and Conv layers. It's under 200 ms using Caffe2 on my PC, and about 1 s on an 800 MHz ARM CPU.
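For reference, the same BatchNorm-into-Conv fold can be done directly in PyTorch before inference. This is a minimal sketch (not this repo's code, and only valid in eval mode, since it bakes in the BN running statistics):

```python
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm2d into the preceding Conv2d.

    Returns a new Conv2d whose output matches bn(conv(x)) in eval mode.
    """
    fused = nn.Conv2d(
        conv.in_channels, conv.out_channels,
        kernel_size=conv.kernel_size, stride=conv.stride,
        padding=conv.padding, dilation=conv.dilation,
        groups=conv.groups, bias=True,
    )
    with torch.no_grad():
        # Per-output-channel scale: gamma / sqrt(running_var + eps)
        scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
        fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
        conv_bias = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
        fused.bias.copy_((conv_bias - bn.running_mean) * scale + bn.bias)
    return fused
```

Newer PyTorch releases also ship `torch.nn.utils.fusion.fuse_conv_bn_eval`, which performs the same fold, so you may not need to write it by hand.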
Is there any way to increase the speed of CPU inference?
MobileNet v2 SSD in TensorFlow is also about 200 ms.
What are the speed bottlenecks for the PyTorch version? Thanks.
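One way to find out where the PyTorch version spends its time is a quick profiler pass. A minimal sketch, using torchvision's `mobilenet_v2` as a stand-in for the actual SSD model (the model and the 300x300 input size are assumptions, not this repo's code):

```python
import torch
import torchvision

# Assumption: torchvision's mobilenet_v2 stands in for the repo's SSD model.
model = torchvision.models.mobilenet_v2().eval()
x = torch.randn(1, 3, 300, 300)  # SSD-style 300x300 input, batch of 1

with torch.no_grad(), torch.autograd.profiler.profile() as prof:
    model(x)

# Ops sorted by total CPU time; depthwise convolutions are often near the top.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```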