What is the feature?
Currently, the inference speeds of the several models I have tested are slower than those of other libraries. In an attempt to speed up inference, I have gone down a few avenues and found that:
- MMDeploy's Docker image is built on an older version of Ubuntu that is no longer supported on most HPC clusters.
- Batching is not supported during inference for either top-down or bottom-up models (see the sketch below).
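For anyone who wants to reproduce the bottleneck, here is a minimal sketch of the per-frame pattern the demo scripts follow (paraphrased, not the exact demo code; the config, checkpoint, and video paths are placeholders):

```python
import cv2
from mmpose.apis import inference_topdown, init_model

# Placeholder config/checkpoint paths -- substitute your own.
model = init_model('my_pose_config.py', 'my_checkpoint.pth', device='cuda:0')

cap = cv2.VideoCapture('mouse_video.mp4')  # placeholder video
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # One forward pass per frame: inference_topdown takes a single image
    # (plus optional bboxes). inference_bottomup follows the same
    # one-image-at-a-time pattern, so neither path batches across frames.
    results = inference_topdown(model, frame)
cap.release()
```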
Any other context?
I tested a model from another library, DeepLabCut (DLC), which is widely used in the animal-tracking field, against a bottom-up network and various top-down networks within MMPose. I ran these models on a 1-hour video of a single mouse, tracking the nose and tail-base keypoints. DeepLabCut ended up being twice as fast as any MMPose model I tested.

![speed_accuracy (1)](https://private-user-images.githubusercontent.com/82849903/402692257-bac61060-2146-4085-b36d-93206f629808.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkzNTA2ODcsIm5iZiI6MTczOTM1MDM4NywicGF0aCI6Ii84Mjg0OTkwMy80MDI2OTIyNTctYmFjNjEwNjAtMjE0Ni00MDg1LWIzNmQtOTMyMDZmNjI5ODA4LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTAyMTIlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUwMjEyVDA4NTMwN1omWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWYxYWIxZGE1YTkxYWI5ZjczMTQxYTgzYTdjN2YwMDRlNjQ5YmRmYWNjZjlkZDEwNjdhNmQ1NjVkYzY3YWFiNGUmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.K7QE5RhUWJuOMoboOJIDVtoTMWCAyc6EMcGRhDnP7sU)
I have adjusted bottomup_demo.py and topdown_demo_with_mmdet.py to skip writing out the labeled video file, since I am solely trying to benchmark inference speed. I also set the visualizer to None in the top-down script to work around an error about the number of keypoints (the default is 17, while my model has 2); see the sketch below.
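Roughly, the benchmark-only loop I ended up timing looks like this (a simplified sketch, not the exact demo code; the detector step is omitted and the paths are placeholders):

```python
import time

import cv2
from mmpose.apis import inference_topdown, init_model

# Placeholder config/checkpoint paths -- substitute your own.
model = init_model('my_pose_config.py', 'my_checkpoint.pth', device='cuda:0')
visualizer = None  # sidesteps the 17-vs-2 keypoint mismatch error

cap = cv2.VideoCapture('mouse_video.mp4')  # placeholder video
n_frames, start = 0, time.perf_counter()
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = inference_topdown(model, frame)
    n_frames += 1
    # deliberately no visualizer.add_datasample(...) and no
    # video_writer.write(...) here, so only inference is timed
cap.release()

elapsed = time.perf_counter() - start
print(f'{n_frames} frames in {elapsed:.1f}s ({n_frames / elapsed:.1f} FPS)')
```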
Thanks for the great library that makes it easy to test many different models, and thanks in advance for any input.