
Re model types #29


Closed

GuyTevet opened this issue Feb 25, 2024 · 3 comments
@GuyTevet

GuyTevet commented Feb 25, 2024

Hi @kexul and @ZhengyiLuo ,
What's the difference between the Shape + rotation + keypoint model and the keypoint model?
In case I want to run the same robot (i.e. SMPL beta=0 for all the sequences) over all of AMASS, which one should I use?

P.S. - For the AMASS evaluation, the data for the Shape + rotation + keypoint model loads immediately, yet for the keypoint model it takes forever; is that intended?
(reproduce by running python phc/run_hydra.py learning=im_mcp exp_name=phc_kp_mcp_iccv epoch=-1 test=True env=env_im_getup_mcp robot=smpl_humanoid robot.freeze_hand=True robot.box_body=False env.z_activation=relu env.motion_file=sample_data/amass_isaac_standing_upright_slim.pkl env.models=['output/HumanoidIm/phc_kp_pnn_iccv/Humanoid.pth'] env.num_envs=1024 env.obs_v=7 im_eval=True)

@kexul
Contributor

kexul commented Feb 25, 2024

In the paper, the authors use two kinds of input to train the model: the Shape + rotation + keypoint model and the keypoint model. As far as I can see, the first is used when high-quality motion capture data is available, while the second lets the model use noisy data, e.g. from a text-driven motion generation model (MDM) or keypoints estimated by a human pose estimation model (OpenPose).

From the quantitative results, the model is robust to both kinds of input and the results seem similar. If you will only have keypoint data in your later research, I'd recommend using the keypoint-only model.
[Image: quantitative results table from the paper]

Hopefully this helps! 🤗 I'm not from @ZhengyiLuo's team, so please wait for his official clarification.

@ZhengyiLuo
Owner

ZhengyiLuo commented Feb 26, 2024

Shape + rotation + keypoint model: the model can handle shape variation and takes rotation + keypoint as input.

Keypoint model: only takes keypoints as input and does not handle shape variation.

Both are trained with high-quality data, and I have tried both with video-based demos. The reason I demoed mostly with the keypoint model is that keypoints are easier to work with, and MDM/MeTRabs output keypoints directly.
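To make the distinction concrete, here is a minimal, hypothetical sketch of how the two observation types could be assembled; the function names, argument names, and layout are illustrative only and are not taken from the PHC codebase:

```python
import numpy as np

def build_obs_shape_rot_kp(betas, joint_rotations, keypoints):
    # Hypothetical "Shape + rotation + keypoint" observation:
    # SMPL shape parameters (betas) plus per-joint rotations and
    # 3D keypoint positions, flattened into one vector.
    return np.concatenate([betas.ravel(),
                           joint_rotations.ravel(),
                           keypoints.ravel()])

def build_obs_kp(keypoints):
    # Hypothetical "keypoint" observation: 3D keypoint positions only,
    # with no rotation and no shape (beta) information.
    return keypoints.ravel()
```

The keypoint-only observation matches what sources such as MDM or a pose estimator provide directly, which is why it is the more convenient input for those demos.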

The command:

python phc/run_hydra.py learning=im_mcp exp_name=phc_kp_mcp_iccv epoch=-1 test=True env=env_im_getup_mcp robot=smpl_humanoid robot.freeze_hand=True robot.box_body=False env.z_activation=relu env.motion_file=sample_data/amass_isaac_standing_upright_slim.pkl env.models=['output/HumanoidIm/phc_kp_pnn_iccv/Humanoid.pth'] env.num_envs=1024 env.obs_v=7  im_eval=True headless=True

Runs okay on my end; could it be a multiprocessing issue?

@ZhengyiLuo
Owner

I added os.environ["OMP_NUM_THREADS"] = "1" to the run_hydra script, which seems to fix the multiprocessing issue on my end.
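For reference, a minimal sketch of that workaround; the key point is that the environment variable is set before the heavy imports and before any worker processes are spawned (the surrounding imports here are only illustrative, not the actual contents of run_hydra.py):

```python
import os

# Limit OpenMP to a single thread per process to avoid CPU oversubscription
# when multiprocessing workers are spawned.
os.environ["OMP_NUM_THREADS"] = "1"

# Libraries that read OMP_NUM_THREADS at import time come afterwards.
import numpy as np  # noqa: E402
```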
