I am currently working on a project that involves testing some human pose estimation models. My principal focus is tracking feet, so I am using AlphaPose, OpenPose, and BlazePose, because these three models each provide three keypoints per foot. My plan is to test these three models on COCO-WholeBody, which has 133 keypoints (3 for each foot).
I have encountered a few questions and problems and would greatly appreciate your guidance. My three main problems and questions are:
Clarification on keypoint indices: I understand that AlphaPose outputs 133 keypoints for whole-body pose estimation. Could you provide a detailed description that clearly maps each keypoint index to its corresponding body part? Specifically, I am looking to identify the keypoints related to the feet. Additionally, is there an example or best practice for isolating and processing these specific keypoints? My problem is that the output is an array of many numbers without any kind of identification, so I do not know how to use it. I used this command for inference: "!python scripts/demo_inference.py --cfg configs/coco_wholebody/resnet/256x192_res50_lr1e-3_2x-regression.yaml --checkpoint pretrained_models/coco_wholebody133_fast50_regression_256x192.pth --indir ../../dataset/val2017/ --outdir examples/res_wholebody_val2017 --eval --vis_fast --save_img".
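For reference, this is how I am currently trying to pull the foot keypoints out of the result JSON. It is only a sketch based on my assumptions: that each entry's "keypoints" field is a flat [x, y, score, ...] list in COCO-WholeBody order, and that the six foot keypoints sit at indices 17-22, right after the 17 body keypoints. Please correct me if the real layout is different.

```python
# Sketch: extract the six foot keypoints from the AlphaPose result JSON.
# Assumptions (please correct me if wrong):
#   * "keypoints" is a flat list [x1, y1, score1, x2, y2, score2, ...]
#     in COCO-WholeBody order (133 keypoints per detection);
#   * the foot keypoints are indices 17-22: left big toe, left small toe,
#     left heel, right big toe, right small toe, right heel.
import json
import numpy as np

FOOT_NAMES = [
    "left_big_toe", "left_small_toe", "left_heel",
    "right_big_toe", "right_small_toe", "right_heel",
]
FOOT_IDX = list(range(17, 23))  # my assumed positions in the 133-keypoint layout

# The JSON file produced in my --outdir (attached here as first_res.json).
with open("first_res.json") as f:
    results = json.load(f)

for det in results:
    kpts = np.asarray(det["keypoints"], dtype=np.float32).reshape(-1, 3)  # (133, 3): x, y, score
    feet = kpts[FOOT_IDX]
    print(det.get("image_id"))
    for name, (x, y, score) in zip(FOOT_NAMES, feet):
        print(f"  {name:15s} x={x:7.1f} y={y:7.1f} score={score:.2f}")
```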
Validation script: I also tried your script validate.py [1] to validate on COCO-WholeBody (val2017), but I get a shape mismatch error (17 != 133). I am using this command: "!python scripts/validate.py --cfg configs/coco_wholebody/resnet/256x192_res50_lr1e-3_2x-regression.yaml --checkpoint pretrained_models/coco_wholebody133_fast50_regression_256x192.pth". I do not understand why, because I read the config file and everything is set up for 133 keypoints, so I do not know what to do.
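To double-check myself, I parsed the config directly and searched it for any joint/keypoint count, and everywhere I could find it says 133. This is only my own sanity check; the key names I search for are just my guess at what validate.py actually reads.

```python
# Sanity check: parse the same YAML I pass to validate.py and print every
# field whose name looks keypoint/joint-related, to confirm it really says 133.
import yaml

CFG = "configs/coco_wholebody/resnet/256x192_res50_lr1e-3_2x-regression.yaml"

def print_matching(node, needles, path=""):
    """Recursively print any key whose name contains one of the needles."""
    if isinstance(node, dict):
        for key, value in node.items():
            here = f"{path}.{key}" if path else str(key)
            if any(n in str(key).upper() for n in needles):
                print(here, "=", value)
            print_matching(value, needles, here)
    elif isinstance(node, list):
        for i, value in enumerate(node):
            print_matching(value, needles, f"{path}[{i}]")

with open(CFG) as f:
    cfg = yaml.safe_load(f)

print_matching(cfg, needles=("JOINT", "KPT", "KEYPOINT"))
```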
Memory error handling: While processing a large number of images (5000 images, COCO val2017), I occasionally encounter memory allocation errors (e.g., numpy.core._exceptions._ArrayMemoryError). Could you provide advice on how the model uses memory and why this error occurs? Does the pipeline keep the results for all 5000 images in RAM, or how does this work?
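As a temporary workaround I split val2017 into chunks and run demo_inference.py once per chunk, so only one chunk's results are ever in memory at a time. This is just my own workaround sketch (the chunk size and temp-dir layout are arbitrary choices of mine; the flags are the ones from my command above, minus --save_img/--vis_fast to keep the runs light). Is this reasonable, or is there a supported way to limit memory use?

```python
# Workaround sketch: run demo_inference.py over val2017 in chunks so the
# results for all 5000 images never have to live in RAM at once.
# Chunk size and directory names are my own choices.
import os
import shutil
import subprocess

SRC_DIR = "../../dataset/val2017"
OUT_DIR = "examples/res_wholebody_val2017_chunks"
CHUNK_SIZE = 500

images = sorted(f for f in os.listdir(SRC_DIR) if f.lower().endswith(".jpg"))

for i in range(0, len(images), CHUNK_SIZE):
    chunk = images[i:i + CHUNK_SIZE]
    chunk_dir = f"/tmp/val2017_chunk_{i // CHUNK_SIZE:03d}"
    os.makedirs(chunk_dir, exist_ok=True)
    for name in chunk:
        # Symlink instead of copying to avoid duplicating the dataset on disk.
        dst = os.path.join(chunk_dir, name)
        if not os.path.exists(dst):
            os.symlink(os.path.abspath(os.path.join(SRC_DIR, name)), dst)

    out_dir = os.path.join(OUT_DIR, f"chunk_{i // CHUNK_SIZE:03d}")
    subprocess.run([
        "python", "scripts/demo_inference.py",
        "--cfg", "configs/coco_wholebody/resnet/256x192_res50_lr1e-3_2x-regression.yaml",
        "--checkpoint", "pretrained_models/coco_wholebody133_fast50_regression_256x192.pth",
        "--indir", chunk_dir,
        "--outdir", out_dir,
        "--eval",
    ], check=True)
    shutil.rmtree(chunk_dir)  # remove only the symlink directory for this chunk
```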
All of this uses the FastPose model for COCO-WholeBody with 133 keypoints [2] (checkpoint: coco_wholebody133_fast50_regression_256x192.pth) and this config file [3] (256x192_res50_lr1e-3_2x-regression.yaml).
Finally, the output of my inference on COCO val2017 looks like the attached .json file (first_res.json).
I greatly appreciate your time and assistance in addressing these questions.
Links:
[1] https://github.com/MVIG-SJTU/AlphaPose/blob/master/scripts/validate.py
[2] https://github.com/MVIG-SJTU/AlphaPose/blob/master/docs/MODEL_ZOO.md
[3] https://github.com/MVIG-SJTU/AlphaPose/blob/master/configs/coco_wholebody/resnet/256x192_res50_lr1e-3_2x-regression.yaml
Attachment: first_res.json