Hi, I ran your code on the Shelf dataset. However, when the target camera and the reference camera see different numbers of people (e.g., 3 people in the target camera and 2 in the reference camera), an error occurs at line 65 of mvmppe.py.
For example, consider two views at the same time step. The 500th frames of Camera0 and Camera1 contain different numbers of people (e.g., 2 people in Camera0 and 4 in Camera1). Lines 63 and 64 of mvmppe.py reshape the 2D target pose and the 2D reference pose.
The parameter num_persons is the number of people in Camera0, but there are 4 people in Camera1. When reshaping poses_2d_ref (Camera1) with num_persons = 2, the reshape fails with "RuntimeError: shape '[1, 1, 2, 17, 1, 2]' is invalid for input of size 136" (136 = 4 people × 17 joints × 2 coordinates).
num_persons here is not the number of pose detections in each frame. It is the maximum number of persons, which you can see in the config files as "MAX_NUM_PERSONS". Views with fewer than MAX_NUM_PERSONS pose detections have dummy poses (all zeros) padded in, so every view is brought to the same shape.
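To illustrate the padding described above, here is a minimal sketch (using NumPy for simplicity; the function name `pad_poses` and the constant values are illustrative, not from the repository — the real value comes from MAX_NUM_PERSONS in the config):

```python
import numpy as np

MAX_NUM_PERSONS = 4  # hypothetical value; the real one comes from the config
NUM_JOINTS = 17

def pad_poses(poses_2d, max_persons=MAX_NUM_PERSONS):
    """Pad a view's detections with all-zero dummy poses so every view
    ends up with the same shape [max_persons, NUM_JOINTS, 2]."""
    n = poses_2d.shape[0]
    padded = np.zeros((max_persons, NUM_JOINTS, 2), dtype=poses_2d.dtype)
    padded[:n] = poses_2d  # real detections first, zeros after
    return padded

# Camera0 has 2 detections, Camera1 has 4: after padding both are (4, 17, 2),
# so a single num_persons value works for every view.
cam0 = pad_poses(np.random.rand(2, NUM_JOINTS, 2))
cam1 = pad_poses(np.random.rand(4, NUM_JOINTS, 2))
assert cam0.shape == cam1.shape == (MAX_NUM_PERSONS, NUM_JOINTS, 2)
```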
poses_dist = torch.sum((pt - pr) ** 2, dim=-1) # [B, Npt, Npf, Nj, Nd]
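For context, here is a minimal NumPy sketch of the broadcasting this line performs. The dimension sizes are assumptions inferred from the shape in the error message ('[1, 1, 2, 17, 1, 2]') and the comment, not taken from the repository:

```python
import numpy as np

# Hypothetical sizes: batch, target persons, reference persons, joints, Nd
B, Npt, Npr, Nj, Nd = 1, 2, 4, 17, 1

# pt / pr are reshaped so their person axes broadcast against each other,
# with the 2D coordinates in the last axis
pt = np.random.rand(B, Npt, 1, Nj, Nd, 2)   # target poses
pr = np.random.rand(B, 1, Npr, Nj, Nd, 2)   # reference poses

# summing squared differences over the coordinate axis gives the
# pairwise pose distances with shape [B, Npt, Npr, Nj, Nd]
poses_dist = np.sum((pt - pr) ** 2, axis=-1)
assert poses_dist.shape == (B, Npt, Npr, Nj, Nd)
```

This also shows why the reshape upstream must use a common person count: broadcasting only works once both tensors share the padded MAX_NUM_PERSONS-sized axes.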
Is there a way to solve this problem?