Evaluation #4
Hi Seonghyun, for the evaluation, this gap is quite large; perhaps you can visualize the results first to see whether the implementation is correct (visually right or wrong). I am not sure whether the model code and the pretrained model are matched in your implementation, since I have updated it once. Another thing: in PROX, several samples are completely wrong (which leads to extremely large errors), and I did not use those data. Maybe you can also check this.
Sorry, I have just noticed that you are using I think I am using
Hi, if torch.abs(prediction-target).mean() is used, For visualization, I will refer to the linked repository. Thanks for the sincere reply. Seonghyun
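For concreteness, here is a minimal sketch of the L1 metric being discussed, torch.abs(prediction-target).mean(), applied separately to each quantity. The tensor shapes and variable names are illustrative assumptions (N frames, 3-D translation, 32-D VPoser latent pose), not the repository's actual evaluation code:

```python
import torch

def l1_error(prediction: torch.Tensor, target: torch.Tensor) -> float:
    # Mean absolute (L1) error, i.e. torch.abs(prediction - target).mean()
    return torch.abs(prediction - target).mean().item()

# Illustrative shapes (assumptions, not the repository's actual layout).
N = 4
pred_trans, gt_trans = torch.zeros(N, 3), torch.ones(N, 3)
pred_pose, gt_pose = torch.zeros(N, 32), torch.zeros(N, 32)

print(l1_error(pred_trans, gt_trans))  # 1.0
print(l1_error(pred_pose, gt_pose))    # 0.0
```

Note that this metric averages over all frames, so even a handful of frames with corrupted pseudo-gt can dominate the reported number.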
I am wondering which data you don't use. Can you provide more information? |
@Silverster98 have you found which data they use/don't use for the evaluation? |
@seonghyunkim1212 what values did you get for each error? |
Hi @nicolasugrinovic, I remember that when we observed the distribution of the errors, we found some were extremely large. We then found that those PROX pseudo-gt were actually wrong, so we ignored them.
@jiashunwang thanks for replying. Ok got it. So do you have a list stored somewhere of the exact frames you ignored? Or maybe you used a certain error threshold to ignore these frames with bad pseudo-gt? |
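The thread does not confirm whether a fixed list of frames or an error threshold was used; as a sketch only, threshold-based filtering of frames with bad pseudo-gt could look like this (the threshold value is a placeholder assumption):

```python
import torch

def keep_valid_frames(per_frame_error: torch.Tensor, threshold: float) -> torch.Tensor:
    # Indices of frames whose error stays below the threshold; frames with
    # extremely large error (e.g. wrong PROX pseudo-gt) are dropped.
    # The threshold here is a placeholder, not a value from the thread.
    return (per_frame_error < threshold).nonzero(as_tuple=True)[0]

errors = torch.tensor([5.0, 4000.0, 6.0, 1e6])
print(keep_valid_frames(errors, threshold=100.0).tolist())  # [0, 2]
```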
@nicolasugrinovic |
@jiashunwang In the code above, I see that the evaluation of the poses is computed in the VPoser space, over the 32-D vector. Are the numbers reported in the paper computed like that? Or do you use SMPL params to get the L1 error instead?
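To illustrate why this question matters: the same predictions generally give different L1 numbers depending on whether the error is taken in the 32-D latent space or on decoded pose parameters. The decoder below is a hypothetical linear stand-in for VPoser's learned decoder, used only to show the discrepancy:

```python
import torch

# Hypothetical stand-in for VPoser's decoder (32-D latent -> pose parameters).
# The real decoder is a learned network; this placeholder only demonstrates
# that the two measurement spaces yield different L1 values in general.
def decode(latent: torch.Tensor) -> torch.Tensor:
    return 2.0 * latent

pred_latent, gt_latent = torch.zeros(1, 32), torch.ones(1, 32)

latent_err = torch.abs(pred_latent - gt_latent).mean()                  # in latent space
param_err = torch.abs(decode(pred_latent) - decode(gt_latent)).mean()   # on decoded params

print(latent_err.item(), param_err.item())  # 1.0 2.0
```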
@nicolasugrinovic |
@jiashunwang Ok, thanks |
Hi
I have a question about evaluation.
I implemented the evaluation code for the evaluation split of the PROX dataset.
In Table 1 of the paper (Ours w/o opt), translation, orientation, and pose errors are reported as 6.91, 9.71, and 41.17, respectively.
The results of the code I implemented are 59.02, 60.23, and 1459.95, respectively.
What's wrong with my implementation?
Thanks, Seonghyun