Hi Maria, thank you for your revolutionary work!
Recently, I encountered an issue with stitch prediction using nn/evaluation_scripts/predict_per_example.py; it appears to be exactly the same problem as in #3. Specifically, when I use the same test sample (for example, dress_sleeveless_2550/dress_sleeveless_6VN3H6FMOY from the seen test split), the stitch predictions from predict_per_example.py are completely different from those produced by nn/evaluation_scripts/on_test_set.py. The result from predict_per_example.py is quite bad (always only one stitch per garment), while the one from the provided evaluation script on_test_set.py is quite good (it looks correct when visualized in 3D). The shape predictions from both scripts are good; only the stitches are affected.
My questions are: 1. Since the official README has no instructions for predict_per_example.py, can it be used to run inference on a single sample/folder without ground truth? 2. What do you think could cause the different stitch predictions?
Yes, it's the main purpose of this script, and I would expect it to work 🤔
It is indeed strange that the two scripts produce different outputs. I need more context to understand the problem:
2.1) Please share the commands you run the scripts with, including all the command-line arguments you supply.
2.2) Please share the outputs (saved files) for dress_sleeveless_6VN3H6FMOY from both scripts.
@maria-korosteleva Thank you for your reply!
These are the commands I used to run each script.
Running on_test_set.py:
python nn/evaluation_scripts/on_test_set.py -sh models/att/att.yaml -st models/att/stitch_model.yaml --predict
Running predict_per_example.py:
python nn/evaluation_scripts/predict_per_example.py -sh models/att/att.yaml -st models/att/stitch_model.yaml --file /home/jinleyang/Projects/data/garment/dress_sleeveless_2550/dress_sleeveless_6VN3H6FMOY/dress_sleeveless_6VN3H6FMOY_sim.obj
The checkpoints and configs are the same as in the original code, and the output of each script is attached. (For on_test_set.py, I only include the folder for dress_sleeveless_6VN3H6FMOY, because the command generates results for all samples in the seen test split.) NeuralTailor_testing.zip
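For reference, this is how I compared the stitch counts in the two outputs. It is a minimal sketch that assumes both scripts save a sewing-pattern specification JSON with a pattern → stitches list; the file paths below are placeholders for the actual output files, not names produced by the scripts:

```python
import json

def count_stitches(spec_path):
    """Return the number of stitches in a sewing-pattern specification JSON."""
    with open(spec_path) as f:
        spec = json.load(f)
    # Assumption: stitches are stored under pattern -> stitches, as in the dataset's specification format
    return len(spec.get("pattern", {}).get("stitches", []))

# Placeholder paths -- substitute the specification files each script actually saved
per_example_out = "predict_out/dress_sleeveless_6VN3H6FMOY/specification.json"
on_test_set_out = "test_set_out/dress_sleeveless_6VN3H6FMOY/specification.json"

print("predict_per_example.py:", count_stitches(per_example_out), "stitches")
print("on_test_set.py:", count_stitches(on_test_set_out), "stitches")
```

With the attached files, the predict_per_example.py output always ends up with a single stitch, while the on_test_set.py output has the expected number for this garment.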