Hello Liang, thanks for sharing the code of your interesting work. I have some questions.
I trained your model for 16384 points on an RTX 3090 with a batch size of 18, using the CD loss instead of the EMD loss. It took around 1 hour to train one epoch. How long did it take you to train the model?
If possible, could you provide a pre-trained model for 16384 points?
Thanks in advance.
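For reference, below is a minimal sketch of the symmetric Chamfer Distance (CD) loss mentioned above, written in plain PyTorch for illustration. It is not the implementation from this repo (which, like most completion codebases, likely relies on an optimized CUDA kernel); the function and variable names are hypothetical.

```python
import torch

def chamfer_distance(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer Distance between two batched point clouds.

    pred: (B, N, 3) predicted points
    gt:   (B, M, 3) ground-truth points
    """
    # Pairwise squared Euclidean distances: (B, N, M)
    dists = torch.cdist(pred, gt, p=2).pow(2)
    # For each predicted point, distance to its nearest ground-truth point
    pred_to_gt = dists.min(dim=2).values.mean(dim=1)  # (B,)
    # For each ground-truth point, distance to its nearest predicted point
    gt_to_pred = dists.min(dim=1).values.mean(dim=1)  # (B,)
    return (pred_to_gt + gt_to_pred).mean()
```

Note that the dense (B, N, M) distance matrix is what makes this naive version expensive at 16384 points (16384² entries per sample), which is part of why optimized CUDA implementations are used in practice.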
Thank you for asking. I have been asked many times about the training time with 16384 points, as well as about the pre-trained model. Unfortunately, training with 16384 points is indeed time-consuming, and my pre-trained models were lost when the workstation cluster nodes were changed.
When I was working on this task (two years ago), I mostly conducted experiments with 2048 or 4096 points, for which training finishes in around one day on a V100 GPU. Results with 2048 points are usually positively correlated with results with 16384 points. I did not train many models with 16384 points because it takes a few days. For the same reason, follow-up works generally prefer 2048 points in their experiments. You could also check:
https://github.com/wutong16/Density_aware_Chamfer_Distance
https://github.com/ZhaoyangLyu/Point_Diffusion_Refinement
Perhaps you could focus on 2048 points first, or consider designing efficient operations to ease the training (this is a genuine research gap).