
Train VRC on single categories produces bad results #25

Open
Emanuele97x opened this issue May 3, 2022 · 3 comments

@Emanuele97x

Hi! I have retrained your model on another dataset (derived from ShapeNet) on single categories, i.e. for each class I train a separate model. I found that VRC matches ECG's performance on some categories and performs even worse on others. There is no overall improvement like the one demonstrated by training on all categories. What do you think could be the reason behind this behaviour? I trained with the point cloud resolution set to 2048 and the same settings specified in your config file.

@paul007pl
Owner

Well… many reasons could lead to the poor results you report, since I do not really know the details of your experiments or your dataset.
Here we could just discuss one question: "Does a relatively more complicated model always outperform a simpler model on all tasks (especially a simple task)?"
Perhaps not, right?
If a simpler model already satisfies your requirements and achieves good performance, a more complicated model may not bring further improvements. What's more, the more complicated one may be harder to train (less stable) and more prone to overfitting. Other specific reasons may also cause the performance drop. Maybe there are not enough 3D CAD models in each single category? You could also try using more 3D point cloud pairs, or the data augmentations reported in our ICLR 2022 paper:
A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion
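
(For concreteness, below is a minimal sketch of typical point cloud augmentations, e.g. random rotation, scaling, and jitter. This is a generic illustration under my own assumptions, not the specific augmentation scheme from the ICLR 2022 paper; for completion tasks the same rotation and scale are usually applied to both the partial input and its ground truth.)

```python
import numpy as np

def augment_pair(partial, complete, scale_range=(0.8, 1.2), jitter_sigma=0.01):
    """Apply the same random rotation/scale to a partial/complete pair,
    plus small per-point jitter. Inputs are (N, 3) arrays."""
    # Random rotation about the up (y) axis.
    theta = np.random.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])
    scale = np.random.uniform(*scale_range)

    partial_aug = (partial @ rot.T) * scale
    complete_aug = (complete @ rot.T) * scale

    # Jitter only the partial input so the ground truth stays clean.
    partial_aug += np.random.normal(0.0, jitter_sigma, size=partial_aug.shape)
    return partial_aug, complete_aug
```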

So, in the current situation, ECG could be a better choice for your experiments. It would be good if you could make further improvements based on ECG, or try to identify the real disadvantages of VRCNet.

@Emanuele97x
Author

HI! Thanks for your feedback, I am using the dataset proposed in https://arxiv.org/abs/2108.00205 that contains a larger number of samples with respect to MVP C3D or PCN. Actually I just wanted to instert VRC in the experiment table in order to have a strong reference for comparison but since I am not getting very good results, I'm trying to understand whether the problem lies in the data difference or in the model. Do you think that VRC could be affected by beign trained on a single category? Or maybe the model implicitly expects point clouds within a certain scale? I'm normalizing inputs to the unit sphere. Now I'm training on the whole set of categories together but since the dataset is huge it will take time..
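
(For reference, unit-sphere normalization as described here typically looks like the sketch below. The function name is illustrative, and the same centroid and radius should be used for the partial input and its ground truth so the pair stays aligned.)

```python
import numpy as np

def normalize_pair_to_unit_sphere(partial, complete):
    """Center on the complete shape's centroid and scale so its farthest
    point lies on the unit sphere; apply the same transform to the partial
    input so the pair stays consistent. Inputs are (N, 3) arrays."""
    centroid = complete.mean(axis=0)
    radius = np.linalg.norm(complete - centroid, axis=1).max()
    return (partial - centroid) / radius, (complete - centroid) / radius
```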

@Fayeben

Fayeben commented May 5, 2022

Hello! When I carried out experiments on the ShapeNet-34/55 datasets proposed in https://github.com/yuxumin/PoinTr, I found that when I trained ECG for 200 or more epochs, the results were better than those of many networks such as CRN, ASFM-Net, and even PoinTr. I generate 8192 points, and the settings are as follows:
[screenshot of training settings attached]
