
Add support to avoid overwriting trained models #411

Merged 3 commits into constantinpape:main on Nov 14, 2024

Conversation

@anwai98 (Collaborator) commented on Nov 13, 2024

This PR adds optional support to the DefaultTrainer for skipping training when it has already been completed. This is controlled via the fit method's overwrite_training argument, which defaults to True (i.e. the trainers' current behavior). If it is set to False, the trainer inspects the latest.pt checkpoint to verify whether training is already finished. GTG from my side!
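A minimal sketch of how such a check could look, assuming the checkpoint is a dict storing the last completed iteration under an "iteration" key; the class layout and helper names here are illustrative, not the actual implementation in this repository:

```python
import os

import torch


class DefaultTrainer:
    """Sketch of a trainer with the described overwrite_training option."""

    def __init__(self, name, save_root="./checkpoints"):
        self.name = name
        self.checkpoint_folder = os.path.join(save_root, name)

    def _is_training_finished(self, iterations):
        """Check the latest.pt checkpoint to see if training completed."""
        ckpt_path = os.path.join(self.checkpoint_folder, "latest.pt")
        if not os.path.exists(ckpt_path):
            return False
        checkpoint = torch.load(ckpt_path, map_location="cpu")
        # Assumes the checkpoint stores the last completed iteration;
        # training counts as finished once it reaches the requested count.
        return checkpoint.get("iteration", 0) >= iterations

    def _train(self, iterations):
        ...  # the usual training loop, elided here

    def fit(self, iterations, overwrite_training=True):
        # The default (True) keeps the current behavior: always (re)train.
        if not overwrite_training and self._is_training_finished(iterations):
            print(f"Training of '{self.name}' is already complete, skipping.")
            return
        self._train(iterations)
```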

@constantinpape (Owner) commented:

I will think about how this interacts with continuing training from an existing checkpoint.

@anwai98 (Collaborator, Author) commented on Nov 14, 2024

Hi @constantinpape,

I took care of the interaction between continuing training and overwriting trained models (i.e. an error is now raised when overwrite_training is set to False and the user passes a custom checkpoint to continue training).

GTG from my side!
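A sketch of that guard, written as a replacement for the fit method in the earlier sketch; the load_from_checkpoint parameter name is hypothetical:

```python
# Updated fit method for the DefaultTrainer sketch above; the
# "load_from_checkpoint" parameter name is an assumption.
def fit(self, iterations, load_from_checkpoint=None, overwrite_training=True):
    if not overwrite_training and load_from_checkpoint is not None:
        # Resuming from a custom checkpoint contradicts the request
        # not to overwrite an already finished training run.
        raise ValueError(
            "Continuing training from a custom checkpoint requires "
            "overwrite_training=True."
        )
    if not overwrite_training and self._is_training_finished(iterations):
        return  # training is already complete, nothing to do
    self._train(iterations)
```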

@constantinpape constantinpape merged commit c2689f0 into constantinpape:main Nov 14, 2024
2 checks passed