allow `val_check_interval` to be larger than training dataset size #5413
Comments
- Similar: #4409
- @gan3sh500 yeah, go ahead.
- @cccntu Did you try …
- @rohitgr7 This issue can be fixed by changing the exception to a warning and checking … It seems a bit redundant with …
- yeah, I guess just deprecate …
- @rohitgr7 Sounds good. It'll be a bit cleaner this way. I'll make the changes for this and open a PR.
- @gan3sh500 any update on this?
- Related to this discussion: #6253
- Sorry for the late reply. I use … Using … together with …
- Not exactly priority P0. Moving it to P1.
- Any updates on this? I really don't like training using epochs, so this feature would be quite useful for me.
- @kaushikb11 what is left TODO here?
- Is this change merged? If yes, in which version? It is really useful; I am not sure why it is marked P2.
🚀 Feature
Allow `val_check_interval` to be larger than the number of training batches in one epoch.

Motivation
I am using a small dataset, so instead of specifying `max_epochs` in the `Trainer`, I want to use `max_steps` and evaluate every `val_check_interval` steps. But when `val_check_interval` is larger than the number of batches in the training set, there is an error.
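For context, here is a minimal sketch that reproduces the failure; the model and dataset are hypothetical stand-ins, not code from the issue:

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset


class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        self.log("val_loss", torch.nn.functional.mse_loss(self.layer(x), y))

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


# 64 samples / batch_size 8 = only 8 training batches per epoch.
train_loader = DataLoader(TensorDataset(torch.randn(64, 4), torch.randn(64, 1)), batch_size=8)
val_loader = DataLoader(TensorDataset(torch.randn(16, 4), torch.randn(16, 1)), batch_size=8)

# val_check_interval=100 exceeds the 8 batches in one epoch, so fit()
# raises a MisconfigurationException instead of simply running
# validation every 100 global steps.
trainer = pl.Trainer(max_steps=1000, val_check_interval=100)
trainer.fit(TinyModel(), train_loader, val_loader)
```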
Pitch

`val_check_interval` shouldn't be limited by the number of training batches in one epoch; exceeding it should produce a warning, not an error.

Alternatives
I am currently using a wrapper that turns the dataset into an iterable dataset, which allows me to do this (an `IterableDataset` has no `__len__`, so Lightning cannot compare `val_check_interval` against the epoch length), as sketched below.
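A sketch of such a wrapper, assuming the goal is an endless sample stream bounded only by `max_steps`; the class name and the choice to loop indefinitely are assumptions, not the issue author's exact code:

```python
from torch.utils.data import IterableDataset


class IterableWrapper(IterableDataset):
    """Present a map-style dataset as an IterableDataset.

    Without __len__, Lightning cannot bound val_check_interval by the
    number of training batches per epoch, so the check is skipped.
    """

    def __init__(self, dataset):
        self.dataset = dataset

    def __iter__(self):
        # Loop forever so the "epoch" never ends; training stops when
        # the Trainer reaches max_steps. (Hypothetical design choice.)
        while True:
            for i in range(len(self.dataset)):
                yield self.dataset[i]
```

Passing `DataLoader(IterableWrapper(train_dataset), batch_size=8)` to `trainer.fit` then lets `max_steps` and an integer `val_check_interval` drive the run regardless of the underlying dataset size.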
Additional context