Fix tot update in trainer #37923
Merged
Changes from all commits (6 commits):

- 08e7c02 fix total updates in epoch (efsotr)
- 49f322c add test; fix max_steps (efsotr)
- 060f647 Merge remote-tracking branch 'upstream/main' into fix_tot_update_in_t… (efsotr)
- 19200f0 Merge branch 'main' into fix_tot_update_in_trainer (efsotr)
- b2db871 replace with multi-gpu decorator (efsotr)
- bde667a Merge branch 'main' into fix_tot_update_in_trainer (efsotr)
```diff
@@ -2495,13 +2495,13 @@ def _inner_training_loop(
             step = -1
             epoch_iterator = iter(epoch_dataloader)
             # We chunkify the epoch iterator into gradient accumulation steps `n` batches
-            remainder = num_examples % args.gradient_accumulation_steps
+            remainder = steps_in_epoch % args.gradient_accumulation_steps
             if remainder == 0:
                 remainder = args.gradient_accumulation_steps
             update_step = -1
-            total_updates = steps_in_epoch // args.gradient_accumulation_steps + 1
-            if args.gradient_accumulation_steps == 1:
-                total_updates -= 1
+            total_updates = steps_in_epoch // args.gradient_accumulation_steps + int(
+                remainder < args.gradient_accumulation_steps
+            )
             for _ in range(total_updates):
                 update_step += 1
                 num_batches = args.gradient_accumulation_steps if update_step != (total_updates - 1) else remainder
```
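In plain terms, the fixed loop splits the `steps_in_epoch` batches of an epoch into `total_updates` accumulation chunks, where only the last chunk may be partial. A minimal standalone sketch of that arithmetic (the helper name and example values are made up for illustration, not taken from the Trainer itself):

```python
# Standalone sketch of the chunking arithmetic in the hunk above (example values only).
def chunk_counts(steps_in_epoch: int, gradient_accumulation_steps: int):
    remainder = steps_in_epoch % gradient_accumulation_steps
    if remainder == 0:
        remainder = gradient_accumulation_steps
    # New formula: add one extra (partial) chunk only when the division is not exact.
    total_updates = steps_in_epoch // gradient_accumulation_steps + int(
        remainder < gradient_accumulation_steps
    )
    # The per-chunk batch counts should add up to steps_in_epoch exactly.
    num_batches = [
        gradient_accumulation_steps if u != total_updates - 1 else remainder
        for u in range(total_updates)
    ]
    assert sum(num_batches) == steps_in_epoch
    return total_updates, remainder, num_batches

print(chunk_counts(10, 4))  # (3, 2, [4, 4, 2])
print(chunk_counts(8, 4))   # (2, 4, [4, 4]) -- the old formula produced 3 chunks here
```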
```diff
@@ -5319,7 +5319,11 @@ def set_initial_training_values(
         # Case 2: We have a dataloader length and can extrapolate
         if len_dataloader is not None:
-            num_update_steps_per_epoch = max(len_dataloader // args.gradient_accumulation_steps, 1)
+            num_update_steps_per_epoch = max(
+                len_dataloader // args.gradient_accumulation_steps
+                + int(len_dataloader % args.gradient_accumulation_steps > 0),
+                1,
+            )
             # Case 3: We have a length but are using epochs, we can extrapolate the number of steps
             if epoch_based:
                 max_steps = math.ceil(args.num_train_epochs * num_update_steps_per_epoch)
```

Comment on lines +5322 to +5326 (Member): this seems like the only real change no ?
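The replacement expression is a ceiling division of the dataloader length by the accumulation steps, clamped to at least 1. A small self-contained check with made-up lengths (the helper below is illustrative, not the Trainer API):

```python
import math

def num_update_steps_per_epoch(len_dataloader: int, gradient_accumulation_steps: int) -> int:
    # New formula from the diff: floor division, plus one if there is a partial chunk left over.
    return max(
        len_dataloader // gradient_accumulation_steps
        + int(len_dataloader % gradient_accumulation_steps > 0),
        1,
    )

for length, ga in [(10, 4), (8, 4), (3, 8), (1, 1)]:
    assert num_update_steps_per_epoch(length, ga) == max(math.ceil(length / ga), 1)
    print(length, ga, num_update_steps_per_epoch(length, ga))
# With plain floor division (the old formula), (10, 4) gave 2; the ceiling form gives 3,
# counting the partial last accumulation chunk that still performs an optimizer step.
```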
this should give the same results no before and after but agree that this is a bit strange to use `num_examples` for `remainder` but not for `total_updates`.
When computing the remainder, there was an error where `steps_in_epoch` was mistakenly written as `num_examples`. Here, `num_examples` refers to the size of the dataset, while `steps_in_epoch` is the number of batches in the dataset.
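To make the distinction concrete, a toy calculation with hypothetical sizes (assuming the dataloader keeps the last partial batch):

```python
import math

num_examples = 100              # dataset size (number of samples)
per_device_train_batch_size = 8
gradient_accumulation_steps = 4

# Number of batches the dataloader yields per epoch.
steps_in_epoch = math.ceil(num_examples / per_device_train_batch_size)  # 13

# Batches left over after the full accumulation chunks:
print(steps_in_epoch % gradient_accumulation_steps)  # 1  (correct: 13 = 3*4 + 1)
print(num_examples % gradient_accumulation_steps)    # 0  (counts samples, the wrong quantity)
```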
`num_examples` != `steps_in_epoch`
when `steps_in_epoch` is a multiple of `args.gradient_accumulation_steps`, `total_updates` is incorrectly greater than expected by 1.
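A quick numeric check of that off-by-one, using example values of `steps_in_epoch = 8` and `gradient_accumulation_steps = 4` (chosen for illustration only):

```python
steps_in_epoch = 8
gradient_accumulation_steps = 4

# Old computation (before this PR): unconditionally adds 1 unless accumulation is 1.
old_total_updates = steps_in_epoch // gradient_accumulation_steps + 1
if gradient_accumulation_steps == 1:
    old_total_updates -= 1

# New computation: add the extra chunk only when there is a genuine partial remainder.
remainder = steps_in_epoch % gradient_accumulation_steps or gradient_accumulation_steps
new_total_updates = steps_in_epoch // gradient_accumulation_steps + int(
    remainder < gradient_accumulation_steps
)

print(old_total_updates, new_total_updates)  # 3 2 -- the old value is one update too many
```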
Oh yeah indeed, my bad. Can you share the results of your tests before and after this PR in the description? That would help future readers!
see below comment