Distributed checkpointing with mcore GPT #7116
Merged
Conversation
aklife97 reviewed on Aug 26, 2023
aklife97 reviewed on Aug 26, 2023
aklife97 approved these changes on Aug 28, 2023
LGTM, thank you!
This was referenced Aug 30, 2023
rohitrango pushed a commit to rohitrango/NeMo that referenced this pull request on Jun 25, 2024:
* start adding gpt from megatron core path. Signed-off-by: ericharper <[email protected]>
* set model parallel config. Signed-off-by: ericharper <[email protected]>
* use model parallel config object. Signed-off-by: ericharper <[email protected]>
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* update args. Signed-off-by: ericharper <[email protected]>
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* set vp size to none if it is 1. Signed-off-by: ericharper <[email protected]>
* set vp size to none if it is 1. Signed-off-by: ericharper <[email protected]>
* add TransformerConfig. Signed-off-by: ericharper <[email protected]>
* start updating to TransformerConfig. Signed-off-by: ericharper <[email protected]>
* add todo. Signed-off-by: ericharper <[email protected]>
* revert to model parallel config. Signed-off-by: ericharper <[email protected]>
* add hidden_size to model_parallel_config. Signed-off-by: ericharper <[email protected]>
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* remove imports. Signed-off-by: ericharper <[email protected]>
* revert. Signed-off-by: ericharper <[email protected]>
* remove import. Signed-off-by: ericharper <[email protected]>
* small clean up. Signed-off-by: ericharper <[email protected]>
* update hidden size in peft base model, add mcore commit to jenkins. Signed-off-by: ericharper <[email protected]>
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* update module args. Signed-off-by: ericharper <[email protected]>
* add config obj to flash attention tests. Signed-off-by: ericharper <[email protected]>
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* remove args. Signed-off-by: ericharper <[email protected]>
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* remove sequence parallel arg. Signed-off-by: ericharper <[email protected]>
* update args. Signed-off-by: ericharper <[email protected]>
* add config to self. Signed-off-by: ericharper <[email protected]>
* update args. Signed-off-by: ericharper <[email protected]>
* update args. Signed-off-by: ericharper <[email protected]>
* update args. Signed-off-by: ericharper <[email protected]>
* add config to test. Signed-off-by: ericharper <[email protected]>
* get hidden_size from config. Signed-off-by: ericharper <[email protected]>
* add try except. Signed-off-by: ericharper <[email protected]>
* use default. Signed-off-by: ericharper <[email protected]>
* update config with hidden size. Signed-off-by: ericharper <[email protected]>
* remove arg. Signed-off-by: ericharper <[email protected]>
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* comment out jenkins test. Signed-off-by: ericharper <[email protected]>
* revert import. Signed-off-by: ericharper <[email protected]>
* build transformer config. Signed-off-by: ericharper <[email protected]>
* add model to provider func. Signed-off-by: ericharper <[email protected]>
* update forward and float16 wrapper. Signed-off-by: ericharper <[email protected]>
* instantiate model parallel config after init model parallel. Signed-off-by: ericharper <[email protected]>
* set virtual rank. Signed-off-by: ericharper <[email protected]>
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* Add GQA config to megatron gpt model (NVIDIA#7096): add GQA config in gpt config file; verify mcore is enabled when using GQA. Signed-off-by: jasonwan <[email protected]>
* revert. Signed-off-by: ericharper <[email protected]>
* update strategy and exp_manager. Signed-off-by: ericharper <[email protected]>
* update model checkpoint. Signed-off-by: ericharper <[email protected]>
* update megatron gpt model. Signed-off-by: ericharper <[email protected]>
* correct var. Signed-off-by: ericharper <[email protected]>
* check for mcore gpt and use gpt model list. Signed-off-by: ericharper <[email protected]>
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* remove model prefix. Signed-off-by: ericharper <[email protected]>
* setup te tp groups. Signed-off-by: ericharper <[email protected]>
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* revert. Signed-off-by: eharper <[email protected]>
* revert. Signed-off-by: eharper <[email protected]>
* add default. Signed-off-by: eharper <[email protected]>
* add default. Signed-off-by: eharper <[email protected]>
* revert. Signed-off-by: eharper <[email protected]>
* update sharded state dict for interleaved. Signed-off-by: eharper <[email protected]>
* update load for interleaved. Signed-off-by: eharper <[email protected]>
* check sharded state dict is nonempty. Signed-off-by: eharper <[email protected]>
* remove import. Signed-off-by: eharper <[email protected]>
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* revert comment. Signed-off-by: eharper <[email protected]>
* inject before checking legacy ckpt. Signed-off-by: eharper <[email protected]>
* revert. Signed-off-by: eharper <[email protected]>
* pop arg for now. Signed-off-by: eharper <[email protected]>
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* revert jenkins change. Signed-off-by: eharper <[email protected]>
* remove device state_dict. Signed-off-by: eharper <[email protected]>
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* reduce batch size for max steps. Signed-off-by: eharper <[email protected]>
* update megatron core commit. Signed-off-by: eharper <[email protected]>
* Integrate dist ckpt with new DistOpt state dict v2 (NVIDIA#7281): integrate new DistOpt state dict; change optimizer fp32_param key. Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>; Co-authored-by: Eric Harper <[email protected]>
* update apex commit. Signed-off-by: eharper <[email protected]>
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci

Signed-off-by: ericharper <[email protected]>
Signed-off-by: jasonwan <[email protected]>
Signed-off-by: eharper <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Jason Wang <[email protected]>
Co-authored-by: mikolajblaz <[email protected]>
This PR needs the mcore dist ckpt for GPT PR to be pushed before merging.
What does this PR do ?
Adds distributed checkpointing when using mcore gpt.
Distributed checkpointing enables training runs to restart automatically with different model parallel configs.
The checkpoint is saved to disk according to the `sharded_state_dict`. Below is a sample of what the checkpoint looks like on disk:
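As a rough illustration (directory and tensor names here are hypothetical, assuming the zarr-based mcore format), the layout is one directory per sharded tensor plus common metadata:

```
checkpoint_dir/
├── metadata.json                                   # checkpoint format metadata
├── common.pt                                       # non-sharded (common) state
├── model.embedding.word_embeddings.weight/         # one directory per sharded tensor
├── model.decoder.layers.self_attention.linear_qkv.weight/
├── model.decoder.final_layernorm.weight/
└── optimizer.state.fp32_param/
```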
Then, inside a module directory, we have the sharded tensor:
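Again hypothetical, assuming the zarr sharding strategy (chunk file names encode each shard's position in the global tensor grid):

```
model.decoder.layers.self_attention.linear_qkv.weight/
├── .zarray     # zarr metadata: global shape, dtype, chunking
├── 0.0         # one chunk (shard) written by a given rank
├── 1.0
└── 2.0
```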
To implement distributed checkpointing for a model, its `sharded_state_dict` has to be defined. This is done in Megatron Core, so in NeMo, if the module is from mcore, we only have to call `module.sharded_state_dict()` (see the sketch below).
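As a minimal sketch of the call pattern, assuming an mcore `GPTModel` instance and the `megatron.core.dist_checkpointing` entry points (the checkpoint path is made up, and the real NeMo plumbing goes through the training strategy and checkpoint connector rather than direct calls like these):

```python
from megatron.core import dist_checkpointing

ckpt_dir = "/results/checkpoints/step=1000"  # hypothetical path

# Save: each rank contributes the shards described by its sharded_state_dict.
sharded_sd = gpt_module.sharded_state_dict()  # gpt_module: an mcore GPTModel
dist_checkpointing.save(sharded_sd, ckpt_dir)

# Load: the same sharding description tells each rank which shards to read,
# so the model-parallel layout at load time may differ from save time.
sharded_sd = gpt_module.sharded_state_dict()
state_dict = dist_checkpointing.load(sharded_sd, ckpt_dir)
gpt_module.load_state_dict(state_dict)
```

Because the load is driven by the *current* `sharded_state_dict`, this is what lets a run restart with a different tensor- or pipeline-parallel configuration than the one that wrote the checkpoint.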
Collection: NLP
Usage
Usage is automatic when using mcore:
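For example, with the config flag that enables the mcore GPT path (flag name assumed from the mcore integration; check `megatron_gpt_config.yaml` in your NeMo version):

```yaml
# assumed excerpt from a NeMo megatron_gpt_config.yaml
model:
  mcore_gpt: True   # build GPT from Megatron Core; dist ckpt is then used automatically
```

When the model is built from Megatron Core, checkpoints are saved and loaded in the distributed (sharded) format with no further changes to the training script.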
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.
Additional Information