Fix adapters and ptuning for amp O2 #7285

Merged: arendu merged 14 commits into main from guyueh1/fix_adapters_and_ptuning_for_ampO2_rebased on Aug 22, 2023.
Conversation
* Under megatron_amp_O2, transform the adapter modules to low precision after instantiation (a sketch of the idea follows). Signed-off-by: Guyue Huang <[email protected]>. Conflicts: nemo/collections/nlp/modules/common/megatron/adapters/parallel_adapters.py
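A minimal sketch of the casting idea in plain PyTorch; the helper name and the toy bottleneck adapter are illustrative stand-ins, not NeMo's actual API:

```python
import torch
import torch.nn as nn

def cast_adapter_for_amp_o2(adapter: nn.Module, dtype: torch.dtype) -> nn.Module:
    # Cast the adapter's parameters and buffers to the half dtype used by the
    # O2-wrapped base model, so forward passes do not mix fp32 and fp16/bf16.
    return adapter.to(dtype=dtype)

# Hypothetical bottleneck adapter, cast right after instantiation.
adapter = nn.Sequential(nn.Linear(1024, 32, bias=False), nn.Linear(32, 1024, bias=False))
adapter = cast_adapter_for_amp_o2(adapter, torch.bfloat16)
assert all(p.dtype is torch.bfloat16 for p in adapter.parameters())
```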
* Fix the first_stage_of_pipeline detection for half models, and fix the freezing of InferenceTable for half models (see the unwrapping sketch below). Signed-off-by: Guyue Huang <[email protected]>
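The detection fix matters because the amp O2 half-precision wrapper adds an extra `.module` indirection. A sketch of the unwrapping pattern, assuming a wrapper exposing `.module` and an embedding owned only by the first pipeline stage (both attribute names are assumptions, not NeMo's exact ones):

```python
import torch.nn as nn

def first_stage_of_pipeline(model: nn.Module) -> bool:
    # Half (O2) models are wrapped, adding one more .module level;
    # unwrap before probing for the attribute that only stage 0 owns.
    core = model.module if hasattr(model, "module") else model
    return hasattr(core, "embedding")

class _Stage0(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(8, 4)

assert first_stage_of_pipeline(_Stage0())
```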
* When unfreezing adapters, we explicitly set the inference embedding table in the prompt encoder to be untrainable (sketched below). Signed-off-by: Guyue Huang <[email protected]>
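A runnable sketch of that freeze, with a hypothetical stand-in for the prompt encoder (the class and attribute names are assumptions):

```python
import torch.nn as nn

class PromptEncoder(nn.Module):
    # Hypothetical stand-in: a trainable layer plus an inference-time embedding table.
    def __init__(self, num_virtual_tokens: int, hidden: int):
        super().__init__()
        self.mlp = nn.Linear(hidden, hidden)
        self.inference_table = nn.Embedding(num_virtual_tokens, hidden)

def unfreeze_for_ptuning(encoder: PromptEncoder) -> None:
    # Unfreeze everything, then explicitly re-freeze the inference table so it
    # never receives gradients (or optimizer state) during training.
    for p in encoder.parameters():
        p.requires_grad = True
    encoder.inference_table.weight.requires_grad = False

enc = PromptEncoder(num_virtual_tokens=16, hidden=64)
unfreeze_for_ptuning(enc)
assert not enc.inference_table.weight.requires_grad
```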
* Signed-off-by: Guyue Huang <[email protected]>
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
blahBlahhhJ reviewed on Aug 21, 2023, commenting on nemo/collections/nlp/models/language_modeling/megatron_gpt_peft_models.py (since resolved).
* Signed-off-by: jasonwan <[email protected]>
* Signed-off-by: Guyue Huang <[email protected]>
* Merge …github.com:NVIDIA/NeMo into guyueh1/fix_adapters_and_ptuning_for_ampO2_rebased
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Signed-off-by: jasonwan <[email protected]>
* …ebased. Signed-off-by: jasonwan <[email protected]>
* Signed-off-by: jasonwan <[email protected]>
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
arendu approved these changes on Aug 22, 2023:
LGTM!!
styagi130 pushed a commit to styagi130/NeMo that referenced this pull request on Aug 23, 2023:
* Transform adapter modules to fp16/bf16 under amp_O2
* Under megatron_amp_O2, transform the adapter modules to low precision after instantiation (Signed-off-by: Guyue Huang <[email protected]>; Conflicts: nemo/collections/nlp/modules/common/megatron/adapters/parallel_adapters.py)
* Fix ptuning under amp O2
* Fix the first_stage_of_pipeline detection for half models
* Fix the freezing of InferenceTable for half models (Signed-off-by: Guyue Huang <[email protected]>)
* Fix MegatronGPTAdapterPTuningModel
* When unfreezing adapters, explicitly set the inference embedding table in the prompt encoder to be untrainable (Signed-off-by: Guyue Huang <[email protected]>)
* Add comments for feature explanation (Signed-off-by: Guyue Huang <[email protected]>)
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* fix ptuning and lora model_parallel_config (Signed-off-by: jasonwan <[email protected]>)
* Put the casting of adapters in their instantiation (Signed-off-by: Guyue Huang <[email protected]>)
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* small fix for state dict (Signed-off-by: jasonwan <[email protected]>)
* optional model_parallel_config (Signed-off-by: jasonwan <[email protected]>)
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci

---------
Signed-off-by: Guyue Huang <[email protected]>
Signed-off-by: jasonwan <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: jasonwan <[email protected]>
Signed-off-by: Siddharth Tyagi <[email protected]>
Signed-off-by: Siddharth Tyagi <[email protected]>
dorotat-nv pushed a commit to dorotat-nv/NeMo that referenced this pull request on Aug 24, 2023, with the same squashed commit message as above (Signed-off-by: dorotat <[email protected]>).
rohitrango pushed a commit to rohitrango/NeMo that referenced this pull request on Jun 25, 2024, with the same squashed commit message as above.
What does this PR do?
Fixes the issues that arise when using megatron_amp_O2 with PEFT models (adapters, LoRA, and p-tuning).
Collection: NLP (all changed files are under nemo/collections/nlp)
Changelog
Usage
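A hedged sketch of enabling the fixed path; the config keys and the `peft_scheme` value are assumptions based on NeMo's Hydra-style PEFT tuning configs (real configs live under examples/nlp/language_modeling/tuning/conf/, path also an assumption), not a verified recipe:

```python
from omegaconf import OmegaConf

# Hypothetical minimal slice of a NeMo PEFT tuning config.
cfg = OmegaConf.create({
    "trainer": {"precision": "bf16"},
    "model": {"megatron_amp_O2": False, "peft": {"peft_scheme": "lora"}},
})
cfg.model.megatron_amp_O2 = True  # the O2 path this PR fixes for adapters/p-tuning
print(OmegaConf.to_yaml(cfg))
```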
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The contributor guidelines list specific people who can review PRs in various areas.
Additional Information