RoPE length extrapolation with interpolation #7005
Conversation
Signed-off-by: MaximumEntropy <[email protected]>
[pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
    model = load_from_nemo(MegatronGPTModel, cfg, trainer, gpt_cfg, modify_confg_fn=_modify_config)
elif cfg.model.get("pretrained_checkpoint", None) is not None:
    validate_checkpoint_loading_args(cfg.model.pretrained_checkpoint)
    model = load_from_checkpoint_dir(MegatronGPTModel, cfg, trainer, gpt_cfg, modify_confg_fn=_modify_config)

Check failure (Code scanning / CodeQL): Wrong number of arguments in a call
Check failure (Code scanning / CodeQL): Potentially uninitialized local variable
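Both CodeQL findings relate to `model` being assigned only inside conditional branches. A minimal sketch of a guarded-loading pattern that avoids the "potentially uninitialized local variable" warning; the loader functions here are simplified stand-ins, not the real NeMo signatures:

```python
def load_from_nemo(cfg):
    # stand-in for the real .nemo restore path (assumption, not the NeMo API)
    return "model-from-nemo"

def load_from_checkpoint_dir(cfg):
    # stand-in for the real checkpoint-dir loader (assumption)
    return "model-from-checkpoint"

def load_model(cfg):
    model = None  # bind the name up front so every branch leaves it defined
    if cfg.get("restore_from_path") is not None:
        model = load_from_nemo(cfg)
    elif cfg.get("pretrained_checkpoint") is not None:
        model = load_from_checkpoint_dir(cfg)
    if model is None:
        raise ValueError("Set either restore_from_path or pretrained_checkpoint")
    return model
```

With the explicit `None` initialization and the final check, no code path can fall through to using an unbound `model`.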
nemo/collections/nlp/modules/common/megatron/position_embedding/rotary_position_embedding.py (thread resolved)
examples/nlp/language_modeling/megatron_gpt_continue_training.py (thread resolved)
Thank you!
@@ -60,6 +60,8 @@ model:
  activations_checkpoint_num_layers: null # not used with 'selective'
  answer_only_loss: False # not used right now
  gradient_as_bucket_view: False
+ seq_len_interpolation_factor: null # if not None, seq_len_interpolation_factor will match the base model's value
Add some explanation of how the interpolation factor translates to longer sequences, e.g. factor = 2 means 2x the sequence length.
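For intuition, here is a minimal NumPy sketch of linear position interpolation in RoPE (illustrative only, not the NeMo implementation): positions are divided by the factor, so factor = 2 squeezes a 2x-longer sequence into the position range the base model was trained on.

```python
import numpy as np

def rope_angles(seq_len, dim, base=10000.0, seq_len_interpolation_factor=None):
    # standard RoPE inverse frequencies, one per pair of embedding dimensions
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    positions = np.arange(seq_len, dtype=np.float64)
    if seq_len_interpolation_factor is not None:
        # linear interpolation: scale positions down into the trained range
        positions = positions / seq_len_interpolation_factor
    return np.outer(positions, inv_freq)

# factor = 2: position 6 of the extended model gets the same rotation
# angles that position 3 saw during base-model training
base_angles = rope_angles(8, 16)
interp_angles = rope_angles(16, 16, seq_len_interpolation_factor=2)
assert np.allclose(base_angles[3], interp_angles[6])
```

This is why the factor must match between continued training and inference: the rotation angles fed to attention depend directly on it.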
from nemo.utils.model_utils import inject_model_parallel_rank

def _modify_config(gpt_cfg, cfg, add_cfg_to_tree=False):
These _modify_config, load_from_nemo, load_from_checkpoint_dir, and validate_checkpoint_loading_args functions are the same as in the SFT code. Can we put them into a utility file?
They are almost the same, but not 100% identical. The issue is that each one (SFT vs. continued training) modifies some common attributes like data and optim, but also a few different things.
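One way to reconcile "mostly shared, slightly different" is a shared utility that applies the common updates and accepts a per-use-case hook for the rest. A hypothetical sketch (function and key names are illustrative, not the NeMo API):

```python
def modify_config(gpt_cfg, cfg, extra_fn=None):
    """Apply updates common to SFT and continued training, then a per-use-case hook."""
    merged = dict(gpt_cfg)
    # attributes both scripts override in the same way
    merged["data"] = cfg["data"]
    merged["optim"] = cfg["optim"]
    if extra_fn is not None:
        # the few use-case-specific tweaks live in the caller, not the utility
        merged = extra_fn(merged, cfg)
    return merged

# SFT-specific tweak layered on top of the shared logic
sft_cfg = modify_config(
    {"hidden_size": 4096},
    {"data": "sft_data", "optim": "adamw"},
    extra_fn=lambda merged, cfg: {**merged, "answer_only_loss": True},
)
```

The shared file then holds one copy of the common logic, and each script passes only its own delta.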
@@ -559,7 +562,9 @@ def __init__(
     assert 0 < rotary_percentage <= 1
     if rotary_percentage < 1:
         rotary_dim = int(rotary_dim * rotary_percentage)
-    self.rotary_pos_emb = RotaryEmbedding(rotary_dim)
+    self.rotary_pos_emb = RotaryEmbedding(
Maybe not in this PR, but we need to add seq_len_interpolation_factor for all the models that use RoPE.
Commit history:
* Push changes
* Fixes
* add continue training script
* [WIP] nonlinear interp
* Fix
* override encoder_seq_len
* Remove nonlinear
* [pre-commit.ci] auto fixes from pre-commit.com hooks
* sft with pi (#7006)
* update values only if not None
* Address comments
* Add info
* Empty

Signed-off-by: MaximumEntropy <[email protected]>
Signed-off-by: Evelina <[email protected]>
Signed-off-by: Gerald Shen <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Evelina <[email protected]>
What does this PR do ?
Add a one line overview of what this PR aims to accomplish.
Collection: NLP
Changelog
Usage
# Add a code snippet demonstrating how to use this
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open the PR as a "Draft".
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The contributor guidelines contain specific people who can review PRs in various areas.
Additional Information