forked from NVIDIA/NeMo
Merge heh and zhehuai's initial version of frozen am+llm #5
Merged
Conversation
The previous differences are summarized here: https://docs.google.com/document/d/1zNI4hC6vJtUfcHbrUSPaMuYWRBQdN_36H0P2NiBiuPY/edit

This PR includes:
1. Finish merging the model and dataset code.
2. Previous tests are still enabled and pass (prepare_llm_input, training_step, validation_step).

The major remaining work is listed here: https://docs.google.com/document/d/1o0AM7v4gcTQkPZjE0Vl9TTX4vYnGTrbXEFGWh0UhGlk/edit#bookmark=id.pzvdadt5oxyw
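The prepare_llm_input test guards how audio embeddings are joined with the LLM's text-token embeddings before being fed to the frozen LLM. A toy sketch of that concatenation (a plain-Python stand-in; the real model operates on (batch, time, dim) tensors with torch.cat, and the names and shapes here are illustrative assumptions):

```python
# Hypothetical stand-in for prepare_llm_input: audio-frame embeddings
# are placed before the text-token embeddings along the sequence axis,
# so the LLM attends over the audio context followed by the prompt.
def prepare_llm_input(audio_emb, text_emb):
    # each argument: a list of per-timestep embedding vectors
    # (the real model would call torch.cat([...], dim=1) on tensors)
    return audio_emb + text_emb

audio_emb = [[0.1, 0.2]] * 50   # 50 audio frames, embedding dim 2
text_emb = [[0.3, 0.4]] * 10    # 10 text tokens, embedding dim 2
llm_input = prepare_llm_input(audio_emb, text_emb)
print(len(llm_input))  # 60
```

A test of this step can then assert on the combined sequence length and on which positions hold audio versus text embeddings, which is exactly what guarding the LLM input amounts to.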
Signed-off-by: He Huang (Steve) <[email protected]>
stevehuang52 requested changes on Aug 8, 2023
Thanks for the great work! The PR is almost ready to merge, just missing a few things we discussed yesterday:

1. Change `init_perception_model` to properly instantiate all perception sub-modules.
2. Change the YAML config to use the new perception modules.
3. Run the example training script with LS960 to make sure the training pipeline works.
4. Also fix the Python formatting.
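The first two requested changes amount to config-driven sub-module instantiation: the YAML config names each perception sub-module and its arguments, and init_perception_model builds them. A minimal, hypothetical illustration of that pattern (the class names, config keys, and registry below are assumptions for the sketch, not NeMo's actual API):

```python
# Illustrative sketch of instantiating perception sub-modules from a
# parsed YAML config section. All names are hypothetical.
class Preprocessor:
    def __init__(self, sample_rate=16000):
        self.sample_rate = sample_rate

class Encoder:
    def __init__(self, hidden_dim=512):
        self.hidden_dim = hidden_dim

# Maps config keys to sub-module classes.
REGISTRY = {"preprocessor": Preprocessor, "encoder": Encoder}

def init_perception_model(cfg):
    # Instantiate every sub-module listed in the config section,
    # forwarding its keyword arguments.
    return {name: REGISTRY[name](**kwargs) for name, kwargs in cfg.items()}

cfg = {  # stands in for the parsed YAML `perception` section
    "preprocessor": {"sample_rate": 16000},
    "encoder": {"hidden_dim": 512},
}
perception = init_perception_model(cfg)
print(perception["encoder"].hidden_dim)  # 512
```

Keeping the sub-module choices in the config (rather than hard-coded in the model) is what lets the YAML swap perception modules without touching model code.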
stevehuang52 approved these changes on Aug 8, 2023
LGTM, thanks~
zhehuaichen added a commit that referenced this pull request on Oct 4, 2023:

* Merge heh and zhehuai's initial version of frozen am+llm

  The previous differences are summarized here: https://docs.google.com/document/d/1zNI4hC6vJtUfcHbrUSPaMuYWRBQdN_36H0P2NiBiuPY/edit

  This PR includes:
  1. Finish merging the model, dataset, and config code.
  2. Previous tests are still enabled and passed (prepare_llm_input, training_step, validation_step).
  3. The example training script with LS960 has been run to make sure the training pipeline works.

  The major remaining works are listed here: https://docs.google.com/document/d/1o0AM7v4gcTQkPZjE0Vl9TTX4vYnGTrbXEFGWh0UhGlk/edit#bookmark=id.pzvdadt5oxyw

  Co-authored-by: He Huang (Steve) <[email protected]>
zhehuaichen added a commit that referenced this pull request on Oct 26, 2023 (…IDIA#7634), squashing the following history:

* add initial impl of ModularizedSpeechGPTModel and integration test
* fix typo in the test name (#1)
* clean an initial version of example config; make sure it works by test (#2)
* add the test for training_step and fix the code correspondingly (test passes now) (#3)
* add test for validation_step (#4)
* mv audio and text emb concat to prepare_llm_input so as to write a test to guard the llm input
* Merge heh and zhehuai's initial version of frozen am+llm (#5)
* fix a nit init bug that broke a test (#6)
* Clean up implementation for SALM paper and sync to NeMo v1.20.0 (#18), including:
  - fix data loading and consumed_samples; fix the training-restart problem by storing adapter+perception weights and initializing them from the ckpt; refix state dict
  - support WER and inference; NaN guard; reimplement inference with bug fixes
  - multi data loader; flag to unfreeze the LM; flag for loading the AM
  - tokenizer handling: overwrite vocab size, support BPE dropout, fix BPE-dropout bugs from mismatched context in tokenization
  - add tarred datasets and fix the bucketing dataset; fix sample_alpha
  - add BLEU metric (using sacrebleu.corpus_bleu for consistency), update metrics, and fix a bug in the WER calculation
  - support a question-set file per dataset/data loader in preparation for multitask understanding
  - support simple random context for word boosting; make the number of contexts and random_context_positive_ratio configurable to control precision
  - make audio_file optional in the data loader; add a tool to materialize MT and text data, compatible with tarred datasets; temp fix for metric and speed up materialization
  - val_check_interval fix; make manifest dumping consistent with speech models
  - bug fix: pass the freeze_llm flag to the model cfg; overwrite tensor_model_parallel_size
  - support both STT and SSL models for loading the audio encoder
  - fix the inference config to use sampling; allow inference-config updates during training
  - refactor and clean up preprocessing collections, the dataset interface, and model inference; rename classes to be consistent with the SALM paper; make sure tests pass
  - undo changes in megatron_gpt_peft_models.py and move them to speechllm_models.py, verified by test_speechllm_models.py::TestModularizedAudioGPTModel::test_predict_step
  - update the default inference config and test golden values accordingly; integration test and minor fixes; nit bug fix on manifest_filepath introduced by code cleanup
  - update workspace/ files (consider moving to examples later); further remove unnecessary stuff in the inference implementation
  - revert the update to the default end_string to stay compatible with legacy models
* rename 'ModularizedAudioGPTModel' to 'ModularAudioGPTLoRAModel'; move speechllm stuff under nemo/collections/multimodal/speechllm
* update copyright; remove workspace/scripts and workspace/tools folders since the main branch has LLaMA support

Signed-off-by: zhehuaichen <[email protected]>
Signed-off-by: stevehuang52 <[email protected]>
Co-authored-by: Zhehuai Chen <[email protected]>
Co-authored-by: He Huang (Steve) <[email protected]>
Co-authored-by: stevehuang52 <[email protected]>
What does this PR do?

This PR merges heh's and zhehuai's implementations of frozen AM+LLM. These implementations will contribute to the Modular Speech LLM idea (from the team's slides) that the SpeechLLM team plans to pursue.

heh's initial version: stevehuang52@36a1fbf
zhehuai's initial version: main...zhehuaichen:NeMo:speechllm

Their previous differences are summarized here and will be resolved in this PR and its follow-ups:
https://docs.google.com/document/d/1zNI4hC6vJtUfcHbrUSPaMuYWRBQdN_36H0P2NiBiuPY/edit

This PR includes:
1. Finish merging the model and dataset code.
2. Previous tests are still enabled and pass (prepare_llm_input, training_step, validation_step).

The major remaining work is listed here:
https://docs.google.com/document/d/1o0AM7v4gcTQkPZjE0Vl9TTX4vYnGTrbXEFGWh0UhGlk/edit#bookmark=id.pzvdadt5oxyw
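The "frozen AM+LLM" setup described above means the acoustic model's and LLM's weights stay fixed while only the connecting modules (e.g. an adapter/perception projection) receive gradient updates. A toy stand-in for that setup (real code flips requires_grad on torch parameters; the parameter names below are illustrative assumptions):

```python
# Minimal sketch of the frozen-AM + frozen-LLM training setup:
# only the adapter/connector parameters remain trainable.
class Param:
    def __init__(self, name):
        self.name = name
        self.requires_grad = True  # trainable by default, as in torch

def freeze(params):
    # Mark every parameter as excluded from gradient updates.
    for p in params:
        p.requires_grad = False

am = [Param("am.encoder.w")]        # acoustic model (frozen)
llm = [Param("llm.layer0.w")]       # language model (frozen)
adapter = [Param("adapter.proj.w")] # connector (trained)

freeze(am)
freeze(llm)
trainable = [p.name for p in am + llm + adapter if p.requires_grad]
print(trainable)  # ['adapter.proj.w']
```

Because only the small connector trains, checkpoints need to store just the adapter and perception weights, which is what the later "training restart" fix in the commit history relies on.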