Add SpeechLM to main (NVIDIA#8741)
* update package info

Signed-off-by: ericharper <[email protected]>

* fix the mpt chatbot (#6957)

Signed-off-by: Yi Dong <[email protected]>

* Remove `compute_on_step` from metrics (#6979)

* Remove `compute_on_step` from metrics

Signed-off-by: smajumdar <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Remove confusing log message

Signed-off-by: smajumdar <[email protected]>

* Update tests

Signed-off-by: smajumdar <[email protected]>

---------

Signed-off-by: smajumdar <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Hybrid conformer export (#6983)

* Implemented generic kv-pair setting of export_config from args

Signed-off-by: Boris Fomitchev <[email protected]>
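
For illustration, the generic kv-pair mechanism named above can be pictured as a small parser that turns `key=value` strings from the command line into a flat export config dict. This is a minimal sketch under assumed names (the keys `cache_support`/`decoder_type` and the `set_export_config` call are illustrative, not confirmed NeMo API):

```python
from typing import Dict, List

def parse_export_config(pairs: List[str]) -> Dict[str, str]:
    """Turn ['cache_support=true', 'decoder_type=ctc'] into a flat config dict."""
    config = {}
    for pair in pairs:
        key, sep, value = pair.partition("=")
        if not sep:
            raise ValueError(f"expected key=value, got {pair!r}")
        config[key.strip()] = value.strip()
    return config

# e.g. something like: model.set_export_config(parse_export_config(args.export_config))
```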

* Hybrid conformer export

Signed-off-by: Boris Fomitchev <[email protected]>

* Hybrid decoder export

Signed-off-by: Boris Fomitchev <[email protected]>

* Cleanup

Signed-off-by: Boris Fomitchev <[email protected]>

* Changed from **kwargs

Signed-off-by: Boris Fomitchev <[email protected]>

* Docstring

Signed-off-by: Boris Fomitchev <[email protected]>

* Docs added

Signed-off-by: Boris Fomitchev <[email protected]>

* Stringify args

Signed-off-by: Boris Fomitchev <[email protected]>

* Added docs for ASR export configs

Signed-off-by: Boris Fomitchev <[email protected]>

* lowercase ctc

Signed-off-by: Boris Fomitchev <[email protected]>

---------

Signed-off-by: Boris Fomitchev <[email protected]>

* Cache handling without input tensors mutation (#6980)

* Cache handling without input tensors mutation

Signed-off-by: Boris Fomitchev <[email protected]>

* Cleanup

Signed-off-by: Boris Fomitchev <[email protected]>

* Cleanup#2

Signed-off-by: Boris Fomitchev <[email protected]>

* Cleanup#3

Signed-off-by: Boris Fomitchev <[email protected]>

---------

Signed-off-by: Boris Fomitchev <[email protected]>
Co-authored-by: Somshubra Majumdar <[email protected]>
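
A rough sketch of the idea in this PR's title: build the next cache functionally instead of writing into the input tensor, since in-place mutation of graph inputs is hostile to ONNX/TorchScript export. Shapes and names here are assumptions for illustration, not the actual NeMo code:

```python
import torch

def roll_cache(cache: torch.Tensor, new_frames: torch.Tensor) -> torch.Tensor:
    """cache, new_frames: (batch, channels, time). Returns the next cache."""
    # Construct a fresh tensor rather than mutating `cache` in place.
    kept = cache[:, :, new_frames.size(2):]        # drop the oldest frames
    return torch.cat([kept, new_frames], dim=2)    # append the newest ones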

* fixes for spellmapper (#6994)

Signed-off-by: Alexandra Antonova <[email protected]>

* Fixing an issue with confidence ensembles (#6987)

* Bug fix for the confidence ensembles

Signed-off-by: Igor Gitman <[email protected]>

* Relax constraints for the test

Signed-off-by: Igor Gitman <[email protected]>

---------

Signed-off-by: Igor Gitman <[email protected]>

* [TTS] Append pretrained FastPitch & SpectrogramEnhancer pair to available models (#7012)

* [TTS] fastpitch: add english libritts model with asr stft parameters (25 ms 10 ms)

Signed-off-by: Roman Korostik <[email protected]>

* [TTS] enhancer: add pretrained model intended for asr finetuning

Signed-off-by: Roman Korostik <[email protected]>

---------

Signed-off-by: Roman Korostik <[email protected]>

* Add ASR with TTS Tutorial. Fix enhancer usage. (#6955)

* Add ASR with TTS Tutorial
* Fix enhancer usage

Signed-off-by: Vladimir Bataev <[email protected]>

* install_bs (#7019)

Signed-off-by: Nikolay Karpov <[email protected]>

* fix tab text gen (#7022)

Signed-off-by: Yi Dong <[email protected]>

* TE bug fix (#7027)

Signed-off-by: Dmytro Pykhtar <[email protected]>

* Add support for Numba FP16 RNNT Loss (#6991) (#7038)

* Force working space memory to always be in fp32

Signed-off-by: smajumdar <[email protected]>

* Add support for fp16 testing in Numba

Signed-off-by: smajumdar <[email protected]>

* Add support for fp16 testing in Numba

Signed-off-by: smajumdar <[email protected]>

* Add support for fp16 testing in Numba

Signed-off-by: smajumdar <[email protected]>

* Fix cost calculation by upcasting to fp32

Signed-off-by: smajumdar <[email protected]>

* Fix cost calculation by upcasting to fp32

Signed-off-by: smajumdar <[email protected]>
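
For orientation, the upcasting pattern these commits describe looks roughly like the following sketch: accumulate the cost in fp32 even when activations arrive in fp16, then cast back. This is an illustration of the technique, not the Numba RNNT kernel itself:

```python
import torch

def joint_cost_fp32(log_probs: torch.Tensor) -> torch.Tensor:
    # Reductions in half precision lose significant bits; do the sum in fp32.
    orig_dtype = log_probs.dtype
    cost = log_probs.float().sum(dim=-1)
    return cost.to(orig_dtype)
```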

* Add support to check if numba fp16 is available

Signed-off-by: smajumdar <[email protected]>

* add RNN-T loss implemented by PyTorch and test code (#5312)

* Fix the bugs in cache-aware streaming Conformer (#5032)

Signed-off-by: Vahid <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* IA3 support for GPT and T5 (#4909)

* init commit for IA3 adapter training in GPT

Signed-off-by: arendu <[email protected]>

* IA3 adapter training in GPT, models and adapter classes

Signed-off-by: arendu <[email protected]>

* reshape to operate even on non-contiguous tensors

Signed-off-by: arendu <[email protected]>

* configs

Signed-off-by: arendu <[email protected]>

* fixed none init

Signed-off-by: arendu <[email protected]>

* adding adapter and ia3 support for T5 based models

Signed-off-by: arendu <[email protected]>

* style fix

Signed-off-by: arendu <[email protected]>

* config update and t5 model adapter and ia3

Signed-off-by: arendu <[email protected]>

* removed unused imports

Signed-off-by: arendu <[email protected]>

* predict step for inference

Signed-off-by: arendu <[email protected]>

* style fix

Signed-off-by: arendu <[email protected]>

* style fix

Signed-off-by: arendu <[email protected]>

* adapter inference for t5

Signed-off-by: arendu <[email protected]>

* style fix

Signed-off-by: arendu <[email protected]>

* fixed bug micro and global batch size in eval

Signed-off-by: arendu <[email protected]>

* minor edit

Signed-off-by: arendu <[email protected]>

* aggressive truncation in test examples if no truncation field is given

Signed-off-by: arendu <[email protected]>

* corrected for language_model_path name changes in main

Signed-off-by: arendu <[email protected]>

* removed unused import

Signed-off-by: arendu <[email protected]>

* name change for language_model_path

Signed-off-by: arendu <[email protected]>

* include inter_attention to IA3

Signed-off-by: arendu <[email protected]>

* minor fix in config

Signed-off-by: arendu <[email protected]>

* minor fixes

Signed-off-by: arendu <[email protected]>

* removed unused flag

Signed-off-by: arendu <[email protected]>

* addressing PR comments

Signed-off-by: arendu <[email protected]>

* address PR comments

Signed-off-by: arendu <[email protected]>

* minor fix

Signed-off-by: arendu <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* style fix

Signed-off-by: arendu <[email protected]>

* CI test

Signed-off-by: arendu <[email protected]>

* minor fix in jenkinsfile

Signed-off-by: arendu <[email protected]>

Signed-off-by: arendu <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Hainan Xu <[email protected]>

* Bug fix - Limit val batches set to 1.0 (#5023)

* Bug fix

Signed-off-by: shanmugamr1992 <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Addressed Sandeep's comments

* Fixing limit val batches support in bert

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fixing limit val batches support in bert

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: shanmugamr1992 <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Sandeep Subramanian <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* [bug_fix] kv_channels is used when available (#5066)

* fix bug s.t kv_channels is used when available

Signed-off-by: arendu <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: arendu <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Hainan Xu <[email protected]>

* P&C Docs (#5068) (#5069)

Signed-off-by: Matvei Novikov <[email protected]>

Signed-off-by: Matvei Novikov <[email protected]>

Signed-off-by: Matvei Novikov <[email protected]>
Co-authored-by: Matvei Novikov <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Add spe_split_by_unicode_script arg (#5072)

* Add spe_split_by_unicode_script arg

Signed-off-by: Anas <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: Anas <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Hainan Xu <[email protected]>

* probabilites -> probabilities (#5078) (#5079)

Signed-off-by: nithinraok <[email protected]>

Signed-off-by: nithinraok <[email protected]>

Signed-off-by: nithinraok <[email protected]>
Co-authored-by: Nithin Rao <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* increase PR and Issue sweep quantity and actively close PRs. (#5073)

* increase PR and Issue sweep quantity and actively close PRs.

Signed-off-by: Xuesong Yang <[email protected]>

* update with stricter rules, 30 days to be stale and 7 days to be closed for both Issues and PRs.

Signed-off-by: Xuesong Yang <[email protected]>

Signed-off-by: Xuesong Yang <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* [TTS] added missing German phoneme tokenizer. (#5070) (#5074)

Signed-off-by: Xuesong Yang <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* rename to match prompt learning (#5076)

Signed-off-by: arendu <[email protected]>

Signed-off-by: arendu <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Missing fixes from r1.11.0 to T5 finetuning eval (#5054) (#5061)

* Fixes to seq2seq eval

Signed-off-by: MaximumEntropy <[email protected]>

* Style

Signed-off-by: MaximumEntropy <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: MaximumEntropy <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

Signed-off-by: MaximumEntropy <[email protected]>
Co-authored-by: Sandeep Subramanian <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Hainan Xu <[email protected]>

* Notebook bug fixes (#5084) (#5085)

* Notebook bug fixes

Signed-off-by: Virginia Adams <[email protected]>

* Turned nemo install back on

Signed-off-by: Virginia Adams <[email protected]>

* reverted notebook

Signed-off-by: Virginia Adams <[email protected]>

* Updated one line in entity linking nb

Signed-off-by: Virginia Adams <[email protected]>

Signed-off-by: Virginia Adams <[email protected]>
Co-authored-by: Eric Harper <[email protected]>

Signed-off-by: Virginia Adams <[email protected]>
Co-authored-by: Virginia Adams <[email protected]>
Co-authored-by: Eric Harper <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* update strategy in notebook from ddp_fork to dp (#5088) (#5089)

Co-authored-by: Zhilin Wang <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Fix bug in Squeezeformer Conv block (#5011) (#5024)

* Fix bug in Squeezeformer Conv block

Signed-off-by: smajumdar <[email protected]>

* Fix kernel context

Signed-off-by: smajumdar <[email protected]>

* Fix access mixin

Signed-off-by: smajumdar <[email protected]>

Signed-off-by: smajumdar <[email protected]>

Signed-off-by: smajumdar <[email protected]>
Co-authored-by: Somshubra Majumdar <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* fixed megatron lm conversion bug (PTL related) (#5038) (#5063)

Signed-off-by: David Mosallanezhad <[email protected]>

Signed-off-by: David Mosallanezhad <[email protected]>
Co-authored-by: David Mosallanezhad <[email protected]>

Signed-off-by: David Mosallanezhad <[email protected]>
Co-authored-by: David <[email protected]>
Co-authored-by: David Mosallanezhad <[email protected]>
Co-authored-by: Eric Harper <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Fix Unhashable type list for Numba Cuda spec augment kernel (#5093) (#5094)

Signed-off-by: smajumdar <[email protected]>

Signed-off-by: smajumdar <[email protected]>

Signed-off-by: smajumdar <[email protected]>
Co-authored-by: Somshubra Majumdar <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Fix numba (#5098)

Signed-off-by: smajumdar <[email protected]>

Signed-off-by: smajumdar <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Make it possible to specify output_filename in normalize_with_audio.py (#5092)

Signed-off-by: Elena Rastorgueva <[email protected]>

Signed-off-by: Elena Rastorgueva <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Greedy decoding confidence for CTC and RNNT (#4931)

* rnnt confidence draft

Signed-off-by: Aleksandr Laptev <[email protected]>

* word confidence

Signed-off-by: Aleksandr Laptev <[email protected]>

* advanced entropies added

Signed-off-by: Aleksandr Laptev <[email protected]>

* refactoring

Signed-off-by: Aleksandr Laptev <[email protected]>

* oops forgot a file

Signed-off-by: Aleksandr Laptev <[email protected]>

* metrics and benchmarking script added

Signed-off-by: Aleksandr Laptev <[email protected]>

* style fix

Signed-off-by: Aleksandr Laptev <[email protected]>

* texterrors installation added

Signed-off-by: Aleksandr Laptev <[email protected]>

* lgtm and bug fix

Signed-off-by: Aleksandr Laptev <[email protected]>

* fix comments

Signed-off-by: Aleksandr Laptev <[email protected]>

* fix typos

Signed-off-by: Aleksandr Laptev <[email protected]>

* add missing import after rebase

Signed-off-by: Aleksandr Laptev <[email protected]>

Signed-off-by: Aleksandr Laptev <[email protected]>
Co-authored-by: Aleksandr Laptev <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>
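
As a rough illustration of the entropy-based confidence this PR adds (a sketch, not the NeMo implementation): one common choice is one minus the Shannon entropy of the per-frame posterior, normalized by its maximum value log V, so that a one-hot posterior scores 1 and a uniform one scores 0:

```python
import math
import torch

def entropy_confidence(log_probs: torch.Tensor) -> torch.Tensor:
    """log_probs: (T, V) per-frame log-posteriors; returns (T,) confidences in [0, 1]."""
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=-1)     # Shannon entropy per frame
    max_entropy = math.log(log_probs.size(-1))     # entropy of the uniform distribution
    return 1.0 - entropy / max_entropy             # 1 = one-hot (confident), 0 = uniform
```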

* [Add] SLURP models and examples (#4668)

* add model, util and loss

Signed-off-by: stevehuang52 <[email protected]>

* update

Signed-off-by: stevehuang52 <[email protected]>

* update

Signed-off-by: stevehuang52 <[email protected]>

* update

Signed-off-by: stevehuang52 <[email protected]>

* update

Signed-off-by: stevehuang52 <[email protected]>

* update

Signed-off-by: stevehuang52 <[email protected]>

* update

Signed-off-by: stevehuang52 <[email protected]>

* update

Signed-off-by: stevehuang52 <[email protected]>

* update

Signed-off-by: stevehuang52 <[email protected]>

* update

Signed-off-by: stevehuang52 <[email protected]>

* refactor

Signed-off-by: stevehuang52 <[email protected]>

* refactor and update

Signed-off-by: stevehuang52 <[email protected]>

* update

Signed-off-by: stevehuang52 <[email protected]>

* update and refactor

Signed-off-by: stevehuang52 <[email protected]>

* update and refactor

Signed-off-by: stevehuang52 <[email protected]>

* update and refactor

Signed-off-by: stevehuang52 <[email protected]>

* update

Signed-off-by: stevehuang52 <[email protected]>

* update

Signed-off-by: stevehuang52 <[email protected]>

* update

Signed-off-by: stevehuang52 <[email protected]>

* update

Signed-off-by: stevehuang52 <[email protected]>

* update

Signed-off-by: stevehuang52 <[email protected]>

* update

Signed-off-by: stevehuang52 <[email protected]>

* update

Signed-off-by: stevehuang52 <[email protected]>

* update docs

Signed-off-by: stevehuang52 <[email protected]>

* update available models

Signed-off-by: stevehuang52 <[email protected]>

* update

Signed-off-by: stevehuang52 <[email protected]>

* refactor data processing

Signed-off-by: stevehuang52 <[email protected]>

* fix typo

Signed-off-by: stevehuang52 <[email protected]>

* update docs

Signed-off-by: stevehuang52 <[email protected]>

* refactor and update

Signed-off-by: stevehuang52 <[email protected]>

* update doc

Signed-off-by: stevehuang52 <[email protected]>

* move transformer to asr.modules

Signed-off-by: stevehuang52 <[email protected]>

* move transformer to asr.modules

Signed-off-by: stevehuang52 <[email protected]>

* get rid of jsonlines

Signed-off-by: stevehuang52 <[email protected]>

* refactor

Signed-off-by: stevehuang52 <[email protected]>

* revert changes to nlp

Signed-off-by: stevehuang52 <[email protected]>

Signed-off-by: stevehuang52 <[email protected]>
Signed-off-by: He Huang (Steve) <[email protected]>
Co-authored-by: Jagadeesh Balam <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* only optimize params that are part of the adapter modules (#5086)

Signed-off-by: arendu <[email protected]>

Signed-off-by: arendu <[email protected]>
Co-authored-by: Virginia Adams <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Pipeline Parallel T5 Prompt Learning (#4956)

* Added pre process flag checks and pipeline parallel in fwd

Signed-off-by: Virginia Adams <[email protected]>

* Added rank check for pipeline parallel

Signed-off-by: Virginia Adams <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* T5 prompt learning works!

Signed-off-by: Virginia Adams <[email protected]>

* IA3 passing CI

Signed-off-by: Virginia Adams <[email protected]>

* Fixed typo

Signed-off-by: Virginia Adams <[email protected]>

* removed optimizer setup so Adi's change will not conflict

Signed-off-by: Virginia Adams <[email protected]>

Signed-off-by: Virginia Adams <[email protected]>
Signed-off-by: Adi Renduchintala <[email protected]>
Co-authored-by: Adi Renduchintala <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Hainan Xu <[email protected]>

* [TTS] remove phonemizer.py (#5090)

remove phonemizer.py and convert code block to markdown in the tutorial.

Signed-off-by: Xuesong Yang <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* T5 Decoding with PP > 2 fix (#5091) (#5103)

* set sequence lengths in the pipeline properly

Signed-off-by: MaximumEntropy <[email protected]>

* Fix

Signed-off-by: MaximumEntropy <[email protected]>

Signed-off-by: MaximumEntropy <[email protected]>

Signed-off-by: MaximumEntropy <[email protected]>
Co-authored-by: Sandeep Subramanian <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* [TTS] fixed wrong val loss for epoch 0 and inconsistent metrics names (#5087) (#5102)

* fixed hifigan configs as well
* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: Xuesong Yang <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Hainan Xu <[email protected]>

* Fix and refactor consumed samples save/restore for Megatron models. (#5077)

* Fixes and refactor

Signed-off-by: MaximumEntropy <[email protected]>

* Fix

Signed-off-by: MaximumEntropy <[email protected]>

* Remove unused imports

Signed-off-by: MaximumEntropy <[email protected]>

* Empty

Signed-off-by: MaximumEntropy <[email protected]>

* Fix

Signed-off-by: MaximumEntropy <[email protected]>

Signed-off-by: MaximumEntropy <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* RIR corpus generator tool (#4927)

Signed-off-by: Ante Jukić <[email protected]>

Signed-off-by: Ante Jukić <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Multiprocessing fix (#5106) (#5107)

Signed-off-by: Matvei Novikov <[email protected]>

Signed-off-by: Matvei Novikov <[email protected]>

Signed-off-by: Matvei Novikov <[email protected]>
Co-authored-by: Matvei Novikov <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* [Bug fix] PC lexical + audio (#5109) (#5110)

* training running

Signed-off-by: ekmb <[email protected]>

* revert

Signed-off-by: ekmb <[email protected]>

* revert

Signed-off-by: ekmb <[email protected]>

Signed-off-by: ekmb <[email protected]>

Signed-off-by: ekmb <[email protected]>
Co-authored-by: Evelina <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* [Fix] schedulers with no max_steps param (#4564)

* fix schedulers

Signed-off-by: stevehuang52 <[email protected]>

* update to use python inspect module

Signed-off-by: stevehuang52 <[email protected]>
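
A minimal sketch of the inspect-based fix named above, assuming the goal is to pass `max_steps` only to schedulers whose constructor actually accepts it (names are illustrative, not the exact NeMo helper):

```python
import inspect

def accepts_max_steps(scheduler_cls) -> bool:
    # Check the scheduler constructor's signature before forwarding kwargs.
    return "max_steps" in inspect.signature(scheduler_cls.__init__).parameters

# e.g.
# kwargs = dict(optimizer=opt, **scheduler_cfg)
# if not accepts_max_steps(SchedulerCls):
#     kwargs.pop("max_steps", None)
```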

* update

Signed-off-by: stevehuang52 <[email protected]>

Signed-off-by: stevehuang52 <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* T5 prompt learning fixes missing from r1.11.0 merge (#5075) (#5101)

* Fix special tokens

Signed-off-by: MaximumEntropy <[email protected]>

* Fix

Signed-off-by: MaximumEntropy <[email protected]>

* Empty

Signed-off-by: MaximumEntropy <[email protected]>

Signed-off-by: MaximumEntropy <[email protected]>
Co-authored-by: David <[email protected]>

Signed-off-by: MaximumEntropy <[email protected]>
Co-authored-by: Sandeep Subramanian <[email protected]>
Co-authored-by: David <[email protected]>
Co-authored-by: Eric Harper <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* [TTS] Add NeMo TTS Primer Tutorial (#4933)

* [TTS] Add NeMo TTS Primer Tutorial

Signed-off-by: Ryan <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Add Squeezeformer CTC model checkpoints on Librispeech (#5121)

Signed-off-by: smajumdar <[email protected]>

Signed-off-by: smajumdar <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* adding loss normalization options to rnnt joint (#4829)

* adding normalization options to rnnt joint loss

* moving the param to joint

* moving loss normalization to rnnt loss config

* style

* cleaning up

* fixing sum reduction in joint

Signed-off-by: Dima Rekesh <[email protected]>

* moving reduction into RNNT loss class

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* refactoring

* typos

Signed-off-by: Dima Rekesh <[email protected]>

Signed-off-by: Dima Rekesh <[email protected]>
Co-authored-by: Dima Rekesh <[email protected]>
Co-authored-by: Oleksii Kuchaiev <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Hainan Xu <[email protected]>

* ASR concat dataloader (#5108)

* forced precision

* typo

* initial commit

Signed-off-by: Dima Rekesh <[email protected]>

* typos and bugs

Signed-off-by: Dima Rekesh <[email protected]>

* reverting conformer encoder

Signed-off-by: Dima Rekesh <[email protected]>

* additional checks

Signed-off-by: Dima Rekesh <[email protected]>

* adding support to CTC models as well

* reverting conformer_encoder

Signed-off-by: Dima Rekesh <[email protected]>

* typo

Signed-off-by: Dima Rekesh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* refactoring

Signed-off-by: Dima Rekesh <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* refactoring

Signed-off-by: Dima Rekesh <[email protected]>

* merging

Signed-off-by: Dima Rekesh <[email protected]>

Signed-off-by: Dima Rekesh <[email protected]>
Signed-off-by: Dima Rekesh <[email protected]>
Co-authored-by: Dima Rekesh <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Somshubra Majumdar <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* fix blossom ci unittests

Signed-off-by: Oleksii Kuchaiev <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* bugfix: pybtex.database.InvalidNameString: Too many commas in author field. (#5112) (#5115)

Signed-off-by: Xuesong Yang <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Update container version to 22.09 (#5105)

* update container version

Signed-off-by: ericharper <[email protected]>

* pin click

Signed-off-by: ericharper <[email protected]>

* pin click 8.0.2

Signed-off-by: ericharper <[email protected]>

Signed-off-by: ericharper <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Remove unsupported arguments from MegatronNMT (#5065)

* Fixes

Signed-off-by: MaximumEntropy <[email protected]>

* Fixes

Signed-off-by: MaximumEntropy <[email protected]>

* Style

Signed-off-by: MaximumEntropy <[email protected]>

* Fix

Signed-off-by: MaximumEntropy <[email protected]>

* More fixes

Signed-off-by: MaximumEntropy <[email protected]>

Signed-off-by: MaximumEntropy <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* pp2 support for T5 IA3 learning and T5 Adapters learning (#5116)

* enabling pp2

Signed-off-by: arendu <[email protected]>

* optimizer update

Signed-off-by: arendu <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* T5 pp>1 support for adapters and ia3

Signed-off-by: arendu <[email protected]>

* fix bug with missing adapter_tuning

Signed-off-by: arendu <[email protected]>

* inference error fixed, pp=2

Signed-off-by: arendu <[email protected]>

Signed-off-by: arendu <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Oleksii Kuchaiev <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* T5 Prompt Learning Fixes for Pipeline Parallel (#5120)

* Initial fixes

Signed-off-by: MaximumEntropy <[email protected]>

* Added back validation acc

Signed-off-by: Virginia Adams <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Put num workers back

Signed-off-by: Virginia Adams <[email protected]>

* added relative encoding if statement

Signed-off-by: Virginia Adams <[email protected]>

* Added back val loss only validation

Signed-off-by: Virginia Adams <[email protected]>

* Revert "Added back val loss only validation"

This reverts commit 86d8f4806fe30335c40c3716ce18259939df500f.

* Removed val acc for PP > 1

Signed-off-by: Virginia Adams <[email protected]>

* Removed enc_seq_len if statement

Signed-off-by: Virginia Adams <[email protected]>

* Added back validation acc calc

Signed-off-by: Virginia Adams <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: MaximumEntropy <[email protected]>
Signed-off-by: Virginia Adams <[email protected]>
Signed-off-by: Virginia Adams <[email protected]>
Co-authored-by: Virginia Adams <[email protected]>
Co-authored-by: Virginia Adams <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Virginia Adams <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* add doc info (#4721)

Signed-off-by: Yang Zhang <[email protected]>

Signed-off-by: Yang Zhang <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* [TTS] Add SpanishCharsTokenizer (#5135)

* [TTS] Add SpanishCharsTokenizer

Signed-off-by: Ryan <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Update megatron interface to dialogue (#4936)

* fix style formatting

Signed-off-by: Zhilin Wang <[email protected]>

* update template to include description of intent

Signed-off-by: Zhilin Wang <[email protected]>

* update Jenkinsfile

Signed-off-by: Zhilin Wang <[email protected]>

* changes based on requests in review

Signed-off-by: Zhilin Wang <[email protected]>

* add compatibility with assistant dataset

Signed-off-by: Zhilin Wang <[email protected]>

* update Jenkins

Signed-off-by: Zhilin Wang <[email protected]>

* remove dialogue_state_tracking

Signed-off-by: Zhilin Wang <[email protected]>

* update huggingface utils for dialogue

Signed-off-by: Zhilin Wang <[email protected]>

* rename dialogue_state_tracking_hybrid to dialogue_state_tracking_sgdqa

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* fix style

Signed-off-by: Zhilin Wang <[email protected]>

* style fix nemo/collections/nlp/models/dialogue_state_tracking_sgdqa/__init__.py

Signed-off-by: Zhilin Wang <[email protected]>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <[email protected]>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <[email protected]>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <[email protected]>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <[email protected]>

* update Jenkinsfile for SGDGEN

Signed-off-by: Zhilin Wang <[email protected]>

* fix typo

Signed-off-by: Zhilin Wang <[email protected]>

* add docstrings for assistant data processor

Signed-off-by: Zhilin Wang <[email protected]>

* update Jenkins for SGDGEN local checkpoint

Signed-off-by: Zhilin Wang <[email protected]>

* update style

Signed-off-by: Zhilin Wang <[email protected]>

* use local vocab file for Jenkinsfile

Signed-off-by: Zhilin Wang <[email protected]>

* patch for Jenkins CI using local file

Signed-off-by: Zhilin Wang <[email protected]>

* add slot filling prediction and metrics

Signed-off-by: Zhilin Wang <[email protected]>

* remove unused code

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* refactor metrics code out of Dialogue GPT Model

Signed-off-by: Zhilin Wang <[email protected]>

* integrate backward compatible support for IntentSlotClassificationModel (bert model)

Signed-off-by: Zhilin Wang <[email protected]>

* save prediction file for IntentSlotClassification

Signed-off-by: Zhilin Wang <[email protected]>

* update dialogue gpt model training for megatron gpt

Signed-off-by: Zhilin Wang <[email protected]>

* remove batch generate for HF GPT2, which causes lower performance

Signed-off-by: Zhilin Wang <[email protected]>

* add few shot capability to dialogue gpt model

Signed-off-by: Zhilin Wang <[email protected]>

* update Jenkinsfile and remove unused import

Signed-off-by: Zhilin Wang <[email protected]>

* update code description and clarity

Signed-off-by: Zhilin Wang <[email protected]>

* address PR comments

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* integrate compatibility with ZeroShotIntentModel

Signed-off-by: Zhilin Wang <[email protected]>

* rename folder to dialogue due to increased scope and further refactor for clarity

Signed-off-by: Zhilin Wang <[email protected]>

* added dialogue GPT for sequence generation task (e.g. answer extender)

Signed-off-by: Zhilin Wang <[email protected]>

* add CI test for DialogueGPTGenerationModel

Signed-off-by: Zhilin Wang <[email protected]>

* integrate DialogueS2SGenerationModel for generation task (e.g. answer extender)

Signed-off-by: Zhilin Wang <[email protected]>

* modify huggingface utils to support HF t5/BART models

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* remove unused imports

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* update Jenkinsfile

Signed-off-by: Zhilin Wang <[email protected]>

* update Jenkinsfile

Signed-off-by: Zhilin Wang <[email protected]>

* update bleu metric

Signed-off-by: Zhilin Wang <[email protected]>

* fix bleu metric style

Signed-off-by: Zhilin Wang <[email protected]>

* debug bleu metric

Signed-off-by: Zhilin Wang <[email protected]>

* debug bleu metric

Signed-off-by: Zhilin Wang <[email protected]>

* update based on PR #3893

Signed-off-by: Zhilin Wang <[email protected]>

* update 2 based on PR #3893

Signed-off-by: Zhilin Wang <[email protected]>

* update 3 based on PR #3893

Signed-off-by: Zhilin Wang <[email protected]>

* integrate sgd generation based on user utterance and system slot-values to generate system utterance

Signed-off-by: Zhilin Wang <[email protected]>

* add validation model saving capabilities

Signed-off-by: Zhilin Wang <[email protected]>

* cleaned up code for SGD Based Answer extender

Signed-off-by: Zhilin Wang <[email protected]>

* update Dialogue Generation CI

Signed-off-by: Zhilin Wang <[email protected]>

* update Jenkinsfile

Signed-off-by: Zhilin Wang <[email protected]>

* update Jenkinsfile

Signed-off-by: Zhilin Wang <[email protected]>

* fix Jenkins CI issue"

Signed-off-by: Zhilin Wang <[email protected]>

* add support for design dataset

Signed-off-by: Zhilin Wang <[email protected]>

* remove unnecessary imports

Signed-off-by: Zhilin Wang <[email protected]>

* update Jenkins

Signed-off-by: Zhilin Wang <[email protected]>

* update jenkins

Signed-off-by: Zhilin Wang <[email protected]>

* update jenkins

Signed-off-by: Zhilin Wang <[email protected]>

* support megatron for dialogue_s2s_generation_model

Signed-off-by: Zhilin Wang <[email protected]>

* reduce loaded samples in MSMarcoDataProcessor to 64 when cfg.model.dataset.debug_mode=True

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* update CI

Signed-off-by: Zhilin Wang <[email protected]>

* update checkpoint and predictions filename to include epoch number

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* integrate HF BART MNLI into zero shot intent model

Signed-off-by: Zhilin Wang <[email protected]>

* integrate Dialogue Nearest Neighbour Model

Signed-off-by: Zhilin Wang <[email protected]>

* update Jenkins

Signed-off-by: Zhilin Wang <[email protected]>

* update Jenkins

Signed-off-by: Zhilin Wang <[email protected]>

* refactor Dialogue SGD Data Processor to make interface for models cleaner

Signed-off-by: Zhilin Wang <[email protected]>

* update jenkins

Signed-off-by: Zhilin Wang <[email protected]>

* update Dialogue S2S Generation model for DialogueSGDDataProcessor interface

Signed-off-by: Zhilin Wang <[email protected]>

* update jenkins

Signed-off-by: Zhilin Wang <[email protected]>

* update jenkins

Signed-off-by: Zhilin Wang <[email protected]>

* support sgd and drive thru datasets by zero shot model and nearest neighbour model

Signed-off-by: Zhilin Wang <[email protected]>

* add prediction saving code to nearest neighbour and zero shot intent models

Signed-off-by: Zhilin Wang <[email protected]>

* fix typo in sgd data processor

Signed-off-by: Zhilin Wang <[email protected]>

* integrate Dialogue Mellon QA Data Processor

Signed-off-by: Zhilin Wang <[email protected]>

* update mellon qa

Signed-off-by: Zhilin Wang <[email protected]>

* update dialogue.py to remove outdated info

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* update dialogue_config.yaml

Signed-off-by: Zhilin Wang <[email protected]>

* update dialogue_config.yaml

Signed-off-by: Zhilin Wang <[email protected]>

* add dialogue docs

Signed-off-by: Zhilin Wang <[email protected]>

* address review comments

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* style fix for cfg

Signed-off-by: Zhilin Wang <[email protected]>

* make dependency on apex optional

Signed-off-by: Zhilin Wang <[email protected]>

* change NLPDDPPlugin calling logic to make it possible to run without apex

Signed-off-by: Zhilin Wang <[email protected]>

* add first draft of tutorial

Signed-off-by: Zhilin Wang <[email protected]>

* reduce MS MARCO size by removing lines without wellFormedAnswers

Signed-off-by: Zhilin Wang <[email protected]>

* address pr comments

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* update colab tutorial link in dialogue docs

Signed-off-by: Zhilin Wang <[email protected]>

* include unit test and some refactor to facilitate unit test

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* address pr issues

Signed-off-by: Zhilin Wang <[email protected]>

* remove typos in dialogue tutorial

Signed-off-by: Zhilin Wang <[email protected]>

* support larger files for question answering

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* remove unnecessary artifacts to reduce memory use

Signed-off-by: Zhilin Wang <[email protected]>

* put 0 tensor to device

Signed-off-by: Zhilin Wang <[email protected]>

* update link within dialogue tutorial

Signed-off-by: Zhilin Wang <[email protected]>

* restore previously deleted files

Signed-off-by: Zhilin Wang <[email protected]>

* update error handling when loss = nan

Signed-off-by: Zhilin Wang <[email protected]>

* update nan handling

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* update spanning loss func

Signed-off-by: Zhilin Wang <[email protected]>

* update spanning loss

Signed-off-by: Zhilin Wang <[email protected]>

* fix type error raised in qa_dataset.py

Signed-off-by: Zhilin Wang <[email protected]>

* add error checking message

Signed-off-by: Zhilin Wang <[email protected]>

* revert back to float32

Signed-off-by: Zhilin Wang <[email protected]>

* revert back to float32

Signed-off-by: Zhilin Wang <[email protected]>

* update error msgs

Signed-off-by: Zhilin Wang <[email protected]>

* update error msgs

Signed-off-by: Zhilin Wang <[email protected]>

* update error msgs

Signed-off-by: Zhilin Wang <[email protected]>

* update error msgs

Signed-off-by: Zhilin Wang <[email protected]>

* update error msgs

Signed-off-by: Zhilin Wang <[email protected]>

* update error msgs

Signed-off-by: Zhilin Wang <[email protected]>

* update error msgs

Signed-off-by: Zhilin Wang <[email protected]>

* update error msgs

Signed-off-by: Zhilin Wang <[email protected]>

* update exp logging

Signed-off-by: Zhilin Wang <[email protected]>

* update error msgs

Signed-off-by: Zhilin Wang <[email protected]>

* update loading of large file from pickle to json

Signed-off-by: Zhilin Wang <[email protected]>

* update loading of large file from pickle to json

Signed-off-by: Zhilin Wang <[email protected]>

* limit number of negative samples

Signed-off-by: Zhilin Wang <[email protected]>

* revert post processing

Signed-off-by: Zhilin Wang <[email protected]>

* revert post processing

Signed-off-by: Zhilin Wang <[email protected]>

* remove unused methods and style fix

Signed-off-by: Zhilin Wang <[email protected]>

* add more documentation

Signed-off-by: Zhilin Wang <[email protected]>

* remove unused imports

Signed-off-by: Zhilin Wang <[email protected]>

* changes based on PR review

Signed-off-by: Zhilin Wang <[email protected]>

* set wandb logger false by default

Signed-off-by: Zhilin Wang <[email protected]>

* update interface with megatron gpt prompt learning

Signed-off-by: Zhilin Wang <[email protected]>

* update inline documentation

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* update prompt_ids

Signed-off-by: Zhilin Wang <[email protected]>

* update error msg

Signed-off-by: Zhilin Wang <[email protected]>

* update config

Signed-off-by: Zhilin Wang <[email protected]>

* update config

Signed-off-by: Zhilin Wang <[email protected]>

* set inference = False for dialogue prompt learning during training

Signed-off-by: Zhilin Wang <[email protected]>

* set inference = False for dialogue prompt learning during training

Signed-off-by: Zhilin Wang <[email protected]>

* remove unused code

Signed-off-by: Zhilin Wang <[email protected]>

* update config yaml

Signed-off-by: Zhilin Wang <[email protected]>

* fix bug for megatron gpt prompt learning

Signed-off-by: Zhilin Wang <[email protected]>

* remove unused import

Signed-off-by: Zhilin Wang <[email protected]>

* address comments in PR

Signed-off-by: Zhilin Wang <[email protected]>

* address comments in PR

Signed-off-by: Zhilin Wang <[email protected]>

* address typo

Signed-off-by: Zhilin Wang <[email protected]>

* add megatron t5 inference

Signed-off-by: Zhilin Wang <[email protected]>

* fix bug due to bert tokenizer not being space-aware

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* update style

Signed-off-by: Zhilin Wang <[email protected]>

* update IntentSlotModel onnx export test

Signed-off-by: Zhilin Wang <[email protected]>

* update style

Signed-off-by: Zhilin Wang <[email protected]>

* update exportable

Signed-off-by: Zhilin Wang <[email protected]>

* address PR comments

Signed-off-by: Zhilin Wang <[email protected]>

* replace functools.cached_property with functools.lru_cache to maintain Python 3.7 compatibility

Signed-off-by: Zhilin Wang <[email protected]>
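
For context, a minimal sketch of the compatibility pattern this commit describes: `functools.cached_property` only exists on Python >= 3.8, and stacking `@property` over `@functools.lru_cache` gives the same compute-once behaviour on 3.7. The class and attribute names below are hypothetical stand-ins, not the actual NeMo code; note the cache is keyed on `self`:

```python
import functools

class DialogueModelSketch:
    @property
    @functools.lru_cache(maxsize=1)
    def candidate_labels(self):
        # Computed on first access, then served from the cache.
        return {label: i for i, label in enumerate(["intent_a", "intent_b"])}

m = DialogueModelSketch()
assert m.candidate_labels is m.candidate_labels  # second access hits the cache
```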

* improve speed of rank_candidates and support for p tuning

Signed-off-by: Zhilin Wang <[email protected]>

* update dialogue.py

Signed-off-by: Zhilin Wang <[email protected]>

* fix megatron prompt learning saving bug

Signed-off-by: Zhilin Wang <[email protected]>

* update generate_candidate method

Signed-off-by: Zhilin Wang <[email protected]>

* remove repeated init text ids and invert attention masks

Signed-off-by: Zhilin Wang <[email protected]>

* update typo

Signed-off-by: Zhilin Wang <[email protected]>

* custom collate fn to remove excess padding in batch

Signed-off-by: Zhilin Wang <[email protected]>
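
A sketch of the collate-fn idea above: pad only up to the longest sequence in the current batch rather than a global maximum length, so short batches carry no excess padding. The `input_ids` field name is an assumption for illustration:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

def collate_fn(batch):
    input_ids = [item["input_ids"] for item in batch]   # each a 1-D LongTensor
    lengths = torch.tensor([t.size(0) for t in input_ids])
    # Pads to the max length within this batch only.
    padded = pad_sequence(input_ids, batch_first=True, padding_value=0)
    return padded, lengths
```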

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* style fix

Signed-off-by: Zhilin Wang <[email protected]>

* update complete method to mitigate issue when max seq len is low

Signed-off-by: Zhilin Wang <[email protected]>

* address pr comments

Signed-off-by: Zhilin Wang <[email protected]>

* update generation interface

Signed-off-by: Zhilin Wang <[email protected]>

Signed-off-by: Zhilin Wang <[email protected]>
Co-authored-by: Zhilin Wang <[email protected]>
Co-authored-by: Oleksii Kuchaiev <[email protected]>
Co-authored-by: Yang Zhang <[email protected]>
Co-authored-by: Eric Harper <[email protected]>
Co-authored-by: Sandeep Subramanian <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Added save inference ready .nemo file with every checkpoint (#5055)

* Added save inference ready .nemo file with every checkpoint

Signed-off-by: Virginia Adams <[email protected]>

* Python style fix

Signed-off-by: Virginia Adams <[email protected]>

* addressed Adi's comment

Signed-off-by: Virginia Adams <[email protected]>

* Added ptuning check in model checkpoint saving

Signed-off-by: Virginia Adams <[email protected]>

* Changed save_nemo_on_validation default to False

Signed-off-by: Virginia Adams <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Changes global batch size of adapter CI

Signed-off-by: Virginia Adams <[email protected]>

* Changed num workers to 0

Signed-off-by: Virginia Adams <[email protected]>

* added first stage of pipeline check

Signed-off-by: Virginia Adams <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: Virginia Adams <[email protected]>
Signed-off-by: Virginia Adams <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Hainan Xu <[email protected]>

* Fixes for docs/typos + remove max_utts parameter from tarred datasets as it causes hang in training (#5118)

* Remove ; from jupyter notebook cells

Signed-off-by: Igor Gitman <[email protected]>

* Fix typos in documentation/code

Signed-off-by: Igor Gitman <[email protected]>

* Fix output message to have 'or equal'

Signed-off-by: Igor Gitman <[email protected]>

* Link formatting fixes

Signed-off-by: Igor Gitman <[email protected]>

* Add error if max_utts is used in tarred datasets

Signed-off-by: Igor Gitman <[email protected]>

* Remove max_utts parameter from tarred datasets

Signed-off-by: Igor Gitman <[email protected]>

* Fix max_utts removal in tests

Signed-off-by: Igor Gitman <[email protected]>

* Fix typo if -> is

Signed-off-by: Igor Gitman <[email protected]>

Signed-off-by: Igor Gitman <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Merge r1.12.0 main (#5139)

* update branch

Signed-off-by: ericharper <[email protected]>

* Add cherry-pick action (#4958)

* add cherry-pick action

Signed-off-by: ericharper <[email protected]>

* Pin Transformers version to fix CI (#4955)

* Pin transformers version in CI to prevent offline tokenizer loading error

Signed-off-by: SeanNaren <[email protected]>

* Drop version

Signed-off-by: SeanNaren <[email protected]>

* Disable offline temporarily

Signed-off-by: SeanNaren <[email protected]>

* Disable offline temporarily

Signed-off-by: SeanNaren <[email protected]>

* Enable offline

Signed-off-by: SeanNaren <[email protected]>

Signed-off-by: SeanNaren <[email protected]>

Signed-off-by: ericharper <[email protected]>
Signed-off-by: SeanNaren <[email protected]>
Co-authored-by: Sean Naren <[email protected]>

* upper bound transformers

Signed-off-by: ericharper <[email protected]>

* remove duplicate transformers requirement

Signed-off-by: ericharper <[email protected]>

* Release SOTA Lang ID model (#5080)

* add pretrained lang id model ambernet

Signed-off-by: fayejf <[email protected]>

* update doc and style fix

Signed-off-by: fayejf <[email protected]>

Signed-off-by: fayejf <[email protected]>

* update branch and package info

Signed-off-by: ericharper <[email protected]>

* remove upper bounds on lightning and transformers

Signed-off-by: ericharper <[email protected]>

* remove transformers offline from ci

Signed-off-by: ericharper <[email protected]>

* upper bound transformers

Signed-off-by: ericharper <[email protected]>

Signed-off-by: ericharper <[email protected]>
Signed-off-by: SeanNaren <[email protected]>
Signed-off-by: fayejf <[email protected]>
Co-authored-by: Sean Naren <[email protected]>
Co-authored-by: fayejf <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Added ASR model comparison to SDE (#5043)

SDE: added ASR model comparison tool
transcribe_speech: added support for many predictions in one file, as well as custom field names
Signed-off-by: George Zelenfroynd <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* fix nmt eval sampler (#5154)

Signed-off-by: Abhinav Khattar <[email protected]>

Signed-off-by: Abhinav Khattar <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Fix Global init steps (#5143)

* move global step to base

Signed-off-by: Yi Dong <[email protected]>

* fix fused softmax

Signed-off-by: Yi Dong <[email protected]>

* add the missing file

Signed-off-by: Yi Dong <[email protected]>

* update the fused kernel

Signed-off-by: Yi Dong <[email protected]>

* fix import error

Signed-off-by: Yi Dong <[email protected]>

* fix import again

Signed-off-by: Yi Dong <[email protected]>

Signed-off-by: Yi Dong <[email protected]>
Signed-off-by: Yi Dong <[email protected]>
Co-authored-by: Yi Dong <[email protected]>
Co-authored-by: Sandeep Subramanian <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* [TTS] bug fix - sample rate was being ignored in vocoder dataset (#4518)

* bug fix - sample rate was being ignored in vocoder dataset when not loading mel
* handled n segments for a different sampling rate than original sampling rate
* Added case for n_segments 0, warning for n_segments greater than file length

Signed-off-by: Paarth Neekhara <[email protected]>
Co-authored-by: Xuesong Yang <[email protected]>
Co-authored-by: Jocelyn <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Add EMA support to NeMo (#4764)
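
For orientation, the exponential moving average of weights this PR tracks follows the standard update, applied after each optimizer step. A minimal sketch of the formula, not the NeMo callback itself:

```python
import torch

@torch.no_grad()
def ema_update(ema_params, model_params, decay: float = 0.999):
    # ema <- decay * ema + (1 - decay) * param
    for ema_p, p in zip(ema_params, model_params):
        ema_p.mul_(decay).add_(p, alpha=1.0 - decay)
```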

* Added Base files

Signed-off-by: SeanNaren <[email protected]>

* Some refactors, swap to using MNIST Lnet

Signed-off-by: SeanNaren <[email protected]>

* Add a few more tests, allow the callback to be set via the exp manager

Signed-off-by: SeanNaren <[email protected]>

* Actually run validation for testing

Signed-off-by: SeanNaren <[email protected]>

* Run isort

Signed-off-by: SeanNaren <[email protected]>

* Add test for saving state/fix saving state

Signed-off-by: SeanNaren <[email protected]>

* Use dummy model

Signed-off-by: SeanNaren <[email protected]>

* Fix test

Signed-off-by: SeanNaren <[email protected]>

* Add copyright

Signed-off-by: SeanNaren <[email protected]>

* Support saving separate EMA weight module

Signed-off-by: SeanNaren <[email protected]>

* Add standalone functionality/logging

Signed-off-by: SeanNaren <[email protected]>

* Expose more parameters

Signed-off-by: SeanNaren <[email protected]>

* Modify to allow option to replace validation

Signed-off-by: SeanNaren <[email protected]>

* Add jenkins test, formatting

Signed-off-by: SeanNaren <[email protected]>

* Pin Transformers version to fix CI (#4955)

* Pin transformers version in CI to prevent offline tokenizer loading error

Signed-off-by: SeanNaren <[email protected]>

* Drop version

Signed-off-by: SeanNaren <[email protected]>

* Disable offline temporarily

Signed-off-by: SeanNaren <[email protected]>

* Disable offline temporarily

Signed-off-by: SeanNaren <[email protected]>

* Enable offline

Signed-off-by: SeanNaren <[email protected]>

Signed-off-by: SeanNaren <[email protected]>

* Add cherry-pick action (#4958) (#4961)

* add cherry-pick action

Signed-off-by: ericharper <[email protected]>

* Pin Transformers version to fix CI (#4955)

* Pin transformers version in CI to prevent offline tokenizer loading error

Signed-off-by: SeanNaren <[email protected]>

* Drop version

Signed-off-by: SeanNaren <[email protected]>

* Disable offline temporarily

Signed-off-by: SeanNaren <[email protected]>

* Disable offline temporarily

Signed-off-by: SeanNaren <[email protected]>

* Enable offline

Signed-off-by: SeanNaren <[email protected]>

Signed-off-by: SeanNaren <[email protected]>

Signed-off-by: ericharper <[email protected]>
Signed-off-by: SeanNaren <[email protected]>
Co-authored-by: Sean Naren <[email protected]>

Signed-off-by: ericharper <[email protected]>
Signed-off-by: SeanNaren <[email protected]>
Co-authored-by: Eric Harper <[email protected]>
Co-authored-by: Sean Naren <[email protected]>
Signed-off-by: SeanNaren <[email protected]>

* Fix changelog builder (#4962) (#4963)

Signed-off-by: smajumdar <[email protected]>

Signed-off-by: smajumdar <[email protected]>

Signed-off-by: smajumdar <[email protected]>
Signed-off-by: SeanNaren <[email protected]>

* fix cherry pick workflow (#4964) (#4965)

Signed-off-by: ericharper <[email protected]>

Signed-off-by: ericharper <[email protected]>

Signed-off-by: ericharper <[email protected]>
Co-authored-by: Eric Harper <[email protected]>
Signed-off-by: SeanNaren <[email protected]>

* reorder model check (#4959) (#4967)

Signed-off-by: nithinraok <[email protected]>

Signed-off-by: nithinraok <[email protected]>

Signed-off-by: nithinraok <[email protected]>
Co-authored-by: Nithin Rao <[email protected]>
Signed-off-by: SeanNaren <[email protected]>

* check for active conda environment (#4970) (#4971)

Signed-off-by: SeanNaren <[email protected]>

* [TTS] fix broken tutorial for MixerTTS. (#4949) (#4976)

Signed-off-by: Xuesong Yang <[email protected]>

Signed-off-by: Xuesong Yang <[email protected]>

Signed-off-by: Xuesong Yang <[email protected]>
Co-authored-by: Xuesong Yang <[email protected]>
Signed-off-by: SeanNaren <[email protected]>

* Checkpoint averaging class fix (#4946)

* 1. Added args.class_path to provide it externally.

Signed-off-by: Micha Livne <[email protected]>

* 1. Fixed style.

Signed-off-by: Micha Livne <[email protected]>

Signed-off-by: Micha Livne <[email protected]>
Signed-off-by: SeanNaren <[email protected]>

* Add ability to give separate datasets for test, train and validation (#4798)

* Add ability to give separate datasets for test, train and validation

* Addressed Sandeep's comments

* Addressed Sandeep's comments

* Add ability to give separate datasets for test, train and validation

* Add ability to give separate datasets for test, train and validation

* Addressed review comments

* Bug fix for common dataset utils

* Add CI tests

Signed-off-by: shanmugamr1992 <[email protected]>

* Reformat code

Signed-off-by: shanmugamr1992 <[email protected]>

* Bug fix

Signed-off-by: shanmugamr1992 <[email protected]>

* Bug fix

* Bug Fix

* Bug Fix

* Update Jenkinsfile

* Addressed comments

* Addressed Erik's comments.

* Addressed Sandeep

* Update Jenkinsfile

* Update Jenkinsfile

* Update dataset_utils.py

* Update Jenkinsfile

* Update Jenkinsfile

* Use GPT CI config

Signed-off-by: MaximumEntropy <[email protected]>

Signed-off-by: shanmugamr1992 <[email protected]>
Signed-off-by: MaximumEntropy <[email protected]>
Co-authored-by: MaximumEntropy <[email protected]>
Signed-off-by: SeanNaren <[email protected]>

* fix label models restoring issue from weighted cross entropy (#4968) (#4975)

Signed-off-by: nithinraok <[email protected]>

Signed-off-by: nithinraok <[email protected]>

Signed-off-by: nithinraok <[email protected]>
Co-authored-by: Nithin Rao <[email protected]>
Signed-off-by: SeanNaren <[email protected]>

* Add simple pre-commit file (#4983)

* Add simple pre-commit file

Signed-off-by: SeanNaren <[email protected]>

* Exclude docs folder

Signed-off-by: SeanNaren <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: SeanNaren <[email protected]>

* Revert "[pre-commit.ci] auto fixes from pre-commit.com hooks"

This reverts commit 053bd5ba579537a5f311b431871c21f3381b43eb.

Signed-off-by: SeanNaren <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: SeanNaren <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: SeanNaren <[email protected]>

* Import pycuda.autoprimaryctx or pycuda.autoinit to init pycuda execution environment (#4951)

Signed-off-by: Jin Li <[email protected]>

Signed-off-by: Jin Li <[email protected]>
Co-authored-by: Somshubra Majumdar <[email protected]>
Signed-off-by: SeanNaren <[email protected]>
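
The import fallback this commit title describes is, roughly, the following: prefer `pycuda.autoprimaryctx` (which retains the CUDA primary context, so it coexists better with other CUDA users in the process) and fall back to `pycuda.autoinit` on older pycuda versions that do not ship it. A sketch of the pattern:

```python
try:
    import pycuda.autoprimaryctx  # noqa: F401  (available in newer pycuda)
except ImportError:
    import pycuda.autoinit  # noqa: F401  (creates a fresh context instead)
```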

* Adding speaker embedding conditioning in fastpitch (#4986)

Signed-off-by: subhankar-ghosh <[email protected]>

Signed-off-by: subhankar-ghosh <[email protected]>
Signed-off-by: SeanNaren <[email protected]>

* Fix ASR issues (#4984) (#4991)

* Fix ASR issues

Signed-off-by: smajumdar <[email protected]>

* Revert fix

Signed-off-by: smajumdar <[email protected]>

Signed-off-by: smajumdar <[email protected]>

Signed-off-by: smajumdar <[email protected]>
Co-authored-by: Somshubra Majumdar <[email protected]>
Signed-off-by: SeanNaren <[email protected]>

* Fix current tests

Signed-off-by: SeanNaren <[email protected]>

* More test coverage

Signed-off-by: SeanNaren <[email protected]>

* Address reviews

Signed-off-by: SeanNaren <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Address review

Signed-off-by: SeanNaren <[email protected]>

* Drop bf16 test

Signed-off-by: SeanNaren <[email protected]>

* Address review

Signed-off-by: SeanNaren <[email protected]>

* remove print

Signed-off-by: SeanNaren <[email protected]>

* Add bf16

Signed-off-by: SeanNaren <[email protected]>

Signed-off-by: SeanNaren <[email protected]>
Signed-off-by: ericharper <[email protected]>
Signed-off-by: smajumdar <[email protected]>
Signed-off-by: nithinraok <[email protected]>
Signed-off-by: Xuesong Yang <[email protected]>
Signed-off-by: Micha Livne <[email protected]>
Signed-off-by: shanmugamr1992 <[email protected]>
Signed-off-by: MaximumEntropy <[email protected]>
Signed-off-by: Jin Li <[email protected]>
Signed-off-by: subhankar-ghosh <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Eric Harper <[email protected]>
Co-authored-by: Somshubra Majumdar <[email protected]>
Co-authored-by: Nithin Rao <[email protected]>
Co-authored-by: Xuesong Yang <[email protected]>
Co-authored-by: Micha Livne <[email protected]>
Co-authored-by: shanmugamr1992 <[email protected]>
Co-authored-by: MaximumEntropy <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: liji-nv <[email protected]>
Co-authored-by: Subhankar Ghosh <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Fix BF16 test (#5162)

Signed-off-by: SeanNaren <[email protected]>

Signed-off-by: SeanNaren <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Fix errors in speaker diarization nemo docs (#5153)

* fix docs and docstrings for MSDD

Signed-off-by: Taejin Park <[email protected]>

* fix nemo docs errors

Signed-off-by: Taejin Park <[email protected]>

* reflected review comments

Signed-off-by: Taejin Park <[email protected]>

Signed-off-by: Taejin Park <[email protected]>
Signed-off-by: Hainan Xu <[email protected]>

* Add interleaved pipeline schedule to GPT (#5025)

* add virtual pipeline size to config

Signed-off-by: ericharper <[email protected]>

* convert model to list of modules

Signed-off-by: ericharper <[email protected]>

* convert model to list of modules

Signed-off-by: ericharper <[email protected]>

* convert model to list of modules

Signed-off-by: ericharper <[email protected]>

* update for list of modules

Signed-off-by: ericharper <[email protected]>

* add virtual to init

Signed-off-by: ericharper <[email protected]>

* update first last stage embedding all reduce

Signed-off-by: ericharper <[email protected]>

* update sequence parallel all reduce for virtual models

Signed-off-by: ericharper <[email protected]>

* runs but we get an error

Signed-off-by: ericharper <[email protected]>

* set virtual rank 0 after looping

Signed-off-by: ericharper <[email protected]>

* account for virtual when determining first and last pipeline stages

Signed-off-by: ericharper <[email protected]>

* checkpointing for virtual models in progress

Signed-off-by: ericharper <[email protected]>

* add checkpoint hooks

Signed-off-by: ericharper <[email protected]>

* working on validation when resuming

Signed-off-by: ericharper <[email protected]>

* skip sanity val steps by default in config

Signed-off-by: ericharper <[email protected]>

* remove comment

Signed-off-by: ericharper <[email protected]>

* log number of params

Signed-off-by: ericharper <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* style

Signed-off-by: ericharper <[email protected]>

* check if self.model is a list

Signed-off-by: ericharper <[email protected]>

* make virtual pipeline default size None on init

Signed-off-by: ericharper <[email protected]>

* make virtual pipeline default to None in config

Signed-off-by: ericharper <[email protected]>

* remove ensure_divisibility call

Signed-off-by: ericharper <[email protected]>

* fix lgtm alerts

Signed-off-by: ericharper <[email protected]>

* remove num_sanity_val_steps from config

Signed-off-by: ericharper <complex451@gmai…
Showing 36 changed files with 7,370 additions and 137 deletions.
189 changes: 189 additions & 0 deletions examples/multimodal/speech_llm/README.md
@@ -0,0 +1,189 @@
# Modular SpeechLLM

This directory contains example scripts to train and evaluate modular SpeechLLMs (e.g., SALM [1]).

## Requirements
You will need to install this specific branch of NeMo, or use the provided Dockerfile in the root directory of this repository to build a Docker image with all the necessary dependencies.

## Architecture

In general, a modular SpeechLLM consists of three main components:
- An audio encoder that processes the input audio and produces a sequence of audio embeddings.
- A modality adapter that processes the audio embeddings and produces a sequence of embeddings in the same latent space as the token embeddings of a pretrained large language model (LLM).
- A pretrained LLM that processes the embeddings from the modality adapter together with the token embeddings of the input prompt, and produces the text output; the audio embeddings and text token embeddings are concatenated along the time dimension before going into the LLM (see the sketch below).
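
As a mental model, this flow can be sketched in a few lines of PyTorch. This is an illustrative sketch only: the module names and the HF-style `inputs_embeds` keyword are assumptions made for the example, not the actual NeMo classes or APIs.

```python
import torch
import torch.nn as nn

class SpeechLLMSketch(nn.Module):
    """Illustrative composition of the three components; not the NeMo implementation."""

    def __init__(self, encoder, modality_adapter, llm, token_embedding):
        super().__init__()
        self.encoder = encoder                    # audio -> audio embeddings
        self.modality_adapter = modality_adapter  # audio embeddings -> LLM latent space
        self.llm = llm                            # pretrained decoder-only LM
        self.token_embedding = token_embedding    # the LLM's input embedding table

    def forward(self, audio_features, prompt_token_ids):
        audio_emb = self.modality_adapter(self.encoder(audio_features))
        text_emb = self.token_embedding(prompt_token_ids)
        # Concatenate along the time dimension: [B, T_audio + T_text, D]
        inputs = torch.cat([audio_emb, text_emb], dim=1)
        return self.llm(inputs_embeds=inputs)  # assumes an HF-style embedding input
```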

## Usage

### Input Format

You'll need to prepare data in the NeMo manifest format, where each line is a JSON dictionary with keys like the following:
```
{
    "audio_filepath": "path/to/audio.wav",
    "offset": 0.0,  # offset of the audio in seconds, this is an optional field
    "duration": 10.0,  # duration of the audio in seconds, can be set to `None` to load the whole audio
    "context": "what is the transcription of the audio?",  # text prompt for the audio, see below for more details
    "answer": "the transcription of the audio",  # optional for inference, defaults to "na" in the dataloader
}
```

The `context` field in the manifest is optional. Instead, you can put a list of contexts in a context file (one context per line) and set `++model.data.train_ds.context_file=<path to the context file>` to have the dataloader randomly pick a context from the file for each audio sample; this is useful for training with multiple prompts for the same task. If neither the `context` field nor `context_file` is provided, the dataloader will use the default context `what does the audio mean?` for all audios. During inference, it is recommended to have the `context` field in the manifest.
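
For illustration, here is a minimal Python snippet that writes such a manifest; the paths and texts below are placeholders.

```python
import json

# Each manifest line is a standalone JSON object (JSONL); values are placeholders.
samples = [
    {
        "audio_filepath": "path/to/audio1.wav",
        "duration": 10.0,
        "context": "what is the transcription of the audio?",
        "answer": "the transcription of the audio",
    },
]
with open("train_manifest.json", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")
```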

#### **Customizing the fields to use**

You can also use other fields in the manifest in place of the `context` and `answer` fields, but you'll then need to change the `prompt_template` to use the new field names. For example, if you want to use the fields `input_text` and `output_text`, you need to set:
```bash
++model.data.train_ds.context_key=input_text \
++model.data.train_ds.answer_key=output_text \
++model.data.train_ds.prompt_template="'Q: {input_text}\nA: {output_text}'"
```
Note that there are single quotes around the prompt template (to avoid hydra errors), and the field names are wrapped in curly braces.
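
To make the substitution concrete, here is an illustrative sketch of how such a template is filled from a manifest entry; NeMo's actual dataloader logic may differ.

```python
# Illustrative only: filling a prompt template with custom field names.
prompt_template = "Q: {input_text}\nA: {output_text}"
sample = {
    "input_text": "what is the transcription of the audio?",
    "output_text": "the transcription of the audio",
}
print(prompt_template.format(**sample))
# Q: what is the transcription of the audio?
# A: the transcription of the audio
```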

#### **Customizing the input format**

If you would like to use multiple audios, you can set the `audio_filepath` to be a list of audio file paths, and specify the location of each audio by using a special `audio_locator` string in the context. The choice of `audio_locator` should also be passed into the config. For example, if you have a manifest item like this:
```
{
    "audio_filepath": ["path/to/audio1.wav", "path/to/audio2.wav"],
    "context": "what is the transcription of the [audio] and [audio]?",  # text prompt for the audio, see below for more details
    "answer": "the transcription of the audio1 and audio2",  # optional for inference, defaults to "na" in the dataloader
}
```
You can set the `audio_locator` to be `[audio]` in the config:
```bash
++model.data.train_ds.audio_locator='[audio]'
```

With `audio_locator` set, the dataloader replaces each occurrence of the locator in the context with the audio features extracted from the corresponding audio. Make sure that the number of audio locators in the context matches the number of audio files in the `audio_filepath` field.
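
Conceptually, the context is split at each locator and the audio features are interleaved with the text segments. A minimal sketch of the splitting step (illustrative, not the actual dataloader code):

```python
# Hedged sketch: validating and splitting a context on the audio locator.
def split_context(context: str, audio_locator: str, num_audios: int):
    parts = context.split(audio_locator)
    # There is one more text segment than audio clips: text0 [audio] text1 [audio] text2
    assert len(parts) == num_audios + 1, "locator count must match the number of audios"
    return parts

print(split_context("what is the transcription of the [audio] and [audio]?",
                    "[audio]", num_audios=2))
# ['what is the transcription of the ', ' and ', '?']
```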

### Training

There are several configs for training a SpeechLLM:
- `conf/modular_audio_gpt_config_peft.yaml`: a config for training a SpeechLLM with PEFT (e.g., LoRA), where you don't want to tune the whole LLM but still want to adapt the LLM to your needs.
- `conf/modular_audio_gpt_config_sft.yaml`: a config for training a SpeechLLM without PEFT, where you might want to tune the whole LLM, or simply freeze it and use it as is.
- `conf/modular_audio_gpt_multi_enc_config_peft.yaml`: a config for training a SpeechLLM with multiple audio encoders and PEFT, where you can add speaker embeddings to the audio embeddings. Currently only TitaNet is supported as the speaker encoder.

With any config, you can set the following flags to control which components to train or freeze (a sketch of what freezing amounts to follows the list):
- `model.freeze_llm` # Generally set to `True` unless you want to fine-tune the whole LLM.
- `model.freeze_audio_encoder` # Generally set to `False` unless you want to freeze the audio encoder.
- `model.freeze_modality_adapter` # Generally set to `False` since we want to train the modality adapter.
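
As a rough illustration, freezing a component typically amounts to disabling gradients for its parameters; a minimal sketch, assuming a standard `torch.nn.Module`:

```python
import torch.nn as nn

def freeze(module: nn.Module) -> None:
    """Sketch of what a freeze flag implies; not the NeMo implementation."""
    for param in module.parameters():
        param.requires_grad = False
    module.eval()  # also disables dropout and batch-norm statistic updates
```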

In addition to the config file, you will also need to prepare the audio encoder and the LLM as `*.nemo` files.

To train a SpeechLLM that uses LoRA, you can run the following script:
```bash
MEGATRON_MODEL=/path/to/megatron-model.nemo
ASR_MODEL=/path/to/audio-model.nemo # only the encoder part will be loaded, e.g., stt_en_fastconformer_transducer_large.nemo

TRAIN_MANIFESTS="[/data/train_1.json,/data/train_2.json]"
VAL_MANIFESTS="[/data/dev_1.json,/data/dev_2.json]"
VAL_NAMES="[dev-1,dev-2]" # names to display when logging validation results for each dataset

# global_batch_size = micro_batch_size * num_gpus_per_node * num_nodes * accumulate_grad_batches
# micro_batch_size = batch_size_per_gpu
CUDA_VISIBLE_DEVICES="0,1" python modular_audio_gpt_train.py --config-path="./conf" --config-name "modular_audio_gpt_config_peft" \
    trainer.devices=-1 \
    model.freeze_audio_encoder=True \
    model.freeze_llm=True \
    model.global_batch_size=4 \
    model.micro_batch_size=2 \
    model.pretrained_audio_model=$ASR_MODEL \
    model.restore_from_path=$MEGATRON_MODEL \
    model.data.train_ds.manifest_filepath=$TRAIN_MANIFESTS \
    model.data.validation_ds.manifest_filepath=$VAL_MANIFESTS \
    ++model.data.validation_ds.names=$VAL_NAMES
```

You can also use tarred datasets for faster training: convert regular NeMo datasets with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/speech_recognition/convert_to_tarred_audio_dataset.py) and follow the same dataset settings as shown in the script. Note that `accumulate_grad_batches` is set automatically by the model based on `global_batch_size` and `micro_batch_size`, so there's no need to calculate and set `trainer.accumulate_grad_batches` manually.
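
As a sanity check on the batch-size arithmetic above, here is a hedged sketch of how `accumulate_grad_batches` falls out of the other values, assuming pure data parallelism; the exact logic in NeMo may differ, e.g. under model parallelism.

```python
def derive_accumulate_grad_batches(global_bs: int, micro_bs: int,
                                   devices: int, num_nodes: int) -> int:
    # Assumes pure data parallelism (no tensor/pipeline parallel splitting).
    data_parallel_size = devices * num_nodes
    assert global_bs % (micro_bs * data_parallel_size) == 0
    return global_bs // (micro_bs * data_parallel_size)

# With the example command above: global=4, micro=2, 2 GPUs, 1 node -> 1
print(derive_accumulate_grad_batches(4, 2, 2, 1))
```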


#### **Multi-task Training**

To use a context file, set `++model.data.train_ds.context_file=<path to the context file>` on the command line, or use multiple context files with `++model.data.train_ds.context_file=[<path to context file1>,<path to context file2>,...]`. If the number of context files equals the number of provided datasets, the dataloader will assign each context file to a dataset; otherwise, it will randomly pick a context file from all provided ones for each audio sample. Using multiple context files is useful for training on multiple tasks, where each task has its own set of prompts. Meanwhile, you can control the weights of different tasks/datasets by using concatenated tarred datasets, which let you assign weights to datasets as follows:
```bash
++model.data.train_ds.is_tarred=True \
++model.data.train_ds.is_concat=True \
++model.data.train_ds.manifest_filepath=[/path/to/data1/tarred_audio_manifest.json,/path/to/data2/tarred_audio_manifest.json] \
++model.data.train_ds.tarred_audio_filepaths=[/path/to/data1/audio__OP_0..1023_CL_.tar,/path/to/data2/audio__OP_0..1023_CL_.tar] \
++model.data.train_ds.concat_sampling_technique='random' \
++model.data.train_ds.concat_sampling_probabilities=[0.4,0.6]
```
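
Conceptually, `concat_sampling_probabilities` controls how often each dataset is drawn from. A toy sketch of `'random'` concat sampling (illustrative, not the NeMo implementation):

```python
import random

def pick_source(datasets, probabilities):
    # Draw the source dataset for the next sample according to the given weights.
    return random.choices(datasets, weights=probabilities, k=1)[0]

counts = {"data1": 0, "data2": 0}
for _ in range(10_000):
    counts[pick_source(["data1", "data2"], [0.4, 0.6])] += 1
print(counts)  # roughly {'data1': 4000, 'data2': 6000}
```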

#### **Available Audio Encoders**

Currently, all NeMo ASR models are supported. Other models may also work if they have an `encoder` attribute that returns a sequence of audio embeddings and a `preprocessor` that takes raw audio and returns a sequence of features for the encoder; the model should also have a `cfg` attribute that returns an `omegaconf.DictConfig` object with the model configuration. In addition to a local model, you can also set `pretrained_audio_model` to a model from NGC (e.g., `stt_en_fastconformer_transducer_large`) or Hugging Face (e.g., `nvidia/parakeet-rnnt-1.1b`), and the script will download the model and use it for training.
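
That expected surface can be summarized as a duck-typed protocol. A hedged sketch, assuming `omegaconf` is installed; only the three attribute names come from the text above:

```python
from typing import Any, Protocol

from omegaconf import DictConfig

class AudioEncoderModel(Protocol):
    """Minimal surface a custom audio model needs to expose (sketch)."""
    cfg: DictConfig     # model configuration
    preprocessor: Any   # raw audio -> features for the encoder
    encoder: Any        # features -> sequence of audio embeddings
```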


### Inference

The inference script is `modular_audio_gpt_eval.py`, and the corresponding config file is `conf/modular_audio_gpt_config_eval.yaml`; you mainly need to set the `model.data.test_ds` fields as well as the paths to the checkpoints.

#### **Inference with Intermediate Checkpoints**

If you want to perform inference with intermediate checkpoints, where there's no single NeMo checkpoint file that contains all the model parameters, you can use the following script to load each component from its own checkpoint file and perform inference:

```bash
MEGATRON_CKPT=/path/to/megatron-llm.nemo
ALM_DIR=/path/to/nemo_experiments/job_name
# below is the path to the config used during training
ALM_YAML=$ALM_DIR/version_0/hparams.yaml
# this checkpoint file only contains the trainable params; the backslashes are used to avoid hydra parsing errors
ALM_CKPT="$ALM_DIR/checkpoints/AudioGPT--validation_wer\=0.2-step\=100000-epoch\=0-last.ckpt"

TEST_MANIFESTS="[/data/test_1.json,/data/test_2.json]"
TEST_NAMES="[test-1,test-2]"

CUDA_VISIBLE_DEVICES=0 python modular_audio_gpt_eval.py \
model.restore_from_path=$MEGATRON_CKPT \
model.peft.restore_from_path=$ALM_CKPT \
model.peft.restore_from_hparams_path=$ALM_YAML \
model.data.test_ds.manifest_filepath=$TEST_MANIFESTS \
model.data.test_ds.names=$TEST_NAMES \
model.data.test_ds.metric.name="bleu" \
model.data.test_ds.global_batch_size=8 \
model.data.test_ds.micro_batch_size=8 \
model.data.test_ds.tokens_to_generate=256 \
++inference.greedy=False \
++inference.top_k=50 \
++inference.top_p=0.95 \
++inference.temperature=0.4 \
++inference.repetition_penalty=1.2 \
++model.data.test_ds.output_dir=${ALM_DIR}
```
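
For intuition on the sampling flags (`top_k`, `top_p`, `temperature`), here is a hedged sketch of the standard filtering they usually denote; the actual NeMo inference code may differ.

```python
import torch

def filter_logits(logits: torch.Tensor, top_k: int, top_p: float,
                  temperature: float) -> torch.Tensor:
    """Standard top-k / nucleus filtering over a 1-D [vocab] logits vector (sketch)."""
    logits = logits / temperature
    if top_k > 0:
        kth_value = torch.topk(logits, top_k).values[-1]
        logits[logits < kth_value] = float("-inf")
    if top_p < 1.0:
        sorted_logits, sorted_idx = torch.sort(logits, descending=True)
        cum_probs = torch.softmax(sorted_logits, dim=-1).cumsum(dim=-1)
        remove = cum_probs > top_p
        remove[1:] = remove[:-1].clone()  # shift right so the first token above
        remove[0] = False                 # the threshold is always kept
        logits[sorted_idx[remove]] = float("-inf")
    return logits
```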

If you froze the audio encoder during training, you will also need to add the following line to the above script:
```bash
++model.pretrained_audio_model=/path/to/audio/model.nemo
```

If you want to save the intermediate checkpoints to a single NeMo checkpoint file, you can add the following line to the above script:
```bash
++save_to_nemo=/path/to/save/model.nemo
```

#### **Inference with Complete SpeechLLM Checkpoints**

If you want to load a trained SpeechLLM from the cloud, you can use the following script:
```bash
TEST_MANIFESTS="[/data/test_1.json,/data/test_2.json]"
TEST_NAMES="[test-1,test-2]"

CUDA_VISIBLE_DEVICES=0 python modular_audio_gpt_eval.py \
model.from_pretrained="speechllm_fc_llama2_7b" \
model.data.test_ds.manifest_filepath=$TEST_MANIFESTS \
model.data.test_ds.names=$TEST_NAMES \
model.data.test_ds.global_batch_size=8 \
model.data.test_ds.micro_batch_size=8 \
model.data.test_ds.tokens_to_generate=256 \
++inference.greedy=False \
++inference.top_k=50 \
++inference.top_p=0.95 \
++inference.temperature=0.4 \
++inference.repetition_penalty=1.2 \
++model.data.test_ds.output_dir="./test_outputs"
```

If you have a local `.nemo` file, you can use `model.restore_from_path=/path/to/model.nemo` in place of the line `model.from_pretrained="speechllm_fc_llama2_7b"` in the example above.


## Reference
[1] Chen, Z.\*, Huang, H.\*, Andrusenko, A., Hrinchuk, O., Puvvada, K.C., Li, J., Ghosh, S., Balam, J. and Ginsburg, B., 2023. SALM: Speech-augmented Language Model with In-context Learning for Speech Recognition and Translation. ICASSP'24.
128 changes: 128 additions & 0 deletions examples/multimodal/speech_llm/conf/modular_audio_gpt_config_eval.yaml
@@ -0,0 +1,128 @@
# this config is used to perform inference on SpeechLLM checkpoints
name: megatron_audio_gpt_eval

trainer:
  devices: 1
  accelerator: gpu
  num_nodes: 1
  precision: bf16
  logger: False # logger provided by exp_manager
  enable_checkpointing: False
  use_distributed_sampler: False
  max_epochs: 1
  max_steps: 1000000
  log_every_n_steps: 10 # frequency with which training steps are logged
  val_check_interval: 1.0 # if an int n > 1, run val every n training steps; if a float in 0.0-1.0, run val at that fraction of each epoch, e.g. 0.25 runs val every quarter epoch
  gradient_clip_val: 1.0

exp_manager:
  explicit_log_dir: null
  exp_dir: null
  name: ${name}
  create_wandb_logger: False
  wandb_logger_kwargs:
    project: null
    name: null
  resume_if_exists: True
  resume_ignore_no_checkpoint: True
  create_checkpoint_callback: True
  checkpoint_callback_params:
    monitor: validation_${model.data.validation_ds.metric.name}
    save_top_k: 1
    mode: min
    save_nemo_on_train_end: True
    filename: '${name}--{${exp_manager.checkpoint_callback_params.monitor}:.3f}-{step}'
    model_parallel_size: ${model.tensor_model_parallel_size}
    always_save_nemo: True
    save_best_model: False

model:
  from_pretrained: null # pretrained model name on NGC or HF
  restore_from_path: null # Path to an existing .nemo model you wish to add new tasks to or run inference with
  resume_from_checkpoint: null # The path to a checkpoint file to continue the training, restores the whole state including the epoch, step, LR schedulers, apex, etc.
  pretrained_audio_model: null # Path to a .nemo model for audio encoder

  seed: 1234
  tensor_model_parallel_size: 1 # intra-layer model parallelism
  pipeline_model_parallel_size: 1 # inter-layer model parallelism

  global_batch_size: 1
  micro_batch_size: 1
  sync_batch_comm: False
  megatron_amp_O2: False

  ## Sequence Parallelism
  # Makes tensor parallelism more memory efficient for LLMs (20B+) by parallelizing layer norms and dropout sequentially
  # See Reducing Activation Recomputation in Large Transformer Models: https://arxiv.org/abs/2205.05198 for more details.
  sequence_parallel: False

  ## Activation Checkpoint
  activations_checkpoint_granularity: null # 'selective' or 'full'
  activations_checkpoint_method: null # 'uniform', 'block', not used with 'selective'
  # 'uniform' divides the total number of transformer layers and checkpoints the input activation
  # of each chunk at the specified granularity
  # 'block' checkpoints the specified number of layers per pipeline stage at the specified granularity
  activations_checkpoint_num_layers: null # not used with 'selective'
  activations_checkpoint_layers_per_pipeline: null
  answer_only_loss: False # not used right now
  gradient_as_bucket_view: False

  hidden_dropout: 0.0
  attention_dropout: 0.0
  ffn_dropout: 0.0

  peft: # keep these basic params for reusing in both sft and peft SpeechLMs
    restore_from_path: null
    restore_from_hparams_path: null
    restore_from_ckpt:
      checkpoint_name: null
      checkpoint_dir: null

  data:
    test_ds:
      manifest_filepath: ??? # Path to a list of JSONL files corresponding to the source data. Data format is identical to train_ds.
      names: null # Names of the corresponding datasets used to log metrics.
      global_batch_size: 1
      micro_batch_size: 1
      shuffle: False
      num_workers: 0
      pin_memory: True
      max_seq_length: 2048
      min_seq_length: 1
      drop_last: False
      end_string: ${data.train_ds.end_string} # don't change, let hydra resolve from saved config
      context_key: ${data.train_ds.context_key} # don't change, let hydra resolve from saved config
      answer_key: ${data.train_ds.answer_key} # don't change, let hydra resolve from saved config
      add_eos: ${data.train_ds.add_eos} # don't change, let hydra resolve from saved config
      add_sep: ${data.train_ds.add_sep} # don't change, let hydra resolve from saved config
      add_bos: ${data.train_ds.add_bos} # don't change, let hydra resolve from saved config
      separate_prompt_and_response_with_newline: ${data.train_ds.separate_prompt_and_response_with_newline}
      write_predictions_to_file: True
      output_file_path_prefix: "preds" # Prefix of the file to write predictions to.
      truncation_field: ${data.train_ds.truncation_field} # don't change, let hydra resolve from saved config
      index_mapping_dir: null # Path to a directory to write index mapping files.
      prompt_template: ${data.train_ds.prompt_template} # don't change, let hydra resolve from saved config
      tokens_to_generate: 512
      log_every_n_steps: 1
      sample_rate: ${data.train_ds.sample_rate} # don't change, let hydra resolve from saved config
      audio_locator: null # set it to allow multiple audios in a sample, e.g. '|audio|', and use it in the context field of manifest to specify the locations of audios (`audio_filepath` is a list of audios).

      metric:
        name: "bleu" # Name of the evaluation metric to use. Options: ['exact_string_match', 'loss', 'wer', 'bleu', 'rouge']
        average: null # Average the metric over the dataset. Options: ['macro', 'micro']. Works only for 'F1', 'accuracy' etc. Refer to torchmetrics for metrics where this is supported.
        num_classes: null

save_as_nemo: null # optional string, set to save the whole model into a single nemo file

inference:
  greedy: True # Whether or not to use sampling; use greedy decoding otherwise
  top_k: 0 # The number of highest probability vocabulary tokens to keep for top-k filtering.
  top_p: 0.9 # If set to a float < 1, only the most probable tokens with probabilities that add up to top_p or higher are kept for generation.
  temperature: 1.0 # sampling temperature
  all_probs: False # whether to return the log probs for all the tokens in the vocab
  repetition_penalty: 1.2 # The parameter for repetition penalty. 1.0 means no penalty.
  min_tokens_to_generate: 0 # The minimum length of the sequence to be generated.
  compute_logprob: False # a flag used to compute the logprob of all the input text, a very special case of running inference, default False
  outfile_path: output.txt
  compute_attention_mask: True