
Commit 01fc632

yaoyu-33, popcornell, parthmannan, Victor49152, and hXl3s committed
Add All Multimodal Source Code Part 2: Text to image, x to nerf (NVIDIA#7970)
* Update README.md: output_path --> output_manifest_filepath (#7442)
  Signed-off-by: Samuele Cornell
* Update FlashAttention API to match FlashAttention v2
* Multiple fixes for multimodal
* Fix CI inductor issue and update to torch compile
* Remove error suppression
* Fix complaint about the precision plugin when the conversion config uses fp16
* Fix FlashAttention v2 API usage
* Initial release of the content filtering model
* Add synthetic dataloader for precached and online modes
* Mingyuanm/dreambooth opt
* Add Llama 2 support in NeVA training
* Fix sampler length
* Fix all precision issues in NeMo Multimodal
* Add RoPE dynamic linear scaling (#7437)
  * Add dynamic linear scaling; fix bugs
  * [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
  Signed-off-by: Cheng-Ping Hsieh; Co-authored-by: pre-commit-ci[bot], Yang Zhang
* Fix None dataloader issue in PTL 2.0 (#7455)
  * Update the values of self._validation_dl and self._test_dl as well
  Signed-off-by: KunalDhawan; Co-authored-by: pre-commit-ci[bot]
* [ASR] Confidence measure -> method renames (#7434)
  Signed-off-by: Aleksandr Laptev; Co-authored-by: pre-commit-ci[bot]
* Add documentation steps for getting the 'SF Bilingual Speech' dataset (#7378)
  * Update datasets.rst: add a link to a tutorial demonstrating detailed data prep steps
  Signed-off-by: Robin Dong, Xuesong Yang; Co-authored-by: Xuesong Yang
* RNN-T confidence and alignment bugfix (#7381)
  * New frame_confidence and alignments lists are now always created after the while loop
  * Tests added
  Signed-off-by: Aleksandr Laptev
* Fix resume from checkpoint in exp_manager (#7424) (#7426)
  Signed-off-by: Abhishree; Co-authored-by: Abhishree Thittenamane, Eric Harper
* Fix checking of cuda/cpu device for inputs of Decoder (#7444)
  * Update tacotron2.py
  Signed-off-by: Robin Dong, Jason; Co-authored-by: Jason
* Fix failure of ljspeech's get_data.py (#7430)
  Signed-off-by: Robin Dong; Co-authored-by: pre-commit-ci[bot]
* [TTS] Fix audio codec type checks (#7373)
  * Fix audio codec tests
  Signed-off-by: Ryan
* [TTS] Add dataset to path of logged artifacts (#7462)
  * Revert axis name back to "Audio Frames"
  Signed-off-by: Ryan
* Fix SFT dataset truncation (#7464)
  Signed-off-by: Cheng-Ping Hsieh; Co-authored-by: pre-commit-ci[bot]
* Automatic Lip Reading Recognition (ALR) - ASR/CV (Visual ASR) (#7330)
  * striding_conv1d_k5 and dw_striding_conv1d_k5 subsampling; transpose conv1d inputs
  * Update subsampling.py: change striding_conv1d_k5 to striding_conv1d
  * Add CV branch, video manifest, and collection classes
  * Add test_step_outputs
  * Fix manifest bug when having only audio or only videos
  * Clean references; freeze/unfreeze/transcribe CV models
  * Fix manifest get_full_path bug
  * Guard torchvision import
  * Add _video_speech_collate_fn in cv/data/video_to_text.py
  * Add self.out = None to ASR subsampling
  * Move cv -> multimodal/speech_cv branch
  Signed-off-by: Maxime Burchi (mburchi); Co-authored-by: pre-commit-ci[bot], Igor Gitman
* HF StarCoder to NeMo conversion script (#7421)
  * Script to convert an HF StarCoder checkpoint to NeMo, plus a conversion test
  * Catch up with save_to changes; don't abbreviate args, for clarity
  * Configurable precision: BF16 vs FP32
  Signed-off-by: Jan Lasek; Co-authored-by: pre-commit-ci[bot]
* Fix bug when loading dist ckpt in PEFT (#7452)
  Signed-off-by: Hongbin Liu
* Fix adding positional embeddings in-place in the transformer module (#7440)
  Signed-off-by: Tamerlan Tabolov; Co-authored-by: Cheng-Ping Hsieh
* Fix (#7478)
  Signed-off-by: Cheng-Ping Hsieh
* Add sleep (#7498) (#7499)
  * Add sleep onto config instead; add comment
  Signed-off-by: Gerald Shen
* Fix exp_manager check for sleep (#7503) (#7504)
  Signed-off-by: Somshubra Majumdar
* Bugfix: trainer.accelerator=auto from None (#7492) (#7493)
  Signed-off-by: Xuesong Yang
* [doc] Fix broken link (#7481)
  Signed-off-by: Stas Bekman
* [TTS] Read audio as int32 to avoid FLAC read errors (#7477)
  * Add comment about read failures
  Signed-off-by: Ryan
* Add dataset 'AISHELL-3' from OpenSLR for training Mandarin TTS (#7409)
  * Train the 'AISHELL-3' dataset with multiple speakers
  * Update get_data.py: update the copyright header and add a disclaimer
  * Add a new configuration file for AISHELL-3 with multi-speaker FastPitch
  Signed-off-by: Robin Dong, Xuesong Yang; Co-authored-by: pre-commit-ci[bot]
* dllogger: log on rank 0 only (#7513)
  Signed-off-by: Stas Bekman
* Fix TTS FastPitch tutorial (#7494) (#7516)
  Signed-off-by: Cheng-Ping Hsieh
* Fix get_dist() tensor dimension (#7506) (#7515)
  Signed-off-by: Jocelyn Huang
* Bugfix: specify trainer.strategy=auto when devices=1 (#7509) (#7512)
  Signed-off-by: Xuesong Yang
* Fix (#7511)
  Signed-off-by: Abhinav Khattar
* [TTS] Fix FastPitch data prep tutorial (#7524)
  Signed-off-by: Ryan
* Add Italian tokenization (#7486)
  * Add more Italian IPA lexicon entries
  * Fix error deletion; add test
  Signed-off-by: GiacomoLeoneMaria; Co-authored-by: pre-commit-ci[bot]
* Replace None strategy with auto in tutorial notebooks (#7521) (#7527)
  Signed-off-by: Abhishree Thittenamane
* Unpin setuptools (#7534) (#7535)
  Signed-off-by: fayejf
* Remove auto-generated examples (#7510)
  * Explicitly remove autogenerated examples for data-parallel evaluation
  * Mark autogenerated examples and remove them for tests
  Signed-off-by: arendu; Co-authored-by: pre-commit-ci[bot]
* Add the `strategy` argument to `MegatronGPTModel.generate()` (#7264)
  It is passed as an explicit argument rather than through `**strategy_args` so as to ensure someone cannot accidentally pass other arguments that would end up being ignored. It is a keyword-only argument, so that if we want to update the signature to `**strategy_args` in the future, we can do it without breaking code.
  Signed-off-by: Olivier Delalleau
* Fix PTL 2.0 related ASR bugs in r1.21.0: val metrics logging, None dataloader issue (#7531) (#7533)
  * Fix None dataloader issue in PTL 2.0
  * PTL 2.0 logging fixes for rnnt_models
  Signed-off-by: Kunal Dhawan; Co-authored-by: Nithin Rao
* gpus -> devices (#7542) (#7545)
  Signed-off-by: Nithin Rao Koluguri
* Update FFmpeg version to fix an issue with torchaudio (#7551) (#7553)
  Signed-off-by: Somshubra Majumdar
* PEFT GPT & T5 refactor (#7308)
  * Initial implementation of the add_adapters API; correct type hints
  * Add config in add_adapters for save and load; remove AdapterConfig to avoid an import error, then add it back and move the adapter mixin to the SFT model
  * Add NLPSaveRestoreConnector as the default in NLPModel.restore_from; add restore_from_nemo_with_adapter and a test script
  * Rename the T5 file and classes to be consistent with GPT; add a T5 SFT dataset and support for single-file format with T5SFTDataset; various small changes to make T5 SFT work like GPT SFT
  * Add an adapter evaluation test script; add MultiAdapterConfig for IA3 and fix a builder issue
  * Make p-tuning for T5SFTModel work using the mixin; add IA3_Adapter for AdapterName; add adapter names for p-tuning and attention adapters
  * Make the test script GPT/T5-agnostic; add a layer selection feature; integrate adapter name and config
  * Update the GPT and T5 PEFT tuning scripts to the new API; fix an IA3 layer selection issue
  * Override state_dict on the SFT model instead of the mixin; add loading adapters by adapter config; move the PEFT config map out of the example script and derive the config from the NeMo adapter automatically; move PEFTConfig to a new file
  * Fix checkpoint save/load for T5; rename add_adapters -> add_adapter
  * Add weight tying; PEFT API proposal; update tuning scripts
  * Move merge_cfg_with to the mixin class, since it applies to both GPT and T5 and requires the model class for restore
  * Add mcore_gpt support for NLPAdapterMixin; rename variables to distinguish "peft" from "adapter"; override load_adapters to support the add_adapter name change
  * Update tuning and eval scripts for adapter save/load; add p-tuning on the first stage only
  * Add a LoRA tutorial and landing page; fix layer selection for mcore; fix resume training
  * Add an mcore condition in sharded_state_dict to make SFT work
  * First edit of lora_tutorial.md for NeMo PEFT documentation (hkelly33)
  * Rename Adapter to AttentionAdapter to avoid confusion in docs; change load_adapters to load .nemo (and later also .ckpt); remove setup_complete changes in load_adapters
  * Add a quick-start guide; update quick_start.md per Chen Cui
  * Add an inference config merger and tutorial; add a docstring for NLPAdapterModelMixin and a deprecation warning on MegatronGPTPEFTModel
  * Add supported_methods.md and update other documentation; minor updates to landing_page.md (Adi Renduchintala)
  * Rename canonical adapters; remove the hard mcore dependency
  * [PATCH] Move the microbatch calculator from Apex to NeMo and remove the Apex dependency in GPT and T5 SFT models (later reverted because the mixed Apex+NeMo microbatch calculator broke some CI tests)
  * Render docstrings; add missing virtual_tokens for p-tuning; add lightning_fabric to make docstring rendering work
  * Update GPT/T5 PEFT tuning and eval scripts; fix the eval config; make the predict step behave the same as the test step
  * Make the LoRA tutorial work in a notebook; add NLPDDPStrategyNotebook and trainer-builder logic to use it
  * Fix a microbatch calculator bug for inference after training
  * Convert Markdown files to RST and incorporate them into the docs
  * Remove save_adapters, since adapter weights are saved automatically during training; initialize weights from a checkpoint instead of randomly
  * Multiple fields can form a context (#7147):
    * List of context fields and a flexible prompt template (arendu)
    * Add multiple truncation fields and middle truncation; stay compatible with old checkpoints
    * Fix a tokenize/detokenize issue; remove detokenization and add truncation augmentation
    * Add a tokenizer space_sensitive attribute; use re; change assert logic
    * Add example and comment; remove context_key; remove random truncation; remove the merge function; fix handling of a None template
  * Revert config changes; remove an accidental breakpoint
  * Support TP>1 loading; infer the adapter type from the checkpoint during eval; break up add_adapter
  * Enable interpolation of train_ds and validation_ds; update the metric calculation script to conform to the single-file eval format
  * Update the LoRA notebook for the updated merge_inference_cfg; variable name change in nlp_adapter_mixins.py (Chen Cui)
  * Turn off the grad scaler for PP to match old scripts
  * Remove PEFTSaveRestoreConnector, since its functionality is covered by the new mixin class; remove the resume_from_checkpoint check, covered in #7335
  * Fix config mistakes and code-check warnings; add copyright headers
  * Add more deprecation notices; consolidate PEFT and SFT scripts; update CI tests
  * Fix GPT and T5 validation with any metric other than loss; support pre-extracted checkpoints
  Signed-off-by: jasonwan, hkelly33, Adi Renduchintala, Cheng-Ping Hsieh, Chen Cui; Co-authored-by: Chen Cui, pre-commit-ci[bot], Marc Romeyn, Yuanzhe Dong
* Fix a typo (#7496)
  Signed-off-by: BestJuly
* [TTS] Remove curly braces from ${BRANCH} in Jupyter notebook cells; remove installation of pynini (#7554) (#7560)
  Signed-off-by: Xuesong Yang
* Add YouTube embed URL (#7570)
  Signed-off-by: Xuesong Yang
* Remap speakers to a continuous range of speaker_id for the AISHELL-3 dataset (#7536)
  * Add a new key/value pair to record the raw speaker for AISHELL-3
  Signed-off-by: Robin Dong; Co-authored-by: pre-commit-ci[bot]
* Fix validation_step_outputs initialization for multi-dataloader (#7546) (#7572)
  * Correct validation_step_outputs initialization for multi-dataloader; change the display kernel
  * Update logic for validation and test step outputs
  * Revert multi-dataloader changes in the multilang ASR notebook
  Signed-off-by: Kunal Dhawan, Somshubra Majumdar; Co-authored-by: pre-commit-ci[bot]
* Append the output of the val step to self.validation_step_outputs (#7530) (#7532)
  Signed-off-by: Abhishree Thittenamane
* [TTS] Fix the trainer's accelerator and strategy (#7569) (#7574)
  Signed-off-by: Xuesong Yang
* Append val/test output to an instance variable in EncDecSpeakerLabelModel (#7562) (#7573)
  * Handle the test case in evaluation_step; replace type() with isinstance()
  Signed-off-by: Abhishree Thittenamane
* Fix CustomProgressBar for resume (#7427) (#7522)
  * Fix CustomProgressBar for resume and multiple epochs; edit num_training_batches
  * Use max_steps as the progress-bar total on resume
  Signed-off-by: Abhishree Thittenamane; Co-authored-by: pre-commit-ci[bot]
* Fix typos in NFA and speech enhancement tutorials (#7580) (#7583)
  Signed-off-by: Elena Rastorgueva
* Add strategy ddp_find_unused_parameters_true for glue_benchmark.py (#7454) (#7461)
  Signed-off-by: Abhishree Thittenamane
* Update strategy (#7577) (#7578)
  Signed-off-by: Nithin Rao Koluguri
* Fix typos (#7581)
* Change HiFi-GAN finetune strategy to ddp_find_unused_parameters_true (#7579) (#7584)
  Signed-off-by: Cheng-Ping Hsieh
* [BugFix] Add missing quotes for auto strategy in tutorial notebooks (#7541) (#7548)
  * Revert trainer.gpus to trainer.devices in Self_Supervised_Pre_Training.ipynb
  Signed-off-by: Abhishree Thittenamane
* Add build os key and tools; update to stable version (#7596) (#7599)
  Signed-off-by: Nithin Rao Koluguri
* StarCoder SFT test + bump PyTorch NGC image to 23.09 (#7540)
  * Add SFT StarCoder test; test with the pyt:23.09 container
  * Remove the _modify_config call, as it is covered in load_from_nemo just below
  Signed-off-by: Jan Lasek
* Defaults changed (#7600)
  Signed-off-by: arendu
* Add ItalianPhonemesTokenizer (#7587)
  * Fix Italian phonemes; add test
  Signed-off-by: GiacomoLeoneMaria; Co-authored-by: pre-commit-ci[bot], Xuesong Yang
* Best checkpoint fix (#7564) (#7588)
  Signed-off-by: Dmytro Pykhtar (dimapihtar)
* Add files via upload: specifies the branch (#7598)
  Signed-off-by: George; Co-authored-by: Xuesong Yang
* Fix validation in G2PModel and ThutmoseTaggerModel (#7597) (#7606)
  Signed-off-by: Abhishree Thittenamane
* Broadcast loss only when using pipeline parallelism and within the pipeline-parallel domain (#7576) (#7586)
  Signed-off-by: Sangkug Lym; Co-authored-by: pre-commit-ci[bot]
* Safeguard nemo_text_processing installation on ARM (#7485)
  * Update the check
  Signed-off-by: Jason; Co-authored-by: pre-commit-ci[bot]
* Bound transformers version in requirements (#7620)
  Signed-off-by: Abhishree
* Fix Llama 2 70B LoRA tuning bug (#7622)
  * Update peft_config.py brackets
  Signed-off-by: Chen Cui, Adi Renduchintala
* Fix import error: no module named model_utils (#7629)
  Signed-off-by: Mehadi Hasan Menon
* Add FC large LS models (#7641)
  Signed-off-by: Nithin Rao Koluguri
* Bugfix: trainer.gpus, trainer.strategy, trainer.accelerator (#7621) (#7642)
  * [TTS] Bugfix for the Tacotron 2 tutorial due to PTL 2.0; trainer.gpus -> trainer.devices; fix related tutorial bugs
  Signed-off-by: Xuesong Yang
* Fix SSL models PTL monitor val through logging (#7608) (#7614)
  Signed-off-by: Nithin Rao Koluguri; Co-authored-by: Eric Harper, Xuesong Yang
* Fix metrics for SE tutorial (#7604) (#7612)
  Signed-off-by: Ante Jukić
* Add ddp_find_unused_parameters=True and change accelerator to auto (#7623) (#7644)
  * Also add ddp_find_unused_parameters=True for normalization_as_tagging_train.py
  Signed-off-by: Abhishree Thittenamane
* Fix py3.11 dataclasses issue (#7616)
  * Update ASR and TTS configs to support Python 3.11 (#7582)
  * Guard MeCab and Ipadic
  * Fix remaining ASR dataclasses and scripts
  Signed-off-by: Somshubra Majumdar
--------- Signed-off-by: smajumdar <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * Update name to ConfidenceMethodConfig Signed-off-by: smajumdar <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Broadcast loss only when using pipeline parallelism and within the pipeline parallel domain (#7576) (#7586) * Broadcast loss only when using pipeline parallelism and within the pipeline parallel domain * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --------- Signed-off-by: Sangkug Lym <[email protected]> Co-authored-by: Sangkug Lym <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * Safeguard nemo_text_processing installation on ARM (#7485) * safeguard nemo_text_processing installing Signed-off-by: Jason <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * update check Signed-off-by: Jason <[email protected]> --------- Signed-off-by: Jason <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * Fix changes to confidence measure Signed-off-by: smajumdar <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --------- Signed-off-by: smajumdar <[email protected]> Signed-off-by: Sangkug Lym <[email protected]> Signed-off-by: Jason <[email protected]> Co-authored-by: Somshubra Majumdar <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Sangkug Lym <[email 
protected]> Co-authored-by: Jason <[email protected]> * [Stable Diffusion/ControlNet] Enable O2 training for SD and Fix ControlNet CI failure * Mingyuanm/dreambooth fix * Fix NeMo CI Infer Issue * DreamFusion * Move neva export changes * Add Imagen Synthetic Dataloader * Add VITWrapper and export stuff to wrapper * Update neva with megatron-core support * Fix issues with Dockerfile (#7650) (#7652) Signed-off-by: smajumdar <[email protected]> Co-authored-by: Somshubra Majumdar <[email protected]> * [ASR] RNN-T greedy decoding max_frames fix for alignment and confidence (#7635) * decoding and test fix Signed-off-by: Aleksandr Laptev <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --------- Signed-off-by: Aleksandr Laptev <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * [ASR] Fix type error in jasper (#7636) (#7653) Signed-off-by: Ryan <[email protected]> Co-authored-by: Ryan Langman <[email protected]> * [TTS] Add STFT and SI-SDR loss to audio codec recipe (#7468) * [TTS] Add STFT and SI-SDR loss to audio codec recipe Signed-off-by: Ryan <[email protected]> * [TTS] Fix STFT resolution Signed-off-by: Ryan <[email protected]> * [TTS] Fix training metric logging Signed-off-by: Ryan <[email protected]> * [TTS] Add docstring to mel and stft losses Signed-off-by: Ryan <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --------- Signed-off-by: Ryan <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * Create per.py (#7538) * Move model precision copy (#7336) * move cfg precision set to megatron base model Signed-off-by: Maanu Grover <[email protected]> * remove copy from other models Signed-off-by: Maanu Grover <[email protected]> * modify attribute not arg 
Signed-off-by: Maanu Grover <[email protected]> * fix gpt model test for ptl 2.0 Signed-off-by: Maanu Grover <[email protected]> * rename function and add docstring Signed-off-by: Maanu Grover <[email protected]> * replace precision to dtype conditionals with func call Signed-off-by: Maanu Grover <[email protected]> * unnecessary function and cfg reset Signed-off-by: Maanu Grover <[email protected]> * set default value Signed-off-by: Maanu Grover <[email protected]> * fix precision lookup in a few more places Signed-off-by: Maanu Grover <[email protected]> * rename mapping function Signed-off-by: Maanu Grover <[email protected]> * ununsed import Signed-off-by: Maanu Grover <[email protected]> * save torch datatype to model Signed-off-by: Maanu Grover <[email protected]> * set weights precision wrt amp o2 Signed-off-by: Maanu Grover <[email protected]> * Revert "set weights precision wrt amp o2" This reverts commit 313a4bfe5eb69d771a6d2433898c0685836aef5c. Signed-off-by: Maanu Grover <[email protected]> * revert half precision at inference attempt Signed-off-by: Maanu Grover <[email protected]> * move autocast dtype to base model Signed-off-by: Maanu Grover <[email protected]> * move params dtype to base model, enable fp16 O2 inf Signed-off-by: Maanu Grover <[email protected]> * unused imports Signed-off-by: Maanu Grover <[email protected]> --------- Signed-off-by: Maanu Grover <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * Fix PEFT checkpoint loading (#7388) * Fix PEFT checkpoint loading Signed-off-by: Jason Wang <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --------- Signed-off-by: Jason Wang <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Signed-off-by: Sasha Meister <[email protected]> * Use distributed optimizer support for multiple dtypes (#7359) * Update distopt wrapper with 
multiple dtype support Remove manual handling of separate FP32 optimizer. Signed-off-by: Tim Moon <[email protected]> * Use distopt support for contiguous buffers with multiple dtypes Signed-off-by: Tim Moon <[email protected]> * Fix typo Signed-off-by: Tim Moon <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Separate distopt buckets for first GPT layer and non-overlapped params Signed-off-by: Tim Moon <[email protected]> * Add distopt logic for int dtypes Signed-off-by: Tim Moon <[email protected]> * Update Apex commit Signed-off-by: Tim Moon <[email protected]> * Remove unused variables Signed-off-by: Tim Moon <[email protected]> * Update Apex commit in README and Jenkensfile Signed-off-by: Tim Moon <[email protected]> * Debug Dockerfile and Jenkinsfile Signed-off-by: Tim Moon <[email protected]> --------- Signed-off-by: Tim Moon <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Eric Harper <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * minor fix for llama ckpt conversion script (#7387) * minor fix for llama ckpt conversion script Signed-off-by: Jason Wang <[email protected]> * Update Jenkinsfile Signed-off-by: Jason Wang <[email protected]> * remove fast_swiglu configuration Signed-off-by: Jason Wang <[email protected]> --------- Signed-off-by: Jason Wang <[email protected]> Co-authored-by: Eric Harper <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * Fix wrong calling of librosa.get_duration() in notebook (#7376) Signed-off-by: Robin Dong <[email protected]> Co-authored-by: Somshubra Majumdar <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * [PATCH] PEFT import mcore (#7393) * [PATCH] PEFT import mcore Signed-off-by: Jason Wang <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, 
see https://pre-commit.ci --------- Signed-off-by: Jason Wang <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Signed-off-by: Sasha Meister <[email protected]> * Create per.py Script for calculation Punctuation Error Rate and related rates (correct rate, deletions rate, etc.) Signed-off-by: Sasha Meister <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci Signed-off-by: Sasha Meister <[email protected]> * [TTS] Added a callback for logging initial data (#7384) Signed-off-by: Ante Jukić <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * Update Core Commit (#7402) * Update Core Commit Signed-off-by: Abhinav Khattar <[email protected]> * update commit Signed-off-by: Abhinav Khattar <[email protected]> --------- Signed-off-by: Abhinav Khattar <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * Use cfg attribute in bert (#7394) * use cfg attribute instead of arg Signed-off-by: Maanu Grover <[email protected]> * use torch_dtype in place of cfg.precision Signed-off-by: Maanu Grover <[email protected]> * move precision copy before super constructor Signed-off-by: Maanu Grover <[email protected]> * use trainer arg Signed-off-by: Maanu Grover <[email protected]> --------- Signed-off-by: Maanu Grover <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * Add support for bias conversion in Swiglu models (#7386) * Add support for bias conversion in Swiglu models Signed-off-by: smajumdar <[email protected]> * Add support for auto extracting tokenizer model Signed-off-by: smajumdar <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Add support for auto extracting tokenizer model Signed-off-by: smajumdar <[email protected]> * Fix issue with missing 
tokenizer Signed-off-by: smajumdar <[email protected]> * Refactor Signed-off-by: smajumdar <[email protected]> * Refactor Signed-off-by: smajumdar <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --------- Signed-off-by: smajumdar <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Signed-off-by: Sasha Meister <[email protected]> * Update save_to and restore_from for dist checkpointing (#7343) * add dist ckpt to save to, in progress Signed-off-by: eharper <[email protected]> * move dist ckpt Signed-off-by: eharper <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * clean up Signed-off-by: eharper <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * update restore from, need to figure out how to initialize distributed Signed-off-by: eharper <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * launch distrib if needed when restoring dist ckpt Signed-off-by: eharper <[email protected]> * when using mcore we can change tp pp on the fly Signed-off-by: eharper <[email protected]> * add load_from_checkpoint support for dist ckpt Signed-off-by: eharper <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * update llama convert script to save dist .nemo Signed-off-by: eharper <[email protected]> * fix load dist ckpt Signed-off-by: jasonwan <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * setup TE TP groups if needed Signed-off-by: eharper <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * setup 
te tp groups if needed Signed-off-by: eharper <[email protected]> * remove import Signed-off-by: eharper <[email protected]> --------- Signed-off-by: eharper <[email protected]> Signed-off-by: jasonwan <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: jasonwan <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * fix forward for with mcore=false (#7403) Signed-off-by: Jimmy Zhang <[email protected]> Co-authored-by: Jimmy Zhang <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * Fix logging to remove 's/it' from progress bar in Megatron models and add train_step_timing (#7374) * Add CustomProgressBar class to exp_manager and trainer callbacks Signed-off-by: Abhishree <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix the progress bar to reflect total microbatch cnt Signed-off-by: Abhishree <[email protected]> * Modify CustomProgressBar class 1) Modify CustomProgressBar class to update progress bar per global_step instead of per microbatch 2) Add the callback to other megatron training/finetuning files that are not using MegatronTrainerBuilder Signed-off-by: Abhishree <[email protected]> * Add CustomProgressBar callback to tuning files Signed-off-by: Abhishree <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --------- Signed-off-by: Abhishree <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Signed-off-by: Sasha Meister <[email protected]> * Set Activation Checkpointing Defaults (#7404) * Set Activation Checkpointing Defaults Signed-off-by: Abhinav Khattar <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * check for None 
Signed-off-by: Abhinav Khattar <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --------- Signed-off-by: Abhinav Khattar <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Signed-off-by: Sasha Meister <[email protected]> * make loss mask default to false (#7407) Signed-off-by: eharper <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * Add dummy userbuffer config files (#7408) Signed-off-by: Sangkug Lym <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * add missing ubconf files (#7412) Signed-off-by: Abhinav Khattar <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * New tutorial on Speech Data Explorer (#7405) * Added Google Colab based tutorial on Speech Data Explorer Signed-off-by: George Zelenfroynd <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * Update ptl training ckpt conversion script to work with dist ckpt (#7416) * update ptl convert script Signed-off-by: eharper <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * don't break legacy Signed-off-by: eharper <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --------- Signed-off-by: eharper <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Signed-off-by: Sasha Meister <[email protected]> * Allow disabling sanity checking when num_sanity_val_steps=0 (#7413) * Allow disabling sanity checking when num_sanity_val_steps=0 Signed-off-by: Abhishree <[email protected]> * Update num_sanity_val_steps to be a multiple of num_microbatches Signed-off-by: Abhishree Thittenamane <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more 
information, see https://pre-commit.ci --------- Signed-off-by: Abhishree <[email protected]> Signed-off-by: Abhishree Thittenamane <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Signed-off-by: Sasha Meister <[email protected]> * Add comprehensive error messages (#7261) Signed-off-by: Anton Peganov <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * check NEMO_PATH (#7418) Signed-off-by: Nikolay Karpov <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * layer selection for ia3 (#7417) * layer selection for ia3 Signed-off-by: arendu <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --------- Signed-off-by: arendu <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Signed-off-by: Sasha Meister <[email protected]> * Fix missing pip package 'einops' (#7397) Signed-off-by: Robin Dong <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * Fix failure of pyaudio in Google Colab (#7396) Signed-off-by: Robin Dong <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * Update README.md: output_path --> output_manifest_filepath (#7442) Signed-off-by: Samuele Cornell <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * Add rope dynamic linear scaling (#7437) * Add dynamic linear scaling Signed-off-by: Cheng-Ping Hsieh <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix bug Signed-off-by: Cheng-Ping Hsieh <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix Signed-off-by: Cheng-Ping Hsieh <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci 
* Fix Signed-off-by: Cheng-Ping Hsieh <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix Signed-off-by: Cheng-Ping Hsieh <[email protected]> --------- Signed-off-by: Cheng-Ping Hsieh <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Yang Zhang <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * Fix None dataloader issue in PTL2.0 (#7455) * Fix None dataloader issue in PTL2.0 Signed-off-by: KunalDhawan <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * updating values of self._validation_dl and self._test_dl as well Signed-off-by: KunalDhawan <[email protected]> * updating values of self._validation_dl and self._test_dl as well Signed-off-by: KunalDhawan <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --------- Signed-off-by: KunalDhawan <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Signed-off-by: Sasha Meister <[email protected]> * [ASR] Confidence measure -> method renames (#7434) * measure -> method Signed-off-by: Aleksandr Laptev <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --------- Signed-off-by: Aleksandr Laptev <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Signed-off-by: Sasha Meister <[email protected]> * Add steps for document of getting dataset 'SF Bilingual Speech' (#7378) * Add steps for document of getting dataset 'SF Bilingual Speech' Signed-off-by: Robin Dong <[email protected]> * Update datasets.rst added a link from a tutorial demonstrating detailed data prep steps. 
Signed-off-by: Xuesong Yang <[email protected]> --------- Signed-off-by: Robin Dong <[email protected]> Signed-off-by: Xuesong Yang <[email protected]> Co-authored-by: Xuesong Yang <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * RNN-T confidence and alignment bugfix (#7381) * new frame_confidence and alignments lists are now always created after the while loop Signed-off-by: Aleksandr Laptev <[email protected]> * tests added Signed-off-by: Aleksandr Laptev <[email protected]> --------- Signed-off-by: Aleksandr Laptev <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * Fix resume from checkpoint in exp_manager (#7424) (#7426) Signed-off-by: Abhishree <[email protected]> Co-authored-by: Abhishree Thittenamane <[email protected]> Co-authored-by: Eric Harper <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * Fix checking of cuda/cpu device for inputs of Decoder (#7444) * Fix checking of cuda/cpu device for inputs of Decoder Signed-off-by: Robin Dong <[email protected]> * Update tacotron2.py Signed-off-by: Jason <[email protected]> --------- Signed-off-by: Robin Dong <[email protected]> Signed-off-by: Jason <[email protected]> Co-authored-by: Jason <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * Fix failure of ljspeech's get_data.py (#7430) * Fix failure of ljspeech's get_data.py Signed-off-by: Robin Dong <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --------- Signed-off-by: Robin Dong <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Signed-off-by: Sasha Meister <[email protected]> * [TTS] Fix audio codec type checks (#7373) * [TTS] Fix audio codec type checks Signed-off-by: Ryan <[email protected]> * [TTS] Fix audio codec tests Signed-off-by: Ryan <[email protected]> --------- Signed-off-by: Ryan <[email protected]> 
Signed-off-by: Sasha Meister <[email protected]> * [TTS] Add dataset to path of logged artifacts (#7462) * [TTS] Add dataset to path of logged artifacts Signed-off-by: Ryan <[email protected]> * [TTS] Revert axis name back to Audio Frames Signed-off-by: Ryan <[email protected]> --------- Signed-off-by: Ryan <[email protected]> Signed-off-by: Sasha Meister <[email protected]> * Fix sft dataset truncation (#7464) * Add fix Signed-off-by: Cheng-Ping Hsieh <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix Signed-off-by: Cheng-Ping Hsieh <[email protected]> --------- Signed-of…
1 parent 41b9d11 commit 01fc632

195 files changed (+28,782 −134 lines)


examples/multimodal/convert_ckpt_to_nemo.py (+1 −1)
@@ -36,7 +36,7 @@
 from nemo.collections.multimodal.models.text_to_image.imagen import MegatronImagen
 from nemo.collections.multimodal.models.text_to_image.instruct_pix2pix.ldm.ddpm_edit import MegatronLatentDiffusionEdit
 from nemo.collections.multimodal.models.text_to_image.stable_diffusion.ldm.ddpm import MegatronLatentDiffusion
-from nemo.collections.multimodal.models.vision_language_foundation.clip import MegatronCLIPModel
+from nemo.collections.multimodal.models.vision_language_foundation.clip.megatron_clip_models import MegatronCLIPModel
 from nemo.collections.nlp.parts.megatron_trainer_builder import MegatronTrainerBuilder
 from nemo.collections.nlp.parts.nlp_overrides import NLPSaveRestoreConnector
 from nemo.utils import AppState, logging
@@ -0,0 +1,36 @@
name: stable-diffusion-train

infer:
  unconditional_guidance_scale: 3
  num_images_per_prompt: 4
  hint_image_size: 512
  height: 512
  width: 512
  down_factor: 8
  inference_steps: 50
  sampler_type: 'DDIM'
  eta: 0
  output_type: 'pil'
  save_to_file: True
  out_path: 'controlnet'
  seed: 355
  prompts:
    - high quality picture of a house in oil painting style
  control:
    - /datasets/coco-stuff/house.png #images/val2017/000000001584.jpg
  # Depending on the input control: if the input control is already the conditioning image, null should be passed here.
  # If a reconstruction target is used as control, then a preprocessing function that turns it into a conditioning image needs to be specified.
  control_image_preprocess:

trainer:
  devices: 1
  num_nodes: 1
  accelerator: gpu
  precision: 16
  logger: False # logger provided by exp_manager

model:
  restore_from_path: /ckpts/controlnet/30k.nemo
  precision: ${trainer.precision}
  strength: 2.0
  guess_mode: False
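The config above references other config values with OmegaConf-style interpolation (e.g. `precision: ${trainer.precision}`), so the value is resolved from the `trainer` section at load time. The sketch below is a minimal stdlib illustration of those interpolation semantics only; NeMo actually resolves these with OmegaConf/Hydra, which additionally handle typing, defaults, and nested interpolations.

```python
import re

def resolve(cfg, node=None):
    """Resolve OmegaConf-style ${a.b} interpolations in a nested dict.

    Minimal illustration only: the real OmegaConf resolver also supports
    typing, custom resolvers, and interpolations nested inside strings.
    """
    if node is None:
        node = cfg
    for key, val in node.items():
        if isinstance(val, dict):
            resolve(cfg, val)
        elif isinstance(val, str):
            m = re.fullmatch(r"\$\{([\w.]+)\}", val)
            if m:
                # Walk the dotted path (e.g. "trainer.precision") from the root.
                target = cfg
                for part in m.group(1).split("."):
                    target = target[part]
                node[key] = target
    return cfg

cfg = {
    "trainer": {"precision": 16},
    "model": {"precision": "${trainer.precision}"},
}
resolve(cfg)
# cfg["model"]["precision"] now mirrors cfg["trainer"]["precision"]
```

This is why overriding `trainer.precision` on the command line also changes `model.precision` without editing the YAML.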
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,222 @@
1+
trainer:
2+
devices: 2
3+
num_nodes: 1
4+
accelerator: gpu
5+
precision: 16
6+
logger: False # logger provided by exp_manager
7+
enable_checkpointing: False
8+
use_distributed_sampler: True
9+
max_epochs: 3 # PTL default. In practice, max_steps will be reached first.
10+
max_steps: -1 # consumed_samples = global_step * micro_batch_size * data_parallel_size * accumulate_grad_batches
11+
log_every_n_steps: 10
12+
accumulate_grad_batches: 1 # do not modify, grad acc is automatic for training megatron models
13+
gradient_clip_val: 1.0
14+
benchmark: False
15+
enable_model_summary: True
16+
limit_val_batches: 0
17+
18+
19+
exp_manager:
20+
explicit_log_dir: null
21+
exp_dir: null
22+
name: controlnet
23+
create_wandb_logger: False
24+
wandb_logger_kwargs:
25+
project: stable-diffusion
26+
group: controlnet
27+
name: controlnet-v1.5
28+
resume: True
29+
create_checkpoint_callback: True
30+
create_tensorboard_logger: True
31+
checkpoint_callback_params:
32+
save_top_k: -1
33+
every_n_train_steps: 5000
34+
every_n_epochs: 0
35+
monitor: reduced_train_loss
36+
filename: 'controlnet--{reduced_train_loss:.2f}-{step}-{consumed_samples}'
37+
resume_if_exists: True
38+
resume_ignore_no_checkpoint: True
39+
resume_from_checkpoint: ${model.resume_from_checkpoint}
40+
ema:
41+
enable: False
42+
decay: 0.9999
43+
validate_original_weights: False
44+
every_n_steps: 1
45+
cpu_offload: False
46+
47+
48+
49+
50+
model:
51+
precision: ${trainer.precision}
52+
# specify micro_batch_size, global_batch_size, and model parallelism
53+
# gradient accumulation will be done automatically based on data_parallel_size
54+
micro_batch_size: 4 # limited by GPU memory
55+
global_batch_size: 8
56+
57+
linear_start: 0.00085
58+
linear_end: 0.0120
59+
num_timesteps_cond: 1
60+
log_every_t: 200
61+
timesteps: 1000
62+
first_stage_key: images
63+
cond_stage_key: captions
64+
control_key: hint
65+
image_size: 64
66+
channels: 4
67+
cond_stage_trainable: false
68+
conditioning_key: crossattn
69+
monitor: val/loss_simple_ema
70+
scale_factor: 0.18215
71+
use_ema: False
72+
scale_by_std: False
73+
ckpt_path:
74+
ignore_keys: [ ]
75+
parameterization: eps
76+
clip_denoised: True
77+
load_only_unet: False
78+
cosine_s: 8e-3
79+
given_betas:
80+
original_elbo_weight: 0
81+
v_posterior: 0
82+
l_simple_weight: 1
83+
use_positional_encodings: False
84+
learn_logvar: False
85+
logvar_init: 0
86+
beta_schedule: linear
87+
loss_type: l2
88+
learning_rate: 1.0e-04
89+
concat_mode: True
90+
cond_stage_forward:
91+
text_embedding_dropout_rate: 0.0
92+
fused_opt: True
93+
inductor: False
94+
inductor_cudagraphs: False
95+
capture_cudagraph_iters: -1 # -1 to disable
96+
channels_last: True
97+
only_mid_control: False
98+
sd_locked: True
99+
100+
  control_stage_config:
    _target_: nemo.collections.multimodal.models.controlnet.controlnet.ControlNet
    params:
      from_pretrained_unet: /ckpts/v1-5-pruned.ckpt
      from_NeMo: True
      image_size: 32 # unused
      in_channels: 4
      hint_channels: 3
      model_channels: 320
      attention_resolutions: [ 4, 2, 1 ]
      num_res_blocks: 2
      channel_mult: [ 1, 2, 4, 4 ]
      num_heads: 8
      use_spatial_transformer: True
      use_linear_in_transformer: False
      transformer_depth: 1
      context_dim: 768
      use_checkpoint: False
      legacy: False
      use_flash_attention: False

  unet_config:
    _target_: nemo.collections.multimodal.models.controlnet.controlnet.ControlledUnetModel
    from_pretrained: /ckpts/v1-5-pruned.ckpt
    from_NeMo: True
    image_size: 32 # unused
    in_channels: 4
    out_channels: 4
    model_channels: 320
    attention_resolutions:
      - 4
      - 2
      - 1
    num_res_blocks: 2
    channel_mult:
      - 1
      - 2
      - 4
      - 4
    num_heads: 8
    use_spatial_transformer: True
    transformer_depth: 1
    context_dim: 768
    use_checkpoint: False
    legacy: False
    use_flash_attention: False

  first_stage_config:
    _target_: nemo.collections.multimodal.models.stable_diffusion.ldm.autoencoder.AutoencoderKL
    from_pretrained: /ckpts/vae.bin
    embed_dim: 4
    monitor: val/rec_loss
    ddconfig:
      double_z: true
      z_channels: 4
      resolution: 256
      in_channels: 3
      out_ch: 3
      ch: 128
      ch_mult:
        - 1
        - 2
        - 4
        - 4
      num_res_blocks: 2
      attn_resolutions: []
      dropout: 0.0
    lossconfig:
      target: torch.nn.Identity

  cond_stage_config:
    _target_: nemo.collections.multimodal.modules.stable_diffusion.encoders.modules.FrozenCLIPEmbedder
    version: openai/clip-vit-large-patch14
    device: cuda
    max_length: 77

  data:
    num_workers: 16
    synthetic_data: False # dataset_path and local_root_path can be empty when using synthetic data
    synthetic_data_length: 10000
    train:
      dataset_path:
        #- /datasets/tarfiles/fill50k.pkl
        - /datasets/coco-stuff/coco-stuff-tarfiles/wdinfo-coco-stuff.pkl
      augmentations:
        resize_smallest_side: 512
        center_crop_h_w: 512, 512
        horizontal_flip: False
      filterings:

    webdataset:
      infinite_sampler: False
      local_root_path: /datasets/coco-stuff/coco-stuff-tarfiles

  optim:
    name: fused_adam
    lr: 2e-5
    weight_decay: 0.
    betas:
      - 0.9
      - 0.999
    sched:
      name: WarmupHoldPolicy
      warmup_steps: 0
      hold_steps: 10000000000000 # Very large value to hold the lr constant

  # Nsys profiling options
  nsys_profile:
    enabled: False
    start_step: 10 # Global batch to start profiling
    end_step: 10 # Global batch to end profiling
    ranks: [ 0 ] # Global rank IDs to profile
    gen_shape: False # Generate model and kernel details including input shapes

  image_logger:
    batch_frequency: 1000
    max_images: 4

  # miscellaneous
  seed: 1234
  resume_from_checkpoint: null # manually set the checkpoint file to load from
  apex_transformer_log_level: 30 # Python logging level displays logs with severity greater than or equal to this
  gradient_as_bucket_view: True # PyTorch DDP argument. Allocate gradients in a contiguous bucket to save memory (less fragmentation and buffer memory)
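The comment under `model:` notes that gradient accumulation is derived automatically from the batch-size settings: `global_batch_size` must equal `micro_batch_size * data_parallel_size * accumulation_steps`. A minimal sketch of that relationship (the helper name `grad_accum_steps` is hypothetical, not a NeMo API):

```python
def grad_accum_steps(global_batch_size: int, micro_batch_size: int,
                     data_parallel_size: int) -> int:
    """Derive the number of accumulated micro-batches per optimizer step."""
    per_step = micro_batch_size * data_parallel_size
    if global_batch_size % per_step != 0:
        raise ValueError(
            "global_batch_size must be divisible by "
            "micro_batch_size * data_parallel_size")
    return global_batch_size // per_step

# With the values in this config (micro_batch_size=4, global_batch_size=8)
# on a single GPU, two micro-batches are accumulated per optimizer step.
print(grad_accum_steps(8, 4, 1))
```

Raising the number of data-parallel ranks while keeping both batch sizes fixed reduces the accumulation steps proportionally.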

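For reference, `beta_schedule: linear` together with `linear_start`, `linear_end`, and `timesteps` defines the diffusion noise schedule. In Stable-Diffusion-style latent diffusion code the "linear" schedule is conventionally linear in the square root of beta; the sketch below assumes that convention and is not a verbatim NeMo function:

```python
def make_beta_schedule(linear_start: float, linear_end: float,
                       timesteps: int) -> list:
    """Betas interpolated linearly in sqrt-space, then squared
    (the CompVis/Stable Diffusion 'linear' convention)."""
    step = (linear_end ** 0.5 - linear_start ** 0.5) / (timesteps - 1)
    return [(linear_start ** 0.5 + i * step) ** 2 for i in range(timesteps)]

# Values from this config: linear_start=0.00085, linear_end=0.0120,
# timesteps=1000. The endpoints of the schedule match the config exactly.
betas = make_beta_schedule(0.00085, 0.0120, 1000)
print(len(betas), betas[0], betas[-1])
```

The endpoints recover `linear_start` and `linear_end` by construction, since the interpolation starts at `sqrt(linear_start)` and ends at `sqrt(linear_end)` before squaring.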