[Codec] Finite scalar quantizer #7886

Merged: 2 commits into NVIDIA:main on Nov 17, 2023
Conversation

@anteju (Collaborator) commented Nov 14, 2023

What does this PR do?

This PR adds a finite scalar quantizer (FSQ) as introduced in
Mentzer et al., Finite Scalar Quantization: VQ-VAE Made Simple (https://arxiv.org/abs/2309.15505v1).

Collection: TTS

Changelog

  • Added FiniteScalarQuantizer
  • Added unit tests

Usage

This can be used as a drop-in replacement for ResidualVectorQuantizer, for example with the following configuration:

model:
  ...
  commit_loss_scale: 0.0 # must be set to zero, since FSQ does not use a commitment loss
  ...
  vector_quantizer:
    _target_: nemo.collections.tts.modules.audio_codec_modules.FiniteScalarQuantizer
    num_levels: [8, 5, 5, 5]
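
For intuition, here is a minimal sketch of the per-dimension rounding that FSQ performs (a simplified illustration under one common convention, with hypothetical function and variable names; this is not the NeMo FiniteScalarQuantizer code):

import torch

def fsq_quantize(z: torch.Tensor, num_levels: list) -> torch.Tensor:
    levels = torch.tensor(num_levels, dtype=z.dtype, device=z.device)
    # squash each dimension into (0, L_d - 1), so rounding yields exactly L_d levels
    bounded = (torch.tanh(z) + 1) / 2 * (levels - 1)
    rounded = torch.round(bounded)
    # straight-through estimator: the forward pass uses the rounded values,
    # while gradients flow through `bounded`
    quantized = bounded + (rounded - bounded).detach()
    # rescale the codes to roughly [-1, 1] (one of several possible conventions)
    return quantized / (levels - 1) * 2 - 1

z = torch.randn(2, 4)              # (batch, dim); dim must equal len(num_levels)
q = fsq_quantize(z, [8, 5, 5, 5])  # implicit codebook size: 8 * 5 * 5 * 5 = 1000

With num_levels: [8, 5, 5, 5], the implicit codebook has 1000 entries and, unlike RVQ, no learned codebook parameters.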

For larger vectors, GroupFiniteScalarQuantizer can be used to repeat the quantization pattern several times, for example with the following configuration:

model:
  ...
  commit_loss_scale: 0.0 # must be set to zero, since FSQ does not use a commitment loss
  ...
  vector_quantizer:
    _target_: nemo.collections.tts.modules.audio_codec_modules.GroupFiniteScalarQuantizer
    num_groups: 3
    num_levels_per_group: [3, 4]

This corresponds to FSQ with num_levels: [3, 4, 3, 4, 3, 4].
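
As a quick illustration of what the group configuration expands to (plain Python with hypothetical names, not the NeMo code):

num_groups = 3
num_levels_per_group = [3, 4]

# each group applies the same per-dimension levels to its slice of the vector
num_levels = num_levels_per_group * num_groups   # [3, 4, 3, 4, 3, 4]

codebook_size = 1
for n in num_levels:
    codebook_size *= n
print(num_levels, codebook_size)                 # [3, 4, 3, 4, 3, 4] 1728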

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g., Numba, Pynini, Apex, etc.)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs in various areas.

Additional Information

  • Related to # (issue)

@anteju marked this pull request as ready for review November 14, 2023 02:01
github-actions bot added the TTS label Nov 14, 2023
@anteju (Collaborator, Author) commented Nov 14, 2023

jenkins

@anteju force-pushed the pr/fsq branch 2 times, most recently from 003735a to 78df06d on November 15, 2023 00:12
Comment on lines +83 to +88
if len(vq_output_types) == 3 and vq_output_types[-1] == 'commit_loss':
self.vector_quantizer_has_commit_loss = True
logging.info('Vector quantizer supports commit loss.')
else:
self.vector_quantizer_has_commit_loss = False
logging.info('Vector quantizer does not support commit loss.')
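
For context, a hypothetical sketch of how such a flag might be consumed in the training loss (illustrative names; this is not part of the diff):

# inside a hypothetical training step of the codec model
vq_output = self.vector_quantizer(inputs=encoded, input_len=encoded_len)
if self.vector_quantizer_has_commit_loss:
    dequantized, indices, commit_loss = vq_output
    loss = recon_loss + self.commit_loss_scale * commit_loss
else:
    dequantized, indices = vq_output
    loss = recon_loss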
@rlangman (Collaborator) commented Nov 14, 2023
Would it be simpler if we required the VQ to always return a commit loss, and modified FSQ to return a commit loss of 0?

@anteju (Collaborator, Author)

We could do that as well; it would let us skip the ~7 lines above.
A disadvantage is that commit_loss_scale could still be set even though the quantizer always returns 0.0, which may be confusing.

I implemented something like that earlier:
an abstract base class VectorQuantizer with forward (including commit_loss), encode, and decode methods, and the same typecheck types.
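
For illustration, a minimal sketch of such a base class (hypothetical signatures, not the merged code):

from abc import ABC, abstractmethod
from typing import Tuple

import torch
import torch.nn as nn

class VectorQuantizerBase(nn.Module, ABC):
    """Hypothetical shared interface for RVQ- and FSQ-style quantizers."""

    @abstractmethod
    def forward(
        self, inputs: torch.Tensor, input_len: torch.Tensor
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """Return (dequantized, indices, commit_loss); FSQ would return a zero commit_loss."""

    @abstractmethod
    def encode(self, inputs: torch.Tensor, input_len: torch.Tensor) -> torch.Tensor:
        """Return codebook indices for the inputs."""

    @abstractmethod
    def decode(self, indices: torch.Tensor, input_len: torch.Tensor) -> torch.Tensor:
        """Reconstruct the dequantized output from the indices."""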

@rlangman (Collaborator)

If it is not much extra work, I think having the abstract base class would be better. It does not matter much now with only 2 implementations, but should help long term as we inevitably add more types of quantizers.

@anteju (Collaborator, Author)

Let's merge this PR, and we can factor it out into an abstract class in a follow-up.
Sounds good?

@rlangman (Collaborator)

Sure, sounds good.

Two review comments on tests/collections/tts/modules/test_audio_codec_modules.py were marked outdated and resolved.
Commits:
  • Finite scalar quantizer (Signed-off-by: Ante Jukić <[email protected]>)
  • Updated test (Signed-off-by: Ante Jukić <[email protected]>)
@anteju (Collaborator, Author) commented Nov 16, 2023

jenkins

@rlangman (Collaborator) left a review comment:

LGTM


@nithinraok (Collaborator) left a review comment:
LGTM

@anteju merged commit 76e5bdf into NVIDIA:main on Nov 17, 2023
11 checks passed
erhoo82 pushed a commit to erhoo82/NeMo that referenced this pull request Dec 2, 2023

[Codec] Finite scalar quantizer (NVIDIA#7886)

* Finite scalar quantizer

Signed-off-by: Ante Jukić <[email protected]>

* Updated test

Signed-off-by: Ante Jukić <[email protected]>

---------

Signed-off-by: Ante Jukić <[email protected]>
erhoo82 pushed a commit to erhoo82/NeMo that referenced this pull request Dec 2, 2023
pzelasko pushed a commit to pzelasko/NeMo that referenced this pull request Jan 3, 2024
* Finite scalar quantizer

Signed-off-by: Ante Jukić <[email protected]>

* Updated test

Signed-off-by: Ante Jukić <[email protected]>

---------

Signed-off-by: Ante Jukić <[email protected]>
Signed-off-by: Piotr Żelasko <[email protected]>
rohitrango pushed a commit to rohitrango/NeMo that referenced this pull request Jun 25, 2024
* Finite scalar quantizer

Signed-off-by: Ante Jukić <[email protected]>

* Updated test

Signed-off-by: Ante Jukić <[email protected]>

---------

Signed-off-by: Ante Jukić <[email protected]>
Labels: TTS
Projects: None yet
Linked issues: None yet
3 participants