Support BatchNorm in Hubert pos_conv_emb as in fairseq #34389
Merged: ylacombe merged 27 commits into huggingface:main from gallilmaimon:add_hubert_conv_emb_batchnorm_support on Dec 10, 2024.
Commits (27, all changes shown)
023a033 Support BatchNorm in Hubert pos_conv_emb as in fairseq (gallilmaimon)
fb32b55 Correct the new defaults (#34377) (Cyrilvallez)
c4ab8a5 [auto. ping] Avoid sending empty info + add more team members (#34383) (ydshieh)
b2a7b11 Fix glm (#34388) (Cyrilvallez)
5289130 Use non nested images and batched text Idefics2/3 (#34222) (yonigozlan)
86468ad Fix onnx non-expotable inplace aten op (#34376) (IlyasMoutawwakil)
cfe1e14 Fix right padding in LLaVA models (#34305) (zucchini-nlp)
337621a no filter (#34391) (ydshieh)
3ae703b SynthID: better example (#34372) (gante)
a0ccf20 Tests: upgrade `test_eager_matches_sdpa_generate` (#34386) (gante)
3c3e153 Fix bnb training test failure (#34414) (matthewdouglas)
2dded53 Avoid check expected exception when it is on CUDA (#34408) (ydshieh)
d8edfcb Fix typos in agents_advanced.md (#34405) (rudydel)
3398913 [docs] Cache implementations (#34325) (stevhliu)
75f0689 [run-slow] hubert (gallilmaimon)
a3042a0 Support BatchNorm in Hubert pos_conv_emb as in fairseq (gallilmaimon)
a0a2731 Support BatchNorm in Hubert pos_conv_emb as in fairseq (gallilmaimon)
987d521 Merge branch 'huggingface:main' into add_hubert_conv_emb_batchnorm_su… (gallilmaimon)
ce40909 [run-slow] hubert (gallilmaimon)
391ea79 Support BatchNorm in Hubert pos_conv_emb as in fairseq (gallilmaimon)
7bbc7b4 [run-slow] hubert (gallilmaimon)
3e7f77e Support BatchNorm in Hubert pos_conv_emb as in fairseq (gallilmaimon)
2ca473f Support BatchNorm in Hubert pos_conv_emb as in fairseq (gallilmaimon)
9f167a2 [run-slow] hubert (gallilmaimon)
af1d65e Merge branch 'main' into add_hubert_conv_emb_batchnorm_support (ylacombe)
eaed17f Merge branch 'add_hubert_conv_emb_batchnorm_support' of https://githu… (gallilmaimon)
61d8ad0 [run-slow] hubert (gallilmaimon)
```diff
@@ -943,3 +943,40 @@ def test_inference_distilhubert(self):
         self.assertTrue(torch.allclose(outputs[:, :4, :4], expected_outputs_first, atol=5e-3))
         self.assertTrue(torch.allclose(outputs[:, -4:, -4:], expected_outputs_last, atol=5e-3))
         self.assertTrue(abs(outputs.sum() - expected_output_sum) < 0.1)
+
+    def test_inference_hubert_25hz(self):
+        model = HubertModel.from_pretrained("slprl/mhubert-base-25hz").to(torch_device)
+
+        sample = self._load_datasamples(1)
+        input_speech = torch.tensor(sample[0], dtype=torch.float, device=torch_device).unsqueeze(0)
+
+        with torch.no_grad():
+            outputs = model(input_speech, output_hidden_states=True).hidden_states[11]
+
+        # expected outputs taken from the original textlesslib implementation by:
+        # model = SpeechEncoder.by_name(dense_model_name='mhubert-base-25hz', quantizer_model_name='kmeans',
+        #                               vocab_size=500, deduplicate=False, need_f0=False)
+        # model(wav)['dense']
+        expected_outputs_first = torch.tensor(
+            [
+                [0.0267, 0.1776, -0.1706, -0.4559],
+                [-0.2430, -0.2943, -0.1864, -0.1187],
+                [-0.1812, -0.4239, -0.1916, -0.0858],
+                [-0.1495, -0.4758, -0.4036, 0.0302],
+            ],
+            device=torch_device,
+        )
+        expected_outputs_last = torch.tensor(
+            [
+                [0.3366, -0.2734, -0.1415, -0.3055],
+                [0.2329, -0.3580, -0.1421, -0.3197],
+                [0.1631, -0.4301, -0.1965, -0.2956],
+                [0.3342, -0.2185, -0.2253, -0.2363],
+            ],
+            device=torch_device,
+        )
+        expected_output_sum = 1681.7603
+
+        self.assertTrue(torch.allclose(outputs[:, :4, :4], expected_outputs_first, atol=5e-3))
+        self.assertTrue(torch.allclose(outputs[:, -4:, -4:], expected_outputs_last, atol=5e-3))
+        self.assertTrue(abs(outputs.sum() - expected_output_sum) < 0.1)
```

Collaborator comment on the `from_pretrained` line: would be nice to open a PR to the original repo and use a PR branch revision in the meantime!
Review discussion

Reviewer: Out of curiosity, why do we specify "(for bf16 models)"?

PR author: To be honest, I just copied this from the fairseq definition: https://github.com/facebookresearch/fairseq/blob/ecbf110e1eb43861214b05fa001eff584954f65a/fairseq/models/hubert/hubert.py#L197. I can remove it if you prefer.

Reviewer: Let's remove it then.
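For context, the change this PR brings over mirrors fairseq's `conv_pos_batch_norm` option for HuBERT: instead of applying weight normalization to the positional convolution's weights, a BatchNorm is applied to the activations before the convolution. Below is a minimal, scaled-down sketch of the two variants in plain PyTorch; the class name, argument names, and dimensions are illustrative (HuBERT uses hidden size 768, kernel 128, 16 groups), not the actual transformers implementation.

```python
import torch
import torch.nn as nn


class PosConvEmbedding(nn.Module):
    """Toy positional convolutional embedding with two normalization variants.

    batch_norm=False: weight-normalized conv (the classic wav2vec2/HuBERT setup).
    batch_norm=True:  BatchNorm1d on the inputs before an un-normalized conv,
                      as in fairseq's conv_pos_batch_norm option.
    """

    def __init__(self, hidden=64, kernel=16, groups=4, batch_norm=False):
        super().__init__()
        self.conv = nn.Conv1d(hidden, hidden, kernel_size=kernel,
                              padding=kernel // 2, groups=groups)
        if batch_norm:
            self.batch_norm = nn.BatchNorm1d(hidden)
        else:
            self.batch_norm = None
            # normalize the conv weights instead of the activations
            self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2)
        self.activation = nn.GELU()

    def forward(self, hidden_states):
        # (batch, time, hidden) -> (batch, hidden, time) for Conv1d
        x = hidden_states.transpose(1, 2)
        if self.batch_norm is not None:
            x = self.batch_norm(x)
        x = self.conv(x)
        # even kernel + symmetric padding produces one extra frame; trim it
        x = x[:, :, :-1]
        x = self.activation(x)
        return x.transpose(1, 2)


x = torch.randn(2, 50, 64)
emb = PosConvEmbedding(batch_norm=True)
print(emb(x).shape)  # torch.Size([2, 50, 64])
```

Either variant keeps the sequence length unchanged, so it can be added residually to the transformer inputs; the PR's point is that checkpoints trained with fairseq's BatchNorm variant (such as mhubert-base-25hz) cannot be loaded faithfully without it.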