[Speech Examples] Add pytorch speech pretraining #13877
Merged: patrickvonplaten merged 28 commits into huggingface:master from patrickvonplaten:add_pytorch_speech_pretraining on Oct 11, 2021.
Commits (28, all by patrickvonplaten):

- 512a0ec adapt wav2vec2
- e9395bc Merge branch 'master' of https://github.com/huggingface/transformers …
- 94b9bf8 add example
- f2344dc add files
- 6404e83 adapt
- 3886e45 remove bogus file
- f9a82ea Apply suggestions from code review
- 4c8913f adapt files more
- eb3cf71 upload changes
- 5f4019e del old files
- 3a76384 up
- 0384b8e up
- deb949f up
- d1bf7f8 up
- 54fec73 up
- 5d02cf9 correct gradient checkpoitning
- dfbe20e add readme
- 8b6422d finish
- 0791247 finish
- 65f3038 up
- 3233a46 more fixes
- 2d8f1e1 up
- d700e6a up
- 66b9a4f add demo run to readme
- ffeda23 up
- 97e9bb9 up
- e302e41 up
- bdb5d35 up
File: examples/pytorch/speech-pretraining/README.md (new file)

<!---
Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Speech Recognition Pre-Training

## Wav2Vec2 Speech Pre-Training

The script [`run_wav2vec2_pretraining_no_trainer.py`](https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py) can be used to pre-train a [Wav2Vec2](https://huggingface.co/transformers/model_doc/wav2vec2.html?highlight=wav2vec2) model from scratch.

In this script, a Wav2Vec2 model is pre-trained on audio data alone using [Wav2Vec2's contrastive loss objective](https://arxiv.org/abs/2006.11477).
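
For intuition, the block below is a rough, self-contained sketch of that contrastive objective with `Wav2Vec2ForPreTraining`, closely following the model's documentation example. The checkpoint `facebook/wav2vec2-base`, the dummy audio, and the toy masking values are purely illustrative; they are not what the training script uses.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForPreTraining
from transformers.models.wav2vec2.modeling_wav2vec2 import (
    _compute_mask_indices,
    _sample_negative_indices,
)

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForPreTraining.from_pretrained("facebook/wav2vec2-base")
model = model.train()  # the contrastive loss is meant to be computed in train mode

# one second of dummy 16 kHz audio stands in for a real training example
raw_audio = torch.zeros(16_000).numpy()
input_values = feature_extractor(raw_audio, sampling_rate=16_000, return_tensors="pt").input_values

batch_size, raw_sequence_length = input_values.shape
sequence_length = int(model._get_feat_extract_output_lengths(raw_sequence_length))

# randomly mask some feature frames and sample "negative" (distractor) frames;
# the model is trained to pick the true quantized latent over the negatives
mask_time_indices = _compute_mask_indices(
    shape=(batch_size, sequence_length), mask_prob=0.2, mask_length=2
)
sampled_negative_indices = _sample_negative_indices(
    features_shape=(batch_size, sequence_length),
    num_negatives=model.config.num_negatives,
    mask_time_indices=mask_time_indices,
)
mask_time_indices = torch.tensor(mask_time_indices, dtype=torch.long)
sampled_negative_indices = torch.tensor(sampled_negative_indices, dtype=torch.long)

outputs = model(
    input_values,
    mask_time_indices=mask_time_indices,
    sampled_negative_indices=sampled_negative_indices,
)
print(outputs.loss)  # contrastive (+ codebook diversity) loss over the masked frames
```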

The following examples show how to pre-train a `"base"`-sized Wav2Vec2 model as well as a `"large"`-sized Wav2Vec2 model using [`accelerate`](https://github.com/huggingface/accelerate).

---
**NOTE 1**

Wav2Vec2's pre-training is known to be quite unstable.
It is advised to do a couple of test runs with a smaller dataset,
*e.g.* `--dataset_config_names clean clean`, `--dataset_split_names validation test`,
to find good hyper-parameters for `learning_rate`, `batch_size`, `num_warmup_steps`,
and the optimizer.
A good metric to observe during training is the gradient norm, which should ideally stay between 0.5 and 2.

---
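
If your setup does not already surface the gradient norm, a generic way to compute it (an illustration, not code taken from the script) is to aggregate the parameter gradients right after `loss.backward()`:

```python
import torch

def total_grad_norm(model: torch.nn.Module) -> float:
    """Global L2 norm over all parameter gradients; call right after loss.backward()."""
    squared_sum = 0.0
    for param in model.parameters():
        if param.grad is not None:
            squared_sum += param.grad.detach().norm(2).item() ** 2
    return squared_sum ** 0.5
```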

---
**NOTE 2**

When training a model on large datasets, it is recommended to first run the data preprocessing
on its own in **non-distributed** mode via `--preprocessing_only`, so that when the model is then
trained in **distributed** mode in a second step, the preprocessed data can easily be loaded on
each distributed device.

---
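
This two-step workflow relies on the on-disk cache of the `datasets` library. The sketch below only illustrates that mechanism; the stand-in `map` call is an assumption, not the script's actual preprocessing:

```python
from datasets import load_dataset

# First (non-distributed) run: the expensive preprocessing is computed once and written
# to the `datasets` cache. A later distributed run issuing the identical call is served
# from that cache on every device instead of recomputing it.
librispeech = load_dataset("librispeech_asr", "clean", split="validation")
librispeech = librispeech.map(
    lambda batch: {"text_len": [len(t) for t in batch["text"]]},  # stand-in preprocessing
    batched=True,
)
```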

### Demo

In this demo run we pre-train a `"base"`-sized Wav2Vec2 model only on the validation
and test data of [librispeech_asr](https://huggingface.co/datasets/librispeech_asr).

The demo is run on two Titan RTX GPUs (24 GB RAM each). In case you have less RAM available
per device, consider reducing `--per_device_train_batch_size` and/or `--max_duration_in_seconds`.

```bash
accelerate launch run_wav2vec2_pretraining_no_trainer.py \
	--dataset_name="librispeech_asr" \
	--dataset_config_names clean clean \
	--dataset_split_names validation test \
	--model_name_or_path="patrickvonplaten/wav2vec2-base-v2" \
	--output_dir="./wav2vec2-pretrained-demo" \
	--max_train_steps="20000" \
	--num_warmup_steps="32000" \
	--gradient_accumulation_steps="8" \
	--learning_rate="0.005" \
	--weight_decay="0.01" \
	--max_duration_in_seconds="20.0" \
	--min_duration_in_seconds="2.0" \
	--logging_steps="1" \
	--saving_steps="10000" \
	--per_device_train_batch_size="8" \
	--per_device_eval_batch_size="8" \
	--adam_beta1="0.9" \
	--adam_beta2="0.98" \
	--adam_epsilon="1e-06" \
	--gradient_checkpointing
```

The results of this run can be seen [here](https://wandb.ai/patrickvonplaten/wav2vec2-pretrained-demo/reports/Wav2Vec2-PreTraining-Demo-Run--VmlldzoxMDk3MjAw?accessToken=oa05s1y57lizo2ocxy3k01g6db1u4pt8m6ur2n8nl4cb0ug02ms2cw313kb8ruch).
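
Once the run has finished and written its final checkpoint to `--output_dir`, the pre-trained weights can be reloaded like any other `transformers` checkpoint. The snippet below is an illustrative sketch, assuming the demo's `./wav2vec2-pretrained-demo` directory exists locally:

```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2ForPreTraining

# Reload the pre-trained model from the demo's output directory ...
model = Wav2Vec2ForPreTraining.from_pretrained("./wav2vec2-pretrained-demo")

# ... or use the same directory as the starting point for speech recognition fine-tuning;
# the CTC head is newly initialized and still has to be fine-tuned on labeled audio.
asr_model = Wav2Vec2ForCTC.from_pretrained("./wav2vec2-pretrained-demo")
```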

### Base

TODO (currently running...)

### Large

To pre-train a `"large"`-sized Wav2Vec2 model, *e.g.* [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60),
on [librispeech_asr](https://huggingface.co/datasets/librispeech_asr), the following command can be run:

```bash
accelerate launch run_wav2vec2_pretraining_no_trainer.py \
	--dataset_name=librispeech_asr \
	--dataset_config_names clean clean other \
	--dataset_split_names train.100 train.360 train.500 \
	--output_dir=./test \
	--max_train_steps=200000 \
	--num_warmup_steps=32000 \
	--gradient_accumulation_steps=8 \
	--learning_rate=0.001 \
	--weight_decay=0.01 \
	--max_duration_in_seconds=20.0 \
	--min_duration_in_seconds=2.0 \
	--model_name_or_path=./ \
	--logging_steps=1 \
	--saving_steps=10000 \
	--per_device_train_batch_size=2 \
	--per_device_eval_batch_size=4 \
	--adam_beta1=0.9 \
	--adam_beta2=0.98 \
	--adam_epsilon=1e-06 \
	--gradient_checkpointing
```
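
Note that the command points `--model_name_or_path` at the current working directory. One possible way to prepare that directory (an assumption for illustration, not part of the example itself) is to seed it with the configuration and feature extractor of an existing `"large"` checkpoint; the weights themselves are then pre-trained from scratch by the script:

```python
from transformers import Wav2Vec2Config, Wav2Vec2FeatureExtractor

# Copy a "large"-style config and feature extractor into the local --model_name_or_path
# directory; no pre-trained weights are copied, so training starts from scratch.
Wav2Vec2Config.from_pretrained("facebook/wav2vec2-large-lv60").save_pretrained("./")
Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-lv60").save_pretrained("./")
```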

The experiment was run on 8 V100 GPUs (16 GB RAM each) for 7 days.
With the command above, the effective batch size is 8 GPUs x 2 samples per device x 8 gradient accumulation steps = 128.
In case you have more than 8 GPUs available for a higher effective `batch_size`,
it is recommended to increase the `learning_rate` to `0.005` for faster convergence.

The results of this run can be seen [here](https://wandb.ai/patrickvonplaten/pretraining-wav2vec2/reports/Wav2Vec2-Large--VmlldzoxMTAwODM4?accessToken=wm3qzcnldrwsa31tkvf2pdmilw3f63d4twtffs86ou016xjbyilh55uoi3mo1qzc) and the checkpoint pre-trained for 120,000 steps can be accessed [here](https://huggingface.co/patrickvonplaten/wav2vec2-large-repro-960h-libri-120k-steps).

File: examples/pytorch/speech-pretraining/requirements.txt (new file)

```
datasets >= 1.12.0
torch >= 1.5
torchaudio
accelerate >= 0.5.0
```