G2P docs (#4841)
* g2p docs added

Signed-off-by: ekmb <[email protected]>

* fix references

Signed-off-by: ekmb <[email protected]>

* address review feedback

Signed-off-by: ekmb <[email protected]>

Signed-off-by: ekmb <[email protected]>
ekmb authored and XuesongYang committed Sep 12, 2022
1 parent be60005 commit ae686e1
Showing 11 changed files with 296 additions and 3 deletions.
2 changes: 1 addition & 1 deletion docs/source/asr/datasets.rst
@@ -171,7 +171,7 @@ The audio files can be of any format supported by `Pydub <https://github.com/jia
WAV files as they are the default and have been most thoroughly tested.

There should be one manifest file per dataset that will be passed in, therefore, if the user wants separate training and validation
datasets, they should also have separate manifests. Otherwise, thay will be loading validation data with their training data and vice
datasets, they should also have separate manifests. Otherwise, they will be loading validation data with their training data and vice
versa.

Each line of the manifest should be in the following format:
1 change: 1 addition & 0 deletions docs/source/conf.py
@@ -120,6 +120,7 @@
'nlp/text_normalization/tn_itn_all.bib',
'tools/tools_all.bib',
'tts_all.bib',
'text_processing/text_processing_all.bib',
'core/adapters/adapter_bib.bib',
]

9 changes: 9 additions & 0 deletions docs/source/index.rst
@@ -44,6 +44,7 @@ NVIDIA NeMo User Guide
nlp/machine_translation/machine_translation
nlp/text_normalization/intro
nlp/api
nlp/models


.. toctree::
@@ -60,6 +61,14 @@ NVIDIA NeMo User Guide
:caption: Common
:name: Common

text_processing/intro

.. toctree::
:maxdepth: 2
:caption: Text Processing
:name: Text Processing

text_processing/g2p/g2p
common/intro


209 changes: 209 additions & 0 deletions docs/source/text_processing/g2p/g2p.rst
@@ -0,0 +1,209 @@
.. _g2p:

Grapheme-to-Phoneme Models
==========================

Grapheme-to-phoneme conversion (G2P) is the task of transducing graphemes (i.e., orthographic symbols) to phonemes (i.e., units of the sound system of a language).
For example, using the `International Phonetic Alphabet (IPA) <https://en.wikipedia.org/wiki/International_Phonetic_Alphabet>`__: ``"Swifts, flushed from chimneys …" → "ˈswɪfts, ˈfɫəʃt ˈfɹəm ˈtʃɪmniz …"``.

Modern text-to-speech (TTS) models can learn pronunciations from raw text input and its corresponding audio data,
but by relying on grapheme input during training, such models fail to provide a reliable way of correcting wrong pronunciations. As a result, many TTS systems use phonetic input
during training to directly access and correct pronunciations at inference time. G2P systems allow users to enforce the desired pronunciation by providing a phonetic transcript of the input.

G2P models convert out-of-vocabulary (OOV) words, e.g., proper names and loanwords, as well as heteronyms into their phonetic form to improve the quality of synthesized speech.

*Heteronyms* are words that share the same spelling but have different pronunciations, e.g., “read” in “I will read the book.” vs. “She read her project last week.” A single model that can handle OOV words and heteronyms and replace dictionary lookups can significantly simplify the pipeline and improve the quality of synthesized speech.

We support the following G2P models:

* **ByT5 G2P** - a text-to-text model based on the ByT5 :cite:`g2p--xue2021byt5` neural network; its use for G2P was originally proposed in :cite:`g2p--vrezavckova2021t5g2p` and :cite:`g2p--zhu2022byt5`.

* **G2P-Conformer** CTC model - uses a Conformer encoder :cite:`g2p--ggulati2020conformer` followed by a linear decoder; the model is trained with CTC loss. The G2P-Conformer model has about 20 times fewer parameters than the ByT5 model and, being non-autoregressive, is faster during inference.

The models can be trained using words or sentences as input.
If trained with sentence-level input, the models can handle out-of-vocabulary (OOV) and heteronyms along with unambiguous words in a single pass.
See :ref:`Sentence-level Dataset Preparation Pipeline <sentence_level_dataset_pipeline>` for details on how to label data for G2P model training.

Additionally, we support a purpose-built BERT-based classification model for heteronym disambiguation; see :ref:`this section <bert_heteronym_cl>` for details.

Model Training, Evaluation and Inference
----------------------------------------

This section covers both the ByT5 and G2P-Conformer models.

The models take input data in `.json` manifest format, and there should be separate training and validation manifests.
Each line of the manifest should be in the following format:

.. code::

    {"text_graphemes": "Swifts, flushed from chimneys.", "text": "ˈswɪfts, ˈfɫəʃt ˈfɹəm ˈtʃɪmniz."}

Manifest fields:

* ``text`` - name of the field in manifest_filepath for ground truth phonemes

* ``text_graphemes`` - name of the field in manifest_filepath for input grapheme text

The models can handle input with and without punctuation marks.
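
The following is a minimal sketch (not part of NeMo) of how such a manifest could be assembled; the file name and the second example pronunciation are hypothetical:

.. code-block:: python

    import json

    # Hypothetical grapheme/phoneme pairs; real data would come from a lexicon
    # or a phoneme-labeled corpus.
    pairs = [
        ("Swifts, flushed from chimneys.", "ˈswɪfts, ˈfɫəʃt ˈfɹəm ˈtʃɪmniz."),
        ("A second, made-up example.", "ə ˈsɛkənd, ˈmeɪd ˈəp ɪɡˈzæmpəɫ."),
    ]

    # One JSON object per line, with the two fields described above.
    with open("train_manifest.json", "w", encoding="utf-8") as f:
        for graphemes, phonemes in pairs:
            entry = {"text_graphemes": graphemes, "text": phonemes}
            f.write(json.dumps(entry, ensure_ascii=False) + "\n")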

To train a ByT5 G2P model and evaluate it at the end of training, run:

.. code::

    python examples/text_processing/g2p/g2p_train_and_evaluate.py \
        # (Optional: --config-path=<Path to dir of configs> --config-name=<name of config without .yaml>) \
        model.train_ds.manifest_filepath="<Path to manifest file>" \
        model.validation_ds.manifest_filepath="<Path to manifest file>" \
        model.test_ds.manifest_filepath="<Path to manifest file>" \
        trainer.devices=1 \
        do_training=True \
        do_testing=True

Example of the config file: ``NeMo/examples/text_processing/g2p/conf/t5_g2p.yaml``.


To train a G2P-Conformer model and evaluate it at the end of training, run:

.. code::

    python examples/text_processing/g2p/g2p_train_and_evaluate.py \
        # (Optional: --config-path=<Path to dir of configs> --config-name=<name of config without .yaml>) \
        model.train_ds.manifest_filepath="<Path to manifest file>" \
        model.validation_ds.manifest_filepath="<Path to manifest file>" \
        model.test_ds.manifest_filepath="<Path to manifest file>" \
        model.tokenizer.dir=<Path to pretrained tokenizer> \
        model.tokenizer_grapheme.do_lower=False \
        model.tokenizer_grapheme.add_punctuation=True \
        trainer.devices=1 \
        do_training=True \
        do_testing=True

Example of the config file: ``NeMo/examples/text_processing/g2p/conf/g2p_conformer_ctc.yaml``.


To evaluate a pretrained G2P model, run:

.. code::

    python examples/text_processing/g2p/g2p_train_and_evaluate.py \
        # (Optional: --config-path=<Path to dir of configs> --config-name=<name of config without .yaml>) \
        pretrained_model="<Path to .nemo file or pretrained model name from list_available_models()>" \
        model.test_ds.manifest_filepath="<Path to manifest file>" \
        trainer.devices=1 \
        do_training=False \
        do_testing=True

To run inference with a pretrained G2P model, run:

.. code-block::

    python g2p_inference.py \
        pretrained_model="<Path to .nemo file or pretrained model name for G2PModel from list_available_models()>" \
        manifest_filepath="<Path to .json manifest>" \
        output_file="<Path to .json manifest to save prediction>" \
        batch_size=32 \
        num_workers=4 \
        pred_field="pred_text"

The model's predictions will be saved in the `pred_field` field of the `output_file`.
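
As a quick way to inspect the results, the output manifest can be read line by line; this is an illustrative sketch with assumed file and field names (``predictions.json`` and the ``pred_text`` field shown above):

.. code-block:: python

    import json

    # Each line of the output manifest keeps the input fields and adds the
    # predicted phonemes under the configured `pred_field` (here "pred_text").
    with open("predictions.json", encoding="utf-8") as f:
        for line in f:
            entry = json.loads(line)
            print(entry["text_graphemes"], "->", entry.get("pred_text"))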

.. _sentence_level_dataset_pipeline:

Sentence-level Dataset Preparation Pipeline
-------------------------------------------

Below is an overview of the data labeling pipeline for sentence-level G2P model training:

.. image:: images/data_labeling_pipeline.png
:align: center
:alt: Data labeling pipeline for sentence-level G2P model training
:scale: 70%

Here we describe the automatic phoneme-labeling process for generating augmented data. The figure above shows the phoneme-labeling steps used to prepare data for sentence-level G2P model training. We first convert known unambiguous words to their phonetic pronunciations with dictionary lookups, e.g., using the CMU dictionary.
Next, we automatically label heteronyms using a RAD-TTS Aligner :cite:`g2p--badlani2022one`. More details on how to disambiguate heteronyms with a pretrained Aligner model can be found in `NeMo/tutorials/tts/Aligner_Inference_Examples.ipynb <https://github.com/NVIDIA/NeMo/blob/stable/tutorials/tts/Aligner_Inference_Examples.ipynb>`__, which can also be run in `Google Colab <https://colab.research.google.com/github/NVIDIA/NeMo/blob/stable/tutorials/tts/Aligner_Inference_Examples.ipynb>`_.
Finally, we mask out OOV words with a special masking token, “<unk>” in the figure (note that we use the ``model.tokenizer_grapheme.unk_token="҂"`` symbol during G2P model training).
Using this unknown token forces the G2P model to produce the same masking token as the phonetic representation during training. During inference, the model generates phoneme predictions for OOV words without emitting the masking token, as long as this token is not included in the grapheme input.
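
The snippet below is a conceptual illustration of this labeling idea, not the NeMo implementation; the toy lexicon and heteronym list are made up for the example:

.. code-block:: python

    UNK_TOKEN = "҂"  # value of model.tokenizer_grapheme.unk_token used during training

    # Toy dictionary and heteronym list, for illustration only.
    lexicon = {"swifts": "ˈswɪfts", "from": "ˈfɹəm", "chimneys": "ˈtʃɪmniz"}
    heteronyms = {"read", "bass", "bow"}

    def label_word(word: str) -> str:
        key = word.lower().strip(".,!?")
        if key in heteronyms:
            return word          # left as graphemes; disambiguated with the Aligner
        if key in lexicon:
            return lexicon[key]  # known unambiguous word: dictionary lookup
        return UNK_TOKEN         # OOV word is masked out

    print(" ".join(label_word(w) for w in "Swifts zoomed from chimneys".split()))
    # -> ˈswɪfts ҂ ˈfɹəm ˈtʃɪmniz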



.. _bert_heteronym_cl:

Purpose-built BERT-based classification model for heteronym disambiguation
--------------------------------------------------------------------------

The HeteronymClassificationModel is a BERT-based :cite:`g2p--ddevlin2018bert` token classification model that can handle multiple heteronyms at once. The model takes a sentence as input and, for every heteronym word, selects one of its possible forms.
We mask out irrelevant forms so that the model's predictions for non-ambiguous words are disregarded. For example, given the input “The Poems are simple to read and easy to comprehend.”, the model scores the possible {READ_PRESENT, READ_PAST} options for the word “read”.
Possible heteronym forms are extracted from the WikipediaHomographData :cite:`g2p--gorman2018improving`.
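
The following toy sketch illustrates the masking idea; it is not the NeMo code, and the label set and scores are made up:

.. code-block:: python

    import numpy as np

    # Hypothetical wordid label set and per-token scores from the classifier.
    wordids = ["read_present", "read_past", "diffuse_vrb", "diffuse_adj"]
    logits = np.array([1.2, 0.3, -0.5, 2.0])

    def predict_wordid(heteronym: str) -> str:
        # Keep only the labels that belong to the target heteronym; mask the rest
        # so predictions for irrelevant forms are disregarded.
        mask = np.array([wid.startswith(heteronym + "_") for wid in wordids])
        masked_logits = np.where(mask, logits, -np.inf)
        return wordids[int(np.argmax(masked_logits))]

    print(predict_wordid("read"))     # -> read_present
    print(predict_wordid("diffuse"))  # -> diffuse_adj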

The model expects input in `.json` manifest format, where each line contains at least the following fields (a small validation sketch follows the field descriptions below):

.. code::

    {"text_graphemes": "Oxygen is less able to diffuse into the blood, leading to hypoxia.", "start_end": [23, 30], "homograph_span": "diffuse", "word_id": "diffuse_vrb"}

Manifest fields:

* `text_graphemes` - input sentence

* `start_end` - beginning and end of the heteronym span in the input sentence

* `homograph_span` - heteronym word in the sentence

* `word_id` - heteronym label, e.g., word `diffuse` has the following possible labels: `diffuse_vrb` and `diffuse_adj`. See `https://github.com/google-research-datasets/WikipediaHomographData/blob/master/data/wordids.tsv <https://github.com/google-research-datasets/WikipediaHomographData/blob/master/data/wordids.tsv>`__ for more details.
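
The sketch below (assumed field semantics and file name) checks that the ``start_end`` character span really covers the ``homograph_span`` substring of ``text_graphemes``:

.. code-block:: python

    import json

    with open("train.json", encoding="utf-8") as f:
        for line in f:
            entry = json.loads(line)
            start, end = entry["start_end"]
            # e.g., "Oxygen is less able to diffuse into ..."[23:30] == "diffuse"
            assert entry["text_graphemes"][start:end] == entry["homograph_span"], entry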

To convert the WikipediaHomographData to `.json` format suitable for the HeteronymClassificationModel training, run:

.. code-block::

    # WikipediaHomographData can be downloaded from https://github.com/google-research-datasets/WikipediaHomographData
    python NeMo/scripts/dataset_processing/g2p/export_wikihomograph_data_to_manifest.py \
        --data_folder=<Path to WikipediaHomographData>/WikipediaHomographData-master/data/eval/ \
        --output=eval.json
    python NeMo/scripts/dataset_processing/g2p/export_wikihomograph_data_to_manifest.py \
        --data_folder=<Path to WikipediaHomographData>/WikipediaHomographData-master/data/train/ \
        --output=train.json

To train and evaluate the model, run:

.. code-block::

    python heteronym_classification_train_and_evaluate.py \
        train_manifest="<Path to manifest file>" \
        validation_manifest="<Path to manifest file>" \
        model.encoder.pretrained="<Path to .nemo file or pretrained model name from list_available_models()>" \
        model.wordids="<Path to wordids.tsv file, similar to https://github.com/google-research-datasets/WikipediaHomographData/blob/master/data/wordids.tsv>" \
        do_training=True \
        do_testing=True

To run inference with a pretrained HeteronymClassificationModel, run:

.. code-block::

    python heteronym_classification_inference.py \
        manifest="<Path to .json manifest>" \
        pretrained_model="<Path to .nemo file or pretrained model name from list_available_models()>" \
        output_file="<Path to .json manifest to save prediction>"

Note that if the input manifest contains the target "word_id", evaluation will also be performed. During inference, the model predicts the heteronym `word_id` and saves predictions in the `"pred_text"` field of the `output_file`:

.. code::

    {"text_graphemes": "Oxygen is less able to diffuse into the blood, leading to hypoxia.", "pred_text": "diffuse_vrb", "start_end": [23, 30], "homograph_span": "diffuse", "word_id": "diffuse_vrb"}

Requirements
------------

G2P requires the NeMo NLP and ASR collections to be installed. See the `installation instructions <https://github.com/NVIDIA/NeMo/blob/main/docs/source/starthere/intro.rst#installation>`__ for more details.


References
----------

.. bibliography:: ../text_processing_all.bib
:style: plain
:labelprefix: g2p-
:keyprefix: g2p--
Binary file added: docs/source/text_processing/g2p/images/data_labeling_pipeline.png (the data labeling pipeline diagram referenced above; not rendered in the diff view).
12 changes: 12 additions & 0 deletions docs/source/text_processing/intro.rst
@@ -0,0 +1,12 @@
NeMo Text Processing
====================

NeMo provides a set of models for processing the text input and/or output of Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) models: \
`https://github.com/NVIDIA/NeMo/tree/main/nemo_text_processing <https://github.com/NVIDIA/NeMo/tree/main/nemo_text_processing>`__ .

.. toctree::
:maxdepth: 1

g2p


53 changes: 53 additions & 0 deletions docs/source/text_processing/text_processing_all.bib
@@ -0,0 +1,53 @@
@article{xue2021byt5,
title={ByT5: Towards a token-free future with pre-trained byte-to-byte models},
author={Xue, Linting and Barua, Aditya and Constant, Noah and Al-Rfou, Rami and Narang, Sharan and Kale, Mihir and Roberts, Adam and Raffel, Colin},
journal={arXiv preprint arXiv:2105.13626},
year={2021}
}

@article{vrezavckova2021t5g2p,
title={T5g2p: Using text-to-text transfer transformer for grapheme-to-phoneme conversion},
author={{\v{R}}ez{\'a}{\v{c}}kov{\'a}, Mark{\'e}ta and {\v{S}}vec, Jan and Tihelka, Daniel},
year={2021},
journal={International Speech Communication Association}
}

@article{zhu2022byt5,
title={ByT5 model for massively multilingual grapheme-to-phoneme conversion},
author={Zhu, Jian and Zhang, Cong and Jurgens, David},
journal={arXiv preprint arXiv:2204.03067},
year={2022}
}

@article{ggulati2020conformer,
title={Conformer: Convolution-augmented transformer for speech recognition},
author={Gulati, Anmol and Qin, James and Chiu, Chung-Cheng and Parmar, Niki and Zhang, Yu and Yu, Jiahui and Han, Wei and Wang, Shibo and Zhang, Zhengdong and Wu, Yonghui and others},
journal={arXiv preprint arXiv:2005.08100},
year={2020}
}

@article{ddevlin2018bert,
title={Bert: Pre-training of deep bidirectional transformers for language understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}

@inproceedings{gorman2018improving,
title={Improving homograph disambiguation with supervised machine learning},
author={Gorman, Kyle and Mazovetskiy, Gleb and Nikolaev, Vitaly},
booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
year={2018}
}


@inproceedings{badlani2022one,
title={One TTS alignment to rule them all},
author={Badlani, Rohan and {\L}a{\'n}cucki, Adrian and Shih, Kevin J and Valle, Rafael and Ping, Wei and Catanzaro, Bryan},
booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={6092--6096},
year={2022},
organization={IEEE}
}


4 changes: 2 additions & 2 deletions examples/text_processing/g2p/conf/g2p_conformer_ctc.yaml
@@ -27,7 +27,7 @@ model:
feat_in: ${model.embedding.d_model}
feat_out: -1 # you may set it if you need different output size other than the default d_model
n_layers: 16
d_model: 256
d_model: 176

# Sub-sampling params
subsampling: null # vggnet or striding, vggnet may give better results but needs more memory
@@ -39,7 +39,7 @@ model:

# Multi-headed Attention Module's params
self_attention_model: rel_pos # rel_pos or abs_pos
n_heads: 8 # may need to be lower for smaller d_models
n_heads: 4 # may need to be lower for smaller d_models
# [left, right] specifies the number of steps to be seen from left and right of each step in self-attention
att_context_size: [ -1, -1 ] # -1 means unlimited context
xscaling: true # scales up the input embeddings by sqrt(d_model)
4 changes: 4 additions & 0 deletions examples/text_processing/g2p/g2p_train_and_evaluate.py
@@ -38,6 +38,8 @@
trainer.devices=1 \
do_training=True \
do_testing=True
Example of the config file: NeMo/examples/text_processing/g2p/conf/t5_g2p.yaml
# Training Conformer-G2P Model and evaluation at the end of training:
python examples/text_processing/g2p/g2p_train_and_evaluate.py \
@@ -50,6 +52,8 @@
do_training=True \
do_testing=True
Example of the config file: NeMo/examples/text_processing/g2p/conf/g2p_conformer_ctc.yaml
# Run evaluation of the pretrained model:
python examples/text_processing/g2p/g2p_train_and_evaluate.py \
# (Optional: --config-path=<Path to dir of configs> --config-name=<name of config without .yaml>) \
examples/text_processing/g2p/heteronym_classification_inference.py
@@ -31,6 +31,8 @@
This script runs inference with HeteronymClassificationModel
If the input manifest contains target "word_id", evaluation will be also performed.
To prepare dataset, see NeMo/scripts/dataset_processing/g2p/export_wikihomograph_data_to_manifest.py
python heteronym_classification_inference.py \
manifest="<Path to .json manifest>" \
pretrained_model="<Path to .nemo file or pretrained model name from list_available_models()>" \
examples/text_processing/g2p/heteronym_classification_train_and_evaluate.py
@@ -27,11 +27,14 @@
"""
This script runs training and evaluation of HeteronymClassificationModel
To prepare dataset, see NeMo/scripts/dataset_processing/g2p/export_wikihomograph_data_to_manifest.py
To run training and testing:
python heteronym_classification_train_and_evaluate.py \
train_manifest="<Path to manifest file>" \
validation_manifest="<Path to manifest file>" \
model.encoder.pretrained="<Path to .nemo file or pretrained model name from list_available_models()>" \
model.wordids=<Path to wordids.tsv file, similar to https://github.com/google-research-datasets/WikipediaHomographData/blob/master/data/wordids.tsv> \
do_training=True \
do_testing=True
"""
