
A (Heavily Documented) TensorFlow Implementation of Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model

Requirements

  • NumPy >= 1.11.1
  • TensorFlow >= 1.3
  • librosa
  • tqdm
  • matplotlib
  • scipy

Data

We train the model on two different speech datasets.

  1. LJ Speech Dataset
  2. Nick Offerman's Audiobooks

The LJ Speech Dataset has recently become a widely used benchmark for TTS because it is publicly available. It contains 24 hours of reasonable-quality samples. Nick's audiobooks, which total 18 hours, are additionally used to see whether the model can learn even from a smaller amount of more variable speech.

Training

  • STEP 0. Download the LJ Speech Dataset or prepare your own data.
  • STEP 1. Adjust hyperparameters in `hyperparams.py`. (If you want to do preprocessing, set `prepro` to True; see the sketch after this list.)
  • STEP 2. Run `python train.py`. (If you set `prepro` to True, run `python prepro.py` first.)
  • STEP 3. Run `python eval.py` regularly during training.
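
For orientation, `prepro` is a flag in `hyperparams.py`. A minimal sketch of the kind of entries involved; only `prepro`, `r`, and the 0.001 learning rate are confirmed elsewhere in this README, and the other attribute names and values are illustrative placeholders:

    # hyperparams.py (sketch) -- attribute names other than `prepro` are
    # illustrative; check the actual file in the repo.
    class Hyperparams:
        prepro = True          # if True, run `python prepro.py` before training
        data = "LJSpeech-1.0"  # path to the downloaded dataset (illustrative)
        lr = 0.001             # initial learning rate (see Notes below)
        r = 5                  # reduction factor: frames per decoder step (see Notes)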

Sample Synthesis

We generate speech samples based on the Harvard Sentences, as in the original paper. They are already included in the repo.

  • Run `python synthesize.py` and check the generated files in the `samples` folder.

Training Curve

Attention Plot

Generated Samples

Notes

  • It's important to monitor the attention plots during training. If the attention plots look good (the alignment looks linear) and then degrade (coming to resemble the plots from the beginning of training), training has gone awry and will most likely need to be restarted from a checkpoint where the attention still looked good; in our experience, the loss is unlikely to recover. This deterioration of attention corresponds with a spike in the loss.
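
    A minimal, self-contained sketch of how such an attention (alignment) matrix can be plotted with matplotlib; the `alignment` array and file name here are illustrative stand-ins, since in practice the matrix comes from the model's attention weights:

    import numpy as np
    import matplotlib
    matplotlib.use("Agg")                 # render to file; no display needed
    import matplotlib.pyplot as plt

    # Illustrative stand-in for the model's (decoder steps, encoder steps) weights.
    alignment = np.random.rand(50, 120)

    fig, ax = plt.subplots()
    im = ax.imshow(alignment.T, aspect="auto", origin="lower")
    ax.set_xlabel("Decoder timestep")
    ax.set_ylabel("Encoder timestep")
    fig.colorbar(im, ax=ax)
    fig.savefig("alignment.png")          # healthy training: a clean diagonal
    plt.close(fig)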

  • In the original paper, the authors said, "An important trick we discovered was predicting multiple, non-overlapping output frames at each decoder step," where the number of frames is the reduction factor, r. We originally interpreted this as predicting non-sequential frames during each decoding step t, so we were using the following scheme (with r=5) during decoding.

    t    frame numbers
    -----------------------
    0    [ 0  5 10 15 20]
    1    [ 1  6 11 16 21]
    2    [ 2  7 12 17 22]
    ...
    

    After much experimentation, we were unable to get our model to learn anything useful. We then switched to predicting r sequential frames during each decoding step.

    t    frame numbers
    -----------------------
    0    [ 0  1  2  3  4]
    1    [ 5  6  7  8  9]
    2    [10 11 12 13 14]
    ...
    

    With this setup we noticed improvements in the attention and have since kept it.
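
    To make the two schemes concrete, here is a small NumPy sketch (not from the repo) that groups T=25 frames into decoder steps under each interpretation:

    import numpy as np

    r = 5                                   # reduction factor
    T, n_mels = 25, 80                      # T is a multiple of r here
    steps = T // r
    mel = np.arange(T)[:, None] * np.ones((1, n_mels))   # frame i holds value i

    # Non-sequential (interleaved): step t gets frames t, t+steps, t+2*steps, ...
    non_sequential = mel.reshape(r, steps, n_mels).transpose(1, 0, 2)

    # Sequential (what worked): step t gets frames r*t, ..., r*t + r - 1
    sequential = mel.reshape(steps, r, n_mels)

    print(non_sequential[0, :, 0])   # [ 0.  5. 10. 15. 20.]
    print(sequential[0, :, 0])       # [0. 1. 2. 3. 4.]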

  • Perhaps the most important hyperparameter is the learning rate. With an initial learning rate of 0.002 we were never able to learn a clean attention; the loss would frequently explode. With an initial learning rate of 0.001 we were able to learn a clean attention, train for much longer, and get discernible words during synthesis.

  • Check out other TTS models such as DCTTS or Deep Voice 3.

Differences from the original paper

  • We use Noam style warmup and decay.
  • We implement gradient clipping.
  • Our training batches are bucketed.
  • After the last convolutional layer of the post-processing net, we apply an affine transformation to bring the dimensionality up from 80 to 128, because the highway net requires an input dimensionality of 128. In the original highway networks paper, the authors mention that the input dimensionality can also be increased with zero-padding, but they used the affine transformation in all of their experiments. We do not know which option the Tacotron authors chose.
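
A minimal TF 1.x sketch of the warmup/decay schedule, the gradient clipping, and the 80-to-128 affine projection described above. The optimizer choice, warmup length, clip threshold, and shapes are illustrative assumptions, not read from the repo's code:

    # Sketch (TF 1.x): Noam-style LR schedule, gradient clipping, 80 -> 128 projection.
    import tensorflow as tf

    def noam_scheme(init_lr, global_step, warmup_steps=4000.0):
        # Linear warmup for `warmup_steps` steps, then decay proportional to step^-0.5.
        step = tf.cast(global_step + 1, tf.float32)
        return init_lr * warmup_steps ** 0.5 * tf.minimum(
            step * warmup_steps ** -1.5, step ** -0.5)

    global_step = tf.train.get_or_create_global_step()
    lr = noam_scheme(0.001, global_step)        # initial LR from the Notes above
    optimizer = tf.train.AdamOptimizer(lr)      # optimizer choice is illustrative

    # Dummy loss so the sketch is self-contained.
    x = tf.Variable(tf.random_normal([10]))
    loss = tf.reduce_sum(tf.square(x))

    # Gradient clipping: clip by global norm before applying the update.
    grads, variables = zip(*optimizer.compute_gradients(loss))
    clipped, _ = tf.clip_by_global_norm(grads, 5.0)   # threshold is illustrative
    train_op = optimizer.apply_gradients(zip(clipped, variables),
                                         global_step=global_step)

    # Affine projection after the post-net's last conv layer: 80 -> 128 dims,
    # matching the highway net's required input size.
    post_conv_out = tf.placeholder(tf.float32, [None, None, 80])  # (batch, time, 80)
    highway_in = tf.layers.dense(post_conv_out, units=128)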

Jan. 2018, Kyubyong Park & Tommy Mulc
