
Integrating new synthesizer #1

Closed
fazlekarim opened this issue Jun 12, 2019 · 1 comment

@fazlekarim

Hi,

If I wanted to integrate a new synthesizer, (https://github.com/syang1993/gst-tacotron/), what would be the steps I would need to take?

@CorentinJ
Owner

You're free to use that implementation; fatchord's Tacotron would work too. That's up to you.

First, you'll need to understand how Tacotron is modified to allow for voice conditioning. Refer to section 2 of SV2TTS. You can also check my code.
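To make the conditioning step concrete, here is a minimal sketch (not code from this repo) of the SV2TTS-style modification: the speaker embedding is broadcast across time and concatenated to every encoder output frame before attention. Shapes, dimensions, and the function name are illustrative assumptions.

```python
import numpy as np

def condition_encoder_outputs(encoder_outputs, speaker_embedding):
    """Concatenate a speaker embedding to each encoder timestep.

    encoder_outputs: (T, enc_dim) array of Tacotron encoder outputs.
    speaker_embedding: (spk_dim,) embedding from the speaker encoder.
    Returns: (T, enc_dim + spk_dim) conditioned encoder outputs.
    """
    # Normalize the embedding (the speaker encoder outputs unit-norm vectors)
    e = speaker_embedding / np.linalg.norm(speaker_embedding)
    # Broadcast the same embedding across all T timesteps
    tiled = np.tile(e, (encoder_outputs.shape[0], 1))       # (T, spk_dim)
    return np.concatenate([encoder_outputs, tiled], axis=1)  # (T, enc_dim + spk_dim)

# Hypothetical dimensions: 50 text frames, 512-dim encoder, 256-dim embedding
enc = np.random.randn(50, 512)
emb = np.random.randn(256)
cond = condition_encoder_outputs(enc, emb)
print(cond.shape)  # (50, 768)
```

The attention and decoder then consume the widened encoder outputs; the rest of Tacotron is unchanged.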

Then you need to ensure that the synthesizer and the vocoder use the same spectrogram and audio format. Check the preprocessing scripts to see what is done to the data. You will likely have to change the data loading routine of the vocoder so that it takes the correct inputs.
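One way to keep the two models in sync is a single shared parameter module that both the synthesizer preprocessing and the vocoder data loader import. The values below are illustrative assumptions, not this repo's settings; whatever you pick must match what the vocoder is trained on.

```python
# shared_audio_params.py -- hypothetical shared config; both the
# synthesizer's preprocessing and the vocoder's data loader should
# import from the same place so the mel format can never diverge.
AUDIO_PARAMS = dict(
    sample_rate=16000,
    n_fft=800,
    hop_length=200,   # 12.5 ms at 16 kHz
    win_length=800,   # 50 ms at 16 kHz
    num_mels=80,
    fmin=55,
    fmax=7600,
)

def frames_for(duration_s, params=AUDIO_PARAMS):
    """Spectrogram frame count for a clip -- useful to sanity-check that
    audio samples and mel frames stay aligned across the two models."""
    n_samples = int(duration_s * params["sample_rate"])
    return 1 + n_samples // params["hop_length"]

print(frames_for(1.0))  # 81 frames for one second of audio
```

If the synthesizer emits mels with one set of parameters and the vocoder was trained with another, the vocoder will produce garbage even though both models are individually fine, so this check is worth automating.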

The scripts that use the synthesizer are synthesizer_train.py and vocoder_preprocess.py. Ensure that your model correctly interfaces with them. You also have to provide an interface for inference.
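As a rough illustration of the inference side, here is a skeleton wrapper (everything in it is a placeholder assumption, including the class and method names): the rest of the pipeline only needs a way to load a checkpoint and map (text, speaker embedding) pairs to mel spectrograms.

```python
import numpy as np

class MySynthesizer:
    """Hypothetical wrapper around a new synthesizer for inference.

    The surrounding scripts only need: construct with a checkpoint path,
    then call synthesize_spectrograms(texts, embeddings) to get one mel
    spectrogram per input text.
    """

    def __init__(self, checkpoint_path):
        self.checkpoint_path = checkpoint_path
        self._model = None  # lazy-loaded on first use

    def load(self):
        # Placeholder: load your model's weights from self.checkpoint_path.
        self._model = object()

    def synthesize_spectrograms(self, texts, embeddings):
        """Return a list of mel spectrograms, one (num_mels, T) per text."""
        if self._model is None:
            self.load()
        # Placeholder output: a real model would run text + embedding
        # through the conditioned Tacotron; here we return dummy mels
        # sized proportionally to the text length.
        return [np.zeros((80, 20 * len(t))) for t in texts]
```

Keeping this boundary narrow means synthesizer_train.py, vocoder_preprocess.py, and the demo scripts never need to know which Tacotron implementation sits behind it.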

And you will have to train all this, which is not a minor task! Good luck.
