Replies: 3 comments
>>> Daksh_Varshneya
[May 22, 2020, 8:23am]
Hi,
I am trying to fine-tune TTS models further. For that purpose, I am
trying to collect a dataset where, hopefully, the audio samples are more
expressive in nature. It is tough to collect a dataset in only one voice,
like the LJSpeech dataset. If I collect a good-quality dataset that has
multiple different voices spread across the audio samples, will it be
difficult to train the model on it? I can make sure that each audio
sample contains only one distinct voice, but across samples this may not
hold true.
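For concreteness, here is a rough sketch of what I had in mind (just my own assumption, not Mozilla TTS code): an LJSpeech-style metadata.csv extended with a speaker column, so each clip carries exactly one speaker label and I can check how the voices are spread across samples.

```python
# Sketch only: LJSpeech-style metadata.csv with an extra speaker column, e.g.
#   clip_0001|spk_a|Hello there.
#   clip_0002|spk_b|How are you today?
from collections import defaultdict
from pathlib import Path

def load_metadata(path):
    """Read filename|speaker|text lines into (wav_path, speaker, text) tuples."""
    samples = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        wav_name, speaker, text = line.strip().split("|", 2)
        samples.append((f"wavs/{wav_name}.wav", speaker, text))
    return samples

def count_per_speaker(samples):
    """Count how many clips each voice contributes, to check dataset balance."""
    counts = defaultdict(int)
    for _, speaker, _ in samples:
        counts[speaker] += 1
    return dict(counts)

if __name__ == "__main__":
    samples = load_metadata("metadata.csv")
    print(count_per_speaker(samples))
```

Counting clips per speaker like this would at least tell me whether one voice dominates the dataset before I start training.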
Any suggestions?
[This is an archived TTS discussion thread from discourse.mozilla.org/t/multiple-voices-dataset]