How to use trained model in inference? #193
Comments
@solee0022, I've identified the root of the issue. It's related to my training configuration file. The consequence? The model skips the cleaner function, which I was using to convert grapheme sequences to phoneme sequences. As a result, I unintentionally 😅 trained the model on a character-based approach rather than a phoneme-based one. Moreover, in the inference code:

```python
from text import text_to_sequence, cleaned_text_to_sequence

def get_text(text, hps):
    text_norm = cleaned_text_to_sequence(text)
    if hps.data.add_blank:
        text_norm = commons.intersperse(text_norm, 0)
    text_norm = torch.LongTensor(text_norm)
    return text_norm
```

In addition, it's a good idea to clean your text beforehand as an offline preprocessing step, so you don't spend time during training doing the cleaning.
Thank you, I made the changes according to your instructions and it worked.
I trained a model with a Korean dataset and got checkpoints for the Discriminator and the Generator.
Is it right to use only the generator checkpoint ('G_20000.pth'), and not the Discriminator, for inference?
I synthesized Korean audio with only G_20000.pth, but the synthesized audio was terrible.
Below is the code I changed.
```python
net_g = SynthesizerTrn(
    len(lang_symbols['en']),
    hps.data.filter_length // 2 + 1,
    hps.train.segment_size // hps.data.hop_length,
    **hps.model).cuda()
_ = net_g.eval()
_ = utils.load_checkpoint("/path/to/G_20000.pth", net_g, None)
```
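On the checkpoint question: in GAN-based TTS setups like this one, the discriminator is only an adversary used during training (and for resuming training), so inference loads the generator checkpoint alone, as the code above does. A minimal stdlib sketch of that load-generator-only pattern, with plain dicts and pickle standing in for PyTorch state dicts and `utils.load_checkpoint` (all names here are illustrative, not the repo's):

```python
import os
import pickle
import tempfile

# Hypothetical stand-ins for the state dicts saved during training.
g_state = {"enc.weight": [0.1, 0.2], "dec.weight": [0.3]}
d_state = {"disc.weight": [0.9]}  # only needed if training is resumed

tmp = tempfile.mkdtemp()
g_path = os.path.join(tmp, "G_20000.pth")
d_path = os.path.join(tmp, "D_20000.pth")
with open(g_path, "wb") as f:
    pickle.dump({"model": g_state, "iteration": 20000}, f)
with open(d_path, "wb") as f:
    pickle.dump({"model": d_state, "iteration": 20000}, f)

# Inference: load only the generator checkpoint; D_*.pth is never touched.
with open(g_path, "rb") as f:
    ckpt = pickle.load(f)
model_state = ckpt["model"]
print(sorted(model_state))  # ['dec.weight', 'enc.weight']
```

So using G_20000.pth alone is correct; bad synthesis quality usually points elsewhere, e.g. a train/inference mismatch in text processing like the cleaner issue discussed above.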