Hello, I have been trying to train a Spanish model for the synthesizer for a month, and each time I trained on a different dataset, it took a long time to finish each schedule. Right now I am training on another dataset, but my monitor shows this:
It seems to me that it is not training in parallel as it should be, and it is using only about 5% of my GPU:
Is there any parameter I should turn on to enable parallel training? This is the output of the training in progress:
1% would indicate the GPU is not being used here, but the training speed suggests it is. The speed may be normal for a 2060, but there are also some known bugs affecting training speed that unfortunately have no real solutions yet, like #700
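One way to narrow this down is to check whether the low utilization comes from a data-loading bottleneck rather than a missing parallelism setting: if most wall time is spent fetching batches, the GPU will sit idle no matter how it is configured. A rough sketch (here `load_batch` and `train_step` are hypothetical stand-ins for the real dataloader and training step, not functions from this repo):

```python
import time

def profile_steps(load_batch, train_step, n_steps=10):
    """Roughly split wall time between batch loading and the training step.

    If load time dominates, low GPU utilization points to a data-pipeline
    bottleneck rather than a parallelism flag.
    """
    load_t = step_t = 0.0
    for _ in range(n_steps):
        t0 = time.perf_counter()
        batch = load_batch()          # stand-in for fetching one batch
        t1 = time.perf_counter()
        train_step(batch)             # stand-in for one optimizer step
        t2 = time.perf_counter()
        load_t += t1 - t0
        step_t += t2 - t1
    return load_t, step_t

# Toy stand-ins just to demonstrate the measurement
load_t, step_t = profile_steps(lambda: [0] * 1000, lambda b: sum(b))
print(f"loading: {load_t:.4f}s, stepping: {step_t:.4f}s")
```

If loading dominates, increasing the number of dataloader workers (if the training script exposes such an option) is usually the first thing to try.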