GPU memory runs out when training the model #27
Comments
Yes, as shown above. I previously used a 940MX and it could run with a batch size of 2; in your case, you could try 4.
I tried raising batch_size to 6 by modifying some files, and it runs very stably.
May I ask which files you modified? I looked at the references above and studied them for a long time, but I couldn't quite figure it out.
You need to modify this part of ./synthesizer/hparams.py
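As a rough illustration of the kind of change being discussed, here is a minimal sketch of a batch-size setting inside a hyperparameter file. The field names and values below are assumptions for illustration, not the actual contents of this repository's ./synthesizer/hparams.py; check your local copy for the real attribute names:

```python
# ./synthesizer/hparams.py (sketch; names and defaults are assumptions)

# Lowering the batch size is the main lever for reducing VRAM usage.
# On a 4 GB card, values around 2-6 were reported to work in this thread.
synthesis_batch_size = 6  # hypothetical default, lowered from a larger value
```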
OK, I'll give it a try. Thanks!
Did you also run into this when training the vocoder?
I changed the 12 in the code below to 2, and for now it no longer throws the error.
Great, thank you!
This has been updated in the Readme.
```
Variable._execution_engine.run_backward(
RuntimeError: CUDA out of memory. Tried to allocate 88.00 MiB (GPU 0; 4.00 GiB total capacity; 2.68 GiB already allocated; 0 bytes free; 2.85 GiB reserved in total by PyTorch)
```
Could you provide a parameter for adjusting batch_size? The GPU I'm currently using only has 4 GB of VRAM (GTX 1050 Ti), and training with the default parameters frequently runs out of memory...
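One way the requested feature could look is a command-line flag that overrides the hard-coded batch size. This is a hypothetical sketch, not the project's actual CLI; the flag name and default are assumptions:

```python
# Hypothetical sketch: exposing batch size as a CLI flag so users with
# small GPUs can lower it without editing hyperparameter files.
import argparse


def parse_args(argv=None):
    """Parse training options; argv=None falls back to sys.argv."""
    parser = argparse.ArgumentParser(description="Train with an adjustable batch size")
    parser.add_argument(
        "--batch_size",
        type=int,
        default=12,  # assumed default; lower to 2-6 on a 4 GB card
        help="Training batch size; reduce this if CUDA runs out of memory",
    )
    return parser.parse_args(argv)


if __name__ == "__main__":
    args = parse_args()
    print(f"Training with batch_size={args.batch_size}")
```

The training loop would then read `args.batch_size` instead of a constant, so an out-of-memory user could simply run `python train.py --batch_size 2`.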