Describe the bug
The model tries to use CUDA even though I told it to use the CPU. Because this machine has a soldered-on GPU with only 4 GB of memory, it is bound to crash and burn as soon as the model tries to allocate anything there.
To Reproduce
from TTS.api import TTS

tts = TTS(model_name="tts_models/de/thorsten/tacotron2-DDC", progress_bar=False).to("cpu")
tts.tts_with_vc_to_file(
    speaker_wav="my_speaker.wav",
    text="Hallo! Das ist ein Test",
    file_path="output.wav",
)
Expected behavior
The model should run on the CPU.
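For reference, a workaround sketch that should keep everything on the CPU by hiding the GPU from PyTorch before it is imported (CUDA_VISIBLE_DEVICES is a generic CUDA/PyTorch mechanism, not something specific to TTS, and this does not fix the underlying device handling):

import os

# Hide the GPU before torch is imported, so torch.cuda.is_available() returns
# False and every "cuda if available" default falls back to the CPU.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

from TTS.api import TTS

tts = TTS(model_name="tts_models/de/thorsten/tacotron2-DDC", progress_bar=False).to("cpu")
tts.tts_with_vc_to_file(
    speaker_wav="my_speaker.wav",
    text="Hallo! Das ist ein Test",
    file_path="output.wav",
)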
Logs
--truncated--
> voice_conversion_models/multilingual/vctk/freevc24 is already downloaded.
> Using model: freevc
> Loading pretrained speaker encoder model ...
Loaded the voice encoder model on cuda in 0.19 seconds.
Traceback (most recent call last):
File "my_script.py", line 20, in<module>
tts.tts_with_vc_to_file(
File ".venv/lib/python3.11/site-packages/TTS/api.py", line 455, in tts_with_vc_to_file
wav = self.tts_with_vc(
^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/TTS/api.py", line 420, in tts_with_vc
wav = self.voice_converter.voice_conversion(source_wav=fp.name, target_wav=speaker_wav)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/TTS/utils/synthesizer.py", line 254, in voice_conversion
output_wav = self.vc_model.voice_conversion(source_wav, target_wav)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/TTS/vc/models/freevc.py", line 523, in voice_conversion
g_tgt = self.enc_spk_ex.embed_utterance(wav_tgt)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/TTS/vc/modules/freevc/speaker_encoder/speaker_encoder.py", line 155, in embed_utterance
partial_embeds = self(mels).cpu().numpy()
^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/TTS/vc/modules/freevc/speaker_encoder/speaker_encoder.py", line 60, in forward
_, (hidden, _) = self.lstm(mels)
^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/nn/modules/rnn.py", line 911, in forward
result = _VF.lstm(input, hx, self._flat_weights, self.bias, self.num_layers,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 7.35 GiB. GPU
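For what it's worth, the "Loaded the voice encoder model on cuda in 0.19 seconds." line above suggests the FreeVC speaker encoder picks its device on its own (cuda whenever it is available) instead of honoring the earlier .to("cpu"). A quick check of what PyTorch sees on this machine (a sketch; the ~4 GB figure comes from the description above):

import torch

# True here, which is why any component that defaults to "cuda if available"
# ends up on the small GPU despite the .to("cpu") call.
print(torch.cuda.is_available())
print(torch.cuda.get_device_properties(0).total_memory)  # roughly 4 GB on this machine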
Environment
Additional context
No response

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You might also look at our discussion channels.