ModuleNotFoundError: TTS.tts.datasets.preprocess #733

Closed
luis-vera opened this issue Oct 28, 2021 · 2 comments
Labels
wontfix This will not be worked on

Comments

@luis-vera

Hi:
I downloaded Mozilla TTS from GitHub, following the steps suggested there. I am using my own dataset in the LJSpeech format. I installed TTS, but when I execute:

python TTS/bin/train_tacotron.py --config_path TTS/tts/configs/config.json

I got:

ModuleNotFoundError: No module named 'TTS.tts.datasets.preprocess'

I found a similar issue in which you recommended changing server/conf.json. I did as you indicated, but the error is the same.

I am using Ubuntu 18 with two GeForce RTX 2080 Ti GPUs. I am working with Anaconda and Python 3.8.

Thanks a lot for your help.
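
One quick way to narrow this down is to check which TTS package the interpreter actually resolves, since the preprocess module only exists in the Mozilla-era layout. A minimal probe, assuming nothing beyond the standard library:

# Probe which TTS package is on sys.path and whether the old and new
# dataset-module layouts are importable.
import importlib
import importlib.util

spec = importlib.util.find_spec("TTS")
print("TTS resolves to:", spec.origin if spec else "not found")

for mod in ("TTS.tts.datasets.preprocess", "TTS.tts.datasets"):
    try:
        importlib.import_module(mod)
        print(mod, "-> importable")
    except ModuleNotFoundError:
        print(mod, "-> missing")

If TTS resolves to a site-packages path rather than the repository checkout, a pip-installed release is shadowing the source tree, which would explain the missing module.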

@stale

stale bot commented Jan 3, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You might also look at our Discourse page for further help: https://discourse.mozilla.org/c/tts

@stale stale bot added the wontfix label Jan 3, 2022
@stale stale bot closed this as completed Apr 16, 2022
@ZipingL

ZipingL commented May 19, 2023

I believe I must make a point of privileged revival and state that my computer is giving me this odd error as well. Mozilla team? I guess now that it has been nine years since I last used @Mozilla-GitHub-Standards related Common Voice and Deep Voice programs, there really have not been any changes...

You can't just rely on the "OpenSource, FOR ALL" branding to remain relevant in the machine learning and Python scene. Seriously. Just help me fix this error. I'm in a really bad mood right now due to @aws and @GoogleWorkspaces problems, so I apologize for any statements that may read as incredibly uncalled for. It's just one of those days that seems to be raining cats and dogs.

Any help is appreciated.

(.venv) suntzuping@hrcs:~/workspace/TTS$ ./run.sh 
Traceback (most recent call last):
  File "./TTS/bin/train_tacotron.py", line 15, in <module>
    from TTS.tts.datasets.preprocess import load_meta_data
ModuleNotFoundError: No module named 'TTS.tts.datasets.preprocess'
(.venv) suntzuping@hrcs:~/workspace/TTS$ deviceQueryDrv 
deviceQueryDrv Starting...

CUDA Device Query (Driver API) statically linked version 
Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA TITAN RTX"
  CUDA Driver Version:                           12.1
  CUDA Capability Major/Minor version number:    7.5
  Total amount of global memory:                 24576 MBytes (25769476096 bytes)
  (72) Multiprocessors, ( 64) CUDA Cores/MP:     4608 CUDA Cores
  GPU Max Clock rate:                            1770 MHz (1.77 GHz)
  Memory Clock rate:                             7001 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 6291456 bytes
  Max Texture Dimension Sizes                    1D=(131072) 2D=(131072, 65536) 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1024
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size (x,y,z):    (2147483647, 65535, 65535)
  Texture alignment:                             512 bytes
  Maximum memory pitch:                          2147483647 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Concurrent kernel execution:                   Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 8 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
Result = PASS
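
For what it's worth, the traceback above is consistent with a newer Coqui-era TTS being installed in the virtualenv: the dataset code was reorganized there, and load_meta_data appears to have been superseded by load_tts_samples in TTS.tts.datasets. A hedged compatibility sketch, assuming that rename (the replacement has a different signature, so call sites need adjusting too):

# Try the Mozilla TTS import first, then fall back to the assumed
# Coqui-era location. load_tts_samples takes formatted dataset configs
# and returns a train/eval split, so it is not a drop-in replacement.
try:
    from TTS.tts.datasets.preprocess import load_meta_data  # Mozilla TTS layout
except ModuleNotFoundError:
    from TTS.tts.datasets import load_tts_samples  # newer Coqui layout (assumed)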
