"CUDA is not available, running on CPU." #778

Open
Iheuzio opened this issue Dec 22, 2024 · 2 comments
Labels
bug Something isn't working

Comments

Iheuzio commented Dec 22, 2024

Self Checks

  • This template is only for bug reports. For questions, please visit Discussions.
  • I have thoroughly reviewed the project documentation (installation, training, inference) but couldn't find information to solve my problem.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please submit issues in English, otherwise they will be closed. Thank you! :)
  • Please do not modify this template and fill in all required fields.

Cloud or Self Hosted

Self Hosted (Docker)

Environment Details

Windows Version 10.0.22631 (Build 22631), Docker Engine v27.3.1
NVIDIA RTX 2070 Super (8 GB)
Intel Core i7-9700K

Steps to Reproduce

  1. Run the Docker image from Docker Hub

✔️ Expected Behavior

The image should detect the CUDA GPU when running.

❌ Actual Behavior

2024-12-22 14:40:20 /opt/fish-speech/fish_speech/text/chn_text_norm/text.py:71: SyntaxWarning: invalid escape sequence '\d'
2024-12-22 14:40:20 + "(\d"
2024-12-22 14:40:23 2024-12-22 19:40:23.576 | INFO | main::53 - CUDA is not available, running on CPU.
2024-12-22 14:40:23 2024-12-22 19:40:23.576 | INFO | main::56 - Loading Llama model...
2024-12-22 14:40:29 2024-12-22 19:40:29.817 | INFO | tools.llama.generate:load_model:682 - Restored model from checkpoint
2024-12-22 14:40:29 2024-12-22 19:40:29.817 | INFO | tools.llama.generate:load_model:688 - Using DualARTransformer
2024-12-22 14:40:29 2024-12-22 19:40:29.826 | INFO | main::64 - Loading VQ-GAN model...
2024-12-22 14:40:30 /usr/local/lib/python3.12/site-packages/vector_quantize_pytorch/vector_quantize_pytorch.py:445: FutureWarning: torch.cuda.amp.autocast(args...) is deprecated. Please use torch.amp.autocast('cuda', args...) instead.
2024-12-22 14:40:30 @autocast(enabled = False)
2024-12-22 14:40:30 /usr/local/lib/python3.12/site-packages/vector_quantize_pytorch/vector_quantize_pytorch.py:630: FutureWarning: torch.cuda.amp.autocast(args...) is deprecated. Please use torch.amp.autocast('cuda', args...) instead.
2024-12-22 14:40:30 @autocast(enabled = False)
2024-12-22 14:40:30 /usr/local/lib/python3.12/site-packages/vector_quantize_pytorch/finite_scalar_quantization.py:147: FutureWarning: torch.cuda.amp.autocast(args...) is deprecated. Please use torch.amp.autocast('cuda', args...) instead.
2024-12-22 14:40:30 @autocast(enabled = False)
2024-12-22 14:40:30 /usr/local/lib/python3.12/site-packages/vector_quantize_pytorch/lookup_free_quantization.py:209: FutureWarning: torch.cuda.amp.autocast(args...) is deprecated. Please use torch.amp.autocast('cuda', args...) instead.
2024-12-22 14:40:30 @autocast(enabled = False)
2024-12-22 14:40:30 2024-12-22 19:40:30.725 | INFO | tools.vqgan.inference:load_model:43 - Loaded model:
2024-12-22 14:40:30 2024-12-22 19:40:30.725 | INFO | main::71 - Decoder model loaded, warming up...
2024-12-22 14:40:30 2024-12-22 19:40:30.736 | INFO | tools.llama.generate:generate_long:789 - Encoded text: Hello world.
2024-12-22 14:40:30 2024-12-22 19:40:30.736 | INFO | tools.llama.generate:generate_long:807 - Generating sentence 1/1 of sample 1/1
2024-12-22 14:41:22
0%| | 0/1023 [00:00<?, ?it/s]
0%| | 1/1023 [00:01<19:34, 1.15s/it]
0%| | 2/1023 [00:02<18:41, 1.10s/it]
0%| | 3/1023 [00:03<18:20, 1.08s/it]
0%| | 4/1023 [00:04<18:18, 1.08s/it]
0%| | 5/1023 [00:05<18:19, 1.08s/it]
1%| | 6/1023 [00:06<18:40, 1.10s/it]
1%| | 7/1023 [00:07<18:28, 1.09s/it]
1%| | 8/1023 [00:08<18:45, 1.11s/it]
1%| | 9/1023 [00:10<19:26, 1.15s/it]
1%| | 10/1023 [00:11<19:20, 1.15s/it]
1%| | 11/1023 [00:12<19:26, 1.15s/it]
1%| | 12/1023 [00:13<19:55, 1.18s/it]
1%|▏ | 13/1023 [00:14<19:36, 1.17s/it]
1%|▏ | 14/1023 [00:15<19:51, 1.18s/it]
1%|▏ | 15/1023 [00:17<20:04, 1.19s/it]
2%|▏ | 16/1023 [00:18<20:15, 1.21s/it]
2%|▏ | 17/1023 [00:19<20:21, 1.21s/it]
2%|▏ | 18/1023 [00:20<19:58, 1.19s/it]
2%|▏ | 19/1023 [00:21<19:40, 1.18s/it]
2%|▏ | 20/1023 [00:23<19:37, 1.17s/it]
2%|▏ | 21/1023 [00:24<19:25, 1.16s/it]
2%|▏ | 22/1023 [00:25<19:31, 1.17s/it]
2%|▏ | 23/1023 [00:26<19:43, 1.18s/it]
2%|▏ | 24/1023 [00:27<20:35, 1.24s/it]
2%|▏ | 25/1023 [00:29<20:39, 1.24s/it]
3%|▎ | 26/1023 [00:30<20:23, 1.23s/it]
3%|▎ | 27/1023 [00:31<20:06, 1.21s/it]
3%|▎ | 28/1023 [00:32<19:46, 1.19s/it]
3%|▎ | 29/1023 [00:33<19:40, 1.19s/it]
3%|▎ | 30/1023 [00:35<19:37, 1.19s/it]
3%|▎ | 31/1023 [00:36<19:29, 1.18s/it]
3%|▎ | 32/1023 [00:37<19:42, 1.19s/it]
3%|▎ | 33/1023 [00:38<19:41, 1.19s/it]
3%|▎ | 34/1023 [00:40<20:57, 1.27s/it]
3%|▎ | 35/1023 [00:41<21:05, 1.28s/it]
4%|▎ | 36/1023 [00:42<20:52, 1.27s/it]
4%|▎ | 37/1023 [00:44<21:11, 1.29s/it]
4%|▎ | 38/1023 [00:45<20:29, 1.25s/it]
4%|▎ | 38/1023 [00:46<20:01, 1.22s/it]
2024-12-22 14:41:22 2024-12-22 19:41:22.113 | INFO | tools.llama.generate:generate_long:861 - Generated 40 tokens in 51.38 seconds, 0.78 tokens/sec
2024-12-22 14:41:22 2024-12-22 19:41:22.113 | INFO | tools.llama.generate:generate_long:864 - Bandwidth achieved: 0.50 GB/s
2024-12-22 14:41:22 2024-12-22 19:41:22.114 | INFO | tools.inference_engine.vq_manager:decode_vq_tokens:20 - VQ features: torch.Size([8, 39])
2024-12-22 14:42:21 2024-12-22 19:42:21.401 | INFO | main::98 - Warming up done, launching the web UI...

Iheuzio added the bug label on Dec 22, 2024
Whale-Dolphin (Collaborator) commented

I think it may be because the RTX 2070 does not support fp16; try turning it off.
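As a rough illustration of what "turning off fp16" could look like in plain PyTorch, here is a minimal sketch; it is not fish-speech's actual configuration flag, and the capability threshold is only an assumption for Turing-class cards like the RTX 2070 Super.

```python
# Illustrative only: generic PyTorch fallback from fp16 to fp32.
# This is NOT fish-speech's actual half-precision switch.
import torch

if torch.cuda.is_available():
    major, _minor = torch.cuda.get_device_capability()
    # Turing cards (e.g. RTX 2070 Super) report compute capability 7.5.
    use_half = major >= 7
else:
    # On CPU, half precision is usually slower, so stay in fp32.
    use_half = False

dtype = torch.float16 if use_half else torch.float32
print(f"CUDA available: {torch.cuda.is_available()}, inference dtype: {dtype}")
```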


Iheuzio (Author) commented Jan 14, 2025

> I think it may be because the RTX 2070 does not support fp16; try turning it off.

docker run -ti --runtime=nvidia -e NVIDIA_DRIVER_CAPABILITIES=compute,utility -e NVIDIA_VISIBLE_DEVICES=all -p 127.0.0.1:7860:7860 fishaudio/fish-speech

The image has to be explicitly started with the NVIDIA runtime so it runs under CUDA directly.
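Once the container is started with the NVIDIA runtime as above, a short PyTorch check run inside the container (a sanity-check sketch, not part of fish-speech itself) can confirm that the GPU is actually visible:

```python
# Quick check, run inside the container, that the NVIDIA runtime
# actually exposed the GPU to PyTorch.
import torch

if torch.cuda.is_available():
    print("CUDA available:", torch.cuda.get_device_name(0))
else:
    print("CUDA is not available, running on CPU.")
```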
