
The app isn't opening #337

Open
plusmeup opened this issue Oct 2, 2024 · 9 comments
Labels
bug Something isn't working

Comments

@plusmeup

plusmeup commented Oct 2, 2024

Describe the bug
The app launches for a second, as I can see in the System Monitor, and then instantly closes itself. No loading screen, no pop-ups, nothing. Just a silent self-termination.

Expected behavior
Being able to use the app.

Screenshots
Nothing to screenshot really.

Debugging information
I'm unable to provide debugging information because I can't even open the app. However, I'm on Fedora 40, and the app is installed as a Flatpak. My GPU is Nvidia and my CPU is AMD.

@plusmeup plusmeup added the bug Something isn't working label Oct 2, 2024
@Jeffser
Owner

Jeffser commented Oct 3, 2024

Could you run the app with the command `flatpak run com.jeffser.Alpaca` and send me the output? Thanks

@rashadgasimli

The problem is ... NVIDIA. Use this command: `flatpak override --env=GSK_RENDERER=ngl com.jeffser.Alpaca`
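For anyone else hitting this, a sketch of applying (and later undoing) that override, assuming a per-user Flatpak install; for a system-wide install, drop `--user` and run it as root. If the plain command fails with "cannot create this override", the `--user` flag is usually what's missing:

```shell
# Persist the GTK renderer override for the Alpaca Flatpak (user install).
flatpak override --user --env=GSK_RENDERER=ngl com.jeffser.Alpaca

# Confirm the override was recorded.
flatpak override --user --show com.jeffser.Alpaca

# Undo it later, once a fixed build no longer needs it.
flatpak override --user --unset-env=GSK_RENDERER com.jeffser.Alpaca
```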

@CodingKoalaGeneral
Sponsor

> Debugging information
>
> Please paste here the debugging information available at 'About Alpaca' > 'Troubleshooting' > 'Debugging Information'

It might help to add the commands for printing the debugging info to the new-issue template.

@plusmeup
Author

plusmeup commented Oct 3, 2024

> Could you run the app with the command `flatpak run com.jeffser.Alpaca` and send me the output? Thanks

INFO [main.py | main] Alpaca version: 2.0.6
INFO [connection_handler.py | start] Starting Alpaca's Ollama instance...
INFO [connection_handler.py | start] Started Alpaca's Ollama instance
2024/10/03 12:38:26 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/plusmeo/.var/app/com.jeffser.Alpaca/data/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-10-03T12:38:26.169+03:00 level=INFO source=images.go:753 msg="total blobs: 5"
time=2024-10-03T12:38:26.169+03:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-10-03T12:38:26.170+03:00 level=INFO source=routes.go:1200 msg="Listening on 127.0.0.1:11435 (version 0.3.11)"
time=2024-10-03T12:38:26.170+03:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/home/plusmeo/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama1032551870/runners
time=2024-10-03T12:38:26.170+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu payload=linux/amd64/cpu/libggml.so.gz
time=2024-10-03T12:38:26.170+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu payload=linux/amd64/cpu/libllama.so.gz
time=2024-10-03T12:38:26.170+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu payload=linux/amd64/cpu/ollama_llama_server.gz
time=2024-10-03T12:38:26.170+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx payload=linux/amd64/cpu_avx/libggml.so.gz
time=2024-10-03T12:38:26.170+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx payload=linux/amd64/cpu_avx/libllama.so.gz
time=2024-10-03T12:38:26.170+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx payload=linux/amd64/cpu_avx/ollama_llama_server.gz
time=2024-10-03T12:38:26.170+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx2 payload=linux/amd64/cpu_avx2/libggml.so.gz
time=2024-10-03T12:38:26.170+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx2 payload=linux/amd64/cpu_avx2/libllama.so.gz
time=2024-10-03T12:38:26.170+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx2 payload=linux/amd64/cpu_avx2/ollama_llama_server.gz
time=2024-10-03T12:38:26.170+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v11 payload=linux/amd64/cuda_v11/libggml.so.gz
time=2024-10-03T12:38:26.170+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v11 payload=linux/amd64/cuda_v11/libllama.so.gz
time=2024-10-03T12:38:26.170+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v11 payload=linux/amd64/cuda_v11/ollama_llama_server.gz
time=2024-10-03T12:38:26.170+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v12 payload=linux/amd64/cuda_v12/libggml.so.gz
time=2024-10-03T12:38:26.170+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v12 payload=linux/amd64/cuda_v12/libllama.so.gz
time=2024-10-03T12:38:26.170+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v12 payload=linux/amd64/cuda_v12/ollama_llama_server.gz
time=2024-10-03T12:38:26.170+03:00 level=DEBUG source=common.go:168 msg=extracting runner=rocm_v60102 payload=linux/amd64/rocm_v60102/libggml.so.gz
time=2024-10-03T12:38:26.170+03:00 level=DEBUG source=common.go:168 msg=extracting runner=rocm_v60102 payload=linux/amd64/rocm_v60102/libllama.so.gz
time=2024-10-03T12:38:26.170+03:00 level=DEBUG source=common.go:168 msg=extracting runner=rocm_v60102 payload=linux/amd64/rocm_v60102/ollama_llama_server.gz
INFO [connection_handler.py | start] client version is 0.3.11
INFO [connection_handler.py | request] GET : http://127.0.0.1:11435/api/tags
Gdk-Message: 12:38:27.969: Error 71 (Protocol error) dispatching to Wayland display.

@plusmeup
Author

plusmeup commented Oct 3, 2024

> The problem is ... NVIDIA. Use this command: `flatpak override --env=GSK_RENDERER=ngl com.jeffser.Alpaca`

I tried it with sudo and got no output. Then I tried it without sudo, and it said "cannot create this override", listing a path to my Flatpak's directory. However, the app is now working like it should. Here's the debug log after a successful launch:

INFO [main.py | main] Alpaca version: 2.0.6
INFO [connection_handler.py | start] Starting Alpaca's Ollama instance...
INFO [connection_handler.py | start] Started Alpaca's Ollama instance
2024/10/03 12:40:59 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/plusmeo/.var/app/com.jeffser.Alpaca/data/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-10-03T12:40:59.631+03:00 level=INFO source=images.go:753 msg="total blobs: 5"
time=2024-10-03T12:40:59.631+03:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-10-03T12:40:59.631+03:00 level=INFO source=routes.go:1200 msg="Listening on 127.0.0.1:11435 (version 0.3.11)"
time=2024-10-03T12:40:59.632+03:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/home/plusmeo/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama4194133537/runners
time=2024-10-03T12:40:59.632+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu payload=linux/amd64/cpu/libggml.so.gz
time=2024-10-03T12:40:59.632+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu payload=linux/amd64/cpu/libllama.so.gz
time=2024-10-03T12:40:59.632+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu payload=linux/amd64/cpu/ollama_llama_server.gz
time=2024-10-03T12:40:59.632+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx payload=linux/amd64/cpu_avx/libggml.so.gz
time=2024-10-03T12:40:59.632+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx payload=linux/amd64/cpu_avx/libllama.so.gz
time=2024-10-03T12:40:59.632+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx payload=linux/amd64/cpu_avx/ollama_llama_server.gz
time=2024-10-03T12:40:59.632+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx2 payload=linux/amd64/cpu_avx2/libggml.so.gz
time=2024-10-03T12:40:59.632+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx2 payload=linux/amd64/cpu_avx2/libllama.so.gz
time=2024-10-03T12:40:59.632+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx2 payload=linux/amd64/cpu_avx2/ollama_llama_server.gz
time=2024-10-03T12:40:59.632+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v11 payload=linux/amd64/cuda_v11/libggml.so.gz
time=2024-10-03T12:40:59.632+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v11 payload=linux/amd64/cuda_v11/libllama.so.gz
time=2024-10-03T12:40:59.632+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v11 payload=linux/amd64/cuda_v11/ollama_llama_server.gz
time=2024-10-03T12:40:59.632+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v12 payload=linux/amd64/cuda_v12/libggml.so.gz
time=2024-10-03T12:40:59.632+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v12 payload=linux/amd64/cuda_v12/libllama.so.gz
time=2024-10-03T12:40:59.632+03:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v12 payload=linux/amd64/cuda_v12/ollama_llama_server.gz
time=2024-10-03T12:40:59.632+03:00 level=DEBUG source=common.go:168 msg=extracting runner=rocm_v60102 payload=linux/amd64/rocm_v60102/libggml.so.gz
time=2024-10-03T12:40:59.632+03:00 level=DEBUG source=common.go:168 msg=extracting runner=rocm_v60102 payload=linux/amd64/rocm_v60102/libllama.so.gz
time=2024-10-03T12:40:59.632+03:00 level=DEBUG source=common.go:168 msg=extracting runner=rocm_v60102 payload=linux/amd64/rocm_v60102/ollama_llama_server.gz
INFO [connection_handler.py | start] client version is 0.3.11
INFO [connection_handler.py | request] GET : http://127.0.0.1:11435/api/tags
time=2024-10-03T12:41:07.177+03:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/home/plusmeo/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama4194133537/runners/cpu/ollama_llama_server
time=2024-10-03T12:41:07.177+03:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/home/plusmeo/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama4194133537/runners/cpu_avx/ollama_llama_server
time=2024-10-03T12:41:07.177+03:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/home/plusmeo/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama4194133537/runners/cpu_avx2/ollama_llama_server
time=2024-10-03T12:41:07.177+03:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/home/plusmeo/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama4194133537/runners/cuda_v11/ollama_llama_server
time=2024-10-03T12:41:07.177+03:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/home/plusmeo/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama4194133537/runners/cuda_v12/ollama_llama_server
time=2024-10-03T12:41:07.177+03:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/home/plusmeo/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama4194133537/runners/rocm_v60102/ollama_llama_server
time=2024-10-03T12:41:07.177+03:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx2 cuda_v11 cuda_v12 rocm_v60102 cpu cpu_avx]"
time=2024-10-03T12:41:07.177+03:00 level=DEBUG source=common.go:50 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-10-03T12:41:07.177+03:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-10-03T12:41:07.177+03:00 level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
time=2024-10-03T12:41:07.178+03:00 level=DEBUG source=gpu.go:86 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-10-03T12:41:07.178+03:00 level=DEBUG source=gpu.go:467 msg="Searching for GPU library" name=libcuda.so

time=2024-10-03T12:41:07.178+03:00 level=DEBUG source=gpu.go:490 msg="gpu library search" globs="[/app/lib/ollama/libcuda.so* /app/lib/libcuda.so* /usr/lib/x86_64-linux-gnu/GL/default/lib/libcuda.so* /usr/lib/x86_64-linux-gnu/openh264/extra/libcuda.so* /usr/lib/x86_64-linux-gnu/openh264/extra/libcuda.so* /usr/lib/sdk/llvm15/lib/libcuda.so* /usr/lib/x86_64-linux-gnu/GL/default/lib/libcuda.so* /usr/lib/ollama/libcuda.so* /app/plugins/AMD/lib/ollama/libcuda.so* /usr/local/cuda*/targets//lib/libcuda.so /usr/lib/-linux-gnu/nvidia/current/libcuda.so /usr/lib/-linux-gnu/libcuda.so /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers//libcuda.so /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-10-03T12:41:07.180+03:00 level=DEBUG source=gpu.go:524 msg="discovered GPU libraries" paths=[]
time=2024-10-03T12:41:07.180+03:00 level=DEBUG source=gpu.go:467 msg="Searching for GPU library" name=libcudart.so*
time=2024-10-03T12:41:07.180+03:00 level=DEBUG source=gpu.go:490 msg="gpu library search" globs="[/app/lib/ollama/libcudart.so* /app/lib/libcudart.so* /usr/lib/x86_64-linux-gnu/GL/default/lib/libcudart.so* /usr/lib/x86_64-linux-gnu/openh264/extra/libcudart.so* /usr/lib/x86_64-linux-gnu/openh264/extra/libcudart.so* /usr/lib/sdk/llvm15/lib/libcudart.so* /usr/lib/x86_64-linux-gnu/GL/default/lib/libcudart.so* /usr/lib/ollama/libcudart.so* /app/plugins/AMD/lib/ollama/libcudart.so* /app/lib/ollama/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers//libcudart.so /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
time=2024-10-03T12:41:07.181+03:00 level=DEBUG source=gpu.go:524 msg="discovered GPU libraries" paths="[/app/lib/ollama/libcudart.so.12.4.99 /app/lib/ollama/libcudart.so.11.3.109]"
CUDA driver version: 12-6
time=2024-10-03T12:41:08.825+03:00 level=DEBUG source=gpu.go:130 msg="detected GPUs" library=/app/lib/ollama/libcudart.so.12.4.99 count=1
[GPU-86d86f78-e421-4a86-9143-0783f8191e6d] CUDA totalMem 3983474688
[GPU-86d86f78-e421-4a86-9143-0783f8191e6d] CUDA freeMem 3896901632
[GPU-86d86f78-e421-4a86-9143-0783f8191e6d] CUDA usedMem 0
[GPU-86d86f78-e421-4a86-9143-0783f8191e6d] Compute Capability 8.6
time=2024-10-03T12:41:08.827+03:00 level=WARN source=amd_linux.go:60 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-10-03T12:41:08.827+03:00 level=DEBUG source=amd_linux.go:103 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/0/properties"
time=2024-10-03T12:41:08.827+03:00 level=DEBUG source=amd_linux.go:128 msg="detected CPU /sys/class/kfd/kfd/topology/nodes/0/properties"
time=2024-10-03T12:41:08.827+03:00 level=DEBUG source=amd_linux.go:103 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/1/properties"
time=2024-10-03T12:41:08.827+03:00 level=DEBUG source=amd_linux.go:218 msg="mapping amdgpu to drm sysfs nodes" amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties vendor=4098 device=5688 unique_id=0
time=2024-10-03T12:41:08.827+03:00 level=DEBUG source=amd_linux.go:252 msg=matched amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties drm=/sys/class/drm/card1/device
time=2024-10-03T12:41:08.827+03:00 level=INFO source=amd_linux.go:275 msg="unsupported Radeon iGPU detected skipping" id=0 total="512.0 MiB"
time=2024-10-03T12:41:08.827+03:00 level=INFO source=amd_linux.go:361 msg="no compatible amdgpu devices detected"
releasing cudart library
time=2024-10-03T12:41:08.914+03:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-86d86f78-e421-4a86-9143-0783f8191e6d library=cuda variant=v11 compute=8.6 driver=0.0 name="" total="3.7 GiB" available="3.6 GiB"
[GIN] 2024/10/03 - 12:41:08 | 200 | 348.89µs | 127.0.0.1 | GET "/api/tags"
INFO [connection_handler.py | request] POST : http://127.0.0.1:11435/api/show
[GIN] 2024/10/03 - 12:41:08 | 200 | 8.465786ms | 127.0.0.1 | POST "/api/show"

@plusmeup
Author

plusmeup commented Oct 3, 2024


@Jeffser Could you please confirm whether this fixed it? If so, how can I avoid this issue in the future? Thanks in advance.

@Jeffser
Owner

Jeffser commented Oct 3, 2024

Could you try this: `GSK_RENDERER=ngl flatpak run com.jeffser.Alpaca`

Just to check temporarily whether it works; if it does, I'll add that variable by default.

@plusmeup
Author

plusmeup commented Oct 3, 2024

> Could you try this: `GSK_RENDERER=ngl flatpak run com.jeffser.Alpaca`
>
> Just to check temporarily whether it works; if it does, I'll add that variable by default.

Yeah, it works when I open it like that.

@Jeffser
Owner

Jeffser commented Oct 3, 2024

Alright, I'll add that as a default later today.
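For reference, shipping an environment variable by default is typically done through the app's Flatpak manifest `finish-args`. A minimal sketch of what that entry might look like (the manifest filename, app-id context, and the other `finish-args` shown here are assumptions, not the project's actual manifest):

```json
{
  "app-id": "com.jeffser.Alpaca",
  "finish-args": [
    "--share=network",
    "--env=GSK_RENDERER=ngl"
  ]
}
```

With `--env=GSK_RENDERER=ngl` baked into the manifest, every user would get the working renderer without needing a manual `flatpak override`.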

4 participants