
Conversation

@DmyMi (Contributor) commented Sep 21, 2025

Fixes #16142

The main issue is that llama_supports_gpu_offload should not SIGABRT when the backend fails to initialize in ggml_vk_instance_init. Throwing a system error ensures the backend is not added to the registry anyway.

Optionally, I also added catch blocks in ggml_backend_vk_reg so it returns null on any error, unless errors were intended to be handled upstream; in that case I can remove that commit.
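The two commits can be sketched roughly as follows. This is a minimal illustration, not the actual llama.cpp code: the `fake_` names and the `driver_ok` flag are hypothetical stand-ins for the real Vulkan instance-creation path, which fails inside `ggml_vk_instance_init` on old drivers.

```cpp
#include <cstdio>
#include <system_error>

// Hypothetical stand-in for a backend registry entry.
struct fake_backend_reg { const char * name; };

// Commit 1 (sketch): on an unusable driver, throw std::system_error
// instead of letting the failure abort the process with SIGABRT.
static fake_backend_reg * fake_vk_instance_init(bool driver_ok) {
    if (!driver_ok) {
        throw std::system_error(std::make_error_code(std::errc::not_supported),
                                "Vulkan driver too old");
    }
    static fake_backend_reg reg = { "Vulkan" };
    return &reg;
}

// Commit 2 (sketch): registration catches any error and returns null,
// so a broken Vulkan setup degrades gracefully instead of crashing.
static fake_backend_reg * fake_backend_vk_reg(bool driver_ok) {
    try {
        return fake_vk_instance_init(driver_ok);
    } catch (const std::exception & e) {
        std::fprintf(stderr, "Vulkan init failed: %s\n", e.what());
        return nullptr;
    }
}
```

With this shape, a caller such as llama_supports_gpu_offload simply sees no Vulkan backend registered rather than an aborted process.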

@DmyMi DmyMi requested a review from 0cc4m as a code owner September 21, 2025 18:41
@github-actions bot added the labels Vulkan (issues specific to the Vulkan backend) and ggml (changes relating to the ggml tensor library for machine learning) on Sep 21, 2025
@0cc4m (Collaborator) left a comment:
LGTM, thank you!

@0cc4m 0cc4m merged commit 0499b29 into ggml-org:master Sep 27, 2025
60 of 68 checks passed
yael-works pushed a commit to yael-works/llama.cpp that referenced this pull request Oct 15, 2025
…vices (ggml-org#16156)

* Throw system error on old Vulkan driver rather than SIGABRT

* Optionally handle any potential error in vulkan init
pwilkin pushed a commit to pwilkin/llama.cpp that referenced this pull request Oct 23, 2025
…vices (ggml-org#16156)

* Throw system error on old Vulkan driver rather than SIGABRT

* Optionally handle any potential error in vulkan init

Labels

ggml — changes relating to the ggml tensor library for machine learning
Vulkan — issues specific to the Vulkan backend

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Misc. bug: llama_supports_gpu_offload SIGABRT for older Vulkan versions

2 participants