ggml-cuda: use CMAKE_CUDA_ARCHITECTURES if set when GGML_NATIVE=ON#18413

Merged
am17an merged 1 commit into ggml-org:master from QDelta:master
Dec 28, 2025

Conversation

@QDelta (Contributor) commented Dec 27, 2025

Fixes a compilation error when building with GGML_NATIVE=ON and the CUDA Toolkit installed but no GPU present (e.g. in a docker build). In that case CMAKE_CUDA_ARCHITECTURES_NATIVE is set to the string `No CUDA Devices found.`, which causes the nvcc error `Unsupported gpu architecture 'compute_No CUDA Devices found.'`.

Also, if CMAKE_CUDA_ARCHITECTURES is explicitly set by the user, it is better to honor it even when GGML_NATIVE=ON. This is consistent with the behavior before #18361.

Additionally, this moves the logging of CMAKE_CUDA_ARCHITECTURES to after the replacement, so the value actually used is the one printed.

Possibly related to #18398.
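The intended precedence can be sketched roughly as follows. This is an illustrative config fragment under stated assumptions, not the exact diff applied to ggml's CMakeLists.txt; the fallback value in the final branch is hypothetical:

```cmake
# Sketch of the architecture-selection precedence this PR describes
# (illustrative only, not the merged patch).
if (DEFINED CMAKE_CUDA_ARCHITECTURES)
    # An explicit user setting always wins, even with GGML_NATIVE=ON.
elseif (GGML_NATIVE AND CMAKE_CUDA_ARCHITECTURES_NATIVE MATCHES "^[0-9]")
    # Native detection succeeded: a real GPU was visible at configure time,
    # so the variable holds numeric architectures rather than an error string.
    set(CMAKE_CUDA_ARCHITECTURES "${CMAKE_CUDA_ARCHITECTURES_NATIVE}")
else()
    # No user setting and no detectable GPU (e.g. a docker build):
    # avoid passing the literal "No CUDA Devices found." to nvcc and
    # fall back to some default (hypothetical value shown here).
    set(CMAKE_CUDA_ARCHITECTURES "75;80;86")
endif()
# Log only after the value is final, as the PR description notes.
message(STATUS "CUDA architectures: ${CMAKE_CUDA_ARCHITECTURES}")
```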

@github-actions bot added labels Dec 27, 2025: Nvidia GPU (Issues specific to Nvidia GPUs), ggml (changes relating to the ggml tensor library for machine learning)
@JohannesGaessler (Contributor) left a comment


This should be correct since CMAKE_CUDA_ARCHITECTURES is a user input and CMAKE_CUDA_ARCHITECTURES_NATIVE seems to be a variable that CMake sets itself internally.

@am17an am17an merged commit 4fd59e8 into ggml-org:master Dec 28, 2025
70 of 71 checks passed
am17an added a commit to am17an/llama.cpp that referenced this pull request Dec 28, 2025
blime4 referenced this pull request in blime4/llama.cpp Feb 5, 2026

Labels

ggml (changes relating to the ggml tensor library for machine learning), Nvidia GPU (Issues specific to Nvidia GPUs)


3 participants