
UPSTREAM PR #18431: ggml-cuda: (cmake) expand "native" into concrete architectures #730

Open

loci-dev wants to merge 1 commit into main from upstream-PR18431-branch_QDelta-master

Conversation

@loci-dev

Mirrored from ggml-org/llama.cpp#18431

Fixes #18430.

#18413 caused an error on local Blackwell builds: "native" effectively means 120-real there, but the literal string "native" is not captured by the architecture regex.

Tested both with and without Docker (on a Blackwell GPU). Seems OK.
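
For context, here is a minimal CMake sketch of the idea, not the PR's actual implementation: when CMAKE_CUDA_ARCHITECTURES is set to "native", detect the local GPU's compute capability and substitute the concrete value, so the downstream regex that matches entries like 120-real never sees the literal string "native". The nvidia-smi detection path and the variable names are assumptions for illustration, and a single local GPU is assumed.

```cmake
# Minimal sketch, not the PR's actual implementation.
# Assumes one local GPU and an nvidia-smi new enough to
# support the compute_cap query.
if(CMAKE_CUDA_ARCHITECTURES STREQUAL "native")
    execute_process(
        COMMAND nvidia-smi --query-gpu=compute_cap --format=csv,noheader
        OUTPUT_VARIABLE GPU_CC
        RESULT_VARIABLE NVSMI_RC
        OUTPUT_STRIP_TRAILING_WHITESPACE)
    if(NVSMI_RC EQUAL 0)
        # "12.0" -> "120"; on a Blackwell GPU "native" becomes "120-real",
        # which the existing architecture regex can match.
        string(REPLACE "." "" GPU_CC "${GPU_CC}")
        set(CMAKE_CUDA_ARCHITECTURES "${GPU_CC}-real")
    endif()
endif()
```

Expanding "native" up front like this keeps the rest of the architecture-matching logic untouched, since it only ever has to handle concrete values such as 120 or 120-real.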

@loci-review

loci-review bot commented Dec 28, 2025

Explore the complete analysis inside the Version Insights

Here's what the analysis shows:

Summary Report for llama.cpp PR #730

Key Findings:

  • No significant performance changes detected - All modified functions show performance changes of less than 2%
  • No regressions in response time or throughput
  • Performance-neutral changes - The pull request maintains stable performance compared to the base version


This is a positive result, indicating that the changes in PR #730 don't introduce any performance degradation and maintain the existing performance characteristics of the codebase.

loci-dev force-pushed the main branch 27 times, most recently from 68d2c99 to 410c086 on January 1, 2026 at 02:54
loci-dev force-pushed the main branch 30 times, most recently from 47ebd13 to 9ddbcfc on January 7, 2026 at 11:09