CUDA: fuse SSM_CONV + ADD(bias) + SILU #22478
Merged
Conversation
am17an reviewed on Apr 28, 2026
gaugarg-nv (Contributor) reviewed on Apr 28, 2026 and left a comment:
I think we need to verify the actual unary subtype with ggml_get_unary_op(silu) in both previous and new pattern matching code.
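A minimal sketch of the check being suggested (assumed context, not the actual PR diff); `ggml_get_unary_op()` reads the unary subtype out of a `GGML_OP_UNARY` node:

```cpp
#include "ggml.h"

// Hedged sketch: a GGML_OP_UNARY node is not necessarily SiLU, so the
// fusion matcher should confirm the subtype explicitly.
static bool node_is_silu(const struct ggml_tensor * node) {
    return node->op == GGML_OP_UNARY &&
           ggml_get_unary_op(node) == GGML_UNARY_OP_SILU;
}
```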
Co-authored-by: Gaurav Garg <gaugarg@nvidia.com>
Force-pushed from 6410eb7 to 3a7085f
ORippler reviewed on Apr 29, 2026
Comment on lines +4 to +6
```cuda
template <bool apply_bias, bool apply_silu, size_t split_d_inner, size_t d_conv>
static __global__ void ssm_conv_f32(const float * __restrict__ src0, const float * __restrict__ src1,
                                    const float * __restrict__ bias,
```
ORippler (Collaborator):
Is it really necessary to template the kernel on this from a perf perspective, as opposed to checking `bias` against `nullptr` (which can be done in the same ternary expression)? We should be mindful of binary bloat and only template what is truly necessary for performance.
I'd imagine the same could apply to `apply_silu` as well, but that's beyond the scope of this PR.
Author (Contributor) replied:
Makes sense. I've done as you've suggested now.
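A minimal sketch of the resulting kernel shape (names and signature are assumptions, not the PR's actual diff): the compile-time `apply_bias` template parameter is replaced by a runtime `nullptr` check on the `bias` pointer. The branch is uniform across all threads, so it costs essentially nothing, while halving the number of compiled kernel instantiations:

```cuda
#include <cuda_runtime.h>

// Hedged sketch, not the PR's real kernel: branch on a nullable bias
// pointer instead of templating on apply_bias.
static __global__ void ssm_conv_bias_silu_sketch(const float * __restrict__ x,
                                                 const float * __restrict__ bias, // may be nullptr
                                                 float * __restrict__ dst, const int n) {
    const int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (i >= n) {
        return;
    }
    // optional bias folded into a single ternary expression, as suggested
    const float v = bias ? x[i] + bias[i] : x[i];
    // fused SiLU activation: v * sigmoid(v)
    dst[i] = v / (1.0f + expf(-v));
}
```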
am17an approved these changes on Apr 29, 2026
ggerganov (Member) approved these changes on Apr 29, 2026 and left a comment:
The test-backend-ops changes are OK
tekintian added a commit to tekintian/llama.cpp that referenced this pull request on May 1, 2026
cnsiva added a commit to saas-home/llama.cpp that referenced this pull request on May 1, 2026:
This reverts commit 098705a.
rsenthilkumar6 pushed a commit to rsenthilkumar6/llama.cpp that referenced this pull request on May 1, 2026
Crssz pushed a commit to Crssz/buun-llama-cpp that referenced this pull request on May 1, 2026
samuraieng pushed a commit to samuraieng/llama.cpp that referenced this pull request on May 6, 2026
ljubomirj pushed a commit to ljubomirj/llama.cpp that referenced this pull request on May 6, 2026
meh pushed a commit to meh/llama.cpp that referenced this pull request on May 10, 2026
Overview
Adds a CUDA fusion for `SSM_CONV + ADD(bias) + SILU`. The existing `SSM_CONV + SILU` fusion did not match on Mamba-1 and Mamba-2 layers (used by Nemotron-H, Granite-Hybrid, Jamba, and other Mamba-style hybrids) because a bias `ADD` operation sits between the conv and the SiLU. A sketch of the fused pattern follows.
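As a rough illustration of the 3-node pattern being matched (a hedged sketch: `ggml_can_fuse` and `ggml_get_unary_op` are real ggml helpers, but this function and its name are assumptions, not the CUDA backend's actual fusion code):

```cpp
#include "ggml.h"
#include "ggml-impl.h" // internal header providing ggml_can_fuse

// Hedged sketch, not the backend's real code: match three consecutive nodes
// SSM_CONV -> ADD (the bias) -> UNARY, then verify the unary subtype really
// is SiLU (the point raised by gaugarg-nv in the review above).
static bool match_ssm_conv_bias_silu(const struct ggml_cgraph * gf, int i) {
    return ggml_can_fuse(gf, i, { GGML_OP_SSM_CONV, GGML_OP_ADD, GGML_OP_UNARY }) &&
           ggml_get_unary_op(gf->nodes[i + 2]) == GGML_UNARY_OP_SILU;
}
```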
Additional information

Requirements