
Support for DeepseekV32ForCausalLM with DeepSeek Sparse Attention (DSA)#21149

Closed
fairydreaming wants to merge 80 commits into ggml-org:master from fairydreaming:deepseek-dsa


Conversation

fairydreaming (Collaborator) commented Mar 29, 2026

Overview

This PR adds support for DeepseekV32ForCausalLM (DeepSeek V3.2 Exp, DeepSeek V3.2, DeepSeek V3.2 Speciale) models. It contains an implementation of the lightning indexer and of DeepSeek Sparse Attention (DSA) - both implemented in the simplest possible way as a proof of concept. So far only the CPU and CUDA backends are supported.

Initially the implementation did not improve long-context performance; this was later addressed by using sparse top_k indices in the CUDA flash attention MMA kernel.

Some GGUFs for testing are available here (the -light models). I uploaded Q8_0/Q4_K_M quants, so you need over 700 GB / 400 GB of RAM/VRAM respectively to run them.

I also created a 16 GB baby DeepSeek V3.2 GGUF for VRAM-deprived people. It outputs incoherent gibberish, but it should be useful for testing and optimizing this implementation even with limited resources.

I could really use some help with verifying the correctness of the implementation. If you have a large GPU cluster and can run benchmarks to compare results against the officially reported benchmark results for DeepSeek V3.2 models, go for it. More details in #21183.

Fixes #16331, #20363

Additional information

Decisions I made when implementing this:

  • a new model arch DEEPSEEK32 was added (mostly a copy of the existing GLM_DSA arch),
  • sparse attention was implemented by masking the KQ mask entries corresponding to tokens that are not in the set of top-k tokens selected by the lightning indexer (see the sketch after this list),
  • for this purpose I initially added a new GGML op GGML_OP_SCATTER that worked similarly to the torch scatter_ operation but was limited to setting tensor elements at specified indices to a given scalar value; it has since been replaced by SET_ROWS with 1-element rows,
  • the Hadamard transform was added as another new GGML op GGML_OP_HADAMARD, initially with an implementation borrowed from ik_llama.cpp (thx @ikawrakow); the implementation from PR #21038 (llama : rotate activations for better quantization) is now used in the lightning indexer,
  • the KV cache was implemented as a new llama_kv_cache_dsa class which aggregates two instances of llama_kv_cache - one caching MLA latent representations (same as before for DeepSeek V3), the other caching lightning indexer keys (this replaced an earlier llama_ik_cache class, basically a copy of llama_kv_cache stripped of the V-vector code),
  • since there are no official jinja templates for V3.2 and V3.2 Speciale, I decided to simply ignore this problem for now - you have to explicitly set the chat template for these models (using the jinja template from V3.2 Exp will let you chat, but tool calls won't work correctly). PR #21785 (chat: dedicated DeepSeek v3.2 parser + "official" template) added a DeepSeek V3.2 chat template that you can use with --chat-template-file models/templates/deepseek-ai-DeepSeek-V3.2.jinja.
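
To make the masking approach concrete, here is a minimal NumPy sketch of the idea (names and shapes are illustrative assumptions, not the actual ggml implementation; in the real graph this mask is combined with the causal mask):

import numpy as np

def sparse_kq_mask(index_scores: np.ndarray, top_k: int) -> np.ndarray:
    """Build an additive KQ mask from lightning-indexer scores.

    index_scores: [n_tokens, n_kv] indexer score for each (query, cached key) pair.
    Assumes top_k <= n_kv. Returns a mask that is 0.0 for the top-k keys of each
    query row and -inf elsewhere, so non-selected keys contribute nothing to the
    attention softmax.
    """
    n_tokens, n_kv = index_scores.shape
    mask = np.full((n_tokens, n_kv), -np.inf, dtype=np.float32)
    # indices of the top-k highest-scoring keys per query row
    top_idx = np.argpartition(-index_scores, top_k - 1, axis=-1)[:, :top_k]
    # un-mask the selected keys (conceptually what SET_ROWS with 1-element rows does)
    np.put_along_axis(mask, top_idx, 0.0, axis=-1)
    return mask

Attention then reduces to softmax(Q @ K^T * scale + mask) @ V, with all non-selected keys receiving zero weight.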

Requirements

Due to limitations of the original CUDA ggml_top_k() implementation, the NVIDIA CUDA CCCL library (version >3.2) and enabling GGML_CUDA_USE_CUB during CUDA backend compilation were required - otherwise the CUDA implementation would crash for context sizes larger than (I think) 1024 tokens. I use it with CUDA 13.2 and CCCL 13.2.27.
The bug in ggml_top_k() is now fixed and the fix is merged, so it should work even on CUDA 12.8/12.9 without CCCL.

Also, if you want to convert the model yourself, set add_bos_token to true in tokenizer_config.json before the model conversion - this is needed for DeepSeek V3.2 and DeepSeek V3.2 Speciale. The conversion script has an assert that checks this; a minimal sketch of the edit follows.
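
(The path below is a hypothetical local model directory - adjust it to your setup.)

import json
from pathlib import Path

cfg_path = Path("DeepSeek-V3.2/tokenizer_config.json")  # hypothetical path
cfg = json.loads(cfg_path.read_text())
cfg["add_bos_token"] = True  # required before conversion for V3.2 / V3.2 Speciale
cfg_path.write_text(json.dumps(cfg, indent=2, ensure_ascii=False))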

Next Steps

  • I'd like to confirm my architectural choices regarding the implementation,
  • If they are accepted, I will clean up the code where needed and merge with the current master, and it will then be ready for code review,
  • If not, then So Long, and Thanks for All the Fish. Just joking - we can talk about it.

  • I have read and agree with the contributing guidelines
  • AI usage disclosure: YES, AI was used as an assistant helping me find bugs in CUDA kernel implementations.

sszymczy added 26 commits March 12, 2026 13:15
…e attention). Needs manual change of add_bos_token to true in tokenizer_config.json before conversion.
…indexer implementation since the former fails for large tensors even when using CCCL.
… of llama_kv_cache and new llama_ik_cache (lightning indexer key cache).

model : used new llama_kv_cache_dsa instead of modified llama_kv_cache with indexer keys in DeepseekV32ForCausalLM
model : removed non-MLA path in DeepseekV32ForCausalLM
…e can get rid of ggml_cast() calls in sparse attention implementation
@fairydreaming fairydreaming requested review from a team, CISC and ggerganov as code owners March 29, 2026 12:56
@fairydreaming fairydreaming marked this pull request as draft March 29, 2026 12:56
@fairydreaming fairydreaming marked this pull request as ready for review May 6, 2026 09:31
Comment thread convert_hf_to_gguf.py Outdated
Comment on lines +9285 to +9292
if name.startswith("language_model."):
    name = name.replace("language_model.", "")

# rename e_score_correction_bias tensors
if name.endswith("e_score_correction_bias"):
    name = name.replace("e_score_correction_bias", "e_score_correction.bias")

# skip Multi-Token Prediction (MTP) layers
Member

Suggested change
if name.startswith("language_model."):
    name = name.replace("language_model.", "")
# rename e_score_correction_bias tensors
if name.endswith("e_score_correction_bias"):
    name = name.replace("e_score_correction_bias", "e_score_correction.bias")
# skip Multi-Token Prediction (MTP) layers
# skip Multi-Token Prediction (MTP) layers

No longer needed after #22597

Member

So, skip the check below.

Comment thread convert_hf_to_gguf.py
Comment on lines +9274 to +9275
if (num_nextn_predict_layers := self.hparams.get("num_nextn_predict_layers")) is not None:
    self.gguf_writer.add_nextn_predict_layers(num_nextn_predict_layers)
Member

Suggested change
if (num_nextn_predict_layers := self.hparams.get("num_nextn_predict_layers")) is not None:
    self.gguf_writer.add_nextn_predict_layers(num_nextn_predict_layers)
if not self.skip_mtp:
    if (num_nextn_predict_layers := self.hparams.get("num_nextn_predict_layers")) is not None:
        self.gguf_writer.add_nextn_predict_layers(num_nextn_predict_layers)

Collaborator Author

I don't think it's a good idea - the DeepSeek V3.2 model C++ code uses hparams.nextn_predict_layers to calculate the number of non-MTP layers. Many other models do that; they all have this in the convert script:

self.block_count = self.hparams["num_hidden_layers"] + self.hparams.get("num_nextn_predict_layers", 0)

so the total number of layers includes MTP layers regardless of the skip_mtp value. Then in the C++ code:

int effective_n_layers = hparams.n_layer - hparams.nextn_predict_layers;

or similar.

Why should the DeepSeek V3.2 code behave differently?

Member

Because skip_mtp strips those layers?

Member

The point is that you are mis-reporting the number of layers included in the GGUF.

Comment thread convert_hf_to_gguf.py
Comment on lines +9229 to +9230
self.block_count = self.hparams["num_hidden_layers"] + self.hparams.get("num_nextn_predict_layers", 0)
self.tensor_map = gguf.get_tensor_name_map(self.model_arch, self.block_count)
Member

Suggested change
self.block_count = self.hparams["num_hidden_layers"] + self.hparams.get("num_nextn_predict_layers", 0)
self.tensor_map = gguf.get_tensor_name_map(self.model_arch, self.block_count)
if not self.skip_mtp:
    self.block_count = self.hparams["num_hidden_layers"] + self.hparams.get("num_nextn_predict_layers", 0)
    self.tensor_map = gguf.get_tensor_name_map(self.model_arch, self.block_count)

Sorry, I added this earlier, then second-guessed it because of the additional check you had, but I think that check should go instead.

@whoisjeremylam

Forgive my ignorance. Does a model need to be re-quantized to use this PR, e.g. GLM 5.1, which makes use of DSA?

fairydreaming (Collaborator Author)

Forgive my ignorance. Does a model need to be re-quantized to use this PR, e.g. GLM 5.1, which makes use of DSA?

@whoisjeremylam As far as I know, existing GLM 5.0/5.1 GGUFs already contain weights for the indexer tensors, so it's only a matter of using them in the glm-dsa.cpp implementation. I think there will be no need to requantize GGUFs to run them with DSA.

However, the lightning indexer output may be sensitive to quantization of the indexer tensors, so it may be necessary to change their quantization level in the future.

nisparks added a commit to nisparks/llama.cpp that referenced this pull request May 8, 2026
DeepSeek V3.2 / V4-Flash use a sparse-attention 'lightning indexer'
that scores compressed K vectors against per-head Q vectors via a
fused mul_mat -> relu -> weighted-sum-over-heads pipeline. The
graph emitted by build_attn_v4 today materializes that sequence as
four discrete ggml ops (mul_mat, relu, mul, sum_rows), which costs
multiple kernel launches per layer per token at decode and an
intermediate [n_comp, n_heads, n_batch] score tensor that scales
linearly with both context length and ubatch.

This commit imports the WMMA + vector CUDA kernel originally written
by Stanislaw Szymczyk for ggml-org/llama.cpp PR ggml-org#21149 (V3.2 DSA),
later kept available on cchuter/llama.cpp (feat/v4-port). It does
not yet wire the op into src/models/deepseek4.cpp -- the V4 indexer
has three distinct shape regimes (decode, collapsed-q prefill,
per-query prefill) that each need their own reshape adapter -- so
this commit only:

  * adds GGML_OP_LIGHTNING_INDEXER to the op enum and bumps
    GGML_OP_COUNT/static_asserts to 98
  * adds the ggml_lightning_indexer constructor in ggml.c with
    shape and dtype guards matching fairydreaming's reference
  * adds the CUDA dispatcher case + supports() entry. The supports()
    check restricts to the V3.2/V4 indexer config (n_embd=128,
    n_heads=64) and to the K dtypes the kernel actually instantiates
    (F32/F16/BF16/Q4_0/Q4_1/Q5_0/Q5_1/Q8_0). Other shapes return
    false so the scheduler keeps those ops on a backend that can run
    them.
  * imports the kernel implementation (WMMA path on Ampere+ NVIDIA,
    vector path on everything else, including HIP/MUSA stubs).

Builds clean; existing smoke tests still pass since the op isn't called yet.

Co-authored-by: Stanislaw Szymczyk <sszymczy@gmail.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
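
For reference, the fused pipeline the commit message describes reduces to the following per-token scoring. This is a NumPy sketch of my reading of the mul_mat -> relu -> weighted-sum-over-heads sequence; the argument names follow the ggml_lightning_indexer signature quoted later in this thread, while the exact shapes and scale semantics are assumptions, not the kernel's definition:

import numpy as np

def lightning_indexer_scores(q, k, weights, scale_embd, scale_heads):
    """Score cached indexer keys against per-head indexer queries for one token.

    q:       [n_heads, n_embd]  per-head indexer queries
    k:       [n_kv, n_embd]     cached (compressed) indexer keys
    weights: [n_heads]          per-head mixing weights
    Returns [n_kv] scores; the highest-scoring positions are the ones a
    subsequent top-k selection keeps for sparse attention.
    """
    # mul_mat: per-head dot products between queries and all cached keys
    logits = scale_embd * (q @ k.T)          # [n_heads, n_kv]
    # relu: drop negative per-head contributions
    logits = np.maximum(logits, 0.0)
    # mul + sum_rows: weighted sum over heads -> one score per cached key
    return scale_heads * (weights @ logits)  # [n_kv]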

@ggerganov ggerganov left a comment


Edit: I just realized that by forcing ncols1 to 1 there will be no problem with mixing Q vectors from different tokens in one FA kernel Q tile, so top_k optimization should work for prompt processing as well. Will test it soon!

Nice. I suppose this is a sort of a stopgap solution until we implement a more optimized large-batch top-k kernel? Or do you think it is already the right approach for the prefill?

Also, I guess it works quite well with small batches (BS <= 8), for example for parallel decoding?

Comment thread ggml/include/ggml.h
Comment on lines +2558 to +2565
GGML_API struct ggml_tensor * ggml_lightning_indexer(
        struct ggml_context * ctx,
        struct ggml_tensor * q,
        struct ggml_tensor * k,
        struct ggml_tensor * weights,
        float scale_embd,
        float scale_heads);

Member

Can this OP be used in DeepSeek v4?

fairydreaming (Collaborator Author) commented May 9, 2026

Edit: I just realized that by forcing ncols1 to 1 there will be no problem with mixing Q vectors from different tokens in one FA kernel Q tile, so top_k optimization should work for prompt processing as well. Will test it soon!

Nice. I suppose this is a sort of a stopgap solution until we implement a more optimized large-batch top-k kernel? Or do you think it is already the right approach for the prefill?

Also, I guess it works quite well with small batches (BS <= 8), for example for parallel decoding?

@ggerganov I think it's more of a stopgap solution - the best performing one currently achievable with minimal changes. I tried to optimize it further by copying top-k tiles to shared memory, but that reduced the performance. I'm hardly a CUDA expert, so I'm waiting for @JohannesGaessler's opinion on this.

I see that V4 takes an even more complicated approach - a dense attention part with SWA (128 tokens) and a sparse top-k (1024) part that uses the compressed KV cache. But maybe a sparse kernel could still be used by always including the dense part in the top-k indices (see the sketch below).
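
A sketch of that idea (purely illustrative - this is speculation about combining the two parts, not V4's actual implementation): force the most recent window positions into the index list and fill the rest from the indexer's top-k.

import numpy as np

def merged_attention_indices(scores, pos, window=128, top_k=1024):
    """Combine a dense sliding-window part with sparse top-k selection.

    scores: [n_kv] lightning-indexer scores for the current query token
    pos:    position of the current token (keys 0..pos are valid)
    Returns a sorted array of unique key indices covering both parts.
    """
    n_kv = scores.shape[0]
    dense = np.arange(max(0, pos - window + 1), pos + 1)  # always-attended recent keys
    kth = min(top_k, n_kv) - 1
    sparse = np.argpartition(-scores, kth)[:top_k]        # indexer-selected keys
    return np.unique(np.concatenate([dense, sparse]))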

Comment thread src/models/deepseek32.cpp

// store indexer keys to KV cache
const auto * mctx_lid = inp_attn_dsa->mctx->get_lid();
const auto & k_idxs_lid = inp_attn_dsa->get_k_idxs_lid();
Member

Suggested change
const auto & k_idxs_lid = inp_attn_dsa->get_k_idxs_lid();
const auto * k_idxs_lid = inp_attn_dsa->get_k_idxs_lid();

Comment thread ggml/include/ggml.h

GGML_API void ggml_flash_attn_ext_add_top_k(
        struct ggml_tensor * a,
        struct ggml_tensor * top_k);
Member

Semantically, is it important here that these are "Top-K" indices? If I understand correctly, this is more generic than top-k - it's just a list of any indices.

If yes, I think the name should reflect that. Instead of top_k, consider using ggml_flash_attn_ext_add_idxs().

@ggerganov ggerganov self-assigned this May 10, 2026
JohannesGaessler (Contributor) commented May 12, 2026

I tried to optimize it further by copying topk tiles to shared memory, but it reduced the performance. I'm hardly a CUDA expert, so waiting for @JohannesGaessler opinion on this.

My opinion is that you should initially only add CPU support as is clearly laid out in the contributing guidelines and add CUDA support in a follow-up PR. It is way more work for me to review the changes if they're in this PR.

fairydreaming (Collaborator Author)

I tried to optimize it further by copying topk tiles to shared memory, but it reduced the performance. I'm hardly a CUDA expert, so waiting for @JohannesGaessler opinion on this.

My opinion is that you should initially only add CPU support as is clearly laid out in the contributing guidelines and add CUDA support in a follow-up PR. It is way more work for me to review the changes if they're in this PR.

Oh, sorry. I don't want to be a burden. Closing then.


Labels

  • ggml - changes relating to the ggml tensor library for machine learning
  • model - Model specific
  • Nvidia GPU - Issues specific to Nvidia GPUs
  • python - python script changes
  • testing - Everything test related


Development

Successfully merging this pull request may close these issues.

Feature Request: DeepSeek V3.2-Exp support