server : speculative checkpointing #19493
Open
srogmann wants to merge 5 commits into ggml-org:master from
Conversation
ggerganov (Member) reviewed Feb 11, 2026 and left a comment:
I think this is good as a prototype, but we must find a way to encapsulate this logic in common/speculative. We should keep the server clean of extra speculative-related logic so that it is easier to maintain and to introduce new speculative approaches later on.
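For illustration, one shape such an encapsulation could take — a hypothetical sketch only; these names are illustrative and not the existing common/speculative API:

```cpp
#include "llama.h"

// Hypothetical sketch: the server only announces draft boundaries;
// checkpoint storage and rollback stay inside common/speculative.
struct common_speculative; // opaque handle, owns its checkpoints

// Take a checkpoint of the target sequence before the draft tokens are decoded.
void common_speculative_begin_draft(common_speculative * spec,
                                    llama_context      * ctx_tgt,
                                    llama_seq_id         seq_id);

// Report how many draft tokens were accepted; on partial acceptance the
// helper restores its checkpoint and re-decodes the accepted prefix itself,
// so the server never calls llama_state_seq_* directly.
void common_speculative_end_draft(common_speculative * spec,
                                  llama_context      * ctx_tgt,
                                  llama_seq_id         seq_id,
                                  int                  n_accepted);
```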
> Qwen3-Coder for auto-complete

I also use this model for auto-completion. Which IDE/client do you use?
srogmann (Collaborator, Author) replied:

For llama.cpp I use
Force-pushed from c591189 to 0fa66c2
This PR is a follow-up to #19270 (see #19267) to support speculative decoding with recurrent models using checkpoints. Using checkpoints is not as fast as `llama_memory_seq_rm`, because for a partially accepted draft we have to go back to the checkpoint and execute a shorter batch. However, in use cases such as the quicksort example in #19164 we observe a large speedup (in this very repetitive case!), hence this PR.
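For context, checkpoint save/restore with the public `llama_state_seq_*` API looks roughly like this — a minimal sketch, not the exact code of this PR:

```cpp
#include <vector>
#include "llama.h"

// Snapshot one sequence. For recurrent/hybrid models this captures the
// recurrent state, which llama_memory_seq_rm cannot roll back token by token.
static std::vector<uint8_t> checkpoint_save(llama_context * ctx, llama_seq_id seq_id) {
    std::vector<uint8_t> buf(llama_state_seq_get_size(ctx, seq_id));
    llama_state_seq_get_data(ctx, buf.data(), buf.size(), seq_id);
    return buf;
}

// Roll the sequence back to the snapshot.
static void checkpoint_restore(llama_context * ctx, llama_seq_id seq_id,
                               const std::vector<uint8_t> & buf) {
    llama_state_seq_set_data(ctx, buf.data(), buf.size(), seq_id);
}
```

On a partial accept, the sequence is restored from the checkpoint and the accepted tokens are decoded again in a shorter batch; that re-decode is the extra cost compared to `llama_memory_seq_rm`.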
This PR contains a small fix to the `ngram-map-k` implementation.

Questions / open tasks:
- `ngram-map-k` uses the accept-feedback to shorten its drafts. I haven't looked into how to execute a batch without sampling (this would be fine when repeating a shorter draft without reusing the speculative implementation); see the sketch after this list.
- … make room).
- Is the usage of the `llama_state_seq` functions in this PR correct?
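On the batch-without-sampling question: decoding and sampling are separate steps in llama.cpp, so re-feeding the accepted tokens without sampling can be a plain `llama_decode` call that requests no logits. A sketch under that assumption (`common_batch_add` is the helper from common/common.h; `replay_tokens` is a made-up name):

```cpp
#include <vector>
#include "llama.h"
#include "common.h"

// Re-decode already-accepted tokens: this advances the model state, but with
// logits = false no outputs are produced, so there is nothing to sample from.
static void replay_tokens(llama_context * ctx, llama_seq_id seq_id,
                          const std::vector<llama_token> & tokens, llama_pos pos0) {
    llama_batch batch = llama_batch_init((int32_t) tokens.size(), 0, 1);
    for (size_t i = 0; i < tokens.size(); ++i) {
        // pass true here for the final token if generation continues from it
        common_batch_add(batch, tokens[i], pos0 + (llama_pos) i, { seq_id }, false);
    }
    llama_decode(ctx, batch);
    llama_batch_free(batch);
}
```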
Server log using Qwen3-Coder-Next, arguments `--spec-type ngram-map-k --draft-max 48 --spec-ckpt-num-tries 2 --ctx-checkpoints 16`, with quicksort prompts from #19164:

AI usage: Qwen3-Coder for auto-complete (common.h :-) ), some questions to MiniMax-M2.1.