fix: enable prefix caching for multi-turn conversations #3
Merged
waybarrios merged 1 commit into waybarrios:main on Jan 14, 2026
Conversation
Two changes to make prefix caching work correctly:

1. Handle empty remaining_tokens list properly (line 451): When there's an exact cache match, remaining_tokens is [] but Python treats [] as falsy, so `[] or prompt_token_ids` returns all tokens. Fix: explicitly check for the empty list and pass only the last token so BatchGenerator can start generation without reprocessing.

2. Store cache with full token sequence (prompt + output): For multi-turn chat, we want to cache the entire conversation. The next turn's prompt includes the previous response, so storing the full sequence enables prefix matching across turns.

Testing shows:
- Multi-turn conversations: 2x speedup (3.45s -> 1.75s)
- Exact match requests: 1.4x speedup on short prompts
- Long prompts (800+ tokens): up to 26x speedup
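A minimal sketch of the first change, assuming the cache lookup returns the uncached suffix as remaining_tokens; tokens_for_generator is an illustrative helper name, not the actual vllm-mlx code at line 451:

```python
def tokens_for_generator(prompt_token_ids, remaining_tokens):
    """Decide which tokens BatchGenerator should still process after a cache lookup."""
    if not remaining_tokens:
        # Exact cache hit: remaining_tokens is [], which is falsy, so the old
        # `remaining_tokens or prompt_token_ids` expression fell back to the
        # full prompt and reprocessed every token. Passing only the last
        # prompt token lets generation start from the cached state.
        return prompt_token_ids[-1:]
    # Partial hit: only the uncached suffix still needs processing.
    return remaining_tokens
```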
Owner
Verification Results

I tested this PR on M4 Max (128GB) and everything looks good.

Test Suite Results

Note: The 8 failing tests in

Benchmark Results

All benchmarks pass successfully with the prefix caching fix:

LLM Performance
Paged Cache Test (20 requests, 2 rounds)
Continuous Batching Test

Code Review

The fix correctly addresses the issues described in the PR:

The implementation is clean and well-documented. Ready to merge.
WainWong pushed a commit to WainWong/vllm-mlx that referenced this pull request on Mar 2, 2026
…mple-engine feat: Prompt cache for SimpleEngine + tool logits safety
Summary

- Handle empty `remaining_tokens` list properly when there's an exact cache match

Problem

Prefix caching was enabled but not actually providing speedups because:

- `[]` is falsy in Python, so `remaining_tokens or prompt_token_ids` always returned all tokens for exact matches

Solution

- Explicitly check for empty `remaining_tokens` and pass only the last token to BatchGenerator for exact matches

Test Results
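For the second change described in the PR body (storing the cache with the full prompt + output sequence), a hypothetical, simplified sketch of why full-sequence keys enable prefix matching across turns; ConversationPrefixCache, store, and longest_prefix_match are illustrative names, not the project's actual cache API:

```python
class ConversationPrefixCache:
    """Toy prefix cache keyed by full token sequences (illustrative only)."""

    def __init__(self):
        # Maps a full token tuple (prompt + generated output) to an opaque
        # cached KV state; the real cache would hold model-specific state.
        self._entries = {}

    def store(self, prompt_token_ids, output_token_ids, kv_state):
        # Key on prompt + output rather than the prompt alone, so the next
        # turn's prompt, which embeds this turn's response, matches as a prefix.
        full_sequence = tuple(prompt_token_ids) + tuple(output_token_ids)
        self._entries[full_sequence] = kv_state

    def longest_prefix_match(self, prompt_token_ids):
        # Return (kv_state, matched_length) for the longest cached prefix.
        prompt = tuple(prompt_token_ids)
        best_state, best_len = None, 0
        for cached_tokens, state in self._entries.items():
            limit = min(len(cached_tokens), len(prompt))
            matched = 0
            while matched < limit and cached_tokens[matched] == prompt[matched]:
                matched += 1
            if matched > best_len:
                best_state, best_len = state, matched
        return best_state, best_len
```

On the next turn, the chat template renders the previous response into the new prompt, so longest_prefix_match can reuse the stored full sequence and only the newly added user message needs processing.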