feat: Prefix cache improvements, Anthropic Messages API, and agentic reliability fixes (#46)
Add /v1/messages and /v1/messages/count_tokens endpoints that translate Anthropic Messages API requests to OpenAI format, enabling clients like Claude Code to communicate with vllm-mlx directly. Also fix tool call parsing: when the configured parser (e.g. hermes) doesn't recognize a format (e.g. Nemotron XML), fall back to the generic parser which handles more formats. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace .nbytes access (which triggers lazy MLX evaluation) with shape+dtype-based memory estimation in estimate_kv_cache_memory(). Remove eager mx.eval(mx.array(0)) from cache extraction path that forced full graph evaluation. Add incremental per-layer mx.eval() in _cleanup_finished() to spread evaluation cost. Add BatchGenerator close(), periodic mx.clear_cache(), and Metal memory stats reporting. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…safety

- Save KV cache incrementally during chunked prefill (every 8192 tokens) so cancelled/disconnected long-context requests preserve partial work
- Add supersequence cache matching (hit when the cached key is longer than the query)
- Add _wait_with_disconnect() for non-streaming endpoints to detect client disconnect during prefill and abort orphaned requests
- Add try/finally cancellation safety to EngineCore.generate() to clean up scheduler state on CancelledError
- Support hybrid Mamba+Attention models via generic class_ref.from_state() reconstruction instead of hardcoded KVCache tuple unpacking

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
BatchKVCache doesn't inherit from KVCache, so _merge_caches couldn't handle restored cache objects. Convert to KVCache during reconstruction since mid-prefill save is always batch_size=1. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Supersequence match covers all requested tokens (remaining=[]) so it should always be preferred over a prefix match which only covers a subset. Previously prefix match was checked first, causing full cache entries to be ignored in favor of partial mid-prefill ones. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Keep mid-prefill cache entries after request completion instead of deleting them, and store prompt-only entries alongside prompt+output. This allows future requests sharing the same prefix (e.g. same system prompt + tools but different user message) to get a prefix cache hit. Tested: 4.4x speedup for requests with shared 30K-token prefix. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Defer abort_request() to executor thread via thread-safe deque to prevent race condition between main asyncio thread and mlx-step_0 thread that causes Metal assertion failure (uncommitted encoder). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Check _pending_abort_ids inside _chunked_next() before each chunk so partial prefill stops within 1 chunk instead of running to completion. Also fix _do_abort_request to clean up BatchGenerator even when request was already removed from self.requests. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
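The deferred-abort pattern described in these two commits can be sketched as follows. This is a minimal illustration, not the actual scheduler code; `SchedulerSketch` and its attribute names are hypothetical stand-ins (the commits mention `_pending_abort_ids` and `_process_pending_aborts` by name):

```python
from collections import deque

class SchedulerSketch:
    """Deferred-abort sketch: abort_request() only enqueues; the executor
    (mlx step) thread drains the queue between steps, so scheduler state and
    Metal command encoding are never touched from two threads at once."""

    def __init__(self):
        self.requests = {"req-1": "state", "req-2": "state"}
        self._pending_abort_ids = deque()  # append/popleft are thread-safe

    def abort_request(self, request_id: str) -> None:
        # Called from the asyncio thread: never mutate scheduler state here.
        self._pending_abort_ids.append(request_id)

    def _process_pending_aborts(self) -> None:
        # Called from the executor thread before each step/prefill chunk.
        while self._pending_abort_ids:
            rid = self._pending_abort_ids.popleft()
            self.requests.pop(rid, None)  # clean up even if already removed

sched = SchedulerSketch()
sched.abort_request("req-1")
sched._process_pending_aborts()
print(sorted(sched.requests))  # → ['req-2']
```

Checking the pending queue inside the chunked-prefill loop is what lets a partial prefill stop within one chunk instead of running to completion.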
Streaming mode was sending tool calls as plain text in delta.content instead of structured delta.tool_calls objects. This broke tool call detection in clients (Claude Code, Cursor, etc.) during streaming.

- Add tool parser integration to stream_chat_completion() with lazy initialization of _tool_parser_instance for streaming-first requests
- Extend HermesToolParser to handle the Nemotron XML format (<function=name><parameter=p>v</parameter></function>) used by Qwen3-Coder-Next, in addition to the JSON format
- Add a fallback for incomplete tool calls at the end of the stream
- Set finish_reason to "tool_calls" when tool calls are detected

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Parameter values from <parameter=name>value</parameter> were always
treated as strings, causing nested arrays and objects to be
double-serialized (e.g. "books": "[{...}]" instead of "books": [{...}]).
Now try json.loads() on each parameter value first — if it parses as
valid JSON (array, object, number, boolean), use the parsed value;
otherwise keep the raw string.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
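The try-JSON-first fallback described above can be sketched in a few lines. `parse_param_value` is a hypothetical helper name, not the actual function in the parser:

```python
import json

def parse_param_value(raw: str):
    """Interpret a <parameter> value as JSON if possible; else keep the raw string."""
    try:
        return json.loads(raw)  # arrays, objects, numbers, booleans, null
    except json.JSONDecodeError:
        return raw  # plain strings stay strings

# A nested array becomes a real list, not a double-serialized string:
print(parse_param_value('[{"title": "Dune"}]'))  # → [{'title': 'Dune'}]
print(parse_param_value('hello world'))          # → hello world
```

One side effect of this approach: a value the model meant as the literal string "3000" will be parsed as the number 3000, which matches the commit's stated intent of preferring valid JSON.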
Qwen3-Coder-Next is a hybrid model with 36 MambaCache + 12 KVCache layers. MambaCache stores cumulative SSM state that cannot be trimmed back to "prompt only" after output generation. The previous approach of storing prompt-only cache entries by trimming the post-generation cache caused immediate EOS on cache hit (1-token responses).

Fix: capture prompt-only cache state by monkey-patching _process_prompts in BatchGenerator. At the point where it returns, the batch cache contains the exact prompt-only state (all prompt tokens processed, no output token fed back yet). This state is correct for both KVCache and MambaCache layers.

Also disable supersequence match trimming for hybrid models (detected via non-trimmable cache layers) to prevent the same corruption. Pure KVCache models still use supersequence trimming.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
In multi-turn agentic workloads (e.g. tool calling), each request extends the previous conversation. The cache was storing complete KV state for every request independently — 10 sequential tool calls on a 30K-token conversation stored ~190K tokens worth of KV data (~34GB), even though 95%+ was duplicated prefix data. Now, when storing a new cache entry, any existing entries whose token sequence is a strict prefix of the new entry are evicted first. Since fetch() always prefers the longest match, these shorter entries would never be selected anyway. Expected impact: ~6x memory reduction for typical agentic sessions (190K → 30K stored tokens, ~34GB → ~5-8GB cache). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
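The prefix-subset eviction described above can be sketched as follows, assuming cache keys are tuples of token ids (a simplification of the actual store; `evict_strict_prefixes` is a hypothetical name):

```python
def evict_strict_prefixes(entries: dict, new_tokens: list[int]) -> None:
    """Before storing a new entry, drop entries whose token key is a strict
    prefix of the new token sequence. Since fetch() always prefers the
    longest match, those shorter entries would never be selected again."""
    n = len(new_tokens)
    stale = [
        key for key in entries
        if len(key) < n and list(key) == new_tokens[: len(key)]
    ]
    for key in stale:
        del entries[key]

cache = {(1, 2): "kv_a", (1, 2, 3): "kv_b", (9, 9): "kv_c"}
evict_strict_prefixes(cache, [1, 2, 3, 4])
print(sorted(cache))  # → [(9, 9)]  (both strict prefixes evicted)
```

In the real cache the values are large KV/Mamba state objects, which is why evicting redundant prefixes recovers gigabytes rather than dictionary slots.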
After 2-3 rounds of tool use, the model would start generating tool
calls as text ("[Calling tool: read(...)]") instead of structured
<tool_call> XML. Root cause: the Hermes parser had
SUPPORTS_NATIVE_TOOL_FORMAT = False (default), so assistant tool_calls
were converted to "[Calling tool: ...]" text and tool results to
"[Tool Result (...)]" in the conversation history. The model then
mimicked this text format instead of producing proper tool call XML.
Two fixes:
1. Set SUPPORTS_NATIVE_TOOL_FORMAT = True on HermesToolParser, so
role="tool" and tool_calls are preserved in their native format
when building the chat template input.
2. Parse tool_call arguments from JSON string to dict before passing
to the chat template. Qwen3's template iterates
tool_call.arguments|items which requires a dict, but the OpenAI
API sends arguments as a JSON string.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
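Fix 2 above can be sketched as a small normalization pass over the message history. This is an illustration of the idea, not the project's actual code; `normalize_tool_calls` is a hypothetical name:

```python
import json

def normalize_tool_calls(messages: list[dict]) -> list[dict]:
    """Parse tool_call 'arguments' JSON strings into dicts so chat templates
    that iterate arguments|items (e.g. Qwen3) work with OpenAI-style input,
    where arguments arrive as a JSON-encoded string."""
    for msg in messages:
        for call in msg.get("tool_calls") or []:
            fn = call.get("function", {})
            args = fn.get("arguments")
            if isinstance(args, str):
                try:
                    fn["arguments"] = json.loads(args)
                except json.JSONDecodeError:
                    pass  # leave malformed arguments untouched
    return messages

msgs = [{"role": "assistant",
         "tool_calls": [{"function": {"name": "read",
                                      "arguments": '{"path": "a.py"}'}}]}]
print(normalize_tool_calls(msgs)[0]["tool_calls"][0]["function"]["arguments"])
# → {'path': 'a.py'}
```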
The Anthropic /v1/messages streaming path filtered special tokens (im_end, im_start, etc.) but the OpenAI /v1/chat/completions streaming path did not. This caused raw <|im_end|> to appear in client output (e.g. "Dev server běží na http://localhost:3000<|im_end|>"). Use the shared SPECIAL_TOKENS_PATTERN regex in both streaming paths for consistent filtering. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When the model generates a stop token (e.g. <|im_end|>, token 151645), the scheduler correctly detects it and sets finish_reason="stop", but still decoded it into new_text which was then sent to the client as content. This caused "<|im_end|>" to appear at the end of responses. Now, when finish_reason is "stop", new_text is set to "" so the stop token is never sent as content. The server-side SPECIAL_TOKENS_PATTERN filter remains as a safety net for edge cases. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…e hits)

Two bugs prevented prefix cache hits on the Anthropic endpoint:

1. prompt_cache_save was never called for chunked prefill. Large prompts (>4096 tokens) bypass _process_prompts and go through _chunked_next, which had no prompt_cache_save hook. Only prompt+output entries were stored, whose keys include generated tokens that don't match the next request's prompt — causing permanent cache misses.
2. Prefix-subset eviction removed prompt-only entries. When cache_store saved a prompt+output entry, the eviction logic deleted the shorter prompt-only entry (from prompt_cache_save) because it was technically a prefix. But the prompt-only entry was the one future requests needed.

Fixes:
- Add prompt_cache_save call after chunked prefill completion
- Add evict_prefixes parameter to store(); set False for cache_store
- Log cache clears on BatchGenerator recreation for diagnostics

Before: Anthropic endpoint always MISS, TTFT 30-90s for 50-100K prompts
After: Cache HIT on consecutive requests, TTFT 0.3-1.7s

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
KV cache entries depend only on input tokens, not sampling parameters (temperature, top_p, min_p). When different clients send requests with different sampler configs, the BatchGenerator is recreated but the prefix cache was unnecessarily cleared — causing full re-prefill of 50-140K token prompts (30-90s TTFT) on every model/client switch. Now the cache is preserved across BatchGenerator recreations, since the server runs a single model and all cache entries remain valid. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When cache entries survived a BatchGenerator recreation (sampler param change), stale entries with batch_size > 1 could cause a concatenation crash in _merge_caches: shapes like (2,3,8192) vs (5,2048,8192) are incompatible. The error looped indefinitely, blocking all requests. Two fixes:

1. Add batch dimension validation to _validate_cache(): reject entries where KVCache keys.shape[0] != 1 or MambaCache cache[i].shape[0] != 1. These are caught at fetch time and treated as a cache MISS.
2. Wrap batch_generator.insert() in try/except: if an unexpected shape mismatch slips through validation, discard the cache and retry the insert without it, instead of entering an infinite error loop.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…odels

The prefix cache was returning MISS for every turn in agentic workloads (multi-turn conversations with shared context) on hybrid models like Qwen3-Next-80B, which use MambaCache layers that cannot be trimmed. Root causes and fixes:

- Add two-phase prefill: save cache at the prefix boundary during chunked prefill so future requests with the same prefix but a different suffix get a HIT (mid_prefill_save bypasses the throttle at prefix_boundary)
- Fix prefix boundary computation: use a two-tokenization LCP approach instead of separate prefix tokenization, avoiding Jinja template discrepancies (e.g. Qwen3 <think> markers on the last assistant message)
- Always install chunked prefill when the memory-aware cache is active, even without an explicit --chunked-prefill-tokens flag
- Prevent prompt_cache_save from evicting mid-prefill boundary entries (evict_prefixes=False)
- Add LCP matching in memory_cache.py fetch() for divergent sequences (works for pure KVCache models; safely skipped for MambaCache)
- Pass the requests dict to _install_chunked_prefill for boundary detection

Verified: Turn 1 MISS (expected), Turns 2-5 HIT with 96-97% cached tokens and 7x TTFT improvement (1.0s → 0.13s).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
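The two-tokenization LCP idea can be sketched in a few lines (a simplified illustration; `lcp_boundary` is a hypothetical name). Instead of tokenizing the prefix messages separately — which can diverge from the full render because Jinja templates treat the last message specially — both the full conversation and the candidate prefix are rendered through the same template, and the boundary is the token-level longest common prefix:

```python
def lcp_boundary(full_tokens: list[int], prefix_tokens: list[int]) -> int:
    """Token-level longest common prefix of two template renderings.

    Template quirks (e.g. Qwen3 inserting <think> markers on the last
    assistant message) only affect the tail, so the LCP is a safe
    prefix-cache boundary for both renderings.
    """
    n = min(len(full_tokens), len(prefix_tokens))
    i = 0
    while i < n and full_tokens[i] == prefix_tokens[i]:
        i += 1
    return i

print(lcp_boundary([1, 2, 3, 7, 8], [1, 2, 3, 9]))  # → 3
```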
Benchmark: vllm-mlx vs llama.cpp — Qwen3-Next-80B-A3B

Setup
Test 1: Single-stream throughput
Even after normalizing for the larger 6-bit model, vllm-mlx delivers ~2.5× the throughput of llama.cpp on identical hardware.

Test 2: Agentic multi-turn prompt caching

8-turn tool-calling conversation (~1500 prompt tokens at turn 2):
With the new prefix boundary cache (this commit), vllm-mlx caches the shared conversation prefix and reuses it across turns. llama.cpp's slot-based cache doesn't reuse across different suffixes.

Test 3: Parallel requests (continuous batching)
vllm-mlx scales near-linearly to 8 concurrent requests (203 agg tok/s). llama.cpp plateaus at ~45 tok/s.

Normalized (same quantization): at ×8 concurrent, vllm-mlx would deliver ~260 agg tok/s vs 44.5 — a 5.8× advantage.

Test 4: Prefix cache verification (5 turns, same context, different suffix)
vllm-mlx achieves a 100% cache hit rate for repeated prefixes with different suffixes (the agentic pattern). TTFT drops from 0.54 s to 0.07 s (7.7× faster). llama.cpp shows no cache benefit in this scenario.

Summary: Quantization-normalized comparison
The throughput advantage comes from MLX's efficient Metal compute pipeline. The parallel scaling advantage comes from continuous batching. The caching advantage comes from the prefix boundary mechanism introduced in this commit.

🤖 Generated with Claude Code
Benchmark Update: Same-Quantization Comparison (4-bit vs Q4_K_XL)

Previous results used 6-bit MLX vs Q4_K_XL GGUF. Here's a fair same-quantization comparison.

Model: Qwen3-Next-80B-A3B (MoE: 80B total, 3B active)

Test 1: Token Generation Throughput
Test 2: Prompt Caching (Agentic Multi-turn)
Test 3: Parallel Requests (Aggregate Throughput)
llama.cpp saturates at ~45 tok/s aggregate regardless of concurrency. vllm-mlx scales nearly linearly up to 8 concurrent requests.

Test 4: Prefix Cache Verification (Agentic)

Same base conversation, varying only the last user message across 5 turns:

vllm-mlx:
llama.cpp: All turns LIKELY_MISS — TTFT stable at ~0.25s (no prefix reuse). TTFT improvement with cache: 0.884s → 0.115s (7.7x faster) on Turn 2+.

Summary

At identical quantization (~42-43 GB model size), vllm-mlx delivers:
Hi @waybarrios,

I hope this finds you well! I wanted to reach out about PR #46. I've implemented full Anthropic Messages API support to complement the existing OpenAI compatibility. This enables vllm-mlx to work directly with Claude Code/OpenCode, which is particularly valuable for Apple Silicon users.

The implementation follows vllm-mlx's architecture and all automated checks pass. I've provided detailed benchmarks and test results in the PR description. I believe this feature will take the project to the next level by opening it up to the entire Claude ecosystem, making vllm-mlx the go-to solution for both OpenAI and Anthropic API compatibility on Apple Silicon.

Would you be available to review this when you have time? I'm happy to make any changes or answer questions. Thanks for building such a great project!

Best regards,
Benchmark: GPT-OSS-20B — vllm-mlx vs llama.cpp

New model tested: GPT-OSS-20B (MoE, 20B total / ~3.6B active, 32 experts, hybrid sliding window + full attention).

Setup
Test 1: Single-Request Throughput
Test 2: Agentic Multi-Turn (8 turns, growing context)

Simulates an agentic workload where each turn extends the conversation (system prompt + user/assistant/tool-result messages).
vllm-mlx maintains 30-48ms TTFT across all turns thanks to the prefix cache. llama.cpp fluctuates between 630ms and 1.5s with no effective caching.

Test 3: Parallel Requests (Aggregate tok/s)
Test 4: Prefix Cache Verification (identical prefix, varying suffix)
Key Takeaways
This is the second model architecture (after Qwen3-Next-80B MoE) where vllm-mlx with this PR's prefix cache improvements shows significant gains over llama.cpp, especially for agentic workloads with repeated system prompts.
UP
…r_cache, fast-path tool parsing, hybrid executor

- Replace O(N) linear scan in MemoryAwarePrefixCache.fetch() with O(log N) bisect-based lookup for prefix, supersequence, and LCP matches
- Remove unnecessary copy.deepcopy() from PrefixCacheManager (MLX arrays are immutable)
- Increase _clear_cache_interval from 16 to 32 and remove redundant per-layer mx.clear_cache() in _cleanup_finished
- Add a fast path in streaming tool parsing: skip extract_tool_calls_streaming() until '<' is seen in the token stream
- Use a hybrid executor in the engine loop: inline for generation-only steps (~1-3ms), ThreadPoolExecutor only for prefill steps that may block for seconds

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
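The bisect-based prefix lookup can be sketched as below. This is a simplified illustration of the idea only — the real fetch() also handles supersequence and LCP matches, and the leftward walk here can visit several non-matching neighbors in adversarial cases:

```python
import bisect

def find_longest_prefix(sorted_keys: list[tuple], query: tuple):
    """Longest stored prefix of `query` over lexicographically sorted keys.

    Any key that is a prefix of `query` sorts at or below it, and prefixes
    of `query` sort among themselves in increasing length; so after an
    O(log N) bisect to the insertion point, the first prefix found while
    walking left is the longest one.
    """
    i = bisect.bisect_right(sorted_keys, tuple(query))
    for j in range(i - 1, -1, -1):
        key = sorted_keys[j]
        if key == tuple(query[: len(key)]):
            return key
    return None

keys = sorted([(1,), (1, 2), (1, 2, 9), (7, 7)])
print(find_longest_prefix(keys, (1, 2, 3)))  # → (1, 2)
```

Compared with the replaced O(N) scan, this keeps lookup cheap even when the cache holds many long token-key entries.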
Adds a new status endpoint that exposes per-request details for debugging and production monitoring: phase (queued/prefill/generation), tokens/s, TTFT, progress, and cache hit type (exact/prefix/supersequence/lcp/miss).

- Add first_token_time and cache_hit_type fields to the Request dataclass
- Track _last_match_type in MemoryAwarePrefixCache.fetch() for all paths
- Set cache_hit_type after fetch() for all cache backends (memory-aware, paged, legacy)
- Record first_token_time in _process_batch_responses() on the first output token
- Add Scheduler.get_running_requests_info() for per-request status data
- Extend EngineCore.get_stats() with requests info
- Add GET /v1/status endpoint in server.py
- Fix pre-existing test failures: update abort tests to match the deferred abort pattern (abort_request() enqueues, _process_pending_aborts() executes)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Resolve conflicts in:

- vllm_mlx/server.py: keep _wait_with_disconnect(), adopt upstream _resolve_temperature()/_resolve_top_p() helpers
- vllm_mlx/tool_parsers/hermes_tool_parser.py: combine our SUPPORTS_NATIVE_TOOL_FORMAT + Nemotron XML with upstream's lenient pattern + raw JSON fallback

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
HermesToolParser has SUPPORTS_NATIVE_TOOL_FORMAT = True to enable proper multi-turn tool calling with Qwen3/Hermes models. Move it from the "without native support" list to "with native support" in the upstream test. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Upstream merge added serve_command() usage of these args but the argparse definitions were missing, causing AttributeError on startup. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
All 23 modified files were reviewed from five angles: project guidelines compliance, bug detection, git history context, feedback from previous PRs, and consistency with existing code comments. No significant issues were found. Areas covered:
The PR follows the project architecture correctly, does not introduce regressions, docstrings were updated properly, and key technical decisions (mlx-lm wrapper approach, OpenAI compatibility, MLX over MPS) are respected. One observation worth considering for a future follow-up: in the Hermes tool parser, the …
The request parameter was available in _parse_tool_calls_with_parser but never forwarded to extract_tool_calls or the generic parse_tool_calls fallbacks. This meant parsers like Hermes and GLM47 could never validate tool names against the actual tools provided in the request, allowing hallucinated tool names to pass through unfiltered. Convert the Pydantic ChatCompletionRequest to a dict once at the top of the function and pass it to all four call sites.
Merge 17 upstream commits including:

- KV cache quantization for prefix cache memory reduction (waybarrios#62)
- Streaming tool call parsing via ToolParser integration (waybarrios#46)
- MTP speculative decoding for Qwen3-Next (waybarrios#82)
- GPT-OSS reasoning parser and Harmony format parsers
- mlx-lm >= 0.30.5 requirement, transformers >= 5.0.0
- BatchMambaCache fix for mlx-lm >= 0.30.6 (waybarrios#89)
- MLLM continuous batching fixes (waybarrios#76)
- Force MLLM mode option (waybarrios#81)
- Various bug fixes

Conflict resolution:
- server.py: replaced local tool_call_buffering with upstream's ToolParser-based streaming (more robust)
- cli.py: deduplicated --mllm, --default-temperature, --default-top-p args (upstream already added them), kept local --embedding-model
- mamba_cache.py: took upstream's conditional HAS_MAMBA_CACHE approach
- pyproject.toml: took upstream's version and dependency changes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary
- `/v1/messages` endpoint implementing the Anthropic Messages API (streaming + non-streaming), enabling clients like Claude Code / OpenCode to connect to vllm-mlx directly
- `/v1/messages/count_tokens` endpoint for token budget estimation
- `GET /v1/status` endpoint with real-time per-request monitoring: phase (queued/prefill/generation), tokens/s, TTFT, progress, and cache hit type (exact/prefix/supersequence/lcp/miss)
- `anthropic_adapter.py` (request/response translation) and `anthropic_models.py` (Pydantic models)
- `_wait_with_disconnect()` polls `is_disconnected()` during prefill and aborts orphaned requests instead of wasting GPU on requests nobody is waiting for
- `EngineCore.generate()` wrapped in try/finally to clean up scheduler state on CancelledError
- `class_ref.from_state()` cache reconstruction works with both standard KVCache and runtime-generated BatchMambaCache (Mamba+Attention models)
- `prompt_cache_save` now fires for chunked prefill (large prompts >4096 tokens), enabling prefix cache hits on the Anthropic endpoint

Real-time Monitoring:
`GET /v1/status`

New endpoint for production monitoring and debugging. Returns server-wide stats plus per-request details.
Example response (during generation)
```json
{
  "status": "running",
  "model": "mlx-community/Qwen3-Next-80B-A3B-Instruct-6bit",
  "uptime_s": 402.7,
  "steps_executed": 562,
  "num_running": 1,
  "num_waiting": 0,
  "total_requests_processed": 3,
  "total_prompt_tokens": 12199,
  "total_completion_tokens": 12418,
  "metal": {
    "active_memory_gb": 44.96,
    "peak_memory_gb": 44.96,
    "cache_memory_gb": 0.0
  },
  "cache": {
    "hits": 4,
    "misses": 7,
    "hit_rate": 0.3636,
    "current_memory_mb": 10864.27,
    "max_memory_mb": 49152.0,
    "memory_utilization": 0.221,
    "entry_count": 76
  },
  "requests": [
    {
      "request_id": "3e2afa95-f9fd-4e2f-87a5-ba5fbae5a434",
      "status": "running",
      "phase": "generation",
      "elapsed_s": 7.81,
      "prompt_tokens": 32,
      "completion_tokens": 562,
      "max_tokens": 2000,
      "progress": 0.281,
      "tokens_per_second": 75.9,
      "ttft_s": 0.405,
      "cache_hit_type": "miss",
      "cached_tokens": 0
    }
  ]
}
```

Per-request fields
- phase: queued → prefill → generation
- tokens_per_second
- ttft_s
- progress: completion_tokens / max_tokens (0.0–1.0)
- cache_hit_type: exact / prefix / supersequence / lcp / miss
- cached_tokens

Implementation
- first_token_time and cache_hit_type fields added to the Request dataclass
- MemoryAwarePrefixCache.fetch() tracks _last_match_type across all match paths
- Scheduler.get_running_requests_info() computes per-request metrics
- EngineCore.get_stats() includes request details

Debugged & Tested Against Claude Code (OpenCode)
This PR has been end-to-end tested with Claude Code (OpenCode) using Qwen3-Next-80B-A3B on Apple Silicon M3 Ultra. All features have been validated in real agentic sessions with multi-turn tool calling, long-context conversations (50K-110K tokens), and streaming.
Key fixes discovered during Claude Code testing
- Problem: SUPPORTS_NATIVE_TOOL_FORMAT = False, tool calls converted to text [Calling tool: ...] → Fix: SUPPORTS_NATIVE_TOOL_FORMAT = True, parse JSON arguments to dict for the Qwen3 template
- Problem: <|im_end|> leaking into streaming responses → Fix: SPECIAL_TOKENS_PATTERN regex in both OpenAI and Anthropic streaming
- Problem: stop token decoded into content despite finish_reason → Fix: new_text = "" when finish_reason = "stop"
- Problem: prompt_cache_save never called for chunked prefill; prefix eviction removed prompt-only entries → Fix: call prompt_cache_save in _chunked_next; add evict_prefixes param to store()
- Problem: abort race causing Metal assertion failure → Fix: defer abort_request() to executor thread via a thread-safe deque

Prompt Cache: Time Savings
Before vs After: Anthropic Endpoint (/v1/messages)
The Anthropic endpoint is used by Claude Code. Before this PR, every request re-processed the entire prompt from scratch:
Real-World Session: Claude Code → Qwen3-Next-80B (17 tool-call requests)
Data from a real Claude Code agentic session with sequential tool calls, where each request extends the previous conversation:
Aggregate:
Longer Conversations: 100K+ Token Session
From a real Claude Code session with 100K+ token prompts (after chunked prefill cache fix):
Cache Memory Efficiency
Prefix Cache Design
Problem
The original MemoryAwarePrefixCache stores complete KV/Mamba state per request as Python objects. In multi-turn agentic workloads, each request extends the previous conversation, but the cache stores the full state independently — ~6.3x more memory than necessary.

How other frameworks solve this
Our approach: Prefix-subset eviction + prompt-only cache entries
- Shorter strict-prefix entries would never be selected by fetch() anyway
- Prompt+output stores (cache_store) must NOT evict prompt-only entries, because the prompt-only entry is the correct prefix for future cache hits

Test plan
- /v1/messages - returns tool_use content blocks
- /v1/chat/completions - returns tool_calls array
- /v1/chat/completions - structured delta.tool_calls
- No <|im_end|> in streaming output
- /v1/messages - SSE events with message_start, content_block_delta, message_stop
- /v1/status endpoint: idle state, single/concurrent requests, queued/prefill/generation phases
- /v1/messages/count_tokens

🤖 Generated with Claude Code