fix: bump mlx-lm minimum to 0.31.0 for hybrid model batching #227
Merged
waybarrios merged 1 commit into waybarrios:main, Mar 31, 2026
Conversation
`ArraysCache` gained native batching support (`extract`, `merge`, `filter`, `prepare`) in mlx-lm 0.31.0. Older versions crash with `ArraysCache.__init__() missing 1 required positional argument: 'size'` when continuous batching encounters hybrid models like Qwen3.5 that mix KVCache and ArraysCache layers. Fixes #11
waybarrios
approved these changes
Mar 31, 2026
Owner
waybarrios
left a comment
Now that #183 landed with the scheduler fixes for mlx-lm 0.31.x, this version bump is the matching piece. Bumping the floor to 0.31.0 prevents users from installing older versions that are incompatible with the current codebase (ArraysCache native batching, _make_cache 3-arg signature, prompt_checkpoints tuple).
janhilgard
added a commit
to janhilgard/vllm-mlx
that referenced
this pull request
Apr 1, 2026
Brings in: prompt_tokens fix (waybarrios#236), ArraysCache batching (waybarrios#160), platform rename (waybarrios#185), mlx-lm 0.31 compat (waybarrios#183, waybarrios#227), base64 hash fix (waybarrios#206), streaming UTF-8 detokenizer (waybarrios#109), and cleanup commits.

Conflicts resolved:
- scheduler.py: keep make_logits_processors import (fork feature)
- mllm_scheduler.py: take upstream stop-token skip in detokenizer
- models/mllm.py: keep SHA256 hash (fork fix for collision)
- utils/tokenizer.py: merge upstream error message with fork elif chain

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
sysit
pushed a commit
to sysit/vllm-mlx
that referenced
this pull request
Apr 1, 2026
…or-hybrid-batching fix: bump mlx-lm minimum to 0.31.0 for hybrid model batching
Summary
Bump the `mlx-lm` minimum version from `>=0.30.5` to `>=0.31.0`.

Why
`ArraysCache` gained native batching support (`extract`, `merge`, `filter`, `prepare`) in mlx-lm 0.31.0. Older versions crash with `ArraysCache.__init__() missing 1 required positional argument: 'size'` when continuous batching encounters hybrid models like Qwen3.5 that mix KVCache and ArraysCache layers.

The `ensure_mamba_support()` monkey-patch is already correctly disabled since these methods are native. The only missing piece was the version floor in `pyproject.toml`.

Reproduction
Verification
Files
`pyproject.toml`

Fixes computor-org#11
Related: #160, #159
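For reference, the change described amounts to a one-line edit in the project's dependency list; a sketch of what that looks like (the surrounding `pyproject.toml` context here is assumed, not taken from the PR diff):

```toml
[project]
dependencies = [
    # floor raised from 0.30.5: hybrid-model batching needs the native
    # ArraysCache extract/merge/filter/prepare added in mlx-lm 0.31.0
    "mlx-lm>=0.31.0",
]
```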