Adding Support for Qwen3.5 & Qwen3.5 MoE (Vision) #120
davidkoski merged 5 commits into ml-explore:main from
Conversation
Would love this to be merged soon @davidkoski 🙏
davidkoski
left a comment
The new models look good but see my question on the weight loading and let me know what you think.
Previously, I referred to the implementation at Blaizzy/mlx-vlm (`utils.py#L215`), and at first I thought it was necessary to keep the logic consistent with mlx-vlm. After reading your suggestion, I've already used …
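The weight-loading question above concerns remapping checkpoint keys before they are bound to the runtime modules. As a rough illustration of the kind of sanitization mlx-vlm's `utils.py` performs, here is a minimal, framework-free Python sketch; the function name, key prefixes, and skipped buffer names are all hypothetical, not the actual Qwen checkpoint layout.

```python
def sanitize_weight_keys(weights: dict) -> dict:
    """Remap checkpoint weight keys to a runtime module layout.

    Hypothetical sketch: the "model." prefix and the skipped
    "rotary_emb.inv_freq" buffer are illustrative examples only.
    """
    sanitized = {}
    for key, value in weights.items():
        # Drop a wrapper prefix the checkpoint uses but the model does not.
        if key.startswith("model."):
            key = key[len("model."):]
        # Skip buffers that are recomputed at load time (e.g. RoPE tables).
        if "rotary_emb.inv_freq" in key:
            continue
        sanitized[key] = value
    return sanitized
```

In the Swift port, an equivalent remapping would happen on the dictionary returned by the loader before the arrays are applied to the module tree.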
(force-pushed from 775074d to 37fee31)
Am I the only one checking the status of this PR every few hours? Can't wait @johnmai-dev :)
Haha, I can't wait either! This PR is ready. ❤️
davidkoski
left a comment
Looks great, thank you!
* Add Qwen3.5 and Qwen3.5 MoE (Vision)
* Use `loadArraysAndMetadata`
* Performance optimization

Proposed changes
Ported
Checklist

Put an `x` in the boxes that apply.

- [x] I have run `pre-commit run --all-files` to format my code / installed pre-commit prior to committing changes