Enforce that TP > 1 is not supported for Mamba2 if Quantization is Enabled. #14617
tlrmchlsmth merged 2 commits into vllm-project:main
Conversation
Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
…abled. (vllm-project#14617) Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
Currently there is a bug in the logic that causes the model to fail to load if the mamba2 layer is quantised. This is because quantised layers have parameters of type `ModelWeightParameter` that do not allow the `weight_loader` to be modified, which is our current strategy for handling TP for the mamba2 `in_proj`, see #13660. This PR prevents an `AttributeError` from being thrown when using quant layers with TP=1, and it also prevents TP > 1 from being used if quant layers are present.
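The guard described above can be sketched roughly as follows. This is an illustrative sketch, not the exact vLLM code: the function and parameter names (`check_mamba2_tp_support`, `quant_config`, `tp_size`) are assumptions for demonstration, but the behaviour matches the PR's intent of rejecting the unsupported combination up front instead of failing later with an `AttributeError`.

```python
def check_mamba2_tp_support(tp_size: int, quant_config) -> None:
    """Illustrative guard (names are hypothetical, not vLLM's actual API).

    Quantised layers carry parameters (e.g. ModelWeightParameter) whose
    weight_loader cannot be overridden, so the TP sharding strategy used
    for the mamba2 in_proj cannot be applied; reject TP > 1 eagerly.
    """
    if quant_config is not None and tp_size > 1:
        raise ValueError(
            "Tensor parallelism (TP > 1) is not supported for Mamba2 "
            "layers when quantization is enabled.")
```

With a check like this, TP=1 with quantisation proceeds normally, while TP > 1 with quantisation fails fast with a clear error message rather than an `AttributeError` deep inside weight loading.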
cc: @tlrmchlsmth