[Model] Fix minimax model cache & lm_head precision #19592
DarkLight1337 merged 2 commits into vllm-project:main
Conversation
Signed-off-by: qingjun <qingjun@minimaxi.com>
houseroad left a comment:
Could you provide some precision comparison results?
We conducted the AIME24 evaluation using the minimax-text-01 model and observed relatively low scores. Upon further investigation, we noticed two types of issues: in some cases the model failed to output certain expected characters, and in others it produced repeated outputs. After thorough debugging, we identified the root cause as insufficient precision in both the KV cache and the LM head outputs. Adjusting these components to higher precision resolved the problem and improved output stability and accuracy.
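To make the "insufficient precision" point concrete, here is a small, self-contained sketch (not from the PR) of how bfloat16's ~8-bit mantissa loses small contributions during a long accumulation, the kind of reduction that happens when attending over a long KV cache. The step value and loop count are illustrative:

```python
import torch

# Accumulate 1000 steps of 0.01 element by element, once in bf16 and
# once in fp32. In bf16, once the accumulator grows large enough that
# its rounding step exceeds twice the increment, further additions
# round back to the same value and the sum stalls.
acc_bf16 = torch.tensor(0.0, dtype=torch.bfloat16)
acc_fp32 = torch.tensor(0.0, dtype=torch.float32)
for _ in range(1000):
    acc_bf16 = acc_bf16 + torch.tensor(0.01, dtype=torch.bfloat16)
    acc_fp32 = acc_fp32 + torch.tensor(0.01, dtype=torch.float32)

# fp32 lands near the true sum (~10.0); bf16 stalls far below it.
print(acc_fp32.item(), acc_bf16.item())
```

The same effect, spread across thousands of KV-cache entries and logits, plausibly shows up as dropped tokens or repetition rather than an obvious numeric error.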
Yeah, I mean, list some scores (before vs. after the changes) in the PR description to help folks understand the impact of the changes. :-)
Change the precision of the MiniMax model in vLLM: update the LM head and KV cache from bfloat16 (bf16) to float32 (fp32).
Purpose: improve numerical stability and output accuracy during inference.
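As a rough illustration of what the LM-head half of the change amounts to (a sketch, not the PR's actual diff; the function and tensor names here are hypothetical), the logits matmul can be forced into fp32 by upcasting both operands before the reduction, rather than only casting a bf16 result afterwards:

```python
import torch

def compute_logits_fp32(hidden_states: torch.Tensor,
                        lm_head_weight: torch.Tensor) -> torch.Tensor:
    """Compute LM-head logits in float32 even when inputs are bf16.

    Upcasting before the matmul means the accumulation itself runs in
    fp32; casting the finished bf16 product would not recover the
    precision already lost inside the reduction.
    """
    return hidden_states.float() @ lm_head_weight.float().t()

# Hypothetical shapes: batch of 2 hidden states, vocab of 8.
hidden = torch.randn(2, 4, dtype=torch.bfloat16)
weight = torch.randn(8, 4, dtype=torch.bfloat16)
logits = compute_logits_fp32(hidden, weight)
```

The KV-cache side is analogous: allocating the cache tensors as fp32 keeps the attention reductions over long contexts from accumulating bf16 rounding error, at the cost of roughly double the cache memory.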