
[Model] Fix minimax model cache & lm_head precision #19592

Merged

DarkLight1337 merged 2 commits into vllm-project:main from MiniMax-AI:bug/fix_minimax_fp on Jun 13, 2025

Conversation

@qscqesze
Contributor

@qscqesze qscqesze commented Jun 13, 2025

Change the precision of the MiniMax model in vLLM: update the LM head and KV cache from bfloat16 (bf16) to float32 (fp32).
Purpose: To improve numerical stability and output accuracy during inference.

qscqesze added 2 commits June 13, 2025 10:26
Signed-off-by: qingjun <qingjun@minimaxi.com>
Signed-off-by: qingjun <qingjun@minimaxi.com>

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

Collaborator

@houseroad houseroad left a comment


Could you provide some precision comparison results?

@qscqesze
Contributor Author

qscqesze commented Jun 13, 2025

Could you provide some precision comparison results?

We conducted the AIME24 evaluation using the minimax-text-01 model and observed that the scores were relatively low. Upon further investigation, we noticed two types of issues: in some cases, the model failed to output certain expected characters, while in other cases, it produced repeated outputs.

After thorough debugging, we identified that the root cause was insufficient precision in both the KV cache and LM head outputs. Adjusting these components to higher precision resolved the problem and improved output stability and accuracy.
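The failure mode described above can be reproduced in miniature without vLLM. The sketch below is an illustration, not the actual vLLM code: it emulates bfloat16 by truncating a float32 bit pattern to its top 16 bits, then shows how repeatedly accumulating small contributions, as a KV-cache update or logit reduction does, stalls in bf16 while full precision tracks the true sum:

```python
import struct

def to_bf16(x: float) -> float:
    # Emulate bfloat16 by keeping only the top 16 bits of the
    # float32 representation (truncation; real hardware rounds,
    # but the precision budget is the same: an 8-bit mantissa).
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

# Accumulate 1000 small contributions, as a cache update or a
# logit reduction might.
step = 0.001
acc_fp32 = 0.0
acc_bf16 = 0.0
for _ in range(1000):
    acc_fp32 += step
    acc_bf16 = to_bf16(acc_bf16 + to_bf16(step))

print(f"full precision: {acc_fp32:.6f}")  # close to 1.0
print(f"emulated bf16:  {acc_bf16:.6f}")  # stalls well below 1.0 once
                                          # step falls under one bf16 ulp
```

Once the running sum grows large enough that one bf16 ulp exceeds the increment, every further addition rounds away entirely. The same mechanism explains why logits that differ only slightly can collapse to the same bf16 value, producing the missing-character and repetition symptoms, and why upcasting the cache and LM head to fp32 resolves them.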

Member

@DarkLight1337 DarkLight1337 left a comment


Thanks, this LGTM

@DarkLight1337 DarkLight1337 enabled auto-merge (squash) June 13, 2025 08:54
@github-actions github-actions bot added the "ready" label (ONLY add when PR is ready to merge / full CI is needed) Jun 13, 2025
@houseroad
Collaborator

Yeah, I mean: list some scores (before vs. after the changes) in the PR description to help folks understand their impact. :-)

@DarkLight1337 DarkLight1337 merged commit a24cb91 into vllm-project:main Jun 13, 2025
79 checks passed

Labels

ready ONLY add when PR is ready to merge/full CI is needed

3 participants