[Model] Remove the unnecessary dtype conversion in MiniCPM#32523

Merged
Isotr0py merged 1 commit intovllm-project:mainfrom
gcanlin:minicpm
Jan 18, 2026

Conversation

@gcanlin
Contributor

@gcanlin gcanlin commented Jan 17, 2026

Purpose

Previously, vllm-ascend had to carry a patch for the unsupported dtype conversion in the `MiniCPMAttention` forward pass. To remove that patch, we upstream the change into vLLM. The conversion is now unnecessary on all platforms because the kernels handle precision internally.

If necessary, I will provide an accuracy test for MiniCPM.
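For context, the removed pattern upcast the tensors to float32 around the rotary embedding and cast back afterwards. The sketch below contrasts that round-trip with applying the op directly in the model dtype; it is an illustration only, with numpy as a stand-in and `apply_rotary` a simplified single-position rotary step, not vLLM's actual implementation:

```python
import numpy as np

def apply_rotary(q, base=10000.0):
    # Simplified rotary embedding at position 1; computes in q's dtype.
    half = q.shape[-1] // 2
    dtype = q.dtype
    inv_freq = (1.0 / base ** (np.arange(half) / half)).astype(dtype)
    cos, sin = np.cos(inv_freq).astype(dtype), np.sin(inv_freq).astype(dtype)
    q1, q2 = q[..., :half], q[..., half:]
    return np.concatenate([q1 * cos - q2 * sin, q1 * sin + q2 * cos], axis=-1)

q16 = np.random.default_rng(0).standard_normal(64).astype(np.float16)

# Old pattern: upcast to float32 for the op, then cast back to the model dtype.
out_roundtrip = apply_rotary(q16.astype(np.float32)).astype(np.float16)
# New pattern: apply the op directly in the model dtype.
out_direct = apply_rotary(q16)

# The two paths agree within half-precision tolerance.
print(np.max(np.abs(out_roundtrip.astype(np.float64) - out_direct.astype(np.float64))))
```

When the rotation itself is numerically mild (small angles, well-scaled activations), the explicit upcast buys little, which matches the claim that optimized kernels can own the precision handling.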

Test Plan

See vllm-project/vllm-ascend#5975.

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request removes an explicit dtype conversion to float32 in MiniCPMAttention before applying rotary embeddings. The rationale is that the underlying kernels now handle precision internally. While this is likely true for optimized custom kernels, I've raised a concern about the PyTorch-native fallback path (forward_native), which could suffer from precision loss without this explicit casting. I've recommended restoring the casting to ensure numerical stability across all execution paths.
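The concern about the PyTorch-native fallback can be made concrete with a quick numerical check. The sketch below (numpy as a stand-in; not vLLM's actual `forward_native`) computes the rotary angles at a large position in three dtypes and compares against a float64 reference — rounding the angle itself to half precision destroys the result, while float32 stays accurate:

```python
import numpy as np

half = 64
inv_freq = 1.0 / 10000.0 ** (np.arange(half) / half)
q = np.random.default_rng(1).standard_normal(2 * half)

def rotate(q, pos, dtype):
    # Compute the rotary angles and the rotation entirely in `dtype`.
    angle = (pos * inv_freq).astype(dtype)
    cos, sin = np.cos(angle).astype(dtype), np.sin(angle).astype(dtype)
    q1, q2 = q[:half].astype(dtype), q[half:].astype(dtype)
    return np.concatenate([q1 * cos - q2 * sin, q1 * sin + q2 * cos])

ref = rotate(q, 4096, np.float64)
err16 = np.max(np.abs(rotate(q, 4096, np.float16) - ref))
err32 = np.max(np.abs(rotate(q, 4096, np.float32) - ref))
print(f"fp16 error: {err16:.4f}, fp32 error: {err32:.6f}")
```

This is the failure mode the review flags for a naive low-precision fallback; the PR's rationale is that the kernels keep this math in adequate precision internally, so the caller-side cast is redundant.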

@gcanlin
Contributor Author

gcanlin commented Jan 18, 2026

@Isotr0py Hi :) Could you please help take a look?

Member

@Isotr0py Isotr0py left a comment


LGTM, but it would be better to have some accuracy benchmark results just in case.

@Isotr0py Isotr0py enabled auto-merge (squash) January 18, 2026 06:12
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Jan 18, 2026
@Isotr0py Isotr0py merged commit fe36bf5 into vllm-project:main Jan 18, 2026
57 of 58 checks passed
@gcanlin
Contributor Author

gcanlin commented Jan 18, 2026

@Isotr0py Thanks! Here is the comparison before and after this change. It didn't cause any accuracy regression.

Command:

vllm serve openbmb/MiniCPM-2B-sft-bf16 --trust-remote-code --max_num_batched_tokens 65536
evalscope eval  --model openbmb/MiniCPM-2B-sft-bf16  --api-url http://localhost:8000/v1  --api-key EMPTY  --eval-type server  --datasets gsm8k

Before:

+---------------------+-----------+----------+----------+-------+---------+---------+
| Model               | Dataset   | Metric   | Subset   |   Num |   Score | Cat.0   |
+=====================+===========+==========+==========+=======+=========+=========+
| MiniCPM-2B-sft-bf16 | gsm8k     | mean_acc | main     |  1319 |  0.4428 | default |
+---------------------+-----------+----------+----------+-------+---------+---------+

After:

+---------------------+-----------+----------+----------+-------+---------+---------+
| Model               | Dataset   | Metric   | Subset   |   Num |   Score | Cat.0   |
+=====================+===========+==========+==========+=======+=========+=========+
| MiniCPM-2B-sft-bf16 | gsm8k     | mean_acc | main     |  1319 |   0.442 | default |
+---------------------+-----------+----------+----------+-------+---------+---------+

gopalsarda pushed a commit to gopalsarda/vllm that referenced this pull request Jan 20, 2026
dsuhinin pushed a commit to dsuhinin/vllm that referenced this pull request Jan 21, 2026
wangxiyuan pushed a commit to vllm-project/vllm-ascend that referenced this pull request Feb 9, 2026
### What this PR does / why we need it?

Part of #5304.

After vllm-project/vllm#32523 was merged, we can
remove the `MiniCPMAttention` patch.

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?
Tested locally.

- vLLM version: v0.13.0
- vLLM main:
vllm-project/vllm@2c24bc6

---------

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
chenchuw886 pushed a commit to chenchuw886/vllm-ascend that referenced this pull request Feb 12, 2026
ItzDEXX pushed a commit to ItzDEXX/vllm that referenced this pull request Feb 19, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026
maoxx241 pushed a commit to maoxx241/vllm-ascend that referenced this pull request Mar 2, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026
LCAIZJ pushed a commit to LCAIZJ/vllm-ascend that referenced this pull request Mar 7, 2026
