vllm_ascend/patch/worker/patch_multimodal_merge.py (3 changes: 2 additions & 1 deletion)
@@ -37,8 +37,9 @@ def _merge_multimodal_embeddings(
     This updates ``inputs_embeds`` in place.
Collaborator

Any plan to remove this patch?

Collaborator

I think we should ask the torch_npu/CANN team to support torch.Tensor.masked_scatter_; then we can remove this patch.
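For context, a minimal sketch of the two formulations in question (shapes and variable names are illustrative, not the exact patch code): upstream vLLM merges with masked_scatter_, while this patch substitutes boolean-mask assignment, which lowers to index_put_.

import torch

# Illustrative shapes: 6 tokens, hidden size 4, 3 of them multimodal.
inputs_embeds = torch.zeros(6, 4)
is_multimodal = torch.tensor([0, 1, 1, 0, 1, 0], dtype=torch.bool)
flattened = torch.randn(3, 4)  # one row per multimodal token

# Upstream vLLM formulation: the op torch_npu/CANN would need to support well.
a = inputs_embeds.clone()
a.masked_scatter_(is_multimodal.unsqueeze(-1), flattened)

# This patch's formulation: boolean-mask assignment (an index_put_ under the hood).
b = inputs_embeds.clone()
b[is_multimodal] = flattened

assert torch.equal(a, b)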

Collaborator Author

@Potabk Dec 23, 2025

After communicating offline with the author of this patch, I learned that it was added for performance reasons; the original masked_scatter_ operator has no functional issues. Therefore, we may need to push for an Ascend-specific branch upstream.

Collaborator

After testing on NPU, it indeed has no functional issues. @booker123456, is there a performance test for this patch change?
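If it helps, a rough micro-benchmark along these lines could answer that. A sketch only, assuming torch_npu is installed, an npu:0 device is available, and torch.npu.synchronize() as the device sync; the shapes are made up.

import time

import torch
import torch_npu  # noqa: F401  (registers the "npu" device; assumed installed)

device = "npu:0"
num_tokens, hidden = 8192, 4096
inputs_embeds = torch.randn(num_tokens, hidden, dtype=torch.bfloat16, device=device)
is_multimodal = torch.rand(num_tokens, device=device) < 0.5
flattened = torch.randn(int(is_multimodal.sum()), hidden,
                        dtype=torch.bfloat16, device=device)

def index_put_path():
    # What the patch uses today.
    inputs_embeds[is_multimodal] = flattened

def masked_scatter_path():
    # What upstream vLLM uses.
    inputs_embeds.masked_scatter_(is_multimodal.unsqueeze(-1), flattened)

def bench(fn, iters=100):
    torch.npu.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    torch.npu.synchronize()
    return (time.perf_counter() - start) / iters * 1e3  # ms per iteration

print(f"index_put:       {bench(index_put_path):.3f} ms")
print(f"masked_scatter_: {bench(masked_scatter_path):.3f} ms")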

Collaborator

I suggest we consider removing this patch outright to reduce maintenance cost; it doesn't seem to gain much performance. @booker123456 WDYT?

Collaborator

I think this patch is still necessary until torch_npu's masked_scatter_ performance catches up with index_put.

"""
flattened = _flatten_embeddings(multimodal_embeddings)
input_dtype = inputs_embeds.dtype
try:
inputs_embeds[is_multimodal] = flattened
inputs_embeds[is_multimodal] = flattened.to(dtype=input_dtype)
except RuntimeError as e:
num_expected_tokens = is_multimodal.sum().item()
assert isinstance(num_expected_tokens, int)
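For reference, a minimal sketch of what the added cast covers (dtypes chosen for illustration): multimodal encoders often emit float32 embeddings while the model runs in bfloat16, and casting flattened to the destination dtype keeps the in-place masked assignment from depending on implicit dtype conversion in the backend's index_put.

import torch

hidden = 4
inputs_embeds = torch.zeros(6, hidden, dtype=torch.bfloat16)  # model dtype
is_multimodal = torch.tensor([0, 1, 1, 0, 1, 0], dtype=torch.bool)
flattened = torch.randn(3, hidden, dtype=torch.float32)  # encoder output dtype

# Mirror of the change above: cast to the destination dtype before assigning.
input_dtype = inputs_embeds.dtype
inputs_embeds[is_multimodal] = flattened.to(dtype=input_dtype)
assert inputs_embeds.dtype == torch.bfloat16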