Fix fused_scaled_matmul_reduce_scatter callsite #26506
```diff
@@ -19,7 +19,7 @@
 )
 from vllm.logger import init_logger
 from vllm.platforms import current_platform
-from vllm.utils import direct_register_custom_op
+from vllm.utils import direct_register_custom_op, is_torch_equal_or_newer

 from .inductor_pass import enable_fake_mode
 from .vllm_inductor_pass import VllmInductorPass, VllmPatternMatcherPass
```
```diff
@@ -169,16 +169,37 @@ def replacement(
             scale_a: torch.Tensor,
             scale_b: torch.Tensor,
         ) -> torch.Tensor:
-            gemm_rs = torch.ops.symm_mem.fused_scaled_matmul_reduce_scatter(
-                input,
-                mat2,
-                scale_a,
-                scale_b,
-                "avg",
-                scatter_dim=0,
-                out_dtype=self.dtype,
-                group_name=self.tp.device_group.group_name,
-            )
+            if is_torch_equal_or_newer("2.8.0.dev"):
+                # TODO: This fails in the dynamic shapes case because the shapes
+                # get specialized
+                output_shape = (
+                    torch.ops.aten.sym_size.int(input, 0),
+                    torch.ops.aten.sym_size.int(mat2, 1),
+                )
+                gemm_rs = torch.ops.symm_mem.fused_scaled_matmul_reduce_scatter(
+                    input,
+                    mat2,
+                    scale_a,
+                    scale_b,
+                    "avg",
+                    orig_scatter_dim=0,
+                    scatter_dim_after_maybe_reshape=0,
+                    output_shape=output_shape,
+                    out_dtype=self.dtype,
+                    group_name=self.tp.device_group.group_name,
+                )
+            else:
+                # For older versions, use the old signature
+                gemm_rs = torch.ops.symm_mem.fused_scaled_matmul_reduce_scatter(
+                    input,
+                    mat2,
+                    scale_a,
+                    scale_b,
+                    "avg",
+                    scatter_dim=0,
+                    out_dtype=self.dtype,
+                    group_name=self.tp.device_group.group_name,
+                )

             return gemm_rs
```
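For reference, a minimal standalone sketch (not part of this PR) of how the version-gated call could be written once and shared by both pattern variants. The kwargs mirror the two signatures shown in the diff above; everything else (the wrapper function itself and its argument names) is illustrative only:

```python
import torch
from vllm.utils import is_torch_equal_or_newer


def scaled_matmul_reduce_scatter(input, mat2, scale_a, scale_b, out_dtype, group_name):
    """Sketch of a version-gated wrapper; not the code that was merged."""
    if is_torch_equal_or_newer("2.8.0.dev"):
        # torch >= 2.8 splits the scatter dim into orig/after-reshape kwargs
        # and expects an explicit output shape.
        output_shape = (
            torch.ops.aten.sym_size.int(input, 0),
            torch.ops.aten.sym_size.int(mat2, 1),
        )
        return torch.ops.symm_mem.fused_scaled_matmul_reduce_scatter(
            input, mat2, scale_a, scale_b, "avg",
            orig_scatter_dim=0,
            scatter_dim_after_maybe_reshape=0,
            output_shape=output_shape,
            out_dtype=out_dtype,
            group_name=group_name,
        )
    # Older torch keeps the single scatter_dim kwarg.
    return torch.ops.symm_mem.fused_scaled_matmul_reduce_scatter(
        input, mat2, scale_a, scale_b, "avg",
        scatter_dim=0,
        out_dtype=out_dtype,
        group_name=group_name,
    )
```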
Contributor comment on lines +172 to 204: This version-checking logic for …
```diff
@@ -296,16 +317,38 @@ def replacement(
             scale_b: torch.Tensor,
             cutlass_mm_output: torch.Tensor,
         ) -> torch.Tensor:
-            gemm_rs = torch.ops.symm_mem.fused_scaled_matmul_reduce_scatter(
-                input,
-                mat2,
-                scale_a,
-                scale_b,
-                "avg",
-                scatter_dim=0,
-                out_dtype=self.dtype,
-                group_name=self.tp.device_group.group_name,
-            )
+            if is_torch_equal_or_newer("2.8.0.dev"):
+                # TODO: This fails in the dynamic shapes case because the shapes
+                # get specialized
+                output_shape = (
+                    torch.ops.aten.sym_size.int(input, 0),
+                    torch.ops.aten.sym_size.int(mat2, 1),
+                )
+
+                gemm_rs = torch.ops.symm_mem.fused_scaled_matmul_reduce_scatter(
+                    input,
+                    mat2,
+                    scale_a,
+                    scale_b,
+                    "avg",
+                    orig_scatter_dim=0,
+                    scatter_dim_after_maybe_reshape=0,
+                    output_shape=output_shape,
+                    out_dtype=self.dtype,
+                    group_name=self.tp.device_group.group_name,
+                )
+            else:
+                # For older versions, use the old signature
+                gemm_rs = torch.ops.symm_mem.fused_scaled_matmul_reduce_scatter(
+                    input,
+                    mat2,
+                    scale_a,
+                    scale_b,
+                    "avg",
+                    scatter_dim=0,
+                    out_dtype=self.dtype,
+                    group_name=self.tp.device_group.group_name,
+                )

             return gemm_rs
```

Comment on lines +320 to +337: The cutlass variant builds …
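For context on the shapes involved, here is a rough unfused equivalent of what this pattern pass fuses. It is illustrative only: a plain matmul stands in for the scaled fp8 matmul, and `tp_group` / `tp_world_size` are whatever the caller's tensor-parallel setup provides:

```python
import torch
from torch.distributed import _functional_collectives as funcol


def unfused_reference(input: torch.Tensor, mat2: torch.Tensor,
                      tp_group, tp_world_size: int) -> torch.Tensor:
    # Every rank computes the full [M, N] matmul output...
    mm_out = input @ mat2
    # ...and the reduce-scatter then leaves each rank with only its slice of dim 0.
    out = funcol.reduce_scatter_tensor(mm_out, "avg", scatter_dim=0, group=tp_group)
    assert out.shape[0] == mm_out.shape[0] // tp_world_size
    return out
```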
When targeting the new `fused_scaled_matmul_reduce_scatter` signature, `output_shape` is derived directly from `input` and `mat2` without accounting for the tensor-parallel world size. In the unfused graph this matmul output is immediately reduced along dim 0, so each rank ultimately sees a first dimension of `scaled_mm.size(0) // tp_world_size`. Passing the pre-scatter size (`input.shape[0]`) will request the wrong shape from the fused op and either misallocate or fail once torch 2.8 executes this branch. `output_shape[0]` should reflect the reduce-scatter result (divide by `self.tp_size`).
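A minimal sketch of the adjustment this comment asks for, inside the `replacement()` functions shown in the diff and assuming `self.tp_size` holds the tensor-parallel world size (as the comment suggests); this is not the code that was merged:

```python
# Sketch of the suggested fix: size the per-rank output by dividing the
# matmul's dim-0 extent by the tensor-parallel world size.
output_shape = (
    torch.ops.aten.sym_size.int(input, 0) // self.tp_size,
    torch.ops.aten.sym_size.int(mat2, 1),
)
```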