Commit a3f1196

fix moe operator function name

Signed-off-by: Neta Zmora <[email protected]>

Parent: a9bab41

1 file changed (+3, −3):

  • tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py
@@ -4,7 +4,7 @@
 
 
 @torch.library.custom_op("auto_deploy::trtllm_moe_fused", mutates_args=())
-def trtllm_fused_moe(
+def trtllm_moe_fused(
     x: torch.Tensor,
     selected_experts: torch.Tensor,
     routing_weights: torch.Tensor,
@@ -55,8 +55,8 @@ def trtllm_fused_moe(
     )[0].view(x_shape)
 
 
-@trtllm_fused_moe.register_fake
-def trtllm_fused_moe(
+@trtllm_moe_fused.register_fake
+def trtllm_moe_fused_fake(
+    x: torch.Tensor,
     selected_experts: torch.Tensor,
     routing_weights: torch.Tensor,
