
Commit 990e674

[None][fix] Switch AD AllReduce strategy to NCCL (#8979)
Signed-off-by: Eran Geva <[email protected]>
1 parent ee20e67 commit 990e674

File tree

1 file changed (+1, −1 line)
  • tensorrt_llm/_torch/auto_deploy/distributed


tensorrt_llm/_torch/auto_deploy/distributed/trtllm.py

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,7 @@ def trtllm_allreduce(tensor, op, all_reduce_params=None):
             p_config = Mapping(world_size=world_size, tp_size=world_size, rank=rank)
             # Use Strategy.AUTO for optimal performance
             _allreduce_cache[cache_key] = AllReduce(
-                mapping=p_config, strategy=AllReduceStrategy.AUTO, dtype=tensor.dtype
+                mapping=p_config, strategy=AllReduceStrategy.NCCL, dtype=tensor.dtype
             )

             torch_op = _allreduce_cache[cache_key]
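The patched function memoizes one AllReduce op per configuration key, so the op is constructed once and reused on every later call with the same key. A minimal stand-alone sketch of that caching pattern in plain Python (the function name `get_allreduce_op` and the dict-valued stand-in for TensorRT-LLM's `AllReduce` object are illustrative assumptions, not the library API):

```python
# Cache of per-configuration allreduce ops, keyed like the diff's
# _allreduce_cache[cache_key]. One entry per distinct configuration.
_allreduce_cache = {}

def get_allreduce_op(world_size, rank, dtype, strategy="NCCL"):
    """Return a cached per-configuration op object.

    The dict value is an illustrative stand-in for tensorrt_llm's
    AllReduce(mapping=..., strategy=..., dtype=...) instance.
    """
    cache_key = (world_size, rank, dtype, strategy)
    if cache_key not in _allreduce_cache:
        # Constructed only on the first call for this configuration;
        # subsequent calls hit the cache and return the same object.
        _allreduce_cache[cache_key] = {
            "world_size": world_size,
            "rank": rank,
            "dtype": dtype,
            "strategy": strategy,
        }
    return _allreduce_cache[cache_key]
```

Because the strategy is part of neither the construction-time mapping nor the tensor, baking it into the cache key (as sketched here) keeps ops built under different strategies from colliding.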
