
Commit 17e0d0f

fix: fix illegal memory access (#6437)
Signed-off-by: Jiying Dong <[email protected]>
1 parent: 4b299cb · commit: 17e0d0f

File tree

1 file changed: +1 −1 lines changed

  • tensorrt_llm/_torch/attention_backend

tensorrt_llm/_torch/attention_backend/trtllm.py

Lines changed: 1 addition & 1 deletion
@@ -634,7 +634,7 @@ def __post_init__(self) -> None:
         self.block_ids_per_seq = None
         self.kv_block_ids_per_seq = None
         if self.enable_flash_mla:
-            self.block_ids_per_seq = torch.empty(
+            self.block_ids_per_seq = torch.zeros(
                 [
                     self.kv_cache_manager.max_batch_size,
                     self.kv_cache_manager.max_blocks_per_seq
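
Why this one-line change addresses the illegal memory access: torch.empty returns uninitialized memory, so slots of the per-sequence block-id table that are never filled for the current batch can hold arbitrary integers. If the flash-MLA attention path later reads such a slot as a KV-cache block index, it can address memory outside the cache. torch.zeros keeps every unfilled slot at 0, which is always a valid block index. Below is a minimal sketch in plain PyTorch, not TensorRT-LLM code; max_batch_size, max_blocks_per_seq, and num_kv_blocks are hypothetical stand-ins for the values the real KV-cache manager provides.

import torch

# Hypothetical cache layout (stand-ins, not the real TensorRT-LLM values).
max_batch_size, max_blocks_per_seq = 4, 8
num_kv_blocks = 16  # number of KV-cache blocks that actually exist

# torch.empty leaves whatever bytes were already in the allocation, so unused
# slots may hold arbitrary, possibly out-of-range "block ids".
uninitialized = torch.empty([max_batch_size, max_blocks_per_seq], dtype=torch.int32)

# torch.zeros guarantees every slot is a valid index (0) until it is
# explicitly overwritten for the active requests.
block_ids_per_seq = torch.zeros([max_batch_size, max_blocks_per_seq], dtype=torch.int32)

# Only the first sequence's blocks get real ids; the remaining slots stay 0
# instead of garbage, so code that scans the full table never indexes past
# the allocated cache.
block_ids_per_seq[0, :3] = torch.tensor([2, 5, 7], dtype=torch.int32)
assert int(block_ids_per_seq.max()) < num_kv_blocks

The trade-off is a small extra memset at allocation time, which is negligible next to the cost of an out-of-bounds read inside the attention kernel.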
