Merged
3 changes: 3 additions & 0 deletions python/sglang/srt/layers/attention/fla/kda.py
@@ -102,8 +102,11 @@ def fused_recurrent_kda_fwd(
# stride_final_state_token=stride_final_state_token,
# stride_indices_seq=stride_indices_seq,
# stride_indices_tok=stride_indices_tok,
USE_INITIAL_STATE=initial_state is not None,
STORE_FINAL_STATE=final_state is not None,
Comment on lines +105 to +106

Contributor (severity: medium)
Adding these explicit flags is a good improvement for clarity and correctness.

However, this change highlights that `initial_state` (and consequently `final_state`) can be `None`. The surrounding function `fused_recurrent_kda_fwd` does not handle this case and will raise an `AttributeError` in several places when `initial_state` is `None`:

  • `initial_state.dtype` on line 66.
  • `initial_state.stride(0)` on line 68.
  • `final_state.stride(0)` on line 69.

Since the values computed on lines 68-69 are never passed into the kernel call, they could simply be removed. The access on line 66 would additionally need a `None` guard.

This appears to be a pre-existing issue, but it is worth fixing to prevent runtime errors.
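A minimal sketch of the guard being suggested here. The helper name, parameters, and fallback behavior are illustrative assumptions, not sglang's actual API; the point is only that the dtype lookup must not dereference `initial_state` when it is `None`:

```python
def resolve_state_dtype(initial_state, q):
    # Hypothetical helper mirroring the suggested fix for line 66:
    # if no initial state was provided, fall back to the query tensor's
    # dtype instead of unconditionally reading `initial_state.dtype`,
    # which raises AttributeError on None.
    if initial_state is not None:
        return initial_state.dtype
    return q.dtype
```

The unused stride computations on lines 68-69 would not need an equivalent guard; under the reviewer's suggestion they would simply be deleted, since their results never reach the kernel launch.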

IS_BETA_HEADWISE=beta.ndim == v.ndim,
USE_QK_L2NORM_IN_KERNEL=use_qk_l2norm_in_kernel,
IS_VARLEN=cu_seqlens is not None,
# INPLACE_FINAL_STATE=inplace_final_state,
IS_KDA=True,
num_warps=num_warps,