Walkthrough

The changes systematically remove all usage of the `safe_exp` helper, replacing it with plain `exp` combined with explicit validity masks (`tl.where`) in the affected Triton kernels.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Test as Test Function
    participant Kernel as Kernel (e.g., chunk_delta_h, chunk_o)
    participant Exp as exp
    participant Mask as Masking Logic
    Test->>Kernel: Call forward/backward kernel
    Kernel->>Mask: Compute valid indices (e.g., o_t < T)
    Kernel->>Exp: Compute exp(gating_difference)
    Mask->>Kernel: Provide mask (valid/invalid positions)
    Kernel->>Kernel: Apply mask using tl.where (zero out invalid)
    Kernel-->>Test: Return result (with correct masking)
```
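The masking pattern the kernels now share, as a minimal device-side sketch (the helper below is illustrative, not code from the PR; names mirror the kernels' conventions):

```python
import triton
import triton.language as tl

@triton.jit
def masked_gating_block(g, T, i_t, BT: tl.constexpr):
    # Absolute time indices covered by this chunk, and their validity mask.
    o_t = i_t * BT + tl.arange(0, BT)
    m_t = o_t < T
    # Load gating values; out-of-range positions are zero-padded.
    b_g = tl.load(g + o_t, mask=m_t, other=0.)
    # exp of pairwise gating differences, zeroed wherever either index is invalid.
    m_A = m_t[:, None] & m_t[None, :]
    return tl.where(m_A, tl.exp(b_g[:, None] - b_g[None, :]), 0.)
```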
Actionable comments posted: 4
🧹 Nitpick comments (3)
fla/ops/common/chunk_scaled_dot_kkt.py (1)
51-53: Minor: avoid extra work on tail chunks
`m_t` is already computed; wrapping the subsequent loads / dots with it (e.g. early-exit when `tl.all(~m_t)`) would save registers on the very last chunk when `T % BT != 0`.
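A sketch of such a guard (assuming the Triton version in use supports an early `return` under a data-dependent scalar condition; `tl.sum` stands in for the `tl.all` reduction):

```python
# Skip the remaining loads/dots when the entire block lies past T.
if tl.sum(m_t.to(tl.int32)) == 0:
    return
```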
fla/ops/common/chunk_delta_h.py (1)

147-154: Broadcast clarity
`tl.where(m_t, exp(b_g_last - b_g), 0)[:, None]` works but is easy to mis-read. Wrapping the mask:

```python
mask = m_t[:, None]
b_v_new *= mask * exp(b_g_last - b_g)[:, None]
```

improves readability without extra cost.
tests/ops/test_gated_delta.py (1)
229-230: Flaky test danger
`torch.rand_like` without a fixed seed inside the test makes the outcome non-deterministic across runs. Capture the RNG state or feed a local `torch.manual_seed` before masking to keep CI deterministic.
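A minimal sketch of the deterministic variant (names are illustrative; a dedicated `torch.Generator` avoids touching global RNG state):

```python
import torch

def make_dropout_mask(ref: torch.Tensor, p: float = 0.5) -> torch.Tensor:
    # Seeded local generator: the mask is identical on every CI run.
    gen = torch.Generator(device=ref.device).manual_seed(42)
    return torch.rand(ref.shape, generator=gen, device=ref.device) > p
```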
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- fla/ops/common/chunk_delta_h.py (3 hunks)
- fla/ops/common/chunk_o.py (8 hunks)
- fla/ops/common/chunk_scaled_dot_kkt.py (3 hunks)
- fla/ops/gated_delta_rule/wy_fast.py (2 hunks)
- tests/ops/test_gated_delta.py (6 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
🔇 Additional comments (3)
fla/ops/common/chunk_scaled_dot_kkt.py (1)
70-71: Mask is fine — just confirm diagonal exclusion
`m_A` uses the strict `>` test, so the diagonal is zeroed. If self-interaction should be kept, switch to `>=`.

fla/ops/common/chunk_o.py (2)
98-104: Triangular mask direction changed — please double-check

Forward path now keeps `(row >= col)` whereas backward kernels use `(row <= col)`. This is probably intentional due to transposed `A` usage, but a mismatch would silently corrupt gradients. Verify with a quick unit test on a tiny `T`.
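A quick sanity check of the transposed-mask reading (pure PyTorch, illustrative):

```python
import torch

T = 8  # tiny chunk, as suggested
idx = torch.arange(T)
fwd = idx[:, None] >= idx[None, :]  # forward path: lower triangle incl. diagonal
bwd = idx[:, None] <= idx[None, :]  # backward path: upper triangle incl. diagonal
# If the backward mask really is the forward one applied to a transposed A,
# the two must be exact transposes of each other.
assert torch.equal(fwd, bwd.T)
```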
242-255: Potential off-by-one in gradient accumulation
`b_dg_last` is added only to the last element via `b_dg = tl.where(o_t < last_idx, b_dg, b_dg + b_dg_last)`.

If `T` is an exact multiple of `BT`, `last_idx` equals `BT - 1`; the last row is updated, but earlier rows in the same chunk should also include the suffix-sum of `b_dg_last`. Please cross-validate against the reference implementation.
```diff
 from fla.ops.utils import prepare_chunk_indices
-from fla.ops.utils.op import safe_exp
+from fla.ops.utils.op import exp
```
Loss of overflow-protection after dropping safe_exp
safe_exp intentionally clipped extreme inputs; exp will overflow for large positive g differences, yielding inf and NaNs that propagate through the kernel. Please either (1) keep the numerically-stable helper, or (2) clamp the argument, e.g.
```diff
-b_A = b_A * exp(b_g_diff)
+b_A = b_A * exp(tl.clamp(b_g_diff, -20, 20))
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In fla/ops/common/chunk_scaled_dot_kkt.py at line 11, the import changed from a
numerically stable `safe_exp` to a plain `exp`, which can cause overflow for
large inputs. To fix this, either revert to importing and using the original
`safe_exp` function that clips extreme inputs or add explicit clamping of the
argument before calling `exp` to prevent overflow and NaNs in the kernel
calculations.
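For reference, a clamped wrapper along the lines suggested above (a sketch, assuming the helper bottoms out in `tl.exp`; the bound is illustrative):

```python
import triton
import triton.language as tl

@triton.jit
def safe_exp(x):
    # Only large positive inputs can overflow to inf; exp of a very negative
    # input merely underflows to 0, so clamping the upper side suffices.
    return tl.exp(tl.minimum(x, 20.0))
```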
```diff
 from fla.ops.utils import prepare_chunk_indices, prepare_chunk_offsets
-from fla.ops.utils.op import exp, safe_exp
+from fla.ops.utils.op import exp
```
Same overflow concern as above
Replacing `safe_exp` with bare `exp` risks `inf`. Consider clamping the argument or re-introducing the safe wrapper.
🤖 Prompt for AI Agents
In fla/ops/common/chunk_delta_h.py at line 11, the code imports and uses the
bare exp function instead of the previously used safe_exp, which can cause
overflow issues resulting in infinity values. To fix this, either revert to
using the safe_exp wrapper that handles large inputs safely or implement
argument clamping before calling exp to prevent overflow and ensure numerical
stability.
```diff
 o_t = i_t * BT + tl.arange(0, BT)
 m_t = o_t < T
 m_A = (o_t[:, None] > o_t[None, :]) & (m_t[:, None] & m_t)
 b_dA = tl.where(m_A, b_dA, 0)
-b_dA = tl.dot(b_dA.to(b_A.dtype), b_A)
+b_dA = tl.dot(b_A, b_dA.to(b_A.dtype))
-b_dA = tl.where(tl.arange(0, BT)[:, None] > tl.arange(0, BT)[None, :], -b_dA, 0).to(k.dtype.element_ty)
-b_dA *= safe_exp(b_g[:, None] - b_g[None, :])
+b_dA = tl.where(m_A, -b_dA * exp(b_g[:, None] - b_g[None, :]), 0)
+b_dA = b_dA.to(k.dtype.element_ty)
```
🛠️ Refactor suggestion
Numerical-stability & work done on invalid blocks
- Same overflow note: `exp(b_g[:, None] - b_g[None, :])` can blow up.
- `b_dA` accumulates contributions for out-of-range rows/cols and is zeroed only afterwards. Guarding the dot updates with `m_A` would save compute & SRAM on tail chunks.
```diff
-b_dA += tl.dot(b_dw, tl.trans(b_k_beta_g))
+valid = tl.where(m_A, 1., 0.)
+b_dA += tl.dot(b_dw * valid, tl.trans(b_k_beta_g))
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In fla/ops/gated_delta_rule/wy_fast.py around lines 99 to 106, the expression
exp(b_g[:, None] - b_g[None, :]) can cause numerical overflow and b_dA
accumulates values for out-of-range indices before being zeroed, wasting compute
and memory. To fix this, clamp the exponent argument to a bounded range before
exponentiation to prevent overflow (note that subtracting a common maximum from
`b_g` cancels out of the pairwise difference and would not help).
Also, modify the dot product computations to only include valid indices guarded
by m_A, avoiding unnecessary calculations on invalid blocks and reducing
resource usage.
```diff
 b_dv *= tl.where(m_t, exp(-b_g + b_g_last), 0)[:, None]
```

```diff
 m_A = (o_t[:, None] <= o_t[None, :]) & (m_t[:, None] & m_t)
 if USE_G:
-    b_A = tl.where(mask, b_A * safe_exp(b_g[None, :] - b_g[:, None]) * scale, 0).to(do.dtype.element_ty)
+    b_A = tl.where(m_A, b_A * exp(b_g[None, :] - b_g[:, None]) * scale, 0).to(do.dtype.element_ty)
 else:
-    b_A = tl.where(mask, b_A * scale, 0).to(do.dtype.element_ty)
+    b_A = tl.where(m_A, b_A * scale, 0).to(do.dtype.element_ty)
 p_do = tl.make_block_ptr(do, (T, V), (H*V, 1), (i_t * BT, i_v * BV), (BT, BV), (1, 0))
```
Again, raw exp may overflow
Every instance of `exp(...)` that used to be `safe_exp` needs the same clamp or stable formulation.
🤖 Prompt for AI Agents
In fla/ops/common/chunk_o.py around lines 351 to 358, the use of raw exp(...)
can cause overflow errors. Replace all exp calls with a stable version by
clamping the input values to a safe range before applying exp, similar to the
previous safe_exp usage. This prevents overflow and ensures numerical stability
in the calculations.
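Concretely, the `chunk_o.py` instance above could be clamped like this (a sketch; the bound is illustrative, and `exp` is the project's elementwise helper):

```python
# Clamp the gating difference so exp() cannot overflow at unmasked positions.
b_A = tl.where(m_A, b_A * exp(tl.minimum(b_g[None, :] - b_g[:, None], 20.0)) * scale, 0).to(do.dtype.element_ty)
```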