
Add fused_dynamic_mxfp4_quant_moe_sort_hip #2620

Merged

valarLip merged 11 commits into main from jun/fp4_moe_quant_sort on Apr 8, 2026
Conversation

@junhaha666 (Contributor)

python3 op_tests/test_moe_sorting_mxfp4.py -ek 256,8 -dim 7168
Test on MI355:

[benchmark screenshots omitted]

@junhaha666 junhaha666 requested review from a team and Copilot April 5, 2026 14:24
github-actions bot commented Apr 5, 2026

🏷️ CI Guide

Runs automatically on every PR:

  • ✅ Pre-checks (submodule verification, code formatting)
  • ✅ Aiter op tests (gfx942 + gfx950)
  • ✅ Triton tests (only when aiter/ops/triton/** or related paths are changed)

Extended tests (opt-in via labels):

Label          Tests
ci:triton-355  Run Triton tests on MI355 in addition to MI325
ci:sglang      SGLang integration tests
ci:atom        ATOM benchmark (DeepSeek-R1 + GPT-OSS)
ci:vllm        vLLM benchmark
ci:all         All of the above

Add labels via the sidebar or gh pr edit 2620 --add-label <label>

Copilot AI left a comment

Pull request overview

This PR introduces a new HIP fused kernel path to perform dynamic MXFP4 (fp4x2) quantization while also producing MoE-sorted (swizzled) e8m0 scale bytes, and wires it into the Python API and MoE flow with an accompanying benchmark/test.
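
For orientation, here is a minimal PyTorch reference of the dynamic MXFP4 scheme being fused, assuming the standard OCP MX convention (32-element blocks, one shared E8M0 power-of-two scale per block, FP4 E2M1 elements with max magnitude 6.0, packed two per byte). This is an illustrative sketch only; the HIP kernel's exact rounding and swizzled scale layout differ.

import torch

BLOCK = 32
FP4_MAX = 6.0  # largest E2M1 magnitude

def ref_mxfp4_quant(x: torch.Tensor):
    # x: (tokens, dim) with dim % BLOCK == 0
    blocks = x.float().view(x.shape[0], -1, BLOCK)
    amax = blocks.abs().amax(dim=-1, keepdim=True).clamp(min=2.0**-126)
    exp = torch.floor(torch.log2(amax)) - 2  # shared exponent; 2 = emax of E2M1
    scale = torch.exp2(exp)
    q = (blocks / scale).clamp(-FP4_MAX, FP4_MAX)  # saturate; a real kernel also rounds to the E2M1 grid
    e8m0 = (exp.squeeze(-1) + 127).clamp(0, 255).to(torch.uint8)  # biased-exponent scale bytes
    return e8m0, (q * scale).view(x.shape)  # scale bytes + dequantized reference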

Changes:

  • Add fused_dynamic_mxfp4_quant_moe_sort_hip HIP kernel + pybind export.
  • Add Python wrapper fused_dynamic_mxfp4_quant_moe_sort(...) and switch fused_moe.py to use it.
  • Extend op_tests/test_moe_sorting_mxfp4.py to benchmark/validate HIP vs Triton scale sorting.

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 7 comments.

File                                Description
csrc/kernels/quant_kernels.cu       Adds the fused quant + MoE-scale-sort HIP kernel and C++ entrypoint.
csrc/include/quant.h                Declares the new HIP entrypoint in the quant header.
csrc/include/rocm_ops.hpp           Exposes the new entrypoint to Python via pybind.
aiter/ops/quant.py                  Adds Python bindings plus a convenience wrapper that allocates output/scale buffers.
aiter/fused_moe.py                  Switches the MoE quant + sort path to the new top-level fused helper.
op_tests/test_moe_sorting_mxfp4.py  Adds HIP-vs-reference checks and updates CLI options/bench coverage.
Comments suppressed due to low confidence (1)

op_tests/test_moe_sorting_mxfp4.py:25

  • run_torch uses sorted_ids[num_valid_ids:] slicing, but in both call sites num_valid_ids is a CUDA tensor (num_valid_ids = num_valid_ids[0]). CUDA tensors can’t be used as Python slice indices, so this will raise at runtime. Convert to a Python int before calling (e.g., num_valid_ids = int(num_valid_ids[0].item())) or change run_torch to do the .item() internally for the slice boundary.
def run_torch(scale, sorted_ids, num_valid_ids, token_num):
    topk = 1
    if len(scale.shape) == 3:
        topk = scale.shape[1]
        scale = scale.view(-1, scale.shape[-1])
    sorted_ids[num_valid_ids:] = token_num
    topk_ids = sorted_ids >> 24
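
A minimal sketch of the suggested fix, mirroring the excerpt above (hypothetical; the actual patch may differ):

import torch

def run_torch(scale, sorted_ids, num_valid_ids, token_num):
    topk = 1
    if len(scale.shape) == 3:
        topk = scale.shape[1]
        scale = scale.view(-1, scale.shape[-1])
    if torch.is_tensor(num_valid_ids):  # materialize the device scalar before slicing
        num_valid_ids = int(num_valid_ids.item())
    sorted_ids[num_valid_ids:] = token_num
    topk_ids = sorted_ids >> 24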


@junhaha666 junhaha666 marked this pull request as draft April 7, 2026 10:43
@junhaha666 junhaha666 marked this pull request as ready for review April 7, 2026 16:12
@valarLip valarLip merged commit 7d063b2 into main Apr 8, 2026
29 of 31 checks passed
@valarLip valarLip deleted the jun/fp4_moe_quant_sort branch April 8, 2026 15:07

Review comment on csrc/kernels/quant_kernels.cu:

scale[addr] = bs_e8m0;
}

if(topk_id < topk)

Hi @junhaha666,
We found that the EP MXFP4 accuracy issue may originate here.
For stage1 with topk=1, this condition is equivalent to if(topk_id == 0). In TP mode there is no issue, because every token's topk_id experts are on this rank. In EP mode, however, a token's top-1 expert may live on another rank, and this condition then excludes the token from computation even when the token also picks an expert on this rank.
CC: @ZhangLirong-amd
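
A toy Python illustration of the failure mode (hypothetical names, not the kernel code): with topk == 1 the guard keeps only slot 0, which is harmless under TP but drops EP tokens whose rank-local expert sits in a later slot.

topk = 1
local_experts = {2, 3}            # experts hosted on this EP rank
token_routes = [[2, 5], [7, 3]]   # per-token expert ids, best expert first

for tid, route in enumerate(token_routes):
    for topk_id, expert in enumerate(route):
        if topk_id < topk and expert in local_experts:  # equals topk_id == 0 here
            print(f"token {tid} computed on expert {expert}")
# token 0 is computed (expert 2 in slot 0); token 1 is skipped even though
# its slot-1 expert 3 lives on this rank -- the EP accuracy issue above.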

sunway513 pushed a commit that referenced this pull request Apr 21, 2026
* add fused_dynamic_mxfp4_quant_moe_sort_hip

* use hip fused_dynamic_mxfp4_quant_moe_sort in fuse_moe

* update

* add mxfp4_moe_sort_hip

* add dispatch to choose the fused kernel or not in aiter.fused_dynamic_mxfp4_quant_moe_sort

* rm topk in api and use mxfp4_moe_sort_fwd instead of fp4_utils.moe_mxfp4_sort in  fused_moe

* format

* update

---------

Co-authored-by: Lingpeng Jin <103567126+valarLip@users.noreply.github.com>
ClementLinCF pushed a commit that referenced this pull request Apr 25, 2026

Liang-jianhao97 pushed a commit that referenced this pull request Apr 30, 2026
azaidy added a commit that referenced this pull request May 4, 2026
aiter/fused_moe.py:
- Restore to origin/main. Per sunway513's own comment, #2457 and #2547
  were excluded from this bulk merge; per valarLip, #2687 was rejected.
  No source PR should land changes in this file. The previous state
  (+110/-119 vs main) was collateral damage from auto-resolved conflicts
  taking older sides, which silently reverted #2262 (xbf16 asm fmoe path),
  #2726 (FlyDSL a8w4 MoE wrapper params + fuse_quant), #2658 (CK fp8
  blockscale splitk tuner support), and #2620 (mxfp4_moe_sort_hip,
  flagged by valarLip).

op_tests/test_gemm_a8w8_blockscale.py:
- Replace with a clean 3-way merge of origin/main + #2541. Now +55/-0
  vs main, matching #2541's actual contribution exactly. The previous
  state was silently reverting #2645 (CK GEMM multi-arch + test infra:
  TEST_NUM_ITERS, --csv/--output args, kernel_name= param).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
sunway513 added a commit that referenced this pull request May 5, 2026
…3-Next, pa_mqa OOB) (#3005)

* fix: remap QuantType.No to per_1x32 for fp4x2 MoE weights (W4A6 support)

* Fixing two cascading bugs when running the MoE tuner

* Enable split-K for block-scale A8W8 CK and CKTile GEMMs

Propagate the splitK parameter (as KBatch = 2^splitK) through the
block-scale GEMM kernel infrastructure so that the tuning scripts
can sweep split-K values to improve occupancy on small-M shapes.

CK path: add KBatch parameter to gemm_a8w8_blockscale_impl and call
SetKBatch on the device argument. The CK invoker handles output
zeroing and atomic accumulation internally.

CKTile path: add k_batch parameter to gemm_a8w8_blockscale_cktile_impl,
remove the "split-k is not supported yet" runtime guard, and add
hipMemsetAsync to zero the output buffer before atomic accumulation.

Non-tune entry points pass KBatch=1 (no split-K) to preserve existing
behavior. Code generation scripts (gen_instances.py, gen_instances_cktile.py)
updated to include the new parameter in generated wrappers and manifests.

Made-with: Cursor
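
A toy dense-GEMM illustration of the split-K idea (hypothetical, not the CK dispatch): KBatch = 2**splitK slices the K dimension so several partial products are accumulated into a zeroed output, mimicking the atomic-accumulation path.

import torch

def splitk_gemm(a, b, splitK):
    k_batch = 1 << splitK  # KBatch = 2^splitK
    out = torch.zeros(a.shape[0], b.shape[1])  # zeroed before accumulation
    for ai, bi in zip(a.chunk(k_batch, dim=1), b.chunk(k_batch, dim=0)):
        out += ai @ bi  # stands in for per-split atomic accumulation
    return out

a, b = torch.randn(4, 256), torch.randn(256, 8)  # small-M shape
assert torch.allclose(splitk_gemm(a, b, splitK=2), a @ b, atol=1e-3)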

* Wire splitK from tuning CSV through production blockscale GEMM dispatch

The tuning infrastructure already sweeps splitK and writes it to the CSV,
but the production dispatch ignored it and hardcoded KBatch=1. Add splitK
as a runtime parameter to the non-tune entry points so tuned split-K
values are used without compiling the full _tune instance set.

Made-with: Cursor

* fix: ck_moe_stage1 split-K output buffer overflow from padding scatter

The CK kernel scatters output via sorted_token_ids using:
  token_offset = (fused_token & 0xffffff) * topk + (fused_token >> 24)

Padding entries use the sentinel value (topk << 24 | token_num),
which decodes to scatter position (token_num * topk + topk) -- beyond
the valid output range [0, token_num * topk). The original buffer
(token_num, topk, w1.shape[1]) only has token_num * topk rows, so
the padding scatter writes out of bounds, causing "HIP runtime error:
invalid argument" during CUDA graph capture (e.g. DeepSeek-R1 decode
with token_num=1, topk=8, block_m=16).

Fix: allocate (token_num * topk + topk + 1) rows -- the exact minimum
needed to absorb all padding scatter writes. After the kernel, slice
only the valid [0, token_num * topk) rows for the activation.

Related: #2508
Made-with: Cursor
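
The sentinel arithmetic from the message above, as a standalone check (a sketch using the example's numbers, not kernel code):

token_num, topk = 1, 8
sentinel = (topk << 24) | token_num             # padding entry
token_offset = (sentinel & 0xFFFFFF) * topk + (sentinel >> 24)
print(token_offset, token_num * topk)           # 16 vs 8: scatter lands past the valid rows
assert token_offset == token_num * topk + topk  # hence (token_num * topk + topk + 1) rows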

* Address PR review feedback: validate splitK, fix hipMemset stride issue, add correctness test

Agent-Logs-Url: https://github.com/ROCm/aiter/sessions/e3b37b0f-e151-4935-ad89-fd72436d41e2

Co-authored-by: samremes <181322991+samremes@users.noreply.github.com>

* black format

* fix splitk test dimensions

* Add gdn fusions

* style: fix ruff F841 and black-format Triton PR files

Remove unused variable in rmsnorm FP8 test ref. Apply Black to
kernels, launchers, tests, and gated_delta_rule decode __init__.

Made-with: Cursor

* Update fused_rearrange_sigmoid_gdr.py

* Update op_tests

* Fix BLACK format problem

* Fix black check failure

* Update test_fused_rearrange_sigmoid_gdr.py

* Allow callers to pass pre-allocated moe_buf to avoid output copy

Add an optional `moe_buf` parameter through the moe_sorting and
fused_moe call chain. When provided, the sorting kernel writes
directly into the caller's buffer instead of allocating a new one,
eliminating a redundant copy on the output path.

Made-with: Cursor

* Add moe_buf pass-through test to existing test_moe_sorting

Made-with: Cursor

* Replace _fast with _single_token for causal conv1d update kernels for single token decoding

* Fix black format error

* Add tuned a8w8 blockscale GEMM config for Qwen3-Next-80B-A3B on MI355X

Tuned 1482 shapes (TP1/TP2/TP4) for Qwen/Qwen3-Next-80B-A3B-Instruct-FP8
on MI355X using CK + CK-TILE backends with splitK support.

Depends on:
- PR #2862 (CK bump for stride fix in CK-TILE blockscale)
- PR #2541 (splitK support for CK/CK-TILE blockscale GEMMs)
- PR #2487 (AQLayout tunable for CK-TILE blockscale 8-warp kernels)

* refactor(triton): rename gated RMSNorm+FP8 op to fused_rms_gated_fp8_group_quant

Colocate the gated RMSNorm + FP8 group quant path with the other fused FP8
ops. The Triton kernel is now _fused_rms_gated_fp8_group_quant_kernel in
_triton_kernels/quant/fused_fp8_quant.py; the Python entry point is
fused_rms_gated_fp8_group_quant in quant/fused_fp8_quant.py, with a docstring
that contrasts it with fused_rms_fp8_group_quant. Remove the old
rmsnorm_input_quant_fp8 module and rms_norm_input_quant_fp8 kernel file.
Re-export the new symbol and helpers (get_fp8_min_max_bounds,
calc_rows_per_block) from aiter.ops.triton.quant. Rename the test file to
test_fused_rms_gated_fp8_group_quant.py and update test.sh.

BREAKING CHANGE: rmsnorm_input_quant_fp8 is removed; use
fused_rms_gated_fp8_group_quant instead.

Made-with: Cursor

* Retune blockscale GEMM configs to fix invalid kernelId+splitK combinations

Full retune of all 1482 shapes on MI355X (gfx950, cu_num=256).
Key changes:
- SplitK usage dropped from 613 to 88 CK shapes (splitK > 0)
- All shapes validated via --run_config (1482/1482 OK)
- E2e perf: 2-8% output throughput improvement vs untuned heuristic

* [Bug] pa_mqa_logits: mask OOB stores on OutLogits_buffer

The gluon `_gluon_deepgemm_fp8_paged_mqa_logits_preshuffle` and
`_gluon_deepgemm_fp8_paged_mqa_logits_preshuffle_varctx` kernels have 10
`buffer_store(ptr=OutLogits_buffer, ...)` call sites that are missing the
upper-bound mask present on their sibling stores.  When
`context_length == max_model_len` (the last-token position in a long-
context decode step), `split_context_length` is rounded UP to a
`KVBlockSize` multiple at line 427 and the final prefix/suffix store then
writes up to `ChunkKPerStage` float32 elements past the logical row end.
With `stride_out_batch == max_model_len`, those writes cross into the
next row / the next allocation, causing intermittent HIP memory-access
faults on gfx950 during DeepSeek V3.2 MTP decoding.

This change adds `mask=<offset> < max_model_len` to every unmasked
`buffer_store` on `OutLogits_buffer` in both preshuffle kernels, matching
the pattern of their already-masked neighbours.  The existing
`tl.where(..., -inf)` masking of the *values* is preserved; the only
behavioural change is that out-of-row lanes no longer emit buffer
stores.  Hardware overhead is negligible: `buffer_store` with a predicate
is the same SMEM descriptor path as the unmasked variant, just with a
VCC mask setup.

Repro + end-to-end fix evidence: see PR description.

Signed-off-by: Markus Hartikainen <markus.hartikainen@amd.com>
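
A toy check of the overflow mechanism described above (hypothetical numbers, plain Python rather than the gluon kernel): rounding the split length up to a KVBlockSize multiple makes the final chunk's offsets run past max_model_len unless each lane is masked.

import math

max_model_len, kv_block = 1000, 64
split_len = math.ceil(max_model_len / kv_block) * kv_block  # 1024, rounded UP
last_chunk = range(split_len - kv_block, split_len)         # offsets 960..1023
oob = [o for o in last_chunk if o >= max_model_len]
print(len(oob))  # 24 lanes would store past the row; fix: mask = offset < max_model_len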

* style: fix Black formatting

* style: fix Black formatting (Python 3.12 compatible)

* ci: replace deprecated zmq package with pyzmq

The `zmq` meta-package fails to install on some CI runners because
it cannot resolve the `pyzmq` dependency. Use `pyzmq` directly,
which is the actual package providing ZeroMQ bindings for Python.

Fixes Triton Test Shard 7 setup failures.

* ci: increase pip retries and timeout for CI reliability

Set pip global retries=15 and timeout=120s in build_aiter_triton.sh
to handle transient PyPI network failures on self-hosted runners.
Shard 5/7 failures were caused by RemoteDisconnected during pip install.

* ci: make pyzmq install non-blocking in triton test setup

pyzmq is only used by aiter.dist.shm_broadcast, not by any triton
test. When PyPI is unreachable on self-hosted runners, the pyzmq
install failure should not block the entire CI shard.

Split pyzmq into a separate pip install with || fallback so triton
tests can proceed even when PyPI connectivity is degraded.

* ci: retry pip install individually on batch failure

When batch pip install fails (e.g., PyPI connectivity issues on
self-hosted runners), retry each package individually. Only pyzmq
is allowed to fail silently since it's only used by
aiter.dist.shm_broadcast and not required by any CI test suite.

Critical packages (pandas, einops, numpy) must still succeed.

* [MLA] Fix nhead=32 non-persistent decode crash on gfx950

Commit c849fd5 ("Add bf16 MLA decode kernel for gqa_ratio=64,
qseqlen=1 (non-persistent)") zeroed ptr_RP and out_16_nosplit for all
non-persistent dispatch. The legacy QH16 ASM kernel used for nhead=32
(MLA_A16W16_1TG_4W_32mx1_16nx1_Coex0_Msk1_QH16.co) still writes
directly to the output buffer via ptr_RP when kv_split==1.
Dereferencing nullptr causes a GPU memory access fault during CUDA
graph capture on MI355X (gfx950) with DeepSeek-V3.2 at TP4.

Fix:
- Conditionally restore ptr_RP and out_16_nosplit in the non-persistent
  path for legacy kernels (gqa_ratio * max_seqlen_q <= 64) while
  keeping nullptr for newer kernels (e.g. gqa_ratio=64).
- Restore the bf16 nhead in [32,64] early-return after stage1 when
  num_kv_splits==1 to prevent stage2 from overwriting the kernel's
  direct output.

Tested on MI355X TP4 with deepseek-ai/DeepSeek-V3.2 (nhead=32):
- No crash during CUDA graph capture
- Correct GSM8K accuracy

Made-with: Cursor

* revert: remove #2983 (MLA nhead=32 fix) — causes test_mla CI failures

Reverting cherry-pick of #2983 from this bulk merge. The MLA nhead=32
non-persistent decode fix causes deterministic test_mla k_cache and
mla_decode-absorb precision failures on CI MI35X runners (Shard 1 & 2).

#2983 should go through its own PR with proper CI validation by the
original author (frida-andersson).

* fix: restore tuple unpack for FlyDSL fused-quant stage1 return

flydsl_moe_stage1 returns (out, out_scale_sorted) when the kernel uses
fused fp4/fp8 quantization. The tuple unpack logic was removed during
earlier refactoring but the kernel behavior was not changed, causing
fused_moe_2stages to crash with:
  AttributeError: 'tuple' object has no attribute 'view'

Restore the unpack: detect tuple return, extract tensor and scale,
handle fp4 byte-packing trim, and skip redundant Python-side requant
when the kernel already produced sorted scales.

* Revert leaked changes from excluded PRs #2457/#2547/#2687 in fused_moe.py

- Restore import to match main: use `from aiter import
  fused_dynamic_mxfp4_quant_moe_sort, mxfp4_moe_sort_fwd` instead of
  importing from internal triton path and fp4_utils
- Replace all fp4_utils.moe_mxfp4_sort() calls with mxfp4_moe_sort_fwd()
  using correct parameter names (cols= instead of block_size=)
- Remove all moe_buf preallocated buffer additions (PR #2687 rejected):
  parameter defaults, if-guards, and pass-throughs in _moe_sorting_impl,
  moe_sorting, fused_moe, fused_moe_fake, and fused_moe_
- Fix moe_sorting_dispatch_policy type annotation: bool -> int in
  fused_moe_fake and fused_moe_
- Remove moe_buf pass-through test from test_moe_sorting.py
- Preserve legitimate fp4_utils usage (mxfp4_to_f32, e8m0_to_f32) with
  local imports in stage1/stage2 fallback functions

* fix: restore fp4_utils.moe_mxfp4_sort for new code paths (different output layout than mxfp4_moe_sort_fwd)

* style: fix Black formatting for local imports

* fix: remove rejected W4A6 QuantType remap from fused_moe_dp_shared_expert

Lingpeng explicitly rejected this change (from excluded PR #2457).
Reverts the QuantType.No -> per_1x32 remap for fp4x2 weights.

* fix: restore silently-reverted main features from bad merge resolution

aiter/fused_moe.py:
- Restore to origin/main. Per sunway513's own comment, #2457 and #2547
  were excluded from this bulk merge; per valarLip, #2687 was rejected.
  No source PR should land changes in this file. The previous state
  (+110/-119 vs main) was collateral damage from auto-resolved conflicts
  taking older sides, which silently reverted #2262 (xbf16 asm fmoe path),
  #2726 (FlyDSL a8w4 MoE wrapper params + fuse_quant), #2658 (CK fp8
  blockscale splitk tuner support), and #2620 (mxfp4_moe_sort_hip,
  flagged by valarLip).

op_tests/test_gemm_a8w8_blockscale.py:
- Replace with a clean 3-way merge of origin/main + #2541. Now +55/-0
  vs main, matching #2541's actual contribution exactly. The previous
  state was silently reverting #2645 (CK GEMM multi-arch + test infra:
  TEST_NUM_ITERS, --csv/--output args, kernel_name= param).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: remove #2464 from bulk merge per author request

@xaguilar-amd asked to drop #2464 (CK MoE tuner bug fixes) from this
bulk merge — they don't need it for the uplift.

Verified that #2464 is the only PR in this bulk merge touching
aiter/jit/core.py and aiter/utility/mp_tuner.py: the diff between the
branch and origin/main on those files is exactly #2464's +9/-1 and
+5/-0, with no other PR content mixed in. Restoring both files to
origin/main therefore drops #2464 cleanly.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Signed-off-by: Markus Hartikainen <markus.hartikainen@amd.com>
Co-authored-by: vecheruk-amd <vecheruk@amd.com>
Co-authored-by: xaguilar-amd <xavier.aguilarfruto@amd.com>
Co-authored-by: Sami Remes <samremes@amd.com>
Co-authored-by: Li <chuali@amd.com>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: samremes <181322991+samremes@users.noreply.github.com>
Co-authored-by: hellozhuo <zhuo.su@amd.com>
Co-authored-by: Tres Popp <tres.popp@amd.com>
Co-authored-by: Juuso Korhonen <40278371+juuso-oskari@users.noreply.github.com>
Co-authored-by: Niklas Holmberg <nholmber@users.noreply.github.com>
Co-authored-by: Markus Hartikainen <markus.hartikainen@amd.com>
Co-authored-by: frida-andersson <fanderss@amd.com>
Co-authored-by: Aliasger Zaidy <aliasger.zaidy@amd.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Liang-jianhao97 pushed a commit that referenced this pull request May 7, 2026