
(test) add solve_tril from upstream #339

Merged
RuixuanZhang06 merged 2 commits into sgl-project:main from zouzias:9-unit-test-solve_tril
Jan 22, 2026

Conversation

zouzias (Contributor) commented Jan 21, 2026

Adding one unit test from the flash-linear-attention repo: https://github.com/fla-org/flash-linear-attention/blob/main/tests/ops/test_solve_tril.py#L30

Environment:
CANN: 8.5.0.alpha002
PyTorch: 2.8.0.post2
Triton: triton-ascend==3.2.0.dev20260119

Log

pytest tests/python/sgl_kernel_npu/test_solve_tril.py 
============================================================================================================================ test session starts =============================================================================================================================
platform linux -- Python 3.10.12, pytest-8.3.4, pluggy-1.6.0
rootdir: <MASKED>/TRI_INV/sgl-kernel-npu
plugins: forked-1.6.0
collected 5 items                                                                                                                                                                                                                                                            

tests/python/sgl_kernel_npu/test_solve_tril.py .[W121 08:45:12.897784394 ToKernelNpu.cpp:164] Warning: Device do not support double dtype now, dtype cast replace with float. (function operator())
FFFF                                                                                                                                                                                                                   [100%]

================================================================================================================================== FAILURES ==================================================================================================================================
__________________________________________________________________________________________________________________ test_solve_tril[B2-T500-H4-chunk_size32] __________________________________________________________________________________________________________________

B = 2, T = 500, H = 4, chunk_size = 32

    @pytest.mark.parametrize(
        ("B", "T", "H", "chunk_size"),
        [
            pytest.param(*test, id="B{}-T{}-H{}-chunk_size{}".format(*test))
            for test in [
                (1, 63, 1, 16),
                (2, 500, 4, 32),
                (2, 1000, 5, 64),
                (3, 1024, 6, 64),
                (4, 2048, 8, 64),
            ]
        ],
    )
    def test_solve_tril(B, T, H, chunk_size):
        # do not randomly initialize A otherwise the inverse is not stable
        k = F.normalize(
            torch.randn((B, H, T, 64), dtype=torch.float32, device=NPU_DEVICE), dim=-1
        )
        torch.npu.synchronize()
        # Pad the second-to-last dimension (T) to be a multiple of chunk_size
        padding_size = (chunk_size - T % chunk_size) % chunk_size
        k_padded = F.pad(k, (0, 0, 0, padding_size, 0, 0, 0, 0))
        torch.npu.synchronize()
        k_padded = k_padded.reshape(B, H, -1, chunk_size, 64)
        torch.npu.synchronize()
        A = (k_padded @ k_padded.transpose(-1, -2)).tril(-1).npu()
        torch.npu.synchronize()
    
        ref = torch.inverse(
            A
            + torch.eye(A.shape[-1], dtype=A.dtype, device=A.device)[None, None, None, ...]
        )
        torch.npu.synchronize()
        ref = ref.reshape(B, H, -1, chunk_size)[:, :, :T, :]
    
        torch.npu.synchronize()
        tri = solve_tril(
            A.reshape(B, H, -1, chunk_size)[:, :, :T, :].transpose(1, 2)
        ).transpose(1, 2)
        torch.npu.synchronize()
    
>       assert_close("solve_tril", ref, tri, 0.0001)

tests/python/sgl_kernel_npu/test_solve_tril.py:81: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

prefix = 'solve_tril', ref = tensor([[[[ 1.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00],
          [...    [-3.0265e-03,  6.3627e-02,  8.5826e-02,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00]]]], device='npu:1')
tri = tensor([[[[ 1.0000,  0.0000,  0.0000,  ...,  0.0000,  0.0000,  0.0000],
          [-0.0835,  1.1057, -0.0886,  ...,  0...0,  0.0000,  0.0000],
          [-0.1266, -0.2544, -0.0772,  ...,  0.0235, -0.0023,  0.0000]]]],
       device='npu:1'), ratio = 0.0001
warning = False, err_atol = 1e-06

    def assert_close(prefix, ref, tri, ratio, warning=False, err_atol=1e-6):
        abs_atol = get_abs_err(ref, tri)
        msg = f"{prefix:>16} diff: {abs_atol:.6f} ratio: {get_err_ratio(ref, tri):.6f}"
        error_rate = get_err_ratio(ref, tri)
        if abs_atol <= err_atol:
            return
        else:
>           assert error_rate < ratio, msg
E           AssertionError:       solve_tril diff: 0.761018 ratio: 0.607361
E           assert 0.6073608456862104 < 0.0001

tests/python/sgl_kernel_npu/test_solve_tril.py:29: AssertionError
---------------------------------------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------------------------------------
[WARNING] Please DO NOT tune args ['num_warps', 'num_stages']!
[WARNING] Please DO NOT tune args ['num_warps', 'num_stages']!
_________________________________________________________________________________________________________________ test_solve_tril[B2-T1000-H5-chunk_size64] __________________________________________________________________________________________________________________

B = 2, T = 1000, H = 5, chunk_size = 64

    @pytest.mark.parametrize(
        ("B", "T", "H", "chunk_size"),
        [
            pytest.param(*test, id="B{}-T{}-H{}-chunk_size{}".format(*test))
            for test in [
                (1, 63, 1, 16),
                (2, 500, 4, 32),
                (2, 1000, 5, 64),
                (3, 1024, 6, 64),
                (4, 2048, 8, 64),
            ]
        ],
    )
    def test_solve_tril(B, T, H, chunk_size):
        # do not randomly initialize A otherwise the inverse is not stable
        k = F.normalize(
            torch.randn((B, H, T, 64), dtype=torch.float32, device=NPU_DEVICE), dim=-1
        )
        torch.npu.synchronize()
        # Pad the second-to-last dimension (T) to be a multiple of chunk_size
        padding_size = (chunk_size - T % chunk_size) % chunk_size
        k_padded = F.pad(k, (0, 0, 0, padding_size, 0, 0, 0, 0))
        torch.npu.synchronize()
        k_padded = k_padded.reshape(B, H, -1, chunk_size, 64)
        torch.npu.synchronize()
        A = (k_padded @ k_padded.transpose(-1, -2)).tril(-1).npu()
        torch.npu.synchronize()
    
        ref = torch.inverse(
            A
            + torch.eye(A.shape[-1], dtype=A.dtype, device=A.device)[None, None, None, ...]
        )
        torch.npu.synchronize()
        ref = ref.reshape(B, H, -1, chunk_size)[:, :, :T, :]
    
        torch.npu.synchronize()
        tri = solve_tril(
            A.reshape(B, H, -1, chunk_size)[:, :, :T, :].transpose(1, 2)
        ).transpose(1, 2)
        torch.npu.synchronize()
    
>       assert_close("solve_tril", ref, tri, 0.0001)

tests/python/sgl_kernel_npu/test_solve_tril.py:81: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

prefix = 'solve_tril', ref = tensor([[[[ 1.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00],
          [...    [-1.5804e-01,  9.9914e-02,  1.2779e-01,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00]]]], device='npu:1')
tri = tensor([[[[ 1.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00],
          [...    [-5.9636e-02,  2.2865e-01, -2.9713e-01,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00]]]], device='npu:1'), ratio = 0.0001
warning = False, err_atol = 1e-06

    def assert_close(prefix, ref, tri, ratio, warning=False, err_atol=1e-6):
        abs_atol = get_abs_err(ref, tri)
        msg = f"{prefix:>16} diff: {abs_atol:.6f} ratio: {get_err_ratio(ref, tri):.6f}"
        error_rate = get_err_ratio(ref, tri)
        if abs_atol <= err_atol:
            return
        else:
>           assert error_rate < ratio, msg
E           AssertionError:       solve_tril diff: 0.930396 ratio: 0.801516
E           assert 0.8015162599770439 < 0.0001

tests/python/sgl_kernel_npu/test_solve_tril.py:29: AssertionError
---------------------------------------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------------------------------------
[WARNING] Please DO NOT tune args ['num_warps', 'num_stages']!
[WARNING] Please DO NOT tune args ['num_warps', 'num_stages']!
_________________________________________________________________________________________________________________ test_solve_tril[B3-T1024-H6-chunk_size64] __________________________________________________________________________________________________________________

B = 3, T = 1024, H = 6, chunk_size = 64

    @pytest.mark.parametrize(
        ("B", "T", "H", "chunk_size"),
        [
            pytest.param(*test, id="B{}-T{}-H{}-chunk_size{}".format(*test))
            for test in [
                (1, 63, 1, 16),
                (2, 500, 4, 32),
                (2, 1000, 5, 64),
                (3, 1024, 6, 64),
                (4, 2048, 8, 64),
            ]
        ],
    )
    def test_solve_tril(B, T, H, chunk_size):
        # do not randomly initialize A otherwise the inverse is not stable
        k = F.normalize(
            torch.randn((B, H, T, 64), dtype=torch.float32, device=NPU_DEVICE), dim=-1
        )
        torch.npu.synchronize()
        # Pad the second-to-last dimension (T) to be a multiple of chunk_size
        padding_size = (chunk_size - T % chunk_size) % chunk_size
        k_padded = F.pad(k, (0, 0, 0, padding_size, 0, 0, 0, 0))
        torch.npu.synchronize()
        k_padded = k_padded.reshape(B, H, -1, chunk_size, 64)
        torch.npu.synchronize()
        A = (k_padded @ k_padded.transpose(-1, -2)).tril(-1).npu()
        torch.npu.synchronize()
    
        ref = torch.inverse(
            A
            + torch.eye(A.shape[-1], dtype=A.dtype, device=A.device)[None, None, None, ...]
        )
        torch.npu.synchronize()
        ref = ref.reshape(B, H, -1, chunk_size)[:, :, :T, :]
    
        torch.npu.synchronize()
        tri = solve_tril(
            A.reshape(B, H, -1, chunk_size)[:, :, :T, :].transpose(1, 2)
        ).transpose(1, 2)
        torch.npu.synchronize()
    
>       assert_close("solve_tril", ref, tri, 0.0001)

tests/python/sgl_kernel_npu/test_solve_tril.py:81: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

prefix = 'solve_tril', ref = tensor([[[[ 1.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00],
          [...    [-5.4516e-03, -3.2743e-02, -5.9132e-02,  ...,  3.9660e-04,
           -6.0246e-02,  1.0000e+00]]]], device='npu:1')
tri = tensor([[[[ 1.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00],
          [...    [-1.1992e-01, -1.7553e-01,  1.5066e-01,  ...,  5.3048e-03,
           -6.0246e-02,  1.0000e+00]]]], device='npu:1'), ratio = 0.0001
warning = False, err_atol = 1e-06

    def assert_close(prefix, ref, tri, ratio, warning=False, err_atol=1e-6):
        abs_atol = get_abs_err(ref, tri)
        msg = f"{prefix:>16} diff: {abs_atol:.6f} ratio: {get_err_ratio(ref, tri):.6f}"
        error_rate = get_err_ratio(ref, tri)
        if abs_atol <= err_atol:
            return
        else:
>           assert error_rate < ratio, msg
E           AssertionError:       solve_tril diff: 1.126149 ratio: 0.808762
E           assert 0.8087621634011518 < 0.0001

tests/python/sgl_kernel_npu/test_solve_tril.py:29: AssertionError
---------------------------------------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------------------------------------
[WARNING] Please DO NOT tune args ['num_warps', 'num_stages']!
[WARNING] Please DO NOT tune args ['num_warps', 'num_stages']!
_________________________________________________________________________________________________________________ test_solve_tril[B4-T2048-H8-chunk_size64] __________________________________________________________________________________________________________________

B = 4, T = 2048, H = 8, chunk_size = 64

    @pytest.mark.parametrize(
        ("B", "T", "H", "chunk_size"),
        [
            pytest.param(*test, id="B{}-T{}-H{}-chunk_size{}".format(*test))
            for test in [
                (1, 63, 1, 16),
                (2, 500, 4, 32),
                (2, 1000, 5, 64),
                (3, 1024, 6, 64),
                (4, 2048, 8, 64),
            ]
        ],
    )
    def test_solve_tril(B, T, H, chunk_size):
        # do not randomly initialize A otherwise the inverse is not stable
        k = F.normalize(
            torch.randn((B, H, T, 64), dtype=torch.float32, device=NPU_DEVICE), dim=-1
        )
        torch.npu.synchronize()
        # Pad the second-to-last dimension (T) to be a multiple of chunk_size
        padding_size = (chunk_size - T % chunk_size) % chunk_size
        k_padded = F.pad(k, (0, 0, 0, padding_size, 0, 0, 0, 0))
        torch.npu.synchronize()
        k_padded = k_padded.reshape(B, H, -1, chunk_size, 64)
        torch.npu.synchronize()
        A = (k_padded @ k_padded.transpose(-1, -2)).tril(-1).npu()
        torch.npu.synchronize()
    
        ref = torch.inverse(
            A
            + torch.eye(A.shape[-1], dtype=A.dtype, device=A.device)[None, None, None, ...]
        )
        torch.npu.synchronize()
        ref = ref.reshape(B, H, -1, chunk_size)[:, :, :T, :]
    
        torch.npu.synchronize()
        tri = solve_tril(
            A.reshape(B, H, -1, chunk_size)[:, :, :T, :].transpose(1, 2)
        ).transpose(1, 2)
        torch.npu.synchronize()
    
>       assert_close("solve_tril", ref, tri, 0.0001)

tests/python/sgl_kernel_npu/test_solve_tril.py:81: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

prefix = 'solve_tril', ref = tensor([[[[ 1.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00],
          [...    [ 4.4869e-02,  1.1858e-01, -3.4823e-02,  ..., -4.2822e-02,
            1.2116e-01,  1.0000e+00]]]], device='npu:1')
tri = tensor([[[[ 1.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
            0.0000e+00,  0.0000e+00],
          [...    [ 1.2331e-01,  1.1185e-02,  3.4423e-01,  ..., -3.2281e-02,
            1.3875e-01,  1.0000e+00]]]], device='npu:1'), ratio = 0.0001
warning = False, err_atol = 1e-06

    def assert_close(prefix, ref, tri, ratio, warning=False, err_atol=1e-6):
        abs_atol = get_abs_err(ref, tri)
        msg = f"{prefix:>16} diff: {abs_atol:.6f} ratio: {get_err_ratio(ref, tri):.6f}"
        error_rate = get_err_ratio(ref, tri)
        if abs_atol <= err_atol:
            return
        else:
>           assert error_rate < ratio, msg
E           AssertionError:       solve_tril diff: 1.079847 ratio: 0.805887
E           assert 0.8058865543648472 < 0.0001

tests/python/sgl_kernel_npu/test_solve_tril.py:29: AssertionError
---------------------------------------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------------------------------------
[WARNING] Please DO NOT tune args ['num_warps', 'num_stages']!
[WARNING] Please DO NOT tune args ['num_warps', 'num_stages']!
============================================================================================================================== warnings summary ==============================================================================================================================
venv/lib/python3.10/site-packages/torch_npu/utils/collect_env.py:58
venv/lib/python3.10/site-packages/torch_npu/utils/collect_env.py:58
  <MASKED>/TRI_INV/sgl-kernel-npu/venv/lib/python3.10/site-packages/torch_npu/utils/collect_env.py:58: UserWarning: Warning: The /usr/local/Ascend/ascend-toolkit/latest owner does not match the current owner.
    warnings.warn(f"Warning: The {path} owner does not match the current owner.")

venv/lib/python3.10/site-packages/torch_npu/utils/collect_env.py:58
venv/lib/python3.10/site-packages/torch_npu/utils/collect_env.py:58
  <MASKED>/TRI_INV/sgl-kernel-npu/venv/lib/python3.10/site-packages/torch_npu/utils/collect_env.py:58: UserWarning: Warning: The /usr/local/Ascend/ascend-toolkit/8.5.0.alpha002/x86_64-linux/ascend_toolkit_install.info owner does not match the current owner.
    warnings.warn(f"Warning: The {path} owner does not match the current owner.")

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
========================================================================================================================== short test summary info ===========================================================================================================================
FAILED tests/python/sgl_kernel_npu/test_solve_tril.py::test_solve_tril[B2-T500-H4-chunk_size32] - AssertionError:       solve_tril diff: 0.761018 ratio: 0.607361
FAILED tests/python/sgl_kernel_npu/test_solve_tril.py::test_solve_tril[B2-T1000-H5-chunk_size64] - AssertionError:       solve_tril diff: 0.930396 ratio: 0.801516
FAILED tests/python/sgl_kernel_npu/test_solve_tril.py::test_solve_tril[B3-T1024-H6-chunk_size64] - AssertionError:       solve_tril diff: 1.126149 ratio: 0.808762
FAILED tests/python/sgl_kernel_npu/test_solve_tril.py::test_solve_tril[B4-T2048-H8-chunk_size64] - AssertionError:       solve_tril diff: 1.079847 ratio: 0.805887
================================================================================================================== 4 failed, 1 passed, 4 warnings in 4.82s ===================================================================================================================
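For readers triaging the failures: the reference side of the test (torch.inverse of I + A) can be sanity-checked on CPU, independently of the NPU kernel, by rebuilding one failing configuration and checking the defining identity (I + A) @ ref == I. A minimal sketch reusing the construction from the test above (the shapes and names mirror the test; this is not part of the PR):

import torch
import torch.nn.functional as F

# One of the failing configurations from the log above.
B, T, H, chunk_size = 2, 500, 4, 32

k = F.normalize(torch.randn(B, H, T, 64), dim=-1)
# Pad T up to a multiple of chunk_size, then split into chunk-local blocks.
padding_size = (chunk_size - T % chunk_size) % chunk_size
k_padded = F.pad(k, (0, 0, 0, padding_size)).reshape(B, H, -1, chunk_size, 64)

# Strictly lower-triangular Gram matrices per chunk, as in the test.
A = (k_padded @ k_padded.transpose(-1, -2)).tril(-1)
I = torch.eye(chunk_size, dtype=A.dtype)[None, None, None]
ref = torch.inverse(A + I)

# The reference should satisfy (I + A) @ ref == I up to float32 rounding.
# A residual near machine epsilon here suggests the large "ratio" values in
# the log come from the kernel output (or how it is compared), not from the
# reference construction.
residual = ((A + I) @ ref - I).abs().max()
print(f"max |(I + A) @ ref - I| = {residual.item():.3e}")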

@gemini-code-assist (Contributor)

Summary of Changes

Hello @zouzias, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new unit test to verify the numerical accuracy of the solve_tril_npu function, which is likely a custom NPU-optimized implementation for solving triangular systems. The test, adapted from an upstream project, currently highlights significant discrepancies between the custom implementation and a reference PyTorch inverse operation, suggesting that further investigation and refinement of solve_tril_npu's precision are required.

Highlights

  • New Unit Test Added: A new unit test file, tests/python/sgl_kernel_npu/test_solve_tril.py, has been introduced to validate the solve_tril_npu function.
  • Upstream Test Adaptation: The added test is adapted from the flash-linear-attention repository, ensuring consistency with established benchmarks.
  • Accuracy Validation: The test compares the solve_tril_npu implementation against torch.inverse for various tensor configurations, using a strict error-ratio tolerance of 0.0001 (a sketch of the underlying triangular-inverse relation follows this list).
  • Identified Failures: Initial runs of the new test reveal that 4 out of 5 parameterized test cases are failing due to significant accuracy discrepancies, indicating potential issues with the solve_tril_npu implementation.
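For context on the relation being validated (an illustration, not the kernel's actual algorithm): because the test zeroes the diagonal with .tril(-1), the matrix being inverted is I + A with A strictly lower triangular, i.e. unit lower triangular. Its inverse can be computed by forward substitution instead of a general torch.inverse, which is presumably what a solve_tril-style kernel does per chunk. A minimal CPU sketch of that relation (function name and shapes here are illustrative):

import torch
import torch.nn.functional as F

def unit_tril_inverse(A: torch.Tensor) -> torch.Tensor:
    """Invert I + A by forward substitution, for strictly lower-triangular A.

    A: (..., n, n) with zeros on and above the diagonal. The result is again
    unit lower triangular, which is the structure a solve_tril-style kernel
    can exploit instead of calling a general matrix inverse.
    """
    n = A.shape[-1]
    eye = torch.eye(n, dtype=A.dtype, device=A.device).expand(A.shape).clone()
    X = eye.clone()
    for i in range(1, n):
        # Row i of X depends only on already-computed rows < i:
        # X[i, :] = e_i - A[i, :i] @ X[:i, :]
        X[..., i, :] = eye[..., i, :] - torch.einsum(
            "...k,...kj->...j", A[..., i, :i], X[..., :i, :]
        )
    return X

# Agreement with the test's reference (torch.inverse) on CPU, in float64.
# k is normalized as in the test so that I + A stays well conditioned.
k = F.normalize(torch.randn(2, 32, 8, dtype=torch.float64), dim=-1)
A = (k @ k.transpose(-1, -2)).tril(-1)
ref = torch.inverse(torch.eye(32, dtype=torch.float64) + A)
assert torch.allclose(unit_tril_inverse(A), ref, atol=1e-8)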


@gemini-code-assist bot left a comment

Code Review

This PR adds a unit test for solve_tril from an upstream repository. The test is currently failing with large numerical errors, which likely points to an issue in the solve_tril_npu implementation that is being tested.

My review focuses on the quality and clarity of the new test code. I've found a few opportunities for improvement:

  • The file contains an unused import (os) and an unused function (print_diff).
  • The assert_close helper function has an unused parameter (warning).
  • The tolerance for the assertion is very strict, and the test is failing. While this indicates a problem in the kernel, the test could be more helpful for debugging.
  • The tensor reshaping logic is complex and could be clarified with a code comment (an illustrative example follows at the end of this review).

I've provided specific suggestions in the comments below.
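As one concrete illustration of the last bullet (not the reviewer's actual suggested text), the solve_tril call from the test could carry shape comments along these lines, assuming the kernel expects the (B, T, H, chunk_size) layout implied by the test's transposes:

# A has shape (B, H, num_chunks, chunk_size, chunk_size): one strictly
# lower-triangular block per chunk. Flattening the chunk blocks back onto the
# time axis gives (B, H, T_padded, chunk_size); slicing [:, :, :T, :] crops the
# padding rows, and transpose(1, 2) moves heads after time so solve_tril
# receives (B, T, H, chunk_size). The final transpose restores
# (B, H, T, chunk_size) to match the layout of the torch.inverse reference.
tri = solve_tril(
    A.reshape(B, H, -1, chunk_size)[:, :, :T, :].transpose(1, 2)
).transpose(1, 2)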

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@RuixuanZhang06 RuixuanZhang06 merged commit 640d5a7 into sgl-project:main Jan 22, 2026
1 check passed
Yael-X added a commit to Yael-X/sgl-kernel-npu that referenced this pull request Jan 26, 2026
* 'main' of https://github.com/sgl-project/sgl-kernel-npu: (24 commits)
  [Doc] Improved README.md content and English grammar and integrated the DeepWiki badge for Ask AI (sgl-project#345)
  (test) add solve_tril from upstream (sgl-project#339)
  Add AscendC triangular inverse (sgl-project#332)
  support the situation that topk maybe -1 on machine A3 (sgl-project#313)
  chunk_gated_delta_rule_npu output final state (sgl-project#341)
  The environment variable DEEPEP_HCCL_BUFFSIZE is added, and the priority of DEEPEP_HCCL_BUFFSIZE is higher than that of HCCL_BUFFSIZE. (sgl-project#329)
  Added the low_latency operator API documentation. (sgl-project#337)
  Added the verification of num_max_dispatch_tokens_per_rank to the decode operator adaptation layer. (sgl-project#330)
  Document get_dispatch_layout API (sgl-project#338)
  【Doc】add fused deep moe doc (sgl-project#335)
  add deepep normal api doc (sgl-project#336)
  remove the limit that A2 internode only support topk 8 (sgl-project#323)
  Optimize the performance of the Combine Ant Moving function and the use of HCCL buffer (sgl-project#314)
  deepep adapt custom cann installation path (sgl-project#327)
  [Chore] CANN version bump to 8.5.0 (sgl-project#326)
  add dfx for operator FusedDeepMoe (sgl-project#317)
  Integrate ccache for faster compilation (sgl-project#318)
  Modify contribution guide (sgl-project#315)
  fix bmm transpose in cann 8.5 (sgl-project#316)
  fix little batchsize and int8 quant on ci (sgl-project#302)
  ...
zhuyutong332 added a commit to zhuyutong332/sgl-kernel-npu that referenced this pull request Jan 27, 2026
* upstream/main:
  add function for deep-ep tests (sgl-project#301)
  [Doc] Improved README.md content and English grammar and integrated the DeepWiki badge for Ask AI (sgl-project#345)
  (test) add solve_tril from upstream (sgl-project#339)
  Add AscendC triangular inverse (sgl-project#332)
  support the situation that topk maybe -1 on machine A3 (sgl-project#313)
  chunk_gated_delta_rule_npu output final state (sgl-project#341)
  The environment variable DEEPEP_HCCL_BUFFSIZE is added, and the priority of DEEPEP_HCCL_BUFFSIZE is higher than that of HCCL_BUFFSIZE. (sgl-project#329)
  Added the low_latency operator API documentation. (sgl-project#337)
  Added the verification of num_max_dispatch_tokens_per_rank to the decode operator adaptation layer. (sgl-project#330)
  Document get_dispatch_layout API (sgl-project#338)
  【Doc】add fused deep moe doc (sgl-project#335)
  add deepep normal api doc (sgl-project#336)
  remove the limit that A2 internode only support topk 8 (sgl-project#323)
  Optimize the performance of the Combine Ant Moving function and the use of HCCL buffer (sgl-project#314)
  deepep adapt custom cann installation path (sgl-project#327)
  [Chore] CANN version bump to 8.5.0 (sgl-project#326)
  add dfx for operator FusedDeepMoe (sgl-project#317)
  Integrate ccache for faster compilation (sgl-project#318)
AndyKong2020 pushed a commit to AndyKong2020/sgl-kernel-npu that referenced this pull request Mar 24, 2026
* (test) add solve_tril from upstream