
[Gated DeltaNet] fix gdn kernel bugs on h100 when vdim=64 #256

Merged
@yzhangcs merged 2 commits into fla-org:main from kugwzk:h100-gdn-kernel-fix
Mar 29, 2025

Conversation

@kugwzk (Contributor) commented Mar 29, 2025

Summary by CodeRabbit

  • Chores
    • Autotuning settings are now adjusted dynamically based on device capability, keeping the existing options where they are supported.

@coderabbitai bot (Contributor) commented Mar 29, 2025

Walkthrough

The pull request modifies two functions in fla/ops/common/chunk_o.py: the triton.autotune configuration for both chunk_fwd_kernel_o and chunk_bwd_kernel_dqkwg has been updated. The num_warps candidates are now [2, 4], with [8] appended conditionally based on device capability. The num_stages parameter is unchanged, and no exported or public entities change.

Changes

| File | Modified Functions | Summary |
| --- | --- | --- |
| fla/ops/common/chunk_o.py | chunk_fwd_kernel_o, chunk_bwd_kernel_dqkwg | Updated the triton.autotune decorator: num_warps is now [2, 4], with [8] appended conditionally based on device capability. |
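
For concreteness, here is a minimal sketch of what the described change could look like when building the triton.autotune configs. The num_stages candidates and the config keys are assumptions for illustration, not the exact diff:

```python
import torch
import triton

# Keep [8] as a num_warps candidate only on pre-Hopper GPUs; on H100
# (compute capability 9.x) the 8-warp configs trip a compile-time
# assertion when vdim=64, so they are dropped there.
NUM_WARPS = [2, 4] + ([8] if torch.cuda.get_device_capability()[0] < 9 else [])

configs = [
    triton.Config({}, num_warps=num_warps, num_stages=num_stages)
    for num_warps in NUM_WARPS
    for num_stages in [2, 3, 4]  # illustrative; the PR leaves num_stages unchanged
]
```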

Poem

I’m a bunny with code so light,
Tweaking warps to run just right,
From three to two, the numbers shrink,
In kernels where the bytes all link,
Hopping through loops with a joyful beat—
Debugging by day, coding so sweet!
🐇🌟



@kugwzk changed the title from [Gated Deltaet] fix gdn kernel bugs on h100 when vdim=64 to [Gated DeltaNet] fix gdn kernel bugs on h100 when vdim=64 on Mar 29, 2025
@yzhangcs (Member) commented Mar 29, 2025

@kugwzk how about
[2, 4] + ([] if torch.cuda.get_device_capability()[0] >= 9 else [8])
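
On a compute-capability-9 device the suggested expression yields [2, 4]; elsewhere it yields [2, 4, 8]. A quick sanity check with the device query stubbed out (the helper below is hypothetical):

```python
def num_warps_for(capability_major: int) -> list[int]:
    # hypothetical stand-in for the suggestion above, with the device
    # capability passed in instead of queried from torch.cuda
    return [2, 4] + ([] if capability_major >= 9 else [8])

assert num_warps_for(9) == [2, 4]     # H100 (sm90): drop the failing 8-warp configs
assert num_warps_for(8) == [2, 4, 8]  # pre-Hopper: keep the 8-warp configs
```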

@zhiyuan1i (Collaborator) commented Mar 29, 2025

We are moving towards Triton 3.3.0 adaptation, and blindly reducing warps will significantly reduce performance on other platforms.
It seems that Triton 3.3.0 will solve this problem (the compile-time assertion error)?

I will pay close attention to it, and maybe we can provide some workaround solutions.

It seems it still fails on H100 with yesterday's Triton 3.3.0 nightly:
https://github.com/fla-org/flash-linear-attention/actions/runs/14147130702/job/39635780940?pr=256

@yzhangcs merged commit 7962e24 into fla-org:main Mar 29, 2025
3 of 5 checks passed
yzhangcs added a commit that referenced this pull request Mar 30, 2025
* [Gated DeltaNet] Fix gdn kernel bugs on h100 when vdim=64 (#256)

* fix h100 erros(part1)

* fix

* fix2

* fix

* fix

* update ci pipeline

* pre-commit

* add ci proxy

* remove https to use hosts

* fix nightly ci

* f

* [README] Fix footnote bugs

* enhance

* pre-commit

* Delete unnecessary lines

* remove magic numer

* remove hardcode proxy

* remove unnecessary seq_len in test

* Refactor code and variable naming

* fffff

* revert

* Change item order in Enum

* update

* add comments for intel grf_mode

* use `check_shared_mem()` instead of `device_capacity`

* pre-commit

* fix

* skip tests for 4090 and use triton 3.3.0 for h100

* skip

* update faq

---------

Co-authored-by: Yu Zhang <yzhang.cs@outlook.com>
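
One step in the commit log above, "use `check_shared_mem()` instead of `device_capacity`", gates the larger configs on available shared memory rather than on compute capability. A hedged sketch of that idea; fla's actual check_shared_mem helper and its signature may differ:

```python
import torch

def check_shared_mem(min_bytes: int = 100 * 1024, device_index: int = 0) -> bool:
    # Hypothetical re-implementation: report whether the device offers at
    # least `min_bytes` of shared memory per block, so callers can decide
    # if the larger tile/warp configs are safe to include in autotuning.
    props = torch.cuda.get_device_properties(device_index)
    # Not every torch build exposes this field; fall back to the
    # architecture-independent 48 KiB per-block default.
    available = getattr(props, "shared_memory_per_block", 48 * 1024)
    return available >= min_bytes
```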
@coderabbitai bot mentioned this pull request Apr 13, 2026