
Fix CUDA stream race in CuteDSL MoE when CUDA graphs are disabled #2840

Draft

leejnau wants to merge 1 commit into flashinfer-ai:main from leejnau:cutedsl-fix-async-memset-race

Conversation

leejnau (Contributor) commented Mar 20, 2026

📌 Description

Gate async memset on CUDA graph mode in the CuteDSL MoE wrapper. When CUDA graphs are enabled, the same pre-allocated moe_output buffer is reused on every graph replay, so zeroing it on a dedicated per-instance auxiliary stream overlaps with GEMM1 and hides the cost. When CUDA graphs are disabled, buffers are allocated dynamically on each call and do not need async zeroing; a synchronous .zero_() on the main stream is sufficient and avoids a race condition in which multiple MoE layers would share the module-level singleton stream from _get_cuda_graph_resources().

The fix moves stream and event creation from _allocate_buffers() into the if use_cuda_graph: block of __init__, so non-graph wrappers never create CUDA stream resources they don't need. The functional API (_cute_dsl_fused_moe_nvfp4_impl) is similarly updated to use async memset only when the caller provides an explicit auxiliary stream.
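As a rough illustration of the intended control flow (a minimal sketch, not the actual PR diff: the wrapper class is heavily simplified and the _zero_moe_output method name is hypothetical, while use_cuda_graph, _aux_stream, _main_event, and _memset_event come from the description above):

```python
import torch


class CuteDslMoEWrapper:  # heavily simplified; only the parts relevant to the fix
    def __init__(self, use_cuda_graph: bool):
        self.use_cuda_graph = use_cuda_graph
        self._aux_stream = None
        self._main_event = None
        self._memset_event = None
        if use_cuda_graph:
            # Graph mode reuses the same pre-allocated moe_output on every replay,
            # so a per-instance auxiliary stream lets the zeroing overlap with GEMM1.
            self._aux_stream = torch.cuda.Stream()
            self._main_event = torch.cuda.Event()
            self._memset_event = torch.cuda.Event()

    def _zero_moe_output(self, moe_output: torch.Tensor) -> None:
        if not self.use_cuda_graph:
            # Non-graph path: moe_output was freshly allocated for this call,
            # so a synchronous zero on the main stream is sufficient and safe.
            moe_output.zero_()
            return
        main_stream = torch.cuda.current_stream()
        # Fence the auxiliary stream against prior main-stream work, zero the
        # buffer off the critical path, then make the main stream wait before
        # any kernel consumes moe_output.
        self._main_event.record(main_stream)
        self._aux_stream.wait_event(self._main_event)
        with torch.cuda.stream(self._aux_stream):
            moe_output.zero_()
        self._memset_event.record(self._aux_stream)
        main_stream.wait_event(self._memset_event)
```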

🔍 Related Issues

  • feat: cuteDSL fp4 moe for better DSR1 performance.
  • chore: cute dsl nvfp4 moe clean up
  • CuteDSL MoE fix redundant output buffer zeroing

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

coderabbitai bot commented Mar 20, 2026

Review skipped: draft detected. To trigger a single review, invoke the @coderabbitai review command.

gemini-code-assist bot commented

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a potential CUDA stream race condition within the CuteDSL MoE wrapper when CUDA graphs are disabled. The core change involves making the use of asynchronous memset and the allocation of associated CUDA stream resources conditional on whether CUDA graphs are enabled. This ensures correct buffer zeroing behavior, preventing races by using synchronous zeroing when buffers are dynamically allocated without graphs, and optimizing resource allocation by only creating streams and events when they are actually needed for graph-based operations.

Highlights

  • Conditional Async Memset: Asynchronous memset operations are now gated on the use_cuda_graph flag, preventing race conditions when CUDA graphs are disabled by ensuring synchronous zeroing on the main stream for dynamically allocated buffers.
  • Resource Allocation Refinement: CUDA stream and event resources (_aux_stream, _main_event, _memset_event) are now only initialized when use_cuda_graph is enabled, avoiding unnecessary resource creation for non-graph wrappers.
  • Functional API Update: The _cute_dsl_fused_moe_nvfp4_impl functional API has been updated to utilize asynchronous memset only when an explicit auxiliary stream is provided by the caller (see the sketch after this list).
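For the functional path, a minimal sketch of that gating under the same assumptions; the standalone _zero_moe_output helper below is hypothetical and only illustrates how _cute_dsl_fused_moe_nvfp4_impl can fall back to a synchronous zero when no auxiliary stream is passed:

```python
import torch
from typing import Optional


def _zero_moe_output(moe_output: torch.Tensor,
                     aux_stream: Optional[torch.cuda.Stream] = None) -> None:
    """Zero the MoE output; async only when the caller supplies an aux stream."""
    if aux_stream is None:
        # No caller-provided auxiliary stream: synchronous zero on the main
        # stream (the non-graph path of this fix).
        moe_output.zero_()
        return
    main_stream = torch.cuda.current_stream()
    start, done = torch.cuda.Event(), torch.cuda.Event()
    start.record(main_stream)
    aux_stream.wait_event(start)      # order after prior main-stream work
    with torch.cuda.stream(aux_stream):
        moe_output.zero_()            # overlaps with GEMM1 on the main stream
    done.record(aux_stream)
    main_stream.wait_event(done)      # main stream consumes a fully zeroed buffer
```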



gemini-code-assist bot left a comment

Code Review

The pull request effectively addresses the CUDA stream race condition in CuteDSL MoE when CUDA graphs are disabled. The changes correctly gate the creation and usage of async memset resources (auxiliary stream and events) to only occur when CUDA graphs are enabled and pre-allocated buffers are utilized. This prevents unnecessary resource allocation and potential race conditions in non-graph scenarios, aligning with the stated objective of the pull request. The modifications to both the CuteDslMoEWrapper and the functional API _cute_dsl_fused_moe_nvfp4_impl are consistent and logical, ensuring that async memset is only employed when an auxiliary stream is available and intended for use. The added comments enhance code clarity regarding these conditional behaviors.
