
[Bugfix] Work around FlashInfer unified transport issue on GB #20039

Merged
Fridge003 merged 3 commits into sgl-project:main from mmangkad-dev:test-fix-fi-ar on Mar 23, 2026
Conversation

@mmangkad
Contributor

@mmangkad mmangkad commented Mar 6, 2026

Summary

Fix #19884

This adds a temporary workaround for incorrect outputs on GB200/GB300 systems by forcing PosixFD transport for FlashInfer’s unified allreduce-fusion workspace path until the underlying transport issue is fixed.

@Fridge003 @JustinTong0323

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a critical bug fix to prevent data corruption and incorrect outputs when using FlashInfer's unified allreduce-fusion workspace on specific NVIDIA GB series GPUs (GB200/GB300). It achieves this by implementing a temporary, platform-aware workaround that forces the use of PosixFD transport for symmetric-memory handle exchange, bypassing a known issue with Fabric handle exchange on these systems. The change improves the stability and reliability of FlashInfer operations on affected hardware.

Highlights

  • FlashInfer Transport Workaround: Implemented a temporary workaround to address incorrect outputs on GB200/GB300 systems by forcing PosixFD transport for FlashInfer's unified allreduce-fusion workspace.
  • Platform-Specific Activation: The workaround is conditionally applied to 'aarch64' or 'arm64' machines with CUDA compute capability 10.x (SM10x).
  • Configurable Override: Introduced an environment variable SGLANG_FLASHINFER_FORCE_POSIX_FD_TRANSPORT to explicitly enable or disable the PosixFD transport override (see the sketch after this list).
  • Context Manager for Safety: Encapsulated the transport override logic within a contextlib.contextmanager to ensure proper restoration of FlashInfer's original behavior.
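
A minimal sketch of the detection and override logic described in the items above, reusing the helper names listed in the changelog below (`_parse_optional_env_bool`, `_should_force_posix_fd_transport`); the exact bodies are illustrative assumptions, not the merged implementation:

```python
import os
import platform

import torch


def _parse_optional_env_bool(name: str):
    """Return True/False when the env var is set to a truthy/falsy value, else None."""
    value = os.environ.get(name)
    if value is None or value.strip() == "":
        return None
    return value.strip().lower() in ("1", "true", "yes", "on")


def _should_force_posix_fd_transport() -> bool:
    # An explicit override always wins over platform detection.
    override = _parse_optional_env_bool("SGLANG_FLASHINFER_FORCE_POSIX_FD_TRANSPORT")
    if override is not None:
        return override
    # Otherwise apply the workaround only on aarch64/arm64 hosts with SM 10.x GPUs
    # (GB200/GB300), where Fabric handle exchange misbehaves.
    if platform.machine().lower() not in ("aarch64", "arm64"):
        return False
    if not torch.cuda.is_available():
        return False
    major, _ = torch.cuda.get_device_capability()
    return major == 10
```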


Changelog
  • python/sglang/srt/layers/flashinfer_comm_fusion.py
    • Added necessary imports for contextlib, os, and platform.
    • Introduced a global flag _posix_transport_override_logged to prevent repetitive warning messages.
    • Implemented _parse_optional_env_bool to safely parse boolean environment variables.
    • Developed _should_force_posix_fd_transport to dynamically determine if the workaround is required based on the system's architecture (aarch64/arm64) and CUDA compute capability (SM10x).
    • Created _flashinfer_posix_fd_transport_override_if_needed, a context manager that temporarily overrides FlashInfer's is_mnnvl_fabric_supported function to force PosixFD transport.
    • Wrapped the call to _flashinfer_comm.create_allreduce_fusion_workspace within the new context manager to apply the workaround during workspace initialization (see the sketch below).
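
An illustrative sketch of the context manager and the wrapped call site described in the changelog. The patch target `flashinfer.comm.mnnvl.is_mnnvl_fabric_supported` is named in the review comment below; the argument list of `create_allreduce_fusion_workspace` is an assumption and is shown only as a placeholder:

```python
import contextlib
import logging

logger = logging.getLogger(__name__)
_posix_transport_override_logged = False


@contextlib.contextmanager
def _flashinfer_posix_fd_transport_override_if_needed():
    """Temporarily report Fabric handles as unsupported so FlashInfer picks PosixFD."""
    global _posix_transport_override_logged
    if not _should_force_posix_fd_transport():
        yield
        return

    import flashinfer.comm.mnnvl as mnnvl

    original = mnnvl.is_mnnvl_fabric_supported
    mnnvl.is_mnnvl_fabric_supported = lambda *args, **kwargs: False
    if not _posix_transport_override_logged:
        logger.warning(
            "Forcing PosixFD transport for FlashInfer allreduce-fusion workspace "
            "(GB200/GB300 workaround)."
        )
        _posix_transport_override_logged = True
    try:
        yield
    finally:
        # Restore FlashInfer's original behavior even if workspace creation raises.
        mnnvl.is_mnnvl_fabric_supported = original


# Hypothetical call site: the override is active only while the workspace is created.
# with _flashinfer_posix_fd_transport_override_if_needed():
#     workspace = _flashinfer_comm.create_allreduce_fusion_workspace(...)
```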

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a temporary workaround for an issue with FlashInfer's unified transport on Grace Blackwell systems, which was causing incorrect outputs. The fix involves forcing the use of PosixFD transport by monkey-patching flashinfer.comm.mnnvl.is_mnnvl_fabric_supported when running on aarch64 architecture with SM 10.x GPUs. The changes are well-contained within a context manager to ensure the patch is applied only when needed and is properly reverted. My review focuses on improving the robustness of the exception handling. Overall, this is a good and necessary fix.

@mmangkad
Contributor Author

mmangkad commented Mar 7, 2026

/rerun-failed-ci

@Fridge003
Collaborator

Maybe #12787 can tackle this issue with the newly added mnnvl backend.

@mmangkad
Contributor Author

/rerun-stage stage-b-test-large-2-gpu

@github-actions
Contributor

✅ Triggered stage-b-test-large-2-gpu to run independently (skipping dependencies).

@github-actions
Contributor

🔗 View workflow run

@seindum

seindum commented Mar 16, 2026

Hi @mmangkad @Fridge003 @JustinTong0323, may I ask when this PR can be merged? We also hit this problem on the same GB300 device, and this PR fixes it, but using only the mnnvl backend from #12787 does not. Thank you!

@Fridge003 Fridge003 merged commit d8a5b1d into sgl-project:main Mar 23, 2026
566 of 647 checks passed
@mmangkad mmangkad deleted the test-fix-fi-ar branch March 23, 2026 04:52


Development

Successfully merging this pull request may close these issues.

[Bug] allreduce_fusion unified API produces garbled output on Blackwell (GB300, SM 10.3) while legacy trtllm_allreduce_fusion works correctly

5 participants