
Support internvl on Blackwell (which doesn't support fa3): add SingletonCache support to Vision{Sdpa|Triton|Ascend}Attention#13151

Merged
Fridge003 merged 9 commits into sgl-project:main from
netanel-haber:bugfix/internvl-support-blackwell-via-triton-and-sdpa-singleton-attention
Nov 26, 2025

Conversation

@netanel-haber
Contributor

@netanel-haber netanel-haber commented Nov 12, 2025

Unblocks blackwell support for: #12277

@gemini-code-assist
Contributor

Summary of Changes

Hello @netanel-haber, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the handling of cu_seqlens across various vision attention mechanisms by introducing a dedicated helper function, resolve_seqlens. This change centralizes the logic for obtaining cu_seqlens, including robust support for SingletonCache instances, thereby improving code consistency, maintainability, and type safety within the attention layers.

Highlights

  • New Helper Function: A new function, resolve_seqlens, has been introduced to centralize the logic for determining and retrieving cu_seqlens.
  • SingletonCache Support: The resolve_seqlens function now explicitly handles SingletonCache instances, ensuring that cu_seqlens is read from the cache when it is already populated, or generated and stored in the cache when it is empty.
  • Code Refactoring: The inline logic for resolving cu_seqlens within the forward methods of VisionSdpaAttention, VisionTritonAttention, and VisionAscendAttention has been replaced with calls to the new resolve_seqlens helper function, reducing code duplication.
  • Type Hinting Updates: Type hints for the cu_seqlens parameter in the affected attention mechanisms have been updated to reflect the new torch.Tensor | SingletonCache | None union type, improving clarity and type safety.
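The highlights above can be sketched roughly as follows. This is an illustrative reconstruction, not the actual sglang code: the `SingletonCache` class, the `generate_cu_seqlens` helper, and the use of plain Python lists in place of `torch.Tensor` are all simplifications for the sake of a self-contained example.

```python
class SingletonCache:
    """Simplified stand-in for sglang's SingletonCache: holds one cached value."""

    def __init__(self):
        self._data = None

    def empty(self) -> bool:
        return self._data is None

    def set_data(self, value) -> None:
        self._data = value

    def get_data(self):
        return self._data


def generate_cu_seqlens(batch_size: int, seqlen: int) -> list[int]:
    """Cumulative sequence lengths for equal-length sequences: [0, s, 2s, ..., b*s]."""
    return list(range(0, (batch_size + 1) * seqlen, seqlen))


def resolve_seqlens(cu_seqlens, batch_size: int, seqlen: int):
    """Resolve cu_seqlens from a tensor-like value, a SingletonCache, or None."""
    if isinstance(cu_seqlens, SingletonCache):
        # Fill the cache on first use, then reuse the cached value afterwards.
        if cu_seqlens.empty():
            cu_seqlens.set_data(generate_cu_seqlens(batch_size, seqlen))
        return cu_seqlens.get_data()
    if cu_seqlens is None:
        return generate_cu_seqlens(batch_size, seqlen)
    return cu_seqlens
```

The point of centralizing this is that every vision attention backend now accepts the same `torch.Tensor | SingletonCache | None` union and resolves it identically, instead of each `forward` method re-implementing the cache check inline.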
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@netanel-haber netanel-haber changed the title add SingletonCache support to VisionTritonAttention,VisionTritonAtten… add SingletonCache support to VisionTritonAttention,VisionTritonAttention,VisionAscendAttention, thus fixing internvl support on blackwell, which doesn't support fa3 Nov 12, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the handling of cumulative sequence lengths (cu_seqlens) by introducing a new helper function, resolve_seqlens. This change centralizes duplicated logic from several vision attention implementations, improving code reuse and clarity. It also adds SingletonCache support to VisionTritonAttention and VisionAscendAttention, making them consistent with other attention modules. The type hints are also updated to the modern | syntax. The changes are solid, and I have one suggestion to further improve the readability of the new helper function.
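The centralization the review describes can be illustrated at the call-site level as follows. This is a hypothetical sketch, not the actual sglang code: the class bodies, the list-based `cu_seqlens` (the real signatures use `torch.Tensor | SingletonCache | None`), and the trivial `forward` return values are stand-ins showing only the shared resolution path.

```python
class SingletonCache:
    """Stand-in one-slot cache; the real class lives in sglang."""

    def __init__(self):
        self.data = None


def resolve_seqlens(cu_seqlens, batch_size: int, seqlen: int):
    # Shared path: cache -> fill-or-reuse, None -> generate, tensor -> pass through.
    if isinstance(cu_seqlens, SingletonCache):
        if cu_seqlens.data is None:
            cu_seqlens.data = list(range(0, (batch_size + 1) * seqlen, seqlen))
        return cu_seqlens.data
    if cu_seqlens is None:
        return list(range(0, (batch_size + 1) * seqlen, seqlen))
    return cu_seqlens


class VisionSdpaAttention:
    # Real signature: cu_seqlens: torch.Tensor | SingletonCache | None
    def forward(self, seq, cu_seqlens=None):
        cu = resolve_seqlens(cu_seqlens, batch_size=1, seqlen=len(seq))
        # ...the real method runs SDPA over the segments defined by cu...
        return cu


class VisionTritonAttention(VisionSdpaAttention):
    # Same resolution path; the real class launches a Triton kernel instead.
    pass
```

Because the cache-handling branch lives in one place, a backend that previously rejected `SingletonCache` (the Triton and Ascend paths) gains support for it simply by routing through the helper, which is what unblocks InternVL on hardware where fa3 is unavailable.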

@netanel-haber netanel-haber force-pushed the bugfix/internvl-support-blackwell-via-triton-and-sdpa-singleton-attention branch from 3de4dde to 3d652f7 on November 12, 2025 12:04
@netanel-haber netanel-haber changed the title add SingletonCache support to VisionTritonAttention,VisionTritonAttention,VisionAscendAttention, thus fixing internvl support on blackwell, which doesn't support fa3 add SingletonCache support to VisionTritonAttention,VisionSdpaAttention,VisionAscendAttention, thus fixing internvl support on blackwell, which doesn't support fa3 Nov 12, 2025
@netanel-haber netanel-haber changed the title add SingletonCache support to VisionTritonAttention,VisionSdpaAttention,VisionAscendAttention, thus fixing internvl support on blackwell, which doesn't support fa3 Support internvl on Blackwell (which doesn't fa3): add SingletonCache support to Vision{Sdpa|Triton|Ascend}Attention Nov 12, 2025
netanel-haber and others added 3 commits November 12, 2025 14:09
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@netanel-haber netanel-haber changed the title Support internvl on Blackwell (which doesn't fa3): add SingletonCache support to Vision{Sdpa|Triton|Ascend}Attention Support internvl on Blackwell (which doesn't support fa3): add SingletonCache support to Vision{Sdpa|Triton|Ascend}Attention Nov 12, 2025
@netanel-haber netanel-haber changed the title Support internvl on Blackwell (which doesn't support fa3): add SingletonCache support to Vision{Sdpa|Triton|Ascend}Attention Support internvl on Blackwell (which doesn't support fa3): add SingletonCache support to Vision{Sdpa|Triton|Ascend}Attention Nov 12, 2025
@github-actions github-actions bot added the Multi-modal multi-modal language model label Nov 17, 2025
@b8zhong b8zhong added the run-ci label Nov 17, 2025
@netanel-haber
Contributor Author

@yhyang201 - this is another small orthogonal PR for my nano-vl PR that touches encoder code. Perhaps you can take a look?

@yhyang201
Collaborator

I think this is ready to be merged.
cc @Fridge003

@Fridge003 Fridge003 merged commit 8308cd3 into sgl-project:main Nov 26, 2025
161 of 177 checks passed

Labels

Multi-modal multi-modal language model run-ci


4 participants