
Conversation


@sufubao sufubao commented Sep 15, 2025

No description provided.

@gemini-code-assist gemini-code-assist bot left a comment

Summary of Changes

Hello @sufubao, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly simplifies the configuration for key performance optimizations by enabling Flash Attention 3.0 and Fused Shared Experts by default. It introduces corresponding --disable flags for users who need to opt out of these features. A crucial enhancement is the automatic runtime detection of FA3 compatibility, ensuring stable operation across diverse environments. Additionally, the default HTTP server worker count has been adjusted to improve out-of-the-box performance.

Highlights

  • Default FA3 Activation: Flash Attention 3.0 (FA3) is now enabled by default, improving performance without requiring explicit --enable_fa3 flags. A new --disable_fa3 flag is introduced to turn it off.
  • Default Fused Shared Experts: Fused Shared Experts are now enabled by default for DeepSeekV3 models, enhancing efficiency. A new --disable_fused_shared_experts flag allows disabling this feature.
  • Automatic FA3 Compatibility Check: The system now automatically detects if the current hardware (Hopper/Ada Lovelace GPUs) and software (sgl_kernel package) support FA3 at startup. If not, FA3 is automatically disabled with a warning.
  • Increased Default HTTP Server Workers: The default number of HTTP server workers has been increased from 1 to 4, potentially improving API server responsiveness.
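
To make the new opt-out defaults concrete, here is a minimal argparse sketch of how inverted flags of this kind are typically wired. Only the --disable_fa3 and --disable_fused_shared_experts flag names come from this PR; the parser layout, the --httpserver_workers flag name, and the derived booleans are illustrative assumptions rather than the project's actual code.

    # Hypothetical sketch only -- not the project's actual argument parser.
    # The --httpserver_workers name and default handling are assumptions; the PR
    # only states that the default worker count rose from 1 to 4.
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--disable_fa3",
        action="store_true",
        help="Disable Flash Attention 3.0 (enabled by default).",
    )
    parser.add_argument(
        "--disable_fused_shared_experts",
        action="store_true",
        help="Disable fused shared experts for DeepSeekV3 models (enabled by default).",
    )
    parser.add_argument(
        "--httpserver_workers",  # assumed flag name
        type=int,
        default=4,
        help="Number of HTTP server workers.",
    )

    args = parser.parse_args([])  # defaults: FA3 on, fused shared experts on, 4 workers
    use_fa3 = not args.disable_fa3
    use_fused_shared_experts = not args.disable_fused_shared_experts
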
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request refactors the configuration to enable FA3 and fused_shared_experts by default. This is achieved by changing the --enable_* flags to --disable_* flags and updating the logic throughout the codebase to reflect this inversion. The documentation and example scripts have also been updated accordingly. A key improvement is the addition of a runtime check for FA3 support, which gracefully disables the feature with a warning if the environment is not compatible. The changes are well-implemented and consistent. I've provided a couple of suggestions to improve the clarity of user-facing error and warning messages.

Comment on lines +115 to 127
  assert args.disable_fa3 is False or (
      args.enable_flashinfer_prefill is True and args.enable_flashinfer_decode is True
  ), (
-     "offline_calibration_fp8kv mode need enable fa3 or flashinfer, add --enable_fa3 or "
+     "offline_calibration_fp8kv mode need enable fa3 or flashinfer, add --disable_fa3 False or "
      "--enable_flashinfer_prefill and --enable_flashinfer_decode"
  )
  if "export_fp8kv_calibration" in args.mode:
-     assert args.enable_fa3 is True or (
+     assert args.disable_fa3 is False or (
          args.enable_flashinfer_prefill is True and args.enable_flashinfer_decode is True
      ), (
-         "export_fp8kv_calibration mode need enable fa3 or flashinfer, add --enable_fa3 or "
+         "export_fp8kv_calibration mode need enable fa3 or flashinfer, add --disable_fa3 False or "
          "--enable_flashinfer_prefill and --enable_flashinfer_decode"
      )


medium

The error messages for these assertions are a bit confusing for the user. --disable_fa3 is a boolean flag, so telling users to "add --disable_fa3 False" is not a valid instruction. A clearer message would improve the user experience.

Suggested change
assert args.disable_fa3 is False or (
    args.enable_flashinfer_prefill is True and args.enable_flashinfer_decode is True
), (
    "offline_calibration_fp8kv mode requires FA3 (enabled by default) or FlashInfer. "
    "To use FlashInfer, please add --enable_flashinfer_prefill and --enable_flashinfer_decode."
)
if "export_fp8kv_calibration" in args.mode:
    assert args.disable_fa3 is False or (
        args.enable_flashinfer_prefill is True and args.enable_flashinfer_decode is True
    ), (
        "export_fp8kv_calibration mode requires FA3 (enabled by default) or FlashInfer. "
        "To use FlashInfer, please add --enable_flashinfer_prefill and --enable_flashinfer_decode."
    )

Comment on lines +135 to +139
logger.warning(
    "FA3 is enabled but not supported on this hardware/software environment. "
    "FA3 requires Hopper architecture (H100, H200, H800) or newer, and sgl_kernel package. "
    "Disabling FA3 and falling back to other attention kernels."
)


medium

The is_fa3_supported() check correctly includes support for Ada Lovelace architecture GPUs. To avoid confusion for users with these cards, it would be good to mention Ada in this warning message as well.

Suggested change
logger.warning(
    "FA3 is enabled but not supported on this hardware/software environment. "
    "FA3 requires Hopper (e.g., H100) or Ada (e.g., RTX 4090) architecture or newer, and the sgl_kernel package. "
    "Disabling FA3 and falling back to other attention kernels."
)
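
As a companion to the warning above, here is a self-contained sketch of what a check like is_fa3_supported() could look like. The function name and the Hopper/Ada-plus-sgl_kernel requirement come from this review; the torch compute-capability probe, the SM 8.9/9.0 thresholds, and the resolve_fa3() helper are assumptions, not the PR's actual implementation.

    # Hypothetical sketch; the real check in this PR is not shown in the review.
    # Assumes Ada Lovelace is SM 8.9 and Hopper is SM 9.0, and that FA3 kernels
    # are provided by the sgl_kernel package.
    import importlib.util
    import logging

    import torch

    logger = logging.getLogger(__name__)


    def is_fa3_supported(device_id: int = 0) -> bool:
        if not torch.cuda.is_available():
            return False
        major, minor = torch.cuda.get_device_capability(device_id)
        hardware_ok = (major, minor) >= (8, 9)  # Ada Lovelace or Hopper and newer
        kernel_ok = importlib.util.find_spec("sgl_kernel") is not None
        return hardware_ok and kernel_ok


    def resolve_fa3(disable_fa3: bool) -> bool:
        """Decide whether to use FA3, mirroring the fallback behaviour described above."""
        if disable_fa3:
            return False
        if not is_fa3_supported():
            logger.warning(
                "FA3 is enabled but not supported on this hardware/software environment. "
                "Disabling FA3 and falling back to other attention kernels."
            )
            return False
        return True
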
