
update benchmark README to use --fp8-gemm-backend instead of env var #15689

Merged

Fridge003 merged 1 commit into sgl-project:main from leejnau:dsv3benchmark_remove_gemm_envvar_use_flag on Dec 24, 2025

Conversation

@leejnau (Collaborator) commented Dec 23, 2025

Motivation

Update documentation to reflect changes in: #14379

Modifications

Update benchmark README file for DeepSeek V3.

Accuracy Tests

N/A

Benchmarking and Profiling

N/A

Checklist

@github-actions github-actions bot added the documentation Improvements or additions to documentation label Dec 23, 2025
@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @leejnau, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request updates the documentation for DeepSeek V3 benchmarks within the sglang project. It specifically modifies the example command for launching the SGLang server when using FP8 configurations, ensuring the documentation reflects the current method of specifying the FP8 GEMM backend via a command-line argument rather than an environment variable. This change aligns the benchmark instructions with recent internal updates to the server's configuration options.

Highlights

  • Documentation Update: The benchmark/deepseek_v3/README.md file has been updated to reflect recent changes in how the FP8 GEMM backend is configured.
  • FP8 Backend Configuration: The method for specifying the FP8 GEMM backend in the SGLang server launch command has been changed from an environment variable to a command-line argument.
  • Command Line Argument Adoption: The environment variable SGLANG_ENABLE_FLASHINFER_GEMM=true has been replaced by the dedicated command-line argument --fp8-gemm-backend flashinfer_trtllm (a minimal before/after sketch is shown below).
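
In concrete terms, here is a minimal before/after sketch of the server launch command. It is based on the full benchmark example quoted later in this thread; most flags from that example are omitted here for brevity, and only the FP8 GEMM backend selection changes.

```bash
# Before (deprecated by #14379): select the FlashInfer FP8 GEMM backend via environment variable
SGLANG_ENABLE_FLASHINFER_GEMM=true python3 -m sglang.launch_server \
    --model-path deepseek-ai/DeepSeek-R1-0528 \
    --tensor-parallel-size 8

# After: select the backend with the dedicated command-line argument instead
python3 -m sglang.launch_server \
    --model-path deepseek-ai/DeepSeek-R1-0528 \
    --tensor-parallel-size 8 \
    --fp8-gemm-backend flashinfer_trtllm
```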


@gemini-code-assist bot left a comment


Code Review

This pull request updates the benchmark README for DeepSeek V3, replacing the deprecated SGLANG_ENABLE_FLASHINFER_GEMM environment variable with the --fp8-gemm-backend command-line argument. The change is correct and aligns with recent updates. I've added one suggestion to improve the readability of the command by splitting it into multiple lines, consistent with other examples in the document.

```diff
-SGLANG_ENABLE_JIT_DEEPGEMM=false SGLANG_ENABLE_FLASHINFER_GEMM=true python3 -m sglang.launch_server --model-path=deepseek-ai/DeepSeek-R1-0528 --host=0.0.0.0 --port=8000 --tensor-parallel-size=8 --cuda-graph-max-bs 128 --max-running-requests 128 --mem-fraction-static 0.82 --kv-cache-dtype fp8_e4m3 --chunked-prefill-size 32768 --max-prefill-tokens 32768 --scheduler-recv-interval 30 --stream-interval 30
+SGLANG_ENABLE_JIT_DEEPGEMM=false python3 -m sglang.launch_server --model-path=deepseek-ai/DeepSeek-R1-0528 --host=0.0.0.0 --port=8000 --tensor-parallel-size=8 --cuda-graph-max-bs 128 --max-running-requests 128 --mem-fraction-static 0.82 --kv-cache-dtype fp8_e4m3 --chunked-prefill-size 32768 --max-prefill-tokens 32768 --scheduler-recv-interval 30 --stream-interval 30 --fp8-gemm-backend flashinfer_trtllm
```
medium

For better readability, consider splitting this long command into multiple lines using backslashes, similar to other examples in this file.

Suggested change

```diff
-SGLANG_ENABLE_JIT_DEEPGEMM=false python3 -m sglang.launch_server --model-path=deepseek-ai/DeepSeek-R1-0528 --host=0.0.0.0 --port=8000 --tensor-parallel-size=8 --cuda-graph-max-bs 128 --max-running-requests 128 --mem-fraction-static 0.82 --kv-cache-dtype fp8_e4m3 --chunked-prefill-size 32768 --max-prefill-tokens 32768 --scheduler-recv-interval 30 --stream-interval 30 --fp8-gemm-backend flashinfer_trtllm
+SGLANG_ENABLE_JIT_DEEPGEMM=false python3 -m sglang.launch_server \
+    --model-path=deepseek-ai/DeepSeek-R1-0528 \
+    --host=0.0.0.0 \
+    --port=8000 \
+    --tensor-parallel-size=8 \
+    --cuda-graph-max-bs 128 \
+    --max-running-requests 128 \
+    --mem-fraction-static 0.82 \
+    --kv-cache-dtype fp8_e4m3 \
+    --chunked-prefill-size 32768 \
+    --max-prefill-tokens 32768 \
+    --scheduler-recv-interval 30 \
+    --stream-interval 30 \
+    --fp8-gemm-backend flashinfer_trtllm
```

@Fridge003 Fridge003 merged commit 7e02769 into sgl-project:main Dec 24, 2025
50 checks passed

Labels

deepseek, documentation (Improvements or additions to documentation), nvidia


2 participants