
Fix spec decoding acc length for dpsk-r1-fp4 tp8 #12896

Merged

zhyncs merged 1 commit into sgl-project:main from Qiaolin-Yu:fix_tp8 on Nov 9, 2025

Conversation

@Qiaolin-Yu
Collaborator

Motivation

Before this PR, launching the server with:

SGLANG_ENABLE_SPEC_V2=1 python3 -m sglang.launch_server --model-path nvidia/DeepSeek-R1-0528-FP4-v2 --trust-remote-code --quantization modelopt_fp4 --tp 8  --speculative-algorithm=EAGLE  --port 40020   --kv-cache-dtype fp8_e4m3

yielded a very low acceptance length (acc length). I traced the root cause to #12758, so I reverted parts of that PR and confirmed that the issue it originally fixed does not reappear.

I'm not sure what the purpose of increasing the workspace size was. If it is truly necessary, we can add it back after confirming that the command above works correctly. cc @elvischenv

Modifications
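The change reverts the `max_token_num` default in flashinfer_comm_fusion.py from 16384 back to 2048. A minimal sketch of the reverted signature, assuming only the parameter names and defaults shown in the diff hunks quoted in the review below (the function body is elided):

```python
# Sketch only: parameter names and defaults come from the diff hunks quoted
# in the review comments below; everything else is elided.
def ensure_workspace_initialized(
    max_token_num: int = 2048,  # reverted from the 16384 introduced in #12758
    hidden_dim: int = 4096,
    use_fp32_lamport: bool = False,
):
    ...
```

A second function in the same file (its name is not visible in the quoted hunk) has its `max_token_num` default reverted from 16384 to 2048 in the same way.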

Accuracy Tests

Benchmarking and Profiling

Checklist

@gemini-code-assist
Contributor

Summary of Changes

Hello @Qiaolin-Yu, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a performance regression in speculative decoding for the DeepSeek-R1-0528-FP4-v2 model, particularly when configured with tp 8 and the EAGLE algorithm. The issue, characterized by an unexpectedly low 'acc length', was found to stem from an earlier change that increased the max_token_num parameter. The current solution reverts this specific modification, restoring the max_token_num to its previous value, which successfully mitigates the performance bottleneck without reintroducing prior bugs.

Highlights

  • Issue Fix: Addresses a critical issue where speculative decoding yielded a very low 'acc length' when using the DeepSeek-R1-0528-FP4-v2 model with tp 8 and the EAGLE algorithm, leading to suboptimal performance.
  • Root Cause Identified: The problem was traced back to a previous pull request ([Bugfix] Fix illegal memory access #12758) that introduced an increase in the max_token_num parameter.
  • Targeted Reversion: The fix involves reverting specific changes related to the max_token_num parameter within flashinfer_comm_fusion.py, specifically reducing it from 16384 to 2048 in relevant functions.
  • Validation: The author has confirmed that this change resolves the low 'acc length' issue and does not reintroduce the problem that the original PR [Bugfix] Fix illegal memory access #12758 aimed to fix.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request addresses a performance issue with speculative decoding by reducing the default max_token_num for FlashInfer workspace initialization from 16384 to 2048. While this change appears to fix the reported problem, modifying default parameter values can have unintended consequences for other users, potentially causing silent performance regressions for use cases involving larger batch sizes. My feedback focuses on improving the maintainability of this change by adding comments to explain the reasoning behind the new default value, ensuring future developers understand the context.
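To make the reviewer's concern concrete, here is an illustrative sketch (hypothetical names, not the actual flashinfer_comm_fusion.py code) of how a workspace sized for `max_token_num` tokens can silently route larger batches onto a non-fused path:

```python
# Illustrative only: hypothetical names, not the real sglang/FlashInfer code.
def run_allreduce_rmsnorm(num_tokens: int, max_token_num: int = 2048) -> str:
    if num_tokens > max_token_num:
        # Workspace is too small for this batch, so the fused kernel is
        # skipped without any warning to the user.
        return "non-fused fallback"
    return "fused flashinfer path"


print(run_allreduce_rmsnorm(1024))  # fused flashinfer path
print(run_allreduce_rmsnorm(4096))  # non-fused fallback
```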


def ensure_workspace_initialized(
max_token_num: int = 16384, hidden_dim: int = 4096, use_fp32_lamport: bool = False
max_token_num: int = 2048, hidden_dim: int = 4096, use_fp32_lamport: bool = False

Severity: medium

Changing the default max_token_num from 16384 to 2048 is a significant reduction. While this fixes the speculative decoding issue you've identified, it's a potentially breaking change for users who might have been relying on the larger default for batches with more than 2048 tokens. They will now silently fall back to a slower, non-fused implementation. To improve maintainability and prevent future confusion, it would be beneficial to add a comment explaining why this value was changed and what specific issue it resolves. This context is crucial for understanding this 'magic number'.
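One way to act on this suggestion is sketched below; the comment wording is hypothetical, and only the parameter names and the 2048 default come from the diff:

```python
def ensure_workspace_initialized(
    # Hypothetical comment wording. Kept at 2048 because the 16384 default
    # from #12758 caused a very low speculative-decoding acceptance length
    # for DeepSeek-R1 FP4 at tp8 (#12896); batches larger than this fall
    # back to the non-fused path.
    max_token_num: int = 2048,
    hidden_dim: int = 4096,
    use_fp32_lamport: bool = False,
):
    ...
```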

      weight: torch.Tensor,
      eps: float = 1e-6,
-     max_token_num: int = 16384,
+     max_token_num: int = 2048,

Severity: medium

Similar to the change in ensure_workspace_initialized, this modification of the default max_token_num should be documented with a comment. It helps clarify why the default was reduced and what problem it solves, which is important for anyone maintaining or using this function in the future.

@zhyncs merged commit 9299a62 into sgl-project:main on Nov 9, 2025
78 of 82 checks passed
Qiaolin-Yu added a commit that referenced this pull request Nov 9, 2025