[Bug fix] Fix DP attention IndexError in draft_extend mode #14574

Status: Closed

alisonshao wants to merge 2 commits into sgl-project:main from alisonshao:fix-dp-attention-draft-extend-regression

Conversation

alisonshao (Collaborator) commented Dec 7, 2025

Summary

  • Fix deterministic failure in unit-test-deepep-8-gpu test
  • Remove is_draft_extend(include_v2=True) from DP attention batch preparation condition

Problem

When running Eagle speculative decoding with DP attention and DeepEP, the draft model forward pass calls get_dp_local_info(), which expects global_num_tokens_gpu to have dp_size elements. However, in some configurations (when require_mlp_tp_gather is False), the tensor holds only one element, causing an IndexError.

Example failure: https://github.com/sgl-project/sglang/actions/runs/20001177440/job/57360950954

Error:

```
File "dp_attention.py", line 393, in get_dp_local_info
    local_start_pos = cumtokens[dp_rank - 1]
IndexError: index 4 is out of bounds for dimension 0 with size 1
```
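To make the failure mode concrete, here is a minimal, self-contained sketch of the indexing pattern (hypothetical simplification, not the actual SGLang implementation): each DP rank reads its token offset from a cumulative-sum tensor, so a tensor with fewer than dp_size entries raises the reported IndexError for any nonzero rank.

```python
from itertools import accumulate

def get_dp_local_info(cumtokens, dp_rank):
    # Simplified sketch: each DP rank derives its local token range
    # from the cumulative sum of per-rank token counts.
    local_start_pos = 0 if dp_rank == 0 else cumtokens[dp_rank - 1]
    local_num_tokens = cumtokens[dp_rank] - local_start_pos
    return local_start_pos, local_num_tokens

# Healthy case: global_num_tokens has dp_size (= 8) entries.
cum = list(accumulate([3, 5, 2, 4, 6, 1, 3, 2]))
print(get_dp_local_info(cum, dp_rank=5))  # (20, 1)

# Failing case: the tensor holds a single element, so dp_rank=5
# indexes past the end -- the IndexError from the traceback above.
short = list(accumulate([3]))
try:
    get_dp_local_info(short, dp_rank=5)
except IndexError:
    print("IndexError reproduced")
```

The real code indexes a CUDA tensor (global_num_tokens_gpu) rather than a Python list, but the out-of-bounds mechanics are the same.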

Fix

Remove is_draft_extend(include_v2=True) from the DP attention batch preparation condition in forward_batch_info.py. Draft extend mode should not take this DP attention padding path, because the global_num_tokens_gpu tensor may not be sized to dp_size in that mode.
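The shape of the change can be illustrated as follows (the enum and function names here are hypothetical stand-ins, not SGLang's real API; the actual condition lives in forward_batch_info.py):

```python
from enum import Enum, auto

class ForwardMode(Enum):
    # Minimal stand-in for the forward-mode enum (names approximate).
    EXTEND = auto()
    DRAFT_EXTEND = auto()
    DECODE = auto()

def needs_dp_attention_padding(mode: ForwardMode) -> bool:
    # Before the fix (illustrative): draft extend also entered the path:
    #   return mode in (ForwardMode.EXTEND, ForwardMode.DRAFT_EXTEND)
    # After the fix: draft extend is excluded, since global_num_tokens_gpu
    # may hold a single element when require_mlp_tp_gather is False.
    return mode == ForwardMode.EXTEND

print(needs_dp_attention_padding(ForwardMode.EXTEND))        # True
print(needs_dp_attention_padding(ForwardMode.DRAFT_EXTEND))  # False
```

With draft extend excluded, those batches fall through to the general extend-mode handling, which does not assume a dp_size-length tensor.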

Test plan

  • Verify unit-test-deepep-8-gpu passes after this fix

gemini-code-assist (bot) commented

Summary of Changes

Hello @alisonshao, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical bug that caused an IndexError in the distributed attention mechanism when using Eagle speculative decoding with DeepEP. The problem stemmed from an incorrect condition in the batch preparation logic that led to global_num_tokens_gpu being improperly sized for draft_extend mode. By excluding draft_extend from this specific DP attention padding logic, the fix ensures stable operation and passes the previously failing unit-test-deepep-8-gpu test.

Highlights

  • Fixes IndexError: Resolves a deterministic IndexError occurring in DP attention when running Eagle speculative decoding with DeepEP.
  • Addresses Test Failure: Specifically fixes the unit-test-deepep-8-gpu test which was failing due to this issue.
  • Modifies Batch Preparation Logic: Removes is_draft_extend(include_v2=True) from the DP attention batch preparation condition to prevent incorrect sizing of global_num_tokens_gpu.

gemini-code-assist (bot) left a comment

Code Review

This pull request addresses a critical IndexError that occurs during speculative decoding with DP attention. The error is caused by draft_extend mode incorrectly using DP attention padding logic, leading to an incorrectly sized global_num_tokens_gpu tensor. The fix is to remove is_draft_extend(include_v2=True) from the conditional logic in prepare_mlp_sync_batch. This ensures that draft_extend mode is handled by the general extend mode logic, which correctly sets up the necessary parameters without causing the index error. The change is correct, minimal, and effectively resolves the bug. The pull request description clearly explains the problem and the solution. I have reviewed the change and the surrounding code and I have no further comments.

alisonshao (Collaborator, Author) commented

/tag-and-rerun-ci

@github-actions github-actions bot added the run-ci label Dec 7, 2025
Fridge003 (Collaborator) commented Dec 7, 2025

@rainj-me is checking this issue
Will be temporarily skipped in #14586

@Fridge003 Fridge003 closed this Dec 7, 2025
rainj-me (Collaborator) commented Dec 8, 2025

fixed PR: #14601
