[Bug fix] Fix DP attention IndexError in draft_extend mode #14574
alisonshao wants to merge 2 commits into sgl-project:main
Conversation
This fixes a deterministic failure in unit-test-deepep-8-gpu. Example failure: https://github.com/sgl-project/sglang/actions/runs/20001177440/job/57360950954

## Problem

When running Eagle speculative decoding with DP attention and DeepEP, the draft model forward triggers `get_dp_local_info()`, which expects `global_num_tokens_gpu` to have `dp_size` elements. However, in some configurations (when `require_mlp_tp_gather` is False), this tensor has only 1 element, causing an IndexError.

Error:

```
File "dp_attention.py", line 393, in get_dp_local_info
    local_start_pos = cumtokens[dp_rank - 1]
IndexError: index 4 is out of bounds for dimension 0 with size 1
```

## Fix

Remove `is_draft_extend(include_v2=True)` from the DP attention batch preparation condition in `forward_batch_info.py`. Draft extend mode should not use this DP attention padding logic, as the `global_num_tokens_gpu` tensor may not be properly sized for it.
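The failure mode can be illustrated with a minimal sketch. Plain Python lists stand in for the CUDA tensors here, and `get_dp_local_info_sketch` only mirrors the cumulative-sum indexing described in the traceback; it is not the actual sglang implementation.

```python
from itertools import accumulate


def get_dp_local_info_sketch(global_num_tokens, dp_rank):
    # Mirrors the indexing in dp_attention.py's get_dp_local_info:
    # cumulative sum over per-rank token counts, then slice out this
    # rank's [start, end) segment.
    cumtokens = list(accumulate(global_num_tokens))
    local_start = 0 if dp_rank == 0 else cumtokens[dp_rank - 1]
    local_end = cumtokens[dp_rank]
    return local_start, local_end


# Properly sized tensor: one entry per DP rank (dp_size == 8).
print(get_dp_local_info_sketch([3, 5, 2, 4, 1, 6, 2, 3], dp_rank=4))

# Mis-sized tensor: a single element, as happens when
# require_mlp_tp_gather is False -> IndexError for dp_rank > 0.
try:
    get_dp_local_info_sketch([26], dp_rank=4)
except IndexError as e:
    print("IndexError:", e)
```

With eight entries the slice is well defined; with one entry, any rank above 0 indexes past the end of the cumulative-sum array, which matches the `index 4 is out of bounds for dimension 0 with size 1` error above.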
Code Review
This pull request addresses a critical IndexError that occurs during speculative decoding with DP attention. The error arises because draft_extend mode incorrectly uses the DP attention padding logic, which indexes a `global_num_tokens_gpu` tensor that may not be sized for it. The fix removes `is_draft_extend(include_v2=True)` from the conditional logic in `prepare_mlp_sync_batch`, so draft_extend mode is handled by the general extend mode logic, which correctly sets up the necessary parameters without triggering the index error. The change is correct, minimal, and effectively resolves the bug, and the pull request description clearly explains the problem and the solution. I have reviewed the change and the surrounding code and have no further comments.
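The shape of the fix can be sketched as a before/after of the guarding condition. This is illustrative only: `ForwardModeStub` is a hypothetical stand-in for sglang's forward-mode object, and the real condition in `forward_batch_info.py` may contain additional terms not shown here.

```python
from dataclasses import dataclass


@dataclass
class ForwardModeStub:
    """Hypothetical stand-in for sglang's ForwardMode; only the two
    predicates named in the PR description are modeled."""
    extend: bool = False
    draft_extend: bool = False

    def is_extend(self) -> bool:
        return self.extend

    def is_draft_extend(self, include_v2: bool = False) -> bool:
        return self.draft_extend


def uses_dp_padding_before(mode: ForwardModeStub) -> bool:
    # Pre-fix condition (sketch): draft_extend batches also entered
    # the DP-attention padding path.
    return mode.is_extend() or mode.is_draft_extend(include_v2=True)


def uses_dp_padding_after(mode: ForwardModeStub) -> bool:
    # Post-fix condition (sketch): is_draft_extend(...) removed, so
    # draft_extend no longer reaches the code that indexes
    # global_num_tokens_gpu per rank.
    return mode.is_extend()


draft = ForwardModeStub(draft_extend=True)
print(uses_dp_padding_before(draft), uses_dp_padding_after(draft))
```

Under this sketch, a draft_extend batch takes the padding path before the fix and falls through to the general extend handling after it, which is the behavior change the review describes.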
/tag-and-rerun-ci
fixed PR: #14601
## Summary

- Fix the failing `unit-test-deepep-8-gpu` test by removing `is_draft_extend(include_v2=True)` from the DP attention batch preparation condition (the Problem and Fix details are described in the PR description above)

## Test plan

- `unit-test-deepep-8-gpu` passes after this fix