
[Spec] Fix hidden_states size mismatch in STANDALONE speculative decoding #14563

Closed

alisonshao wants to merge 17 commits into main from fix-mamba2-spec-batch-size

Conversation

@alisonshao (Collaborator) commented Dec 7, 2025

Summary

Fix tensor size mismatch errors for STANDALONE speculative decoding with different draft and target model architectures (e.g., Nemotron-9B target + Llama-3.2-1B draft).

Problem

When using --max-running-requests 8, batches get merged via merge_batch(). This failed with:

RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 4480 but got size 2048

The issue: idle batches were created with target model's hidden_size (4480), but after draft model forward pass, spec_info.hidden_states had draft model's hidden_size (2048). When merging these batches, the concatenation failed.
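The failing concatenation follows `torch.cat`'s shape rule: all dimensions except the one being concatenated along must match. A minimal pure-Python sketch of that rule (the shapes here are illustrative, standing in for the idle batch's and the post-draft batch's `hidden_states`):

```python
def cat_shapes_dim0(a_shape, b_shape):
    """Shape rule for concatenating two tensors along dim 0:
    every dimension except 0 must match (mirrors torch.cat)."""
    if tuple(a_shape[1:]) != tuple(b_shape[1:]):
        raise RuntimeError(
            f"Sizes of tensors must match except in dimension 0. "
            f"Expected size {a_shape[1]} but got size {b_shape[1]}"
        )
    # Result: row counts add, trailing dims are unchanged.
    return (a_shape[0] + b_shape[0], *a_shape[1:])
```

With the sizes from the bug, `cat_shapes_dim0((8, 4480), (8, 2048))` raises exactly this class of error, while two batches both sized to the draft model's width, e.g. `cat_shapes_dim0((8, 2048), (4, 2048))`, merge cleanly to `(12, 2048)`.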

Root Cause

For STANDALONE mode:

  • Target model (Nemotron-9B): hidden_size = 4480
  • Draft model (Llama-3.2-1B): hidden_size = 2048

The draft model produces its own hidden_states (size 2048), not the target model's. So all EagleDraftInput.hidden_states should consistently use the draft model's hidden_size.

Changes

  1. eagle_draft_cuda_graph_runner.py:

    • Use draft model's hidden_size for CUDA graph buffer (not target's)
    • Add conditional check before copying hidden_states when sizes differ
  2. eagle_draft_extend_cuda_graph_runner.py:

    • Use draft model's hidden_size for CUDA graph buffer (not target's)
  3. eagle_worker.py:

    • _draft_preprocess_idle(): Use draft model's hidden_size for idle batches
    • forward_draft_extend_after_decode(): Use draft model's hidden_size for idle batches
    • Use spec_info.draft_token_num for num_tokens_per_batch in verify phase

Logic

  • For EAGLE models: Use target_hidden_size from hf_config (EAGLE head uses target's hidden states as input)
  • For STANDALONE: Use self.model_runner.model_config.hidden_size (draft model produces its own hidden states)
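That selection logic can be sketched as a small helper (a hypothetical function, not the actual sglang code; `target_hidden_size` mirrors the `hf_config` field named above, and the config objects are stand-ins):

```python
def draft_hidden_state_size(speculative_algorithm, hf_config, draft_model_config):
    """Pick the width for EagleDraftInput.hidden_states buffers.

    EAGLE draft heads take the *target* model's hidden states as input,
    so buffers must use the target's width. STANDALONE draft models
    produce their own hidden states, so buffers use the draft's width.
    """
    if speculative_algorithm == "EAGLE":
        # Fall back to the config's own hidden_size if no explicit
        # target_hidden_size is present (an assumption of this sketch).
        return getattr(hf_config, "target_hidden_size", hf_config.hidden_size)
    return draft_model_config.hidden_size
```

For the Nemotron-9B + Llama-3.2-1B pairing in STANDALONE mode, this yields 2048 everywhere (CUDA graph buffers, idle batches), which is what makes the later `merge_batch()` concatenation consistent.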

Test plan

  • Fixes TestNvidiaNemotronNanoV2SpeculativeDecoding in test_nvidia_nemotron_nano_v2.py

Use spec_info.draft_token_num instead of self.speculative_num_steps + 1
for num_tokens_per_batch in the verify() method. The previous value
caused incorrect batch size calculation when draft_token_num differs
from speculative_num_steps + 1, leading to tensor size mismatches
during Mamba2 state updates.
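The effect of that change on buffer sizing can be shown with a hedged sketch (the names follow the commit message above; the arithmetic is a simplification of the actual allocation logic):

```python
def verify_buffer_tokens(batch_size, draft_token_num, speculative_num_steps):
    """Tokens the verify-phase buffers must hold for one batch.

    Before the fix, the code assumed speculative_num_steps + 1 tokens per
    sequence. The fix uses spec_info.draft_token_num, the actual
    per-sequence token count in the verification batch.
    """
    buggy = batch_size * (speculative_num_steps + 1)  # old assumption
    fixed = batch_size * draft_token_num              # actual token count
    return buggy, fixed
```

Whenever `draft_token_num != speculative_num_steps + 1`, the two values diverge, e.g. `verify_buffer_tokens(8, 4, 2)` gives 24 under the old assumption but the required 32, which is the kind of undersized buffer that broke the Mamba2 state updates.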
@gemini-code-assist (Contributor)

Summary of Changes

Hello @alisonshao, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request implements a critical fix for the speculative decoding verification process in Mamba2 models. By adjusting how the batch size is determined during the verification phase, it ensures that the system correctly accounts for the number of draft tokens per sequence. This prevents tensor dimension mismatches and enhances the stability and reliability of the model's inference, particularly in configurations where the number of speculative steps and draft tokens might vary.

Highlights

  • Batch Size Calculation Fix: Corrected the calculation of num_tokens_per_batch in the verify method for Mamba2 models, ensuring it uses spec_info.draft_token_num instead of self.speculative_num_steps + 1.
  • Root Cause Addressed: This change resolves an issue where incorrect batch size calculation led to tensor size mismatches during the TARGET_VERIFY phase, particularly when draft_token_num differed from speculative_num_steps + 1.
  • Test Failure Resolution: The fix addresses a specific test failure observed in test_nvidia_nemotron_nano_v2.py related to speculative decoding.

@gemini-code-assist gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request addresses a bug in the batch size calculation for the verification phase of speculative decoding, particularly affecting Mamba2 models. The change correctly uses spec_info.draft_token_num for num_tokens_per_batch instead of the hardcoded assumption of self.speculative_num_steps + 1. This is the correct approach, as draft_token_num accurately reflects the number of tokens per sequence in the verification batch, resolving potential tensor size mismatches when this value differs from the number of speculative steps. The fix is clear, logical, and directly solves the described issue. The change is approved.

@alisonshao (Collaborator, Author)

/tag-and-rerun-ci

@github-actions bot added the run-ci label Dec 7, 2025
…peculative decoding

When creating idle batch spec_info, use the target model's hidden_size
instead of the draft model's hidden_size. The hidden_states are used
during verification with the target model, so they need to match the
target model's dimensions.

This fixes tensor size mismatch errors (e.g., 4480 vs 2048) when merging
batches during speculative decoding with different draft and target model
architectures.
…culative decoding

Use target model's hidden_size instead of draft model's hidden_size when
creating hidden_states buffer in EagleDraftExtendCudaGraphRunner. The
hidden_states are used during verification with the target model.
@alisonshao

This comment was marked as outdated.

@alisonshao alisonshao changed the title [Spec] Fix batch size calculation in verify phase for Mamba2 models [Spec] Use target model's hidden_size for STANDALONE speculative decoding Dec 7, 2025
…LONE mode

Use target model's hidden_size for hidden_states buffer allocation to prevent
tensor size mismatch during batch merging.
…ding

For STANDALONE mode with different target/draft model architectures,
use draft model's hidden_size consistently throughout:
- CUDA graph buffers use draft model's hidden_size
- Idle batch creation uses draft model's hidden_size
- Add conditional check for hidden_states copy when sizes differ

This prevents tensor size mismatch errors when merging batches.
@alisonshao alisonshao changed the title [Spec] Use target model's hidden_size for STANDALONE speculative decoding [Spec] Fix hidden_states size mismatch in STANDALONE speculative decoding Dec 9, 2025
@alisonshao (Collaborator, Author)

alisonshao commented Dec 11, 2025

covered by: #14733; the current PR will be kept for reference

@hnyls2002 hnyls2002 closed this Dec 29, 2025
@zhyncs zhyncs deleted the fix-mamba2-spec-batch-size branch December 31, 2025 20:45
@Ximingwang-09 (Contributor)

Why did we skip the tests instead of fixing them like this PR did?
