
Refactor graph input buffers#18991

Merged
ch-wan merged 4 commits into main from cheng/refactor/cuda-graph-buffer on Feb 21, 2026
Conversation

@ch-wan
Collaborator

@ch-wan ch-wan commented Feb 19, 2026

Motivation

The CUDA graph input buffer management was concentrated in a monolithic GraphInputBuffers class that mixed decode, prefill, and speculative decoding concerns. Additionally, each cuda graph runner (decode, prefill, eagle draft, eagle draft-extend) allocated its own independent tensor buffers, missing opportunities to share memory across runners that use buffers with the same name, dtype, and device.

This PR refactors the buffer management into:

  1. A lightweight ForwardInputBuffers base class with a shared buffer pool (_forward_input_buffer_pool) that automatically reuses the largest existing allocation for each named buffer.
  2. Concrete per-phase dataclasses (DecodeInputBuffers, PrefillInputBuffers, EagleDraftInputBuffers, etc.) that declare their fields and are pooled via share_buffers().

Modifications

input_buffers.py: ForwardInputBuffers base class

  • Replace the monolithic GraphInputBuffers with a lightweight base dataclass.
  • share_buffers() iterates over all tensor fields and registers them in a module-level _forward_input_buffer_pool. If a buffer with the same name already exists and is larger, the larger one is reused via as_strided.
  • Skip buffer pooling when torch.is_inference_mode_enabled() to prevent inference-mode tensors from contaminating the pool (they cannot be updated in-place outside InferenceMode, which would break CUDA graph capture).
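The pooling mechanics, including the inference-mode guard, might look roughly like the following sketch. All names here are stand-ins: memoryview plays the role of torch's as_strided view over the backing allocation, and a module flag plays the role of torch.is_inference_mode_enabled().

```python
_forward_input_buffer_pool = {}  # name -> backing bytearray (largest seen)
_inference_mode = False          # stand-in for torch.is_inference_mode_enabled()


def get_buffer(name, size):
    """Return a view of a pooled allocation of at least `size` bytes."""
    if _inference_mode:
        # Skip pooling entirely: an inference-mode tensor placed in the
        # pool could not be updated in place later, which would break
        # CUDA graph capture. Hand back a private allocation instead.
        return memoryview(bytearray(size))
    backing = _forward_input_buffer_pool.get(name)
    if backing is None or len(backing) < size:
        backing = bytearray(size)  # grow: keep only the largest allocation
        _forward_input_buffer_pool[name] = backing
    # A prefix view of the shared allocation; writes go through to it,
    # just like the as_strided view in the real code.
    return memoryview(backing)[:size]


a = get_buffer("seq_lens", 8)
b = get_buffer("seq_lens", 4)
b[0] = 7          # visible through `a`: both view the same allocation
assert a[0] == 7
```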

cuda_graph_runner.py: DecodeInputBuffers

  • Move decode-specific buffer fields, create(), and populate_from_forward_batch() into a new DecodeInputBuffers(ForwardInputBuffers) dataclass.
  • populate_from_forward_batch no longer returns seq_lens_cpu; callers access buffers.seq_lens_cpu[:bs] directly.
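The changed caller contract can be illustrated with a hypothetical, torch-free sketch (lists stand in for the CPU-side tensors, and the class and method names are simplified):

```python
from dataclasses import dataclass, field


@dataclass
class DecodeBuffers:
    # Persistent buffer sized for the maximum batch size.
    seq_lens_cpu: list = field(default_factory=lambda: [0] * 8)

    def populate_from_forward_batch(self, seq_lens):
        # Copy the live batch into the persistent buffer in place.
        bs = len(seq_lens)
        self.seq_lens_cpu[:bs] = seq_lens
        # Note: no return value anymore.


buffers = DecodeBuffers()
buffers.populate_from_forward_batch([3, 5, 2])
bs = 3
seq_lens = buffers.seq_lens_cpu[:bs]  # callers slice the buffer directly
```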

piecewise_cuda_graph_runner.py: PrefillInputBuffers

  • Introduce PrefillInputBuffers(ForwardInputBuffers) for prefill-phase buffers.
  • Consolidate scattered self.<field> attributes into a single self.buffers object; all access updated to self.buffers.<field>.

eagle_draft_cuda_graph_runner.py: EagleDraftInputBuffers

  • Introduce EagleDraftInputBuffers(ForwardInputBuffers) with fields: input_ids, req_pool_indices, out_cache_loc, positions, mrope_positions, seq_lens, seq_lens_cpu, extend_seq_lens, topk_p, topk_index, hidden_states, and optional DP gather buffers.
  • All self.<field> buffer accesses in EAGLEDraftCudaGraphRunner replaced with self.buffers.<field>.

eagle_draft_extend_cuda_graph_runner.py: EagleDraftExtendInputBuffers

  • Introduce EagleDraftExtendInputBuffers(ForwardInputBuffers) with fields: input_ids, req_pool_indices, out_cache_loc, positions, mrope_positions, hidden_states, seq_lens, seq_lens_cpu, extend_seq_lens, accept_length, next_token_logits_buffer, and optional DP gather buffers.
  • All self.<field> buffer accesses in EAGLEDraftExtendCudaGraphRunner replaced with self.buffers.<field>.

multi_layer_eagle_draft_extend_cuda_graph_runner.py: MultiLayerEagleDraftExtendInputBuffers

  • Introduce MultiLayerEagleDraftExtendInputBuffers(ForwardInputBuffers) for multi-layer EAGLE.
  • Handles sliced buffers from shared parent allocations and per-step buffers, all accessed via self.buffers.<field>.
  • Cross-runner references updated to self.next_cuda_graph_runner.buffers.<field>.

model_runner.py

  • Import DecodeInputBuffers from cuda_graph_runner instead of the removed GraphInputBuffers from input_buffers.

Checklist

@gemini-code-assist
Contributor

Summary of Changes

Hello @ch-wan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the management of input buffers for CUDA graphs, separating buffer definitions and population logic into distinct, phase-specific dataclasses. The introduction of a generic ForwardInputBuffers base class with a pooling mechanism enhances code organization, promotes extensibility, and optimizes memory usage by reusing tensor buffers. This change streamlines the handling of decode and prefill phase inputs, making the system more modular and maintainable.

Highlights

  • Refactored Input Buffer Management: Introduced a new ForwardInputBuffers base class in input_buffers.py to manage tensor buffer pooling and reuse across different input buffer types, improving memory efficiency and code organization.
  • Dedicated Decode Input Buffers: Refactored the existing GraphInputBuffers into a more specific DecodeInputBuffers dataclass within cuda_graph_runner.py, handling decode-phase CUDA graph inputs with dedicated creation and population logic.
  • Dedicated Prefill Input Buffers: Created a new PrefillInputBuffers dataclass in piecewise_cuda_graph_runner.py to consolidate and manage prefill-phase CUDA graph inputs, centralizing their definition and access.
  • Updated Buffer Initialization and Access: Updated CudaGraphRunner and ModelRunner to utilize the new DecodeInputBuffers and its build() method for proper buffer initialization, and modified PiecewiseCudaGraphRunner to access input buffers through a unified self.buffers object.


Changelog
  • python/sglang/srt/model_executor/cuda_graph_runner.py
    • Imported dataclass and Dict for type hinting and class definition.
    • Imported compute_local_num_token_non_padded and ForwardInputBuffers.
    • Defined DecodeInputBuffers as a new dataclass inheriting from ForwardInputBuffers, encapsulating decode-specific input tensors and their creation/population logic.
    • Replaced GraphInputBuffers with DecodeInputBuffers in CudaGraphRunner initialization and type hints.
    • Added a call to self.buffers.build() after DecodeInputBuffers creation to initialize buffer pooling.
    • Modified populate_from_forward_batch to no longer return seq_lens_cpu, instead accessing it directly from buffers.
  • python/sglang/srt/model_executor/input_buffers.py
    • Removed the monolithic GraphInputBuffers class and its associated creation and population methods.
    • Introduced ForwardInputBuffers as a base dataclass, providing a generic build() method.
    • Implemented a module-level _forward_input_buffer_pool for global tensor buffer reuse.
    • Added _build_one_buffer helper to manage buffer pooling, dtype/device validation, and as_strided view creation.
  • python/sglang/srt/model_executor/model_runner.py
    • Updated imports to use DecodeInputBuffers from cuda_graph_runner and removed the old GraphInputBuffers import.
    • Modified the _dummy_run method to instantiate DecodeInputBuffers and call its build() method.
  • python/sglang/srt/model_executor/piecewise_cuda_graph_runner.py
    • Imported dataclass and ForwardInputBuffers.
    • Defined PrefillInputBuffers as a new dataclass inheriting from ForwardInputBuffers, consolidating prefill-specific input tensors.
    • Refactored the initialization of prefill input tensors into the PrefillInputBuffers instance.
    • Initialized self.buffers with PrefillInputBuffers and called self.buffers.build().
    • Updated all internal references from direct self.<field> access to self.buffers.<field> for input tensors in warmup_torch_compile, capture_one_batch_size, and replay_prepare.
Activity
  • No human activity (comments, reviews, progress updates) was provided in the context for this pull request.

@ch-wan
Collaborator Author

ch-wan commented Feb 19, 2026

/tag-and-rerun-ci


@gemini-code-assist bot left a comment

Code Review

This pull request effectively refactors the CUDA graph input buffer management, improving code organization and extensibility. The introduction of the ForwardInputBuffers base class with a generic buffer pooling mechanism is a solid design choice. The creation of specialized DecodeInputBuffers and PrefillInputBuffers subclasses successfully encapsulates phase-specific logic, making the code cleaner and more maintainable. The changes across CudaGraphRunner, PiecewiseCudaGraphRunner, and ModelRunner are consistent with this new design. I have one minor suggestion for improving an assertion message for better clarity.

Comment on lines +51 to +53
assert isinstance(
    buffer, torch.Tensor
), f"Field {name} is expected to be a torch.Tensor or a dict of torch.Tensor, but got {type(buffer)}."

Severity: medium

The assertion message here is slightly misleading. Since the elif isinstance(buffer, dict): block handles dictionaries, the else block will only be reached by types other than dict and None. The error message should reflect that it expects a torch.Tensor at this point, not a torch.Tensor or a dict.

Suggested change:

- assert isinstance(
-     buffer, torch.Tensor
- ), f"Field {name} is expected to be a torch.Tensor or a dict of torch.Tensor, but got {type(buffer)}."
+ assert isinstance(
+     buffer, torch.Tensor
+ ), f"Field {name} is expected to be a torch.Tensor, but got {type(buffer)}."

@ch-wan ch-wan force-pushed the cheng/refactor/cuda-graph-buffer branch from da9e79d to 70bac89 on February 19, 2026 07:32
@ch-wan
Collaborator Author

ch-wan commented Feb 19, 2026

/rerun-stage stage-c-test-4-gpu-b200

@github-actions
Contributor

✅ Triggered stage-c-test-4-gpu-b200 to run independently (skipping dependencies).

@github-actions
Contributor

🔗 View workflow run

@ch-wan
Collaborator Author

ch-wan commented Feb 19, 2026

/rerun-stage stage-c-test-4-gpu-b200

@github-actions
Contributor

✅ Triggered stage-c-test-4-gpu-b200 to run independently (skipping dependencies).

@github-actions
Contributor

🔗 View workflow run

@ch-wan ch-wan force-pushed the cheng/refactor/cuda-graph-buffer branch from 704ea92 to f6a106d on February 19, 2026 20:07
@ch-wan ch-wan force-pushed the cheng/refactor/cuda-graph-buffer branch from f6a106d to a4a295e on February 20, 2026 00:50
@ch-wan ch-wan merged commit 84c67c8 into main Feb 21, 2026
30 of 69 checks passed
@ch-wan ch-wan deleted the cheng/refactor/cuda-graph-buffer branch February 21, 2026 02:09
Fridge003 added a commit that referenced this pull request Feb 23, 2026
xiaobaicxy added a commit to xiaobaicxy/sglang that referenced this pull request Feb 24, 2026 (a branch sync of ~275 commits into xverse_moe, including Revert "Refactor graph input buffers (sgl-project#18991)" (sgl-project#19173))
magicYang1573 pushed a commit to magicYang1573/sglang that referenced this pull request Mar 9, 2026
magicYang1573 pushed a commit to magicYang1573/sglang that referenced this pull request Mar 9, 2026
sammysun0711 pushed a commit to sammysun0711/sglang that referenced this pull request Mar 20, 2026
sammysun0711 pushed a commit to sammysun0711/sglang that referenced this pull request Mar 20, 2026
Wangzheee pushed a commit to Wangzheee/sglang that referenced this pull request Mar 21, 2026
Wangzheee pushed a commit to Wangzheee/sglang that referenced this pull request Mar 21, 2026