
Fix OOM by offloading multimodal features to CPU after embedding #16018

Merged
hnyls2002 merged 3 commits into sgl-project:main from liusy58:oom
Dec 31, 2025

Conversation

@liusy58
Collaborator

liusy58 commented Dec 28, 2025

Motivation

In the multimodal scheduling flow, image/audio features are moved to the GPU during the prepare_for_extend phase. However, these tensors remain on the GPU for the entire generation process (the decode phase), even though they are only used once, during the prefill stage.
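As a rough sketch of the kind of change described here (identifiers such as `mm_item.feature` and `embed_and_offload` are illustrative stand-ins, not the actual sglang code): once the features have been merged into the input embeddings during prefill, the raw tensors can be moved back to host memory.

```python
import torch

def embed_and_offload(mm_item, input_embeds: torch.Tensor) -> torch.Tensor:
    # `mm_item.feature` was moved to the GPU during prepare_for_extend.
    feature = mm_item.feature
    # ... merge `feature` into `input_embeds` at the multimodal placeholder
    # positions (the real merge logic lives in the model code) ...
    merged = input_embeds
    # The features are only needed for this prefill step; offload them so
    # they do not occupy GPU memory for the whole decode phase.
    mm_item.feature = feature.to("cpu")
    return merged
```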

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

@gemini-code-assist
Contributor

Summary of Changes

Hello @liusy58, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a memory optimization for multimodal models: once image or audio features have been embedded and consumed during the initial prefill stage, they are moved from the GPU to the CPU. Offloading them frees GPU memory and mitigates the out-of-memory errors that could occur if these features stayed resident on the GPU for the entire generation process, improving the stability and scalability of multimodal inference.

Highlights

  • Multimodal Feature Offloading: Multimodal features (e.g., image/audio embeddings) are now explicitly moved from the GPU to the CPU after their initial use during the prefill stage. This change prevents these large tensors from unnecessarily occupying GPU memory during the subsequent decode phase.
  • GPU Memory Optimization: By offloading features to the CPU, this pull request addresses potential Out-Of-Memory (OOM) errors, particularly for long generation sequences or larger batch sizes in multimodal models (see the illustration after this list).
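As a self-contained illustration of the effect (not code from this PR), moving a tensor off the GPU returns its allocation to PyTorch's caching allocator:

```python
import torch

if torch.cuda.is_available():
    # ~256 MiB of fp32 "features" standing in for image/audio embeddings.
    feats = torch.randn(64, 1024, 1024, device="cuda")
    print(torch.cuda.memory_allocated() // 2**20, "MiB allocated")  # ~256 MiB

    feats = feats.to("cpu")    # offload after prefill has consumed them
    torch.cuda.empty_cache()   # optionally hand cached blocks back to the driver
    print(torch.cuda.memory_allocated() // 2**20, "MiB allocated")  # ~0 MiB
```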



Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a memory optimization by offloading multimodal features to the CPU after they have been used for embedding during the prefill stage. This is a good change to prevent potential out-of-memory errors during the generation phase. My review includes a suggestion to simplify the new code and fix a potential bug.

@yhyang201
Collaborator

/tag-and-rerun-ci

@yudian0504
Contributor

May I ask whether you had the SGLANG_USE_CUDA_IPC_TRANSPORT feature enabled when you encountered the OOM issue?

In fact, I found that base_processor.py on main also lacks CPU fallback handling once the IPC pool is full. @yuan-luo can you check the fallback please?
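For context, a sketch of the CPU fallback pattern being requested (the `ipc_pool` object and its `try_put` method are hypothetical stand-ins, not the actual base_processor.py interface):

```python
import torch

def transport_features(feature: torch.Tensor, ipc_pool):
    # Try the fast path: hand the tensor over via the CUDA-IPC pool.
    handle = ipc_pool.try_put(feature)  # hypothetical: returns None when full
    if handle is not None:
        return handle
    # Pool exhausted: fall back to shipping the tensor through host memory
    # instead of failing with an allocation error.
    return feature.to("cpu")
```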

@liusy58
Collaborator Author

liusy58 commented Dec 29, 2025

> May I ask whether you had the SGLANG_USE_CUDA_IPC_TRANSPORT feature enabled when you encountered the OOM issue?
>
> In fact, I found that base_processor.py on main also lacks CPU fallback handling once the IPC pool is full. @yuan-luo can you check the fallback please?

No, this PR isn't specifically designed to fix SGLANG_USE_CUDA_IPC_TRANSPORT issues, but you can give it a try to see if it resolves the problem. We've also encountered OOM issues ourselves when that flag was enabled.

@yhyang201
Collaborator

/rerun-failed-ci

@yudian0504
Contributor

>> May I ask whether you had the SGLANG_USE_CUDA_IPC_TRANSPORT feature enabled when you encountered the OOM issue? In fact, I found that base_processor.py on main also lacks CPU fallback handling once the IPC pool is full. @yuan-luo can you check the fallback please?
>
> No, this PR isn't specifically designed to fix SGLANG_USE_CUDA_IPC_TRANSPORT issues, but you can give it a try to see if it resolves the problem. We've also encountered OOM issues ourselves when that flag was enabled.

cc: #16118

@liusy58
Collaborator Author

liusy58 commented Dec 30, 2025

/rerun-failed-ci

Collaborator

@hnyls2002 left a comment

Moving to CPU is a workaround; we should use a weak ref to handle freeing the GPU memory, like what @yhyang201 did in #9673.
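For context, a minimal sketch of the weak-reference idea (illustrative only; see #9673 for the actual approach): if the batch metadata holds only a weak reference, the tensor's GPU memory is released as soon as the last strong reference is dropped after prefill.

```python
import weakref
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
feature = torch.randn(4, 1024, device=device)  # stand-in for a multimodal feature
ref = weakref.ref(feature)  # metadata keeps only a weak reference

# Prefill consumes the strong reference, then drops it:
del feature                 # last strong ref gone -> allocator reclaims the memory
assert ref() is None        # the weak reference now resolves to None
```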

@liusy58
Collaborator Author

liusy58 commented Dec 31, 2025

> Moving to CPU is a workaround; we should use a weak ref to handle freeing the GPU memory, like what @yhyang201 did in #9673.

@hnyls2002 Updated the comments. Please take another look.

@hnyls2002
Collaborator

hnyls2002 merged commit abdf65d into sgl-project:main on Dec 31, 2025
56 of 83 checks passed
@ShangmingCai mentioned this pull request on Jan 4, 2026
