
(WIP) Async LoRA prefetch - add scheduler logic for lora prefetch#13828

Closed
glenliu21 wants to merge 1 commit into sgl-project:main from glenliu21:lora_prefetch

Conversation

@glenliu21 (Contributor) commented Nov 24, 2025

Motivation

This is the first PR for #8712. In this PR, we adopt the prefetch policy from S-LoRA: LoRA adapters are prefetched based on which requests are in the Scheduler's waiting queue.

Modifications

  • Added @ConnorLi96's profiling code
  • Implemented creation of a ForwardBatch as a LoRA prefetch batch, consisting of the requests that are next to run on the waiting queue
  • Implemented the LoRA prefetch path in LoRAManager, the memory pool, and the LoRA backend
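To illustrate the S-LoRA-style policy described above, here is a minimal, self-contained sketch of how the scheduler might pick adapters to prefetch from the waiting queue. All names (`Req`, `select_lora_ids_to_prefetch`, `max_prefetch`) are hypothetical and simplified; the actual PR builds a full ForwardBatch rather than just an ID list.

```python
from dataclasses import dataclass
from typing import List, Optional, Set


@dataclass
class Req:
    rid: str
    lora_id: Optional[str]  # None for base-model requests


def select_lora_ids_to_prefetch(
    waiting_queue: List[Req],
    resident_lora_ids: Set[str],
    max_prefetch: int,
) -> List[str]:
    """Walk the waiting queue in arrival order and collect LoRA IDs
    that are not yet resident in GPU memory, up to a prefetch budget."""
    to_prefetch: List[str] = []
    for req in waiting_queue:
        lid = req.lora_id
        if lid is None or lid in resident_lora_ids or lid in to_prefetch:
            continue
        to_prefetch.append(lid)
        if len(to_prefetch) >= max_prefetch:
            break
    return to_prefetch


queue = [Req("r1", "lora-a"), Req("r2", None), Req("r3", "lora-b"), Req("r4", "lora-a")]
print(select_lora_ids_to_prefetch(queue, resident_lora_ids={"lora-b"}, max_prefetch=4))
# → ['lora-a']
```

Because selection follows queue order, the adapters loaded first are exactly those needed by the requests that will run next, which is the point of the S-LoRA policy.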

Accuracy Tests

Benchmarking and Profiling

Checklist

@gemini-code-assist (bot) commented

Summary of Changes

Hello @glenliu21, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request lays the groundwork for asynchronous LoRA prefetching, a strategy designed to improve the efficiency of serving requests that utilize LoRA adapters. By proactively identifying and loading LoRAs for requests waiting in the queue, the system aims to reduce latency and enhance overall throughput. The changes primarily involve integrating prefetching logic into the scheduler and LoRA management components, along with adding profiling capabilities to monitor the performance impact of these operations. This is the first step in a multi-part implementation.

Highlights

  • Asynchronous LoRA Prefetching: Implements the initial scheduler logic for prefetching LoRA adapters based on the S-LoRA policy, anticipating future requests in the waiting queue.
  • Scheduler Integration: Introduces a mechanism within the scheduler to identify requests needing LoRA prefetching and construct a dedicated ForwardBatch for these operations.
  • Profiling Enhancements: Adds detailed timing and logging for LoRA eviction and loading processes within the memory pool, as well as overall batch processing times in the scheduler.
  • API Extension: Modifies LoRAManager.prepare_lora_batch to support a prefetch flag and introduces new prefetch_lora_adapters methods in tp_worker and model_runner for handling prefetch requests.

@gemini-code-assist (bot) left a comment
Code Review

This pull request introduces the scheduler-side logic for asynchronous LoRA prefetching, which is a good step towards improving performance for multi-LoRA workloads. The addition of profiling code for LoRA loading and batch execution is also very helpful for performance analysis.

My review focuses on improving code maintainability by reducing duplication, fixing a potential bug in ForwardBatch creation, and addressing some minor code quality issues.

Comment on lines +189 to +209
def prefetch_lora_adapters(self, prefetch_lora_batch: ModelWorkerBatch):
    prefetch_fwd = ForwardBatch(
        forward_mode=prefetch_lora_batch.forward_mode,
        batch_size=len(prefetch_lora_batch.seq_lens),
        input_ids=prefetch_lora_batch.input_ids,
        req_pool_indices=prefetch_lora_batch.req_pool_indices,
        seq_lens=prefetch_lora_batch.seq_lens,
        out_cache_loc=prefetch_lora_batch.out_cache_loc,
        seq_lens_sum=prefetch_lora_batch.seq_lens_sum,
        seq_lens_cpu=prefetch_lora_batch.seq_lens_cpu,
        orig_seq_lens=prefetch_lora_batch.orig_seq_lens,
        lora_ids=prefetch_lora_batch.lora_ids,
    )
    assert isinstance(prefetch_lora_batch.extend_seq_lens, list)
    prefetch_fwd.extend_seq_lens = torch.tensor(
        prefetch_lora_batch.extend_seq_lens, dtype=torch.int32
    ).to(self.model_runner.device, non_blocking=True)
    prefetch_fwd.extend_seq_lens_cpu = prefetch_lora_batch.extend_seq_lens

    result = self.model_runner.prefetch_lora_batch(prefetch_fwd)
    return result
high

The manual creation of ForwardBatch here is brittle and incomplete. For instance, it's missing the positions tensor, which is required by some attention backends and is normally computed in ForwardBatch.init_new.

To make this more robust, I suggest using ForwardBatch.init_new. To prevent lora_manager.prepare_lora_batch from being called twice, you could add a prepare_lora: bool = True parameter to ForwardBatch.init_new and call it with prepare_lora=False here.

Here's how you could modify ForwardBatch.init_new in python/sglang/srt/model_executor/forward_batch_info.py:

# In ForwardBatch.init_new
def init_new(cls, batch: ModelWorkerBatch, model_runner: ModelRunner, prepare_lora: bool = True):
    # ...
    # Init lora information
    if model_runner.server_args.enable_lora and prepare_lora:
        model_runner.lora_manager.prepare_lora_batch(ret)
    # ...

Then, you can simplify prefetch_lora_adapters as suggested.

    def prefetch_lora_adapters(self, prefetch_lora_batch: ModelWorkerBatch):
        prefetch_fwd = ForwardBatch.init_new(
            prefetch_lora_batch, self.model_runner, prepare_lora=False
        )
        result = self.model_runner.prefetch_lora_batch(prefetch_fwd)
        return result

Comment on lines +1449 to +1483
def prepare_for_lora_prefetch(self):
    """Taken mainly from prepare_for_extend()"""
    self.forward_mode = ForwardMode.EXTEND

    # Init tensors
    reqs = self.reqs
    input_ids = [r.fill_ids[len(r.prefix_indices) :] for r in reqs]
    extend_num_tokens = sum(len(ids) for ids in input_ids)
    seq_lens = [len(r.fill_ids) for r in reqs]
    orig_seq_lens = [max(len(r.fill_ids), len(r.origin_input_ids)) for r in reqs]
    prefix_lens = [len(r.prefix_indices) for r in reqs]
    extend_lens = [r.extend_input_len for r in reqs]

    input_ids_tensor = torch.tensor(
        list(chain.from_iterable(input_ids)), dtype=torch.int64
    ).to(self.device, non_blocking=True)
    seq_lens_tensor = torch.tensor(seq_lens, dtype=torch.int64).to(
        self.device, non_blocking=True
    )
    seq_lens_cpu = torch.tensor(seq_lens, dtype=torch.int64)
    orig_seq_lens_tensor = torch.tensor(orig_seq_lens, dtype=torch.int32).to(
        self.device, non_blocking=True
    )

    # Set batch fields needed by alloc_for_extend
    self.prefix_lens = prefix_lens
    self.extend_lens = extend_lens
    self.seq_lens = seq_lens_tensor
    self.seq_lens_cpu = seq_lens_cpu
    self.extend_num_tokens = extend_num_tokens

    self.input_ids = input_ids_tensor
    self.orig_seq_lens = orig_seq_lens_tensor
    self.seq_lens_sum = sum(seq_lens)
medium

The new method prepare_for_lora_prefetch duplicates a significant amount of code from prepare_for_extend. To improve maintainability and avoid code duplication, consider refactoring the common tensor initialization logic into a private helper method. This helper could then be called by both prepare_for_lora_prefetch and prepare_for_extend.
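The refactor suggested above can be sketched as follows. This is a toy stand-in that uses plain lists in place of torch tensors, and the helper name `_init_extend_tensors` is hypothetical; the real methods operate on a ScheduleBatch with device tensors.

```python
class ScheduleBatchSketch:
    """Toy stand-in for ScheduleBatch, using plain lists instead of tensors."""

    def __init__(self, reqs):
        # each req: dict with "fill_ids" and "prefix_indices"
        self.reqs = reqs

    def _init_extend_tensors(self):
        # Shared tensor-initialization logic used by both entry points.
        input_ids = [r["fill_ids"][len(r["prefix_indices"]):] for r in self.reqs]
        self.input_ids = [tok for ids in input_ids for tok in ids]
        self.seq_lens = [len(r["fill_ids"]) for r in self.reqs]
        self.prefix_lens = [len(r["prefix_indices"]) for r in self.reqs]
        self.extend_num_tokens = sum(len(ids) for ids in input_ids)
        self.seq_lens_sum = sum(self.seq_lens)

    def prepare_for_extend(self):
        self._init_extend_tensors()
        # ... extend-specific setup (cache allocation, sampling info, ...) ...

    def prepare_for_lora_prefetch(self):
        self._init_extend_tensors()
        # ... prefetch-specific setup only ...


batch = ScheduleBatchSketch([
    {"fill_ids": [1, 2, 3, 4], "prefix_indices": [0]},
    {"fill_ids": [5, 6], "prefix_indices": []},
])
batch.prepare_for_lora_prefetch()
print(batch.extend_num_tokens, batch.seq_lens_sum)  # 5 6
```

With the shared helper, a future change to how extend tensors are built (for example, a new dtype or an extra field) lands in one place instead of two.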

prefetch_batch.get_model_worker_batch()
)

print(f"current batch lora ids: {running_batch_lora_ids}")
medium

This print statement appears to be for debugging. It's better to use the logging module (e.g., logger.debug(...)) for such messages. This allows for better control over log levels and output streams in different environments.

Suggested change
print(f"current batch lora ids: {running_batch_lora_ids}")
logger.debug(f"current batch lora ids: {running_batch_lora_ids}")

Comment on lines +2118 to +2123
has_lora = hasattr(batch, "lora_ids") and batch.lora_ids
lora_info = (
    f", lora_ids={len(set(batch.lora_ids)) if has_lora else 0}"
    if has_lora
    else ""
)
medium

The batch object here is a ScheduleBatch, which does not have a lora_ids attribute. This causes has_lora to always be False, and the LoRA information is never logged. You can get the LoRA IDs by iterating through batch.reqs.

            lora_ids = [req.lora_id for req in batch.reqs if req.lora_id]
            lora_info = f", lora_ids={len(set(lora_ids))}" if lora_ids else ""
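The suggested fix can be verified with a small self-contained example (the `Req` and `Batch` classes here are stand-ins for the real ScheduleBatch types):

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Req:
    lora_id: Optional[str]


@dataclass
class Batch:
    reqs: List[Req]


batch = Batch(reqs=[Req("a"), Req(None), Req("b"), Req("a")])

# Collect LoRA IDs from the requests themselves, skipping base-model requests.
lora_ids = [req.lora_id for req in batch.reqs if req.lora_id]
lora_info = f", lora_ids={len(set(lora_ids))}" if lora_ids else ""
print(lora_info)  # → ", lora_ids=2"
```

Counting distinct IDs (via `set`) rather than requests matches the apparent intent of the original logging code.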

@glenliu21 glenliu21 marked this pull request as draft November 24, 2025 23:51
@glenliu21 glenliu21 changed the title [1/2] Async LoRA prefetch - add scheduler logic for lora prefetch (WIP) Async LoRA prefetch - add scheduler logic for lora prefetch Nov 24, 2025
@github-actions github-actions bot added the documentation, dependencies, sgl-kernel, npu, diffusion, and model-gateway labels Nov 27, 2025
@glenliu21 (Contributor, Author) commented

Moved to #14190.

