[Core] Reduce TTFT with concurrent partial prefills#10235
comaniac merged 71 commits into vllm-project:main from
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
vllm/config.py
max_num_batched_tokens: Optional[int],
max_num_seqs: int,
max_model_len: int,
num_prefill_slots: int = 1,
Is this actually "the maximum number of prefill sequences in a batch"? If so, could we name it something more informative, like max_num_batched_prefill_seqs?
It's technically only the number of partial prefills allowed in a batch. You could still have, say, 100 sequence groups with 5 prompt tokens each all scheduled in a single step here.
max_num_partial_prefills?
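For clarity, here's a minimal sketch of the config shape being discussed, with the rename applied (hypothetical code, not vLLM's actual SchedulerConfig):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SchedulerConfig:
    """Hypothetical sketch of the scheduler knobs under discussion."""
    max_num_batched_tokens: Optional[int]
    max_num_seqs: int
    max_model_len: int
    # Renamed from num_prefill_slots: it limits concurrent *partial*
    # prefills, not the total number of prefill sequences in a batch.
    max_num_partial_prefills: int = 1


# A default of 1 preserves the existing single-partial-prefill behavior.
cfg = SchedulerConfig(max_num_batched_tokens=512,
                      max_num_seqs=256,
                      max_model_len=4096)
print(cfg.max_num_partial_prefills)  # → 1
```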
vllm/core/scheduler.py
# Requests with more than (4% max context length) tokens to prefill
# are "big".
Why this definition and threshold?
The entire goal here is to not allow decode to be starved by the prefill phase blocking on long requests. Quoting the PR description:

A single very large prompt will block all other prompts from prefilling for many iterations. This can eventually starve decoding: for example, a 130k token prompt with --max-num-batched-tokens=512 will take about 250 iterations to prefill, in which time the currently decoding sequences may all finish. Send a few of these requests at once and very quickly nothing will be decoding.

Just allowing concurrent partial prefills doesn't solve the problem by itself, because multiple long requests could still block up the prefill. So what we do is only allow a single long request to prefill at a time, and allow smaller requests to be pulled from the waiting queue instead of more long ones.
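The admission rule described above can be sketched as a toy function (illustrative only; the names and exact mechanics are assumptions, not the PR's actual scheduler code):

```python
# Toy model of the policy: at most one "long" request may partially
# prefill at a time (max_long_partial_prefills=1 by default), while
# shorter requests behind it in the queue can still be admitted.
def admit_prefills(waiting_prompt_lens,
                   max_num_partial_prefills=4,
                   max_long_partial_prefills=1,
                   long_threshold_tokens=4096):
    admitted, num_long = [], 0
    for prompt_len in waiting_prompt_lens:
        if len(admitted) >= max_num_partial_prefills:
            break
        is_long = prompt_len > long_threshold_tokens
        if is_long and num_long >= max_long_partial_prefills:
            continue  # skip this long request; keep scanning for short ones
        admitted.append(prompt_len)
        num_long += is_long
    return admitted


# Two 100k+ prompts arrive first, but only one occupies a prefill slot;
# the small prompts behind them still get scheduled.
print(admit_prefills([130_000, 120_000, 50, 80]))  # → [130000, 50, 80]
```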
@pytest.mark.parametrize("model", ["facebook/opt-125m"])
@pytest.mark.parametrize("max_num_partial_prefills", [2, 4, 8])
def test_chunked_prefill_with_actual_engine(model: str,
cc @rickyyx here's what we tried to do to test that the sampler doesn't trip any assertions: we put multiple prompts into an engine and manually step it forward with them all partially prefilled.
This pull request has merge conflicts that must be resolved before it can be merged.
Signed-off-by: Joe Runde <Joseph.Runde@ibm.com>
Signed-off-by: Prashant Gupta <prashantgupta@us.ibm.com>
Wow, this feature is very cool! Awesome PR! But does it help in the case of 2 large requests? Let's say I have
@joerunde there seem to be some typos regarding
And by the way, it's not clear to me whether the 4% is measured from the bottom or from the top. E.g., if the model has a context length of 100, is a prompt "long" above 4 tokens or above 96? 4% sounds too small to be a threshold for "long", no?
@hibukipanim Both 0 and None don't make sense as a threshold. It is indeed 4% from the bottom. With modern models this actually makes sense, since they often have a max context length of ~100k, and 4000 tokens is indeed already quite long (though for most use cases it probably still makes sense to tune this a little and not rely on the default).
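To make "from the bottom" concrete, here's the arithmetic as a tiny sketch (the 4% default is from this PR; the helper name is made up):

```python
# "Long" threshold = 4% of the model's max context length, measured
# from the bottom: a prompt counts as long once it *exceeds* this
# many tokens.
def long_prompt_threshold(max_model_len: int, fraction: float = 0.04) -> int:
    return int(max_model_len * fraction)


print(long_prompt_threshold(100))      # → 4 (long means > 4 tokens, not > 96)
print(long_prompt_threshold(100_000))  # → 4000
```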
Signed-off-by: Joe Runde <Joseph.Runde@ibm.com> Signed-off-by: Prashant Gupta <prashantgupta@us.ibm.com> Co-authored-by: Prashant Gupta <prashantgupta@us.ibm.com> Co-authored-by: Cody Yu <hao.yu.cody@gmail.com> Signed-off-by: Louis Ulmer <ulmerlouis@gmail.com>
# What this PR does / why we need it?

When processing a mix of large and small requests, the TTFT of responses is significantly reduced. See vllm-project/vllm#10235, which achieves the same effect by simply limiting the number of concurrent prefills for long requests. This solution can be applied to both AscendScheduler (V0) and the vLLM Scheduler (V1). Tests show that TTFT improves significantly when handling such mixed requests; however, this capability is currently missing when the Ascend Scheduler is enabled.

This benchmark used the Qwen3-8B model with a context length of 128K, running on a single card. For the dataset, sharegpt_clean was used, with its content concatenated and cropped. Small requests with token=50 and medium requests with token=10240 were constructed (there were also large requests with token=102400, but these were ignored because, when using the prefill-first scheduling strategy, max_num_batched_tokens will not be set to such a large value). When loading vLLM, max_num_batched_tokens=22000 was set. This length can accommodate two medium-sized requests and some short requests, reflecting an extreme scenario where the budget is almost entirely occupied by longer requests.

Next, we mixed 990 small requests and 100 medium requests into one load scenario (hereinafter referred to as 10%), and similarly generated load scenarios with 5% and 1% medium requests. Performance tests were conducted separately for the vLLM Scheduler, AscendScheduler, and AscendScheduler with long-prompt concurrency set to 1.

- vLLM version: v0.10.2
- vLLM main: vllm-project/vllm@1dfea5f

Signed-off-by: Csrayz <jover@cmbchina.com>
Not bad
Replaces #10061, as inspired by @njhill and @comaniac's comments. Co-authored by @prashantgupta24
Context: our customers running large multi-tenanted SaaS deployments of vLLM have a problem where high volumes of small-prompt requests are usually processed smoothly, but quickly pile up in a giant queue when a small number of large-prompt requests are submitted. We see the decoding throughput drop to zero on multiple replicas when this happens.
The current chunked prefill implementation only allows a single sequence to be partially prefilled at a time. This has a few limitations:

- A single very large prompt will block all other prompts from prefilling for many iterations. For example, a 130k token prompt with --max-num-batched-tokens=512 will take about 250 iterations to prefill, in which time the currently decoding sequences may all finish. Send a few of these requests at once and very quickly nothing will be decoding.

This PR implements both:

- --max-num-partial-prefills=N to set the limit on the number of sequences that can be concurrently, partially prefilled.
- --max-long-partial-prefills=N to set the limit on the number of long sequences that can be concurrently prefilled. This defaults to 1 sequence.
- --long-prefill-threshold=x% to set a percentage of the context length that determines which sequences are considered "long". This defaults to 4%.

This is implemented in the v0 scheduler. We're aware that the v1 implementation is underway and will later become the default, but we need a fix for our customers soon, and we hope that what we discover here may help inform a different, better solution in the v1 scheduler.
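One way to picture how these flags interact with the token budget, under the assumption that the per-step budget is split evenly across concurrent partial prefills (an illustrative sketch; the real scheduler logic may differ):

```python
# With N concurrent partial prefills sharing --max-num-batched-tokens,
# each prefill advances by roughly budget // N tokens per step.
def per_prefill_budget(max_num_batched_tokens: int,
                       max_num_partial_prefills: int) -> int:
    return max_num_batched_tokens // max_num_partial_prefills


print(per_prefill_budget(512, 1))  # → 512 (today's single-prefill behavior)
print(per_prefill_budget(512, 4))  # → 128 tokens per partial prefill
```

The trade-off this hints at: higher N lowers TTFT for short prompts stuck behind long ones, at the cost of each individual prefill progressing more slowly.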
To test this we created three scenarios, a “medium request” case, a “large request” case, and a “mixed” case.
For the medium request case, we created a subset of the sharegpt dataset with 900 small requests (<50 prompt characters) and 100 of the largest requests (typically between 10k and 20k prompt characters, which we call "medium" sized). We modified the benchmark_serving.py test to not filter out any of the small or large requests, and ran it with this dataset. What we expect to find is similar throughput compared to the main branch, but much lower TTFT on the small requests. Since 10% of the requests are larger than the rest, we should see better TTFT at p90 and below, with comparable TTFT above p90.

For the large request case, we took 990 of the smallest requests from the sharegpt dataset, and then took 10 of the largest requests and duplicated the prompts until they were around 100k characters in length. We ran this in the same way as the medium request case, and here we expect to see smaller TTFT across the board, since the small requests will no longer be blocked from prefilling by the few very large requests.

For the mixed case, we used 850 "small" and 140 "medium" requests, as well as 10 "large" requests where we duplicated the prompts up to 200k characters.
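The "large request" dataset construction described above could be reproduced along these lines (a hedged sketch; the authors' actual script, cutoffs, and duplication scheme are not shown in the PR):

```python
# Build the "large request" case: the 990 shortest prompts, plus 10 of
# the longest prompts duplicated out to ~100k characters each.
def build_large_case(prompts, n_small=990, n_large=10, target_chars=100_000):
    by_len = sorted(prompts, key=len)
    small = by_len[:n_small]
    large = []
    for p in by_len[-n_large:]:
        reps = -(-target_chars // max(len(p), 1))  # ceil division
        large.append((p + " ") * reps)  # duplicate until ~target_chars long
    return small + large


# Toy stand-in for the sharegpt prompts:
dataset = build_large_case(["short " * 8] * 990
                           + ["a much longer prompt " * 50] * 10)
print(len(dataset))                 # → 1000
print(len(dataset[-1]) >= 100_000)  # → True
```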
All tests were run on a single 80GB A100, with the command:
We ran the tests against the main branch (commit 874f551b3626321f6bf9a902b8fd9fc1fa7c7f2e), as well as this PR with the new optimization both disabled (--max-num-partial-prefills=1) and enabled (--max-num-partial-prefills=4).

The results are shown here:

The TTFT improvements are very easy to see: in the medium case we cut the p90 TTFT in half, and in the large case we cut it nearly 30x. In both cases we did not measure a throughput drop when run with --max-num-partial-prefills=1, and the throughput drop with --max-num-partial-prefills=4 is minimal.

Surprisingly, along with the massive TTFT improvements in the "mixed" test case, we also see a 4% throughput improvement (3506 tokens/s, up from 3368 tokens/s). Based on the fact that ITL still looks a little slower, it seems that the throughput is higher simply because more requests were able to be successfully scheduled at the same time.
cc @rickyyx
PR Checklist
Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process.
PR Title and Classification
Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following:
- [Bugfix] for bug fixes.
- [CI/Build] for build or continuous integration improvements.
- [Doc] for documentation fixes and improvements.
- [Model] for adding a new model or improving an existing model. Model name should appear in the title.
- [Frontend] for changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.)
- [Kernel] for changes affecting CUDA kernels or other compute kernels.
- [Core] for changes in the core vLLM logic (e.g., LLMEngine, AsyncLLMEngine, Scheduler, etc.)
- [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD]).
- [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.
Code Quality
The PR needs to meet the following code quality standards:

- Use format.sh to format your code.
- Add documentation to docs/source/ if the PR modifies the user-facing behaviors of vLLM. It helps vLLM users understand and utilize the new features or changes.

Adding or changing kernels
Each custom kernel needs a schema and one or more implementations to be registered with PyTorch.
- Operations that return Tensors require meta-functions. Meta-functions should be implemented and registered in Python so that dynamic dims can be handled automatically. See above documents for a description of meta-functions.
- Use torch.library.opcheck() to test the function registration and meta-function for any registered ops. See tests/kernels for examples.

Notes for Large Changes
Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not go through the PR.

What to Expect for the Reviews
The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient and make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:

- After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.

Thank You
Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!