[Compile] Conditional compilation. Introduce compile_ranges #24252
base: main
Conversation
| def __call__(self, *args) -> Any: |
Btw, does this PR work, or is it mostly WIP? (Are you sure that the graph generated ends up being dynamic on the specific range that is passed?)
There's one problem that I don't know how to solve yet. Let's say we're compiling with ranges [2, 16] and (16, 4096]. Each compilation needs its own ShapeEnv (environment with symbols in it), which has the batch_size constrained to the particular range.
So what we should do is, for each range, take the current ShapeEnv (which thinks the batch_size is dynamic on range [2, 4096]), clone it, constrain it to the current range (e.g. [2, 16]), and use this throughout the compilation.
I don't know how to "clone" ShapeEnvs. Is there anything else we can do here @laithsakka @bobrenjc93 ?
It works already, leaving aside a PyTorch standalone_compile issue that should be fixed in the new PyTorch release in this commit. The graphs for each range are generated dynamically, and fusions are applied differently in each graph.
Dynamo traces out a graph that is fully dynamic over the batch_size. We should tell torch.compile what we know about the batch_size for each range, for example, that it is constrained to [2, 16]. This will help it generate better code. In order to do this, you'll need to grab the SymInt that is the batch_size and add constraints to it.
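A minimal sketch of what adding such constraints could look like, assuming the batch_size SymInt has already been extracted from the traced graph (the helper name and arguments are illustrative, not part of this PR):

```python
import torch

def constrain_batch_size(batch_size: torch.SymInt, lo: int, hi: int) -> None:
    # torch._check records each predicate in the ShapeEnv backing the SymInt,
    # so later compilation steps may assume lo <= batch_size <= hi for this range.
    torch._check(batch_size >= lo)
    torch._check(batch_size <= hi)
```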
Ok, got it. These are the hints for torch.compile I meant at the meeting. Thanks, I'll add the ShapeEnv constraints here.
If we are using is_applicable_for_range (the current form of the PR), this is fine. If we want to go with the other approach [see my other comment on the PR], which is more complicated (I think if we do that we'd want a reason), then yeah, this is problematic.
| return compile_range is not None and ( |
|     compile_range[0] |
|     == compile_range[1]) and (compile_range[1] % tp_size == 0) |
The way I originally thought of doing this is something like:
return statically_known_true(batch_size % tp_size == 0)
If we are able to access the batch_size SymInt here, then we are able to query things about it.
cc @laithsakka @bobrenjc93 on whether I'm butchering this API
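For illustration, a hedged sketch of this check, where num_tokens stands in for the batch_size SymInt (the function and parameter names are assumptions, not the PR's actual code):

```python
import torch
from torch.fx.experimental.symbolic_shapes import statically_known_true

def pass_is_applicable(num_tokens: torch.SymInt, tp_size: int) -> bool:
    # True only if the ShapeEnv can prove the divisibility for every value
    # the symbol can take in the current range, without installing a guard.
    return statically_known_true(num_tokens % tp_size == 0)
```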
Could you elaborate on how statically_known_true is going to improve the existing approach? Is it more stable?
Instead of implementing your own range analysis, PyTorch already encodes range information in the SymInts themselves. So this is more of a code-reuse thing.
So it really depends on the goals of those ranges. If the goal is solely/mainly to allow custom passes to branch on ranges, this is fine. In fact, it's simpler than mutating the shape env and having to fork it.
Also, we can then keep the invariant that Inductor itself does not specialize, and run the same checks here (which we do not have yet).
On the other hand, if someone really thinks that Inductor itself can do significantly better if we actually specialize the shape env, then yeah, we would have to do something else.
But it sounds to me like the intention is the former?
@ilmarkov Out of curiosity, do you have a sense of how much of a perf win you'll get out of this (and from which models)?
This pull request has merge conflicts that must be resolved before it can be merged.
@bobrenjc93 Without multiple graphs, our fallback (for large input sizes, i.e. when we don't use allreduce fusion) uses either custom ops or non-optimized PyTorch operations, which are slower than the torch-generated Triton operations. I think a reasonable perf comparison was done in #19830
| if compile_range[0] == compile_range[1]: |
|     dynamic_shapes = "from_example_inputs" |
| else: |
|     dynamic_shapes = "from_graph" |
both "from_graph" and "from_tracing_context" here have the same effect of getting the shape env we traced the DS graph with? if yes lets do less divergence.
We want to get this PR over the line soon, could you take this on in a follow up?
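For context, a hedged sketch of how this branch could feed into Inductor's standalone_compile; gm and example_inputs are assumed from the surrounding backend code, and the string values are the ones mentioned in this thread:

```python
from torch._inductor import standalone_compile

# Specialize when the range collapses to a single size; otherwise keep the
# symbolic shapes recorded when the graph was traced.
if compile_range[0] == compile_range[1]:
    dynamic_shapes = "from_example_inputs"
else:
    dynamic_shapes = "from_graph"

compiled_fn = standalone_compile(gm, example_inputs, dynamic_shapes=dynamic_shapes)
```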
laithsakka
left a comment
One good side effect of this, other than custom passes, is that each range is tuned in Inductor with a hint from that range, meaning we can also use this to ensure that small inputs vs. large inputs are max-autotuned with separate hints.
Splitting ranges would also work for unbacked symbols, which is good! (Well, except that we would have to override the hint for unbacked with the actual example value when we do the range compilations, cc @bobrenjc93.)
Here is one concern with this: it will make the soundness story with respect to the DS added by Inductor harder. The ideal, and only actually right, fix is to use unbacked, but unbacked comes with a perf hit. With this, we now have so much more branching that we would need to track Inductor guards for each of those compilations.
This pull request has merge conflicts that must be resolved before it can be merged.
| def test_compile_ranges(use_fresh_inductor_cache): |
|     post_grad_range_checker = PostGradRangeChecker( |
How come this works without disabling the vllm cache?
Probably the clean inductor cache allows us to avoid cache hits in the vllm cache.
Hmm ok
| compilation_config=CompilationConfig( |
|     mode=CompilationMode.VLLM_COMPILE, |
|     compile_ranges_split_points=[8, 32], |
|     compile_sizes=[16, 64, 128], |
nit: I wonder if we should now call those specialize sizes?
compile_specialize_sizes? so that it is symmetrical to compile_ranges
Yeah, we can do that in a follow-up.
| compilation_start_time = 0.0 |
| class PiecewiseCompileInterpreter(torch.fx.Interpreter): |
Wonder if we should just replace this with an FX graph pass at this point (not in this PR).
Could you explain in more detail what you mean? We'll add it to the list of follow-ups in the PR description.
Yeah I'm not sure I understand what you mean, this sounds like a big change - could you open an RFC to explain what you mean? Also cc @zou3519
| # First we try to find the range entry for the concrete compile size |
| # If not found, we search for the range entry |
| # that contains the runtime shape. |
| if runtime_shape in self.compile_sizes: |
I mean, do you need the branch here? The other path works for all cases, no?
Yes, if we add the compile_sizes-based ranges to compile_ranges (at the moment they are in range_entries) and sort their keys. The current way makes it clearer that we first try to specialize for the compile_sizes and then search for the ranges.
Yeah, because the specific size can overlap with the ranges, so I think this is fine.
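A minimal sketch of the lookup discussed in this thread; the function name and the dict layout of range_entries are hypothetical, only the priority order (exact sizes first, then ranges) comes from the PR:

```python
def find_range_entry(runtime_shape, compile_sizes, range_entries):
    # Exact compile sizes take priority, even though they may also fall
    # inside one of the ranges.
    if runtime_shape in compile_sizes:
        return range_entries[(runtime_shape, runtime_shape)]
    # Otherwise fall back to the range that contains the runtime shape.
    for (start, end), entry in range_entries.items():
        if start <= runtime_shape <= end:
            return entry
    return None
```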
| all_sizes.update([x for x in warmup_sizes if isinstance(x, int)]) |
| for compile_range in compile_ranges: |
|     if not any(x in compile_range for x in all_sizes): |
|         warmup_sizes.append(compile_range.end) |
Hmm, wonder what the best value to pass here is: end, start, or midpoint? This will be used as the hint for the Inductor compilation (well, unless we cache hit).
This actually brings up a new question: if we have two identical graphs, Inductor's internal cache will hit even if the ranges are different (the hint is different). Do we want to force a cache miss in that case (kind of add the hint to the internal Inductor cache lookup)?
We add that cache hint via the PostGradPassManager
> mm wonder what is the best value to pass here? end, start or mid point

I think the end is the best value; kernel perf often changes after passing a power-of-two multiple (which is what end is).
laithsakka
left a comment
Took another pass; looks good overall, some nits.
This pull request has merge conflicts that must be resolved before it can be merged.
This pull request has merge conflicts that must be resolved before it can be merged.
@ProExpertProg I confirm that I can replicate the failure of the Prithvi tests. I don't know yet why this is failing. I will spend some time tomorrow to debug what is going on.
@christian-pinto sounds great, thanks for helping with this! Let us know if you need any help |
Second part of splitting #22086
Dynamic Graph dispatch via compile_ranges: Introduces a new configuration option, compile_ranges, as an alternative to compile_sizes. This enables dynamic dispatch to different compiled graphs based on the input batch size.
With this approach, when allreduce fusion is enabled, vllm adds an additional compile range split point in order to separate the graphs: 1. One with fused allreduce for small-to-medium shape inputs. 2. One with NCCL-based allreduce for large shape inputs.
The existing compile_sizes feature is extended and generalized with compile_ranges. Defined by split points, these ranges allow vllm to dynamically dispatch requests to specific, pre-compiled graphs based on input batch size. For example, a configuration of (32, 64) defines three distinct ranges: [1, 32), [32, 64), and [64, max_num_batched_tokens). This provides granular control, allowing developers to statically enable or disable fusions within each graph to optimize performance for different batch sizes.
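For illustration, a hedged configuration sketch based on the test quoted earlier in this conversation; the field name follows this PR, and the split points are arbitrary:

```python
from vllm.config import CompilationConfig

# Split points (32, 64) yield the ranges [1, 32), [32, 64) and
# [64, max_num_batched_tokens); each range gets its own compiled graph.
compilation_config = CompilationConfig(
    compile_ranges_split_points=[32, 64],
)
```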
All compilation now goes through piecewise_backend.py. All compilations are now bounded to a certain compile range; fully dynamic shape compilation is removed.
Purpose
Corresponding RFC: #23113
The primary motivation for these changes is to enhance vllm's performance and adaptability for diverse workloads. By supporting allreduce fusion without custom ops and introducing dynamic graph dispatch, we empower users to fine-tune vllm for more efficient and scalable inference.
Test Plan
Added test test_compile_ranges.py
Follow ups
shapenv.assume_ranges, shapenv.do_error_at_specialize.
Performance benchmarks:
Server:
To enable allreduce fusions:
--compilation-config "{\"pass_config\":{\"enable_fusion\":false,\"enable_attn_fusion\":false,\"enable_noop\":true,\"enable_sequence_parallelism\":false,\"enable_async_tp\":false,\"enable_fi_allreduce_fusion\":true}}"
Client. Input len 1024, output len 128.
B200 TP=2, Llama-3.1-70B-Instruct-FP8
Baseline:
Allreduce + RMSNorm + QuantFp8
B200 TP=4
Qwen3-Next-80B-A3B-Instruct, No EP
Baseline:
Allreduce + RMSNorm + QuantFp8
B200 TP=8
DeepSeek-V3.1, No EP.
Baseline:
Allreduce + RMSNorm + QuantFp8
Start up time increase
It increases start-up time as it adds more graph compilations.
For the two-graph compilation (the typical case when allreduce fusions are enabled), cold start for the DeepSeek-V3 model takes 181.91 s; warm start takes 12.40 s.
Based on PR: #24604
First part: #24248