[Frontend] Fix request length check and add option to disallow auto truncation in scheduler #2876
Conversation
Force-pushed from 7cdf63d to e20cbf4.
@@ -683,11 +694,22 @@ def handle_embedding_request(

# Truncate prompts that are too long
if len(req.origin_input_ids) >= self.max_req_input_len:
qq: why do we have truncation here after checking it in TokenizerManager? @merrymercy
TokenizerManager checks the input against the context length from config.json. Sometimes the GPU memory is not enough, so the KV cache pool will not be large enough to hold even a single request at the maximum context length. scheduler.py does this fine-grained check against the KV cache pool size.
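For clarity, here is a minimal sketch of the two levels of checking described above; the function names and error messages are assumptions for illustration rather than SGLang's actual code, with only `context_len`, `max_req_input_len`, and the config.json/KV-cache distinction taken from the comment:

```python
def tokenizer_manager_check(input_ids, context_len):
    # Coarse check in TokenizerManager against the model context length
    # read from config.json.
    if len(input_ids) >= context_len:
        raise ValueError(
            f"Input has {len(input_ids)} tokens, which exceeds the model "
            f"context length of {context_len}."
        )


def scheduler_check(input_ids, max_req_input_len):
    # Fine-grained check in the scheduler. max_req_input_len is derived from
    # the KV cache pool size, which can be smaller than the context length
    # when GPU memory is tight.
    if len(input_ids) >= max_req_input_len:
        raise ValueError(
            f"Input has {len(input_ids)} tokens, but the KV cache pool only "
            f"fits {max_req_input_len} tokens per request."
        )
```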
Force-pushed from 0c69cec to 90629ef.
Force-pushed from 2ba5e85 to a901fd5.
[Frontend] Fix request length check and add option to disallow auto truncation in scheduler

- `TokenizerManager` should check for `input_ids is not None` instead of `obj.input_ids`, as `obj.input_ids` could be None.
- Silently truncating inputs that exceed the maximum length in scheduler.py is easy for users to miss.

This PR:
- Uses `input_ids` instead of `obj.input_ids` in TokenizerManager.
- Adds a unit test for the request length check.
- Adds an `--allow-auto-truncate` flag to optionally enable the old truncation behavior. Default is False.
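As a rough illustration of the new flag, here is a sketch of how such an opt-in option could be wired with argparse; this is an assumption about the mechanism, not SGLang's actual server-argument code:

```python
import argparse

parser = argparse.ArgumentParser()
# Hypothetical registration of the flag added in this PR: default False, so
# over-long requests error out unless the user explicitly opts back in to the
# old silent-truncation behavior.
parser.add_argument(
    "--allow-auto-truncate",
    action="store_true",
    help="Allow the scheduler to truncate requests that exceed the maximum "
    "input length instead of returning an error.",
)

args = parser.parse_args(["--allow-auto-truncate"])
print(args.allow_auto_truncate)  # True only when the flag is passed
```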
Force-pushed from a901fd5 to fa012c2.
    and obj.sampling_params.get("max_new_tokens") + input_token_num
    >= self.context_len
):
    raise ValueError(
This check is too strong. I think people sometimes just set a very large fixed `max_new_tokens` regardless of the input length. Can we make this a warning instead?
I think without this check the behavior is weird: when I send a request with `max_tokens: 200K` and `ignore_eos: True` to llama3.1-405B, the server will just silently generate fewer than 200K tokens. vLLM also has this check for max_tokens: https://github.com/vllm-project/vllm/blob/main/vllm/entrypoints/openai/serving_engine.py#L238-L245
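For context, the check being debated is roughly the following simplified sketch (function name and error message assumed; the real check lives in TokenizerManager, as in the diff above):

```python
def validate_request_length(input_token_num, sampling_params, context_len):
    # Reject requests whose requested max_new_tokens cannot fit in the
    # remaining context window.
    max_new_tokens = sampling_params.get("max_new_tokens")
    if (
        max_new_tokens is not None
        and max_new_tokens + input_token_num >= context_len
    ):
        raise ValueError(
            f"Requested {input_token_num} input tokens + {max_new_tokens} "
            f"max_new_tokens exceeds the context length of {context_len}."
        )


try:
    # 8000 input tokens plus max_new_tokens=200 against an 8192-token context
    # is rejected -- exactly the case raised later in this thread.
    validate_request_length(8000, {"max_new_tokens": 200}, 8192)
except ValueError as e:
    print(e)
```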
    req.origin_input_ids = req.origin_input_ids[:max_req_input_len]
    return None
else:
    error_msg = (
This check is also too strong. For example, when people use a small GPU to run Llama 3.2, it is likely that they do not have enough memory for 128K tokens. Can we make a warning the default instead of an error?
SGLang has memory profiling to check the maximum number of tokens it can support based on the model and the GPU hardware, correct? For instance, when I use llama3.1-405B on an 8xH100, SGLang sets max_total_num_tokens to 8k instead of 128k, and it truncates silently here. I feel this is not expected behavior. How is this properly propagated to users so they know their request is getting truncated? And it is right truncation. Shouldn't the user be able to control this truncation behavior?
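To summarize the behavior under discussion, here is a sketch of the scheduler-side branch after this PR; the surrounding structure and error handling are assumed, and only the truncation line itself comes from the diff above:

```python
def handle_overlong_request(req, max_req_input_len, allow_auto_truncate):
    if len(req.origin_input_ids) < max_req_input_len:
        return None  # Fits within the KV-cache-derived limit; nothing to do.
    if allow_auto_truncate:
        # Old behavior: silent right-truncation to the KV-cache-derived limit.
        req.origin_input_ids = req.origin_input_ids[:max_req_input_len]
        return None
    # New default: surface the problem instead of truncating silently.
    raise ValueError(
        f"Request has {len(req.origin_input_ids)} input tokens, but the KV "
        f"cache pool only supports {max_req_input_len}. Pass "
        f"--allow-auto-truncate to restore the old truncation behavior."
    )
```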
@CatherineSue Thanks. It is merged.
Unfortunately this breaks our use case, even with --allow-auto-truncate. E.g. we use a model with an 8192 context, send 8000 tokens of input, and set max_tokens=200, expecting an EOS around the 100-token mark in the vast majority of cases. Previously, it would correctly generate ~100 tokens and return successfully. Now it instantly errors out before even trying to generate. We could in theory calculate the input size on the client and cap the max_tokens value accurately on our side, but SGLang does not easily expose the tokenizer to the client as far as I know, so that isn't simple.
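A possible client-side workaround along the lines described in that comment, assuming the served model's tokenizer can be loaded from Hugging Face (which is the inconvenient part, since SGLang does not expose its tokenizer to the client); the model ID and context length below are placeholders, not values from this thread:

```python
from transformers import AutoTokenizer

MODEL_ID = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder: use the served model
CONTEXT_LEN = 8192                             # context size from the example above

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)


def capped_max_tokens(prompt: str, requested_max_tokens: int) -> int:
    # Count prompt tokens locally and cap max_tokens so that
    # input + output stays within the context window.
    input_len = len(tokenizer.encode(prompt))
    budget = max(CONTEXT_LEN - input_len - 1, 0)
    return min(requested_max_tokens, budget)


print(capped_max_tokens("Hello, world!", 200))
```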
@CatherineSue @merrymercy please see @max99x's comment. We do online inference in a game loop where it is impossible to precisely obtain max tokens for a request (since the context grows as the game is played out). This would force us either to switch to a different model that has a larger context length or default to
[Frontend] Fix request length check and add option to disallow auto truncation in scheduler (sgl-project#2876)

Motivation
- `TokenizerManager` should check for `input_ids is not None` instead of `obj.input_ids`, as `obj.input_ids` could be None.

Modifications
- Use `input_ids` instead of `obj.input_ids` in TokenizerManager.