[Frontend] Fix request length check and add option to disallow auto truncation in scheduler #2876

Merged 3 commits into sgl-project:main on Jan 16, 2025

Conversation

@CatherineSue (Collaborator) commented on Jan 14, 2025

Motivation

  • TokenizerManager should check for input_ids is not None instead of obj.input_ids, since obj.input_ids could be None.
  • When max_tokens + input_len exceeds the model's context length, a ValueError should be raised.
  • Silently truncating inputs that exceed the maximum length in scheduler.py is easy for users to miss.

Modifications

  • Used input_ids instead of obj.input_ids in TokenizerManager
  • Added a check for max_tokens in TokenizerManager
  • Added an --allow-auto-truncate flag to optionally enable the old truncation behavior in scheduler.py. Default is False. A rough sketch of these checks is shown below.
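
The intent of the first two modifications can be sketched roughly as follows. This is a minimal, hypothetical sketch: the function name, `obj`, and `context_len` are illustrative, not the actual SGLang implementation.

```python
def validate_request_length(obj, input_ids, context_len):
    """Hypothetical sketch of the TokenizerManager-side length checks."""
    # Use the tokenized input rather than obj.input_ids, which may be None
    # when the caller sent raw text instead of token ids.
    if input_ids is None:
        return
    input_token_num = len(input_ids)

    # The input alone must fit in the model's context window.
    if input_token_num >= context_len:
        raise ValueError(
            f"The input ({input_token_num} tokens) is longer than the "
            f"model's context length ({context_len} tokens)."
        )

    # The requested output must also fit on top of the input.
    max_new_tokens = obj.sampling_params.get("max_new_tokens")
    if max_new_tokens is not None and max_new_tokens + input_token_num >= context_len:
        raise ValueError(
            f"max_tokens ({max_new_tokens}) plus the input length "
            f"({input_token_num}) exceeds the context length ({context_len})."
        )
```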

Checklist

  • Format your code according to the Contributor Guide.
  • Add unit tests as outlined in the Contributor Guide.
  • Update documentation as needed, including docstrings or example tutorials.

@CatherineSue force-pushed the disallow-auto-truncate branch 4 times, most recently from 7cdf63d to e20cbf4 on January 14, 2025 at 18:34
@@ -683,11 +694,22 @@ def handle_embedding_request(

# Truncate prompts that are too long
if len(req.origin_input_ids) >= self.max_req_input_len:

CatherineSue (Collaborator, Author):
qq: why do we have truncation here after checking it in TokenizerManager? @merrymercy

Contributor:
TokenizerManager checks the input against the context length from config.json.
Sometimes the GPU memory is not enough, so the KV cache pool will not be large enough to hold even a single request at the maximum context length.
scheduler.py does this finer-grained check against the KV cache pool size.
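
To illustrate the division of labor, here is a hedged sketch of the scheduler-side behavior with the new flag. Names such as check_req_length and the message wording are illustrative; max_req_input_len stands for the limit derived from the KV cache pool, and the surrounding diff suggests the real code builds an error message rather than raising directly.

```python
import logging

logger = logging.getLogger(__name__)

def check_req_length(req, max_req_input_len, allow_auto_truncate):
    """Illustrative sketch only. Returns None if the request can proceed,
    otherwise an error message for the client."""
    if len(req.origin_input_ids) < max_req_input_len:
        return None  # fits in the KV cache pool

    if allow_auto_truncate:
        # Old behavior, now opt-in via --allow-auto-truncate:
        # right-truncate and warn instead of failing the request.
        logger.warning(
            "Input of %d tokens exceeds max_req_input_len=%d; truncating.",
            len(req.origin_input_ids),
            max_req_input_len,
        )
        req.origin_input_ids = req.origin_input_ids[:max_req_input_len]
        return None

    # New default: surface the problem instead of silently dropping tokens.
    return (
        f"Input of {len(req.origin_input_ids)} tokens exceeds the maximum of "
        f"{max_req_input_len} tokens supported by the KV cache pool."
    )
```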

@CatherineSue force-pushed the disallow-auto-truncate branch 8 times, most recently from 0c69cec to 90629ef on January 15, 2025 at 22:00
@CatherineSue force-pushed the disallow-auto-truncate branch 2 times, most recently from 2ba5e85 to a901fd5 on January 16, 2025 at 21:29
…runcation in scheduler

- `TokenizerManager` should check for `input_ids is not None` instead of `obj.input_ids`, as `obj.input_ids` could be None.
- Silently truncating inputs that exceed the maximum length in scheduler.py is easily missed by users.

This PR:
- Used `input_ids` instead of `obj.input_ids` in TokenizerManager
- Added a unit test for the request length check
- Added --allow-auto-truncate flag to optionally enable the old truncation behavior. Default is False.
@CatherineSue force-pushed the disallow-auto-truncate branch from a901fd5 to fa012c2 on January 16, 2025 at 21:30
and obj.sampling_params.get("max_new_tokens") + input_token_num
>= self.context_len
):
raise ValueError(

Contributor:

This check is too strong. I think people sometimes just set a very large fixed max_new_tokens regardless of the input length. Can we make this a warning instead?

CatherineSue (Collaborator, Author):

I think without this check the behavior is confusing: when I send a request with max_tokens: 200K and ignore_eos: True to llama3.1-405B, the server just silently generates fewer than 200K tokens. vLLM also has this check for max_tokens: https://github.com/vllm-project/vllm/blob/main/vllm/entrypoints/openai/serving_engine.py#L238-L245
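
For illustration, a request of the kind described here might look like the following. This is a sketch that assumes a local SGLang server on http://localhost:30000 exposing the native /generate endpoint; the endpoint and parameter names follow SGLang's documented API, but treat the details as an approximation.

```python
import requests

resp = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Write a very long story.",
        "sampling_params": {
            "max_new_tokens": 200_000,  # far beyond a 128K context window
            "ignore_eos": True,
        },
    },
)
# Before this PR: the server silently generates fewer tokens than requested.
# After this PR: the request fails fast with an error explaining that
# max_new_tokens plus the input length exceeds the context length.
print(resp.status_code, resp.text[:200])
```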

req.origin_input_ids = req.origin_input_ids[:max_req_input_len]
return None
else:
error_msg = (

Contributor:

This check is also too strong. For example, people may use a small GPU to host Llama 3.2; it is likely that they do not have enough memory for 128K tokens. Can we make the warning the default mode instead of an error?

@CatherineSue (Collaborator, Author) replied on Jan 16, 2025:

SGLang has memory profiling to determine the maximum number of tokens it can support based on the model and the GPU hardware, correct? For instance, when I use llama3.1-405B on an 8xH100, SGLang sets max_total_num_tokens to 8k instead of 128k, and it truncates silently here. I feel this is not expected behavior. How is this log properly propagated to users so they know their request is being truncated? And it is right-side truncation. Shouldn't the user be able to control this truncation behavior?

@merrymercy merged commit a8ccacc into sgl-project:main on Jan 16, 2025
15 of 16 checks passed
@merrymercy (Contributor) commented:

@CatherineSue Thanks. It is merged.

@max99x (Contributor) commented on Jan 29, 2025:

Unfortunately this breaks our use case, even with --allow-auto-truncate.

E.g. we use a model with an 8192-token context, send 8000 tokens of input with max_tokens=200, and expect an EOS around 100 tokens in the vast majority of cases. Previously, it would correctly generate ~100 tokens and return successfully. Now it errors out instantly, before even trying to generate.

We could in theory calculate the input size on the client and cap the max_tokens value accurately on our side, but SGLang does not easily expose the tokenizer on the client as far as I know, so that isn't simple.
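
A possible client-side workaround is sketched below, though it has exactly the drawback described: it assumes the served model's tokenizer can be loaded locally (here via Hugging Face transformers, with a placeholder model id) and that it matches the server's tokenizer.

```python
from transformers import AutoTokenizer

CONTEXT_LEN = 8192  # the model's context window from the example above

# Placeholder model id: substitute the tokenizer of the model actually served.
tokenizer = AutoTokenizer.from_pretrained("your-org/your-8k-context-model")

def capped_max_tokens(prompt: str, requested_max_tokens: int) -> int:
    """Count input tokens locally and cap max_tokens so input + output
    stays within the context window."""
    input_len = len(tokenizer.encode(prompt))
    # Keep strictly below the context length, matching the server-side check.
    return max(0, min(requested_max_tokens, CONTEXT_LEN - input_len - 1))

# e.g. with ~8000 input tokens, this caps max_tokens=200 down to what fits.
```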

@binarycrayon (Contributor) commented on Jan 31, 2025:

@CatherineSue @merrymercy please see @max99x's comment. We do online inference in a game loop where it is impossible to know the precise max tokens for a request up front (the context grows as the game is played out). This would force us either to switch to a different model with a larger context length or to fall back to 0.4.1.

timethink pushed a commit to timethink/sglang that referenced this pull request on Mar 9, 2025
@CatherineSue deleted the disallow-auto-truncate branch on March 19, 2025 at 00:35