[VLLM] Allows for max tokens to be set in model config file #547
Conversation
…ghteval into nathan-fix-vllm-from-file
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
sampling_params.max_tokens = (
    max_new_tokens if sampling_params.max_tokens is None else sampling_params.max_tokens
)
vLLM types `max_tokens` as `Optional[int]` but defaults it to 16. That means whenever `sampling_params` is created, `max_tokens` already holds 16, so the `is None` check above never fires and `sampling_params.max_tokens` always ends up equal to 16.
The lighteval benchmark then goes on to warn that the output is not in the Gold Format ...
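A minimal sketch of the problem, assuming the stock vLLM `SamplingParams` defaults (the `2048` fallback value is purely illustrative):

```python
from vllm import SamplingParams

# vLLM's SamplingParams defaults max_tokens to 16, not None,
# so a freshly created object never satisfies `max_tokens is None`.
sampling_params = SamplingParams()
print(sampling_params.max_tokens)  # 16

# The guarded assignment from the diff therefore keeps the default 16
# instead of applying max_new_tokens.
max_new_tokens = 2048  # hypothetical value read from the model config file
sampling_params.max_tokens = (
    max_new_tokens if sampling_params.max_tokens is None else sampling_params.max_tokens
)
print(sampling_params.max_tokens)  # still 16
```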
* commit
* commit
* Update src/lighteval/main_vllm.py
* commit
* change doc
* change doc
* change doc
* allow max new token to be set in model config file
No description provided.
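One possible way to make the configured limit take effect, sketched here with a hypothetical `max_new_tokens` config key (the actual key and plumbing in lighteval's model config file may differ), is to pass the value when `SamplingParams` is constructed rather than patching the default afterwards:

```python
from vllm import SamplingParams

def build_sampling_params(config: dict) -> SamplingParams:
    # `max_new_tokens` is a hypothetical config key used for illustration only.
    max_new_tokens = config.get("max_new_tokens")
    if max_new_tokens is not None:
        # Setting max_tokens explicitly avoids relying on vLLM's default of 16.
        return SamplingParams(max_tokens=max_new_tokens)
    return SamplingParams()

params = build_sampling_params({"max_new_tokens": 2048})
print(params.max_tokens)  # 2048
```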