Refactor vLLM generation [1/N]: Extract vLLM generation #4700
Merged: albertvillanova merged 35 commits into huggingface:main from albertvillanova:refactor-vllm-1 on Jan 27, 2026 (+812 −795).
Commits
fb3fcf1  Extract vLLM generation from GRPOTrainer
5f7326f  Merge remote-tracking branch 'upstream/main' into refactor-vllm-1
793ecc0  Extract vLLM generation from RLOOTrainer
b5d27e1  Merge remote-tracking branch 'upstream/main' into refactor-vllm-1
264febc  Update deprecated max_prompt_length with vllm_max_model_length after …
dae8c28  Set logprobs_mode in RLOOTrainer as well
ae18630  Remove back-reference to trainer
5d6eb43  Remove unused param 'mode'
1a9ca52  Merge remote-tracking branch 'upstream/main' into refactor-vllm-1
fcd3f56  Pass profiler to decouple trainer from VLLMGeneration
3f11f99  Decouple trainer by making rollout_func a single-argument callable
ec26f47  Merge remote-tracking branch 'upstream/main' into refactor-vllm-1
35f6b4b  Pass profiler to generate
54b52f8  Merge remote-tracking branch 'upstream/main' into refactor-vllm-1
1323ef3  Merge remote-tracking branch 'upstream/main' into refactor-vllm-1
7201755  Merge remote-tracking branch 'upstream/main' into refactor-vllm-1
c607a8e  Fix style
9a91585  Make precommit
da8f976  Replace args with explicit parameters
9580b71  Rename vllm_quantization as quantization
b66ab8a  Merge remote-tracking branch 'upstream/main' into refactor-vllm-1
2ae35cc  Merge branch 'main' into refactor-vllm-1
17b57a1  Import VLLMGeneration unconditionally
67c93c8  Fix server_timeout type and align default value
62ff3cc  Move __init__ docstring to class header
4b0c483  Add explanatory comments
a21e401  Fix misleading renaming
167be7d  Remove mention to old code lines
f80a327  Add type hints
b7726f4  Pass kwargs directly
04138e7  Merge remote-tracking branch 'upstream/main' into refactor-vllm-1
1b0c2a6  Update docstring
edd0101  Update param defaults
ddb9ce5  Sort params
ea03910  Fix param renaming

(all 35 commits authored by albertvillanova)
New file (diff hunk @@ -0,0 +1,25 @@):

```python
# Copyright 2020-2026 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Generation backends for TRL trainers."""

from ..import_utils import is_vllm_available


__all__ = []

if is_vllm_available():
    from .vllm_generation import VLLMGeneration

    __all__.append("VLLMGeneration")
```
Review comment: not necessarily for this PR, but since we add a generation submodule, we could have `vllm_client` in this submodule as well.