Expose engine_kwargs from SGLang to Verl configuration #1616

Merged

zhaochenyang20 merged 2 commits into verl-project:main on May 21, 2025
Conversation
Collaborator

Great work. I think for VLM, FA3 could be even better?
Collaborator

Which backend is best as the default?
Collaborator

Right now any of them is okay. I think fa3 is the default?
Collaborator
Author

It's determined by SGLang's logic: for now, it's fa3 on Hopper and flashinfer on other architectures. But even on Hopper or Ampere, we have sometimes found triton to be faster, which is why we need this granularity.
Collaborator
Author

It depends on the batch size, context size, and response length; at the very least, we can give users the freedom to decide.
Collaborator

@zhaochenyang20 Shall we merge this?
zhaochenyang20 approved these changes on May 21, 2025
Collaborator

@vermouth1992 Thanks, merged.
vermouth1992 pushed a commit that referenced this pull request on May 22, 2025

What does this PR do? #1616 caused vllm engine arg init to fail, and it is not clear why the CI of that PR failed to detect this. Some errors have shown up. We may want to separate the engine args for the different inference systems.

API

```yml
engine_kwargs: # inference engine parameters
  vllm:
    swap_space: null # null means "use the engine default value" (usually 4 GB); setting it to, e.g., 32 means 32 GB
  sglang:
    attention_backend: null # null means use the engine default value; available options: flashinfer, triton, flashmla
```
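For instance, under this shape, a user who wants more CPU swap for vLLM while keeping SGLang's default backend selection would set (values illustrative):

```yml
engine_kwargs:
  vllm:
    swap_space: 32 # 32 GB of CPU swap instead of the ~4 GB engine default
  sglang:
    attention_backend: null # keep SGLang's own default selection
```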
Checklist Before Starting

- [x] Search for similar PR(s).
What does this PR do?
Expose `engine_kwargs` from SGLang to Verl configuration.

This PR enables RL users to configure `engine_kwargs` directly through Verl, providing more control and flexibility over inference behavior.

High-Level Design
One key motivation is the choice of attention backend, which can significantly affect rollout performance. The SGLang team has observed that different attention backends perform better in different phases:

- FA3 tends to be more efficient during the prefill stage.
- FlashInfer or Triton generally offer better performance during decode.

Moreover, the optimal backend may change across versions of SGLang. By exposing these parameters, we allow users to tune their setup based on the specific use case and version, ultimately improving performance and adaptability.
Specific Changes
API
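As a minimal sketch of the knob this PR exposes (the exact key nesting in this initial version is an assumption; the follow-up commit above shows the shape that later landed on main):

```yml
actor_rollout_ref:
  rollout:
    name: sglang
    engine_kwargs: # assumed location under the rollout config
      attention_backend: null # null lets SGLang choose; e.g. fa3, flashinfer, triton
```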
Usage Example
```python
# Add code snippet or script demonstrating how to use this
```
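An illustrative override (hypothetical key path, mirroring the API sketch above); the same value can equally be passed through whatever config-override mechanism your launcher uses:

```yml
# hypothetical snippet from a user's trainer config
actor_rollout_ref:
  rollout:
    engine_kwargs:
      attention_backend: fa3 # or triton; see the timings under Test below
```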
Test

In my setup, with Qwen 2.5 7B Instruct on an H200:

```
timing_s/step: 106.761 (flashinfer)
timing_s/step: 100.520 (fa3)
timing_s/step: 100.364 (triton)
```

Hence, I would suggest our team use fa3 or triton for now.
Additional Info.
Checklist Before Submitting

- [ ] Read the [Contribute Guide](https://github.com/volcengine/verl?tab=readme-ov-file#contribution-guide).
- [ ] Apply [pre-commit checks](https://github.com/volcengine/verl?tab=readme-ov-file#code-linting-and-formatting).
- [ ] Add `[BREAKING]` to the PR title if it breaks any API.
- [ ] Update the documentation about your changes in the [docs](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add CI test(s) if necessary.