
Expose engine_kwargs from SGLang to Verl configuration #1616

Merged
zhaochenyang20 merged 2 commits into verl-project:main from hebiao064:support_eng_args_for_sglang
May 21, 2025

Conversation

@hebiao064
Collaborator

Checklist Before Starting

  • Search for similar PR(s).

What does this PR do?

Expose engine_kwargs from SGLang to Verl configuration

This PR enables RL users to configure engine_kwargs directly through Verl, providing more control and flexibility over inference behavior.

High-Level Design

One key motivation is the choice of attention backend, which can significantly affect rollout performance. The SGLang team has observed that different attention backends perform better in different phases:

  • FA3 tends to be more efficient during the prefill stage.
  • FlashInfer or Triton generally offer better performance during decode.

Moreover, the optimal backend may change across versions of SGLang. By exposing these parameters, we allow users to tune their setup based on the specific use case and version, ultimately improving performance and adaptability.


Specific Changes

Exposes SGLang's `engine_kwargs` (e.g. `attention_backend`) through Verl's rollout configuration.

API

A new `engine_kwargs` section is exposed in the configuration and forwarded to the SGLang engine.
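A minimal sketch of the config shape, following the sectioned layout shown in the follow-up fix quoted later in this thread; the exact nesting under the rollout section is an assumption, not verbatim from this PR:

```yml
# Sketch only: placement under the rollout section is assumed.
rollout:
  engine_kwargs: # inference engine parameters, forwarded to the engine
    sglang:
      attention_backend: null # null means use SGLang's default; e.g. flashinfer, triton, fa3
```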

Usage Example

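As a hedged usage example, pinning the Triton backend (the setting suggested by the timings in the Test section below) might look like this; the config path is an assumption, not taken verbatim from this PR:

```yml
# Hypothetical override: force SGLang to use the Triton attention backend.
rollout:
  engine_kwargs:
    sglang:
      attention_backend: triton
```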

Test

In my setup with Qwen 2.5 7B Instruct on an H200:

timing_s/step:106.761 (flashinfer)
timing_s/step:100.520 (fa3)
timing_s/step:100.364 (triton)

Hence, I would suggest our team use fa3 or triton for now.


Additional Info

  • Issue Number: none.
  • Training: none.
  • Inference: SGLang.

Checklist Before Submitting

  • Read the Contribute Guide.
  • Apply pre-commit checks.
  • Add [BREAKING] to the PR title if it breaks any API.
  • Update the documentation about your changes in the docs.
  • Add CI test(s) if necessary.

@zhaochenyang20
Collaborator

Great work. I think for VLM, FA3 could be even better?

@SwordFaith
Collaborator

Which backend is best as default?

@zhaochenyang20
Collaborator

Which backend is best as default?

Right now any of them is okay. I think fa3 is the default?

@hebiao064
Collaborator Author

hebiao064 commented May 21, 2025

Which backend is best as default?

It's determined by SGLang's logic: for now, it's fa3 on Hopper and flashinfer on other architectures.

https://github.com/sgl-project/sglang/blob/4024e1d2a853903cfddfb70f5e10e165db30305c/python/sglang/srt/utils.py#L2096

But even on Hopper or Ampere, we sometimes found triton to be faster; that's why we need this granularity.

@hebiao064
Collaborator Author

Great work. I think for VLM, FA3 could be even better?

It depends on the batch size, context size, and response length; at the very least, we can give users the freedom to decide.

@vermouth1992
Collaborator

@zhaochenyang20 Shall we merge this?

zhaochenyang20 merged commit 72683a7 into verl-project:main May 21, 2025
33 checks passed
@zhaochenyang20
Collaborator

@vermouth1992 thanks, merged

cedricbeta pushed a commit to cedricbeta/verl that referenced this pull request May 21, 2025
yellowbee686 pushed a commit to yellowbee686/verl that referenced this pull request May 22, 2025
vermouth1992 pushed a commit that referenced this pull request May 22, 2025
### Checklist Before Starting

- [x] Search for similar PR(s).

### What does this PR do?

#1616 causes the vLLM engine arg initialization to fail; it is not clear why the CI of that PR failed to detect this. Some errors have shown up.


![image](https://github.com/user-attachments/assets/ac6bb86e-1576-458e-b341-0e949724ac12)

We should separate the engine args for the different inference systems.


### API

```yml
    engine_kwargs: # inference engine parameters
      vllm:
        swap_space: null # null means "use the engine default value" (usually 4 GB), setting it to, e.g., 32 means 32 GB
      sglang:
        attention_backend: null # null means use the engine default value, available options: flashinfer, triton, flashmla
```
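
### Usage Example

For illustration, a hedged sketch of overriding both engines' defaults; the values here are hypothetical, not taken from this commit:

```yml
    engine_kwargs:
      vllm:
        swap_space: 32 # GB; overrides the engine default (usually 4 GB)
      sglang:
        attention_backend: triton # overrides SGLang's auto-selected backend
```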

wwwjn pushed a commit to wwwjn/verl that referenced this pull request Jun 10, 2025
chenjiaoAngel added two commits to chenjiaoAngel/verl that referenced this pull request Nov 14, 2025
TimurTaepov pushed two commits to giorgossideris/verl that referenced this pull request Dec 20, 2025
vyomakesh0728 added two commits to vyomakesh0728/verl that referenced this pull request Jan 22, 2026