[BREAKING][worker, rollout, vllm] feat: implement vLLM colocated training-inference rollout with process separation#4280
Merged
wuxibin89 merged 64 commits into verl-project:main on Jan 23, 2026
Conversation
…ss separation Signed-off-by: jianjunzhong <jianjunzhong@foxmail.com>
force-pushed from 51c8ad9 to 714a32f
force-pushed from ba4512b to ca088a2
force-pushed from ef46ad3 to 2d5b9f1
pengwu22
reviewed
Jan 23, 2026
```python
lora_path=VLLM_LORA_PATH,
peft_config=asdict(peft_config),
lora_tensors=weights,
# build cuda ipc buffer
```
Collaborator
Hi, thank you for the contribution! Just one comment: could you abstract the ZMQ + IPC communication channel out and add corresponding unit tests, please?
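The abstraction the reviewer suggests could be sketched roughly like this: hide the transport (ZeroMQ + CUDA IPC in production) behind a small interface so the channel can be unit-tested with an in-memory stand-in. All names here (`WeightChannel`, `InMemoryChannel`, `push_weights`) are illustrative, not verl's actual API.

```python
# Hypothetical sketch of a transport-agnostic communication channel.
# Production would back this with ZeroMQ sockets carrying CUDA IPC handles;
# unit tests can use the queue-backed stand-in below.
import queue
from abc import ABC, abstractmethod


class WeightChannel(ABC):
    """Transport-agnostic channel for shipping weight-update messages."""

    @abstractmethod
    def send(self, msg: dict) -> None: ...

    @abstractmethod
    def recv(self) -> dict: ...


class InMemoryChannel(WeightChannel):
    """Queue-backed channel, suitable for unit tests without ZMQ or GPUs."""

    def __init__(self) -> None:
        self._q: queue.Queue = queue.Queue()

    def send(self, msg: dict) -> None:
        self._q.put(msg)

    def recv(self) -> dict:
        return self._q.get(timeout=1)


def push_weights(channel: WeightChannel, named_sizes: dict) -> None:
    # In production this message would carry CUDA IPC handles; here, metadata.
    channel.send({"op": "update_weights", "tensors": named_sizes})


channel = InMemoryChannel()
push_weights(channel, {"lm_head.weight": 1024})
received = channel.recv()
print(received["op"])  # → update_weights
```

With such an interface, the unit test exercises the protocol logic while the ZMQ-backed implementation only needs a thin integration test.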
wuxibin89
approved these changes
Jan 23, 2026
sophiayyya
pushed a commit
to sophiayyya/verl
that referenced
this pull request
Jan 25, 2026
…ning-inference rollout with process separation (verl-project#4280)

### What does this PR do?

Refactor the vLLM co-located training-inference rollout from a single-process to a multi-process architecture. This separates training and inference into different processes, enabling better resource isolation and paving the way for future checkpoint-engine integration (roadmap: verl-project#3624).

**Key Changes:**

- Transform `vLLMAsyncRollout` into `ServerAdapter`, a client-side adapter that communicates with the inference executor
- Remove `ExternalZeroMQDistributedExecutor` and use `MultiprocExecutor` as the inference backend
- Implement CUDA IPC-based weight updates via ZeroMQ for efficient parameter synchronization between training and inference processes

### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here: ...
- [x] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI)
  - `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`
  - If this PR involves multiple modules, separate them with `,` like `[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
  - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

### API and Usage Example

This refactoring maintains full backward compatibility with existing vLLM rollout APIs. No changes are required to user code.

**Key API Components:**

- **ServerAdapter** (replaces `vLLMAsyncRollout`):
  - Acts as the client-side adapter for communicating with the inference executor
  - Manages CUDA IPC-based weight updates
  - Provides the same interface as the previous `vLLMAsyncRollout` class

### Design

#### Architecture Overview

1. Before (Single-Process Architecture)
   - Single-Process Design: In the original `AsyncActorRolloutRefWorker`, the training engine and inference engine shared the same process. The vLLM inference engine received weight updates directly through parameter passing.
   - Communication Architecture: `ExternalZeroMQDistributedExecutor` acts as a client, sending instructions to all `AsyncActorRolloutRefWorker` inference engines via ZMQ to execute operations like `init_worker`, `load_model`, `init_device`, and `generate`. Operations like `wake_up`, `sleep`, and weight updates were executed directly in `vLLMAsyncRollout` without going through `ExternalZeroMQDistributedExecutor`.
2. After (Multi-Process Architecture)
   - Multi-Process Design: Transform `vLLMAsyncRollout` into `ServerAdapter`, serving as a client for communicating with the inference engine (AsyncLLM). Weight updates are based on CUDA IPC, passed through ZeroMQ to the inference engine.
   - Communication Architecture: Deprecate the original `ExternalZeroMQDistributedExecutor` class and directly use vLLM's `MultiprocExecutor` by passing `distributed_executor_backend = "mp"`. All inference engine operations are uniformly broadcast to all inference workers through `MultiprocExecutor`'s RPC broadcast MQ.

### Convergence test

- model: Qwen3-VL-30B-A3B-Instruct
- dataset: geo3k
- GPU: 4*8 H100

<img width="660" height="618" alt="image" src="https://github.com/user-attachments/assets/6e3e7dbd-03f9-471a-b8d5-bc0344dba299" />

### Performance test: update weights

- CUDA IPC bucket_size: 2GB
- GPU: H100, ConnectX-7 400 Gbps (InfiniBand)

| Model | Parallelism | #GPU | Time |
|---|---|---|---|
| Qwen3-VL-30B-A3B-Instruct | TP2, EP8 | 4*8 | 5s |
| DeepSeek-V3.1-Terminus | TP8, PP16, EP8 | 16*8 | 120s |
| DeepSeek-V3.1-Terminus | TP16, PP16 | 32*8 | 80s |

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

- [x] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- [ ] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)

---

Signed-off-by: jianjunzhong <jianjunzhong@foxmail.com>
Co-authored-by: wuxibin <wuxibin@bytedance.com>
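The "CUDA IPC bucket_size: 2GB" setting from the performance test implies weights are shipped in fixed-size buckets rather than one tensor at a time. A minimal sketch of such greedy bucketing (illustrative only, not verl's implementation; `bucket_weights` is a hypothetical name):

```python
# Greedily pack (name, nbytes) pairs into buckets of at most bucket_size,
# so each bucket can be sent as one CUDA IPC transfer.
def bucket_weights(named_nbytes, bucket_size=2 * 1024**3):
    """A tensor larger than bucket_size gets a bucket of its own."""
    buckets, current, used = [], [], 0
    for name, nbytes in named_nbytes:
        if current and used + nbytes > bucket_size:
            buckets.append(current)  # flush the full bucket
            current, used = [], 0
        current.append(name)
        used += nbytes
    if current:
        buckets.append(current)
    return buckets


gb = 1024**3
print(bucket_weights([("a", gb), ("b", gb), ("c", gb)], bucket_size=2 * gb))
# → [['a', 'b'], ['c']]
```

Fewer, larger transfers amortize per-message ZeroMQ and IPC-handle overhead, which is why a bucket size on the order of gigabytes pays off for the model sizes in the table above.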
meichangsu1
pushed a commit
to meichangsu1/verl
that referenced
this pull request
Jan 27, 2026
vermouth1992
pushed a commit
that referenced
this pull request
Jan 27, 2026
### What does this PR do?

The vLLM refactor in #4280 breaks `one-step-off-policy` and `fully-async`. This PR introduces CheckpointEngineManager to coordinate weight synchronization between trainer and rollout replicas. A follow-up PR will refactor `one-step-off-policy` and `fully-async` on top of CheckpointEngineManager.

Design doc: https://github.com/volcengine/verl/tree/main/verl/checkpoint_engine
meichangsu1
pushed a commit
to meichangsu1/verl
that referenced
this pull request
Jan 27, 2026
wuxibin89
pushed a commit
that referenced
this pull request
Jan 30, 2026
…pport checks (#5089)

### What does this PR do?

To address the issue of older NPU drivers not supporting weight updates via IPC in #4280, this PR adds support for weight updates via shared memory.

### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here: ...
- [x] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI)
  - `{modules}` include `fsdp`, `megatron`, `veomni`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`
  - If this PR involves multiple modules, separate them with `,` like `[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
  - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s) if possible.

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

- [x] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- [ ] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
- [ ] If your PR is related to the `recipe` submodule, please also update the reference to the submodule commit via `git submodule update --remote` or `cd recipe && git pull origin main`.

---

Signed-off-by: jianjunzhong <jianjunzhong@foxmail.com>
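The shared-memory fallback this commit describes can be illustrated with Python's standard `multiprocessing.shared_memory`: the trainer writes weight bytes into a named segment and the inference process attaches by name and reads them back. This is a minimal stand-alone sketch, not verl's actual code path; the function and segment names are hypothetical.

```python
# Hedged sketch: shared-memory weight handoff as a fallback when device IPC
# is unavailable (e.g., on older NPU drivers).
from multiprocessing import shared_memory


def write_weights(name: str, payload: bytes) -> shared_memory.SharedMemory:
    # Producer side: create a named segment and copy the weight bytes in.
    shm = shared_memory.SharedMemory(create=True, size=len(payload), name=name)
    shm.buf[: len(payload)] = payload
    return shm


def read_weights(name: str, size: int) -> bytes:
    # Consumer side: attach to the segment by name and copy the bytes out.
    shm = shared_memory.SharedMemory(name=name)
    try:
        return bytes(shm.buf[:size])
    finally:
        shm.close()


producer = write_weights("verl_demo_weights", b"\x01\x02\x03\x04")
assert read_weights("verl_demo_weights", 4) == b"\x01\x02\x03\x04"
producer.close()
producer.unlink()  # free the segment once all consumers are done
```

Unlike device IPC handles, a shared-memory segment stages weights through host memory, so it trades bandwidth for driver compatibility.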
RobotGF
pushed a commit
to RobotGF/verl
that referenced
this pull request
Jan 30, 2026
vermouth1992
pushed a commit
that referenced
this pull request
Jan 31, 2026
### What does this PR do?

#4280 changed the default value of vLLM `max_num_seqs`; this PR reverts it, since the change may affect throughput.
wuxibin89
pushed a commit
that referenced
this pull request
Feb 2, 2026
…caused by multiple PRs (#5100)

### What does this PR do?

**Problem 1: `ConfigAttributeError('Missing key config\n full_key: config\n object_type=dict')`**

This error was introduced by this [PR](#5034): `dataset_config` was changed from `DictConfig` to `DictConfigWrap` in `AgentLoopBase` initialization (passing `dataset_config.config`), but the fully async agent loop was not updated to use `DictConfigWrap`, causing the error.

The following two problems were introduced by this [PR](#4280):

**Problem 2: `TypeError: got an unexpected keyword argument 'cuda_visible_devices'`**

The PR added `cuda_visible_devices` to `vLLMHttpServer`, but its subclass `vLLMHttpServerForPartial` in fully async was not updated accordingly, causing conflicts.

**Problem 3: `KeyError: 'ASCEND_RT_VISIBLE_DEVICES'`**

The PR references the environment variable `ASCEND_RT_VISIBLE_DEVICES` in `get_device_uuid` but does not handle its absence or set a default value, leading to potential errors.
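The defensive lookup Problem 3 calls for amounts to falling back to a default instead of indexing `os.environ` directly, which raises `KeyError` when the variable is unset. A minimal sketch (the function name and default are illustrative, not verl's actual fix):

```python
# os.environ[...] raises KeyError when the variable is missing;
# os.environ.get(...) returns a default instead.
import os


def get_visible_devices(default: str = "0") -> str:
    return os.environ.get("ASCEND_RT_VISIBLE_DEVICES", default)


os.environ.pop("ASCEND_RT_VISIBLE_DEVICES", None)
print(get_visible_devices())  # → 0

os.environ["ASCEND_RT_VISIBLE_DEVICES"] = "0,1,2,3"
print(get_visible_devices())  # → 0,1,2,3
```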
amzfang
pushed a commit
to amzfang/verl
that referenced
this pull request
Feb 3, 2026
…caused by multiple PRs (verl-project#5100) ### What does this PR do?

**Problem 1: `ConfigAttributeError('Missing key config\n full_key: config\n object_type=dict')`**

This error was introduced by verl-project#5034: the `dataset_config` type was changed from `DictConfig` to `DictConfigWrap` in `AgentLoopBase` initialization (passing `dataset_config.config`), but the fully async agent loop was not updated to use `DictConfigWrap`, causing the error.

The following two problems were introduced by verl-project#4280:

**Problem 2: `TypeError: got an unexpected keyword argument 'cuda_visible_devices'`**

The PR added `cuda_visible_devices` to `vLLMHttpServer`, but its subclass `vLLMHttpServerForPartial` in fully async was not updated accordingly, causing conflicts.

**Problem 3: `KeyError: 'ASCEND_RT_VISIBLE_DEVICES'`**

The PR reads the environment variable `ASCEND_RT_VISIBLE_DEVICES` in `get_device_uuid` without handling its absence or providing a default value, leading to potential errors.
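Problem 3 above is the standard failure mode of indexing `os.environ` directly; the usual fix is `os.environ.get` with a fallback. A minimal sketch — the helper name and default value are hypothetical, not verl's actual `get_device_uuid` code:

```python
import os

def get_visible_devices(default: str = "0") -> str:
    """Read ASCEND_RT_VISIBLE_DEVICES with a fallback.

    os.environ["..."] raises KeyError when the variable is unset;
    os.environ.get("...", default) returns the fallback instead.
    (Illustrative helper, not the actual verl function.)
    """
    return os.environ.get("ASCEND_RT_VISIBLE_DEVICES", default)

# Unset -> fallback; set -> the configured value.
os.environ.pop("ASCEND_RT_VISIBLE_DEVICES", None)
print(get_visible_devices())          # "0"
os.environ["ASCEND_RT_VISIBLE_DEVICES"] = "0,1,2,3"
print(get_visible_devices())          # "0,1,2,3"
```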
Merged
8 tasks
ArronHZG
added a commit
that referenced
this pull request
Feb 6, 2026
…ly async / one step off) (#5184) ### What does this PR do?

* Add a new Ray Trainer class to facilitate reusing the core logic.
* Fix fully async / one step off CI.
* Currently, our parameter synchronization logic is still in a broken state; CI broke in #4280.
What does this PR do?
Refactor vLLM co-located training-inference rollout from single-process to multi-process architecture. This refactoring separates training and inference into different processes, enabling better resource isolation and paving the way for future checkpoint-engine integration (in roadmap #3624).
Key Changes:
- Transform `vLLMAsyncRollout` into `ServerAdapter` - a client-side adapter that communicates with the inference executor
- Remove `ExternalZeroMQDistributedExecutor` and use `MultiprocExecutor` as the inference backend
- Implement CUDA IPC-based weight updates via ZeroMQ for efficient parameter synchronization between training and inference processes

Checklist Before Starting
- Search for similar PRs. Paste at least one query link here: ...
- Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI)
  - `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`
  - If this PR involves multiple modules, separate them with `,` like `[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
  - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

Test
API and Usage Example
This refactoring maintains full backward compatibility with existing vLLM rollout APIs. No changes are required to user code.
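Internally, the rollout entry points now forward every operation to the inference executor, which broadcasts it to all inference workers. As a rough illustration of that broadcast pattern — not verl's actual code — the following thread-based mock fans one RPC out to every worker; the real `MultiprocExecutor` uses separate processes and a shared message queue (all class and method names here are hypothetical):

```python
import queue
import threading

def worker_loop(rank, inbox, results):
    """Each worker consumes broadcast RPCs from its own queue and acks them."""
    while True:
        method, args = inbox.get()
        if method == "shutdown":
            return
        results.put((rank, method, args))  # stand-in for executing the RPC

class BroadcastExecutor:
    """Minimal stand-in for an executor that broadcasts one RPC to all workers."""

    def __init__(self, world_size):
        self.inboxes = [queue.Queue() for _ in range(world_size)]
        self.results = queue.Queue()
        self.threads = [
            threading.Thread(target=worker_loop, args=(r, q, self.results), daemon=True)
            for r, q in enumerate(self.inboxes)
        ]
        for t in self.threads:
            t.start()

    def collective_rpc(self, method, args=()):
        # Broadcast the same instruction to every worker, then gather all acks.
        for q in self.inboxes:
            q.put((method, args))
        return sorted(self.results.get() for _ in self.inboxes)

    def shutdown(self):
        for q in self.inboxes:
            q.put(("shutdown", ()))

ex = BroadcastExecutor(world_size=4)
acks = ex.collective_rpc("update_weights", ("step-1",))
print([rank for rank, _, _ in acks])  # [0, 1, 2, 3]
ex.shutdown()
```

In this spirit, operations like `wake_up`, `sleep`, and weight updates all travel through one uniform broadcast path instead of being special-cased in the rollout class.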
Key API Components:
Rollout adapter (replacing `vLLMAsyncRollout`): the `vLLMAsyncRollout` class is superseded by `ServerAdapter`, a client-side adapter to the inference executor; existing entry points are unchanged.

Design
Architecture Overview
In the original `AsyncActorRolloutRefWorker`, the training engine and inference engine shared the same process. The vLLM inference engine directly received weight updates through parameter passing. `ExternalZeroMQDistributedExecutor` acts as a client, sending instructions to all `AsyncActorRolloutRefWorker` inference engines via ZMQ to execute operations like `init_worker`, `load_model`, `init_device`, and `generate`. Operations like `wake_up`, `sleep`, and weight updates were executed directly in `vLLMAsyncRollout` without going through `ExternalZeroMQDistributedExecutor`.

Transform `vLLMAsyncRollout` into `ServerAdapter`, serving as a client for communicating with the inference engine (`AsyncLLM`). Weight updates are based on CUDA IPC, passing through ZeroMQ to the inference engine.

Deprecate the original `ExternalZeroMQDistributedExecutor` class and directly use vLLM's `MultiprocExecutor` by passing `distributed_executor_backend = "mp"`. All inference engine operations are uniformly broadcast to all inference workers through `MultiprocExecutor`'s RPC broadcast MQ.

Convergence test
Performance test: update weights
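The update-weights path measured here packs per-tensor metadata plus a CUDA IPC handle and ships it over ZeroMQ to the inference process. That protocol can be sketched without GPUs; all names below (`WeightUpdateMsg`, `pack`, `unpack`) are hypothetical, the IPC handle is a placeholder string, and JSON stands in for the real ZeroMQ transport:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class WeightUpdateMsg:
    """Metadata for one tensor in a weight-update batch.

    In the real flow, ipc_handle_hex would be a serialized CUDA IPC
    handle that the inference process reopens to read the weights
    directly from training-process GPU memory (illustrative only).
    """
    name: str
    dtype: str
    shape: tuple
    ipc_handle_hex: str

def pack(msgs):
    # Serialize the batch for transport over a ZeroMQ-like channel.
    return json.dumps([asdict(m) for m in msgs]).encode()

def unpack(payload):
    # Reconstruct the messages on the inference side.
    return [
        WeightUpdateMsg(
            name=d["name"],
            dtype=d["dtype"],
            shape=tuple(d["shape"]),
            ipc_handle_hex=d["ipc_handle_hex"],
        )
        for d in json.loads(payload.decode())
    ]

batch = [WeightUpdateMsg("model.embed.weight", "bfloat16", (1024, 4096), "ab12")]
assert unpack(pack(batch)) == batch  # lossless metadata round trip
```

Because only handles and metadata cross the channel, the payload stays small regardless of model size; the tensor bytes themselves never leave GPU memory.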
Checklist Before Submitting
Important
Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.
- Read the Contribute Guide.
- Apply pre-commit checks: `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- Add / Update the documentation.
- Add unit or end-to-end test(s) to the CI workflow to cover all the code. If not feasible, explain why: ...
- Once your PR is ready for CI, send a message in the `ci-request` channel in the `verl` Slack workspace. (If not accessible, please try the Feishu group (飞书群).)