[perf] feat: verl profiler system support Agent Loop scenario and integrate torch.profiler#4320

Merged
wuxibin89 merged 4 commits into verl-project:main from mengchengTang:profiler_support_agent_loop
Jan 21, 2026
Conversation

@mengchengTang
Contributor

@mengchengTang mengchengTang commented Nov 27, 2025

What does this PR do?

Summary
This PR enhances the verl profiling system to support the Agent Loop architecture (decoupled inference and training processes). It integrates torch.profiler as a core backend within the DistProfiler framework, providing a unified performance tuning interface for both distributed training workers and asynchronous rollout servers.

Key Enhancements
Agent Loop Support: Implemented a coordinated control flow that allows the RayTrainer to explicitly trigger profiling sessions on remote inference servers (vLLM/SGLang) via AgentLoopManager.

  • For vLLM, we leverage the AsyncLLM profiling interface.
  • For SGLang, we utilize the TokenizerCommunicatorMixin profiling API.

Unified Torch Profiler: Integrated the native PyTorch profiler into the verl ecosystem, supporting both continuous and discrete collection modes, consistent with the existing Nsight Systems and NPU backends.
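For context, the two collection modes named above map onto stock `torch.profiler` usage; a rough sketch (this is plain PyTorch API, not verl's `DistProfiler` interface):

```python
import torch
from torch.profiler import ProfilerActivity, profile, schedule

# Continuous mode: a single capture covering the whole region of interest.
with profile(activities=[ProfilerActivity.CPU]) as prof_cont:
    torch.matmul(torch.randn(64, 64), torch.randn(64, 64))

# Discrete mode: a schedule skips wait/warmup iterations and records only
# the `active` ones, i.e. per-step collection at chosen training steps.
with profile(
    activities=[ProfilerActivity.CPU],
    schedule=schedule(wait=1, warmup=1, active=2, repeat=1),
) as prof_disc:
    for _ in range(4):
        torch.matmul(torch.randn(64, 64), torch.randn(64, 64))
        prof_disc.step()  # advance the profiler's step counter
```

On GPU/NPU runs, `ProfilerActivity.CUDA` (or the NPU equivalent) would be added to `activities`; CPU-only is used here so the sketch runs anywhere.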

Profiling Workflow
The provided Sequence Diagram illustrates the decoupled profiling logic:
Rollout Phase: Profiling is managed through explicit RPC calls (start_profile/stop_profile) to the inference engine's server interface (AsyncLLM / TokenizerCommunicatorMixin), ensuring capture is synchronized with generation steps.
(sequence diagram image: whiteboard_exported_image)

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

@CLAassistant

CLAassistant commented Nov 27, 2025

CLA assistant check
All committers have signed the CLA.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces profiling capabilities for the agent loop by adding start_profile and stop_profile methods to AgentLoopManager, vLLMHttpServerBase, and vLLMReplica. The changes correctly propagate the profiling calls down to the workers. My review includes one high-severity suggestion to fix a blocking call (ray.get) within an asynchronous actor in vLLMHttpServerBase, which should be converted to a non-blocking asyncio.gather to prevent blocking the event loop and to maintain consistency with other asynchronous methods in the class.
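The fix the bot suggests is a general async-actor pattern: never call a blocking `ray.get` inside a coroutine; await the futures instead. A minimal sketch with stand-in coroutines in place of Ray object refs (names here are illustrative, not the PR's actual code):

```python
import asyncio

# Stand-in for `server.start_profile.remote()`; a real Ray ObjectRef is also
# awaitable, so the same `await asyncio.gather(...)` shape applies in an actor.
async def start_profile_on_worker(worker_id: int) -> str:
    await asyncio.sleep(0.01)
    return f"worker-{worker_id} profiling"

async def start_profile_all(num_workers: int) -> list[str]:
    # A blocking ray.get([...]) here would stall the actor's event loop;
    # gathering the awaitables lets other coroutines run while workers respond.
    return await asyncio.gather(
        *(start_profile_on_worker(i) for i in range(num_workers))
    )

results = asyncio.run(start_profile_all(3))
```

`asyncio.gather` also preserves input order, so results line up with worker indices just as a `ray.get` on a list of refs would.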

@mengchengTang mengchengTang force-pushed the profiler_support_agent_loop branch from b620922 to d721a17 Compare November 27, 2025 09:28
@mengchengTang mengchengTang marked this pull request as draft November 28, 2025 02:14
@tardis-key
Collaborator

The overall process control and the actual profiling startup sit far apart, and the original solution passes the calling function through many layers, which is inelegant:

```python
if (not self.discrete or self.async_start) and NPUProfiler._define_count == 0:
    if not self.discrete:
        prof_role = "e2e"
        prof_step = profile_step
```
Collaborator

prof_step and profile_step are confusingly similar, and the new variables are unnecessary; the original role and profile_step already meet the requirements.

Contributor Author

The corresponding variable(s) have been deprecated in the new solution.

@mengchengTang mengchengTang force-pushed the profiler_support_agent_loop branch 10 times, most recently from 8486b44 to b76689b Compare December 4, 2025 09:43
@mengchengTang mengchengTang marked this pull request as ready for review December 4, 2025 11:20
@mengchengTang mengchengTang force-pushed the profiler_support_agent_loop branch from b76689b to 17d67e9 Compare December 4, 2025 11:27
@mengchengTang mengchengTang changed the title profiler bug fix for agent loop [perf] fix: profiler bug fix for agent loop Dec 4, 2025
@tardis-key
Copy link
Collaborator

@FightingZhen @wuxibin89 ready for review. This PR is a bugfix for legacy_workers.

@mengchengTang mengchengTang force-pushed the profiler_support_agent_loop branch 2 times, most recently from bd1e8e3 to 882201c Compare December 4, 2025 12:16
@mengchengTang mengchengTang force-pushed the profiler_support_agent_loop branch from 882201c to c49435b Compare December 8, 2025 03:38
@mengchengTang mengchengTang changed the title [perf] fix: profiler bug fix for agent loop [perf] feat: verl profiler system support agent loop Jan 14, 2026
@mengchengTang mengchengTang changed the title [perf] feat: verl profiler system support agent loop [perf] feat: verl profiler system support Agent Loop scenario and integrate torch.profiler Jan 14, 2026
@wuxibin89
Copy link
Collaborator

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a significant and valuable enhancement to the verl profiling system. It successfully integrates torch.profiler and extends support to the Agent Loop architecture, providing a more unified and powerful performance tuning experience. The refactoring of the profiler implementations under a central DistProfiler is a solid architectural improvement. While the overall changes are excellent, I've identified a few critical issues related to null-pointer exceptions in the new profiler initialization logic for sglang and vllm rollout servers. These could lead to crashes if profiling is configured in specific ways. Addressing these will make the new profiling system much more robust.

Comment on lines 118 to 120
```python
else:
    logger.warning(f"agent loop only support torch and npu profiler, got {profiler_config.tool}")
    self.tool_config = None
```
Contributor

critical

There's a potential AttributeError here. If self.config.profiler is not set in the configuration, profiler_config will be None. In that case, the else block is executed, and accessing profiler_config.tool will raise an exception. You should add a check to ensure profiler_config is not None before accessing its attributes.

Suggested change

```diff
-else:
-    logger.warning(f"agent loop only support torch and npu profiler, got {profiler_config.tool}")
-    self.tool_config = None
+if profiler_config is not None:
+    logger.warning(f"agent loop only support torch and npu profiler, got {profiler_config.tool}")
+self.tool_config = None
```

Contributor Author

Fixed. I have added a check if profiler_config is not None: to ensure we only access .tool when the config exists. If profiler_config is None, we skip the logic and pass None to DistProfiler, which handles it gracefully.

Comment on lines 373 to 374
```python
if (self.profiler_controller.check_enable() and self.profiler_controller.check_this_rank() and
        self.profiler_controller.is_discrete_mode()):
```
Contributor

critical

This condition can lead to a crash. If profiling is enabled with a tool other than torch or npu (e.g., nsys), self.tool_config will be None from the __init__ method. However, self.profiler_controller.check_enable() will return True, causing the code to enter this block and fail with an AttributeError on self.tool_config.contents. You should add a check to ensure self.tool_config is not None before proceeding.

Suggested change

```diff
-if (self.profiler_controller.check_enable() and self.profiler_controller.check_this_rank() and
-        self.profiler_controller.is_discrete_mode()):
+if (self.tool_config and self.profiler_controller.check_enable() and self.profiler_controller.check_this_rank() and
+        self.profiler_controller.is_discrete_mode()):
```

Contributor Author

I believe the crash scenario won't happen because when profiler_config.tool is not "torch" or "npu" (e.g. "nsys"), we explicitly set profiler_config = None in the __init__ method.

As a result, DistProfiler is initialized with config=None, which defaults to enable=False. Therefore, self.profiler_controller.check_enable() will return False, and the code block accessing tool_config.contents will not be executed.
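The author's argument condenses to a short-circuit invariant: a None config makes the controller report disabled, so evaluation never reaches the tool_config attribute access. A toy model of that invariant (illustrative classes, not the real DistProfiler code):

```python
class ToyProfilerController:
    """Minimal stand-in: profiling is enabled only when a config is present."""

    def __init__(self, config):
        self.config = config

    def check_enable(self) -> bool:
        # config=None defaults to disabled, mirroring the described behavior.
        return self.config is not None and getattr(self.config, "enable", False)


tool_config = None  # what __init__ leaves for an unsupported tool such as "nsys"
controller = ToyProfilerController(config=None)

# Short-circuit: check_enable() is False, so tool_config.contents is never read.
if controller.check_enable() and tool_config.contents:
    raise AssertionError("unreachable")
```

Under this model the guard suggested by the bot is redundant but harmless; the author relies on the `and` chain instead.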

Comment on lines 227 to 229
```python
else:
    logger.warning(f"agent loop only support torch and npu profiler, got {profiler_config.tool}")
    tool_config = None
```
Contributor

critical

There's a potential AttributeError here. If self.config.profiler is not set in the configuration, profiler_config will be None. When the else block is executed, accessing profiler_config.tool will raise an exception. You should guard this access with a check to ensure profiler_config is not None.

Suggested change

```diff
-else:
-    logger.warning(f"agent loop only support torch and npu profiler, got {profiler_config.tool}")
-    tool_config = None
+else:
+    if profiler_config is not None:
+        logger.warning(f"agent loop only support torch and npu profiler, got {profiler_config.tool}")
+    tool_config = None
```

Contributor Author

Fixed. I have added a check if profiler_config is not None: to ensure we only access .tool when the config exists. If profiler_config is None, we skip the logic and pass None to DistProfiler, which handles it gracefully.

@mengchengTang mengchengTang force-pushed the profiler_support_agent_loop branch 4 times, most recently from d8d0670 to ba0e8e8 Compare January 19, 2026 08:21
@mengchengTang mengchengTang force-pushed the profiler_support_agent_loop branch 2 times, most recently from e2068cb to 35c6860 Compare January 19, 2026 09:07
@mengchengTang mengchengTang force-pushed the profiler_support_agent_loop branch from 35c6860 to 1391f8b Compare January 19, 2026 11:07
@mengchengTang
Contributor Author

mengchengTang commented Jan 20, 2026

> Please add the test results for the different backends of GPU and NPU.

Verification Results

| Scene | Discrete Mode | With Stack | Record Shapes | Steps | Ranks | Status |
|---|---|---|---|---|---|---|
| vLLM + NPU | True | False | False | [2] | [1] | ✅ Pass |
| vLLM + NPU | True | True | True | [2] | All | ✅ Pass |
| vLLM + GPU | True | False | False | [2] | [1] | ✅ Pass |
| vLLM + GPU | True | True | True | [2] | All | ✅ Pass |
| SGLang + NPU | True | False | False | [2] | [1] | ✅ Pass |
| SGLang + NPU | True | True | True | [2] | All | ✅ Pass |
| SGLang + GPU | True | False | False | [2] | [1] | ✅ Pass |
| SGLang + GPU | True | True | True | [2] | All | ✅ Pass |

Profiling Interface Compatibility Report

  1. SGLang Compatibility
    We leverage TokenizerCommunicatorMixin (inherited by TokenizerManager) for profiling control.
    Stability: The start_profile and stop_profile interfaces, along with parameters like output_dir and with_stack, have been consistent across versions.
    Conclusion: Integration via self.tokenizer_manager.start_profile() is fully compatible with existing SGLang versions.
  2. vLLM Compatibility
    We use AsyncLLM's stable start_profile() / stop_profile() interfaces.
    Configuration: Currently relies on environment variables (e.g., VLLM_TORCH_PROFILER_DIR), which works across versions.
    Future Plan: After PR [BREAKING][worker, rollout, vllm] feat: implement vLLM colocated training-inference rollout with process separation #4280 (process separation for colocated rollout) merges, we will upgrade to inject profiler_config and environment variables before the inference process launches, providing native support for vLLM >= 0.13.0 while keeping legacy compatibility.
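Under that env-var scheme, enabling the vLLM-side torch profiler amounts to setting the variable in the server process's environment before launch; a sketch (the output path below is a placeholder chosen for illustration):

```python
import os
import tempfile

# VLLM_TORCH_PROFILER_DIR must be visible to the inference process at startup;
# vLLM then writes torch.profiler traces into it when start_profile() is called.
trace_dir = os.path.join(tempfile.gettempdir(), "vllm_profile")  # placeholder path
os.makedirs(trace_dir, exist_ok=True)
os.environ["VLLM_TORCH_PROFILER_DIR"] = trace_dir
```

Because the variable is read at process startup, it must be injected before the rollout server is spawned, which is exactly why the future plan above waits for process separation to inject profiler config natively.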

@mengchengTang mengchengTang reopened this Jan 20, 2026
@wuxibin89 wuxibin89 mentioned this pull request Jan 21, 2026
@wuxibin89
Collaborator

Nice work!

@wuxibin89 wuxibin89 merged commit cd1926c into verl-project:main Jan 21, 2026
151 of 158 checks passed
vyomakesh0728 added a commit to vyomakesh0728/verl that referenced this pull request Jan 22, 2026
…egrate torch.profiler (verl-project#4320)

### What does this PR do?

**Summary**
This PR enhances the verl profiling system to support the Agent Loop
architecture (decoupled inference and training processes). It integrates
torch.profiler as a core backend within the DistProfiler framework,
providing a unified performance tuning interface for both distributed
training workers and asynchronous rollout servers.

**Key Enhancements**
Agent Loop Support: Implemented a coordinated control flow that allows
the RayTrainer to explicitly trigger profiling sessions on remote
inference servers (vLLM/SGLang) via AgentLoopManager.
- For vLLM, we leverage the AsyncLLM profiling interface.
- For SGLang, we utilize the TokenizerCommunicatorMixin profiling API.

Unified Torch Profiler: Integrated the native PyTorch Profiler into the
verl ecosystem, supporting both continuous and discrete collection modes
consistent with Nsight systems and NPU.

**Profiling Workflow**
The provided Sequence Diagram illustrates the decoupled profiling logic:
Rollout Phase: Profiling is managed through explicit RPC calls
(start_profile/stop_profile) to the inference engine's server interface
(AsyncLLM / TokenizerCommunicatorMixin), ensuring capture is
synchronized with generation steps.
<img width="3986" height="3834" alt="whiteboard_exported_image"
src="https://github.com/user-attachments/assets/afe68413-c338-4eec-a843-df5a9c106d98"
/>

### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here: ...
- [x] Format the PR title as `[{modules}] {type}: {description}` (This
will be checked by the CI)
- `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`,
`trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`,
`ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`,
`env`, `tool`, `ckpt`, `doc`, `data`
- If this PR involves multiple modules, separate them with `,` like
`[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
- If this PR breaks any API (CLI arguments, config, function signature,
etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that can not be tested by CI (e.g., algorithm
implementation, new model support), validate by experiment(s) and show
results like training curve plots, evaluation results, etc.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s)
if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the
specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review,
otherwise the reviewer might deprioritize this PR for review.

- [x] Read the [Contribute
Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit
checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting):
`pre-commit install && pre-commit run --all-files --show-diff-on-failure
--color=always`
- [x] Add / Update [the
documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI
workflow](https://github.com/volcengine/verl/tree/main/.github/workflows)
to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in [the `ci-request`
channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the
`verl` Slack
workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).
(If not accessible, please try [the Feishu group
(飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)

---------

Co-authored-by: tardis-key <huxiaobo@zju.edu.cn>
sophiayyya pushed a commit to sophiayyya/verl that referenced this pull request Jan 25, 2026
meichangsu1 pushed a commit to meichangsu1/verl that referenced this pull request Jan 27, 2026
meichangsu1 pushed a commit to meichangsu1/verl that referenced this pull request Jan 27, 2026
wuxibin89 pushed a commit that referenced this pull request Jan 27, 2026
…le control params for rollout profiling (sglang backend) (#5025)

### What does this PR do?

> Address key functionality gaps in rollout discrete profiling for the
sglang backend by adding global step awareness and expanding support for
flexible profile control parameters:
> 1. Missing global step information resulted in disorganized profiling
files;
> 2. Lack of support for specifying critical sglang backend profile
control parameters (including `num_steps`, `profile_by_stage`, and
`merge_profiles`) led to overly large profile files and hindered
convenient profiling analysis.
> 
> Solutions implemented:
> - Pass `kwargs` parameters to the rollout server to enable global step
awareness, and generate an independent folder for each global step to
organize profiling data in a structured way;
> - Extend the profiling config by adding optional parameters to the
`content` field, and remove unnecessary restrictions on `content` values
in `TorchProfilerToolConfig` to support specifying sglang-specific
profile control parameters (e.g., `profile_by_stage`, `merge_profiles`).

### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here: [PR
#4320: [perf] feat: verl profiler system support Agent Loop scenario and
integrate torch.profiler](#4320)
- [x] Format the PR title as `[{modules}] {type}: {description}` (This
will be checked by the CI)
- `{modules}` include `fsdp`, `megatron`, `veomni`, `sglang`, `vllm`,
`rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`,
`deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`,
`model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`
- If this PR involves multiple modules, separate them with `,` like
`[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
- If this PR breaks any API (CLI arguments, config, function signature,
etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

To validate the functionality of each profile control parameter for the
sglang rollout backend, orthogonal test cases were designed (minimizing
redundant combinations while covering all key parameter values). The
test results are as follows:

| step_start | step_end | profile-by-stage | merge-profiles | stack | shapes | cpu/cuda | Test Result |
|------------|----------|------------------|----------------|-------|--------|----------|-------------|
| default | default | default (False) | default (False) | default (False) | default (False) | default | Pass |
| 0 (≥0) | 1 (≥1) | default | default | default | default | default | Pass |
| 0 (≥0) | default | default | default | default | default | default | Pass |
| default | 1 (≥1) | default | default | default | default | default | Pass |
| default | default | True | default | default | default | default | Pass |
| default | default | default | True | default | default | default | Pass |
| default | default | default | default | True | default | default | Pass |
| default | default | default | default | default | True | default | Pass |
| default | default | default | default | default | default | [] / [cpu] / [cuda] / [cpu,cuda] | Pass |

### API and Usage Example

```yml
  rollout:
    quantization: null
    profiler:
      enable: True
      all_ranks: False
      ranks: [0, 1, 2] 
      tool_config:
        torch:
          discrete: True
          step_start: 0
          step_end: 1
          contents: [cpu, cuda, stack, shapes, profile_by_stage, merge_profiles]
```
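
Conceptually, the `contents` list mixes entries understood by `torch.profiler` (`cpu`, `cuda`, `stack`, `shapes`) with sglang-specific flags that are forwarded to the rollout server. A minimal sketch of that split, with `contents_to_profiler_kwargs` as a hypothetical helper (the real mapping lives in verl's profiler config handling and may differ):

```python
def contents_to_profiler_kwargs(contents):
    """Split the `contents` list into torch.profiler-style options and
    backend-specific flags (illustrative only)."""
    kwargs = {
        # Names that map to torch.profiler.ProfilerActivity members.
        "activities": [],
        "with_stack": "stack" in contents,
        "record_shapes": "shapes" in contents,
    }
    if "cpu" in contents:
        kwargs["activities"].append("CPU")
    if "cuda" in contents:
        kwargs["activities"].append("CUDA")
    # Entries torch.profiler does not understand are forwarded to the
    # sglang rollout server (e.g. profile_by_stage, merge_profiles).
    known = {"cpu", "cuda", "stack", "shapes"}
    kwargs["backend_extras"] = {c: True for c in contents if c not in known}
    return kwargs
```

For the config above, `cpu`/`cuda` select profiler activities, `stack`/`shapes` toggle `with_stack`/`record_shapes`, and `profile_by_stage`/`merge_profiles` travel on to the sglang backend.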

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the
specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review,
otherwise the reviewer might deprioritize this PR for review.

- [x] Read the [Contribute
Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit
checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting):
`pre-commit install && pre-commit run --all-files --show-diff-on-failure
--color=always`
- [ ] Add / Update [the
documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI
workflow](https://github.com/volcengine/verl/tree/main/.github/workflows)
to cover all the code. If not feasible, explain why: ...
- [x] Once your PR is ready for CI, send a message in [the `ci-request`
channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the
`verl` Slack
workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).
(If not accessible, please try [the Feishu group
(飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
- [ ] If your PR is related to the `recipe` submodule, please also
update the reference to the submodule commit via `git submodule update
--remote` or `cd recipe && git pull origin main`.