[perf] feat: verl profiler system support Agent Loop scenario and integrate torch.profiler#4320
Conversation
Code Review
This pull request introduces profiling capabilities for the agent loop by adding start_profile and stop_profile methods to AgentLoopManager, vLLMHttpServerBase, and vLLMReplica. The changes correctly propagate the profiling calls down to the workers. My review includes one high-severity suggestion to fix a blocking call (ray.get) within an asynchronous actor in vLLMHttpServerBase, which should be converted to a non-blocking asyncio.gather to prevent blocking the event loop and to maintain consistency with other asynchronous methods in the class.
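To illustrate the suggested fix, here is a minimal, self-contained asyncio sketch (not the actual verl code) of replacing a blocking `ray.get` with `asyncio.gather` inside an async server method, so the event loop keeps serving other requests while workers respond. The worker handle and method names are hypothetical stand-ins for the Ray actor handles in `vLLMHttpServerBase`.

```python
import asyncio

class FakeWorkerHandle:
    """Stands in for a Ray actor handle whose remote call returns an awaitable."""

    def __init__(self, rank: int):
        self.rank = rank

    async def start_profile(self) -> str:
        await asyncio.sleep(0)  # simulate async RPC latency
        return f"profiling started on rank {self.rank}"

class HttpServerSketch:
    def __init__(self, workers):
        self.workers = workers

    async def start_profile(self):
        # Non-blocking: awaits all worker calls concurrently, instead of
        # blocking the event loop the way a synchronous ray.get would.
        return await asyncio.gather(*(w.start_profile() for w in self.workers))

async def main():
    server = HttpServerSketch([FakeWorkerHandle(r) for r in range(2)])
    return await server.start_profile()

results = asyncio.run(main())
print(results)
```

With real Ray actors the same pattern applies, since `ObjectRef`s returned by `.remote()` are awaitable inside async actors.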
Force-pushed b620922 to d721a17.
The overall process control and the actual profiling startup are far apart, and the original solution required passing the calling function through many layers, which was not elegant.
verl/utils/profiler/mstx_profile.py
Outdated
```python
if (not self.discrete or self.async_start) and NPUProfiler._define_count == 0:
    if not self.discrete:
        prof_role = "e2e"
    prof_step = profile_step
```
There was a problem hiding this comment.
`prof_step` and `profile_step` are confusingly similar, and the new variables are unnecessary: the original `role` and `profile_step` can still meet the requirements.
The corresponding variable(s) have been deprecated in the new solution.
Force-pushed 8486b44 to b76689b.
Force-pushed b76689b to 17d67e9.
@FightingZhen @wuxibin89 Ready for review. This PR is a bugfix for legacy_workers.
Force-pushed bd1e8e3 to 882201c.
Force-pushed 882201c to c49435b.
/gemini review
Code Review
This pull request introduces a significant and valuable enhancement to the verl profiling system. It successfully integrates torch.profiler and extends support to the Agent Loop architecture, providing a more unified and powerful performance tuning experience. The refactoring of the profiler implementations under a central DistProfiler is a solid architectural improvement. While the overall changes are excellent, I've identified a few critical issues related to null-pointer exceptions in the new profiler initialization logic for sglang and vllm rollout servers. These could lead to crashes if profiling is configured in specific ways. Addressing these will make the new profiling system much more robust.
```python
else:
    logger.warning(f"agent loop only support torch and npu profiler, got {profiler_config.tool}")
    self.tool_config = None
```
There's a potential AttributeError here. If self.config.profiler is not set in the configuration, profiler_config will be None. In that case, the else block is executed, and accessing profiler_config.tool will raise an exception. You should add a check to ensure profiler_config is not None before accessing its attributes.
```diff
-else:
-    logger.warning(f"agent loop only support torch and npu profiler, got {profiler_config.tool}")
-    self.tool_config = None
+if profiler_config is not None:
+    logger.warning(f"agent loop only support torch and npu profiler, got {profiler_config.tool}")
+    self.tool_config = None
```
Fixed. I have added a check if profiler_config is not None: to ensure we only access .tool when the config exists. If profiler_config is None, we skip the logic and pass None to DistProfiler, which handles it gracefully.
```python
if (self.profiler_controller.check_enable() and self.profiler_controller.check_this_rank() and
        self.profiler_controller.is_discrete_mode()):
```
This condition can lead to a crash. If profiling is enabled with a tool other than torch or npu (e.g., nsys), self.tool_config will be None from the __init__ method. However, self.profiler_controller.check_enable() will return True, causing the code to enter this block and fail with an AttributeError on self.tool_config.contents. You should add a check to ensure self.tool_config is not None before proceeding.
```diff
-if (self.profiler_controller.check_enable() and self.profiler_controller.check_this_rank() and
-    self.profiler_controller.is_discrete_mode()):
+if (self.tool_config and self.profiler_controller.check_enable() and self.profiler_controller.check_this_rank() and
+    self.profiler_controller.is_discrete_mode()):
```
I believe the crash scenario won't happen because when profiler_config.tool is not "torch" or "npu" (e.g. "nsys"), we explicitly set profiler_config = None in the __init__ method.
As a result, DistProfiler is initialized with config=None, which defaults to enable=False. Therefore, self.profiler_controller.check_enable() will return False, and the code block accessing tool_config.contents will not be executed.
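This guard can be sketched as a tiny, runnable stand-in. The class and field names below mirror the discussion but are simplified hypotheticals, not the actual verl classes: when the tool is unsupported, the config is nulled out up front, so `check_enable()` returns False and the discrete-mode block that touches `tool_config.contents` is never entered.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProfilerConfig:
    tool: str
    enable: bool = True

class ProfilerController:
    """Sketch of the controller: config=None defaults to enable=False."""

    def __init__(self, config: Optional[ProfilerConfig]):
        self.enable = config.enable if config is not None else False

    def check_enable(self) -> bool:
        return self.enable

# An unsupported tool (e.g. "nsys") is rejected in __init__, so the
# controller is constructed with config=None.
profiler_config: Optional[ProfilerConfig] = ProfilerConfig(tool="nsys")
if profiler_config.tool not in ("torch", "npu"):
    profiler_config = None

controller = ProfilerController(profiler_config)
print(controller.check_enable())  # the discrete-mode block is skipped
```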
```python
else:
    logger.warning(f"agent loop only support torch and npu profiler, got {profiler_config.tool}")
    tool_config = None
```
There's a potential AttributeError here. If self.config.profiler is not set in the configuration, profiler_config will be None. When the else block is executed, accessing profiler_config.tool will raise an exception. You should guard this access with a check to ensure profiler_config is not None.
```diff
-else:
-    logger.warning(f"agent loop only support torch and npu profiler, got {profiler_config.tool}")
-    tool_config = None
+else:
+    if profiler_config is not None:
+        logger.warning(f"agent loop only support torch and npu profiler, got {profiler_config.tool}")
+    tool_config = None
```
Fixed. I have added a check if profiler_config is not None: to ensure we only access .tool when the config exists. If profiler_config is None, we skip the logic and pass None to DistProfiler, which handles it gracefully.
Force-pushed d8d0670 to ba0e8e8.
Co-authored-by: tardis-key <huxiaobo@zju.edu.cn>
Force-pushed e2068cb to 35c6860.
Force-pushed 35c6860 to 1391f8b.
Verification Results
Profiling Interface Compatibility Report
Nice work!
…egrate torch.profiler (verl-project#4320)

### What does this PR do?

**Summary**

This PR enhances the verl profiling system to support the Agent Loop architecture (decoupled inference and training processes). It integrates torch.profiler as a core backend within the DistProfiler framework, providing a unified performance tuning interface for both distributed training workers and asynchronous rollout servers.

**Key Enhancements**

Agent Loop Support: Implemented a coordinated control flow that allows the RayTrainer to explicitly trigger profiling sessions on remote inference servers (vLLM/SGLang) via AgentLoopManager.
- For vLLM, we leverage the AsyncLLM profiling interface.
- For SGLang, we utilize the TokenizerCommunicatorMixin profiling API.

Unified Torch Profiler: Integrated the native PyTorch Profiler into the verl ecosystem, supporting both continuous and discrete collection modes consistent with Nsight Systems and NPU.

**Profiling Workflow**

The provided sequence diagram illustrates the decoupled profiling logic. Rollout Phase: Profiling is managed through explicit RPC calls (start_profile/stop_profile) to the inference engine's server interface (AsyncLLM / TokenizerCommunicatorMixin), ensuring capture is synchronized with generation steps.

<img width="3986" height="3834" alt="whiteboard_exported_image" src="https://github.com/user-attachments/assets/afe68413-c338-4eec-a843-df5a9c106d98" />

### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here: ...
- [x] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI)
  - `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`
  - If this PR involves multiple modules, separate them with `,` like `[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
  - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title.
    - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s) if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

- [x] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- [x] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)

---------

Co-authored-by: tardis-key <huxiaobo@zju.edu.cn>
…le control params for rollout profiling (sglang backend) (#5025)

### What does this PR do?

> Address key functionality gaps in rollout discrete profiling for the sglang backend by adding global step awareness and expanding support for flexible profile control parameters:
> 1. Missing global step information resulted in disorganized profiling files;
> 2. Lack of support for specifying critical sglang backend profile control parameters (including `num_steps`, `profile_by_stage`, and `merge_profiles`) led to overly large profile files and hindered convenient profiling analysis.
>
> Solutions implemented:
> - Pass `kwargs` parameters to the rollout server to enable global step awareness, and generate an independent folder for each global step to organize profiling data in a structured way;
> - Extend the profiling config by adding optional parameters to the `content` field, and remove unnecessary restrictions on `content` values in `TorchProfilerToolConfig` to support specifying sglang-specific profile control parameters (e.g., `profile_by_stage`, `merge_profiles`).

### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here: [PR #4320: [perf] feat: verl profiler system support Agent Loop scenario and integrate torch.profiler](#4320)
- [x] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI)
  - `{modules}` include `fsdp`, `megatron`, `veomni`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`
  - If this PR involves multiple modules, separate them with `,` like `[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
  - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title.
    - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

To validate the functionality of each profile control parameter for the sglang rollout backend, orthogonal test cases were designed (minimizing redundant combinations while covering all key parameter values). The test results are as follows:

| step_start | step_end | profile-by-stage | merge-profiles | stack | shapes | cpu/cuda | Test Result |
|------------|----------|------------------|----------------|-------|--------|----------|-------------|
| default | default | default (False) | default (False) | default (False) | default (False) | default | Pass |
| 0 (≥0) | 1 (≥1) | default | default | default | default | default | Pass |
| 0 (≥0) | default | default | default | default | default | default | Pass |
| default | 1 (≥1) | default | default | default | default | default | Pass |
| default | default | True | default | default | default | default | Pass |
| default | default | default | True | default | default | default | Pass |
| default | default | default | default | True | default | default | Pass |
| default | default | default | default | default | True | default | Pass |
| default | default | default | default | default | default | [] / [cpu] / [cuda] / [cpu,cuda] | Pass |

### API and Usage Example

```yml
rollout:
  quantization: null
  profiler:
    enable: True
    all_ranks: False
    ranks: [0, 1, 2]
    tool_config:
      torch:
        discrete: True
        step_start: 0
        step_end: 1
        contents: [cpu, cuda, stack, shapes, profile_by_stage, merge_profiles]
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

- [x] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- [ ] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: ...
- [x] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
- [ ] If your PR is related to the `recipe` submodule, please also update the reference to the submodule commit via `git submodule update --remote` or `cd recipe && git pull origin main`.
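The per-global-step folder layout described above can be sketched in a few lines. This is an illustrative stand-in, not verl's actual code: the `global_step_<n>` naming and the `profile_output_dir` helper are hypothetical, and in verl the step would arrive at the rollout server via the forwarded `kwargs`.

```python
import os
import tempfile

def profile_output_dir(base_dir: str, global_step: int) -> str:
    """Create (if needed) and return a per-step folder for profiling traces."""
    step_dir = os.path.join(base_dir, f"global_step_{global_step}")
    os.makedirs(step_dir, exist_ok=True)
    return step_dir

# Each global step gets its own subfolder, so traces from different
# rollout steps no longer pile up in one directory.
base = tempfile.mkdtemp()
paths = [profile_output_dir(base, step) for step in (0, 1)]
print([os.path.basename(p) for p in paths])
```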
What does this PR do?
Summary
This PR enhances the verl profiling system to support the Agent Loop architecture (decoupled inference and training processes). It integrates torch.profiler as a core backend within the DistProfiler framework, providing a unified performance tuning interface for both distributed training workers and asynchronous rollout servers.
Key Enhancements
Agent Loop Support: Implemented a coordinated control flow that allows the RayTrainer to explicitly trigger profiling sessions on remote inference servers (vLLM/SGLang) via AgentLoopManager.
Unified Torch Profiler: Integrated the native PyTorch Profiler into the verl ecosystem, supporting both continuous and discrete collection modes consistent with Nsight systems and NPU.
Profiling Workflow

The provided Sequence Diagram illustrates the decoupled profiling logic:
Rollout Phase: Profiling is managed through explicit RPC calls (start_profile/stop_profile) to the inference engine's server interface (AsyncLLM / TokenizerCommunicatorMixin), ensuring capture is synchronized with generation steps.
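A configuration of the shape below would enable the torch profiler for rollout. It is adapted from the #5025 example later on this page; exact keys may vary by verl version.

```yml
rollout:
  profiler:
    enable: True
    all_ranks: False
    ranks: [0]
    tool_config:
      torch:
        discrete: True
        step_start: 0
        step_end: 1
        contents: [cpu, cuda]
```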
Checklist Before Starting

Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI)
- `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`
- If this PR involves multiple modules, separate them with `,` like `[megatron, fsdp, doc]`
- `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
- If this PR breaks any API, add `[BREAKING]` to the beginning of the title. Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

Test
API and Usage Example
# Add code snippet or script demonstrating how to use this

Design & Code Changes
Checklist Before Submitting
Important
Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.
Apply pre-commit checks: `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`. Once your PR is ready for CI, send a message in the `ci-request` channel in the `verl` Slack workspace. (If not accessible, please try the Feishu group (飞书群).)