[megatron, training_utils] fix: router replay R3 align router replay data with global layer indices#5037

Merged

wuxibin89 merged 1 commit into verl-project:main from HollowMan6:r3_replay on Jan 28, 2026

Conversation

Collaborator

@HollowMan6 HollowMan6 commented Jan 24, 2026

What does this PR do?

DeepSeek-V3-style MoE employs a hybrid architecture with the first three layers as dense FFN blocks before switching to MoE layers, which means not every layer has a router.

This PR fixes router replay R3 for the DeepSeek V3 architecture. vLLM reports routed_experts across all transformer layers (including dense ones), while Megatron instantiates routers only for MoE layers, so mapping with i + offset silently shifts every MoE layer that follows a dense layer. When routed‑experts tensors include dense layers (i.e., span the full num_layers), we therefore map replay data by each router's global layer_number; otherwise, we fall back to local offset indexing and validate bounds to catch mismatches. We also patch TopKRouter.set_layer_number to store the global layer number in each RouterReplay instance, so global alignment stays reliable under VPP/PP.

Dependent on vllm-project/vllm#33013
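The indexing decision described above can be sketched as follows. This is a minimal, illustrative sketch, not verl's actual implementation: the names `assign_replay_data`, `num_model_layers`, and `offset` are hypothetical stand-ins (the real logic lives in `set_router_replay_data` and operates on tensors), and only the 1-based `layer_number` convention is taken from Megatron.

```python
from types import SimpleNamespace


def assign_replay_data(routers, replay_data, num_model_layers, offset):
    """Dispatch per-layer replay entries onto this rank's MoE routers.

    replay_data is indexed by layer (one entry per recorded layer); routers
    holds the router instances local to this PP/VPP stage.
    """
    num_layers_in_data = len(replay_data)
    # If the rollout engine recorded every transformer layer (dense + MoE),
    # index by each router's global layer number rather than local position.
    use_global = num_layers_in_data == num_model_layers
    for i, router in enumerate(routers):
        if use_global and getattr(router, "layer_number", None) is not None:
            layer_idx = router.layer_number - 1  # Megatron layer_number is 1-based
        else:
            layer_idx = i + offset  # original local-offset scheme
        if not 0 <= layer_idx < num_layers_in_data:
            raise ValueError(
                f"layer_idx {layer_idx} out of range for {num_layers_in_data} layers"
            )
        router.target_indices = replay_data[layer_idx]


# DeepSeek-V3-style toy model: layers 1-3 dense, layers 4-5 MoE.
routers = [SimpleNamespace(layer_number=4), SimpleNamespace(layer_number=5)]
data = ["d0", "d1", "d2", "m3", "m4"]  # rollout recorded all 5 layers
assign_replay_data(routers, data, num_model_layers=5, offset=0)
assert routers[0].target_indices == "m3"  # not "d0", as plain i + offset would give
```

The toy example shows the failure mode being fixed: with plain local indexing, the first MoE router would read the dense layer's slot.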

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, veomni, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

Without this fix:
[image]

With this fix, it looks good now:
[image]

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.


Copilot AI review requested due to automatic review settings January 24, 2026 21:08

@gemini-code-assist gemini-code-assist Bot left a comment

Code Review

This pull request correctly addresses an issue in the router replay logic by aligning replay data with global layer indices when appropriate. The change introduces conditional logic that uses the global layer_number from router instances and falls back to local offset indexing otherwise, which matches the stated goal. The addition of a bounds check for the layer index is a good defensive measure. I've included one suggestion to make the logic more robust by failing fast if a router is missing the layer_number attribute when global indexing is expected, which would prevent silent errors from potential misconfigurations.

Comment thread verl/utils/megatron/router_replay_utils.py

Copilot AI left a comment


Pull request overview

This PR adjusts how router replay (R3) maps recorded routing indices back to router instances in Megatron, aiming to use global layer indices when routed‑experts tensors span all transformer layers and otherwise fall back to local offsets. The goal is to correctly align replay data for architectures like DeepSeek V3 where routers and layers may not map 1:1 by simple local offsets.

Changes:

  • Compute num_layers_in_data from the replay tensor and detect whether it matches tf_config.num_layers to decide between global and local layer indexing.
  • For each router instance, determine a layer_idx either from a layer_number attribute (global index path) or from the original i + offset scheme (local index path).
  • Add explicit bounds checking on layer_idx and raise a ValueError if the computed index is outside the replay data’s layer dimension.


Comment thread verl/utils/megatron/router_replay_utils.py

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces a fix for router replay in the DeepSeek V3 architecture, aligning router replay data with global layer indices when the data encompasses all layers, and correctly falling back to local offset indexing otherwise. The changes involve patching TopKRouter to store the global layer index and updating the data loading logic in set_router_replay_data to use this global index when appropriate. The implementation is sound, includes necessary fallbacks for robustness, and adds a validation check for layer indices. The code appears to correctly address the issue described, and I did not find any issues of high or critical severity.


Copilot AI left a comment


Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated no new comments.



…data with global layer indices

This PR fixes router replay R3 for the DeepSeek V3 architecture: when
routed‑experts tensors include dense layers (full `num_layers`), we
map replay data by each router's global layer_number; otherwise, we
fall back to local offset indexing and validate bounds to catch
mismatches.

Signed-off-by: Hollow Man <hollowman@opensuse.org>
@wuxibin89 wuxibin89 merged commit 6b4a867 into verl-project:main Jan 28, 2026
71 of 80 checks passed
@HollowMan6 HollowMan6 deleted the r3_replay branch January 28, 2026 06:33
PeterSH6 pushed a commit that referenced this pull request Mar 3, 2026
… PP/VPP (#5452)

What does this PR do?
Router replay previously assumed all transformer layers are MoE layers,
which caused incorrect layer indexing for hybrid models (e.g., models
with both dense and MoE layers determined by moe_layer_freq).
This led to bugs when using pipeline parallelism (PP) and virtual
pipeline parallelism (VPP), as layer offset calculations did not account
for dense layers.

Although #5037 introduced the
router replay mechanism by patching Megatron's TopKRouter, it did not
fully handle hybrid (dense + MoE) models under VPP.
  Specifically:

- Bug 1 — Incorrect VPP offset (root cause): In
https://github.com/verl-project/verl/blob/c179476754150a5384f96d56b622a8f6330d2c04/verl/utils/megatron/router_replay_utils.py#L422,
get_num_layers_to_build() was used to compute the offset across prior
VPP stages. This returns the count of all transformer layers (including
dense layers), but RouterReplay instances only exist on MoE layers. For
hybrid models
this over-counts the offset, causing the wrong slice of router instances
to be selected.
- Bug 2 — Replay data not set correctly (consequence): Because Bug 1
returns the wrong router instance list,

https://github.com/verl-project/verl/blob/c179476754150a5384f96d56b622a8f6330d2c04/verl/utils/megatron/router_replay_utils.py#L256
either assigns target_indices to the wrong router or goes out of bounds,
so replay data is never correctly dispatched to the corresponding MoE
layers.

The same issue also exists in pp_gather(), where VPP offset calculation
must slice gathered data by MoE layer count rather than total layer
count.

  Key changes:
- Add is_moe_layer() and get_moe_num_layers_to_build() helpers to
distinguish MoE layers from dense layers based on moe_layer_freq
- Rewrite set_router_replay_data() to correctly index router replay data
by MoE-layer ordinal for R2 mode with mixed dense/MoE models
- Fix VPP offset calculation in pp_gather() and RouterReplayHelper to
count only MoE layers instead of all transformer layers
- Remove unnecessary layer_number tracking from RouterReplay patch to
minimize intrusive changes to Megatron.
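The counting logic behind those helpers can be sketched as below. This is a hedged sketch under stated assumptions: the function names and signatures are illustrative (the real verl helpers may differ), and the `moe_layer_freq` semantics follow Megatron conventions, where an int N marks every N-th layer as MoE and a per-layer 0/1 list marks MoE layers explicitly.

```python
def is_moe_layer(layer_idx, moe_layer_freq):
    """Return True if the 0-based global layer index has a router."""
    if isinstance(moe_layer_freq, int):
        # Megatron convention: layer i is MoE when i % N == 0.
        return layer_idx % moe_layer_freq == 0
    # Otherwise a per-layer 0/1 pattern, e.g. [0, 0, 0, 1, 1, ...].
    return bool(moe_layer_freq[layer_idx])


def moe_layers_in_range(start, end, moe_layer_freq):
    """Count routers among global layers [start, end) -- the MoE-only
    analogue of counting all layers, as needed for PP/VPP offsets."""
    return sum(1 for i in range(start, end) if is_moe_layer(i, moe_layer_freq))


# DeepSeek-V3-style toy pattern: first 3 layers dense, rest MoE.
pattern = [0, 0, 0, 1, 1, 1, 1, 1]
assert moe_layers_in_range(0, 4, pattern) == 1  # only layer 3 has a router
assert moe_layers_in_range(0, 8, pattern) == 5
```

The toy pattern makes Bug 1 concrete: a stage covering the first four layers contributes only one router to the offset, not four, so counting all transformer layers over-counts by three.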

### Checklist Before Starting

- [ ] Search for similar PRs. Paste at least one query link here: ...
- [ ] Format the PR title as `[{modules}] {type}: {description}` (This
will be checked by the CI)
- `{modules}` include `fsdp`, `megatron`, `veomni`, `sglang`, `vllm`,
`rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`,
`deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`,
`model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`,
`fully_async`, `one_step_off`
- If this PR involves multiple modules, separate them with `,` like
`[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
- If this PR breaks any API (CLI arguments, config, function signature,
etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that can not be tested by CI (e.g., algorithm
implementation, new model support), validate by experiment(s) and show
results like training curve plots, evaluation results, etc.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s)
if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the
specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review,
otherwise the reviewer might deprioritize this PR for review.

- [ ] Read the [Contribute
Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [ ] Apply [pre-commit
checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting):
`pre-commit install && pre-commit run --all-files --show-diff-on-failure
--color=always`
- [ ] Add / Update [the
documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI
workflow](https://github.com/volcengine/verl/tree/main/.github/workflows)
to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in [the `ci-request`
channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the
`verl` Slack
workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).
(If not accessible, please try [the Feishu group
(飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
- [ ] If your PR is related to the `recipe` submodule, please also
update the reference to the submodule commit via `git submodule update
--remote` or `cd recipe && git pull origin main`.

---------

Signed-off-by: xhx1022 <1737006628@qq.com>
JasonWei05 pushed a commit to JasonWei05/eca that referenced this pull request Mar 3, 2026
… PP/VPP (#5452)

guillemgt pushed a commit to guillemgt/verl that referenced this pull request Mar 9, 2026
… PP/VPP (verl-project#5452)

guillemgt added a commit to guillemgt/verl that referenced this pull request Mar 9, 2026
… PP/VPP (verl-project#5452)

DearFishi pushed a commit to KunlunxinAD/verl that referenced this pull request Mar 20, 2026
… PP/VPP (verl-project#5452)

sijyang pushed a commit to sijyang/verl that referenced this pull request Apr 1, 2026
… PP/VPP (verl-project#5452)

What does this PR do?
Router replay previously assumed all transformer layers are MoE layers,
which caused incorrect layer indexing for hybrid models (e.g., models
with both dense and MoE layers determined by moe_layer_freq).
This led to bugs when using pipeline parallelism (PP) and virtual
pipeline parallelism (VPP), as layer offset calculations did not account
for dense layers.

Although verl-project#5037 introduced the
router replay mechanism by patching Megatron's TopKRouter, it did not
fully handle hybrid (dense + MoE) models under VPP.
Specifically:

- Bug 1 — Incorrect VPP offset (root cause): In
https://github.com/verl-project/verl/blob/c179476754150a5384f96d56b622a8f6330d2c04/verl/utils/megatron/router_replay_utils.py#L422,
get_num_layers_to_build() was used to compute the offset across prior
VPP stages. This returns the count of all transformer layers (including
dense layers), but RouterReplay instances only exist on MoE layers. For
hybrid models
this over-counts the offset, causing the wrong slice of router instances
to be selected.
- Bug 2 — Replay data not set correctly (consequence): because Bug 1 returns the wrong router instance list, https://github.com/verl-project/verl/blob/c179476754150a5384f96d56b622a8f6330d2c04/verl/utils/megatron/router_replay_utils.py#L256 either assigns `target_indices` to the wrong router or goes out of bounds, so replay data is never correctly dispatched to the corresponding MoE layers.

The same issue also exists in pp_gather(), where VPP offset calculation
must slice gathered data by MoE layer count rather than total layer
count.

Key changes:
- Add is_moe_layer() and get_moe_num_layers_to_build() helpers to
distinguish MoE layers from dense layers based on moe_layer_freq
- Rewrite set_router_replay_data() to correctly index router replay data
by MoE-layer ordinal for R2 mode with mixed dense/MoE models
- Fix VPP offset calculation in pp_gather() and RouterReplayHelper to
count only MoE layers instead of all transformer layers
- Remove unnecessary layer_number tracking from RouterReplay patch to
minimize intrusive changes to Megatron.
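The two helpers listed above can be sketched as follows, assuming `moe_layer_freq` follows Megatron semantics: either an int N (every Nth layer is MoE) or an explicit per-layer 0/1 pattern. The names match the commit message, but the signatures are illustrative, not verl's actual API:

```python
def is_moe_layer(global_layer_idx: int, moe_layer_freq) -> bool:
    """True if the transformer layer at `global_layer_idx` (0-based) is MoE."""
    if isinstance(moe_layer_freq, int):
        return moe_layer_freq > 0 and global_layer_idx % moe_layer_freq == 0
    return bool(moe_layer_freq[global_layer_idx])


def get_moe_num_layers_to_build(start: int, end: int, moe_layer_freq) -> int:
    """Count only the MoE layers in the global layer range [start, end),
    so VPP offsets index RouterReplay instances rather than all layers."""
    return sum(1 for i in range(start, end) if is_moe_layer(i, moe_layer_freq))
```

For a DeepSeek-V3-style pattern `[0, 0, 0, 1, 1, 1]`, the first three layers are dense, so a stage covering layers 0–5 holds only three RouterReplay instances; counting all six layers is exactly the over-counting described in Bug 1.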

---------

Signed-off-by: xhx1022 <1737006628@qq.com>
DaizeDong pushed a commit to DaizeDong/verl that referenced this pull request Apr 19, 2026
…data with global layer indices (verl-project#5037)

### What does this PR do?

DeepSeek-V3-style MoE employs a hybrid architecture with the first three
layers as dense FFN blocks before switching to MoE layers, which means
not every layer has a router.

This PR fixes router replay R3 for the DeepSeek-V3 architecture: vLLM reports `routed_experts` across all transformer layers (including dense ones), while Megatron only has routers for MoE layers, so mapping with `i + offset` silently shifts every MoE layer that follows a dense layer. When the routed-experts tensors include dense layers (full `num_layers`), we therefore map replay data by each router's global `layer_number`; otherwise, we fall back to local offset indexing and validate bounds to catch mismatches. We also patch `TopKRouter.set_layer_number` to store the global layer number in each `RouterReplay` instance so that global alignment is reliable with VPP/PP.

Dependent on vllm-project/vllm#33013
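The branching described above can be sketched as follows; `RouterReplay` is modeled by a plain namespace, and the 1-based `layer_number` convention and field names are assumptions for illustration, not verl's actual code:

```python
from types import SimpleNamespace  # stand-in for RouterReplay instances


def set_router_replay_data(routers, replay_data, num_layers, offset=0):
    """Dispatch per-layer routed-experts data to MoE routers.

    `routers`: RouterReplay-like objects carrying a global, 1-based
    `layer_number` (as stored by the patched TopKRouter.set_layer_number).
    `replay_data`: one entry per layer reported by the rollout engine.
    """
    if len(replay_data) == num_layers:
        # The rollout engine reported all transformer layers (dense
        # included): index by each router's global layer number,
        # skipping the dense slots.
        for router in routers:
            router.replay_data = replay_data[router.layer_number - 1]
    else:
        # Data covers only this stage's MoE layers: fall back to local
        # offset indexing, with a bounds check to surface mismatches early.
        for i, router in enumerate(routers):
            idx = i + offset
            if not 0 <= idx < len(replay_data):
                raise IndexError(
                    f"replay index {idx} out of range ({len(replay_data)} entries)"
                )
            router.replay_data = replay_data[idx]
```

With three dense layers followed by three MoE layers, the routers have global layer numbers 4–6, so the first branch picks entries 3–5 of a 6-entry tensor instead of entries 0–2.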


### Test

> For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

Without this fix:
<img width="3646" height="1132" alt="image"
src="https://github.com/user-attachments/assets/d2400f03-4e25-4f52-8717-a23b58cc23ce"
/>

With this fix, it looks good now:
<img width="3668" height="1210" alt="image"
src="https://github.com/user-attachments/assets/7a9b4818-861f-4a52-8c13-90e6ed6f9530"
/>


Signed-off-by: Hollow Man <hollowman@opensuse.org>
DaizeDong pushed a commit to DaizeDong/verl that referenced this pull request Apr 19, 2026
… PP/VPP (verl-project#5452)
