Merged
Changes from all commits (55 commits)
b82a643
Add DeepEP support to Megatron Policy
parthmannan Dec 16, 2025
0488cee
Add example recipe and add to config dict
parthmannan Dec 17, 2025
32a6794
docs: Revise news section for nemotron v3 and DAPO algorithm support …
snowmanwwg Dec 16, 2025
1143d7a
chore: fix grpo functional test metric (#1643)
RayenTian Dec 16, 2025
d148d8c
feat: add support for building images using vllm from private repos …
terrykong Dec 17, 2025
76ddbe1
feat: Necessary changes for Gym GRPO tutorial (#1630)
bxyu-nvidia Dec 17, 2025
527f37e
perf: Add qwen3 30b-a3b async-8-off recipe (#1642)
youngeunkwon0405 Dec 17, 2025
285a329
feat: Add GPT-OSS support via mcore (#1452)
ashors1 Dec 17, 2025
4f2e164
chore: Bump vllm to 0.11.2, torch to 2.9, transformers to 4.57.1 (#1563)
yfw Dec 18, 2025
789bda4
fix: Support datasets saved with save_to_disk in ResponseDataset (#1610)
sahgerlad Dec 18, 2025
105a5cc
Recipe update
parthmannan Dec 20, 2025
250f34e
fix: Handle disabled validation in SFT training (#1611)
sahgerlad Dec 19, 2025
afadf3e
fix: Fix crash when using cp in dtensor path (#1663)
yfw Dec 19, 2025
87c55e2
fix: Fix Fp8 sequence padding for PP>1 case (#1579)
guyueh1 Dec 20, 2025
56dba8e
test: Perf recipe for v0.5 (#1667)
guyueh1 Dec 20, 2025
b527f27
fix: Fix fp8 after vllm v0.11.2 bump (#1660)
guyueh1 Dec 20, 2025
21d75b2
fix: Fix crash when using activation_checkpointing (#1676)
yfw Dec 22, 2025
f952f78
feat: add dapo recipe and test (#1617)
ZhiyuLi-Nvidia Dec 22, 2025
8c9ae9f
feat: DTensorPolicyV2 GPT-OSS SFT support (#1470)
adil-a Dec 23, 2025
d3442b4
fix: grad norm calculation for dtensor v2 (#1693)
hemildesai Dec 24, 2025
7fbc72e
feat: Add Nemotron‑3 Nano 30B A3B BF16 SFT nightly tests (FSDP2, +LoR…
RayenTian Dec 24, 2025
0419429
feat: Support prefetching of specific envs (#1692)
hemildesai Dec 25, 2025
9108663
Upgrade DeepEP version to match
parthmannan Dec 30, 2025
78917e0
Lint fix
parthmannan Jan 4, 2026
78d182c
fix: Fix DTensor slice crash after PyTorch 2.9 bump (#1689)
zpqiu Jan 2, 2026
71a7fa8
fix: grad norm check for automodel gpt oss nightly (#1708)
hemildesai Jan 5, 2026
8c492ff
fix: relax nanov3 nightly test metrics strict (#1712)
RayenTian Jan 5, 2026
4051123
fix: on GB200 use single-thread checkpoint save to avoid CPU OOM (#1703)
guyueh1 Jan 5, 2026
3e4bdcf
perf: [Perf recipe] Change TP 16->32 for deepseek GB200 sync benchmar…
guyueh1 Jan 5, 2026
a2580e2
docs: Add doc for nano-v3 (#1694)
yfw Jan 5, 2026
90e14ee
fix: Disable cudnn sdpa backend when using activation checkpointing (…
yfw Jan 6, 2026
121dcf1
fix: log metrics that can be coerced to scalars (#1723)
terrykong Jan 6, 2026
f63f268
fix: use median instead of mean for logprob error for stability in ni…
terrykong Jan 7, 2026
6e4c7d3
fix: gemma3 27b must now have skip_tokenizer_init=False in vllm (#1721)
terrykong Jan 7, 2026
d3e219b
fix: fix several nightly tests that were flaky (#1724)
terrykong Jan 7, 2026
0032a2d
fix: apply offloading change from v2 to v1 (#1726)
terrykong Jan 7, 2026
c3ca1d4
fix: mcore generation config restored in nightly test (#1720)
terrykong Jan 8, 2026
512bef9
feat: Megatron SFT LoRA (#1629)
arendu Jan 8, 2026
d4683ea
build: Update aiohttp and urllib3 (#1746)
chtruong814 Jan 9, 2026
d443965
fix: patch pytorch aten.alias.default shard strategy (#1728)
RayenTian Jan 9, 2026
ff48f85
feat: RL support for custom moe models in dtensor v2 (#1695)
hemildesai Jan 9, 2026
9b5da01
fix: split dtensorv1 vllm dependency (#1638)
yuki-97 Jan 10, 2026
b66ec93
build: Resolve CVEs for gnupg and aiohttp (#1755)
chtruong814 Jan 10, 2026
35c7df9
build: Bump mamba to d68d16e and causal-conv1d to 67e0a9d (#1759)
chtruong814 Jan 12, 2026
6e6c476
Update uv
parthmannan Jan 15, 2026
0b5d72f
ci: Clean up disk space for lint check (#1768)
chtruong814 Jan 13, 2026
454c304
docs: Adding dtensor TP debugging summary (#1767)
joyang-nv Jan 15, 2026
1ba733d
Fix lock conflict resolution after signoff
parthmannan Jan 15, 2026
c66bc7a
Merge branch 'main' of https://github.com/NVIDIA-NeMo/RL into pmannan…
parthmannan Jan 15, 2026
a8351c7
Merge branch 'main' of https://github.com/NVIDIA-NeMo/RL into pmannan…
parthmannan Jan 15, 2026
0ff0879
Make DeepEP related args in all configs
parthmannan Jan 16, 2026
6b34413
Add deepep args to policy tests
parthmannan Jan 16, 2026
eb596a4
Add keys to vllm tests
parthmannan Jan 16, 2026
f72d321
Fix lint
guyueh1 Jan 18, 2026
6d85e5c
Merge branch 'main' into guyueh/deepep_mcore_training
guyueh1 Jan 18, 2026
3 changes: 3 additions & 0 deletions examples/configs/distillation_math.yaml
@@ -107,6 +107,9 @@ policy: &POLICY_BASE
bias_activation_fusion: True
defer_fp32_logits: False
moe_per_layer_logging: False
moe_enable_deepep: false
moe_token_dispatcher_type: "allgather"
moe_shared_expert_overlap: false

optimizer:
optimizer: "adam"
3 changes: 3 additions & 0 deletions examples/configs/distillation_math_megatron.yaml
@@ -59,6 +59,9 @@ policy: &POLICY_BASE
bias_activation_fusion: True
moe_per_layer_logging: False
defer_fp32_logits: False
moe_enable_deepep: false
moe_token_dispatcher_type: "allgather"
moe_shared_expert_overlap: false

optimizer:
optimizer: "adam"
3 changes: 3 additions & 0 deletions examples/configs/dpo.yaml
@@ -119,6 +119,9 @@ policy:
bias_activation_fusion: True
defer_fp32_logits: False
moe_per_layer_logging: False
moe_enable_deepep: false
moe_token_dispatcher_type: "allgather"
moe_shared_expert_overlap: false

optimizer:
optimizer: "adam"
3 changes: 3 additions & 0 deletions examples/configs/grpo_math_1B.yaml
@@ -116,6 +116,9 @@ policy:
bias_activation_fusion: True
defer_fp32_logits: False
moe_per_layer_logging: False
moe_enable_deepep: false
moe_token_dispatcher_type: "allgather"
moe_shared_expert_overlap: false

optimizer:
optimizer: "adam"
3 changes: 3 additions & 0 deletions examples/configs/grpo_math_1B_megatron.yaml
@@ -94,6 +94,9 @@ policy:
moe_router_load_balancing_type: "none" # "seq_aux_loss" causes logprob error divergence for grpo
moe_router_bias_update_rate: 0.0 # by default, disable bias updates for grpo
moe_permute_fusion: false
moe_enable_deepep: false
moe_token_dispatcher_type: "allgather"
moe_shared_expert_overlap: false
#gives ~20% training perf speedup with sequence packing
apply_rope_fusion: True

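The two recipe diffs below (their file paths are not shown in this rendering) are where DeepEP is actually switched on: they set moe_enable_deepep: true together with the flex dispatcher, whereas the base configs above and below keep both keys at their defaults.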
@@ -29,6 +29,8 @@ policy:
sequence_parallel: true
moe_permute_fusion: true
apply_rope_fusion: false
moe_enable_deepep: true
moe_token_dispatcher_type: flex
optimizer:
lr: 5.0e-07
min_lr: 5.0e-08
@@ -31,6 +31,8 @@ policy:
lr: 1.0e-06
scheduler:
lr_warmup_iters: 50
moe_enable_deepep: true
moe_token_dispatcher_type: flex
logger:
monitor_gpus: false
wandb:
3 changes: 3 additions & 0 deletions examples/configs/sft.yaml
@@ -113,6 +113,9 @@ policy:
bias_activation_fusion: True
defer_fp32_logits: False
moe_per_layer_logging: False
moe_enable_deepep: false
moe_token_dispatcher_type: "allgather"
moe_shared_expert_overlap: false

peft:
enabled: false
3 changes: 3 additions & 0 deletions examples/configs/sft_openmathinstruct2_megatron.yaml
@@ -92,6 +92,9 @@ policy:
# gives ~25% training perf speedup with sequence packing and apply_rope_fusion
bias_activation_fusion: True
moe_per_layer_logging: False
moe_enable_deepep: false
moe_token_dispatcher_type: "allgather"
moe_shared_expert_overlap: false

env_vars:
PYTORCH_CUDA_ALLOC_CONF: "expandable_segments:False"
3 changes: 3 additions & 0 deletions examples/configs/vlm_grpo_3B.yaml
@@ -104,6 +104,9 @@ policy:
bias_activation_fusion: True
defer_fp32_logits: False
moe_per_layer_logging: False
moe_enable_deepep: false
moe_token_dispatcher_type: "allgather"
moe_shared_expert_overlap: false

optimizer:
optimizer: "adam"
3 changes: 3 additions & 0 deletions examples/configs/vlm_grpo_3B_megatron.yaml
@@ -146,6 +146,9 @@ policy:
bias_activation_fusion: True
defer_fp32_logits: False
moe_per_layer_logging: False
moe_enable_deepep: false
moe_token_dispatcher_type: "allgather"
moe_shared_expert_overlap: false
optimizer:
optimizer: adam
lr: 2.0e-07
@@ -100,6 +100,9 @@ policy:
apply_rope_fusion: True
defer_fp32_logits: false
moe_permute_fusion: false
moe_enable_deepep: false
moe_token_dispatcher_type: "allgather"
moe_shared_expert_overlap: false

optimizer:
optimizer: "adam"
10 changes: 10 additions & 0 deletions nemo_rl/models/policy/__init__.py
@@ -183,6 +183,16 @@ class MegatronConfig(TypedDict):
# Force overwrite of the initial checkpoint even if it exists (default: False)
force_overwrite_initial_ckpt: NotRequired[bool]
moe_per_layer_logging: bool
# Set to true to enable DeepEP for expert parallel communication
# Must set moe_token_dispatcher_type to 'flex'
# Must set moe_shared_expert_overlap to False
moe_enable_deepep: bool
# The type of token dispatcher to use. The default is 'allgather'.
# Options are 'allgather', 'alltoall', and 'flex'.
# Use 'flex' when using DeepEP
moe_token_dispatcher_type: str
# Can be used only with 'alltoall' token dispatcher
moe_shared_expert_overlap: bool
optimizer: MegatronOptimizerConfig
scheduler: MegatronSchedulerConfig
distributed_data_parallel_config: MegatronDDPConfig
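These three keys travel together. As a minimal sketch (not part of this PR), here is how they might be set in a megatron_cfg dict when enabling DeepEP, following the constraints documented in the comments above:

# Hypothetical megatron_cfg fragment; the key names come from the TypedDict above.
megatron_cfg = {
    # ... other MegatronConfig keys ...
    "moe_enable_deepep": True,            # enable DeepEP for expert parallel communication
    "moe_token_dispatcher_type": "flex",  # DeepEP requires the 'flex' dispatcher
    "moe_shared_expert_overlap": False,   # must stay False when DeepEP is enabled
}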
7 changes: 7 additions & 0 deletions nemo_rl/models/policy/workers/megatron_policy_worker.py
@@ -658,6 +658,13 @@ def __init__(
model_cfg.moe_router_bias_update_rate = self.cfg["megatron_cfg"][
"moe_router_bias_update_rate"
]
model_cfg.moe_enable_deepep = self.cfg["megatron_cfg"]["moe_enable_deepep"]
model_cfg.moe_token_dispatcher_type = self.cfg["megatron_cfg"][
"moe_token_dispatcher_type"
]
model_cfg.moe_shared_expert_overlap = self.cfg["megatron_cfg"][
"moe_shared_expert_overlap"
]

model_cfg.moe_permute_fusion = self.cfg["megatron_cfg"]["moe_permute_fusion"]
if "layernorm_epsilon" in self.cfg["megatron_cfg"]:
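The worker copies the three values straight through to model_cfg without cross-checking them. As a hedged sketch (the helper below is hypothetical, not part of this PR), the constraints stated in the MegatronConfig comments could be enforced before model construction:

def _check_deepep_cfg(megatron_cfg: dict) -> None:
    """Hypothetical validator for the DeepEP-related keys; not part of this PR."""
    if megatron_cfg["moe_enable_deepep"]:
        # DeepEP only works with the 'flex' token dispatcher.
        assert megatron_cfg["moe_token_dispatcher_type"] == "flex", (
            "moe_enable_deepep=True requires moe_token_dispatcher_type='flex'"
        )
        # Shared-expert overlap must stay disabled with DeepEP.
        assert not megatron_cfg["moe_shared_expert_overlap"], (
            "moe_enable_deepep=True requires moe_shared_expert_overlap=False"
        )
    if megatron_cfg["moe_shared_expert_overlap"]:
        # Overlap can be used only with the 'alltoall' dispatcher.
        assert megatron_cfg["moe_token_dispatcher_type"] == "alltoall", (
            "moe_shared_expert_overlap=True requires moe_token_dispatcher_type='alltoall'"
        )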
1 change: 1 addition & 0 deletions pyproject.toml
@@ -100,6 +100,7 @@ mcore = [
# https://github.com/NVIDIA/TransformerEngine/blob/v2.3/transformer_engine/pytorch/attention/dot_product_attention/utils.py#L108
# https://github.com/facebookresearch/xformers/blob/8354497deb2c04c67fbb2e2ad911e86530da0e90/xformers/ops/fmha/flash.py#L76
"flash-attn==2.8.1",
"deep_ep @ git+https://github.com/deepseek-ai/DeepEP.git@bfded34800dfec415b71503f8205181de90b2480",
# Remove this once https://github.com/NVIDIA-NeMo/RL/issues/501 resolved
"vllm==0.11.2",
]
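Because deep_ep is built from a pinned git commit rather than fetched from PyPI, a quick probe can confirm the extension actually installed. A sketch, assuming the importable module is named deep_ep (as in the upstream DeepEP repo):

import importlib.util

# Fail fast if DeepEP is requested but the extension is missing.
if importlib.util.find_spec("deep_ep") is None:
    raise RuntimeError(
        "deep_ep is not importable; install the 'mcore' extra "
        "before setting moe_enable_deepep=True"
    )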
3 changes: 3 additions & 0 deletions tests/unit/models/generation/test_vllm_generation.py
@@ -178,6 +178,9 @@ def get_basic_megatron_test_config(
"moe_router_load_balancing_type": "none",
"moe_router_bias_update_rate": 0.0,
"moe_permute_fusion": False,
"moe_enable_deepep": False,
"moe_token_dispatcher_type": "allgather",
"moe_shared_expert_overlap": False,
"apply_rope_fusion": True,
"bias_activation_fusion": True,
"moe_per_layer_logging": False,
3 changes: 3 additions & 0 deletions tests/unit/models/policy/test_megatron_worker.py
@@ -135,6 +135,9 @@ def create_megatron_test_config(
"apply_rope_fusion": True,
"bias_activation_fusion": True,
"moe_per_layer_logging": False,
"moe_enable_deepep": False,
"moe_token_dispatcher_type": "allgather",
"moe_shared_expert_overlap": False,
"defer_fp32_logits": defer_fp32_logits,
"train_iters": 100, # Required for Megatron training
"optimizer": {
9 changes: 6 additions & 3 deletions uv.lock

Some generated files are not rendered by default.