Conversation
📝 Walkthrough

Introduces a new optional configuration field, `truncated_importance_sampling_ratio`, in the GRPO YAML configs and wires it into `ClippedPGLossFn`. The loss function now validates this field: it may only be used when importance sampling correction is enabled and only with token-level (not sequence-level) importance sampling, and it must be positive. When configured, it is applied as a clip on the token-level importance weights.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant C as Config
    participant T as Trainer
    participant L as ClippedPGLossFn
    participant D as Data (rewards, logprobs)
    C->>T: Provide loss_fn config (incl. truncated_importance_sampling_ratio)
    T->>L: Initialize with cfg
    L->>L: Validate<br/>- if ratio set: require IS correction enabled<br/>- forbid sequence-level IS<br/>- ratio > 0
    T->>L: forward(old_logprobs, new_logprobs, rewards, flags)
    alt IS correction enabled
        alt Sequence-level IS
            L->>L: Use sequence-level ratios (no truncation)
        else Token-level IS
            L->>L: Compute token IS weights
            opt ratio provided
                Note over L: Clip token IS weights to truncated_importance_sampling_ratio
            end
            L->>L: Compute clipped PG loss
        end
    else
        L->>L: Compute loss without IS correction
    end
    L-->>T: loss
```
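The token-level truncation step above can be sketched in plain Python (a minimal illustration of truncated importance sampling, not the NeMo-RL implementation, which operates on tensors; the function name and inputs here are assumptions):

```python
import math

def truncated_token_is_weights(old_logprobs, new_logprobs, ratio_cap):
    """Per-token importance weights exp(new - old), capped at ratio_cap.

    Sketch of truncated importance sampling (TIS): weights above the cap
    are clipped down to it; weights at or below it pass through unchanged.
    """
    weights = [math.exp(new - old) for old, new in zip(old_logprobs, new_logprobs)]
    return [min(w, ratio_cap) for w in weights]

# One token's probability doubled under the new policy, another halved:
old_lp = [math.log(0.25), math.log(0.5)]
new_lp = [math.log(0.5), math.log(0.25)]
# First weight (2.0) is clipped to the cap of 1.5; second (~0.5) is unchanged.
print(truncated_token_is_weights(old_lp, new_lp, ratio_cap=1.5))
```

Only the upside is truncated: a weight below the cap is returned as-is, which is what bounds the variance of off-policy updates without biasing low-weight tokens.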
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks and finishing touches: ❌ 2 checks failed (warnings), ✅ 2 checks passed
Actionable comments posted: 5
🧹 Nitpick comments (1)
nemo_rl/algorithms/loss_functions.py (1)
297-302: Reference public documentation instead of private Notion page.

The comment on line 297 links to https://fengyao.notion.site/off-policy-rl, which is a private page inaccessible to most developers. Consider referencing a public paper, arXiv preprint, or internal documentation instead.

Additionally, consider moving this truncation inside the `if self.use_importance_sampling_correction:` block (line 305) to make the dependency explicit, even though validation already prevents the invalid configuration:

```diff
 actor_importance_weights = actor_importance_weights_expanded
 del actor_importance_weights_expanded

 if self.use_importance_sampling_correction:
+    # TIS: Truncate importance weights if configured
+    if self.truncated_importance_sampling_ratio is not None:
+        actor_importance_weights = torch.clamp(
+            actor_importance_weights,
+            max=self.truncated_importance_sampling_ratio,
+        )
     importance_weights_to_use = actor_importance_weights
 else:
```

This would require removing the truncation from lines 297-302.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- examples/configs/grpo_math_1B.yaml (1 hunks)
- examples/configs/vlm_grpo_3B.yaml (1 hunks)
- examples/configs/vlm_grpo_3B_megatron.yaml (1 hunks)
- nemo_rl/algorithms/loss_functions.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Follow the Google Python Style Guide for all Python code
Target Python 3.12+ for all Python code in NeMo-RL
Indent Python code with 4 spaces; do not use tabs
Python filenames should be snake_case (e.g., some_file.py)
Class names should be PascalCase
Function and method names should be snake_case
Local variable names should be snake_case; if starting with a number, prefix with k (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE and prefixed with G_ (e.g., G_MY_GLOBAL)
Constants should be UPPER_SNAKE_CASE
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
For public interfaces used outside a file, prefer docstrings over comments
Use comments mainly for code within a function or interfaces local to a file
Commented-out code must include a nearby comment explaining usage and why it is commented out; otherwise remove before merging
Use Google-style docstrings for classes and functions (Sphinx-parseable)
Avoid using reflection when functionality can be easily achieved without it
Limit except clauses to the smallest specific set of exceptions possible
For duck-typing via try/except, keep the try body minimal and use else for main logic
Add the NVIDIA copyright header (with current year) at the top of all Python files, excluding tests/ and test-only scripts
Files:
nemo_rl/algorithms/loss_functions.py
nemo_rl/**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
nemo_rl/**/*.py: Do not set non-None configuration defaults in code; YAML is the single source of truth for defaults
Access required config attributes directly (e.g., policy_cfg["precision"]) and assume presence; do not introduce hidden defaults
Express configuration optionality via TypedDict using typing.NotRequired
When adding a new config key to a TypedDict subclass, document the key’s purpose, valid values/types, and recommended default in code
For any class or function decorated with @ray.remote, add '# pragma: no cover' on the class/def line (and on remote functions)
Files:
nemo_rl/algorithms/loss_functions.py
examples/configs/*.yaml
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
examples/configs/*.yaml: Exemplar configs under examples/configs/.yaml must include documented defaults
When adding a new config key, reflect its recommended default in exemplar YAMLs under examples/configs/.yaml
Files:
- examples/configs/grpo_math_1B.yaml
- examples/configs/vlm_grpo_3B.yaml
- examples/configs/vlm_grpo_3B_megatron.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Lint check
- GitHub Check: Post automodel integration comment / Comment on PR
- GitHub Check: Post submodule check comment / Comment on PR
🔇 Additional comments (1)
nemo_rl/algorithms/loss_functions.py (1)
132-141: LGTM!The validation logic is thorough and correctly enforces the constraints:
- Only usable when importance sampling correction is enabled
- Only for token-level (not sequence-level) importance sampling
- Must be positive if provided
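Those three constraints could be expressed as a standalone checker roughly like this. It is a sketch only: the key names `use_importance_sampling_correction` and `use_sequence_level_importance_ratios` are assumptions, and the real checks live in `ClippedPGLossFn`'s initializer:

```python
def validate_tis_config(cfg: dict) -> None:
    """Validate truncated_importance_sampling_ratio against the rules above.

    Sketch; key names are illustrative, not NeMo-RL's actual schema.
    """
    ratio = cfg.get("truncated_importance_sampling_ratio")
    if ratio is None:
        return  # field is optional; absent means no truncation
    if not cfg.get("use_importance_sampling_correction", False):
        raise ValueError(
            "truncated_importance_sampling_ratio requires importance "
            "sampling correction to be enabled"
        )
    if cfg.get("use_sequence_level_importance_ratios", False):
        raise ValueError(
            "truncated_importance_sampling_ratio only supports "
            "token-level importance sampling"
        )
    if ratio <= 0:
        raise ValueError("truncated_importance_sampling_ratio must be > 0")
```

Failing fast at construction time like this keeps an invalid ratio from silently having no effect during training.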
@yuki-97 did you see any benefits of truncated importance sampling? I tried this in a bunch of experiments and didn't see any benefits.
@parthchadha yea, research team mentioned TIS has a significant improvement in their setting. this is their train reward curve, val curve has the same trend, grey is w/ TIS and blue is w/o.
Signed-off-by: Yuki Huang <yukih@nvidia.com>
@terrykong I tried some of our existing configs, but unfortunately also didn't see any benefits. Also FYI, the research team's setting uses DAPO, multiple training datasets, and some other tricks; not sure which one is the key to gaining the benefit of TIS.
Signed-off-by: Yuki Huang <yukih@nvidia.com> Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>
Signed-off-by: Yuki Huang <yukih@nvidia.com> Signed-off-by: Lawrence Lane <llane@nvidia.com>
Signed-off-by: Yuki Huang <yukih@nvidia.com> Signed-off-by: yuanhangs <yuanhangs@nvidia.com>

As title.