fix: Set validation accuracy to mean of rewards to handle non-[0,1] reward (#1619)

Conversation
📝 Walkthrough

The validate function in the GRPO algorithm was modified to calculate accuracy by computing the mean of the rewards tensor instead of performing a binary equality check against 1.0, eliminating the reliance on an unscaled binary reward assumption.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Actionable comments posted: 0
🧹 Nitpick comments (1)
nemo_rl/algorithms/grpo.py (1)
1770-1770: Good fix for non-[0,1] rewards, but consider renaming the metric for clarity.

The change correctly computes the mean of rewards instead of a binary check, which properly handles rewards outside the [0,1] range. However, the metric name "accuracy" can be misleading when rewards are arbitrary values (e.g., negative or >1). Users might be confused seeing "Accuracy: 5.2" or "Accuracy: -0.3" in logs.
Note that the code already stores this internally as "val_reward" (lines 1490, 2451), suggesting awareness that it's a reward metric rather than a true accuracy measure.
Consider renaming for consistency and clarity:
```diff
-        accuracy = rewards_t.mean().item()
+        mean_reward = rewards_t.mean().item()
     else:
-        accuracy = 0.0
+        mean_reward = 0.0
     avg_length = (
         sum(total_lengths) / len(total_lengths) if len(total_lengths) > 0 else 0.0
     )
     val_metrics = {
-        "accuracy": accuracy,
+        "mean_reward": mean_reward,
         "avg_length": avg_length,
         **additional_metrics_to_report,
     }
```

You would also need to update:

- Lines 1490 and 2451: Change from `val_metrics["accuracy"]` to `val_metrics["mean_reward"]`
- Line 1805: Update the print statement to reflect the new metric name
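The generalization is easy to verify in isolation: when rewards are binary {0,1}, the mean of the rewards equals the old "fraction equal to 1.0" accuracy, and for arbitrary scaled rewards it is simply the mean reward. The sketch below assumes a plain list of floats rather than the tensor used in `grpo.py`, and the function name `validation_metric` is hypothetical:

```python
def validation_metric(rewards: list[float]) -> float:
    """Mean of rewards; reduces to classic accuracy when rewards are binary 0/1.

    Mirrors the patched behavior: an empty reward list yields 0.0,
    matching the `else: mean_reward = 0.0` branch in the diff above.
    """
    if not rewards:
        return 0.0
    return sum(rewards) / len(rewards)
```

For binary rewards `[1.0, 0.0, 1.0, 1.0]` this gives 0.75, identical to the old equality-check accuracy; for scaled rewards like `[2.0, -1.0, 5.0]` it reports the mean reward (2.0) instead of a meaningless 0.0 from the binary check.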
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
nemo_rl/algorithms/grpo.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Conform code to Python 3.12+
Indent code with 4 spaces. Do not use tabs
Use snake_case for file names
Use PascalCase for class names
Use snake_case for function and method names
Use snake_case for local variables
Prefix variable names that start with a number with 'k' (e.g., k_99th_percentile)
Use upper snake_case with 'G' prefix for global variables (e.g., G_MY_GLOBAL)
Use upper snake_case for constants
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
Prefer docstrings over comments for interfaces that may be used outside a file
Reserve comments for code within a function or interfaces that are local to a file
If a piece of code is commented out, include a comment describing its usage and why it's commented out. Remove debug comments before merging
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx
Avoid using reflection when functionality can be easily achieved without reflection
When using try-except blocks, limit the except clause to the smallest set of specific errors possible
When using try-except blocks for duck-typing, keep the body of the try as small as possible and use the else block for logic
YAML is the single source of truth for configuration defaults. Do not set non-None defaults in code for configuration values
For required configuration attributes, access config directly and expect presence (e.g., policy_cfg['precision']) without hidden defaults
Use typing.NotRequired to mark optional attributes in TypedDict for configuration
When adding a new config key to a TypedDict subclass, document the key's purpose, valid values/types, and recommended default, and reflect the default in exemplar YAMLs under examples/configs/*.yaml
Follow the Google Python Style Guide for Python code
Files:
nemo_rl/algorithms/grpo.py
nemo_rl/**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
For any source file under nemo_rl/*.py that defines a class or function decorated with @ray.remote, add a coverage pragma (# pragma: no cover) because these run in separate Ray processes
Files:
nemo_rl/algorithms/grpo.py
!(**/tests/**|**/test_*.py|**/test_*.sh)
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year
Files:
nemo_rl/algorithms/grpo.py
**/*.{py,sh}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
The NVIDIA copyright header should appear at the top of all Python files and shell scripts (excluding tests)
Files:
nemo_rl/algorithms/grpo.py
🧠 Learnings (1)
📓 Common learnings
Learnt from: zpqiu
Repo: NVIDIA-NeMo/RL PR: 1006
File: examples/configs/recipes/llm/distillation-qwen3-32b-to-8b-base-2n8g-fsdp2tp2.v1.yaml:19-26
Timestamp: 2025-09-18T13:26:43.307Z
Learning: In on-policy distillation workflows, validation can use downstream task performance (like math problem solving) as RL-like reward metrics rather than traditional distillation metrics like KL divergence. In this case, "val_reward" with "higher_is_better: true" is the correct checkpoint monitoring configuration.
🧬 Code graph analysis (1)
nemo_rl/algorithms/grpo.py (1)
tests/check_metrics.py (1)
mean(52-97)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Lint check
- GitHub Check: Post submodule check comment / Comment on PR
- GitHub Check: Post automodel integration comment / Comment on PR
terrykong left a comment:

thanks for the generalized fix
@alexandery-nvidia could you DCO-sign your commit?
…ewards Signed-off-by: alexandery <alexandery@nvidia.com>

Force-pushed from 2b4e014 to 86df23e
restarting CI, some runner troubles
…eward (#1619) Signed-off-by: alexandery <alexandery@nvidia.com> Signed-off-by: Brian Yu <bxyu@nvidia.com>
What does this PR do?
Add a one line overview of what this PR aims to accomplish.
Issues
List issues that this PR closes (syntax):
Usage
# Add a code snippet demonstrating how to use this

Before your PR is "Ready for review"
Pre checks:
Additional Information