
fix: Set validation accuracy to mean of rewards to handle non-[0,1] reward#1619

Merged
terrykong merged 1 commit into main from alexandery/fix-validation-accuracy on Dec 11, 2025

Conversation

@alexandery-nvidia (Contributor) commented Dec 9, 2025

…ewards

What does this PR do ?

Add a one line overview of what this PR aims to accomplish.

Issues

List issues that this PR closes (syntax):

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...

Summary by CodeRabbit

  • Bug Fixes
    • Improved accuracy calculation in algorithm validation to use reward averaging instead of binary equality checks for more precise performance measurement.


@alexandery-nvidia alexandery-nvidia requested a review from a team as a code owner December 9, 2025 22:43
@coderabbitai bot (Contributor) commented Dec 9, 2025

📝 Walkthrough

The validate function in the GRPO algorithm was modified to calculate accuracy by computing the mean of the rewards tensor instead of performing a binary equality check against 1.0, eliminating reliance on an unscaled binary reward assumption.
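The before/after behavior described in the walkthrough can be illustrated with a minimal sketch. This is not the code from nemo_rl/algorithms/grpo.py (which operates on a torch rewards tensor); the reward values and variable names here are illustrative only.

```python
# Example per-sample validation rewards; note the non-binary 0.5 and 2.0,
# which the old metric could not account for.
rewards = [0.0, 1.0, 1.0, 0.5, 2.0]

# Before: assumed unscaled binary {0, 1} rewards and counted exact
# matches against 1.0, so scaled or continuous rewards were ignored.
accuracy_binary = sum(1.0 for r in rewards if r == 1.0) / len(rewards)

# After: mean of the rewards, which degrades gracefully to the old
# accuracy when rewards are binary and stays meaningful otherwise.
accuracy_mean = sum(rewards) / len(rewards)

print(accuracy_binary)  # 0.4  (only two samples hit exactly 1.0)
print(accuracy_mean)    # 0.9  (the 0.5 and 2.0 samples now contribute)
```

For a strictly binary {0, 1} reward the two formulas coincide, which is why the change is backward-compatible for existing [0,1] reward setups.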

Changes

Cohort / File(s): GRPO Validation Logic (nemo_rl/algorithms/grpo.py)
Change Summary: Modified the accuracy calculation in the validate function from a binary equality check against 1.0 to the mean of the rewards tensor.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • Verify the mean calculation correctly replaces the previous binary check logic
  • Confirm the change aligns with the intended reward structure for the validation step
  • Check for any downstream dependencies on the previous accuracy computation method

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Test Results For Major Changes: ⚠️ Warning. The PR modifies the core validation accuracy metric from binary equality to a mean-of-rewards calculation, a significant change affecting convergence tracking. However, the PR description lacks test results, regression analysis, or evidence that existing unit tests pass. Resolution: add confirmation that unit tests pass, validation metrics demonstrating the fix works with both [0,1]-constrained and non-[0,1] rewards, a before-and-after convergence analysis, and functional test results confirming no regressions.
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping the check.
  • Title Check: ✅ Passed. The title clearly and specifically describes the main change: modifying the validation accuracy calculation to use the mean of rewards instead of binary checks to support non-[0,1] reward ranges.


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
nemo_rl/algorithms/grpo.py (1)

1770-1770: Good fix for non-[0,1] rewards, but consider renaming the metric for clarity.

The change correctly computes the mean of rewards instead of a binary check, which properly handles rewards outside the [0,1] range. However, the metric name "accuracy" can be misleading when rewards are arbitrary values (e.g., negative or >1). Users might be confused seeing "Accuracy: 5.2" or "Accuracy: -0.3" in logs.

Note that the code already stores this internally as "val_reward" (lines 1490, 2451), suggesting awareness that it's a reward metric rather than a true accuracy measure.

Consider renaming for consistency and clarity:

-            accuracy = rewards_t.mean().item()
+            mean_reward = rewards_t.mean().item()
         else:
-            accuracy = 0.0
+            mean_reward = 0.0
 
         avg_length = (
             sum(total_lengths) / len(total_lengths) if len(total_lengths) > 0 else 0.0
         )
 
         val_metrics = {
-            "accuracy": accuracy,
+            "mean_reward": mean_reward,
             "avg_length": avg_length,
             **additional_metrics_to_report,
         }

You would also need to update:

  • Line 1490 and 2451: Change from val_metrics["accuracy"] to val_metrics["mean_reward"]
  • Line 1805: Update the print statement to reflect the new metric name
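The reviewer's naming concern can be made concrete with a small, purely illustrative sketch. The dict keys mirror the val_metrics keys discussed above, but the reward values and the avg_length placeholder are hypothetical, not taken from grpo.py.

```python
# With arbitrary-range rewards, the mean can be negative or exceed 1,
# so labeling it "accuracy" in logs is misleading.
rewards = [-0.3, 5.2, 1.0]
mean_reward = sum(rewards) / len(rewards) if rewards else 0.0

val_metrics = {
    "mean_reward": mean_reward,  # suggested rename; previously "accuracy"
    "avg_length": 128.0,         # placeholder value for illustration
}

# A log line like 'Accuracy: 1.97' would confuse users; 'mean_reward'
# states exactly what was computed.
print(f"mean_reward: {val_metrics['mean_reward']:.2f}")
```

The empty-rewards fallback to 0.0 matches the else branch shown in the suggested diff.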
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 64ab08d and 2b4e014.

📒 Files selected for processing (1)
  • nemo_rl/algorithms/grpo.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Conform code to Python 3.12+
Indent code with 4 spaces. Do not use tabs
Use snake_case for file names
Use PascalCase for class names
Use snake_case for function and method names
Use snake_case for local variables
Prefix variable names that start with a number with 'k' (e.g., k_99th_percentile)
Use upper snake_case with 'G' prefix for global variables (e.g., G_MY_GLOBAL)
Use upper snake_case for constants
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
Prefer docstrings over comments for interfaces that may be used outside a file
Reserve comments for code within a function or interfaces that are local to a file
If a piece of code is commented out, include a comment describing its usage and why it's commented out. Remove debug comments before merging
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx
Avoid using reflection when functionality can be easily achieved without reflection
When using try-except blocks, limit the except clause to the smallest set of specific errors possible
When using try-except blocks for duck-typing, keep the body of the try as small as possible and use the else block for logic
YAML is the single source of truth for configuration defaults. Do not set non-None defaults in code for configuration values
For required configuration attributes, access config directly and expect presence (e.g., policy_cfg['precision']) without hidden defaults
Use typing.NotRequired to mark optional attributes in TypedDict for configuration
When adding a new config key to a TypedDict subclass, document the key's purpose, valid values/types, and recommended default, and reflect the default in exemplar YAMLs under examples/configs/*.yaml
Follow the Google Python Style Guide for Python code

Files:

  • nemo_rl/algorithms/grpo.py
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

For any source file under nemo_rl/*.py that defines a class or function decorated with @ray.remote, add a coverage pragma (# pragma: no cover) because these run in separate Ray processes

Files:

  • nemo_rl/algorithms/grpo.py
!(**/tests/**|**/test_*.py|**/test_*.sh)

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year

Files:

  • nemo_rl/algorithms/grpo.py
**/*.{py,sh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

The NVIDIA copyright header should appear at the top of all Python files and shell scripts (excluding tests)

Files:

  • nemo_rl/algorithms/grpo.py
🧠 Learnings (1)
📓 Common learnings
Learnt from: zpqiu
Repo: NVIDIA-NeMo/RL PR: 1006
File: examples/configs/recipes/llm/distillation-qwen3-32b-to-8b-base-2n8g-fsdp2tp2.v1.yaml:19-26
Timestamp: 2025-09-18T13:26:43.307Z
Learning: In on-policy distillation workflows, validation can use downstream task performance (like math problem solving) as RL-like reward metrics rather than traditional distillation metrics like KL divergence. In this case, "val_reward" with "higher_is_better: true" is the correct checkpoint monitoring configuration.
🧬 Code graph analysis (1)
nemo_rl/algorithms/grpo.py (1)
tests/check_metrics.py (1)
  • mean (52-97)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Lint check
  • GitHub Check: Post submodule check comment / Comment on PR
  • GitHub Check: Post automodel integration comment / Comment on PR

@terrykong (Collaborator) left a comment

Thanks for the generalized fix.

@terrykong (Collaborator) commented:

@alexandery-nvidia could you DCO-sign your commit?

…ewards

Signed-off-by: alexandery <alexandery@nvidia.com>
@alexandery-nvidia alexandery-nvidia force-pushed the alexandery/fix-validation-accuracy branch from 2b4e014 to 86df23e on December 9, 2025 23:33
@terrykong terrykong changed the title from "fix: Set validation accuracy to mean of rewards to handle non-[0,1] r…" to "fix: Set validation accuracy to mean of rewards to handle non-[0,1] reward" on Dec 10, 2025
@terrykong terrykong added the CI:L1 Run doctests, unit tests, and functional tests label Dec 10, 2025
@terrykong (Collaborator) commented:

Restarting CI, some runner troubles.

@terrykong terrykong merged commit e3cfb11 into main Dec 11, 2025
97 of 108 checks passed
@terrykong terrykong deleted the alexandery/fix-validation-accuracy branch December 11, 2025 08:28
bxyu-nvidia pushed a commit that referenced this pull request Dec 16, 2025
…eward (#1619)

Signed-off-by: alexandery <alexandery@nvidia.com>
Signed-off-by: Brian Yu <bxyu@nvidia.com>
DeL-TaiseiOzaki pushed a commit to DeL-TaiseiOzaki/RL that referenced this pull request Jan 8, 2026
…eward (NVIDIA-NeMo#1619)

Signed-off-by: alexandery <alexandery@nvidia.com>
yuanhangsu1986 pushed a commit to yuanhangsu1986/RL-Nemontron-Edge-Omni that referenced this pull request Feb 12, 2026
…eward (NVIDIA-NeMo#1619)

Signed-off-by: alexandery <alexandery@nvidia.com>
Signed-off-by: yuanhangs <yuanhangs@nvidia.com>
yuanhangsu1986 pushed a commit to yuanhangsu1986/RL-Nemontron-Edge-Omni that referenced this pull request Feb 21, 2026
…eward (NVIDIA-NeMo#1619)

Signed-off-by: alexandery <alexandery@nvidia.com>
Signed-off-by: yuanhangs <yuanhangs@nvidia.com>
seonjinn pushed a commit that referenced this pull request Mar 8, 2026
…eward (#1619)

Signed-off-by: alexandery <alexandery@nvidia.com>
seonjinn pushed a commit that referenced this pull request Mar 8, 2026
…eward (#1619)

Signed-off-by: alexandery <alexandery@nvidia.com>
seonjinn pushed a commit that referenced this pull request Mar 9, 2026
…eward (#1619)

Signed-off-by: alexandery <alexandery@nvidia.com>

Labels

CI:L1 Run doctests, unit tests, and functional tests

3 participants