
fix: Adding mean total tokens per sample to the output log#1406

Merged
terrykong merged 1 commit into NVIDIA-NeMo:main from youngeunkwon0405:add-avg-token-length on Oct 28, 2025

Conversation

Contributor

@youngeunkwon0405 youngeunkwon0405 commented Oct 22, 2025

What does this PR do ?

  1. Fix an incorrect value in the wandb log

  2. Print the average sequence length directly to the output log


Issues

List issues that this PR closes:

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this

Before your PR is "Ready for review"

Pre checks:

  • Make sure you have read and followed the Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...

Summary by CodeRabbit

  • Bug Fixes
    • Corrected metric calculation to properly average prompt length metrics alongside other metrics.
    • Enhanced performance metrics reporting to display mean total tokens per sample.

Signed-off-by: Youngeun Kwon <youngeunk@nvidia.com>
@youngeunkwon0405 youngeunkwon0405 self-assigned this Oct 22, 2025
@youngeunkwon0405 youngeunkwon0405 requested a review from a team as a code owner October 22, 2025 05:48
@youngeunkwon0405 youngeunkwon0405 changed the title fix: Enhance the output log fix: Adding mean total tokens per sample to the output log Oct 22, 2025
@coderabbitai
Contributor

coderabbitai bot commented Oct 22, 2025

📝 Walkthrough

Walkthrough

Extended metric tracking to include "mean_prompt_length" in GRPO training, ensuring it's averaged like other metrics. Added printing of "Mean Total Tokens per Sample" in performance metrics display when the metric exists.

Changes

  • Metric tracking — nemo_rl/algorithms/grpo.py: extended the metric keys set in async_grpo_train to include "mean_prompt_length" so it is properly averaged.
  • Performance metrics display — nemo_rl/algorithms/utils.py: added conditional printing of "Mean Total Tokens per Sample" in print_performance_metrics, after the worker load visualization.
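The aggregation pattern that both files touch can be sketched as a minimal metric-reduction flow. This is an illustration, not the actual NeMo-RL code: the `aggregate` helper and the second key in `MEAN_KEYS` are assumptions; only `"mean_prompt_length"` comes from the PR.

```python
# Sketch of the aggregation pattern the PR touches (names are illustrative).
# Per-batch metrics are collected into dicts; keys listed in MEAN_KEYS are
# averaged across batches, while everything else is summed.

MEAN_KEYS = {"mean_prompt_length", "mean_total_tokens_per_sample"}  # the PR adds a key to such a set

def aggregate(batch_metrics: list[dict[str, float]]) -> dict[str, float]:
    out: dict[str, float] = {}
    for key in batch_metrics[0]:
        values = [m[key] for m in batch_metrics]
        if key in MEAN_KEYS:
            out[key] = sum(values) / len(values)  # per-sample means get averaged
        else:
            out[key] = sum(values)  # counts and totals get summed
    return out

batches = [
    {"mean_prompt_length": 100.0, "num_samples": 8},
    {"mean_prompt_length": 120.0, "num_samples": 8},
]
agg = aggregate(batches)
print(agg)  # mean_prompt_length averaged to 110.0, num_samples summed to 16
```

Without "mean_prompt_length" in the mean set, the fallback branch would sum the per-batch means, which is the wandb logging bug the PR fixes.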

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Suggested reviewers

  • parthchadha

Pre-merge checks and finishing touches

✅ Passed checks (4 passed)
  • Description Check — ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed. The title "fix: Adding mean total tokens per sample to the output log" directly reflects the main change: adding "Mean Total Tokens per Sample" printing to the output log in utils.py, with a supporting averaging fix in grpo.py. It is specific and uses conventional commit formatting ("fix:"), so a teammate scanning the commit history would immediately understand what the PR adds.
  • Docstring Coverage — ✅ Passed. Docstring coverage is 100.00%, above the required 80.00% threshold.
  • Test Results For Major Changes — ✅ Passed. The PR contains only minor changes: adding "mean_prompt_length" to the set of averaged metrics (1 line) and a print statement for "Mean Total Tokens per Sample" (5 lines). These do not modify algorithm logic, numerics, convergence, or performance, so test results documentation is not required for this change.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f2de476 and 08e9a84.

📒 Files selected for processing (2)
  • nemo_rl/algorithms/grpo.py (1 hunks)
  • nemo_rl/algorithms/utils.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Follow the Google Python Style Guide for all Python code
Target Python 3.12+ for all Python code in NeMo-RL
Indent Python code with 4 spaces; do not use tabs
Python filenames should be snake_case (e.g., some_file.py)
Class names should be PascalCase
Function and method names should be snake_case
Local variable names should be snake_case; if starting with a number, prefix with k (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE and prefixed with G_ (e.g., G_MY_GLOBAL)
Constants should be UPPER_SNAKE_CASE
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
For public interfaces used outside a file, prefer docstrings over comments
Use comments mainly for code within a function or interfaces local to a file
Commented-out code must include a nearby comment explaining usage and why it is commented out; otherwise remove before merging
Use Google-style docstrings for classes and functions (Sphinx-parseable)
Avoid using reflection when functionality can be easily achieved without it
Limit except clauses to the smallest specific set of exceptions possible
For duck-typing via try/except, keep the try body minimal and use else for main logic
Add the NVIDIA copyright header (with current year) at the top of all Python files, excluding tests/ and test-only scripts

Files:

  • nemo_rl/algorithms/utils.py
  • nemo_rl/algorithms/grpo.py
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

nemo_rl/**/*.py: Do not set non-None configuration defaults in code; YAML is the single source of truth for defaults
Access required config attributes directly (e.g., policy_cfg["precision"]) and assume presence; do not introduce hidden defaults
Express configuration optionality via TypedDict using typing.NotRequired
When adding a new config key to a TypedDict subclass, document the key’s purpose, valid values/types, and recommended default in code
For any class or function decorated with @ray.remote, add '# pragma: no cover' on the class/def line (and on remote functions)

Files:

  • nemo_rl/algorithms/utils.py
  • nemo_rl/algorithms/grpo.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Lint check
  • GitHub Check: Lint check
  • GitHub Check: Post automodel integration comment / Comment on PR
  • GitHub Check: Post submodule check comment / Comment on PR
🔇 Additional comments (2)
nemo_rl/algorithms/utils.py (1)

404-407: LGTM! Clean addition of metric printing.

The conditional printing of "Mean Total Tokens per Sample" follows the established pattern in this function and is properly guarded against missing metrics. The formatting and placement are appropriate.
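The guarded printing the review describes follows a common pattern: check for the key and print only when it is present, so runs that never recorded the metric are unaffected. The snippet below is a hypothetical sketch of that pattern, not the actual print_performance_metrics code; the key name and formatting are assumptions:

```python
# Hypothetical sketch of printing a metric only when it exists,
# mirroring the guarded pattern described in the review.
metrics = {"mean_total_tokens_per_sample": 1234.5}

if "mean_total_tokens_per_sample" in metrics:
    # Guarded access: this line is skipped entirely for runs without the metric.
    print(f"  Mean Total Tokens per Sample: {metrics['mean_total_tokens_per_sample']:.1f}")
```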

nemo_rl/algorithms/grpo.py (1)

1980-1980: Good catch! This fix ensures correct metric aggregation.

Adding "mean_prompt_length" to the averaging set fixes a bug where this metric would have been summed instead of averaged in async GRPO mode. This brings the async implementation in line with the sync version (line 1146) and ensures the correct mean value is logged to wandb.
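A quick worked example with made-up numbers shows why summing a per-batch mean produces the wrong logged value:

```python
# Two batches whose mean prompt lengths are 100 and 120 tokens (illustrative).
batch_means = [100.0, 120.0]

wrong = sum(batch_means)                     # 220.0 — what a sum-based reducer would log
right = sum(batch_means) / len(batch_means)  # 110.0 — the intended mean
print(wrong, right)
```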



@youngeunkwon0405
Contributor Author

@terrykong, can we merge this PR please?

@terrykong terrykong enabled auto-merge (squash) October 28, 2025 06:49
@terrykong terrykong merged commit b3aac89 into NVIDIA-NeMo:main Oct 28, 2025
57 of 60 checks passed
chtruong814 pushed a commit that referenced this pull request Oct 28, 2025
Signed-off-by: Youngeun Kwon <youngeunk@nvidia.com>
Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>
lbliii pushed a commit that referenced this pull request Nov 3, 2025
Signed-off-by: Youngeun Kwon <youngeunk@nvidia.com>
Signed-off-by: Lawrence Lane <llane@nvidia.com>
PrinsYin pushed a commit to PrinsYin/RL that referenced this pull request Nov 30, 2025
…Mo#1406)

Signed-off-by: Youngeun Kwon <youngeunk@nvidia.com>
yuanhangsu1986 pushed a commit to yuanhangsu1986/RL-Nemontron-Edge-Omni that referenced this pull request Feb 21, 2026
…Mo#1406)

Signed-off-by: Youngeun Kwon <youngeunk@nvidia.com>
Signed-off-by: yuanhangs <yuanhangs@nvidia.com>

Labels

CI:L0 Run doctests and unit tests r0.4.0


2 participants