
feat: Necessary changes for Gym GRPO tutorial #1630

Merged
terrykong merged 57 commits into main from bxyu/gym-grpo-tutorial on Dec 17, 2025

Conversation

@bxyu-nvidia
Contributor

@bxyu-nvidia bxyu-nvidia commented Dec 12, 2025

What does this PR do ?

Add a one line overview of what this PR aims to accomplish.

Issues

List issues that this PR closes (syntax):

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this

Before your PR is "Ready for review"

Pre checks:

  • Make sure you have read and followed the Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...

Summary by CodeRabbit

  • New Features

    • Added comprehensive GRPO (Group Relative Policy Optimization) training configuration for Nemotron Nano v2 with multi-node support.
    • Added tool parser plugin support for vLLM generation backend.
  • Chores

    • Updated dependency caching configuration.


@bxyu-nvidia bxyu-nvidia changed the base branch from yifu/gym_sort to main December 14, 2025 22:43
@bxyu-nvidia bxyu-nvidia requested review from terrykong and yfw December 14, 2025 22:46
Collaborator

@terrykong terrykong left a comment


@yfw to review

@terrykong terrykong added the CI:L1 Run doctests, unit tests, and functional tests label Dec 14, 2025
@github-actions

✅ Submodule Fast-Forward Check Results

Check based on commit: 3ead587 (PR #1630 from bxyu/gym-grpo-tutorial)

✅ Submodules that are properly updated:

Gym: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

@github-actions

✅ Submodule Fast-Forward Check Results

Check based on commit: da1dc2d (PR #1630 from bxyu/gym-grpo-tutorial)

✅ Submodules that are properly updated:

Gym: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

@github-actions

✅ Submodule Fast-Forward Check Results

Check based on commit: cc5dee6 (PR #1630 from bxyu/gym-grpo-tutorial)

✅ Submodules that are properly updated:

Gym: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

@github-actions

✅ Submodule Fast-Forward Check Results

Check based on commit: e23e273 (PR #1630 from bxyu/gym-grpo-tutorial)

✅ Submodules that are properly updated:

Gym: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

@github-actions

✅ Submodule Fast-Forward Check Results

Check based on commit: 22c6731 (PR #1630 from bxyu/gym-grpo-tutorial)

✅ Submodules that are properly updated:

Gym: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

@github-actions

✅ Submodule Fast-Forward Check Results

Check based on commit: 6673126 (PR #1630 from bxyu/gym-grpo-tutorial)

✅ Submodules that are properly updated:

Gym: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

@github-actions

✅ Submodule Fast-Forward Check Results

Check based on commit: 2d19b10 (PR #1630 from bxyu/gym-grpo-tutorial)

✅ Submodules that are properly updated:

Gym: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

@yfw yfw changed the title from "Necessary changes for Gym GRPO tutorial" to "feat: Necessary changes for Gym GRPO tutorial" Dec 15, 2025
@yfw yfw marked this pull request as ready for review December 15, 2025 13:54
@yfw yfw requested review from a team as code owners December 15, 2025 13:54
yfw
yfw previously approved these changes Dec 15, 2025
@coderabbitai
Contributor

coderabbitai bot commented Dec 15, 2025

📝 Walkthrough

Walkthrough

Updates Gym submodule pointer, adds "datasets" to cached dependencies, introduces comprehensive GRPO configuration for Nemotron Nano v2, provides multi-node training launch script, extends vLLM config with tool parser plugin support, and enhances vllm_worker_async with improved error handling, tool parser plugin loading, and refined logging.

Changes

  • Submodule & Dependencies (3rdparty/Gym-workspace/Gym, 3rdparty/Gym-workspace/setup.py): submodule pointer updated to the latest commit; "datasets" added to the CACHED_DEPENDENCIES list.
  • NeMo-Gym Configuration (examples/nemo_gym/grpo_workplace_assistant_nemotron_nano_v2_9b.yaml): new comprehensive GRPO training configuration for Nemotron Nano v2, defining hyperparameters, model/policy settings, dtensor/megatron distributed training, optimizer/scheduler, data paths, NeMo-Gym environment integration, and logging infrastructure.
  • Infrastructure & Deployment (examples/nemo_gym/launch_nemo_gym_multinode_training.sh): new shell script for orchestrating multi-node NeMo-Gym training via SLURM, with environment variable setup, resource configuration, and sbatch submission.
  • vLLM Generation Integration (nemo_rl/models/generation/vllm/config.py): added optional tool_parser_plugin field (string filepath) to the VllmSpecificArgs TypedDict for registering custom vLLM tool parsers.
  • vLLM Worker Enhancement (nemo_rl/models/generation/vllm/vllm_worker_async.py): added conditional tool parser plugin import in _setup_vllm_openai_api_server; enhanced _replace_prefix_tokens assertions with detailed error context; refactored the logging filter for cleaner output; removed the model_config parameter from the OpenAIServingModels constructor call.
  • Unit Tests (tests/unit/models/generation/test_vllm_generation.py): added a decode method to the mock tokenizer class within the test fixture.
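The conditional plugin loading added to vllm_worker_async.py follows a common pattern: attempt to register the plugin only when the optional config key is present. A minimal sketch of that pattern, where maybe_load_tool_parser and load_plugin are illustrative names (the actual code reportedly calls vLLM's ToolParserManager.import_tool_parser):

```python
def maybe_load_tool_parser(vllm_cfg: dict, load_plugin) -> bool:
    """Load a tool parser plugin only if the optional key is configured.

    Args:
        vllm_cfg: vLLM-specific args; may contain "tool_parser_plugin".
        load_plugin: Callable that registers the plugin from a filepath
            (stands in for ToolParserManager.import_tool_parser here).

    Returns:
        True if a plugin was loaded, False otherwise.
    """
    plugin_path = vllm_cfg.get("tool_parser_plugin")
    if plugin_path:
        load_plugin(plugin_path)
        return True
    return False
```

Because the key is optional (NotRequired in the TypedDict), workers that do not configure a plugin skip the import entirely.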

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Areas requiring extra attention:

  • nemo_rl/models/generation/vllm/vllm_worker_async.py: constructor signature change to OpenAIServingModels (removal of the model_config parameter), conditional tool parser plugin import logic, and logging filter refactoring; verify compatibility with the vLLM API and confirm that the removed parameter no longer exists or is now optional
  • examples/nemo_gym/grpo_workplace_assistant_nemotron_nano_v2_9b.yaml: Validate YAML structure, nested configuration fields (dtensor_cfg, megatron_cfg, optimizer, scheduler), and ensure all required paths/model references are consistent
  • examples/nemo_gym/launch_nemo_gym_multinode_training.sh: Shell script correctness for SLURM submission, environment variable handling, and ray.sub launcher invocation
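The command-construction shape of such a launch script can be sketched as follows. This is a hedged illustration only: EXP_NAME, NUM_ACTOR_NODES, and the final sbatch/ray.sub submission step are placeholders, not the actual contents of launch_nemo_gym_multinode_training.sh.

```shell
#!/bin/sh
# Build the training command from environment variables (with defaults),
# then hand it to a SLURM launcher. Here we only print the command.
EXP_NAME="${EXP_NAME:-gym-grpo-demo}"
NUM_ACTOR_NODES="${NUM_ACTOR_NODES:-2}"

COMMAND="uv run python examples/nemo_gym/run_grpo_nemo_gym.py \
    ++cluster.num_nodes=${NUM_ACTOR_NODES} \
    ++logger.log_dir=results/${EXP_NAME} \
    ++checkpointing.checkpoint_dir=results/${EXP_NAME}"

# The real script would submit this via sbatch and a ray.sub launcher.
echo "$COMMAND"
```

Callers override the defaults through the environment, e.g. EXP_NAME=my-run NUM_ACTOR_NODES=4 ./launch.sh.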

Possibly related PRs

Suggested reviewers

  • parthchadha
  • terrykong

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 20.00% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
Test Results For Major Changes ⚠️ Warning PR lacks test results documentation despite containing significant changes to vllm_worker_async.py constructor (breaking API change), new GRPO configuration, and training script. No performance metrics, convergence analysis, or regression information provided. Author must run and document comprehensive unit/functional tests, verify API changes don't break existing functionality, test GRPO configuration with training iteration, and resolve unresolved placeholders and review issues.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'feat: Necessary changes for Gym GRPO tutorial' directly aligns with the PR's main objective of introducing changes needed for a Gym GRPO tutorial, as confirmed by the file summaries and PR objectives.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
nemo_rl/models/generation/vllm/vllm_worker_async.py (1)

318-322: Add required model_config parameter to OpenAIServingModels constructor call.

The model_config parameter is required in vLLM 0.11.0's OpenAIServingModels constructor but is missing from this call. Provide a ModelConfig instance containing model configuration (e.g., max_model_len). Without it, this will raise a TypeError at runtime.

🧹 Nitpick comments (1)
examples/nemo_gym/launch_nemo_gym_multinode_training.sh (1)

8-20: Quote variables to prevent word splitting.

The REPO_LOCATION variable inside the HEREDOC should be quoted for safety. Also, using $@ at line 19 may not work as expected inside a HEREDOC since it's evaluated at HEREDOC construction time, not execution time.

 read -r -d '' COMMAND <<EOF
-cd ${REPO_LOCATION}
+cd "\${REPO_LOCATION}"
 
 HF_HOME=$PWD/.cache/ \
-WANDB_API_KEY=$WANDB_API_KEY \
+WANDB_API_KEY=\$WANDB_API_KEY \
 NRL_FORCE_REBUILD_VENVS=true \
 uv run python examples/nemo_gym/run_grpo_nemo_gym.py \
-    ++cluster.num_nodes=$NUM_ACTOR_NODES \
-    ++logger.wandb.name=$EXP_NAME \
-    ++logger.log_dir=results/$EXP_NAME \
-    ++checkpointing.checkpoint_dir=results/$EXP_NAME \
+    ++cluster.num_nodes=\$NUM_ACTOR_NODES \
+    ++logger.wandb.name=\$EXP_NAME \
+    ++logger.log_dir=results/\$EXP_NAME \
+    ++checkpointing.checkpoint_dir=results/\$EXP_NAME \
     $@
 EOF

Note: The current approach expands variables at HEREDOC creation time which may be intentional for sbatch. Verify this is the desired behavior.
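The expansion-timing caveat can be seen in isolation. In a heredoc with an unquoted delimiter, $VAR expands when the heredoc is read, while \$VAR survives as a literal dollar sign for later evaluation:

```shell
# $NAME expands at heredoc construction time; \$NAME is deferred.
NAME="alice"
HEREDOC_OUT="$(cat <<EOF
now: $NAME
later: \$NAME
EOF
)"
echo "$HEREDOC_OUT"
```

The first line prints "now: alice" (already expanded); the second prints the literal "later: $NAME", which would only expand when the captured string is itself evaluated later, e.g. by the sbatch job.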

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5d04b36 and 2d19b10.

📒 Files selected for processing (7)
  • 3rdparty/Gym-workspace/Gym (1 hunks)
  • 3rdparty/Gym-workspace/setup.py (1 hunks)
  • examples/nemo_gym/grpo_workplace_assistant_nemotron_nano_v2_9b.yaml (1 hunks)
  • examples/nemo_gym/launch_nemo_gym_multinode_training.sh (1 hunks)
  • nemo_rl/models/generation/vllm/config.py (1 hunks)
  • nemo_rl/models/generation/vllm/vllm_worker_async.py (4 hunks)
  • tests/unit/models/generation/test_vllm_generation.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Conform code to Python 3.12+
Indent code with 4 spaces. Do not use tabs
Use snake_case for file names
Use PascalCase for class names
Use snake_case for function and method names
Use snake_case for local variables
Prefix variable names that start with a number with 'k' (e.g., k_99th_percentile)
Use upper snake_case with 'G' prefix for global variables (e.g., G_MY_GLOBAL)
Use upper snake_case for constants
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
Prefer docstrings over comments for interfaces that may be used outside a file
Reserve comments for code within a function or interfaces that are local to a file
If a piece of code is commented out, include a comment describing its usage and why it's commented out. Remove debug comments before merging
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx
Avoid using reflection when functionality can be easily achieved without reflection
When using try-except blocks, limit the except clause to the smallest set of specific errors possible
When using try-except blocks for duck-typing, keep the body of the try as small as possible and use the else block for logic
YAML is the single source of truth for configuration defaults. Do not set non-None defaults in code for configuration values
For required configuration attributes, access config directly and expect presence (e.g., policy_cfg['precision']) without hidden defaults
Use typing.NotRequired to mark optional attributes in TypedDict for configuration
When adding a new config key to a TypedDict subclass, document the key's purpose, valid values/types, and recommended default, and reflect the default in exemplar YAMLs under examples/configs/*.yaml
Follow the Google Python Style Guide for Python code

Files:

  • nemo_rl/models/generation/vllm/config.py
  • 3rdparty/Gym-workspace/setup.py
  • tests/unit/models/generation/test_vllm_generation.py
  • nemo_rl/models/generation/vllm/vllm_worker_async.py
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

For any source file under nemo_rl/*.py that defines a class or function decorated with @ray.remote, add a coverage pragma (# pragma: no cover) because these run in separate Ray processes

Files:

  • nemo_rl/models/generation/vllm/config.py
  • nemo_rl/models/generation/vllm/vllm_worker_async.py
!(**/tests/**|**/test_*.py|**/test_*.sh)

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year

Files:

  • nemo_rl/models/generation/vllm/config.py
  • 3rdparty/Gym-workspace/Gym
  • examples/nemo_gym/launch_nemo_gym_multinode_training.sh
  • 3rdparty/Gym-workspace/setup.py
  • tests/unit/models/generation/test_vllm_generation.py
  • examples/nemo_gym/grpo_workplace_assistant_nemotron_nano_v2_9b.yaml
  • nemo_rl/models/generation/vllm/vllm_worker_async.py
**/*.{py,sh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

The NVIDIA copyright header should appear at the top of all Python files and shell scripts (excluding tests)

Files:

  • nemo_rl/models/generation/vllm/config.py
  • examples/nemo_gym/launch_nemo_gym_multinode_training.sh
  • 3rdparty/Gym-workspace/setup.py
  • tests/unit/models/generation/test_vllm_generation.py
  • nemo_rl/models/generation/vllm/vllm_worker_async.py
**/*.sh

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.sh: Use uv run instead of python to execute scripts
Follow the Google Shell Style Guide for shell scripts

Files:

  • examples/nemo_gym/launch_nemo_gym_multinode_training.sh
🧠 Learnings (4)
📚 Learning: 2025-09-19T03:00:58.662Z
Learnt from: shuo-nvidia
Repo: NVIDIA-NeMo/RL PR: 1006
File: examples/configs/recipes/llm/distillation-qwen3-32b-to-1.7b-base-1n8g-fsdp2tp1.v1.yaml:85-101
Timestamp: 2025-09-19T03:00:58.662Z
Learning: In distillation and GRPO configurations, max_new_tokens is intentionally set to the full context window (max_total_sequence_length) for consistency across the codebase. Overflow cases when prompt + generation tokens exceed max_model_len are handled by safeguards implemented in vllm_worker.py.

Applied to files:

  • examples/nemo_gym/grpo_workplace_assistant_nemotron_nano_v2_9b.yaml
  • nemo_rl/models/generation/vllm/vllm_worker_async.py
📚 Learning: 2025-09-18T14:57:31.003Z
Learnt from: zpqiu
Repo: NVIDIA-NeMo/RL PR: 1006
File: nemo_rl/algorithms/distillation.py:312-354
Timestamp: 2025-09-18T14:57:31.003Z
Learning: The distillation algorithm's cluster setup logic is designed to follow the same patterns used in GRPO for handling distributed training clusters and resource allocation.

Applied to files:

  • examples/nemo_gym/grpo_workplace_assistant_nemotron_nano_v2_9b.yaml
📚 Learning: 2025-09-10T05:34:35.406Z
Learnt from: bxyu-nvidia
Repo: NVIDIA-NeMo/RL PR: 1110
File: nemo_rl/models/generation/vllm/vllm_worker_async.py:346-359
Timestamp: 2025-09-10T05:34:35.406Z
Learning: In nemo_rl/models/generation/vllm/vllm_worker_async.py, the HTTP server intentionally uses different path structures: `/v1/chat/completions` is under the `/v1` prefix while `/tokenize` is at the root level without the `/v1` prefix. This is the intended design.

Applied to files:

  • nemo_rl/models/generation/vllm/vllm_worker_async.py
📚 Learning: 2025-09-10T05:29:34.349Z
Learnt from: bxyu-nvidia
Repo: NVIDIA-NeMo/RL PR: 1110
File: nemo_rl/models/generation/vllm/vllm_worker_async.py:98-105
Timestamp: 2025-09-10T05:29:34.349Z
Learning: In the _maybe_correct_merged_tokens function in nemo_rl/models/generation/vllm/vllm_worker_async.py, the loop condition `len(candidate_token_ids) < len(actual_token_ids) - 1` is intentionally designed to prevent accessing the final token in actual_token_ids, likely to handle specific tokenization edge cases in the vLLM HTTP server integration.

Applied to files:

  • nemo_rl/models/generation/vllm/vllm_worker_async.py
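The off-by-one bound described in that learning can be illustrated generically; the token-id lists here are made up:

```python
# A loop bounded by len(actual_token_ids) - 1 never appends the final
# element, intentionally leaving the last token unprocessed.
candidate_token_ids = [1, 2]
actual_token_ids = [1, 2, 3, 4, 5]
while len(candidate_token_ids) < len(actual_token_ids) - 1:
    candidate_token_ids.append(actual_token_ids[len(candidate_token_ids)])
# candidate_token_ids stops one element short of actual_token_ids
print(candidate_token_ids)  # → [1, 2, 3, 4]
```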
🪛 GitHub Actions: CICD NeMo RL
nemo_rl/models/generation/vllm/vllm_worker_async.py

[error] 1-1: pre-commit hook ruff-format failed: 1 file reformatted by this hook. Run 'pre-commit run --all-files' locally to apply changes.

🪛 Shellcheck (0.11.0)
examples/nemo_gym/launch_nemo_gym_multinode_training.sh

[error] 1-1: Tips depend on target shell and yours is unknown. Add a shebang or a 'shell' directive.

(SC2148)


[warning] 5-5: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.

(SC2164)

🔇 Additional comments (11)
3rdparty/Gym-workspace/Gym (1)

1-1: Submodule addition is properly configured.

The Gym submodule has been added with correct configuration in .gitmodules. Since this is a Git submodule pointer (metadata), the coding guidelines requiring NVIDIA copyright headers do not apply.

3rdparty/Gym-workspace/setup.py (1)

45-45: LGTM!

Adding datasets to the cached dependencies is appropriate. The existing validation mechanism (lines 69-95) will ensure this remains in sync with the Gym submodule's pyproject.toml.

tests/unit/models/generation/test_vllm_generation.py (1)

1369-1375: LGTM!

The decode method addition to the mock tokenizer is necessary to support the enhanced assertion messages in _replace_prefix_tokens, which now call tokenizer.decode() for diagnostic output. The no-op implementation is appropriate for this test since it only verifies that an AssertionError is raised.
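That fixture change can be approximated as follows; MockTokenizer and its attributes are illustrative stand-ins, not the actual test fixture:

```python
class MockTokenizer:
    """Minimal stand-in tokenizer for tests that only need decode() to exist."""

    eos_token_id = 0

    def decode(self, token_ids, **kwargs):
        # No-op decode: the enhanced assertion messages call
        # tokenizer.decode() for diagnostics, but these tests never
        # inspect the decoded text.
        return ""
```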

nemo_rl/models/generation/vllm/config.py (1)

39-41: LGTM!

The tool_parser_plugin field is correctly typed as NotRequired[str] and documented. The implementation in vllm_worker_async.py uses ToolParserManager.import_tool_parser() to load the plugin when configured.

nemo_rl/models/generation/vllm/vllm_worker_async.py (3)

109-118: LGTM - Enhanced assertion diagnostics.

The detailed error message including token IDs and detokenized representations will significantly help debug issues with non-monotonic trajectories in multi-turn scenarios.
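The diagnostic-assertion pattern praised here can be sketched like so. check_prefix and its signature are hypothetical; the real assertions live inside _replace_prefix_tokens:

```python
def check_prefix(prev_ids, new_ids, decode):
    """Assert prev_ids is a prefix of new_ids, with rich failure context."""
    assert new_ids[: len(prev_ids)] == prev_ids, (
        f"Trajectory is not monotonic: previous token ids {prev_ids} "
        f"(decoded: {decode(prev_ids)!r}) are not a prefix of new token "
        f"ids {new_ids} (decoded: {decode(new_ids)!r})"
    )
```

Including both the raw token ids and their detokenized text in the message turns an opaque AssertionError into something debuggable from logs alone.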


128-135: LGTM - Improved EOS token assertion diagnostics.

The enhanced error message provides valuable context when debugging chat template tokenization issues.


302-307: LGTM - Tool parser plugin loading.

The conditional import using ToolParserManager.import_tool_parser() correctly integrates with the new tool_parser_plugin config field.

examples/nemo_gym/grpo_workplace_assistant_nemotron_nano_v2_9b.yaml (4)

22-22: Approve: Configuration correctly follows pattern for max_new_tokens setting.

The configuration correctly sets max_new_tokens: ${policy.max_total_sequence_length} at line 182 and uses this same value as the basis for max_response_length at line 22. This aligns with best practices documented in the learnings, where overflow cases are handled by safeguards in vllm_worker.py.

Also applies to: 182-182
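The interpolation pattern described can be sketched as a simplified YAML fragment (key names abbreviated from the actual config):

```yaml
policy:
  max_total_sequence_length: 4096   # full context window
  generation:
    # Generation may consume the entire context window; prompt + generation
    # overflow is handled by safeguards in vllm_worker.py.
    max_new_tokens: ${policy.max_total_sequence_length}
```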


67-76: Configuration is valid and consistent across training and generation backends.

The dtensor_cfg is properly disabled (lines 67-76) while megatron_cfg is enabled (line 78+) with tensor_model_parallel_size: 2. With 8 GPUs per node, this allocates 2 GPUs to model parallelism and leaves 4 GPUs for data parallelism, which aligns correctly with the distributed data parallel configuration (grad reduce, overlap settings properly configured). The vLLM generation backend uses tensor_parallel_size: 1 (line 191) intentionally to allow independent inference instances, which is compatible with the training parallelism strategy. No consistency issues detected.


230-231: Initialize the Gym submodule to access training data.

The configuration references data files under 3rdparty/Gym-workspace/Gym/resources_servers/workplace_assistant/data/, but the Gym submodule is not initialized. The submodule directory exists but is empty. Users must initialize this submodule (e.g., git submodule update --init) before training to ensure the referenced train.jsonl and validation.jsonl files are available.


236-254: These config paths are NeMo-Gym runtime references, not repository paths.

The config_paths at lines 241–242 are passed to NeMo-Gym as part of initial_global_config_dict (as noted in line 239's comment), so they are resolved by the NeMo-Gym environment at runtime, not from the repository filesystem. Unlike the data paths (lines 230–231) which use full 3rdparty/Gym-workspace/Gym/ paths, these configuration paths are relative references for the NeMo-Gym service to resolve internally. Their correctness cannot be verified by checking the repository structure and should be validated against NeMo-Gym configuration documentation or runtime behavior instead.

@terrykong
Collaborator

@bxyu-nvidia can you resolve the linter?

@github-actions

✅ Submodule Fast-Forward Check Results

Check based on commit: 18d660e (PR #1630 from bxyu/gym-grpo-tutorial)

✅ Submodules that are properly updated:

Gym: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

@github-actions

✅ Submodule Fast-Forward Check Results

Check based on commit: fec911f (PR #1630 from bxyu/gym-grpo-tutorial)

✅ Submodules that are properly updated:

Gym: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

@github-actions

✅ Submodule Fast-Forward Check Results

Check based on commit: 5830a29 (PR #1630 from bxyu/gym-grpo-tutorial)

✅ Submodules that are properly updated:

Gym: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

@github-actions

✅ Submodule Fast-Forward Check Results

Check based on commit: a24cd6f (PR #1630 from bxyu/gym-grpo-tutorial)

✅ Submodules that are properly updated:

Gym: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

terrykong
terrykong previously approved these changes Dec 16, 2025
…eward (#1619)

Signed-off-by: alexandery <alexandery@nvidia.com>
Signed-off-by: Brian Yu <bxyu@nvidia.com>
@github-actions

✅ Submodule Fast-Forward Check Results

Check based on commit: 4615327 (PR #1630 from bxyu/gym-grpo-tutorial)

✅ Submodules that are properly updated:

Gym: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

@terrykong terrykong added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Dec 16, 2025
terrykong
terrykong previously approved these changes Dec 16, 2025
Signed-off-by: Brian Yu <bxyu@nvidia.com>
@github-actions

✅ Submodule Fast-Forward Check Results

Check based on commit: 16cae06 (PR #1630 from bxyu/gym-grpo-tutorial)

✅ Submodules that are properly updated:

Gym: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

@bxyu-nvidia bxyu-nvidia added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Dec 16, 2025
@terrykong terrykong merged commit 0bddd47 into main Dec 17, 2025
55 of 58 checks passed
@terrykong terrykong deleted the bxyu/gym-grpo-tutorial branch December 17, 2025 05:40
DeL-TaiseiOzaki pushed a commit to DeL-TaiseiOzaki/RL that referenced this pull request Jan 8, 2026
Signed-off-by: alexandery <alexandery@nvidia.com>
Signed-off-by: Brian Yu <bxyu@nvidia.com>
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
Signed-off-by: Sahil Modi <samodi@nvidia.com>
Signed-off-by: ruit <ruit@nvidia.com>
Signed-off-by: Jonas Yang <joyang@nvidia.com>
Signed-off-by: ZeYi Lin <944270057@qq.com>
Signed-off-by: Alexander Zhipa <azzhipa@amazon.com>
Signed-off-by: Yuki Huang <yukih@nvidia.com>
Signed-off-by: Lawrence Lane <llane@nvidia.com>
Signed-off-by: Terry Kong <terryk@nvidia.com>
Co-authored-by: alexandery-nvidia <alexandery@nvidia.com>
Co-authored-by: Yi-Fu Wu <yifu.wu@gmail.com>
Co-authored-by: Peter Jin <pjin@nvidia.com>
Co-authored-by: samodi-nv <141948907+samodi-nv@users.noreply.github.com>
Co-authored-by: ruit <ruit@nvidia.com>
Co-authored-by: Jonas Yang <joyang@nvidia.com>
Co-authored-by: Ze-Yi LIN <58305964+Zeyi-Lin@users.noreply.github.com>
Co-authored-by: Alexander Zhipa <alex.zhipa@proton.me>
Co-authored-by: Alexander Zhipa <azzhipa@amazon.com>
Co-authored-by: Terry Kong <terrycurtiskong@gmail.com>
Co-authored-by: Yuki Huang <yukih@nvidia.com>
Co-authored-by: Manasa Manohara <mmanohara@nvidia.com>
Co-authored-by: Lawrence Lane <llane@nvidia.com>
Co-authored-by: Yu Yao <54727607+yaoyu-33@users.noreply.github.com>
Co-authored-by: Terry Kong <terryk@nvidia.com>
parthmannan pushed a commit to parthmannan/RL that referenced this pull request Jan 15, 2026
Signed-off-by: Parth Mannan <pmannan@nvidia.com>
yuanhangsu1986 pushed a commit to yuanhangsu1986/RL-Nemontron-Edge-Omni that referenced this pull request Feb 12, 2026
Signed-off-by: yuanhangs <yuanhangs@nvidia.com>
@coderabbitai coderabbitai bot mentioned this pull request Feb 20, 2026
4 tasks
yuanhangsu1986 pushed a commit to yuanhangsu1986/RL-Nemontron-Edge-Omni that referenced this pull request Feb 21, 2026
Signed-off-by: yuanhangs <yuanhangs@nvidia.com>
@coderabbitai coderabbitai bot mentioned this pull request Feb 21, 2026
4 tasks
@coderabbitai coderabbitai bot mentioned this pull request Mar 4, 2026
4 tasks
seonjinn pushed a commit that referenced this pull request Mar 8, 2026
seonjinn pushed a commit that referenced this pull request Mar 8, 2026
seonjinn pushed a commit that referenced this pull request Mar 9, 2026

Labels

CI:L1 Run doctests, unit tests, and functional tests


8 participants