
fix: Replace decode-based prefix matching with EOS-boundary splicing#1337

Merged
terrykong merged 7 commits into main from
vllm-async-token-merging-improve
Oct 15, 2025
Conversation

@parthchadha (Contributor) commented Oct 10, 2025

Replace decode-based prefix matching with EOS-boundary splicing to robustly preserve prior-turn tokens and prevent off-policy drift from retokenization

What does this PR do ?

Add a one line overview of what this PR aims to accomplish.

Issues

List issues that this PR closes (syntax):

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...

Summary by CodeRabbit

  • Bug Fixes

    • Improved prompt assembly to prevent duplicated or truncated outputs around end-of-sequence tokens.
    • More reliable chat preprocessing, especially when resuming after the last assistant turn and when tool calls are present.
  • Refactor

    • Reworked token prefix handling to consistently align model and template prompts across chat and tokenize paths.
  • Chores

    • Reduced log noise by filtering repetitive “Added request” entries from the vLLM logger for clearer runtime logs.

…bustly preserve prior-turn tokens and prevent off-policy drift from retokenization

Signed-off-by: Parth Chadha <pchadha@nvidia.com>
@parthchadha parthchadha requested review from a team as code owners October 10, 2025 20:18
@parthchadha parthchadha changed the title Replace decode-based prefix matching with EOS-boundary splicing to ro… fix: Replace decode-based prefix matching with EOS-boundary splicing Oct 10, 2025
@coderabbitai bot (Contributor) commented Oct 10, 2025

📝 Walkthrough

Walkthrough

Replaced _maybe_correct_merged_tokens with _replace_prefix_tokens and updated all call sites. Adjusted chat preprocessing to align prompts at the last assistant turn. Added vLLM async logger import and a filter to suppress “Added request” logs. Updated tests to reflect the new API and expanded edge-case coverage.

Changes

Cohort / File(s) Summary
Prefix token handling utility
nemo_rl/models/generation/vllm/vllm_worker_async.py
Removed _maybe_correct_merged_tokens; added _replace_prefix_tokens(tokenizer, model_prefix_token_ids, template_prefix_token_ids, template_token_ids). New logic trims at EOS and splices model/template token segments. Updated internal call sites to the new function.
Chat preprocessing and control flow
nemo_rl/models/generation/vllm/vllm_worker_async.py
In _preprocess_chat, materialized tool_calls, deep-copied messages, identified last assistant turn, and sliced messages accordingly. Applied _replace_prefix_tokens to align engine prompt tokenization. Adjusted both vLLM OpenAI server path and internal processing path.
Logging setup and noise reduction
nemo_rl/models/generation/vllm/vllm_worker_async.py
Imported vllm_async_llm_logger; added a logging filter to suppress “Added request” entries across chat and tokenize endpoints.
Tests updated for new API and cases
tests/unit/models/generation/test_vllm_generation.py
Replaced usages of removed function with _replace_prefix_tokens. Renamed test. Added cases for empty model prefix, multiple EOS, missing EOS in template prefix, and tokenizer without EOS. Validated outputs against original token sequences.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Client
  participant OpenAI_Server as vLLM OpenAI Server
  participant Preprocess as _preprocess_chat
  participant Tokenizer
  participant Prefix as _replace_prefix_tokens
  participant Engine as Generation Engine

  Client->>OpenAI_Server: ChatCompletion request
  OpenAI_Server->>Preprocess: messages, tools
  Preprocess->>Preprocess: deepcopy messages, find last assistant turn
  Preprocess->>Tokenizer: tokenize model/template prefixes
  Tokenizer-->>Preprocess: token_ids
  Preprocess->>Prefix: model_prefix_ids, template_prefix_ids, template_ids
  Prefix-->>Preprocess: spliced token_ids
  Preprocess->>Engine: prompt with aligned tokens
  Engine-->>OpenAI_Server: completion stream/result
  OpenAI_Server-->>Client: response

  rect rgba(240,248,255,0.6)
  note right of Prefix: New/changed logic: EOS-based splice
  end
sequenceDiagram
  autonumber
  participant Client
  participant TokenizeServer as Tokenize Endpoint
  participant Logger as vLLM Async Logger

  Client->>TokenizeServer: Tokenize request
  TokenizeServer->>Logger: log request
  Note right of Logger: Filter suppresses "Added request"
  Logger-->>TokenizeServer: (filtered logs)
  TokenizeServer-->>Client: tokenization result

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested reviewers

  • yfw

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 15.38%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve docstring coverage.
  • Test Results For Major Changes ⚠️ Warning: The PR introduces a major change to token-prefix handling logic and updates tests accordingly, which can affect behavior and potentially performance, but the PR description largely contains placeholders and does not document test results, convergence/numerics validation, or before/after performance metrics. To pass this check, update the description with a brief summary of unit/functional test results (commands run and pass/fail status), any evaluation showing no behavioral or convergence regressions on representative prompts/workloads, and performance benchmarks comparing before vs. after (throughput/latency, model and configuration details, dataset, and hardware). If only unit tests are applicable, state that and include their results.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title accurately captures the core change (decode-based prefix matching replaced with EOS-boundary splicing), aligning with the modifications in vllm_worker_async.py and related tests. It is concise and free of extraneous details, so a reviewer can quickly grasp the intent.

@coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (4)
nemo_rl/models/generation/vllm/vllm_worker_async.py (2)

38-121: Solid EOS-boundary splicing; add a small safety check and a docstring polish.

Logic is sound and handles trailing-EOS and “last EOS in prefix” well. Two small improvements:

  • Guard against malformed inputs: ensure len(template_prefix_token_ids) <= len(template_token_ids).
  • Fix docstring typo (“uppdate” → “update”) and consider Google-style Args/Returns.
 def _replace_prefix_tokens(
     tokenizer,
     model_prefix_token_ids: list[int],
     template_prefix_token_ids: list[int],
     template_token_ids: list[int],
 ) -> list[int]:
@@
-    and image tokenization is non-unique, then we will need to uppdate this
+    and image tokenization is non-unique, then we will need to update this
     function.
@@
-    template_cut_start = -1
+    # Sanity check to prevent out-of-bounds (defensive)
+    assert len(template_prefix_token_ids) <= len(template_token_ids), (
+        "template_prefix_token_ids longer than template_token_ids"
+    )
+    template_cut_start = -1

As per coding guidelines.


411-421: Prefer logger over print; ensure filter attaches to effective logger.

Use vllm_async_llm_logger (or its parent) for the “Adding a vLLM logging filter …” notice instead of print, and confirm the filter is added to the logger actually emitting the “Added request …” messages (handlers/propagation may differ).
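A minimal sketch of the suggested approach, using only the standard logging module. The logger name below is hypothetical (the PR imports vLLM's async-LLM logger object directly rather than looking it up by name):

```python
import logging


class DropAddedRequest(logging.Filter):
    """Drop vLLM's per-request 'Added request ...' log lines."""

    def filter(self, record: logging.LogRecord) -> bool:
        # Returning False suppresses the record.
        return "Added request" not in record.getMessage()


# "vllm.v1.engine.async_llm" is a placeholder name for illustration; attach
# the filter to whichever logger actually emits the messages. Note that
# logger-level filters do not apply to records propagated up from child
# loggers, so a handler-level filter may be needed instead.
logging.getLogger("vllm.v1.engine.async_llm").addFilter(DropAddedRequest())
```

This also illustrates the review's caveat: because filters on a logger only see records logged directly on it, the filter must sit on the logger (or handler) that actually emits "Added request", not merely on an ancestor.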

tests/unit/models/generation/test_vllm_generation.py (2)

1231-1395: Great coverage for _replace_prefix_tokens edge cases. Minor nit: avoid hard‑coded EOS ID.

Tests thoroughly cover EOS/no‑EOS, multiple EOS, and empty model prefix. Consider relaxing the explicit eos_token_id == 151645 assertion to just assert it’s not None to reduce brittleness across tokenizer variants of the same model.


1188-1211: Use explicit /tokenize path to avoid '/../' traversal.

Building the tokenize URL by stripping “/v1” from base_urls[0] is a bit clearer than “…/../tokenize”.

-    response = requests.post(url=f"{base_urls[0]}/../tokenize", json=body)
+    base = base_urls[0].removesuffix("/v1")
+    response = requests.post(url=f"{base}/tokenize", json=body)
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f29fa2a and 0565301.

📒 Files selected for processing (2)
  • nemo_rl/models/generation/vllm/vllm_worker_async.py (6 hunks)
  • tests/unit/models/generation/test_vllm_generation.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Follow the Google Python Style Guide for all Python code
Target Python 3.12+ for all Python code in NeMo-RL
Indent Python code with 4 spaces; do not use tabs
Python filenames should be snake_case (e.g., some_file.py)
Class names should be PascalCase
Function and method names should be snake_case
Local variable names should be snake_case; if starting with a number, prefix with k (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE and prefixed with G_ (e.g., G_MY_GLOBAL)
Constants should be UPPER_SNAKE_CASE
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
For public interfaces used outside a file, prefer docstrings over comments
Use comments mainly for code within a function or interfaces local to a file
Commented-out code must include a nearby comment explaining usage and why it is commented out; otherwise remove before merging
Use Google-style docstrings for classes and functions (Sphinx-parseable)
Avoid using reflection when functionality can be easily achieved without it
Limit except clauses to the smallest specific set of exceptions possible
For duck-typing via try/except, keep the try body minimal and use else for main logic
Add the NVIDIA copyright header (with current year) at the top of all Python files, excluding tests/ and test-only scripts

Files:

  • nemo_rl/models/generation/vllm/vllm_worker_async.py
  • tests/unit/models/generation/test_vllm_generation.py
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

nemo_rl/**/*.py: Do not set non-None configuration defaults in code; YAML is the single source of truth for defaults
Access required config attributes directly (e.g., policy_cfg["precision"]) and assume presence; do not introduce hidden defaults
Express configuration optionality via TypedDict using typing.NotRequired
When adding a new config key to a TypedDict subclass, document the key’s purpose, valid values/types, and recommended default in code
For any class or function decorated with @ray.remote, add '# pragma: no cover' on the class/def line (and on remote functions)

Files:

  • nemo_rl/models/generation/vllm/vllm_worker_async.py
🧠 Learnings (1)
📚 Learning: 2025-09-10T05:29:34.349Z
Learnt from: bxyu-nvidia
PR: NVIDIA-NeMo/RL#1110
File: nemo_rl/models/generation/vllm/vllm_worker_async.py:98-105
Timestamp: 2025-09-10T05:29:34.349Z
Learning: In the _maybe_correct_merged_tokens function in nemo_rl/models/generation/vllm/vllm_worker_async.py, the loop condition `len(candidate_token_ids) < len(actual_token_ids) - 1` is intentionally designed to prevent accessing the final token in actual_token_ids, likely to handle specific tokenization edge cases in the vLLM HTTP server integration.

Applied to files:

  • nemo_rl/models/generation/vllm/vllm_worker_async.py
🧬 Code graph analysis (2)
nemo_rl/models/generation/vllm/vllm_worker_async.py (1)
tests/unit/models/generation/test_vllm_generation.py (1)
  • tokenizer (239-242)
tests/unit/models/generation/test_vllm_generation.py (2)
nemo_rl/models/generation/vllm/vllm_worker_async.py (1)
  • _replace_prefix_tokens (38-121)
tests/unit/environments/test_code_environment.py (1)
  • tokenizer (85-94)
🪛 Ruff (0.13.3)
nemo_rl/models/generation/vllm/vllm_worker_async.py

281-281: Local variable actual_corresponding_token_ids is assigned to but never used

Remove assignment to unused variable actual_corresponding_token_ids

(F841)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Lint check
  • GitHub Check: Post automodel integration comment / Comment on PR
  • GitHub Check: Post submodule check comment / Comment on PR
🔇 Additional comments (1)
tests/unit/models/generation/test_vllm_generation.py (1)

35-37: Tests aligned with new API.

Import update to _replace_prefix_tokens looks good.

@parthchadha parthchadha added the CI:L1 Run doctests, unit tests, and functional tests label Oct 10, 2025
Signed-off-by: Parth Chadha <pchadha@nvidia.com>
@parthchadha parthchadha added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Oct 10, 2025
@bxyu-nvidia previously approved these changes Oct 10, 2025
Signed-off-by: Parth Chadha <pchadha@nvidia.com>
@parthchadha parthchadha added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Oct 10, 2025
@terrykong (Collaborator) left a comment

very clear docstring!

Signed-off-by: Parth Chadha <pchadha@nvidia.com>
@parthchadha parthchadha added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Oct 13, 2025
…ing-improve

Signed-off-by: Parth Chadha <pchadha@nvidia.com>
@parthchadha parthchadha force-pushed the vllm-async-token-merging-improve branch from 4448c60 to 78811ba on October 13, 2025 22:18
@parthchadha parthchadha requested a review from a team as a code owner October 13, 2025 22:18
@parthchadha parthchadha added the CI:L1 Run doctests, unit tests, and functional tests label Oct 13, 2025
@terrykong terrykong enabled auto-merge (squash) October 13, 2025 22:40
@parthchadha parthchadha added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Oct 14, 2025
@parthchadha parthchadha added CI:L0 Run doctests and unit tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Oct 14, 2025
@parthchadha parthchadha added CI:L0 Run doctests and unit tests and removed CI:L0 Run doctests and unit tests labels Oct 14, 2025
@terrykong terrykong merged commit 5c67023 into main Oct 15, 2025
87 of 102 checks passed
@terrykong terrykong deleted the vllm-async-token-merging-improve branch October 15, 2025 02:10
chtruong814 pushed a commit that referenced this pull request Oct 15, 2025
…1337)

Signed-off-by: Parth Chadha <pchadha@nvidia.com>
Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>
final_token_ids = _replace_prefix_tokens(
tokenizer=tokenizer,
model_prefix_token_ids=request.required_prefix_token_ids,
template_prefix_token_ids=request.required_prefix_token_ids,
A contributor commented on this line:

I think this should actually be actual_corresponding_token_ids

lbliii pushed a commit that referenced this pull request Nov 3, 2025
…1337)

Signed-off-by: Parth Chadha <pchadha@nvidia.com>
Signed-off-by: Lawrence Lane <llane@nvidia.com>
PrinsYin pushed a commit to PrinsYin/RL that referenced this pull request Nov 30, 2025
yuanhangsu1986 pushed a commit to yuanhangsu1986/RL-Nemontron-Edge-Omni that referenced this pull request Feb 21, 2026
…VIDIA-NeMo#1337)

Signed-off-by: Parth Chadha <pchadha@nvidia.com>
Signed-off-by: yuanhangs <yuanhangs@nvidia.com>

Labels

CI:L0 Run doctests and unit tests r0.4.0

4 participants