Update names of yarn related settings #2413

Closed
maanug-nv wants to merge 2 commits into main from maanug/rename-yarn-mscale

Conversation


@maanug-nv maanug-nv commented Feb 17, 2026

What does this PR do ?

Add a one line overview of what this PR aims to accomplish.

Changelog

  • Add specific line by line info of high level changes in this PR.

GitHub Actions CI

See the CI section in the Contributing doc for how to trigger the CI. An NVIDIA developer will need to approve and trigger the CI for external contributors.

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (Ex: Numba, Pynini, Apex, etc.)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

If you haven't finished some of the above items, you can still open a "Draft" PR.

Additional Information

  • Related to # (issue)

Summary by CodeRabbit

  • Refactor
    • Simplified YARN rope scaling parameter handling by streamlining supported configuration options.
    • Removed redundant configuration fields from GPT and Ministral3 model providers to reduce unnecessary parameters.

Signed-off-by: Maanu Grover <maanug@nvidia.com>
Signed-off-by: Maanu Grover <maanug@nvidia.com>
@maanug-nv maanug-nv mentioned this pull request Feb 17, 2026
5 tasks
@maanug-nv maanug-nv requested a review from cuichenx February 17, 2026 18:58
@maanug-nv maanug-nv self-assigned this Feb 17, 2026
@maanug-nv maanug-nv added the bug Something isn't working label Feb 17, 2026

coderabbitai bot commented Feb 17, 2026

📝 Walkthrough

The pull request simplifies YARN rope scaling parameter handling by narrowing the rope_scaling field mappings to a single factor mapping in the conversion module and removing related configuration fields from provider classes.

Changes

Cohort / File(s) — Summary

  • Rope Scaling Mapping Simplification (src/megatron/bridge/models/conversion/model_bridge.py): Reduced YARN_ROPE_SCALING_MAPPING to map only ("factor", "rotary_scaling_factor"), removing mappings for original_max_position_embeddings, beta_fast, beta_slow, mscale, and mscale_all_dim.
  • Provider Configuration Cleanup (src/megatron/bridge/models/gpt_oss/gpt_oss_provider.py, src/megatron/bridge/models/ministral3/ministral3_provider.py): Removed the yarn_mscale and yarn_mscale_all_dim fields from both GPTOSSProvider and Ministral3ModelProvider.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks | ✅ 3 | ❌ 1

❌ Failed checks (1 warning)

  • Test Results For Major Changes — ⚠️ Warning: PR removes public dataclass fields actively tested in unit tests without providing test results or documentation of updates. Resolution: include test results demonstrating all tests pass after the changes, or document why the removed fields don't require test updates.
✅ Passed checks (3 passed)
  • Description Check — ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: The title 'Update names of yarn related settings' accurately reflects the main changes: removal of the yarn_mscale and yarn_mscale_all_dim fields and simplification of YARN_ROPE_SCALING_MAPPING.
  • Docstring Coverage — ✅ Passed: No functions found in the changed files to evaluate docstring coverage; skipping the check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
  • 📝 Generate docstrings
  • 🧪 Generate unit tests (beta)
    • Create PR with unit tests
    • Post copyable unit tests in a comment
    • Commit unit tests in branch maanug/rename-yarn-mscale

Tip

Issue Planner is now in beta. Read the docs and try it out! Share your feedback on Discord.



@coderabbitai coderabbitai bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/megatron/bridge/models/conversion/model_bridge.py (1)

498-508: ⚠️ Potential issue | 🟠 Major

Round-trip conversion breaks for YARN rope scaling: mismatched field names between mapping and provider definitions.

YARN_ROPE_SCALING_MAPPING (line 290) maps "factor" → "rotary_scaling_factor" (without the yarn_ prefix), but provider classes like gpt_oss_provider.py define the field as yarn_rotary_scaling_factor (with the prefix). This creates two failure modes:

  1. Native YARN provider → HF conversion: Line 499 correctly guards by checking yarn_rotary_scaling_factor, but lines 505–506 then attempt to read rotary_scaling_factor which doesn't exist on YARN providers, so rope_scaling["factor"] remains unset.

  2. HF → provider → HF round-trip: Forward conversion (line 386) sets rotary_scaling_factor via setattr, but reverse conversion at line 499 checks for yarn_rotary_scaling_factor, so the entire rope_scaling block is skipped.

The fix is to update YARN_ROPE_SCALING_MAPPING to use the correct provider field name.

Proposed fix

```diff
     YARN_ROPE_SCALING_MAPPING = [
-        ("factor", "rotary_scaling_factor"),
+        ("factor", "yarn_rotary_scaling_factor"),
     ]
```
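A minimal reproduction of the failure mode described above, using stand-in names (YarnProviderStub and to_hf_rope_scaling are illustrative, not the real bridge code). With the unprefixed "rotary_scaling_factor" key, getattr finds no matching provider attribute and "factor" is silently dropped; with the prefixed key from the proposed fix, it is populated:

```python
# Stand-in reproduction; not the actual model_bridge.py implementation.
from dataclasses import dataclass

BROKEN_MAPPING = [("factor", "rotary_scaling_factor")]
FIXED_MAPPING = [("factor", "yarn_rotary_scaling_factor")]


@dataclass
class YarnProviderStub:
    yarn_rotary_scaling_factor: float = 8.0


def to_hf_rope_scaling(provider, mapping):
    """Reverse conversion: build an HF rope_scaling dict from provider attrs."""
    rope_scaling = {"rope_type": "yarn"}
    for hf_key, megatron_key in mapping:
        value = getattr(provider, megatron_key, None)
        if value is not None:
            rope_scaling[hf_key] = value
    return rope_scaling


print(to_hf_rope_scaling(YarnProviderStub(), BROKEN_MAPPING))
# {'rope_type': 'yarn'}  -- "factor" silently dropped
print(to_hf_rope_scaling(YarnProviderStub(), FIXED_MAPPING))
# {'rope_type': 'yarn', 'factor': 8.0}
```

Because the broken path fails silently rather than raising, a round-trip unit test comparing the original and reconstructed rope_scaling dicts is the easiest way to catch this class of mismatch.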
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/megatron/bridge/models/conversion/model_bridge.py` around lines 498 -
508, YARN rope-scaling round-trip fails because YARN_ROPE_SCALING_MAPPING maps
HF keys to provider attribute names without the required "yarn_" prefix; update
the mapping so the provider-side names match the actual provider attributes
(e.g., use "yarn_rotary_scaling_factor" instead of "rotary_scaling_factor") so
the detection check in model_bridge.py (the yarn_rotary_scaling_factor guard),
the loop that reads getattr(provider, megatron_key, None), and the forward
conversion that uses setattr all reference the same provider attribute names and
the hf_config["rope_scaling"] keys are populated consistently.

Labels

bug Something isn't working