Misc: Remaining cherry-picks for 26.02.01 #2631

Merged
ko3n1g merged 4 commits into r0.3.0 from ko3n1g/cp/qwen-dispatcher
Mar 5, 2026

Conversation

@ko3n1g
Contributor

@ko3n1g ko3n1g commented Mar 3, 2026

What does this PR do ?

Add a one line overview of what this PR aims to accomplish.

Changelog

  • Add specific line by line info of high level changes in this PR.

GitHub Actions CI

See the CI section in the Contributing doc for how to trigger the CI. An NVIDIA developer will need to approve and trigger the CI for external contributors.

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (Ex: Numba, Pynini, Apex etc)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

If you haven't finished some of the above items, you can still open a "Draft" PR.

Additional Information

  • Related to # (issue)

Summary by CodeRabbit

  • Chores
    • Updated token dispatcher configurations for Qwen3 pretraining variants across multiple hardware configurations.

@coderabbitai
Contributor

coderabbitai bot commented Mar 3, 2026

📝 Walkthrough

Walkthrough

Adds cfg.model.moe_token_dispatcher_type configuration assignments across multiple Qwen3 pretraining config builders, setting values to "flex" or "alltoall" to specify token-level dispatch strategy during initialization for various GPU variants.

Changes

Cohort / File(s) Summary
Qwen3 MoE Token Dispatcher Configuration
scripts/performance/configs/qwen/qwen3_llm_pretrain.py
Adds cfg.model.moe_token_dispatcher_type assignments to multiple Qwen3 pretrain config functions (235b_a22b and 30b_a3b families across GB300/GB200/B300/B200/H100 variants, plus qwen3_next_80b_a3b_h100), setting token dispatch strategy after backend assignments.
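To illustrate the shape of the change, here is a minimal sketch of one such config builder. The function name and `SimpleNamespace` stand-in are assumptions for illustration; the real builders in `scripts/performance/configs/qwen/qwen3_llm_pretrain.py` use the project's own config objects and contain the actual backend assignments.

```python
# Hypothetical sketch of a Qwen3 pretrain config builder; the real code
# in qwen3_llm_pretrain.py uses the project's config classes, not
# SimpleNamespace, and sets many more fields.
from types import SimpleNamespace


def qwen3_30b_a3b_gb200_config():
    """Illustrative pretrain config builder for one hardware variant."""
    cfg = SimpleNamespace(model=SimpleNamespace())
    # ... existing backend assignments would go here ...
    # The PR appends the token dispatch strategy after them:
    cfg.model.moe_token_dispatcher_type = "flex"
    return cfg


cfg = qwen3_30b_a3b_gb200_config()
print(cfg.model.moe_token_dispatcher_type)  # flex
```

Per the walkthrough, each variant gets either `"flex"` or `"alltoall"`, chosen per hardware platform.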

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

Possibly related PRs

Suggested labels

r0.3.0, performance, cherry-pick

Suggested reviewers

  • malay-nagda
  • yaoyu-33
🚥 Pre-merge checks | ✅ 2 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Test Results For Major Changes (⚠️ Warning): PR makes performance configuration changes affecting MoE token dispatcher type across Qwen3 variants without documented performance benchmarks, before/after metrics, or convergence validation evidence. Resolution: update the PR description with concrete performance testing results, including before/after metrics for each hardware variant and convergence validation evidence.
  • Title check (❓ Inconclusive): The title "Misc: Remaining cherry-picks for 26.02.01" is vague and does not clearly describe the specific changes in the PR, which focus on adding moe_token_dispatcher_type configurations to Qwen3 pretraining configs. Resolution: consider a more specific title, such as "Add moe_token_dispatcher_type configuration to Qwen3 pretraining configs" or "Configure token dispatcher types for Qwen3 performance job variants".
✅ Passed checks (2 passed)
  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.
  • Docstring Coverage (✅ Passed): Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.




@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
scripts/performance/configs/qwen/qwen3_llm_pretrain.py (1)

81-81: Consider replacing repeated dispatcher string literals with module-level constants.

This reduces typo risk and keeps future edits safer.

♻️ Suggested refactor
+MOE_TOKEN_DISPATCHER_FLEX = "flex"
+MOE_TOKEN_DISPATCHER_ALLTOALL = "alltoall"
...
-    cfg.model.moe_token_dispatcher_type = "flex"
+    cfg.model.moe_token_dispatcher_type = MOE_TOKEN_DISPATCHER_FLEX
...
-    cfg.model.moe_token_dispatcher_type = "alltoall"
+    cfg.model.moe_token_dispatcher_type = MOE_TOKEN_DISPATCHER_ALLTOALL
As per coding guidelines: "Use upper snake_case for constants".

Also applies to: 107-107, 133-133, 159-159, 185-185, 211-211, 237-237, 263-263
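The suggested refactor can be sketched end to end as follows. The constant names come from the review diff; the builder functions and the `SimpleNamespace` config object are assumptions for illustration, not the actual code in `qwen3_llm_pretrain.py`.

```python
# Sketch of the reviewer's suggested refactor: hoist the dispatcher
# string literals into module-level UPPER_SNAKE_CASE constants so every
# config builder references one definition.
from types import SimpleNamespace

MOE_TOKEN_DISPATCHER_FLEX = "flex"
MOE_TOKEN_DISPATCHER_ALLTOALL = "alltoall"


def build_flex_variant():
    """Illustrative builder for a variant that uses the flex dispatcher."""
    cfg = SimpleNamespace(model=SimpleNamespace())
    cfg.model.moe_token_dispatcher_type = MOE_TOKEN_DISPATCHER_FLEX
    return cfg


def build_alltoall_variant():
    """Illustrative builder for a variant that uses all-to-all dispatch."""
    cfg = SimpleNamespace(model=SimpleNamespace())
    cfg.model.moe_token_dispatcher_type = MOE_TOKEN_DISPATCHER_ALLTOALL
    return cfg


print(build_flex_variant().model.moe_token_dispatcher_type)      # flex
print(build_alltoall_variant().model.moe_token_dispatcher_type)  # alltoall
```

A typo in a raw string like "altoall" would fail silently until runtime, while a typo in a constant name raises `NameError` at import time, which is the main benefit the review points at.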

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/performance/configs/qwen/qwen3_llm_pretrain.py` at line 81, Replace
repeated string literals used for the MoE dispatcher with a module-level
constant: define an UPPER_SNAKE_CASE constant (e.g., MOE_TOKEN_DISPATCHER_FLEX =
"flex") at the top of the file and use that constant wherever
cfg.model.moe_token_dispatcher_type is assigned (including occurrences near
lines with cfg.model.moe_token_dispatcher_type and other repeated dispatcher
assignments mentioned); update all references (e.g., assignments that currently
use "flex") to use the constant to avoid typos and centralize changes.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7b56e46 and 48df188.

📒 Files selected for processing (1)
  • scripts/performance/configs/qwen/qwen3_llm_pretrain.py

@ko3n1g ko3n1g force-pushed the ko3n1g/cp/qwen-dispatcher branch from 2461fd3 to dba7b23 on March 3, 2026 23:03
@ko3n1g ko3n1g changed the title from "Ko3n1g/cp/qwen dispatcher" to "Misc: Remaining cherry-picks for 26.02.01" on Mar 3, 2026
@ko3n1g ko3n1g requested a review from a team as a code owner March 4, 2026 13:21
ko3n1g and others added 4 commits March 5, 2026 10:33
Signed-off-by: oliver könig <okoenig@nvidia.com>
Signed-off-by: Malay Nagda <malayn@nvidia.com>
Co-authored-by: Malay Nagda <malayn@nvidia.com>
Signed-off-by: oliver könig <okoenig@nvidia.com>
Signed-off-by: oliver könig <okoenig@nvidia.com>
Signed-off-by: Malay Nagda <malayn@nvidia.com>
Signed-off-by: Sanju C Sudhakaran <scsudhakaran@nvidia.com>
Co-authored-by: oliver könig <okoenig@nvidia.com>
Signed-off-by: oliver könig <okoenig@nvidia.com>