
Ko3n1g/chore/reapply 2152 and 2209 #2273

Merged
ko3n1g merged 2 commits into r0.3.0 from
ko3n1g/chore/reapply-2152-and-2209
Feb 8, 2026

Conversation

@ko3n1g
Contributor

@ko3n1g ko3n1g commented Feb 8, 2026

What does this PR do ?

Add a one-line overview of what this PR aims to accomplish.

Changelog

  • Add specific line-by-line info of high-level changes in this PR.

GitHub Actions CI

See the CI section in the Contributing doc for how to trigger the CI. An NVIDIA developer will need to approve and trigger the CI for external contributors.

Before your PR is "Ready for review"

Pre-checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g., Numba, Pynini, Apex, etc.)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

If you haven't finished some of the above items, you can still open a "Draft" PR.

Additional Information

  • Related to # (issue)

Summary by CodeRabbit

  • Chores
    • Updated pretraining configurations for DeepSeek and Qwen3 models.
    • Optimized parallelism and CUDA graph settings for improved performance.
    • Simplified configuration aliases for GB300 variants.
    • Enhanced runtime environment variable handling for specific model-precision combinations.

@ko3n1g ko3n1g merged commit a7a840d into r0.3.0 Feb 8, 2026
2 of 6 checks passed
@ko3n1g ko3n1g deleted the ko3n1g/chore/reapply-2152-and-2209 branch February 8, 2026 14:48
@coderabbitai
Contributor

coderabbitai bot commented Feb 8, 2026

Caution

Review failed

The pull request is closed.

📝 Walkthrough

This PR modifies DeepSeek and Qwen3 pretraining configurations to adjust parallelism settings, pipeline layouts, and CUDA graph scopes, while adding model-specific environment variable handling for DeepSeek fp8_mx compute dtype.

Changes

Cohort / File(s) / Summary
DeepSeek Layout & Configuration
scripts/performance/configs/deepseek/deepseek_llm_pretrain.py, scripts/performance/configs/deepseek/deepseek_workload_base_configs.py
Updated GB200 to use base_cfg.pp_layout instead of None; redefined GB300_V1 with simplified micro_batch_size, reduced pipeline/virtual pipeline, and updated CUDA graph scope; consolidated NVFP4 variants to use V1/V2 bases directly, reducing cross-variant dependencies.
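The DeepSeek change can be sketched as config overrides on a shared base. This is a hypothetical illustration: the class name, field names, and layout string below are assumptions modeled on the summary, not the actual deepseek_workload_base_configs.py API.

```python
from dataclasses import dataclass, replace
from typing import Optional, Tuple

@dataclass(frozen=True)
class PretrainConfig:
    # Hypothetical fields mirroring the settings named in the summary.
    micro_batch_size: int = 2
    pipeline_model_parallel_size: int = 8
    virtual_pipeline_model_parallel_size: Optional[int] = 4
    pp_layout: Optional[str] = None
    cuda_graph_scope: Tuple[str, ...] = ()

# Shared base config; the layout string is a made-up placeholder.
base_cfg = PretrainConfig(pp_layout="example-layout")

# GB200 now inherits the base layout instead of passing None.
GB200 = replace(base_cfg, pp_layout=base_cfg.pp_layout)

# GB300_V1: simplified micro batch size, reduced pipeline and
# virtual-pipeline depth, and an updated CUDA graph scope.
GB300_V1 = replace(
    base_cfg,
    micro_batch_size=1,
    pipeline_model_parallel_size=4,
    virtual_pipeline_model_parallel_size=None,
    cuda_graph_scope=("attention",),
)

# NVFP4 variants derive from V1/V2 directly, cutting cross-variant chains.
GB300_NVFP4 = replace(GB300_V1, cuda_graph_scope=("attention", "moe"))
```

Deriving each variant via `replace` on the base keeps the dependency graph shallow, which is what "reducing cross-variant dependencies" amounts to in this sketch.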
Qwen3 Expert Parallelism & CUDA Graph
scripts/performance/configs/qwen/qwen3_workload_base_configs.py
Increased expert_model_parallel_size from 16 to 32 in GB300_FP8_CS_V2, removed virtual_pipeline_model_parallel_size=12, and added cuda_graph_scope with attention and MoE components; added explicit expert_model_parallel_size=32 to GB200_FP8_CS_V2.
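Read literally, the Qwen3 edit raises expert parallelism, drops the virtual-pipeline setting, and pins an explicit CUDA graph scope. A minimal sketch, with hypothetical field names modeled on the summary rather than the real qwen3_workload_base_configs.py entries:

```python
from dataclasses import dataclass, replace
from typing import Optional, Tuple

@dataclass(frozen=True)
class Qwen3Workload:
    # Defaults here stand in for the pre-PR values.
    expert_model_parallel_size: int = 16
    virtual_pipeline_model_parallel_size: Optional[int] = 12
    cuda_graph_scope: Tuple[str, ...] = ()

# After this PR: EP 16 -> 32, VPP removed, explicit CUDA graph scope.
GB300_FP8_CS_V2 = replace(
    Qwen3Workload(),
    expert_model_parallel_size=32,
    virtual_pipeline_model_parallel_size=None,
    cuda_graph_scope=("attention", "moe"),
)

# GB200_FP8_CS_V2 gains an explicit EP of 32 as well.
GB200_FP8_CS_V2 = replace(Qwen3Workload(), expert_model_parallel_size=32)
```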
Performance Plugin Environment Handling
scripts/performance/perf_plugins.py
Added conditional branch to disable CUDNN layer normalization deletion when model family is "deepseek" and compute dtype is "fp8_mx".
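The perf_plugins.py branch described above can be sketched as follows. The function name and env-var keys are assumptions for illustration; only the condition (model family "deepseek" with compute dtype "fp8_mx") comes from the summary.

```python
def runtime_env_vars(model_family: str, compute_dtype: str) -> dict:
    """Sketch of the env-var handling: by default the plugin deletes the
    CUDNN layer-norm variables, but for the deepseek + fp8_mx combination
    it skips the deletion and leaves them in place."""
    # Hypothetical variable names; the real keys live in perf_plugins.py.
    env = {
        "CUDNN_LAYERNORM_FWD": "1",
        "CUDNN_LAYERNORM_BWD": "1",
    }
    if model_family == "deepseek" and compute_dtype == "fp8_mx":
        # Special case: do not delete the CUDNN layer-norm variables.
        return env
    # Default path: remove the CUDNN layer-norm variables.
    env.pop("CUDNN_LAYERNORM_FWD", None)
    env.pop("CUDNN_LAYERNORM_BWD", None)
    return env
```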

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

  • PR #2271 — Directly reverts the layout argument change and cudnn-norm special-case additions from this PR
  • PR #2210 — Modifies identical QWEN3_235B_A22B_PRETRAIN_CONFIG_GB300_FP8_CS_V2 and GB200_FP8_CS_V2 entries with the same expert parallelism and cuda_graph_scope adjustments
  • PR #2186 — Makes overlapping changes to DeepSeek configs and perf_plugins cudnn LayerNorm handling

Suggested labels

r0.3.0, Run CICD, cherry-pick

Suggested reviewers

  • erhoo82
  • dingqingy-nv

