Add MimoOptimizer for heterogeneous parallelism with Mimo #3212

Closed

yashaswikarnati wants to merge 7 commits into NVIDIA:pull-request/3211 from yashaswikarnati:yash/mimo-optimizer

Conversation

@yashaswikarnati
Contributor

What does this PR do ?

Adds optimizer support for MIMO models with heterogeneous parallelism, where different modules (e.g., the encoder and the LLM) can have different DP/TP/PP configurations.

Introduces MimoOptimizer class that:

  • Manages per-module MegatronOptimizer instances
  • Computes true global gradient norm via all_reduce MAX across all modules
  • Clips gradients using the global norm
  • Provides module-keyed state dicts for checkpointing
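The design above can be sketched in a few lines. This is a single-process stand-in, not the real implementation: `ModuleOptimizer` and its `local_grad_norm`/`scale_grads` methods are invented names standing in for the wrapped MegatronOptimizer instances, and the `max(...)` call stands in for the distributed all_reduce MAX the PR describes.

```python
import math

class ModuleOptimizer:
    """Hypothetical stand-in for a per-module MegatronOptimizer."""
    def __init__(self, grads):
        self.grads = list(grads)

    def local_grad_norm(self):
        # L2 norm of this module's gradients.
        return math.sqrt(sum(g * g for g in self.grads))

    def scale_grads(self, factor):
        self.grads = [g * factor for g in self.grads]

class MimoOptimizer:
    """Sketch: one optimizer per module, clipped by a single global norm."""
    def __init__(self, optimizers):
        self.optimizers = optimizers  # dict: module name -> optimizer

    def global_grad_norm(self):
        # The PR combines per-module norms with an all_reduce MAX across
        # ranks; in this single-process sketch that reduces to a plain max.
        return max(opt.local_grad_norm() for opt in self.optimizers.values())

    def clip_grad_norm(self, max_norm):
        # Clip every module's gradients using the shared global norm.
        total = self.global_grad_norm()
        if total > max_norm:
            factor = max_norm / (total + 1e-6)
            for opt in self.optimizers.values():
                opt.scale_grads(factor)
        return total

    def state_dict(self):
        # Module-keyed state dicts for checkpointing.
        return {name: {"grads": opt.grads} for name, opt in self.optimizers.items()}
```

Keying the state dict by module name lets each module's optimizer state be saved and reloaded independently of the others' parallelism layout.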

⚠️ For major changes (either in lines of code or in impact), please first share a design doc with the team. If you're unsure of the best way to do so, contact @mcore-oncall.

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see the Typing guidelines)
  • I have added relevant documentation
  • I have run autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

Feel free to message or mention @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review may be declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch The proposed review process for `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

@yashaswikarnati yashaswikarnati requested review from a team as code owners February 3, 2026 00:49
@copy-pr-bot

copy-pr-bot bot commented Feb 3, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@ko3n1g ko3n1g requested a review from a team February 3, 2026 00:49
@yashaswikarnati yashaswikarnati force-pushed the yash/mimo-optimizer branch 2 times, most recently from ecbde44 to 1d817b6 Compare February 3, 2026 04:56
@aroshanghias-nvd

Quick note from trying to hook this into Megatron‑Bridge MIMO: there are a few small API mismatches that make the new MimoOptimizer hard to use without runtime shims in Bridge. To make the implementation work, core could expose these consistently:

  • HyperCommGrid should have is_rank_in_grid() (Bridge grids already track rank_offset/size, and optimizer assumes this method exists).
  • _get_pg_collection_from_grid() should set expt_dp=None so optimizer setup doesn’t fail when MoE isn’t used.
  • get_mimo_optimizer() shouldn’t call .get() on modality_submodules if it’s a ModuleDict — use membership check + indexing.

If these land in core, Bridge can use MimoOptimizer directly without compatibility patches.
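The third point can be illustrated with a small sketch. `ModalityDict` below is a hypothetical stand-in for `torch.nn.ModuleDict`, which supports `in` and indexing but has no `.get()` method, so `modality_submodules.get(key)` raises `AttributeError`:

```python
class ModalityDict:
    """Hypothetical stand-in for torch.nn.ModuleDict: supports membership
    tests and indexing, but deliberately defines no .get() method."""
    def __init__(self, modules):
        self._modules = dict(modules)

    def __contains__(self, key):
        return key in self._modules

    def __getitem__(self, key):
        return self._modules[key]

def lookup_submodule(submodules, key):
    # Membership check + indexing works for both plain dicts and
    # ModuleDict-like containers; submodules.get(key) would fail on the
    # latter with an AttributeError.
    if key in submodules:
        return submodules[key]
    return None
```

This is why the comment suggests a membership check plus indexing rather than `.get()` in `get_mimo_optimizer()`.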

@yashaswikarnati
Contributor Author

> Quick note from trying to hook this into Megatron‑Bridge MIMO: there are a few small API mismatches that make the new MimoOptimizer hard to use without runtime shims in Bridge. To make the implementation work, core could expose these consistently:
>
>   • HyperCommGrid should have is_rank_in_grid() (Bridge grids already track rank_offset/size, and optimizer assumes this method exists).
>   • _get_pg_collection_from_grid() should set expt_dp=None so optimizer setup doesn’t fail when MoE isn’t used.
>   • get_mimo_optimizer() shouldn’t call .get() on modality_submodules if it’s a ModuleDict — use membership check + indexing.
>
> If these land in core, Bridge can use MimoOptimizer directly without compatibility patches.

Thanks for raising these, I can try to address them.

@dimapihtar dimapihtar added the "complexity: high" and "Expert Review [deprecated]" labels Feb 4, 2026
@dimapihtar dimapihtar requested review from erhoo82 and yanring February 4, 2026 15:40
@dimapihtar
Contributor

/ok to test 214e4fd

@ko3n1g ko3n1g added this to the Core 0.16 milestone Feb 4, 2026
@yanring yanring requested a review from FDecaYed February 6, 2026 03:37
@svcnvidia-nemo-ci svcnvidia-nemo-ci added the "Final Review" label Mar 19, 2026
@yashaswikarnati yashaswikarnati changed the base branch from main to pull-request/3211 March 23, 2026 17:08
@yashaswikarnati yashaswikarnati requested review from a team as code owners March 23, 2026 17:08
yashaswikarnati and others added 5 commits March 23, 2026 10:20
Adds optimizer support for MIMO models where different modules
(encoder, LLM) may have different DP/TP/PP configurations.

Key features:
- MimoOptimizer class managing per-module optimizers
- True global gradient norm via all_reduce MAX across module boundaries
- Module-aware checkpointing (state_dict keyed by module name)
- Simple API: get_mimo_optimizer(mimo_model, config)
- Rename _get_pg_collection_from_grid to _get_pg_collection_for_optimizer
- Remove embedding group creation (not needed by optimizer)
- Fix mp group to use ["tp", "pp"] instead of just "tp"
- Add missing optimizer groups: tp_ep_pp, expt_dp
- Update test to create required process groups
- Replace .get() with direct indexing for modality_submodules access
- When is_active=True, module MUST exist; using .get() silently hid bugs
- Direct indexing raises KeyError immediately if module missing
- Add is_current_rank_in_grid() method to HyperCommGrid
- Format code for consistency

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
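The `is_current_rank_in_grid()` addition in the commit above amounts to a range check, assuming (as the Bridge comment notes) that a grid tracks a rank offset and a size. The class and attribute names here are illustrative, not the real HyperCommGrid API, and the real method presumably reads the current rank from the process group rather than taking it as an argument:

```python
class HyperCommGridSketch:
    """Illustrative stand-in for HyperCommGrid, reduced to the two fields
    needed for the rank-membership check (attribute names assumed)."""
    def __init__(self, rank_offset, size):
        self.rank_offset = rank_offset  # first global rank in the grid
        self.size = size                # number of ranks in the grid

    def is_current_rank_in_grid(self, current_rank):
        # A rank belongs to the grid iff it falls in the contiguous range
        # [rank_offset, rank_offset + size).
        return self.rank_offset <= current_rank < self.rank_offset + self.size
```

With heterogeneous parallelism, each module's grid covers only a slice of the world, so the optimizer needs this check to skip modules the current rank does not participate in.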
After merging PR3211 into PR3212, the optimizer code referenced the
removed `config.language_module_key` attribute. Update to use the
constant `MIMO_LANGUAGE_MODULE_KEY` from role.py, remove the stale
kwarg from test_mimo_optimizer.py, and add composite process groups
(tp-pp, tp-ep-pp, dp-ep) required by the optimizer to the 1F1B test
grid setup.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
yashaswikarnati and others added 2 commits March 23, 2026 21:08
test_lm_pp3_4gpu used num_layers=2 with llm_pp=3, violating the
Megatron constraint that num_layers must be divisible by pp_size.
This caused an assertion failure on LLM ranks while the encoder
rank waited at a barrier, appearing as a deadlock. Fixed by
changing num_layers to 3.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
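The constraint behind this fix is simple arithmetic: the number of transformer layers must divide evenly across pipeline stages, so `num_layers=2` with `llm_pp=3` fails while `num_layers=3` works. A minimal validation sketch (the function name is invented for illustration):

```python
def validate_pipeline_split(num_layers, pp_size):
    """Check the Megatron constraint that layers divide evenly across
    pipeline stages; return the resulting layers per stage."""
    if num_layers % pp_size != 0:
        raise ValueError(
            f"num_layers={num_layers} is not divisible by pp_size={pp_size}"
        )
    return num_layers // pp_size
```

A check like this fails fast on the offending rank instead of leaving other ranks waiting at a barrier, which is how the original assertion presented as a deadlock.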
@copy-pr-bot copy-pr-bot bot deleted the branch NVIDIA:pull-request/3211 March 24, 2026 16:35
@copy-pr-bot copy-pr-bot bot closed this Mar 24, 2026
