
[Megatron-FSDP] Test FP8 activations + parameter sharding with Megatron-FSDP fully-shard. Update README.#2894

Merged
cspades merged 11 commits into NVIDIA:main from cspades:cye/mfsdp-fp8-fully-shard
Jan 20, 2026

Conversation


@cspades cspades commented Jan 10, 2026

What does this PR do ?

  • Add a unit test for Megatron-FSDP + FP8 Parameters for all existing FP8 recipes using fully_shard.
  • Refactor and add documentation in the README for exactly how to use FP8 parameters with fully_shard.
  • Nit: Move the fp8_model_init context manager into mixed_precision.py, with a try/except on quantized_model_init for newer versions of TransformerEngine.
  • Update inline comments, which also document some gaps in the FP8 support matrix: we only support FP8 parameters when fully-sharding the compute parameters (optim_grads_params), and do not currently support FP8 parameters universally for un-sharded, optimizer-sharded, or optimizer/gradient-sharded models.
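The try/except mentioned above can be sketched as a small version-compatibility shim. This is a minimal illustration of the pattern only, not the PR's actual code in mixed_precision.py; stub namespaces stand in for `transformer_engine.pytorch` so the example is self-contained.

```python
from types import SimpleNamespace


def get_fp8_init_ctx(te_pytorch):
    """Return the quantized model-init context manager from a
    TransformerEngine-like module, preferring the newer
    `quantized_model_init` name and falling back to `fp8_model_init`
    on older TransformerEngine versions."""
    try:
        return te_pytorch.quantized_model_init
    except AttributeError:
        return te_pytorch.fp8_model_init


# Stubs standing in for two TransformerEngine versions.
new_te = SimpleNamespace(quantized_model_init="new-ctx", fp8_model_init="old-ctx")
old_te = SimpleNamespace(fp8_model_init="old-ctx")

print(get_fp8_init_ctx(new_te))  # prefers quantized_model_init
print(get_fp8_init_ctx(old_te))  # falls back to fp8_model_init
```

In the real refactor the same idea applies at import time, so callers see one name regardless of the installed TransformerEngine version.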

Testing

Testing was mostly done for #2239 using both Llama-8B in Megatron-LM and a toy TransformerEngine model with fully_shard, which is now tested in CI/CD.

⚠️ For major changes (either in lines of code or in impact), please make sure to first share a design doc with the team. If you're unsure of the best way to do so, contact the @mcore-oncall.

Contribution process

```mermaid
flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]
```

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code Typing guidelines
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

Feel free to message @mcore-oncall or tag them in a comment to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers' reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might be declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

@cspades cspades self-assigned this Jan 10, 2026
@cspades cspades requested review from a team as code owners January 10, 2026 00:49
@cspades cspades added the Expert Review label Jan 10, 2026
@ko3n1g ko3n1g added this to the Core 0.16 milestone Jan 10, 2026
@cspades cspades force-pushed the cye/mfsdp-fp8-fully-shard branch from f5576e1 to 7445fd5 Compare January 10, 2026 00:52
@cspades cspades force-pushed the cye/mfsdp-fp8-fully-shard branch from 7445fd5 to 7d246d1 Compare January 10, 2026 00:53
@cspades cspades force-pushed the cye/mfsdp-fp8-fully-shard branch from 7d246d1 to a5b3546 Compare January 10, 2026 00:57
@cspades cspades force-pushed the cye/mfsdp-fp8-fully-shard branch from a5b3546 to 7652096 Compare January 10, 2026 01:03
@cspades cspades added the Final Review label and removed the Expert Review label Jan 10, 2026
@cspades cspades enabled auto-merge January 10, 2026 17:36
@cspades cspades changed the title [Megatron-FSDP] Test FP8 with Megatron-FSDP fully-shard. Update README. [Megatron-FSDP] Test FP8 activations + parameter sharding with Megatron-FSDP fully-shard. Update README. Jan 10, 2026

cspades commented Jan 15, 2026

@NVIDIA/core-adlr @NVIDIA/core-nemo Can anyone help fit this in before the Feb release? It adds some minimal documentation for how OSS users can make FP8 parameters work with FSDP, since you have to enable some knobs that are definitely not obvious, e.g. people don't know that the main weight buffer needs to support quantization.
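As a rough illustration of the non-obvious knobs mentioned in the comment above, the overall shape is: build the model under TransformerEngine's FP8 parameter-init context, then fully shard it. This is pseudocode under stated assumptions; the argument names below (other than `fp8_model_init`, which is a real TransformerEngine context manager) are hypothetical placeholders, so consult the README documentation added by this PR for the actual Megatron-FSDP configuration.

```
# Pseudocode sketch (hypothetical argument names), assuming TransformerEngine
# and Megatron-FSDP are installed.

with te.pytorch.fp8_model_init(enabled=True):   # allocate FP8 parameters
    model = build_transformer_engine_model()

# FP8 parameters are only supported when fully-sharding the compute
# parameters, and the main weight buffer must also support quantization;
# that buffer setting is the non-obvious knob referenced above.
model, optimizer = fully_shard(
    model,
    optimizer,
    sharding_strategy="optim_grads_params",  # hypothetical name: full sharding
    ...,                                     # plus the FP8 main-buffer knob from the README
)
```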

