Conversation
Contributor
@yaoyu-33 Is it possible to apply the patching logic inside …
Signed-off-by: yaoyu-33 <yaoyu.094@gmail.com>
Contributor
Author
@HollowMan6 updated.
Contributor
Author
/ok to test b90af9e |
Contributor
Author
/ok to test 8eb93ef |
suiyoubi approved these changes on Dec 28, 2025
erictang000 added a commit to NovaSky-AI/SkyRL that referenced this pull request on Dec 28, 2025
Enables LoRA training with the Megatron backend. Currently waiting for NVIDIA-NeMo/Megatron-Bridge#1762 to be merged into main, so that we can at least pin a commit rather than a branch for stability.

- Adds [LoRA](https://docs.nvidia.com/nemo/megatron-bridge/0.2.0/apidocs/bridge/bridge.peft.lora.html) support via Megatron-Bridge.
- Adds custom checkpointing for LoRA model parameters (until LoRA checkpointing logic is upstreamed to Megatron-Bridge).
- Weight-syncing logic for Megatron + LoRA is handled by merging the LoRA parameters back into the base model before exporting to vLLM (a minimal merge sketch follows below). This means that, for Megatron LoRA (for now), LoRA does not have to be configured for vLLM.

## Examples

GSM8K for Qwen3-30B-MoE and Qwen3-0.6B converging:

<img width="1087" height="808" alt="image" src="https://github.com/user-attachments/assets/95e03b75-4a8c-4734-8f55-2cf535b04876" />

- Qwen3-30B-A3B previously required 2 H100 nodes for full-parameter fine-tuning; with LoRA on just 1 H100 node, we can increase the batch size compared to previous runs.

### DAPO Qwen-4B

With TIS, the Megatron dense backend can match or exceed FSDP backend performance. TIS is especially important for the current version of LoRA. Canonical LoRA seems to perform worse than "performant LoRA", or may simply be more sensitive to the learning rate.

<img width="1214" height="814" alt="image" src="https://github.com/user-attachments/assets/4c2d2b37-f835-4e53-ac54-7e54812b6006" />

Blockers/TODOs:

- [x] ~~For dense models, LoRA results in a low grad norm and a ppo_clip_ratio of 0 unless pp > 1. Something in megatron-core or megatron-bridge is broken for dense models.~~ Issue tracked on Megatron-Bridge (NVIDIA-NeMo/Megatron-Bridge#1750), awaiting PR NVIDIA-NeMo/Megatron-Bridge#1762.
- [x] Test out MoE models.

## Future Work

- Once Megatron-Bridge supports exporting only the LoRA parameters, we should sync just those to vLLM for lower communication cost.
- Add support for other LoRA variants from Megatron-Bridge (canonical LoRA, QLoRA, DoRA).
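For reference, here is a minimal sketch of the merge-before-sync step described in the commit message above, assuming the standard LoRA parametrization W' = W + (alpha / r) * B A. The function and argument names (`merge_lora_into_base`, `lora_a`, `lora_b`) are hypothetical and do not reflect Megatron-Bridge's actual module layout.

```python
import torch

def merge_lora_into_base(
    base_weight: torch.Tensor,  # (out_features, in_features)
    lora_a: torch.Tensor,       # (rank, in_features)
    lora_b: torch.Tensor,       # (out_features, rank)
    alpha: float,
    rank: int,
) -> torch.Tensor:
    """Fold a LoRA adapter into its base linear weight.

    Standard LoRA update: W' = W + (alpha / rank) * (B @ A).
    Returns a new tensor so the trainable adapter weights stay untouched.
    """
    scaling = alpha / rank
    return base_weight + scaling * (lora_b @ lora_a)
```

Once every adapted linear layer is merged this way, the exported state dict looks like an ordinary full-parameter checkpoint, which is why vLLM needs no LoRA configuration on the receiving side.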
dzorlu pushed a commit to fleet-ai/SkyRL that referenced this pull request on Feb 4, 2026
