
Jinliang/qwen3 vl#4

Merged
shifangx merged 17 commits into shifang/jinliang/qwen3-vl from jinliang/qwen3-vl
Jan 29, 2026

Conversation

@shifangx
Owner

What does this PR do ?

Add a one-line overview of what this PR aims to accomplish.

Changelog

  • Add specific line-by-line info of high-level changes in this PR.

GitHub Actions CI

See the CI section in the Contributing doc for how to trigger the CI. An NVIDIA developer will need to approve and trigger the CI for external contributors.

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g., Numba, Pynini, Apex, etc.)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

If you haven't finished some of the above items, you can still open a "Draft" PR.

Additional Information

  • Related to # (issue)

wplf and others added 17 commits January 14, 2026 19:58
Signed-off-by: jinliangl <jinliangl@nvidia.com>
Signed-off-by: jinliangl <jinliangl@nvidia.com>
Signed-off-by: jinliangl <jinliangl@nvidia.com>
…ision_model=true to enable it

Signed-off-by: jinliangl <jinliangl@nvidia.com>
Signed-off-by: jinliangl <jinliangl@nvidia.com>
Signed-off-by: jinliangl <jinliangl@nvidia.com>
Signed-off-by: jinliangl <jinliangl@nvidia.com>
Signed-off-by: Shifang Xu <shifangx@nvidia.com>
@shifangx shifangx merged commit 97f99aa into shifang/jinliang/qwen3-vl Jan 29, 2026
shifangx added a commit that referenced this pull request Jan 29, 2026
* qwen3-vl migration [wip]

Signed-off-by: jinliangl <jinliangl@nvidia.com>

* support qwen3vl bshd training

Signed-off-by: jinliangl <jinliangl@nvidia.com>

* support bshd training, thd training, and thd training with cp; loss convergence has not been tested yet.

* adjust how to get packed seq params
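
The commit above adjusts how packed seq params are derived. As a minimal sketch of the idea (not the PR's actual code), packed "thd" attention replaces a padded batch with one flat token stream plus cumulative sequence boundaries; the names `cu_seqlens` and `max_seqlen` follow the Transformer Engine convention:

```python
# Hedged sketch: derive packed-sequence params from per-sample lengths.
# This illustrates the thd layout's bookkeeping; the real PackedSeqParams
# in Megatron carries tensors, not Python lists.
from itertools import accumulate

def get_packed_seq_params(seqlens):
    """Return cumulative sequence boundaries and the max sequence length."""
    cu_seqlens = [0] + list(accumulate(seqlens))
    return {"cu_seqlens": cu_seqlens, "max_seqlen": max(seqlens)}

params = get_packed_seq_params([3, 5, 2])
print(params)  # {'cu_seqlens': [0, 3, 8, 10], 'max_seqlen': 5}
```

Each pair `cu_seqlens[i]:cu_seqlens[i+1]` delimits one sample inside the packed stream, which is what variable-length attention kernels consume.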

* support qwen3vl dense

* support converting the hf ckpt for the qwenvl vision module; deepstack's params convert, but this needs further checking.

* fix qwen3vl shard_state_dict

* align megatron-bridge and mbridge fwd bitwise

* align bshd and thd training; cp training remains to be done

* fix full recompute of qwen3vl bug with credit to xuwen

Signed-off-by: jinliangl <jinliangl@nvidia.com>

* support hf vision model and megatron vision model; use model.use_hf_vision_model=true to enable it

Signed-off-by: jinliangl <jinliangl@nvidia.com>
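
The `model.use_hf_vision_model=true` syntax in the commit message suggests a Hydra/OmegaConf-style CLI override. A hypothetical launch (script name and other flags are illustrative, not from this PR):

```shell
# Toggle between the two vision towers via a config override.
# "train.py" is a placeholder for the actual training entry point.
python train.py model.use_hf_vision_model=true   # HF vision model
python train.py model.use_hf_vision_model=false  # Megatron vision model
```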

* align with pr 1997; the thd and bshd loss curves are verified

Signed-off-by: jinliangl <jinliangl@nvidia.com>

* init hf_config from scratch to avoid deepcopy parallel_state

Signed-off-by: jinliangl <jinliangl@nvidia.com>
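
The motivation for re-initializing `hf_config` from scratch can be illustrated in miniature: deep-copying an object that holds runtime handles (a `threading.Lock` stands in here for parallel/process-group state) fails, so rebuilding the config from plain fields is safer. All names below are illustrative, not the PR's code:

```python
# Hedged sketch: why deepcopy of a config holding runtime state is fragile.
import copy
import threading

class Config:
    def __init__(self, hidden_size, runtime_handle=None):
        self.hidden_size = hidden_size
        self.runtime_handle = runtime_handle  # e.g. distributed/parallel state

cfg = Config(4096, runtime_handle=threading.Lock())

try:
    copy.deepcopy(cfg)       # deepcopy recurses into the runtime handle...
    deepcopy_ok = True
except TypeError:
    deepcopy_ok = False      # ...and locks cannot be deep-copied

fresh = Config(cfg.hidden_size)  # rebuild from plain fields instead
print(deepcopy_ok, fresh.hidden_size)  # False 4096
```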

* use pg_collection instead of mpu

Signed-off-by: jinliangl <jinliangl@nvidia.com>
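
The "pg_collection instead of mpu" change swaps a module-level singleton lookup (Megatron's `mpu`) for an explicit object carrying the parallel groups. A minimal sketch, with `ProcessGroupCollection` and its fields assumed for illustration:

```python
# Hedged sketch: pass process groups explicitly rather than via global state.
from dataclasses import dataclass

@dataclass
class ProcessGroupCollection:
    tp: object = None  # tensor-parallel group
    dp: object = None  # data-parallel group
    cp: object = None  # context-parallel group

def build_model(pg_collection):
    # The dependency is explicit in the signature; no hidden global lookup.
    return {"tp_group": pg_collection.tp}

pgs = ProcessGroupCollection(tp="tp-group-0")
model = build_model(pgs)
print(model["tp_group"])  # tp-group-0
```

Making the groups an argument keeps the model testable without initializing global parallel state first.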

* add qwen3vl_step new file

---------

Signed-off-by: jinliangl <jinliangl@nvidia.com>
Signed-off-by: Shifang Xu <shifangx@nvidia.com>
Co-authored-by: jinliangl <jinliangl@nvidia.com>
