
support qwen2_5_vl #9924

Merged (4 commits) on Feb 24, 2025.
15 changes: 14 additions & 1 deletion paddlenlp/experimental/transformers/qwen2/modeling.py
@@ -1554,4 +1554,17 @@

     # NOTE: (changwenbin) This function corresponds to QWen2-VL's second part, only used for QWen2-VL.
     def __init__(self, config: Qwen2Config):
-        super().__init__(config, base_model_prefix="model")
+        super().__init__(config)
+        self.base_model_prefix = "model"
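The hunk above swaps a constructor keyword for a plain attribute assignment. A minimal sketch of that pattern, using hypothetical stand-in classes (not the actual PaddleNLP base class):

```python
class PretrainedBase:
    """Hypothetical stand-in for the framework's pretrained base class."""

    def __init__(self, config):
        self.config = config


class VLBlockInferenceModel(PretrainedBase):
    def __init__(self, config):
        # Call the parent with only the config, then set the prefix as an
        # instance attribute instead of passing it as a constructor kwarg.
        super().__init__(config)
        self.base_model_prefix = "model"


m = VLBlockInferenceModel(config={"hidden_size": 8})
print(m.base_model_prefix)  # prints "model"
```

This keeps the subclass compatible with a parent `__init__` that does not accept a `base_model_prefix` keyword.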


class Qwen2_5_VLForConditionalGenerationBlockInferenceModel(Qwen2ForCausalLMBlockInferenceModel):
"""
NOTE: (changwenbin) This class inherits from Qwen2ForCausalLMBlockInferenceModel.
Used only for QWen2-5-VL's second part.
"""

# NOTE: (changwenbin) This function corresponds to QWen2-5-VL's second part, only used for QWen2-5-VL.
def __init__(self, config: Qwen2Config):
super().__init__(config)
self.base_model_prefix = "model"

2 changes: 1 addition & 1 deletion paddlenlp/transformers/auto/configuration.py
@@ -238,7 +238,7 @@ def __init__(self, mapping):

def __getitem__(self, key):
# NOTE: (changwenbin) This is to enable the qwen2_vl language model to use qwen2 reasoning optimization
-        if key == "qwen2_vl":
+        if key == "qwen2_vl" or key == "qwen2_5_vl":
Collaborator review comment: Write a generic mapping here instead; this kind of special-casing is not recommended.

Reply: done

            key = "qwen2"
        if key in self._extra_content:
            return self._extra_content[key]
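The reviewer's suggested "generic mapping" (rather than chained `or` comparisons) could look like the sketch below; the alias table and function name are illustrative, not the code that was actually merged:

```python
# Hypothetical alias table: VL model keys reuse the qwen2 config entry,
# so their language models pick up qwen2's inference optimizations.
_CONFIG_KEY_ALIASES = {
    "qwen2_vl": "qwen2",
    "qwen2_5_vl": "qwen2",
}


def resolve_config_key(key: str) -> str:
    """Map an aliased model key to its canonical config key."""
    return _CONFIG_KEY_ALIASES.get(key, key)


print(resolve_config_key("qwen2_5_vl"))  # prints "qwen2"
print(resolve_config_key("llama"))       # prints "llama"
```

Adding a new VL variant then becomes a one-line table entry instead of another `or` clause in `__getitem__`.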