[Models] Lfm2-VL Architecture #29191
Conversation
Signed-off-by: Paul Pak <paulpak58@gmail.com>
Documentation preview: https://vllm--29191.org.readthedocs.build/en/29191/
Code Review
This pull request introduces support for the Lfm2-VL model, including its architecture implementation, an example script for offline inference, and necessary registrations. A new Siglip2Model implementation is also added as it's a dependency for Lfm2-VL. The overall implementation is solid, but I've found a type hint mismatch in the new lfm2_vl.py file that should be corrected for code correctness and clarity.
        spatial_shapes: torch.Tensor,
        pixel_attention_mask: torch.Tensor,
        num_patches: torch.Tensor,
    ) -> torch.Tensor:
The return type hint for image_pixels_to_features is torch.Tensor, but the function actually returns a list[torch.Tensor]. The caller of this function, _process_image_input, expects a list of tensors. Please update the type hint to list[torch.Tensor] to match the implementation and avoid potential type errors.
-    ) -> torch.Tensor:
+    ) -> list[torch.Tensor]:
💡 Codex Review
Here are some automated review suggestions for this pull request.
        return self.vision_model(
            pixel_values=pixel_values,
            # attention_mask=pixel_attention_mask,
            spatial_shapes=spatial_shapes,
        )
Mask padded vision tokens in Siglip2 forward
In Siglip2Model.forward, the processor-supplied pixel_attention_mask is accepted but never forwarded to the vision transformer (attention_mask is commented out). The padded patch tokens introduced to equalize sequence length are therefore never masked, yet they still carry bias and positional embeddings, so the encoder mixes these fake tokens into attention for any image smaller than the maximum patch budget (a common case), yielding incorrect visual embeddings. The mask needs to be propagated to the encoder or applied inside the attention layers to exclude padding.
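A minimal sketch of one way to propagate the mask, assuming the encoder accepts an additive attention bias; `_build_additive_mask` is a hypothetical helper, not code from this PR:

```python
import torch

def _build_additive_mask(pixel_attention_mask: torch.Tensor,
                         dtype: torch.dtype) -> torch.Tensor:
    # pixel_attention_mask: (batch, num_patches), 1 for real patches, 0 for padding.
    mask = pixel_attention_mask[:, None, None, :].to(dtype)  # (B, 1, 1, N)
    # 0.0 keeps a position, a large negative value removes it from attention.
    return (1.0 - mask) * torch.finfo(dtype).min

# The commented-out call above could then become (sketch only):
# return self.vision_model(
#     pixel_values=pixel_values,
#     attention_mask=_build_additive_mask(pixel_attention_mask, pixel_values.dtype),
#     spatial_shapes=spatial_shapes,
# )
```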
        self.vision_tower = Siglip2Model(
            config=vision_config,
            quant_config=quant_config,
            prefix=f"{prefix}.vit",
Looks like a typo in the prefix
            quant_config=quant_config,
            prefix=f"{prefix}.out_proj",
        )
        self.attn = MultiHeadAttention(
Do you mean siglip2navit? It needs cu_seq_lens to build the attention mask, which hasn't been integrated into MultiHeadAttention.
We will consolidate it with MultiHeadAttention after the vision attention refactoring (waiting on #27919).
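For context, a rough sketch of what cu_seq_lens encodes and how a block-diagonal mask could be derived from it; this is an illustration, not the siglip2navit or vLLM implementation:

```python
import torch

def block_diagonal_mask(cu_seq_lens: torch.Tensor) -> torch.Tensor:
    # cu_seq_lens: cumulative lengths of the variable-length patch sequences
    # packed into one batch, e.g. [0, 3, 5] for images with 3 and 2 patches.
    total = int(cu_seq_lens[-1])
    mask = torch.zeros(total, total, dtype=torch.bool)
    for start, end in zip(cu_seq_lens[:-1].tolist(), cu_seq_lens[1:].tolist()):
        # Patches may only attend to patches from the same image.
        mask[start:end, start:end] = True
    return mask

# block_diagonal_mask(torch.tensor([0, 3, 5])) -> 5x5 mask with two True blocks
```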
        return attn_output


class Siglip2MLP(nn.Module):
For the other layers, you can import them directly from the siglip2 file if they are the same.
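For example, something along these lines; the module path and class name are assumptions about where the shared Siglip2 layers live, not verified imports:

```python
# Hypothetical reuse sketch: import identical layers instead of redefining them
# in lfm2_vl.py. Adjust the path to the actual location in the tree.
from vllm.model_executor.models.siglip2navit import Siglip2MLP  # assumed path
```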
Signed-off-by: Paul Pak <paulpak58@gmail.com>
This pull request has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this pull request should remain open. Thank you!

Closing as superseded by #31758
Purpose
LFM2-VL Implementation
Test Plan
Test Result
Essential Elements of an Effective PR Description Checklist
Update supported_models.md and examples for a new model.