Commit 5c2acb2

[Models][QwenVL] Remove unnecessary .contiguous() calls (#27106)

Signed-off-by: Lukas Geiger <[email protected]>
Parent: b26b70b

File tree

2 files changed: +2 additions, -2 deletions


vllm/model_executor/models/qwen2_5_vl.py
Lines changed: 1 addition & 1 deletion

@@ -396,7 +396,7 @@ def forward(
         q, k, v = self.split_qkv(x)
         batch_size = q.shape[1]

-        q, k, v = (rearrange(x, "s b ... -> b s ...").contiguous() for x in (q, k, v))
+        q, k, v = (rearrange(x, "s b ... -> b s ...") for x in (q, k, v))
         if rotary_pos_emb is not None:
             # [2 * b, s, heads, head_dim]
             qk_concat = torch.cat([q, k], dim=0)

vllm/model_executor/models/qwen2_vl.py
Lines changed: 1 addition & 1 deletion

@@ -423,7 +423,7 @@ def forward(
         q, k, v = self.split_qkv(x)
         batch_size = q.shape[1]

-        q, k, v = (rearrange(x, "s b ... -> b s ...").contiguous() for x in (q, k, v))
+        q, k, v = (rearrange(x, "s b ... -> b s ...") for x in (q, k, v))
         if rotary_pos_emb is not None:
             # [2 * b, s, heads, head_dim]
             qk_concat = torch.cat([q, k], dim=0)
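The commit itself does not spell out the rationale, but a plausible reading is this: `rearrange(x, "s b ... -> b s ...")` is a pure axis swap, equivalent to `x.transpose(0, 1)`, which returns a non-contiguous view rather than a copy. Since the tensors immediately feed `torch.cat`, which accepts non-contiguous inputs and allocates a fresh contiguous output anyway, the explicit `.contiguous()` only added an extra memory copy. A minimal sketch of that reasoning (plain `torch.transpose` stands in for the `einops.rearrange` call; the shapes are illustrative, not taken from the model):

```python
import torch

# Illustrative shape: (seq, batch, heads, head_dim), not the model's real sizes.
x = torch.randn(5, 2, 4, 8)

# "s b ... -> b s ..." is an axis swap; transpose returns a view, not a copy.
y = x.transpose(0, 1)
assert not y.is_contiguous()

# torch.cat handles non-contiguous inputs and writes a new contiguous tensor,
# so calling .contiguous() beforehand would just copy the data twice.
qk = torch.cat([y, y], dim=0)
assert qk.is_contiguous()
assert torch.equal(qk, torch.cat([y.contiguous(), y.contiguous()], dim=0))
```

Whether the downstream attention kernel also tolerates non-contiguous inputs depends on the backend; the sketch only covers the `torch.cat` path visible in the diff context.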
