
ovis2.5 infer with vllm failed #5649

@JH-Xie

Description

Environment:

```
ms_swift==3.8.0.dev0
vllm==0.10.0
transformers==4.55.4
```

Inference script:

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 IMAGE_FACTOR=16 MAX_PIXELS=602112 NFRAMES=8 swift infer \
    --model_type ovis2_5 \
    --model $exp_name \
    --infer_backend vllm \
    --max_model_len 16384 \
    --tensor_parallel_size 4 \
    --temperature 0.01 \
    --repetition_penalty 1.0 \
    --val_dataset test_data \
    --limit_mm_per_prompt '{"image": 10, "video": 5}'
```

Inference error:

```
(VllmWorker rank=3 pid=142898) ERROR 09-03 19:09:57 [multiproc_executor.py:511] raise ValueError(
(VllmWorker rank=3 pid=142898) ERROR 09-03 19:09:57 [multiproc_executor.py:511] ValueError: The Transformers implementation of 'Ovis2_5' is not compatible with vLLM.
```
