Description
A bit confused by the error below when running prediction on a Windows 11 PC with an NVIDIA RTX 4090 GPU and an Intel CPU. Any hint is appreciated, thanks!
```
(fastvlm) PS C:\Users\Chunde\GitHub\ml-fastvlm> python .\predict.py --model-path D:\ml-fastvlm\checkpoints\llava-fastvithd_7b_stage2 --image-file "D:\Image_20171209131344.jpg" --prompt "describe the image"
Loading checkpoint shards:   0%|          | 0/4 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\Chunde\GitHub\ml-fastvlm\predict.py", line 87, in <module>
    predict(args)
  File "C:\Users\Chunde\GitHub\ml-fastvlm\predict.py", line 31, in predict
    tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, args.model_base, model_name, device="mps")
  File "C:\Users\Chunde\GitHub\ml-fastvlm\llava\model\builder.py", line 131, in load_pretrained_model
    model = LlavaQwen2ForCausalLM.from_pretrained(
  File "C:\Users\Chunde\.conda\envs\fastvlm\lib\site-packages\transformers\modeling_utils.py", line 4245, in from_pretrained
    ) = cls._load_pretrained_model(
  File "C:\Users\Chunde\.conda\envs\fastvlm\lib\site-packages\transformers\modeling_utils.py", line 4815, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "C:\Users\Chunde\.conda\envs\fastvlm\lib\site-packages\transformers\modeling_utils.py", line 873, in _load_state_dict_into_meta_model
    set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
  File "C:\Users\Chunde\.conda\envs\fastvlm\lib\site-packages\accelerate\utils\modeling.py", line 337, in set_module_tensor_to_device
    new_value = value.to(device)
RuntimeError: PyTorch is not linked with support for mps devices
```
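For context, the traceback shows `predict.py` calling `load_pretrained_model(..., device="mps")`, and the MPS backend only exists in Apple Silicon builds of PyTorch, so a Windows/CUDA build raises this error. A minimal sketch of how the device string could instead be chosen from what the local PyTorch build actually supports (this assumes the `device` keyword of `load_pretrained_model` accepts standard PyTorch device strings such as `"cuda"` and `"cpu"`; the exact change to `predict.py` is not confirmed here):

```python
import torch

# Pick a device that the local PyTorch build actually supports:
# prefer CUDA (e.g. the RTX 4090), then MPS on Apple Silicon, then CPU.
if torch.cuda.is_available():
    device = "cuda"
elif hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

# Same call as in predict.py line 31, but with the detected device
# instead of the hard-coded "mps".
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path, args.model_base, model_name, device=device
)
```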