Error when fine-tuning the Qwen2_5_VL model: ImportError: cannot import name 'Qwen2_5_VLForConditionalGeneration' from 'transformers' #3109
Install transformers from the main branch.
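For reference, a minimal sketch of installing from the main branch (assuming a plain pip + git environment; adjust for any internal mirror you use):

pip install -U "git+https://github.com/huggingface/transformers"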
I installed it with that command and got transformers-4.49.0.dev0, but I still get the same error.
https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct/discussions/17
So far neither 4.48.3 nor 4.49.0.dev0 works for me, including the one from the link you provided.
The version is indeed transformers-4.49.0.dev0, and it worked for me after installing it 😎 Please double-check that the installation actually succeeded.
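A quick, generic way to confirm which version is actually active and whether the class is importable (nothing project-specific assumed here):

python -c "import transformers; print(transformers.__version__)"
python -c "from transformers import Qwen2_5_VLForConditionalGeneration; print('import OK')"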
Thanks, I verified it and it does work. You're the best~
I ran into a new problem: AssertionError: Input and cos/sin must have the same dtype, got torch.float32 and torch.bfloat16
Related issue: huggingface/transformers#36188
Waiting for a fix in transformers.
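If a stopgap is needed before the upstream patch lands, one option worth trying (an assumption based on the assertion apparently coming from the flash-attention rotary-embedding path, not a confirmed fix) is to drop the flash-attention flag so training falls back to the default attention implementation:

# assumption: same arguments as the command below, minus --attn_impl flash_attn
swift sft --model model/Qwen/Qwen2.5-VL-3B-Instruct --train_type full ...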
The command I used is:
nproc_per_node=1

CUDA_VISIBLE_DEVICES=0 \
NPROC_PER_NODE=$nproc_per_node \
swift sft \
    --model model/Qwen/Qwen2.5-VL-3B-Instruct \
    --train_type full \
    --freeze_vit true \
    --dataset data/sft_data/ \
    --num_train_epochs 1 \
    --torch_dtype bfloat16 \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 2 \
    --learning_rate 1e-5 \
    --gradient_accumulation_steps 8 \
    --eval_steps 500 \
    --save_steps 2000 \
    --save_total_limit 2 \
    --logging_steps 2 \
    --max_length 8192 \
    --system 'You are a helpful assistant.' \
    --warmup_ratio 0.05 \
    --dataloader_num_workers 4 \
    --attn_impl flash_attn \
    --output_dir output/Qwen2_5-VL-3B-Instruct \
    --deepspeed zero2
My system and library versions:
Linux Ubuntu 20.04
transformers 4.49.0.dev0
transformers-stream-generator 0.0.5
triton 3.2.0
The error message is as follows:
[INFO:swift] Loading the model using model_dir: /mnt/general/share/model/Qwen/Qwen2.5-VL-3B-Instruct
[WARNING:swift] Please install the package:
pip install "transformers>=4.49" -U
[rank0]: Traceback (most recent call last):
[rank0]: File "/mnt/general/ganchun/code/ms-swift-3.1.0/swift/cli/sft.py", line 5, in <module>
[rank0]: sft_main()
[rank0]: File "/mnt/general/ganchun/code/ms-swift-3.1.0/swift/llm/train/sft.py", line 257, in sft_main
[rank0]: return SwiftSft(args).main()
[rank0]: File "/mnt/general/ganchun/code/ms-swift-3.1.0/swift/llm/train/sft.py", line 30, in init
[rank0]: self._prepare_model_tokenizer()
[rank0]: File "/mnt/general/ganchun/code/ms-swift-3.1.0/swift/llm/train/sft.py", line 62, in _prepare_model_tokenizer
[rank0]: self.model, self.processor = args.get_model_processor()
[rank0]: File "/mnt/general/ganchun/code/ms-swift-3.1.0/swift/llm/argument/base_args/base_args.py", line 265, in get_model_processor
[rank0]: return get_model_tokenizer(**kwargs)
[rank0]: File "/mnt/general/ganchun/code/ms-swift-3.1.0/swift/llm/model/register.py", line 494, in get_model_tokenizer
[rank0]: model, processor = get_function(model_dir, model_info, model_kwargs, load_model, **kwargs)
[rank0]: File "/mnt/general/ganchun/code/ms-swift-3.1.0/swift/llm/model/model/qwen.py", line 583, in get_model_tokenizer_qwen2_5_vl
[rank0]: from transformers import Qwen2_5_VLForConditionalGeneration
[rank0]: ImportError: cannot import name 'Qwen2_5_VLForConditionalGeneration' from 'transformers' (/mnt/general/ganchun/miniconda3/envs/internvl/lib/python3.10/site-packages/transformers/__init__.py)
[rank0]:[W214 04:03:46.998951826 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
E0214 04:03:47.960000 3391 site-packages/torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 3456) of binary: /mnt/general/ganchun/miniconda3/envs/internvl/bin/python
Traceback (most recent call last):
File "/mnt/general/ganchun/miniconda3/envs/internvl/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/mnt/general/ganchun/miniconda3/envs/internvl/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/mnt/general/ganchun/miniconda3/envs/internvl/lib/python3.10/site-packages/torch/distributed/run.py", line 923, in
main()
File "/mnt/general/ganchun/miniconda3/envs/internvl/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/init.py", line 355, in wrapper
return f(*args, **kwargs)
File "/mnt/general/ganchun/miniconda3/envs/internvl/lib/python3.10/site-packages/torch/distributed/run.py", line 919, in main
run(args)
File "/mnt/general/ganchun/miniconda3/envs/internvl/lib/python3.10/site-packages/torch/distributed/run.py", line 910, in run
elastic_launch(
File "/mnt/general/ganchun/miniconda3/envs/internvl/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 138, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/mnt/general/ganchun/miniconda3/envs/internvl/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
/mnt/general/ganchun/code/ms-swift-3.1.0/swift/cli/sft.py FAILED
Failures:
<NO_OTHER_FAILURES>
Root Cause (first observed failure):
[0]:
time : 2025-02-14_04:03:47
host : pt-c1628b2c236d447c90472c41b62e1140-worker-0.pt-c1628b2c236d447c90472c41b62e1140.ns-devoversea-d41e68bd.svc.cluster.local
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 3456)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
I hope this error can be resolved so that I can fine-tune the model successfully.