Problem converting MiniCPM-V-2_6 #201

Open
OutBreak-hui opened this issue Feb 28, 2025 · 1 comment

Comments

@OutBreak-hui

INFO: rkllm-toolkit version: 1.1.4
INFO: vision_config is None, using default vision config
INFO: vision_config is None, using default vision config
Loading checkpoint shards: 100%|██████████| 4/4 [00:23<00:00, 5.85s/it]
WARNING: The llm used in MiniCPMV is Qwen2ForCausalLM, we only convert it!
ERROR: Catch exception when loading model: CUDA out of memory. Tried to allocate 260.00 MiB. GPU 0 has a total capacty of 23.59 GiB of which 81.44 MiB is free. Including non-PyTorch memory, this process has 23.49 GiB memory in use. Of the allocated memory 23.09 GiB is allocated by PyTorch, and 150.89 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I ran the conversion on a 3090. How much GPU memory is normally needed for the conversion?
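
For reference, the allocator hint from the error message can be tried as below. This is a minimal sketch; max_split_size_mb only mitigates fragmentation and cannot make a model fit that exceeds total VRAM.

import os

# Must be set before torch initializes CUDA (i.e. before the first
# CUDA allocation), otherwise the setting is ignored.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported after setting the env var so the allocator picks it up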

@waydong
Collaborator

waydong commented Mar 7, 2025

Hi, in the current version the GPU memory required is at least the size of the float32 weights; an upcoming version will optimize memory usage.
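
As a rough sanity check (back-of-the-envelope numbers, not from the toolkit docs): the Qwen2 LLM inside MiniCPM-V-2_6 has roughly 8B parameters, so its float32 weights alone come to about 30 GiB, which already exceeds the 24 GiB of a 3090.

# Back-of-the-envelope float32 weight size; the ~8B parameter count
# for MiniCPM-V-2_6's Qwen2 LLM is an approximation.
num_params = 8e9                  # ~8 billion parameters (assumed)
bytes_per_param = 4               # float32 = 4 bytes per parameter
weights_gib = num_params * bytes_per_param / 2**30
print(f"float32 weights: ~{weights_gib:.1f} GiB")          # ~29.8 GiB
print(f"fits in 24 GiB (RTX 3090)? {weights_gib <= 24}")   # False

Under that assumption, the OOM on a 24 GiB card is expected, and a GPU with more memory (or the optimized release mentioned above) would be needed.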
