With the Baichuan2-13B model, the convert step needs tens of GB of memory, so it cannot finish on a 16 GB Mac: the process gets OOM-killed by the system. llama.cpp's convert script, by contrast, completes using only a small amount of memory. Could quantization be done with an approach similar to llama.cpp's?
I ran into this problem too. It seems processing the data in batches would solve it, but I don't know how to do that.
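The batched approach suggested above can be sketched as a streaming conversion: read one tensor at a time (memory-mapped, so it is never fully loaded into RAM), quantize it, write it out, and free it before moving on. This mirrors the idea behind llama.cpp's low-memory convert, but the file layout, shard names, and the toy 8-bit quantizer below are illustrative assumptions, not the actual GGUF format or Baichuan2's checkpoint structure:

```python
# Streaming conversion sketch: convert one tensor at a time instead of
# holding the whole checkpoint in memory. Shard paths and the quantize
# routine are illustrative placeholders.
import os
import tempfile

import numpy as np


def quantize_q8(tensor: np.ndarray):
    """Toy symmetric 8-bit quantization (not GGUF's actual scheme)."""
    scale = float(np.abs(tensor).max()) / 127.0 or 1.0
    q = np.round(tensor / scale).astype(np.int8)
    return q, scale


def convert_streaming(shard_paths, out_path):
    """Memory-map each shard so only the tensor currently being
    converted is resident, then append its quantized form to out_path."""
    with open(out_path, "wb") as out:
        for path in shard_paths:
            arr = np.load(path, mmap_mode="r")  # no full read into RAM
            q, scale = quantize_q8(np.asarray(arr))
            out.write(np.float32(scale).tobytes())  # 4-byte scale header
            out.write(q.tobytes())                  # int8 payload
            del arr, q  # release before the next shard


# Demo with two small fake shards standing in for checkpoint files.
tmp = tempfile.mkdtemp()
paths = []
for i in range(2):
    p = os.path.join(tmp, f"shard{i}.npy")
    np.save(p, np.random.randn(4, 4).astype(np.float32))
    paths.append(p)
out_file = os.path.join(tmp, "model.q8")
convert_streaming(paths, out_file)
```

Peak memory here is bounded by the largest single tensor rather than the whole model, which is why this style of conversion fits on a 16 GB machine.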