You can set the environment variable CUDA_VISIBLE_DEVICES.
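A minimal sketch of doing this from Python, assuming the chatglm_cpp Python bindings are used (the model path is just an example); the same variable can equally be exported in the shell before launching the CLI binary:

```python
import os

# CUDA_VISIBLE_DEVICES must be set before the CUDA runtime is initialised,
# i.e. before the model is loaded. "0" exposes only the first GPU,
# "0,1" would expose two, and "" hides all GPUs (forcing CPU inference).
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import chatglm_cpp  # Python bindings of chatglm.cpp

# Example path to a quantized GGML model file.
pipeline = chatglm_cpp.Pipeline("./chatglm3-ggml.bin")
```

This works the same way as in torch, since both read CUDA_VISIBLE_DEVICES through the CUDA runtime, which is why the torch-style configuration carries over.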
Thanks, configuring it the same way as torch worked. I'd suggest adding a note to the documentation that the torch-style parameter configuration is supported.
A question for the maintainers: I'm using a quantized ChatGLM3 model. Compilation succeeds without errors, but inference always runs on the CPU and the GPU is never used, even though I built with -DGGML_CUBLAS=ON. The details are in "Built with -DGGML_CUBLAS=ON, compilation succeeds but inference still runs on the CPU instead of the GPU" #241. Could someone help me figure out what's going on?
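One generic way to confirm whether a running generation is actually touching the GPU is to watch nvidia-smi while it runs. A small sketch, not specific to chatglm.cpp:

```python
import subprocess
import time

# Poll GPU utilization and memory while inference runs in another terminal.
# Uses the standard nvidia-smi query interface.
for _ in range(10):
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # e.g. "0, 87, 6321" -> GPU 0 at 87% with ~6.3 GiB in use
    time.sleep(1)
```

If utilization and memory stay near zero during generation, the model was not offloaded to the GPU.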
Hi, I ran into the same problem. Could you share the details of how you got it working in the end?
The documentation doesn't say. Are torch's default configuration parameters supported?