Environment: macOS (M2), Python 3.11.4

Following the instructions, I installed with:

CMAKE_ARGS="-DGGML_METAL=ON" pip install -U chatglm-cpp

The install reported success:

Looking in indexes: https://mirrors.aliyun.com/pypi/simple/
Collecting chatglm-cpp
Using cached chatglm_cpp-0.2.10-cp311-cp311-macosx_14_0_arm64.whl
Installing collected packages: chatglm-cpp
Successfully installed chatglm-cpp-0.2.10

While testing with the Python example:

import chatglm_cpp
pipeline = chatglm_cpp.Pipeline("chatglm3-ggml.bin")
pipeline.chat(["介绍一下深圳的历史"])

the system monitor shows that the CPU is being used rather than the GPU.

However, when testing with the official example:

./build/bin/main -m chatglm-ggml.bin -i

the system monitor shows the GPU being used.

What went wrong during my installation of chatglm_cpp that prevents it from using MPS?
My guess is that it's most likely because you have a chatglm_cpp subdirectory in your current working directory. I renamed the project's chatglm_cpp subdirectory first, then installed the pip package, and after that verified that the MPS backend uses the GPU.
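One way to check for this kind of shadowing (a standalone sketch, not part of chatglm.cpp itself) is to print where Python actually resolves the module from. A path under your current working directory rather than site-packages means the local source tree is shadowing the installed wheel:

```python
import importlib.util

# Locate which "chatglm_cpp" Python would import. If the reported path
# sits under your current working directory instead of site-packages,
# the local source subdirectory is shadowing the installed package.
spec = importlib.util.find_spec("chatglm_cpp")
if spec is None:
    print("chatglm_cpp is not importable")
else:
    print(spec.origin)
```

Run this from the directory where you launched your test script; renaming or moving away from the shadowing subdirectory should change the reported path to the site-packages install.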
Looking at your install log, this line:

Using cached chatglm_cpp-0.2.10-cp311-cp311-macosx_14_0_arm64.whl

means a cached CPU wheel was reused, so your CMAKE_ARGS never took effect. Try reinstalling with the --no-cache-dir flag:

CMAKE_ARGS="-DGGML_METAL=ON" pip install -U chatglm-cpp --no-cache-dir --force-reinstall