[feature] Add AMD GPU support through ROCm #172
Tested on `Vega 56` with `ChatGLM3`; things work well. More testing may be required.
Excuse me, did you compile under Windows or Linux? I'm failing with CMake on Windows; the error is: `lld-link: error: could not open 'm.lib': no such file or directory`
I built with ROCm on Ubuntu using a Docker image. According to your description, I believe it is because of the environment path in the CMake file: I hardcoded the library path for the Docker image. Actually, I have found some problems with my CMake file; it raises an error during the build.
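For illustration, here is a minimal sketch of what hardcoding the ROCm path in CMake can look like. `/opt/rocm` is the prefix used by the official ROCm Docker images; everything else below is an assumption for illustration, not this project's actual `CMakeLists.txt`:

```cmake
# Sketch only: assumes ROCm is installed under /opt/rocm, as in the
# official rocm/dev-ubuntu Docker images. Not this project's real file.
set(ROCM_PATH "/opt/rocm" CACHE PATH "Root of the ROCm installation")

# Hardcoding the prefix works inside the Docker image but breaks on
# hosts where ROCm lives elsewhere; exposing it as a cache variable
# (as above) lets users override it with -DROCM_PATH=...
list(APPEND CMAKE_PREFIX_PATH "${ROCM_PATH}")
find_package(hip REQUIRED)      # provides hip::host and hip::device
find_package(hipblas REQUIRED)  # provides roc::hipblas
```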
Ubuntu 22.04, ROCm 5.7.1, 6800 XT: llama.cpp works.
Are you using ...
I finally figured out what was wrong with the script on my side... The only error I encountered was:

I tested everything I could; it ended up being a problem with my Docker environment.
@CellerX I have checked and patched the ...
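For reference, the `m.lib` failure reported above is the classic symptom of passing `m` (the Unix math library) to the linker on Windows, where no such library exists; lld-link turns `m` into a search for `m.lib`. A common guard looks like the sketch below. `my_target` is a placeholder name, and this is not necessarily the exact patch applied here:

```cmake
# Common guard for "lld-link: error: could not open 'm.lib'".
# 'my_target' is a placeholder; substitute the project's real target.
if (NOT WIN32)
  # libm is a separate library only on Unix-like systems; on Windows
  # the math functions live in the C runtime, so linking "m" fails.
  target_link_libraries(my_target PRIVATE m)
endif()
```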
Thanks, I will try again.
Add the `GGML_HIPBLAS` CMake flag in `CMakeLists.txt` (refer to llama.cpp) and compile with:

cmake -B build -DGGML_HIPBLAS=ON -DCMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang -DCMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++ && cmake --build build -j
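For readers wiring this up themselves, here is a minimal sketch of what the `GGML_HIPBLAS` option could look like in `CMakeLists.txt`, modeled on llama.cpp's HIPBLAS support. The `ggml` target name and the exact package list are assumptions about the project layout:

```cmake
# Sketch modeled on llama.cpp's HIPBLAS setup; the 'ggml' target name
# is an assumption, not necessarily this project's real target.
option(GGML_HIPBLAS "ggml: use hipBLAS on AMD GPUs" OFF)

if (GGML_HIPBLAS)
    list(APPEND CMAKE_PREFIX_PATH /opt/rocm)
    find_package(hip REQUIRED)
    find_package(hipblas REQUIRED)
    find_package(rocblas REQUIRED)

    # llama.cpp reuses its CUDA kernels under HIP, hence both defines.
    add_compile_definitions(GGML_USE_HIPBLAS GGML_USE_CUBLAS)
    target_link_libraries(ggml PRIVATE hip::device roc::hipblas roc::rocblas)
endif()
```

Building with ROCm's bundled clang, as in the command above, is what allows `hip::device` to compile the GPU device code.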