Installation stuck on Windows #1703
Comments
I have the same problem, but during an upgrade: previously (two months ago) the installation went through without problems.
You can try to install a pre-built wheel with basic CPU support: `pip install llama-cpp-python`
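For reference, the full command for a CPU wheel looks roughly like this; the index URL follows the project README, so double-check it against the current docs:

```
rem Install a pre-built CPU-only wheel so nothing is compiled locally.
rem The extra index URL is taken from the llama-cpp-python README; verify it is still current.
pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
```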
It's stuck exactly here:

```
with incremental build. [C:\Users\User\AppData\Local\Temp\tmpij2flo7g\build\vendor\llama.cpp\src\llama.vcxproj]
Structured output is enabled. The formatting of compiler diagnostics will reflect the error hierarchy. See https://aka.ms/cpp/structured-output for more details.
Creating directory "llama.dir\Release\llama.tlog\".
InitializeBuildStatus:
  Creating "llama.dir\Release\llama.tlog\unsuccessfulbuild" because "AlwaysCreate" was specified.
  Touching "llama.dir\Release\llama.tlog\unsuccessfulbuild".
CustomBuild:
  Building Custom Rule C:/Users/User/AppData/Local/Temp/pip-req-build-xtbmoz6p/vendor/llama.cpp/src/CMakeLists.txt
ClCompile:
  C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.40.33807\bin\HostX64\x64\CL.exe /c /I"C:\Users\User\AppData\Local\Temp\pip-req-build-xtbmoz6p\vendor\llama.cpp\src\." /I"C:\Users\User\AppData\Local\Temp\pip-req-build-xtbmoz6p\vendor\llama.cpp\src\..\include" /I"C:\Users\User\AppData\Local\Temp\pip-req-build-xtbmoz6p\vendor\llama.cpp\ggml\src\..\include" /nologo /W1 /WX- /diagnostics:column /O2 /Ob2 /D _WINDLL /D _MBCS /D WIN32 /D _WINDOWS /D NDEBUG /D LLAMA_SHARED /D LLAMA_BUILD /D _CRT_SECURE_NO_WARNINGS /D GGML_USE_VULKAN /D "CMAKE_INTDIR=\"Release\"" /D llama_EXPORTS /EHsc /MD /GS /fp:precise /Zc:wchar_t /Zc:forScope /Zc:inline /Fo"llama.dir\Release\\" /Fd"llama.dir\Release\vc143.pdb" /external:W1 /Gd /TP /errorReport:queue "C:\Users\User\AppData\Local\Temp\pip-req-build-xtbmoz6p\vendor\llama.cpp\src\llama.cpp" "C:\Users\User\AppData\Local\Temp\pip-req-build-xtbmoz6p\vendor\llama.cpp\src\llama-vocab.cpp" "C:\Users\User\AppData\Local\Temp\pip-req-build-xtbmoz6p\vendor\llama.cpp\src\llama-grammar.cpp" "C:\Users\User\AppData\Local\Temp\pip-req-build-xtbmoz6p\vendor\llama.cpp\src\llama-sampling.cpp" "C:\Users\User\AppData\Local\Temp\pip-req-build-xtbmoz6p\vendor\llama.cpp\src\unicode.cpp" "C:\Users\User\AppData\Local\Temp\pip-req-build-xtbmoz6p\vendor\llama.cpp\src\unicode-data.cpp"
  llama.cpp
  llama-vocab.cpp
  llama-grammar.cpp
  llama-sampling.cpp
  unicode.cpp
  unicode-data.cpp
  Generating Code...
```
@thewh1teagle Just wait. It can take from 10 minutes to an hour (depending on your PC configuration).
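If you want to see whether the compile is actually progressing rather than hung, pip can stream the native build log as it runs; these are standard pip flags, nothing specific to this project:

```
rem Re-run the install with live build output so long MSVC compiles are visible.
rem --no-cache-dir forces a rebuild instead of reusing a cached wheel.
pip install llama-cpp-python --verbose --no-cache-dir
```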
@Shyryp
@thewh1teagle Sorry, I'm not sure why this might be happening. I haven't looked into it in depth.
@thewh1teagle Can you try re-building with …? Additionally, when building …
Turns out it happens in both llama-cpp-python and llama.cpp in Release mode. I thought it didn't happen in llama.cpp because I had compiled it in the default mode. Not sure why it compiles fast in Debug mode but gets stuck forever at this point in Release.
Linking #1714 as it seems to be the same issue. It looks like aggressive link-time optimization by MSVC.
How can I disable it in Release mode to verify that this is the cause? (cli)

Update: using the following fixed the issue:

```
cmake -B build . -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="/Od"
cmake --build build --config Release --target llama-cli
```
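Note that /Od turns off all optimization, which is a blunt instrument. If the guess above about link-time optimization is right, a narrower experiment is to disable only interprocedural optimization via CMake's standard switch; a sketch, assuming the project does not force LTCG on through its own flags:

```
rem Keep /O2 but ask CMake to disable interprocedural optimization (MSVC /GL + /LTCG).
rem CMAKE_INTERPROCEDURAL_OPTIMIZATION is a standard CMake variable; whether it
rem overrides any flags llama.cpp sets itself is an assumption to verify.
cmake -B build . -DCMAKE_BUILD_TYPE=Release -DCMAKE_INTERPROCEDURAL_OPTIMIZATION=OFF
cmake --build build --config Release --target llama-cli
```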
Can you provide the equivalent for the Python CLI? I am having some issues with the build taking a long time!
```
CMake Warning (dev) at CMakeLists.txt:17 (install):
CMake Warning (dev) at CMakeLists.txt:9 (install):
CMake Warning (dev) at CMakeLists.txt:17 (install):
-- Configuring done (12.4s)
Run Build Command(s): "C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/MSBuild/Current/Bin/amd64/MSBuild.exe" ALL_BUILD.vcxproj /p:Configuration=Release /p:Platform=x64 /p:VisualStudioVersion=17.0 /v:n
Project "C:\Users\poko-\AppData\Local\Temp\tmpj2ijvozw\build\ALL_BUILD.vcxproj" on node 1 (default targets).
C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\ucrt\assert.h(21,9): warning C4005: 'static_assert': macro redefinition [C:\Users\poko-\AppData\Local\Temp\tmpj2ijvozw\build\vendor\llama.cpp\ggml\src\ggml.vcxproj]
MakeDirsForLink:
```
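Regarding the question about the Python side: llama-cpp-python's documented way to pass CMake flags through pip is the CMAKE_ARGS environment variable, so the /Od workaround above would translate to something like this sketch (Windows cmd syntax; untested on this exact setup):

```
rem Forward the optimization-disabling flag into pip's CMake build (cmd syntax).
set CMAKE_ARGS=-DCMAKE_CXX_FLAGS=/Od
pip install llama-cpp-python --no-cache-dir --verbose
```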
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior
The build should complete quickly without getting stuck.
Current Behavior
The build gets stuck.
Environment and Context
Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except under certain specific conditions.
AMD Ryzen 5 4500U
Windows 11
Failure Information (for bugs)
Steps to Reproduce
https://github.com/ggerganov/llama.cpp/tree/1731d4238f9e4f925a750810e7f5480827c66dcf
With the original llama.cpp at this commit, it builds fast without errors with Vulkan enabled (see the sketch below).
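A sketch of the reproduction with Vulkan enabled; GGML_VULKAN as the CMake option name is an assumption inferred from the GGML_USE_VULKAN define visible in the compile log above:

```
rem Build llama-cpp-python from source with the Vulkan backend (cmd syntax).
rem GGML_VULKAN is assumed to be the right CMake option for this llama.cpp vintage.
set CMAKE_ARGS=-DGGML_VULKAN=on
pip install llama-cpp-python --no-cache-dir --verbose
```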
If more information is needed, let me know.
By the way, I think the issue template should be improved... it asks for too many things and isn't streamlined.
It also happens without Vulkan.