
cmake: fix cli build when LLAMA_BUILD_SERVER=OFF#18670

Merged
ngxson merged 1 commit into ggml-org:master from AsbjornOlling:fix-unconditionally-include-server-subdir on Jan 9, 2026

Conversation


@AsbjornOlling AsbjornOlling commented Jan 7, 2026

The Problem

Building with -DLLAMA_BUILD_SERVER=OFF currently fails, because llama-cli now depends on server-context (since #17824).

To reproduce the issue on current latest master (commit 5642667):

cmake -B build -DLLAMA_BUILD_SERVER=OFF
cmake --build build

It will produce the following error:

[ 95%] Building CXX object tools/cli/CMakeFiles/llama-cli.dir/cli.cpp.o
In file included from /home/asbjorn/Development/llama-cpp-rs/llama-cpp-sys-2/llama.cpp/tools/cli/../server/server-task.h:12,
                 from /home/asbjorn/Development/llama-cpp-rs/llama-cpp-sys-2/llama.cpp/tools/cli/../server/server-context.h:2,
                 from /home/asbjorn/Development/llama-cpp-rs/llama-cpp-sys-2/llama.cpp/tools/cli/cli.cpp:6:
/home/asbjorn/Development/llama-cpp-rs/llama-cpp-sys-2/llama.cpp/tools/cli/../server/server-common.h:7:10: fatal error: mtmd.h: No such file or directory
    7 | #include "mtmd.h"
      |          ^~~~~~~~
compilation terminated.
make[2]: *** [tools/cli/CMakeFiles/llama-cli.dir/build.make:76: tools/cli/CMakeFiles/llama-cli.dir/cli.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:4013: tools/cli/CMakeFiles/llama-cli.dir/all] Error 2
make: *** [Makefile:146: all] Error 2

This build error is what has prevented utilityai/llama-cpp-rs from bumping the llama.cpp version for the past month.

The Fix

Originally, I fixed this by moving the if (LLAMA_BUILD_SERVER) conditional from tools/CMakeLists.txt into the part of tools/server/CMakeLists.txt that builds the actual server executable. With that change, LLAMA_BUILD_SERVER=OFF would still build the server shared libs (which llama-cli depends on), just not the server executable itself.

Following review feedback, I instead moved the cli target into the LLAMA_BUILD_SERVER conditional. If LLAMA_BUILD_TOOLS=ON and LLAMA_BUILD_SERVER=OFF, neither cli nor server is built, but the rest of the tools are.
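As a rough sketch, the merged approach in tools/CMakeLists.txt might look like the following (a hypothetical simplification for illustration, not the verbatim upstream file):

```cmake
# Hypothetical sketch of tools/CMakeLists.txt after this PR -- not the
# verbatim upstream file. The server and cli subdirectories are only
# added when LLAMA_BUILD_SERVER is ON; other tools build regardless.
if (LLAMA_BUILD_SERVER)
    add_subdirectory(server)
    add_subdirectory(cli)
endif()

# tools unaffected by LLAMA_BUILD_SERVER, e.g. multimodal support
add_subdirectory(mtmd)
```

The design choice here is that cli is treated as part of the server feature set (since it links against server-context), rather than carving the server shared libs out into an always-built target.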

I have run the ci/run.sh testing suite to confirm that everything works as before.

AI Disclosure

The code changes in this PR were all made and verified by human hands and eyes, but LLM assistance was used for identifying the problem and suggesting several solutions (one of which is this solution).

@AsbjornOlling AsbjornOlling changed the title from "cmake: always include server subdir for shared libs" to "cmake: fix cli build when LLAMA_BUILD_SERVER=OFF" on Jan 7, 2026

ngxson commented Jan 7, 2026

You can skip building llama-cli when build_server is disabled. I prefer that simple fix. You won't use cli for binding anyway


AsbjornOlling commented Jan 8, 2026

> You can skip building llama-cli when build_server is disabled. I prefer that simple fix. You won't use cli for binding anyway

We don't need llama-cli for bindings, but don't we need to have LLAMA_BUILD_TOOLS enabled in order to have multimodal support in the bindings? It seems like all of the mtmd stuff lives in tools/mtmd/.

I have tested setting LLAMA_BUILD_TOOLS=OFF in utilityai/llama-cpp-rs, and it does indeed result in a linking error. The flag was enabled in llama-cpp-rs when multimodal support was added in utilityai/llama-cpp-rs#790.
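For context, the reason disabling LLAMA_BUILD_TOOLS breaks multimodal bindings can be sketched as follows (a simplified, assumed rendering of llama.cpp's top-level CMakeLists.txt, not the verbatim file):

```cmake
# Simplified sketch (assumption, not verbatim upstream code): the tools/
# subdirectory -- which contains tools/mtmd/ and its library -- is only
# entered when LLAMA_BUILD_TOOLS is ON. Bindings that link against the
# mtmd library therefore need to keep LLAMA_BUILD_TOOLS enabled.
if (LLAMA_BUILD_TOOLS)
    add_subdirectory(tools)
endif()
```

This is why the fix in this PR gates only cli and server on LLAMA_BUILD_SERVER, leaving the rest of tools/ (including mtmd) available to bindings.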


ngxson commented Jan 8, 2026

Have you read my suggestion above?

@AsbjornOlling AsbjornOlling force-pushed the fix-unconditionally-include-server-subdir branch from e89a2e2 to 3c2e276 on January 8, 2026 at 10:37
@AsbjornOlling
Contributor Author

> Have you read my suggestion above?

Ah okay. I read it, but misunderstood it. Sorry about that. 😅

I implemented your suggestion now.

@ngxson ngxson merged commit a180ba7 into ggml-org:master Jan 9, 2026
74 of 75 checks passed
gary149 pushed a commit to gary149/llama-agent that referenced this pull request Jan 13, 2026
dillon-blake pushed a commit to Boxed-Logic/llama.cpp that referenced this pull request Jan 15, 2026
