
Conversation

@angt angt (Collaborator) commented Dec 3, 2025

Previously, cmake was forcing `_WIN32_WINNT=0x0A00` for MinGW builds. This caused "macro redefined" warnings with toolchains that already define the version.

This also removes the `GGML_WIN_VER` variable as it is no longer needed.

Signed-off-by: Adrien Gallouët <[email protected]>
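A guarded define in the headers avoids the redefinition warning when the toolchain already sets the value. A minimal sketch of that pattern (the exact header location and value used in this PR may differ):

```c
// Target Windows 10 (0x0A00) only if the toolchain or the user has not
// already defined _WIN32_WINNT; this avoids "macro redefined" warnings.
#if defined(_WIN32) && !defined(_WIN32_WINNT)
#define _WIN32_WINNT 0x0A00
#endif
```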
@angt angt requested review from ggerganov and ngxson as code owners December 3, 2025 12:08
@angt angt (Collaborator, Author) commented Dec 3, 2025

I tried to align with the surrounding code; that's why the define is not in the same place.

@github-actions github-actions bot added the `build` (Compilation issues), `examples`, `server`, and `ggml` (changes relating to the ggml tensor library for machine learning) labels on Dec 3, 2025
@danbev danbev merged commit ef75a89 into ggml-org:master Dec 4, 2025
77 of 80 checks passed
khemchand-zetta pushed a commit to khemchand-zetta/llama.cpp that referenced this pull request Dec 4, 2025
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request Dec 4, 2025
* origin/master:
server: strip content-length header on proxy (ggml-org#17734)
server: move msg diffs tracking to HTTP thread (ggml-org#17740)
examples : add missing code block end marker [no ci] (ggml-org#17756)
common : skip model validation when --help is requested (ggml-org#17755)
ggml-cpu : remove asserts always evaluating to false (ggml-org#17728)
convert: use existing local chat_template if mistral-format model has one. (ggml-org#17749)
cmake : simplify build info detection using standard variables (ggml-org#17423)
ci : disable ggml-ci-x64-amd-* (ggml-org#17753)
common: use native MultiByteToWideChar (ggml-org#17738)
metal : use params per pipeline instance (ggml-org#17739)
llama : fix sanity checks during quantization (ggml-org#17721)
build : move _WIN32_WINNT definition to headers (ggml-org#17736)
build: enable parallel builds in msbuild using MTT (ggml-org#17708)
ggml-cpu: remove duplicate conditional check 'iid' (ggml-org#17650)
Add a couple of file types to the text section (ggml-org#17670)
convert : support latest mistral-common (fix conversion with --mistral-format) (ggml-org#17712)
Use OpenAI-compatible `/v1/models` endpoint by default (ggml-org#17689)
webui: Fix zero pasteLongTextToFileLen to disable conversion being overridden (ggml-org#17445)
