
Conversation

@kylo5aby (Contributor) commented Aug 2, 2024

When running llama-bench -m model_name -o json on Windows before this change, the output was:

    "build_commit": "a15ef8f8",
    "build_number": 3416,
    "cuda": false,
    "vulkan": false,
    "kompute": false,
    "metal": false,
    "sycl": false,
    "rpc": "0",
    "gpu_blas": false,
    "blas": false,
    "cpu_info": "",
    ....

After this change, the output is:

    "build_commit": "68504f09",
    "build_number": 3455,
    "cuda": false,
    "vulkan": false,
    "kompute": false,
    "metal": false,
    "sycl": false,
    "rpc": "0",
    "gpu_blas": false,
    "blas": false,
    "cpu_info": "Intel(R) Core(TM) Ultra 5 125H",

@slaren (Member) commented Aug 3, 2024

Looks like this is failing on Windows ARM.

@kylo5aby (Contributor, Author) commented Aug 5, 2024

> Looks like this is failing on Windows ARM.

Updated to support ARM, PTAL

@mofosyne added the Review Complexity : Low label on Aug 5, 2024
@kylo5aby (Contributor, Author) commented Aug 6, 2024

I see that in the CI/CD pipeline, Server / server (Release) (pull_request) is failing, but I have run CI/Server on my forked repo against the target branch, where both CI and Server pass, and this change does not affect llama-server.

Here is my CI link: [CI run link and screenshot not preserved]

@slaren (Member) commented Aug 6, 2024

The CI failure is not related to this change and can be ignored. The code looks good, but there are two issues:

  • The hKey is leaked on success
  • RegQueryValueEx is not guaranteed to return a NUL-terminated string, and that needs to be handled correctly to avoid buffer overflows
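
In terms of the sketch above (illustrative lines, not the PR's actual diff), both points reduce to two statements on the success path: terminating the buffer explicitly and releasing the key handle unconditionally.

    buf[size] = '\0';  // RegQueryValueEx may return data without a terminating NUL
    RegCloseKey(hKey); // must also run on success, otherwise the handle leaks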

@kylo5aby (Contributor, Author) commented Aug 6, 2024

Fixed. Thanks for the feedback.

@slaren merged commit 506122d into ggml-org:master on Aug 7, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Aug 7, 2024

* Add support for getting cpu info on Windows for llama_bench

* refactor

---------

Co-authored-by: slaren <[email protected]>

Labels: examples, Review Complexity : Low
