
Info: llama.cpp-related issue when using docker-model-runner with gpt-oss and tooling enabled #131

@optimisticupdate

Description

While testing docker-model-runner with gpt-oss, I noticed a problem that only occurs when tooling (tool/function calling) is enabled.
This is not an issue with docker-model-runner itself; it appears to be caused by upstream changes in llama.cpp.
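
For context, "tooling enabled" here means including a tool/function definition in the chat request. Below is a minimal reproduction sketch against the OpenAI-compatible API; the endpoint, port, and model tag are assumptions and will need to be adjusted for your docker-model-runner setup.

```python
import requests

# Assumed OpenAI-compatible endpoint exposed by docker-model-runner; adjust host/port/path as needed.
BASE_URL = "http://localhost:12434/engines/llama.cpp/v1"

payload = {
    "model": "ai/gpt-oss",  # placeholder model tag
    "messages": [{"role": "user", "content": "What's the weather in Berlin?"}],
    # The problem only shows up when a "tools" definition is included in the request.
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

resp = requests.post(f"{BASE_URL}/chat/completions", json=payload, timeout=120)
print(resp.status_code)
print(resp.text)
```

With the tools block removed, the same request completes normally; with it present, the failure described above is triggered.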

For reference, there’s an open PR in llama.cpp that may resolve it:
ggml-org/llama.cpp#15158

I'm just sharing this here so others are aware, and I'm looking forward to a new release image once the fix is available upstream.

Since I haven't been very active in open-source contribution over the last few years, I hope it's fine to post this here.
