Misc. bug: ALL gguf models fail to run (no log, docker exit code 139) #12205

@orchidObsessed

Description

Name and Version

Docker:

ghcr.io/ggml-org/llama.cpp:server (b8419)

Operating systems

Linux

Which llama.cpp modules do you know to be affected?

llama-server

Command line

docker run -p 8080:8080 -v /models:/models ghcr.io/ggml-org/llama.cpp:server -m /models/mistral-7b-instruct-v0.1.Q5_K_M.gguf -c 512 --host 0.0.0.0 --port 8080

Problem description & steps to reproduce

With the full, light, and server images, on both the latest tag and several previous ones, ALL gguf models fail to run (no log output, Docker exit code 139), whether running the server or supplying a prompt in interactive mode. With a custom image that clones and compiles llama.cpp from source, all models run successfully.
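Exit code 139 points at a crash rather than a configuration error: Docker reports 128 + N when a container's main process is killed by signal N, so 139 decodes to signal 11 (SIGSEGV). A minimal sketch of the arithmetic, using the exit code from this report:

```shell
#!/bin/sh
# Docker reports 128 + N when the container's main process dies
# from signal N, so exit code 139 decodes to signal 11 (SIGSEGV).
exit_code=139
signal=$((exit_code - 128))
echo "killed by signal $signal"   # prints: killed by signal 11
kill -l "$signal"                 # prints the signal name, e.g. SEGV
```

This also explains the absence of log output: the process dies before llama-server prints anything.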

Sample failing docker compose:

services:
  llama-server:
    image: ghcr.io/ggml-org/llama.cpp:server
    container_name: llm-mistral
    environment:
      # LLAMA_ARG_MODEL: /models/mistral-7b-instruct-v0.1.Q5_K_M.gguf
      LLAMA_ARG_MODEL_URL: "https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q5_K_M.gguf"
      LLAMA_ARG_PORT: 5000
      LLAMA_ARG_HOST: 0.0.0.0
      LLAMA_ARG_NO_WEBUI: "True"
      LLAMA_ARG_CTX_SIZE: 0
      LLAMA_ARG_NO_MMAP: "True"
    volumes:
      - /models:/models
    ports:
      - "5000:5000"
    restart: unless-stopped

First Bad Commit

No response

Relevant log output

None — the container exits with code 139 without producing any output.