Commit f20f2c9

add docs and container build improvements #43
1 parent 7a97c38 commit f20f2c9

File tree: 4 files changed (+93, -23 lines)

.github/workflows/containers.yml

+8 -6

@@ -1,12 +1,14 @@
 name: Build Containers
 
 on:
-  # schedule:
-  #   - cron: "0 11 * * *" # Runs daily at 11 AM UTC (3 AM PST)
-  # push:
-  #   tags:
-  #     - "*" # Triggers on any new tag
-  workflow_dispatch: # Allows manual triggering of the workflow
+  # time has no specific meaning, trying to time it after
+  # the llama.cpp daily packages are published
+  # https://github.com/ggerganov/llama.cpp/blob/master/.github/workflows/docker.yml
+  schedule:
+    - cron: "37 5 * * *"
+
+  # Allows manual triggering of the workflow
+  workflow_dispatch:
 
 jobs:
   build-and-push:

README.md

+65 -15

@@ -1,22 +1,11 @@
 ![llama-swap header image](header.jpeg)
 
 # llama-swap
+
 llama-swap is a lightweight, transparent proxy server that provides automatic model swapping to llama.cpp's server.
 
 Written in golang, it is very easy to install (single binary with no dependencies) and configure (single yaml file).
 
-Download a pre-built [release](https://github.com/mostlygeek/llama-swap/releases) or build it yourself from source with `make clean all`.
-
-## How does it work?
-When a request is made to an OpenAI compatible endpoint, llama-swap will extract the `model` value and load the appropriate server configuration to serve it. If a server is already running it will stop it and start the correct one. This is where the "swap" part comes in. The upstream server is automatically swapped to the correct one to serve the request.
-
-In the most basic configuration llama-swap handles one model at a time. For more advanced use cases, the `profiles` feature can load multiple models at the same time. You have complete control over how your system resources are used.
-
-## Do I need to use llama.cpp's server (llama-server)?
-Any OpenAI compatible server would work. llama-swap was originally designed for llama-server and it is the best supported.
-
-For Python based inference servers like vllm or tabbyAPI, it is recommended to run them via podman or docker. This provides clean environment isolation as well as responding correctly to `SIGTERM` signals to shut down.
-
 ## Features:
 
 - ✅ Easy to deploy: single binary with no dependencies

@@ -37,6 +26,66 @@ For Python based inference servers like vllm or tabbyAPI it is recommended to ru
 - ✅ Use any local OpenAI compatible server (llama.cpp, vllm, tabbyAPI, etc)
 - ✅ Direct access to upstream HTTP server via `/upstream/:model_id` ([demo](https://github.com/mostlygeek/llama-swap/pull/31))
 
+## Docker Install ([download images](https://github.com/mostlygeek/llama-swap/pkgs/container/llama-swap))
+
+Docker is the quickest way to try out llama-swap:
+
+```
+$ docker run -it --rm --runtime nvidia -p 9292:8080 ghcr.io/mostlygeek/llama-swap:cuda
+
+
+# qwen2.5 0.5B
+$ curl -s http://localhost:9292/v1/chat/completions \
+    -H "Content-Type: application/json" \
+    -H "Authorization: Bearer no-key" \
+    -d '{"model":"qwen2.5","messages": [{"role": "user","content": "tell me a joke"}]}' | \
+    jq -r '.choices[0].message.content'
+
+
+# SmolLM2 135M
+$ curl -s http://localhost:9292/v1/chat/completions \
+    -H "Content-Type: application/json" \
+    -H "Authorization: Bearer no-key" \
+    -d '{"model":"smollm2","messages": [{"role": "user","content": "tell me a joke"}]}' | \
+    jq -r '.choices[0].message.content'
+```
+
+Docker images are [published nightly](https://github.com/mostlygeek/llama-swap/pkgs/container/llama-swap) and include the latest llama-swap and llama-server:
+
+- `ghcr.io/mostlygeek/llama-swap:cuda`
+- `ghcr.io/mostlygeek/llama-swap:intel`
+- `ghcr.io/mostlygeek/llama-swap:vulkan`
+- `ghcr.io/mostlygeek/llama-swap:musa`
+
+Specific versions are also available, tagged with the llama-swap, architecture and llama.cpp versions. For example: `ghcr.io/mostlygeek/llama-swap:v89-cuda-b4716`
+
+Beyond the demo, you will likely want to run the containers with your downloaded models and custom configuration.
+
+```
+$ docker run -it --rm --runtime nvidia -p 9292:8080 \
+    -v /path/to/models:/models \
+    -v /path/to/custom/config.yaml:/app/config.yaml \
+    ghcr.io/mostlygeek/llama-swap:cuda
+```
+
+## Bare metal Install ([download](https://github.com/mostlygeek/llama-swap/releases))
+
+Pre-built binaries are available for Linux, FreeBSD and Darwin (OSX). These are automatically published and are likely a few hours ahead of the docker releases. The bare metal install works with any OpenAI compatible server, not just llama-server.
+
+You can also build llama-swap yourself from source with `make clean all`.
+
+## How does llama-swap work?
+
+When a request is made to an OpenAI compatible endpoint, llama-swap will extract the `model` value and load the appropriate server configuration to serve it. If a server is already running it will stop it and start the correct one. This is where the "swap" part comes in. The upstream server is automatically swapped to the correct one to serve the request.
+
+In the most basic configuration llama-swap handles one model at a time. For more advanced use cases, the `profiles` feature can load multiple models at the same time. You have complete control over how your system resources are used.
+
+## Do I need to use llama.cpp's server (llama-server)?
+
+Any OpenAI compatible server would work. llama-swap was originally designed for llama-server and it is the best supported.
+
+For Python based inference servers like vllm or tabbyAPI, it is recommended to run them via podman or docker. This provides clean environment isolation as well as responding correctly to `SIGTERM` signals to shut down.
+
 ## config.yaml
 
 llama-swap's configuration is purposefully simple.

@@ -59,8 +108,8 @@ models:
 
     # aliases names to use this model for
     aliases:
-    - "gpt-4o-mini"
-    - "gpt-3.5-turbo"
+      - "gpt-4o-mini"
+      - "gpt-3.5-turbo"
 
     # check this path for an HTTP 200 OK before serving requests
     # default: /health to match llama.cpp

@@ -121,7 +170,7 @@ profiles:
 
 1. Create a configuration file, see [config.example.yaml](config.example.yaml)
 1. Download a [release](https://github.com/mostlygeek/llama-swap/releases) appropriate for your OS and architecture.
-   * _Note: Windows currently untested._
+   - _Note: Windows currently untested._
 1. Run the binary with `llama-swap --config path/to/config.yaml`
 
 ### Building from source

@@ -156,6 +205,7 @@ curl -Ns 'http://host/logs/stream?no-history'
 Use this unit file to start llama-swap on boot. This is only tested on Ubuntu.
 
 `/etc/systemd/system/llama-swap.service`
+
 ```
 [Unit]
 Description=llama-swap
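
The "How does llama-swap work?" section added above boils down to: read the `model` field from the incoming OpenAI-style request, make sure the matching upstream server is running, then forward the request. The Go sketch below only illustrates that flow under assumed names; `modelConfig`, `swapper`, and the swap/health-check placeholders are not llama-swap's actual types or code.

```
package main

import (
    "bytes"
    "encoding/json"
    "io"
    "net/http"
    "net/http/httputil"
    "net/url"
)

// modelConfig mirrors one entry under "models:" in config.yaml
// (illustrative field names only).
type modelConfig struct {
    Proxy string // upstream base URL, e.g. "http://127.0.0.1:9999"
    Cmd   string // command used to launch the upstream server
}

type swapper struct {
    models  map[string]modelConfig // keyed by model name or alias
    current string                 // model whose upstream is currently running
}

// ServeHTTP extracts the "model" value from the OpenAI-style JSON body,
// swaps the upstream if a different model is loaded, then proxies the request.
func (s *swapper) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    body, err := io.ReadAll(r.Body)
    if err != nil {
        http.Error(w, "could not read body", http.StatusBadRequest)
        return
    }

    var req struct {
        Model string `json:"model"`
    }
    if err := json.Unmarshal(body, &req); err != nil {
        http.Error(w, "invalid JSON body", http.StatusBadRequest)
        return
    }

    cfg, ok := s.models[req.Model]
    if !ok {
        http.Error(w, "unknown model: "+req.Model, http.StatusNotFound)
        return
    }

    if s.current != req.Model {
        // The real proxy stops the running upstream, starts cfg.Cmd, and
        // waits for its health check to pass before continuing (omitted here).
        s.current = req.Model
    }

    target, err := url.Parse(cfg.Proxy)
    if err != nil {
        http.Error(w, "bad upstream URL", http.StatusInternalServerError)
        return
    }
    r.Body = io.NopCloser(bytes.NewReader(body)) // restore the consumed body
    httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
}

func main() {
    s := &swapper{models: map[string]modelConfig{
        "qwen2.5": {Proxy: "http://127.0.0.1:9999", Cmd: "/app/llama-server -hf bartowski/Qwen2.5-0.5B-Instruct-GGUF:Q4_K_M --port 9999"},
    }}
    http.ListenAndServe(":8080", s)
}
```

A request like the `curl` examples in the README above would hit a handler of this shape, which is why changing only the `"model"` value is enough to switch between `qwen2.5` and `smollm2`.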

docker/config.example.yaml

+17

@@ -0,0 +1,17 @@
+healthCheckTimeout: 300
+logRequests: true
+
+models:
+  "qwen2.5":
+    proxy: "http://127.0.0.1:9999"
+    cmd: >
+      /app/llama-server
+      -hf bartowski/Qwen2.5-0.5B-Instruct-GGUF:Q4_K_M
+      --port 9999
+
+  "smollm2":
+    proxy: "http://127.0.0.1:9999"
+    cmd: >
+      /app/llama-server
+      -hf bartowski/SmolLM2-135M-Instruct-GGUF:Q4_K_M
+      --port 9999
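
For context on `healthCheckTimeout: 300`: the README notes that the proxy checks the upstream's `/health` path for an HTTP 200 OK before serving requests. Below is a rough Go sketch of that kind of readiness wait; it assumes the timeout is in seconds, and `waitForHealthy` is a made-up helper name, not llama-swap's actual implementation.

```
package main

import (
    "fmt"
    "net/http"
    "time"
)

// waitForHealthy polls baseURL+path until it answers HTTP 200 OK or the
// timeout elapses, roughly what healthCheckTimeout: 300 (seconds) implies.
func waitForHealthy(baseURL, path string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    client := &http.Client{Timeout: 2 * time.Second}

    for time.Now().Before(deadline) {
        resp, err := client.Get(baseURL + path)
        if err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil // upstream is ready, requests can be proxied
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("upstream %s not healthy after %s", baseURL, timeout)
}

func main() {
    // Matches the example config: proxy http://127.0.0.1:9999, default /health path.
    if err := waitForHealthy("http://127.0.0.1:9999", "/health", 300*time.Second); err != nil {
        fmt.Println(err)
    }
}
```

Both models in the example config reuse port 9999, which works because only one upstream runs at a time; queued requests are forwarded once the newly started server passes this kind of check.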

docker/llama-swap.Containerfile

+3 -2

@@ -5,11 +5,12 @@ FROM ghcr.io/ggerganov/llama.cpp:${BASE_TAG}
 ARG LS_VER=89
 
 WORKDIR /app
-
 RUN \
   curl -LO https://github.com/mostlygeek/llama-swap/releases/download/v"${LS_VER}"/llama-swap_"${LS_VER}"_linux_amd64.tar.gz && \
   tar -zxf llama-swap_"${LS_VER}"_linux_amd64.tar.gz && \
   rm llama-swap_"${LS_VER}"_linux_amd64.tar.gz
 
+COPY config.example.yaml /app/config.yaml
 
-ENTRYPOINT [ "/app/llama-swap", "--config", "/config.yaml" ]
+HEALTHCHECK CMD curl -f http://localhost:8080/ || exit 1
+ENTRYPOINT [ "/app/llama-swap", "-config", "/app/config.yaml" ]
