
Commit f407fcf

Release v0.3.5.post1 (#2022)
1 parent: 54479d6

File tree: 5 files changed, +10 -10 lines


docker/Dockerfile.rocm (+1 -1)

@@ -1,5 +1,5 @@
 # Usage (to build SGLang ROCm docker image):
-# docker build --build-arg SGL_BRANCH=v0.3.5 -t testImage -f Dockerfile.rocm .
+# docker build --build-arg SGL_BRANCH=v0.3.5.post1 -t testImage -f Dockerfile.rocm .

 # default base image
 ARG BASE_IMAGE="rocm/vllm-dev:20241022"

docs/start/install.md (+4 -4)

@@ -16,7 +16,7 @@ Note: Please check the [FlashInfer installation doc](https://docs.flashinfer.ai/
 ## Method 2: From source
 ```
 # Use the last release branch
-git clone -b v0.3.5 https://github.com/sgl-project/sglang.git
+git clone -b v0.3.5.post1 https://github.com/sgl-project/sglang.git
 cd sglang

 pip install --upgrade pip
@@ -46,7 +46,7 @@ docker run --gpus all \
 Note: To AMD ROCm system with Instinct/MI GPUs, it is recommended to use `docker/Dockerfile.rocm` to build images, example and usage as below:

 ```bash
-docker build --build-arg SGL_BRANCH=v0.3.5 -t v0.3.5-rocm620 -f Dockerfile.rocm .
+docker build --build-arg SGL_BRANCH=v0.3.5.post1 -t v0.3.5.post1-rocm620 -f Dockerfile.rocm .

 alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/dri --ipc=host \
 --shm-size 16G --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
@@ -55,11 +55,11 @@ alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/d
 drun -p 30000:30000 \
 -v ~/.cache/huggingface:/root/.cache/huggingface \
 --env "HF_TOKEN=<secret>" \
-v0.3.5-rocm620 \
+v0.3.5.post1-rocm620 \
 python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000

 # Till flashinfer backend available, --attention-backend triton --sampling-backend pytorch are set by default
-drun v0.3.5-rocm620 python3 -m sglang.bench_latency --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
+drun v0.3.5.post1-rocm620 python3 -m sglang.bench_latency --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
 ```

 ## Method 4: Using docker compose
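
For a quick end-to-end check of the server started by the `drun ... sglang.launch_server` command above, a request can be sent from Python. This is a minimal sketch, not part of the documented install steps; it assumes the server is reachable on localhost:30000 and uses SGLang's native `/generate` endpoint and request shape, neither of which is shown in this diff:

```python
import requests  # third-party HTTP client, assumed to be installed

# Smoke test against the server launched above on port 30000.
# The /generate endpoint and its payload shape are assumptions based on
# SGLang's native HTTP API; they are not touched by this commit.
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "The capital of France is",
        "sampling_params": {"temperature": 0, "max_new_tokens": 16},
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["text"])
```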

python/pyproject.toml (+1 -1)

@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

 [project]
 name = "sglang"
-version = "0.3.5"
+version = "0.3.5.post1"
 description = "SGLang is yet another fast serving framework for large language models and vision language models."
 readme = "README.md"
 requires-python = ">=3.8"

python/sglang/version.py (+1 -1)

@@ -1 +1 @@
-__version__ = "0.3.5"
+__version__ = "0.3.5.post1"
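
The release number lives in two places: the package metadata in `python/pyproject.toml` and the runtime constant in `python/sglang/version.py`, and this commit bumps both in lockstep. A minimal consistency check is sketched below, assuming the package has been installed so that its distribution metadata is visible to `importlib.metadata`:

```python
from importlib.metadata import version as dist_version

from sglang.version import __version__

# The value from pyproject.toml (captured at build/install time) should match
# the runtime constant; both are set to "0.3.5.post1" in this commit.
assert dist_version("sglang") == __version__ == "0.3.5.post1", (
    f"version drift: metadata={dist_version('sglang')}, module={__version__}"
)
```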

test/srt/test_pytorch_sampling_backend.py (+3 -3)

@@ -40,7 +40,7 @@ def test_mmlu(self):
         )

         metrics = run_eval(args)
-        assert metrics["score"] >= 0.65
+        self.assertGreaterEqual(metrics["score"], 0.65)

     def test_greedy(self):

@@ -62,7 +62,7 @@ def test_greedy(self):
             if first_text is None:
                 first_text = text

-            assert text == first_text, f'"{text}" is not identical to "{first_text}"'
+            self.assertEqual(text, first_text)

         first_text = None

@@ -82,7 +82,7 @@ def test_greedy(self):
             text = response_batch[i]["text"]
             if first_text is None:
                 first_text = text
-            assert text == first_text, f'"{text}" is not identical to "{first_text}"'
+            self.assertEqual(text, first_text)


 if __name__ == "__main__":
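
The test changes swap bare `assert` statements for `unittest` assertion methods: bare asserts are stripped when Python runs with `-O` and report only a hand-crafted message, whereas `self.assertGreaterEqual` and `self.assertEqual` always execute and print both operands (including a string diff) on failure. A self-contained sketch of the pattern, using placeholder values rather than the real eval metrics:

```python
import unittest


class AssertionStyleDemo(unittest.TestCase):
    """Illustrates the assertion style adopted in test_pytorch_sampling_backend.py."""

    def test_score_threshold(self):
        score = 0.70  # placeholder for metrics["score"] returned by run_eval
        # Unlike a bare `assert score >= 0.65`, this is never stripped under
        # `python -O` and reports both operands when it fails.
        self.assertGreaterEqual(score, 0.65)

    def test_identical_outputs(self):
        first_text = "greedy output"
        text = "greedy output"  # placeholder for a repeated greedy generation
        # On failure, unittest prints a diff of the two strings, so the
        # hand-written f-string message in the old asserts is unnecessary.
        self.assertEqual(text, first_text)


if __name__ == "__main__":
    unittest.main()
```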
