
llama-bench: add -d depth arg #13096

Merged
JohannesGaessler merged 9 commits into ggml-org:master from thevishalagarwal:llama-bench/add-depth-param on Apr 28, 2025

Conversation

@thevishalagarwal
Contributor

Adds a -d / --n-depth arg to llama-bench to run tests with a prefilled KV-cache context.

Relevant discussion #12874

Sample output

$ .\llama-bench.exe -d 0,512

| model                          |       size |     params | backend    | ngl |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| qwen2 7B Q4_K - Medium         |   4.36 GiB |     7.62 B | CUDA       |  99 |           pp512 |      7340.20 ± 23.45 |
| qwen2 7B Q4_K - Medium         |   4.36 GiB |     7.62 B | CUDA       |  99 |           tg128 |        120.60 ± 0.59 |
| qwen2 7B Q4_K - Medium         |   4.36 GiB |     7.62 B | CUDA       |  99 |    pp512 @ d512 |      6425.91 ± 18.88 |
| qwen2 7B Q4_K - Medium         |   4.36 GiB |     7.62 B | CUDA       |  99 |    tg128 @ d512 |        116.71 ± 0.60 |

@JohannesGaessler JohannesGaessler requested a review from slaren April 25, 2025 11:02
@thevishalagarwal thevishalagarwal requested a review from slaren April 25, 2025 21:21
@JohannesGaessler
Contributor

This is fine too. Please fix the trailing whitespace and I'll merge.

@thevishalagarwal
Contributor Author

@JohannesGaessler Can you merge this?

@JohannesGaessler JohannesGaessler merged commit 1831f53 into ggml-org:master Apr 28, 2025
48 checks passed
@JohannesGaessler
Contributor

Yes, I was just waiting for the CI to finish.

@ggerganov
Member

I think there is a problem with the test statistics for non-zero depths:

./bin/llama-bench -m ../models/llama-3.2-1b-instruct/ggml-model-q8_0.gguf -fa 1 -p 1,2,3,4,4,4,4,5,6,7,8 -d 0,1024 -n 32 -t 1
| model         |     size |  params | backend        | threads | fa |          test |             t/s |
| ------------- | -------: | ------: | -------------- | ------: | -: | ------------: | --------------: |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |           pp1 |   263.43 ± 8.77 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |           pp2 |   503.00 ± 2.73 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |           pp3 |   572.72 ± 6.80 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |           pp4 |   742.78 ± 8.86 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |           pp4 |   748.31 ± 2.90 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |           pp4 |   742.75 ± 4.85 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |           pp4 |   746.46 ± 6.38 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |           pp5 |   782.50 ± 3.75 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |           pp6 |  1044.21 ± 3.44 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |           pp7 |  1116.17 ± 5.92 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |           pp8 |  1277.44 ± 3.61 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |          tg32 |   271.58 ± 0.22 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |  pp1 @ d1024 |  220.30 ± 83.63 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |  pp2 @ d1024 | 387.89 ± 189.78 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |  pp3 @ d1024 | 444.76 ± 204.33 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |  pp4 @ d1024 | 566.75 ± 267.05 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |  pp4 @ d1024 | 567.49 ± 257.96 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |  pp4 @ d1024 | 566.70 ± 268.04 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |  pp4 @ d1024 | 568.02 ± 263.91 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |  pp5 @ d1024 | 595.75 ± 273.03 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |  pp6 @ d1024 | 780.02 ± 349.01 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |  pp7 @ d1024 | 817.96 ± 372.68 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 |  pp8 @ d1024 | 932.35 ± 409.11 |
| llama 1B Q8_0 | 1.22 GiB |  1.24 B | Metal,BLAS,RPC |       1 |  1 | tg32 @ d1024 |   254.96 ± 9.40 |

build: f9cd683 (5503)

Notice how the uncertainty of the results for ppX @ d1024 is very large. Also, I would expect the results for -p 1 and -n 32 to be relatively close to each other, but this is not true for larger depths. Any ideas what could be wrong?

@JohannesGaessler
Contributor

The tg32 result has a relative uncertainty of ±3.9%; the pp1 result has a relative uncertainty of ±38.0%. But the tg32 result also did 32 times as many evals, so for an equivalent number of evals the statistical uncertainty of pp1 can be expected to decrease by sqrt(32), which would put it at ±6.7%. That is still more than for tg32, but the individual evals also have much shorter runtimes, so random effects that don't scale with the eval length make up a larger percentage of the runtime. If you look at the uncertainties on the runtimes in absolute terms, it's 4.54 ± 1.72 ms for pp1 and 125.49 ± 4.63 ms for tg32, which I think is fine.
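The arithmetic in this comment can be checked with a short sketch. The t/s figures are taken from the table above; the runtime uncertainty follows from first-order error propagation through t = n / r. The `runtime_ms` helper is illustrative only, not llama-bench code:

```python
import math

def runtime_ms(n_tokens, tps, tps_err):
    """Convert a tokens/s measurement into runtime (ms), propagating the
    uncertainty to first order through t = n / r (so dt = n * dr / r^2)."""
    t = n_tokens / tps * 1000.0
    t_err = n_tokens * tps_err / tps ** 2 * 1000.0
    return t, t_err

# pp1 @ d1024: 220.30 ± 83.63 t/s; tg32 @ d1024: 254.96 ± 9.40 t/s
pp1_rel = 83.63 / 220.30                # ≈ 0.380 → ±38.0 %
pp1_scaled = pp1_rel / math.sqrt(32)    # ≈ 0.067 → ±6.7 % at tg32's eval count
tg32_rel = 9.40 / 254.96                # ≈ 0.037

print(runtime_ms(1, 220.30, 83.63))     # ≈ (4.54, 1.72) ms
print(runtime_ms(32, 254.96, 9.40))     # ≈ (125.5, 4.63) ms
```

This reproduces the absolute runtime figures quoted above from the t/s columns alone.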

@JohannesGaessler
Contributor

If I remember correctly, we currently calculate the means and standard deviations of the t/s values rather than of the runtimes. As long as the differences between runs are small I think this is fine, but for large differences (such as when individual runs are very short) it is not quite correct and can lead to bad estimates of the uncertainty.

If you want to be fancy you could also do Rao-Blackwellization to get a tighter estimate of the uncertainty, but I think this is not needed.
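A toy example (synthetic timings, not measurements from this PR) shows how averaging per-run t/s can diverge from the rate implied by the total runtime when run-to-run variation is large:

```python
import statistics

# Hypothetical per-run timings (seconds) for a pp8 test: the runs vary a lot,
# as they do when individual runs are very short.
n_tokens = 8
runtimes = [0.004, 0.006, 0.012, 0.020]

# Averaging the per-run t/s values (the approach described in the comment):
tps_per_run = [n_tokens / t for t in runtimes]
mean_of_rates = statistics.mean(tps_per_run)

# Alternative: total tokens over total time (the rate implied by the runtimes).
overall_rate = n_tokens * len(runtimes) / sum(runtimes)

print(mean_of_rates)  # 1100.0 t/s — the short, fast runs dominate the average
print(overall_rate)   # ≈ 761.9 t/s
```

The mean of rates is systematically higher than the overall rate here because the fastest runs get the same weight as the slowest ones; with near-identical runtimes the two estimates agree, which is why the discrepancy only matters for very short or noisy runs.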

timwu pushed a commit to timwu/llama.cpp that referenced this pull request Dec 20, 2025
* add depth param

* update llama-bench README and add depth param

* llama-bench: default params for depth arg for faster execution

* Update examples/llama-bench/README.md

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* fix buffer print ub

* use user provided args

* remove extra whitespaces

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

4 participants