UPSTREAM PR #17241: ggml-cpu: handle 3d tensors in repack mat_mul #191

Commit c77bafd: Address performance regression in Qwen and llama.cpp due to chunking
LOCI Review / Performance Review #191 succeeded Nov 13, 2025 in 29m 35s

Performance varied across binaries; overall acceptable.

0 binaries improved · 11 binaries unchanged · 5 binaries stable (within threshold) · 0 binaries degraded (beyond threshold)

| Binary | Δ% Response | Δ% Throughput | Performance (based on response time) |
|---|---|---|---|
| build.bin.libggml-base.so | 0.00 | 0.00 | unchanged |
| build.bin.libggml-cpu.so | 0.00 | 0.00 | unchanged |
| build.bin.libggml.so | 0.00 | 0.00 | unchanged |
| build.bin.libllama.so | 0.00 | 0.39 | unchanged |
| build.bin.libmtmd.so | 0.28 | 0.81 | stable |
| build.bin.llama-bench | 0.23 | 0.79 | stable |
| build.bin.llama-cvector-generator | 0.00 | 0.38 | unchanged |
| build.bin.llama-gemma3-cli | 0.00 | 0.00 | unchanged |
| build.bin.llama-gguf-split | 0.21 | -0.06 | stable |
| build.bin.llama-llava-cli | 0.00 | 0.00 | unchanged |
| build.bin.llama-minicpmv-cli | 0.00 | 0.00 | unchanged |
| build.bin.llama-quantize | 0.22 | -0.03 | stable |
| build.bin.llama-qwen2vl-cli | 0.00 | 0.00 | unchanged |
| build.bin.llama-run | 0.00 | 0.41 | unchanged |
| build.bin.llama-tokenize | 0.21 | 0.01 | stable |
| build.bin.llama-tts | 0.00 | 0.38 | unchanged |

Performance threshold: 30%
Default configuration used.
Note: Performance status is evaluated only from Δ% Response. Throughput is displayed for reference.
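The status labels above follow directly from each binary's Δ% Response and the 30% threshold. Below is a minimal sketch of that classification rule; the function name `classify_status`, the sign convention (positive Δ% Response = slower), and the exact tie-breaking are assumptions for illustration, not LOCI's actual implementation:

```python
def classify_status(delta_response_pct: float, threshold_pct: float = 30.0) -> str:
    """Map a binary's Δ% Response to a status label.

    Assumed convention (not confirmed by the report): a positive Δ% Response
    means response time got worse. Δ% Throughput is ignored, matching the
    note that status is evaluated only from Δ% Response.
    """
    if delta_response_pct == 0.0:
        return "unchanged"
    if abs(delta_response_pct) <= threshold_pct:
        return "stable"      # moved, but within the 30% threshold
    if delta_response_pct > 0.0:
        return "degraded"    # slower, beyond the threshold
    return "improved"        # faster, beyond the threshold


# Example rows from the table above:
assert classify_status(0.23) == "stable"     # llama-bench
assert classify_status(0.00) == "unchanged"  # libggml.so
```

Under this reading, rows such as build.bin.libllama.so stay "unchanged" despite a nonzero Δ% Throughput, because only Δ% Response feeds the status.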

Access the complete analysis in the LOCI Dashboard.