
ggml: src: ggml-remotingfrontend/ggml-backend: add stub for .graph_op…

f28602d

UPSTREAM PR #17072: [RFC] ggml: new backend for API Remoting #114

LOCI Review / Performance Review #114 succeeded Nov 7, 2025 in 26m 56s

Performance varied across binaries; the overall result is acceptable.

0 binaries improved · 11 binaries unchanged · 5 binaries stable (within threshold) · 0 binaries degraded (beyond threshold)

| Binary | Δ% Response | Δ% Throughput | Performance (based on response time) |
| --- | ---: | ---: | --- |
| build.bin.libggml-base.so | 0.00 | 0.00 | unchanged |
| build.bin.libggml-cpu.so | 0.00 | 0.00 | unchanged |
| build.bin.libggml.so | 0.00 | 0.00 | unchanged |
| build.bin.libllama.so | 0.02 | -0.00 | stable |
| build.bin.libmtmd.so | 0.00 | 0.00 | unchanged |
| build.bin.llama-bench | 1.40 | -0.00 | stable |
| build.bin.llama-cvector-generator | 0.02 | -0.00 | stable |
| build.bin.llama-gemma3-cli | 0.00 | 0.00 | unchanged |
| build.bin.llama-gguf-split | 0.00 | 0.00 | unchanged |
| build.bin.llama-llava-cli | 0.00 | 0.00 | unchanged |
| build.bin.llama-minicpmv-cli | 0.00 | 0.00 | unchanged |
| build.bin.llama-quantize | 0.00 | 0.00 | unchanged |
| build.bin.llama-qwen2vl-cli | 0.00 | 0.00 | unchanged |
| build.bin.llama-run | 0.02 | -0.00 | stable |
| build.bin.llama-tokenize | 0.00 | 0.00 | unchanged |
| build.bin.llama-tts | 0.02 | -0.00 | stable |

Performance threshold: 30%
Default configuration used.
Note: Performance status is evaluated only from Δ% Response. Throughput is displayed for reference.
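The classification rule implied by the notes above (status derived solely from Δ% Response, against a 30% threshold) can be sketched as follows. This is a hypothetical reconstruction, not the actual LOCI implementation; the function name `classify` and the exact tie-breaking at 0.00 are assumptions consistent with the table values in this report.

```python
def classify(delta_response: float, threshold: float = 30.0) -> str:
    """Hypothetical sketch of the per-binary status, assuming the status
    depends only on the Δ% Response value and a fixed percent threshold."""
    if delta_response == 0.0:
        return "unchanged"                     # no measurable change
    if abs(delta_response) <= threshold:
        return "stable"                        # within threshold
    # beyond threshold: slower responses degrade, faster ones improve
    return "degraded" if delta_response > 0 else "improved"
```

Under these assumptions, the 0.02 and 1.40 Δ% Response entries in the table fall well inside the 30% threshold and are reported as "stable", while the 0.00 entries map to "unchanged".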

Access the complete analysis in the LOCI Dashboard.
Open the Pull Request linked to this check-run.