UPSTREAM PR #17744: model: add llama 4 scaling for mistral-large (deepseek arch) #423

Commit 49d2305: model: add llama 4 scaling for mistral-large (deepseek arch)
LOCI Review / Performance Review #423 succeeded Dec 3, 2025 in 35m 15s
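
For context, "llama 4 scaling" refers to Llama 4's attention temperature tuning, where query states are multiplied by a position-dependent factor so attention stays well-behaved at long context lengths. The sketch below is illustrative only and assumes the formula from the published Llama 4 reference implementation; the function name and the hyperparameter values (those published for Llama 4 Scout) are not taken from this PR:

```cpp
#include <cmath>
#include <cstdio>

// Llama 4-style attention temperature scaling (illustrative sketch, not the PR's code).
// Each query at token position `pos` is multiplied by a slowly growing factor:
//   log(floor((pos + 1) / floor_scale) + 1) * attn_scale + 1
static float llama4_attn_scale(int pos, float floor_scale, float attn_scale) {
    return std::log(std::floor((pos + 1.0f) / floor_scale) + 1.0f) * attn_scale + 1.0f;
}

int main() {
    // Hyperparameters as published for Llama 4 Scout; other models may differ.
    const float floor_scale = 8192.0f;
    const float attn_scale  = 0.1f;
    for (int pos : {0, 8191, 8192, 65535, 1048575}) {
        std::printf("pos = %7d -> scale = %.4f\n",
                    pos, llama4_attn_scale(pos, floor_scale, attn_scale));
    }
    return 0;
}
```

The scale is exactly 1.0 for positions below floor_scale and grows logarithmically beyond it, which is why short-context behavior is unaffected.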

Stable performance (within threshold)

0 binaries improved · 15 binaries unchanged · 1 binary stable (within threshold) · 0 binaries degraded (beyond threshold)

| Binary | Δ% Response | Δ% Throughput | Performance (based on response time) |
|---|---|---|---|
| build.bin.libggml-base.so | 0 | 0 | unchanged |
| build.bin.libggml-cpu.so | 0 | 0 | unchanged |
| build.bin.libggml.so | 0 | 0 | unchanged |
| build.bin.libllama.so | 0.13 | 0.08 | stable |
| build.bin.libmtmd.so | 0 | 0 | unchanged |
| build.bin.llama-bench | 0 | 0 | unchanged |
| build.bin.llama-cvector-generator | 0 | 0 | unchanged |
| build.bin.llama-gemma3-cli | 0 | 0 | unchanged |
| build.bin.llama-gguf-split | 0 | 0 | unchanged |
| build.bin.llama-llava-cli | 0 | 0 | unchanged |
| build.bin.llama-minicpmv-cli | 0 | 0 | unchanged |
| build.bin.llama-quantize | 0 | 0 | unchanged |
| build.bin.llama-qwen2vl-cli | 0 | 0 | unchanged |
| build.bin.llama-run | 0 | 0 | unchanged |
| build.bin.llama-tokenize | 0 | 0 | unchanged |
| build.bin.llama-tts | 0 | 0 | unchanged |

Performance threshold: 30%
Default configuration used.
Note: Performance status is evaluated only from Δ% Response. Throughput is displayed for reference.
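
The status column follows mechanically from Δ% Response and the 30% threshold. A minimal sketch of that classification, assuming "unchanged" means an exactly-zero delta and that improvement, like degradation, must exceed the threshold; this is a reading of the note above, not LOCI's actual implementation:

```cpp
#include <string>

// Classify one binary from its response-time delta (percent).
// Per the note, only Δ% Response drives the status; Δ% Throughput is
// displayed for reference and is deliberately ignored here.
std::string classify(double delta_response_pct, double threshold_pct = 30.0) {
    if (delta_response_pct == 0.0)            return "unchanged";
    if (delta_response_pct >=  threshold_pct) return "degraded";  // slower beyond threshold
    if (delta_response_pct <= -threshold_pct) return "improved";  // faster beyond threshold
    return "stable";                                              // nonzero, within threshold
}
```

Applying this rule to the 16 rows above reproduces the summary counts: build.bin.libllama.so's +0.13% is nonzero but far below 30%, hence "stable", and every other binary has a delta of exactly 0, hence "unchanged".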

Explore the complete analysis in Version Insights.