chat : fix AST fallback for non-partial parses, lenient TAG_WITH_TAGG…

0083bb1
UPSTREAM PR #20660: Fix chat parser regressions: inference crashes/frozen; output backtracked #1266

LOCI Review / Performance Per Binary #1266 succeeded Mar 18, 2026

Performance varied across binaries, overall acceptable

1 binary improved · 13 binaries unchanged · 1 binary stable (within threshold) · 0 binaries degraded (beyond threshold)

Binary                               Δ Response (%)   Δ Throughput (%)   Performance (based on response time)
build.bin.libggml-base.so                  0.00             0.00         unchanged
build.bin.libggml-cpu.so                   0.00             0.00         unchanged
build.bin.libggml.so                       0.00             0.00         unchanged
build.bin.libllama.so                      0.00             0.00         unchanged
build.bin.libmtmd.so                       0.00             0.00         unchanged
build.bin.llama-bench                      0.00             0.00         unchanged
build.bin.llama-cvector-generator         -0.22            -0.09         improved
build.bin.llama-gemma3-cli                 0.00             0.00         unchanged
build.bin.llama-gguf-split                 0.00             0.00         unchanged
build.bin.llama-llava-cli                  0.00             0.00         unchanged
build.bin.llama-minicpmv-cli               0.00             0.00         unchanged
build.bin.llama-quantize                   0.00             0.00         unchanged
build.bin.llama-qwen2vl-cli                0.00             0.00         unchanged
build.bin.llama-tokenize                   0.00             0.00         unchanged
build.bin.llama-tts                        0.09             0.04         stable

Performance threshold: 30%
Default configuration used.
Note: Performance status is evaluated from Δ Response (%) only; throughput is displayed for reference.
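The classification rule described above (status derived solely from the response-time delta against a 30% threshold) can be sketched as follows. This is an inferred reconstruction from the report's own categories, not the tool's actual implementation; the function name `classify` is illustrative.

```python
THRESHOLD = 30.0  # performance threshold (%) stated in this report

def classify(delta_response_pct: float) -> str:
    """Map a binary's response-time delta (%) to a performance status.

    Inferred rule: zero delta is 'unchanged', any speedup (negative delta)
    is 'improved', a slowdown beyond the threshold is 'degraded', and a
    slowdown within the threshold is 'stable'. Throughput is ignored.
    """
    if delta_response_pct == 0:
        return "unchanged"
    if delta_response_pct < 0:
        return "improved"
    return "degraded" if delta_response_pct > THRESHOLD else "stable"

# Examples matching rows in the table above:
print(classify(0.00))   # unchanged (e.g. libllama.so)
print(classify(-0.22))  # improved  (llama-cvector-generator)
print(classify(0.09))   # stable    (llama-tts)
```

Under this reading, the summary counts follow directly: thirteen zero-delta binaries are unchanged, the one speedup is improved, and the one sub-threshold slowdown is stable.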

Explore the complete analysis inside the Loci Inspector.
Open the Pull Request linked to this check-run.