
UPSTREAM PR #17091: add version to all shared object files#127

Open
DajanaV wants to merge 1 commit into main from upstream-PR17091-branch_furrysalamander-master

Conversation

@DajanaV
Collaborator

@DajanaV DajanaV commented Nov 7, 2025

Mirrored from ggml-org/llama.cpp#17091

While working on a Yocto recipe for llama.cpp, I got some QA errors because the .so files don't have a version set. This is an easy enough fix: you just have to specify one in CMake.

I've done my best to set these with what I understand are appropriate values, but if I'm pulling in inappropriate variables for any of these, by all means let's fix them. The mtmd stuff in particular seemed a bit ambiguous, but even though it's a "subproject", I imagine the desire (for now) is still to release (and version) it together with llama.cpp as a whole?
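The fix described above boils down to setting version properties on each library target. A minimal sketch of what such a change looks like, assuming a `llama` target and a `LLAMA_INSTALL_VERSION` variable (the actual target names, variables, and SOVERSION policy in the PR may differ):

```cmake
# Sketch: attach version metadata to a shared library target so the build
# emits libllama.so.<version> plus the usual libllama.so.<major> and
# libllama.so symlinks, satisfying Yocto's QA check for versioned .so files.
set_target_properties(llama PROPERTIES
    VERSION   ${LLAMA_INSTALL_VERSION}  # full version embedded in the filename
    SOVERSION 1)                        # hypothetical ABI major, used as the SONAME
```

With these properties set, CMake also embeds the SONAME (`libllama.so.1` in this sketch) into the binary so the dynamic loader records a versioned dependency.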

@DajanaV DajanaV force-pushed the main branch 21 times, most recently from 6aa5dc2 to 81cedf2 Compare November 10, 2025 16:10
When compiling llama.cpp in Yocto, it fails QA checks because the generated .so files aren't versioned. This applies a version to all generated .so files, allowing the package to build without errors.
@DajanaV DajanaV force-pushed the upstream-PR17091-branch_furrysalamander-master branch from 8e5f541 to 2719f80 Compare November 10, 2025 17:35
@loci-review

loci-review bot commented Nov 10, 2025

Access the complete analysis in the LOCI Dashboard

Performance Analysis Summary

Overview

PR #127 implements CMake build configuration changes to add version information to shared object files. The analysis reveals a significant discrepancy between the intended scope (build-only changes) and the performance measurement results.

Key Findings

Performance Metrics Impact:

  • Complete elimination of measurable performance metrics across all functions
  • Power consumption reduction: 100% decrease across all 15 analyzed binaries, including core libraries (libllama.so, libggml-cpu.so, libggml-base.so) and executables (llama-run, llama-tts, llama-cvector-generator)
  • Total estimated power consumption: Reduced from ~1.75 million nJ to 0 nJ

Core Function Analysis:

  • Critical inference functions (llama_decode, llama_model_load_from_file) show no accessible performance data in target version
  • CFG analysis reveals complete absence of control flow data for target version, while base version contains normal execution patterns
  • Function insights return empty metrics for all analyzed functions

Inference Performance Impact:

  • No direct impact on tokens per second expected from CMake-only changes
  • Measurement anomaly: Zero power consumption suggests either aggressive optimization eliminating executable code or analysis tool limitations with new binary format

Technical Discrepancy:

  • GitHub code review shows only CMake set_target_properties() additions for library versioning
  • Performance analysis indicates fundamental changes in binary structure or measurement capability
  • Build system changes should not affect runtime performance metrics

Actionable Recommendations

Immediate Investigation Required:

  • Verify binary integrity and executable status of target version libraries
  • Confirm that core inference functions (llama_decode, llama_encode, llama_tokenize) remain functional in built binaries
  • Validate static analysis tools can process the new versioned binary format
  • Compare actual runtime performance between versions to confirm CMake changes don't inadvertently affect execution

Build Verification:

  • Ensure version variables (${GGML_VERSION}, ${LLAMA_INSTALL_VERSION}) are properly defined
  • Test that versioned shared libraries load correctly in runtime environments
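Both verification steps can be exercised with standard binutils. A self-contained sketch that builds a trivial versioned library the way CMake's `VERSION`/`SOVERSION` properties would, then inspects the embedded SONAME (the `libdemo` name and `1.2.3` version are illustrative, not the PR's actual values):

```shell
# Sketch: reproduce the versioned-library layout by hand and verify it.
echo 'int answer(void) { return 42; }' > demo.c
cc -shared -fPIC -Wl,-soname,libdemo.so.1 -o libdemo.so.1.2.3 demo.c
ln -sf libdemo.so.1.2.3 libdemo.so.1   # loader symlink (matches the SONAME)
ln -sf libdemo.so.1 libdemo.so         # linker symlink (used at build time)
# The SONAME embedded in the binary is what the dynamic loader records:
readelf -d libdemo.so.1.2.3 | grep SONAME
```

For the real libraries, running `readelf -d` on the installed `libllama.so.*` and checking that the SONAME and symlink chain line up is exactly what Yocto's QA pass expects.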

The complete absence of performance metrics in the target version requires immediate verification to distinguish between legitimate optimization and measurement limitations.

@DajanaV DajanaV force-pushed the main branch 4 times, most recently from 973f45e to 1a27925 Compare November 10, 2025 22:08
@loci-dev loci-dev force-pushed the main branch 23 times, most recently from 048ad94 to 6c1fde6 Compare February 3, 2026 13:32
@loci-dev loci-dev force-pushed the main branch 7 times, most recently from 2f4d02d to 073bd79 Compare February 18, 2026 02:17