Try the new Gemma 3n models:

local-ai run gemma-3n-e2b-it
local-ai run gemma-3n-e4b-it

⚠️ Breaking Changes
Several important changes that reduce image size, simplify the ecosystem, and pave the way for a leaner LocalAI core:
🧰 Container Image Changes
📁 Directory Structure Updated
New default model and backend paths for container images:

- /models/ (was /build/models)
- /backends/ (was /build/backends)

🏷 Unified Image Tag Naming for master (development) builds

We've cleaned up and standardized container image tags for clarity and consistency:

- gpu-nvidia-cuda11 and gpu-nvidia-cuda12 (previously cublas-cuda11, cublas-cuda12)
- gpu-intel-f16 and gpu-intel-f32 (previously sycl-f16, sycl-f32)

Meta packages in backend galleries
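If you bind-mount models or backends into the container, the mount targets need updating for the new layout. A minimal sketch of the rewrite, assuming only the two default directories moved (the migrate_mount helper is hypothetical, for illustration):

```python
# Default directories moved in the 3.1.0 container images:
# /build/models -> /models and /build/backends -> /backends.
PATH_CHANGES = {
    "/build/models": "/models",
    "/build/backends": "/backends",
}

def migrate_mount(target: str) -> str:
    """Rewrite a container mount target from the pre-3.1.0 layout.

    Paths outside the two relocated directories are returned unchanged.
    """
    for old, new in PATH_CHANGES.items():
        if target == old or target.startswith(old + "/"):
            return new + target[len(old):]
    return target

print(migrate_mount("/build/models/gemma.gguf"))  # -> /models/gemma.gguf
```

In practice this just means changing the right-hand side of your -v flags or compose volume entries, e.g. mounting your host models directory at /models instead of /build/models.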
We’ve introduced meta-packages to the backend gallery!
These packages automatically install the most suitable backend depending on the GPU detected in your system, saving time, reducing errors, and ensuring you get the right setup out of the box. They will be available as soon as the 3.1.0 images are published, so stay tuned!
For instance, you will be able to install vllm just by installing the vllm backend from the gallery, with no need to pick the correct GPU variant manually.

The Complete Local Stack for Privacy-First AI
With LocalAGI rejoining LocalAI alongside LocalRecall, our ecosystem provides a complete, open-source stack for private, secure, and intelligent AI operations:
LocalAI
The free, Open Source OpenAI alternative. Acts as a drop-in replacement REST API compatible with OpenAI specifications for local AI inferencing. No GPU required.
Link: https://github.com/mudler/LocalAI
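Because the API follows the OpenAI specification, any OpenAI-compatible client can talk to it. A minimal sketch that builds a chat-completion request body; the base URL (LocalAI's default port 8080) and the model name are assumptions to adjust for your deployment:

```python
import json

# Assumptions for illustration: LocalAI listening on its default port,
# with a Gemma 3n model installed. Adjust both for your setup.
BASE_URL = "http://localhost:8080"

def chat_completion_body(model: str, prompt: str) -> dict:
    """Build an OpenAI-spec request body for /v1/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = chat_completion_body("gemma-3n-e2b-it", "Say hello in one word.")
# POST this as JSON to f"{BASE_URL}/v1/chat/completions"
print(json.dumps(body))
```

Since the endpoint shape matches OpenAI's, official OpenAI SDKs also work by pointing their base URL at the LocalAI instance.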
LocalAGI
A powerful Local AI agent management platform. Serves as a drop-in replacement for OpenAI's Responses API, supercharged with advanced agentic capabilities and a no-code UI.
Link: https://github.com/mudler/LocalAGI
LocalRecall
A RESTful API and knowledge base management system providing persistent memory and storage capabilities for AI agents. Designed to work alongside LocalAI and LocalAGI.
Link: https://github.com/mudler/LocalRecall
Join the Movement! ❤️
A massive THANK YOU to our incredible community and our sponsors! LocalAI has over 33,500 stars, and LocalAGI has already rocketed past 800 stars!
As a reminder, LocalAI is real FOSS (Free and Open Source Software), and its sibling projects are community-driven and not backed by VCs or a company. We rely on contributors donating their spare time and on sponsors providing the hardware. If you love open-source, privacy-first AI, please consider starring the repos, contributing code, reporting bugs, or spreading the word!
👉 Check out the reborn LocalAGI v2 today: https://github.com/mudler/LocalAGI
Full changelog :point_down:
What's Changed
Breaking Changes 🛠
Bug fixes :bug:
Exciting New Features 🎉
🧠 Models
👒 Dependencies
- chore: ⬆️ Update ggml-org/whisper.cpp to 3e65f518ddf840b13b74794158aa95a2c8aa30cc by @localai-bot in mudler/LocalAI#5691
- chore: ⬆️ Update ggml-org/llama.cpp to 8f71d0f3e86ccbba059350058af8758cafed73e6 by @localai-bot in mudler/LocalAI#5692
- chore: ⬆️ Update ggml-org/llama.cpp to 06cbedfca1587473df9b537f1dd4d6bfa2e3de13 by @localai-bot in mudler/LocalAI#5697
- chore: ⬆️ Update ggml-org/whisper.cpp to e6c10cf3d5d60dc647eb6cd5e73d3c347149f746 by @localai-bot in mudler/LocalAI#5702
- chore: ⬆️ Update ggml-org/llama.cpp to aa0ef5c578eef4c2adc7be1282f21bab5f3e8d26 by @localai-bot in mudler/LocalAI#5703
- chore: ⬆️ Update ggml-org/llama.cpp to 238005c2dc67426cf678baa2d54c881701693288 by @localai-bot in mudler/LocalAI#5710
- chore: ⬆️ Update ggml-org/whisper.cpp to a422176937c5bb20eb58d969995765f90d3c1a9b by @localai-bot in mudler/LocalAI#5713
- chore: ⬆️ Update ggml-org/llama.cpp to ce82bd0117bd3598300b3a089d13d401b90279c7 by @localai-bot in mudler/LocalAI#5712
- chore: ⬆️ Update ggml-org/llama.cpp to 73e53dc834c0a2336cd104473af6897197b96277 by @localai-bot in mudler/LocalAI#5719
- chore: ⬆️ Update ggml-org/whisper.cpp to 0083335ba0e9d6becbe0958903b0a27fc2ebaeed by @localai-bot in mudler/LocalAI#5718
- chore: ⬆️ Update leejet/stable-diffusion.cpp to 10c6501bd05a697e014f1bee3a84e5664290c489 by @localai-bot in mudler/LocalAI#4925
- chore: ⬆️ Update ggml-org/llama.cpp to 2bf9d539dd158345e3a3b096e16474af535265b4 by @localai-bot in mudler/LocalAI#5724
- chore: ⬆️ Update ggml-org/whisper.cpp to 4daf7050ca2bf17f5166f45ac6da651c4e33f293 by @localai-bot in mudler/LocalAI#5725
- Revert "chore: ⬆️ Update leejet/stable-diffusion.cpp to 10c6501bd05a697e014f1bee3a84e5664290c489" by @mudler in mudler/LocalAI#5727
- chore: ⬆️ Update ggml-org/llama.cpp to 8846aace4934ad29651ea61b8c7e3f6b0556e3d2 by @localai-bot in mudler/LocalAI#5734
- chore: ⬆️ Update ggml-org/whisper.cpp to 32cf4e2aba799aff069011f37ca025401433cf9f by @localai-bot in mudler/LocalAI#5733

Other Changes
Full Changelog: mudler/LocalAI@v3.0.0...v3.1.0
View the full release notes at https://github.com/mudler/LocalAI/releases/tag/v3.1.0.