
feat: llamafile backend #2010

Open
mudler opened this issue Apr 11, 2024 · 2 comments
Labels
enhancement New feature or request

Comments

mudler (Owner) commented Apr 11, 2024

Is your feature request related to a problem? Please describe.
llamafile appears to offer a significant speedup when executing on CPU.

Describe the solution you'd like
a llamafile backend

Describe alternatives you've considered
N/A

Additional context
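For illustration: llamafile ships models as self-contained executables that can run an OpenAI-compatible HTTP server, so one possible integration path is for a backend to spawn the binary and proxy requests to it. Below is a minimal Go sketch of that idea; the binary name, flags, port, and endpoint shown are assumptions for illustration, not existing LocalAI code.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"time"
)

func main() {
	// Launch a llamafile in server mode. The binary path and flags here
	// are hypothetical placeholders, not part of any LocalAI backend.
	cmd := exec.Command("./mistral-7b.llamafile", "--server", "--nobrowser", "--port", "8080")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()

	// Naive wait for the server to come up; a real backend would poll
	// for readiness instead of sleeping a fixed interval.
	time.Sleep(5 * time.Second)

	// Query the OpenAI-compatible chat completions endpoint that
	// llamafile's embedded server exposes.
	body := []byte(`{"model":"local","messages":[{"role":"user","content":"Hello"}]}`)
	resp, err := http.Post("http://127.0.0.1:8080/v1/chat/completions",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```

Driving the stock binary over HTTP would keep the integration decoupled from llamafile's internals, at the cost of an extra process and a serialization hop per request.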

mudler added the enhancement label on Apr 11, 2024
localai-bot (Contributor) commented

Hello! It seems like you're looking to add a new feature to LocalAI. As mentioned, I'm a bot created by @mudler to help manage issues in this repository. I'll make sure your request gets on the radar.

Feature requests like this are very valuable, and I'll make sure that the team is aware of it. In case you haven't already, you might want to check if there are any similar requests that have already been opened.

BradKML commented Jul 24, 2024

How is this different from PowerInfer, and are there benchmarks out there addressing the claimed speed increases on CPU? The same question applies to Ollama and other API services.
