
idea: embeddings should be generated using llama.cpp #9

Closed
daboe01 opened this issue Jun 23, 2024 · 3 comments

Comments

daboe01 commented Jun 23, 2024

I think this might speed up the indexing of large documents and/or a high volume of documents.

daboe01 commented Jun 25, 2024

ggerganov/llama.cpp#5423
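
For illustration, a minimal sketch of what this could look like: document chunks embedded through a locally running llama.cpp server (assuming it is started with the `--embedding` flag, e.g. `./llama-server -m <embedding-model.gguf> --embedding --port 8080`). The endpoint path, payload shape, and response key are assumptions based on llama.cpp's server docs, not existing LARS code:

```python
# Hypothetical sketch: batch-embed document chunks via a local llama.cpp server.
# Assumes the server was started with --embedding and exposes POST /embedding,
# accepting {"content": "..."} and returning {"embedding": [...]} --
# verify against the server docs of your llama.cpp version.
import requests

LLAMA_SERVER_URL = "http://localhost:8080"  # assumed default llama.cpp server port


def embed_chunks(chunks: list[str]) -> list[list[float]]:
    """Return one embedding vector per input chunk."""
    vectors = []
    for chunk in chunks:
        resp = requests.post(
            f"{LLAMA_SERVER_URL}/embedding",
            json={"content": chunk},
            timeout=60,
        )
        resp.raise_for_status()
        vectors.append(resp.json()["embedding"])
    return vectors


if __name__ == "__main__":
    vecs = embed_chunks(["LARS indexes documents for RAG.", "llama.cpp can serve embeddings."])
    print(len(vecs), "vectors of dimension", len(vecs[0]))
```

Offloading embedding to a GPU-accelerated llama.cpp server in this way is what could make bulk indexing faster than an in-process, CPU-bound embedder.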

abgulati (Owner) commented

Moving to discussions.

Hi @daboe01,
Thanks for bringing this up. In fact, ggerganov had already suggested this to me in a recent chat I had with him about LARS. I've tabled it for now as a topic to explore in the future, as it's non-urgent.

Repository owner locked and limited conversation to collaborators Jun 25, 2024
abgulati converted this issue into discussion #10 on Jun 25, 2024

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
