Issues Running Different LLM Models on examples/rag_with_quantized_llm.ipynb #42

Open
nossu3751 opened this issue Feb 29, 2024 · 0 comments

Comments

@nossu3751

Hello,

I'm relatively new to working with Large Language Models (LLMs) and am reaching out through the issues tab as I couldn't find a discussions section. I'm currently exploring the Intel/fastRAG repository to learn more about the implementation of RAG models with quantized LLMs, and I have a few questions I hope to get guidance on. I've been trying to run the examples/rag_with_quantized_llm.ipynb notebook on a GCP server (a c3-standard-8 instance with 8 vCPUs and 32 GB of memory, running Ubuntu 22.04).

I've successfully run the example using the facebook/opt-iml-max-1.3b model specified in the notebook. However, when attempting to experiment with other models, specifically openlm-research/open_llama_3b and openlm-research/open_llama_7b, I've encountered some challenges:

- With the open_llama_3b model, the process gets stuck at the "Quantizing" step without progressing further.
- Attempting to use the open_llama_7b model results in the process being killed immediately after the "Saving external data to one file..." message. This is surprising, especially considering the relatively small model size.
Given my limited experience, I'm reaching out for some guidance. I'm curious whether there are minimum hardware requirements for each model size, or specific quantization precision settings, that I might not be aware of. Any insights or suggestions on how to run these models successfully, or adjustments to my setup that could help, would be greatly appreciated.
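
For what it's worth, here is the rough back-of-envelope memory estimate I've been going by (the parameter count, dtype sizes, and the assumption that the full-precision and quantized weights might coexist in memory during quantization are my own guesses, not something taken from the notebook):

```python
# Back-of-envelope memory estimate for openlm-research/open_llama_7b.
# These are my own rough assumptions; please correct me if the reasoning is off.
PARAMS = 7e9  # ~7 billion parameters

for dtype, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
    gib = PARAMS * bytes_per_param / 1024**3
    print(f"{dtype}: ~{gib:.1f} GiB for the weights alone")

# fp32: ~26.1 GiB, fp16: ~13.0 GiB, int8: ~6.5 GiB
# If the quantization step holds both the full-precision and the quantized weights
# in memory at its peak, that could exceed the 32 GB on my c3-standard-8 instance,
# which might explain the process being killed (presumably by the OOM killer).
```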

Thank you for your time and assistance.

danielfleischer pushed a commit that referenced this issue Sep 12, 2024
Fix for #42.

In bi-encoder rankers, the document scores were not being attached properly. 

This PR populates the documents with their corresponding scores, enabling downstream filtering.
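
For readers following along, here is a minimal, hypothetical sketch of what that change amounts to, using a stand-in Document class rather than the actual fastRAG/Haystack types (all names and signatures below are illustrative, not the real code):

```python
import numpy as np
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Document:
    content: str
    score: Optional[float] = None  # before the fix, the ranker left this unset

def rank(query_emb: np.ndarray, docs: List[Document], doc_embs: np.ndarray) -> List[Document]:
    """Score each document against the query and attach the score to the document itself."""
    scores = doc_embs @ query_emb          # dot-product similarity; embeddings assumed L2-normalized
    for doc, score in zip(docs, scores):
        doc.score = float(score)           # the gist of the fix: populate the per-document score
    return sorted(docs, key=lambda d: d.score or 0.0, reverse=True)
```

With the score attached, a downstream component can filter documents by a threshold (e.g. keep only `doc.score >= 0.5`) instead of losing that information after ranking.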