Hello,

I'm relatively new to working with Large Language Models (LLMs) and am reaching out through the issues tab as I couldn't find a discussions section. I'm currently exploring the Intel/fastRAG repository to learn more about the implementation of RAG models with quantized LLMs, and I've encountered some challenges that I hope to get guidance on. I've been trying to run the examples/rag_with_quantized_llm.ipynb notebook on a GCP server (a c3-standard-8 instance with 8 vCPUs and 32 GB of memory, running Ubuntu 22.04).
I've successfully run the example with the facebook/opt-iml-max-1.3b model specified in the notebook. However, when attempting to experiment with other models, specifically openlm-research/open_llama_3b and openlm-research/open_llama_7b, I've run into the following problems:
With the open_llama_3b model, the process gets stuck at the "Quantizing" step without progressing further.
Attempting to use the open_llama_7b model results in the process being killed immediately after the "Saving external data to one file..." message. This is surprising, especially considering the relatively small model size (a rough memory estimate is sketched below).
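As a rough sanity check (my own back-of-the-envelope arithmetic, not anything from the fastRAG docs or the notebook), the raw fp32 weights of these checkpoints alone come close to, or exceed, the 32 GB on this instance once the export step holds a second copy in memory:

```python
# Back-of-the-envelope memory estimate for loading/exporting fp32 checkpoints.
# These are approximations, not measurements from fastRAG.
def fp32_weights_gib(num_params: float) -> float:
    """Approximate size in GiB of raw fp32 weights (4 bytes per parameter)."""
    return num_params * 4 / (1024 ** 3)

for name, params in [("open_llama_3b", 3e9), ("open_llama_7b", 7e9)]:
    weights = fp32_weights_gib(params)
    # ONNX export / quantization commonly keeps at least two copies of the
    # weights in RAM (the loaded model plus the exported graph), so treat
    # twice the weight size as a rough lower bound for peak usage.
    print(f"{name}: ~{weights:.0f} GiB weights, ~{2 * weights:.0f} GiB or more during export")
```

If that estimate is in the right ballpark, the 7B export would not fit in 32 GB (which would explain the process being killed, presumably by the OOM killer), and the 3B export would sit right at the limit, which could explain the very slow progress at the "Quantizing" step.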
Given my limited experience, I'm reaching out for guidance. Are there minimum hardware requirements for each model size or quantization precision that I might not be aware of? Any insights or suggestions on how to run these models successfully, or adjustments to my setup that could help, would be greatly appreciated.
Thank you for your time and assistance.
Fix for #42.
In bi-encoder rankers, the document scores were not being attached properly.
This PR populates the documents with their corresponding scores, enabling downstream filtering.
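For context, here is a minimal sketch of the kind of change being described, assuming a Haystack-style Document object with a score attribute (illustrative only, not the actual PR diff):

```python
from typing import List, Optional, Sequence

class Document:
    """Stand-in for a Haystack-style document carried through the pipeline."""
    def __init__(self, content: str, score: Optional[float] = None):
        self.content = content
        self.score = score

def attach_scores(documents: List[Document], scores: Sequence[float]) -> List[Document]:
    """Copy each bi-encoder similarity score onto its document and sort by it,
    so that downstream nodes can filter or threshold on doc.score."""
    for doc, score in zip(documents, scores):
        doc.score = float(score)
    return sorted(documents, key=lambda d: d.score, reverse=True)
```

With the scores populated this way, a downstream filter can simply drop documents below a score threshold instead of receiving documents whose score is still None.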