Completely local RAG with chat UI
Clone the repo:
git clone git@github.com:justushar/DocOllama.git
cd DocOllama
Install the dependencies:
pip install -r requirements.txt
Fetch your LLM (llama3.1 by default):
ollama pull llama3.1:8b
Run the Ollama server:
ollama serve
Start DocOllama:
streamlit run app.py
Extracts text from PDF documents and splits it into chunks (using semantic and character splitters) that are stored in a vector database.
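As an illustration only, here is a minimal sketch of this ingestion step, assuming LangChain with PyPDFLoader, a character splitter, Ollama embeddings, and a FAISS index; the actual app may use a semantic splitter, a different loader, embedding model, or vector database, and the file path below is a placeholder.

```python
# Sketch of PDF ingestion (assumed deps: langchain-community, pypdf, faiss-cpu).
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Extract text from the PDF, one Document per page ("docs/report.pdf" is a placeholder).
pages = PyPDFLoader("docs/report.pdf").load()

# Split the pages into overlapping character-based chunks.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(pages)

# Embed the chunks locally through Ollama and store them in a vector index.
embeddings = OllamaEmbeddings(model="llama3.1:8b")
vector_store = FAISS.from_documents(chunks, embeddings)
retriever = vector_store.as_retriever(search_kwargs={"k": 5})
```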
Given a query, searches the vector database for similar chunks, reranks the results, and applies an LLM chain filter before returning the response.
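A hedged sketch of this retrieval step, reusing `retriever` from the ingestion sketch above and assuming LangChain's FlashRank reranker together with `LLMChainFilter`; the repo's actual reranker, filter, and parameters may differ.

```python
# Sketch of reranking + LLM chain filtering (assumed dep: flashrank).
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import (
    DocumentCompressorPipeline,
    LLMChainFilter,
)
from langchain_community.chat_models import ChatOllama
from langchain_community.document_compressors import FlashrankRerank

llm = ChatOllama(model="llama3.1:8b")

# Rerank the raw similarity hits, then let the LLM drop irrelevant chunks.
compressor = DocumentCompressorPipeline(
    transformers=[FlashrankRerank(top_n=3), LLMChainFilter.from_llm(llm)]
)
filtered_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever
)

docs = filtered_retriever.invoke("What does the report conclude?")
```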
Combines the LLM with the retriever to answer the user's question.
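A sketch of how this final answer chain could be wired, reusing `llm` and `filtered_retriever` from the previous sketches; the prompt wording here is an illustrative assumption, not the repo's actual prompt.

```python
# Sketch of the question-answering chain over the filtered retriever.
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer the question using only this context:\n\n{context}"),
    ("human", "{input}"),
])

# Stuff the retrieved chunks into the prompt and generate an answer.
qa_chain = create_retrieval_chain(
    filtered_retriever, create_stuff_documents_chain(llm, prompt)
)
result = qa_chain.invoke({"input": "Summarize the key findings."})
print(result["answer"])
```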