Start by downloading the Ollama application from the official website: [Ollama Download](https://ollama.com/download). Once installed, Ollama runs a local server at http://localhost:11434
Explore the various models available in the Ollama library: [Ollama Library](https://ollama.com/library).
To download a model, use the following command (a quick way to test the server follows the model list below):
ollama pull llama3.1
Recommended Models:
- llama3.1
- llava (vision model)
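Once a model is pulled, you can sanity-check the local server before wiring anything else up. This is a minimal sketch using the `requests` package (assumed installed) against Ollama's generate endpoint:

```python
import requests

# One-off, non-streaming completion from the local Ollama server.
# Assumes Ollama is running on its default port and llama3.1 is pulled.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",
        "prompt": "Say hello in one short sentence.",
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```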
Get your Tavily API key by signing up at https://app.tavily.com/home
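To confirm the key works, here is a minimal sketch using the `tavily-python` client (an assumption for illustration; WizSearch's own search calls may differ). Install it with `pip3 install tavily-python`:

```python
from tavily import TavilyClient

# Replace with your actual key from app.tavily.com.
client = TavilyClient(api_key="Your Tavily API Key")

# Basic web search; the response is a dict with a "results" list.
results = client.search(query="What is Streamlit?")
for item in results["results"]:
    print(item["title"], "-", item["url"])
```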
- Use Qdrant Cloud:
  - Sign up at https://cloud.qdrant.io/
  - Create your cluster
  - Get the database URL and API key
- Or run Qdrant locally using Docker:
docker run -p 6333:6333 -p 6334:6334 qdrant/qdrant
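Whichever option you choose, you can verify the connection with the `qdrant-client` package; a minimal sketch (the cloud URL and key below are placeholders):

```python
from qdrant_client import QdrantClient

# Local Docker instance; no API key required.
client = QdrantClient(url="http://localhost:6333")

# For Qdrant Cloud, pass your cluster URL and API key instead:
# client = QdrantClient(url="Your Qdrant URL", api_key="Your Qdrant API Key")

# Listing collections is a cheap way to confirm connectivity.
print(client.get_collections())
```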
- Clone the repo
git clone https://github.com/SSK-14/WizSearch.git
- Create and activate a virtual environment
pip3 install virtualenv
python3 -m venv {your-venvname}
source {your-venvname}/bin/activate
- Install the required libraries
pip3 install -r requirements.txt
- Set up your secrets.toml file: create a secrets.toml file inside the .streamlit folder (refer to the Streamlit docs on secrets management) and add the following values:
MODEL_BASE_URL = "http://localhost:11434/v1"
MODEL_NAMES = ["llama3.1", "llava"]
VISION_MODELS = ["llava"]
TAVILY_API_KEY = "Your Tavily API Key"
QDRANT_URL = "Your Qdrant URL"  # e.g. "http://localhost:6333"
QDRANT_API_KEY = "Your Qdrant API Key"  # only needed for Qdrant Cloud
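Streamlit exposes these values through `st.secrets`, and since `MODEL_BASE_URL` points at Ollama's OpenAI-compatible endpoint, any OpenAI-style client can use it. A minimal sketch of that pattern (an assumption about how the config can be consumed, not WizSearch's actual code; Ollama ignores the API key, but the client requires a non-empty one):

```python
import streamlit as st
from openai import OpenAI

# Values come from .streamlit/secrets.toml.
client = OpenAI(base_url=st.secrets["MODEL_BASE_URL"], api_key="ollama")

reply = client.chat.completions.create(
    model=st.secrets["MODEL_NAMES"][0],  # e.g. "llama3.1"
    messages=[{"role": "user", "content": "Hello!"}],
)
print(reply.choices[0].message.content)
```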
- Run the app
streamlit run app.py
Streamlit prints a local URL (http://localhost:8501 by default) where you can open WizSearch.