This repository contains the steps for installing Ollama and running TII's Falcon 11B in 4-bit quantization, along with Whisper for automatic speech recognition (ASR) and ComfyUI for handling the Stable Diffusion backend.
- Node and NPM need to be installed
- Conda or venv (optional; Conda is used in this tutorial)
- Ollama (to download the 4-bit quantized Falcon 11B model)
- Local Whisper for speech-to-text (STT)
- Open WebUI
- ComfyUI for managing the stable diffusion pipeline
- RealisticVisionV60B1 as the base model for the diffusion pipeline (optional; any pretrained model checkpoint can be downloaded and used here)
The steps below install Node and npm on macOS:
```bash
# installs nvm (Node Version Manager)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
# download and install Node.js
nvm install 20
# verifies the right Node.js version is in the environment
node -v # should print `v20.14.0`
# verifies the right npm version is in the environment
npm -v # should print `10.7.0`
```
Create an environment and install the required dependencies by running the commands below:
```bash
# create an environment called locallm with Python 3.11 installed
conda create -n locallm python=3.11
# activate the environment
conda activate locallm
# install the required libraries and packages listed in the attached requirements.txt file
pip install -r requirements.txt
```
Download and install Ollama from the link below:
https://www.ollama.com/
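Once Ollama is installed, the quantized Falcon model can be pulled from the command line and queried over Ollama's local REST API (port 11434 by default). The sketch below builds a request for the `/api/generate` endpoint using only the standard library; the `falcon2` model tag and the prompt are illustrative assumptions, so substitute whatever `ollama list` reports on your machine.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> dict:
    """Build a non-streaming generation payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """Send the prompt to the local Ollama server and return the response text."""
    data = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # "falcon2" is an assumed model tag for the 4-bit quantized Falcon 11B pull.
    print(generate("falcon2", "Explain 4-bit quantization in one sentence."))
```

Setting `"stream": False` makes the server return a single JSON object instead of a stream of partial chunks, which keeps the client code minimal.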
To install Whisper locally on your machine, run the following pip command to install the latest commit from OpenAI's repository:
```bash
pip install git+https://github.com/openai/whisper.git
```
Whisper requires ffmpeg to be installed locally:
```bash
# on Ubuntu or Debian
sudo apt update && sudo apt install ffmpeg
# on Arch Linux
sudo pacman -S ffmpeg
# on macOS using Homebrew (https://brew.sh/)
brew install ffmpeg
# on Windows using Chocolatey (https://chocolatey.org/)
choco install ffmpeg
# on Windows using Scoop (https://scoop.sh/)
scoop install ffmpeg
```
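With Whisper and ffmpeg in place, transcription takes only a few lines of Python via Whisper's `load_model`/`transcribe` API. In the sketch below, the model size (`base`) and the audio filename are illustrative assumptions, and the `whisper` import is deferred so the small formatting helper can be used on its own.

```python
def join_segments(segments: list[dict]) -> str:
    """Join Whisper's per-segment "text" fields into one stripped transcript string."""
    return "".join(seg["text"] for seg in segments).strip()


def transcribe(audio_path: str, model_size: str = "base") -> str:
    """Transcribe an audio file with a local Whisper model and return the text."""
    import whisper  # deferred: requires the pip install above

    model = whisper.load_model(model_size)
    result = model.transcribe(audio_path)
    return join_segments(result["segments"])


if __name__ == "__main__":
    # "speech.mp3" is a placeholder path for whatever audio the STT step records.
    print(transcribe("speech.mp3"))
```

Larger model sizes (`small`, `medium`, `large`) trade speed and VRAM for accuracy; `base` is a reasonable starting point for English speech.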
To build and install Open WebUI locally, run the following commands:
```bash
git clone https://github.com/open-webui/open-webui.git
cd open-webui/

# copying the required .env file
cp -RPp .env.example .env

# building the frontend using Node
npm install
npm run build

# serving the frontend with the backend
cd ./backend
pip install -r requirements.txt -U
bash start.sh
```
This runs the start.sh shell script in the current terminal/shell window; it is advisable to run it in a detached terminal session (e.g., via tmux or nohup) or in a containerized form.
Download the pruned model safetensors from CivitAI or Hugging Face: https://civitai.com/models/4201/realistic-vision-v60-b1
- Clone the ComfyUI repository: https://github.com/comfyanonymous/ComfyUI.git
- Navigate to the models/checkpoints folder inside the cloned repository and place the downloaded model checkpoint there.
- Launch ComfyUI by running:
```bash
python main.py
```
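Once ComfyUI is running (it listens on port 8188 by default), workflows can be queued programmatically by POSTing an API-format graph to its `/prompt` endpoint. The sketch below assumes a workflow exported with ComfyUI's "Save (API Format)" option; the `workflow_api.json` filename and the port are illustrative assumptions.

```python
import json
import urllib.request

COMFYUI_URL = "http://localhost:8188/prompt"  # ComfyUI's default local endpoint


def build_prompt_payload(workflow: dict) -> dict:
    """Wrap an API-format workflow graph in the envelope the /prompt endpoint expects."""
    return {"prompt": workflow}


def queue_workflow(workflow: dict) -> dict:
    """POST the workflow to the local ComfyUI server and return its JSON reply."""
    data = json.dumps(build_prompt_payload(workflow)).encode("utf-8")
    req = urllib.request.Request(
        COMFYUI_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    # "workflow_api.json" is an assumed filename for a graph exported in API format.
    with open("workflow_api.json") as f:
        print(queue_workflow(json.load(f)))
```

Driving ComfyUI over HTTP like this is what lets the chat frontend hand image-generation requests to the diffusion pipeline without any manual interaction with the ComfyUI interface.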