A Chrome Extension + FastAPI backend for identifying and labeling AI-generated hallucinations in LLM responses.
You can follow these instructions to get the backend up and running locally.
Make sure you have the following installed:
- Python 3.10+
- Git
- Visual Studio Code (or your preferred IDE)
- Uvicorn (automatically installed via requirements)
```bash
# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Clone the repository and open it in your IDE
git clone https://github.com/Emdya/LLM-Reality-Check.git
cd LLM-Reality-Check
code .

# Install dependencies and start the server
pip install -r backend/requirements.txt
uvicorn main:app --reload
```

Once the server is running, you'll see a message like:
```
INFO:     Uvicorn running on http://127.0.0.1:8000
```
Open your browser and visit `http://127.0.0.1:8000/docs` to interact with the API via the Swagger UI.
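If you prefer the command line to Swagger UI, FastAPI also serves the same schema as JSON at `/openapi.json`. The sketch below (standard library only) fetches that schema and lists the available routes; the helper name `list_routes` is my own, and the fetch step assumes the server from the previous section is running.

```python
import json
from urllib.request import urlopen


def list_routes(schema: dict) -> list[str]:
    """Flatten an OpenAPI schema's `paths` section into "METHOD /path" strings."""
    routes = []
    for path, operations in schema.get("paths", {}).items():
        for method in operations:
            routes.append(f"{method.upper()} {path}")
    return routes


if __name__ == "__main__":
    # Requires the uvicorn server to be running locally.
    with urlopen("http://127.0.0.1:8000/openapi.json") as resp:
        schema = json.load(resp)
    for route in list_routes(schema):
        print(route)
```

This is handy for a quick sanity check that the backend exposes the endpoints you expect before wiring up the Chrome extension.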