LLM RealityCheck

A Chrome Extension + FastAPI backend for identifying and labeling AI-generated hallucinations in LLM responses.
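
At a high level, the extension captures an LLM response in the browser and posts it to the backend, which sends back labels for the statements it suspects are hallucinated. The sketch below is only an illustration of that flow: the /check route, request model, and placeholder logic are assumptions made here, not the project's actual API; the real routes live in the repository's backend code.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class CheckRequest(BaseModel):
    # Raw LLM response text captured by the Chrome extension (hypothetical shape).
    text: str

class ClaimLabel(BaseModel):
    claim: str
    label: str  # e.g. "supported" or "possible hallucination"

@app.post("/check")
def check_response(req: CheckRequest) -> list[ClaimLabel]:
    # Placeholder logic: a real implementation would verify each claim
    # against external evidence before labeling it.
    claims = [s.strip() for s in req.text.split(".") if s.strip()]
    return [ClaimLabel(claim=c, label="unverified") for c in claims]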


🛠️ Step-by-Step Installation Guide (Developer Setup)

Follow these instructions to get the backend up and running locally.


Prerequisites

Make sure you have the following installed:

  • Python 3.10+
  • Git
  • Visual Studio Code (or your preferred IDE)
  • Uvicorn (automatically installed via requirements)

Installation Steps

1. Create a Python Virtual Environment

python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

2. Clone the Repository

git clone https://github.com/Emdya/LLM-Reality-Check.git

3. Navigate to the Project Directory

cd LLM-Reality-Check

4. Open the Project in VS Code

code .

5. Install Backend Requirements

pip install -r backend/requirements.txt

6. Launch the FastAPI Server

uvicorn main:app --reload
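
The main:app argument tells Uvicorn to import the module main.py and serve the ASGI application object named app that it defines, while --reload restarts the server whenever the source changes. If main.py lives inside the backend/ folder, run the command from that folder (cd backend) or point Uvicorn at backend.main:app instead. The project's main.py defines the real routes; the minimal shape Uvicorn expects looks roughly like this:

from fastapi import FastAPI

app = FastAPI()  # Uvicorn looks up this object when given "main:app"

@app.get("/")
def read_root():
    # Placeholder route; the actual backend defines its own endpoints.
    return {"status": "ok"}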

7. Access the API Docs

Once the server is running, you'll see a message like:

INFO:     Uvicorn running on http://127.0.0.1:8000

Open your browser and visit:

http://127.0.0.1:8000/docs

to interact with the API via the Swagger UI.
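
FastAPI also serves the same schema as plain JSON at /openapi.json, which is a quick way to confirm from a script that the server is up. A small standard-library check (no extra packages assumed):

import json
import urllib.request

# Fetch the OpenAPI schema that FastAPI generates automatically.
with urllib.request.urlopen("http://127.0.0.1:8000/openapi.json") as resp:
    schema = json.load(resp)

# Print every route the running backend exposes.
for path in schema["paths"]:
    print(path)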

About

A hackathon project for SpurHacks, focused on reducing the number of "AI hallucinations" produced by Large Language Models.
