Note: LLMs can sometimes provide incorrect or outdated information. Always verify critical information through trusted sources.
This project is a Next.js application that performs real-time fact checking on spoken statements. It uses Deepgram for audio transcription and leverages both OpenAI and Perplexity to verify the accuracy of claims.
- Real-time Audio Transcription: Captures and transcribes spoken audio using Deepgram's API
- AI Fact Checking: Uses both OpenAI and Perplexity to cross-reference and verify statements
- Live Results: Shows fact-checking results in real-time as statements are processed
- Explanation of Validity: Provides detailed explanations for why statements are considered true or false
- Next.js for the frontend and API routes
- AI SDK for interacting with LLMs
- Deepgram for audio transcription
- OpenAI and Perplexity for validating claims
- ShadcnUI for UI components
- Tailwind CSS for styling
- Speak into the microphone
- Deepgram processes the audio stream in real-time and returns transcribed text
- The transcribed text is split into distinct statements at sentence-ending punctuation (`?`, `!`, `.`)
- Each statement is sent to OpenAI and Perplexity for fact checking
- The verification status and explanation are displayed to the user
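The splitting and cross-checking steps above can be sketched as follows. This is a minimal illustration, not the project's actual code; the function names `splitIntoStatements` and `reconcileVerdicts`, and the agree/disagree policy, are assumptions for the sketch:

```typescript
// Split a running transcript into distinct statements at
// sentence-ending punctuation (?, !, .).
function splitIntoStatements(transcript: string): string[] {
  return transcript
    .split(/(?<=[?!.])\s+/) // break after ?, ! or .
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
}

// One model's fact-check result for a statement.
interface Verdict {
  valid: boolean;
  explanation: string;
}

// Combine the OpenAI and Perplexity verdicts: report a statement
// as true or false only when both models agree; otherwise flag it
// as unverified so the user can weigh the explanations themselves.
function reconcileVerdicts(openai: Verdict, perplexity: Verdict): string {
  if (openai.valid && perplexity.valid) return "true";
  if (!openai.valid && !perplexity.valid) return "false";
  return "unverified";
}
```

Requiring agreement between the two models is one plausible way to reduce single-model hallucinations; the application could equally surface both verdicts side by side.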
To get the project up and running, follow these steps:
- Install dependencies:

  ```bash
  npm install
  ```

- Copy the example environment file:

  ```bash
  cp .env.example .env
  ```

- Add your API keys to the `.env` file:

  ```
  OPENAI_API_KEY=your_api_key_here
  DEEPGRAM_API_KEY=your_api_key_here
  PERPLEXITY_API_KEY=your_api_key_here
  ```

- Start the development server:

  ```bash
  npm run dev
  ```
Your project should now be running on http://localhost:3000.
The project is set up for one-click deployment on Vercel. Use the "Deploy with Vercel" button above to create your own instance of the application.
To learn more about the technologies used in this project, check out the following resources: