EfficienGPT is a quick-learning engine designed to optimize knowledge acquisition using the Pareto Principle (80/20 Rule). By leveraging GPT-4o, MongoDB Atlas, and Streamlit, it provides fast, insightful, and actionable learning experiences tailored to individual needs.
EfficienGPT allows users to:
- Retrieve focused insights without getting overwhelmed by unnecessary details.
- Store and access knowledge instantly with MongoDB Atlas.
- Interact through a clean, intuitive UI powered by Streamlit.
Refer to our Devpost link to learn more about the project and see our demo!
- AI-powered Learning: Uses GPT-4o to generate precise and actionable insights.
- Efficient Knowledge Storage: Stores and retrieves insights securely via MongoDB Atlas.
- Streamlit UI: Provides a simple and distraction-free interface for users.
- Pareto Principle Optimization: Ensures that only the most important 20% of knowledge is delivered.
- Customizable Inputs: Users define their topic, available time, and use case for tailored learning (see the pipeline sketch after this list).
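To make these features concrete, the sketch below shows one way the pipeline could look: a Pareto-framed prompt sent to GPT-4o through LangChain, with the result persisted in MongoDB Atlas. It is a minimal illustration, not EfficienGPT's actual code; the prompt wording, the `generate_insights`/`store_insights` helpers, the `MONGODB_URI` variable, and the database/collection names are all assumptions.

```python
# Minimal sketch of a generate-and-store flow (assumed names, not the project's actual code).
import os

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from pymongo import MongoClient

# Prompt that asks for the vital ~20% of a topic that yields ~80% of the value.
prompt = ChatPromptTemplate.from_template(
    "You are a learning coach. For the topic '{topic}', a learner with "
    "{time_available} available and this use case: '{use_case}', list the vital 20% "
    "of concepts that deliver 80% of the practical value, with one actionable tip each."
)

llm = ChatOpenAI(model="gpt-4o")   # requires OPENAI_API_KEY in the environment
chain = prompt | llm               # LangChain expression-language pipeline


def generate_insights(topic: str, time_available: str, use_case: str) -> str:
    """Generate Pareto-focused insights for the given inputs."""
    return chain.invoke(
        {"topic": topic, "time_available": time_available, "use_case": use_case}
    ).content


def store_insights(topic: str, insights: str) -> None:
    """Persist insights in MongoDB Atlas (URI, database, and collection names are placeholders)."""
    client = MongoClient(os.environ["MONGODB_URI"])
    client["efficiengpt"]["insights"].insert_one({"topic": topic, "insights": insights})


if __name__ == "__main__":
    text = generate_insights("Linear algebra", "2 hours", "preparing for an ML course")
    store_insights("Linear algebra", text)
    print(text)
```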
```bash
git clone https://github.com/your-username/EfficienGPT.git
cd EfficienGPT

# On macOS/Linux
python3 -m venv venv
source venv/bin/activate

# On Windows
python -m venv venv
venv\Scripts\activate

pip install -r requirements.txt
```
EfficienGPT requires an OpenAI API Key to function properly.
- Go to the OpenAI API Keys page.
- Sign up or log in to your OpenAI account.
- Navigate to API Keys and create a new API key.
Create a `.env` file in the project directory and add the following line:

```
OPENAI_API_KEY=your_api_key_here
```

Alternatively, you can set it as an environment variable:

```bash
export OPENAI_API_KEY="your_api_key_here"   # macOS/Linux
set OPENAI_API_KEY=your_api_key_here        # Windows (Command Prompt)
```
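At startup the app needs to read this key. A common pattern, shown below as an illustration rather than as EfficienGPT's actual code, is to load `.env` with python-dotenv and fail fast if the key is missing.

```python
# Illustrative only: load the key from .env (or the shell environment) at startup.
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads .env in the working directory, if present
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("OPENAI_API_KEY is not set; add it to .env or export it.")
```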
```bash
streamlit run Home.py
```

This will launch the web interface where you can interact with EfficienGPT.
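For orientation, a `Home.py` for this kind of app typically collects the topic, time budget, and use case, then renders the generated insights. The sketch below is illustrative only; the `insights` module and the `generate_insights` helper are assumptions carried over from the pipeline sketch above, not EfficienGPT's actual code.

```python
# Hypothetical Home.py sketch: collect inputs, generate insights, display them.
import streamlit as st

# Assumed helper from the pipeline sketch above; the real module layout may differ.
from insights import generate_insights

st.title("EfficienGPT")
topic = st.text_input("What do you want to learn?")
time_available = st.selectbox("How much time do you have?", ["30 minutes", "2 hours", "1 week"])
use_case = st.text_area("What will you use it for?")

if st.button("Generate insights") and topic:
    with st.spinner("Distilling the vital 20%..."):
        insights = generate_insights(topic, time_available, use_case)
    st.markdown(insights)
```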
If you want to run EfficienGPT without incurring OpenAI API costs, you can use Ollama, a local LLM runtime.
Follow the installation guide here: Ollama Installation.
```bash
ollama pull mistral   # or any other model you prefer
```
Modify your `.env` file to use the local model:

```
USE_LOCAL_MODEL=True
OLLAMA_MODEL=mistral
```
Then, run the Streamlit app as usual:
```bash
streamlit run Home.py
```
This will use Ollama’s locally stored model instead of the OpenAI API.
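Under the hood, switching between the hosted and local model can be as simple as branching on `USE_LOCAL_MODEL`. The sketch below shows one way to do it with LangChain's chat-model wrappers; exact import paths depend on your LangChain version, and EfficienGPT's real selection logic may differ.

```python
# Illustrative model selection based on the .env flags described above.
import os

from dotenv import load_dotenv

load_dotenv()

if os.getenv("USE_LOCAL_MODEL", "False").lower() == "true":
    # Local model served by Ollama (no OpenAI API costs).
    from langchain_community.chat_models import ChatOllama

    llm = ChatOllama(model=os.getenv("OLLAMA_MODEL", "mistral"))
else:
    # Hosted GPT-4o via the OpenAI API.
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-4o")

print(llm.invoke("In one sentence, what is the Pareto Principle?").content)
```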
EfficienGPT is an evolving project. Contributions, feature requests, and bug reports are welcome!
- Fork the repository.
- Create a new branch for your feature/fix.
- Commit your changes.
- Open a pull request.
This project is licensed under the MIT License. See the `LICENSE` file for details.
EfficienGPT was built with ❤️ using:
- GPT-4o for AI-powered insights.
- MongoDB Atlas for efficient and secure knowledge storage.
- Streamlit for an intuitive user experience.
- LangChain for smart prompt engineering.
This README provides a complete guide to setting up, running, and contributing to EfficienGPT. 🚀