Welcome to the GPT-4 and Groq (Llama) Conversation Variation Generator project! This repository contains a script that uses either GPT-4 (OpenAI) or Llama models served by Groq to generate variations of conversations from provided Jupyter notebooks (.ipynb files).
This project was developed as part of a coding assignment. The goal is to generate variations of RLHF (Reinforcement Learning from Human Feedback) conversations, preserving the flow and meaning of each conversation while rephrasing the interactions between the assistant and the user.
- Dual Model Support: Use either OpenAI's GPT-4 or Groq-hosted Llama models to generate variations.
- Flexible Input/Output: Specify input and output paths for Jupyter notebook files.
- Environment Variable Management: Securely manage API keys using a .env file.
- Python 3.x
- pip (Python package installer)
Install the required packages using pip:
pip install openai
pip install groq
pip install python-dotenv
Create a .env file in the project root containing your API keys:

OPENAI_API_KEY=your_openai_api_key
GROQ_API_KEY=your_groq_api_key
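As a minimal sketch of how the script might pick up these keys, the helper below loads the .env file (via python-dotenv, listed above) and fails fast if either key is missing. The function name `load_api_keys` is illustrative, not part of the actual script:

```python
import os


def load_api_keys():
    """Load API keys from .env (if python-dotenv is available) and validate them.

    Illustrative helper, not the project's actual code. Falls back to
    keys already present in the environment if python-dotenv is missing.
    """
    try:
        from dotenv import load_dotenv  # provided by python-dotenv
        load_dotenv()  # reads .env from the current directory; does not override existing vars
    except ImportError:
        pass

    keys = {
        "openai": os.getenv("OPENAI_API_KEY"),
        "groq": os.getenv("GROQ_API_KEY"),
    }
    missing = [name for name, value in keys.items() if not value]
    if missing:
        raise RuntimeError(f"Missing API keys: {', '.join(missing)}")
    return keys
```

Validating both keys up front gives a clear error before any model call is attempted.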
- Set the input and output paths for your Jupyter notebook files in main.py (lines 27-28).
- Execute the main.py script.
- By default, the project uses the Groq model for local testing. You can switch to GPT-4 by modifying the function call in main.py:

variation = generate_variation(prompt, model='gpt')
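A rough sketch of how such a model switch could be structured is shown below. This is not the project's actual implementation: the `clients` parameter is a hypothetical injection point standing in for the real OpenAI and Groq chat-completion wrappers, used here so the routing logic can be shown without network calls:

```python
def generate_variation(prompt, model="groq", clients=None):
    """Route the prompt to the chosen backend.

    Illustrative sketch only. `clients` maps model names ('gpt', 'groq')
    to callables; in the real script these would wrap the OpenAI and
    Groq APIs, with 'groq' as the default for local testing.
    """
    if model not in ("gpt", "groq"):
        raise ValueError(f"Unknown model: {model!r}")
    clients = clients or {}
    backend = clients.get(model)
    if backend is None:
        raise RuntimeError(f"No client configured for model {model!r}")
    return backend(prompt)
```

With this shape, switching models is a one-argument change, matching the `model='gpt'` call shown above.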