Welcome to LlaraveLlama - the most user-friendly private AI chat suite that brings enterprise-grade AI to your local machine in under 3 minutes. By seamlessly combining Laravel's reliability with Ollama's AI capabilities, we've created a zero-configuration solution that gives you instant access to powerful AI models, complete with a polished interface and 20+ pre-configured AI assistants. Whether you're a privacy enthusiast, developer, or someone who wants their own private GPT-like experience without the technical hassle, LlaraveLlama delivers everything you need with just a single command.
Experience the power of LlaraveLlama in action through our live demo, running efficiently on a minimal cloud setup with just 4GB RAM and 2 CPU cores at: Live Demo
- Preview
- Why LlaraveLlama?
- Key Features
- Installation Options
- Accessing the Application
- Mobile Access Setup
- Debugging Features
- About the Author
- The Vision
- Contributing
- License
- Related Projects
LlaraveLlama revolutionizes private AI chat by making it accessible to everyone - no technical expertise required. With just one command, you get a complete, production-ready AI chat platform that rivals commercial solutions:
**Instant Setup**
- Up and running in under 3 minutes with a single command
- Zero configuration needed - everything works out of the box
- Pre-loaded with 3 powerful, optimized AI models
**Complete Solution**
- 20+ carefully crafted AI assistants ready to use
- Beautiful, intuitive interface that works on all devices
- 10 pre-configured example chats to get you started immediately
**True Privacy**
- 100% self-hosted - your data never leaves your machine
- No cloud dependencies or external services
- Complete control over your AI interactions
Get started with LlaraveLlama today and experience the fastest path to having your own private, powerful AI chat suite - no technical knowledge required!
- One-Command Setup: From zero to fully functional AI chat suite in under 3 minutes
- Pre-Configured Excellence:
  - 3 powerful, lightweight AI models ready to use:
    - Llama3.2 (3B) for general tasks
    - Qwen2.5 (3B) for technical discussions
    - Gemma2 (2B) for creative tasks
  - 20+ professionally crafted AI assistant profiles
  - 10 example conversations showcasing optimal usage
- Professional Interface:
  - Beautiful daylight and moonlight themes
  - Mobile-ready responsive design
  - Markdown and code syntax highlighting
  - One-click message and code copying
- Privacy & Control:
  - 100% private - all data stays local
  - No internet required after setup
  - Complete ownership of your AI interactions
- Enterprise-Grade Features:
  - Powerful conversation search
  - Custom AI assistant creation
  - Local JSON storage for all conversations
  - Comprehensive debug mode for testing
- Resource Friendly:
  - Runs smoothly on modest hardware
  - Works on cloud VPS or personal computers
  - Optimized for efficiency and performance
Experience LlaraveLlama in minutes with our streamlined Docker version - the fastest and most reliable way to get started on any system:
**Complete Docker Installation Guide**
```bash
docker-compose up -d
```
The Docker version provides:
- Single command setup
- Zero configuration needed
- Automatic dependency management
- Both CPU and GPU support
- Pre-configured environment with all models included
- Works reliably across all operating systems
To install without Docker, we provide a full-service setup script, tested and operational on Ubuntu 22.04/24.04 LTS.
For Ubuntu users, the automation scripts below handle the entire setup process:
Standard setup:

```bash
git clone https://github.com/Better-Call-Jason/LlaraveLlama.git
cd LlaraveLlama
sudo ./setup.sh
```

NVIDIA GPU setup:

```bash
git clone https://github.com/Better-Call-Jason/LlaraveLlama.git
cd LlaraveLlama
sudo ./setup_nvidia.sh
sudo ./setup.sh
```
- Ubuntu 22.04/24.04 LTS (Tested and verified)
- Minimum 8GB RAM recommended
- 10GB free disk space
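Before running the setup script, you can sanity-check the recommendations above from a terminal. This is a minimal sketch for Linux hosts (it reads `/proc/meminfo` and `df`, which are standard on Ubuntu); the thresholds are the README's recommended values, not hard requirements enforced by the project:

```shell
# Check recommended resources before running setup.sh
# (8GB RAM and 10GB free disk, per the requirements above)
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
avail_kb=$(df -k / | awk 'NR==2 {print $4}')

[ "$mem_kb" -ge $((8 * 1024 * 1024)) ] \
  && echo "RAM: OK" || echo "RAM: below the recommended 8GB"
[ "$avail_kb" -ge $((10 * 1024 * 1024)) ] \
  && echo "Disk: OK" || echo "Disk: below the recommended 10GB free"
```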
For installation on other Linux distributions or for manual control, review the commands in `setup.sh` and `setup_nvidia.sh`. These scripts provide the reference for manual installation steps, including Composer and Node.js configuration. Technical expertise is required for non-Ubuntu installations.
```bash
sudo ./setup.sh --serve
```
- Local access: `http://localhost`
- Network access: `http://your_computer_ip_address`
- Mobile access: ensure your device is on the same network and use `http://host_ip`
The application is automatically configured for mobile access during installation. Simply:
- Ensure your device is on the same network as the host machine
- Access LlaraveLlama using the network URL provided after installation
- Enjoy a premium private mobile AI chat experience!
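If you're unsure of the host machine's network address, a quick way to print it on Ubuntu (assuming the standard `hostname` utility; adapt for other distributions):

```shell
# Print the host's LAN IPv4 address (first address reported).
# Use this address as http://<ip> from your mobile device.
hostname -I | awk '{print $1}'
```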
LlaraveLlama includes comprehensive debugging capabilities that can be easily controlled through your environment settings.
Debug mode is controlled through your `.env` file:
```env
APP_DEBUG=true   # Enable debugging features
APP_DEBUG=false  # Disable debugging features (production setting)
```
This setting automatically controls:
- The debug panel visibility
- Service operation logging
- System interaction details
- API call monitoring
No code changes are required - simply update your `.env` file and clear the configuration:
```bash
php artisan config:clear
php artisan cache:clear
```
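As a sketch, the toggle itself can be scripted with `sed`. The snippet below edits a throwaway copy of a `.env` file purely for illustration; against a real install you would edit the actual `.env` and then run the artisan commands to clear cached configuration:

```shell
# Illustrative only: flip APP_DEBUG in a temporary .env copy
env_file=$(mktemp)
printf 'APP_NAME=LlaraveLlama\nAPP_DEBUG=true\n' > "$env_file"

# Switch to the production setting
sed -i 's/^APP_DEBUG=.*/APP_DEBUG=false/' "$env_file"
grep '^APP_DEBUG=' "$env_file"   # APP_DEBUG=false

# On the real install, remember to clear cached config afterward:
#   php artisan config:clear && php artisan cache:clear
rm -f "$env_file"
```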
Created by a passionate PHP/JS full-stack developer who believes in the democratization of AI technology. This project started as a personal tool, evolved through family use, and is now shared with the world. It represents a belief that powerful AI tools should be accessible to everyone while maintaining privacy and control over their data.
LlaraveLlama was born from the amazing reality that today's LLM technology can run efficiently on consumer hardware. As these models become more powerful and accessible, tools like LlaraveLlama make it possible for everyone to harness their potential while maintaining complete privacy and control.
Your contributions are welcome! Whether it's bug fixes, feature additions, or documentation improvements, feel free to create a feature or bug-fix branch and open a pull request.
LlaraveLlama is open-source software licensed under the MIT license. See the LICENSE file for the full license text.
- Docker-LlaraveLlama - The Docker version of this project