Docker Overview
JoeyLLM runs inside a Docker container to ensure consistency across all contributors and to enable scalable training on GPU clusters.
This setup has two parts; both are required to get started:
1. Install Docker on your system (plus GPU support if you have an NVIDIA GPU)
2. Pull and run the JoeyLLM container, which contains the full dev environment
| Platform | GPU Support | Setup Guide |
|---|---|---|
| Linux | Full | Linux Setup |
| Windows (WSL2) | Partial | Windows Setup |
| macOS | CPU-only | macOS Setup |
Linux is strongly recommended for contributors working on training or internals.
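Before moving on, it helps to confirm part 1 actually worked. A minimal sanity check might look like the following; the `nvidia/cuda` image tag is just an example, and the GPU line only applies if you installed NVIDIA container support:

```bash
# Confirm the Docker CLI and daemon are working
docker --version
docker run --rm hello-world

# Optional: if you set up NVIDIA GPU support (Linux/WSL2), confirm a
# container can see the GPU. Any recent nvidia/cuda tag will do here.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```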
Once Docker is installed and ready, move on to the container setup. It will walk you through the following (a rough command sketch follows the list):
- Pulling the official image
- Creating and starting the container
- Cloning the repo and installing Python dependencies inside the container
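As a rough orientation, those steps usually boil down to something like the sketch below. The image name, repo URL, and requirements file are assumptions here, so treat the platform setup guide as the source of truth:

```bash
# 1. Pull the official image (name/tag are placeholders -- use whatever
#    the setup guide for your platform specifies)
docker pull joeyllm/dev:latest

# 2. Create and start the container; add --gpus all only if you enabled
#    NVIDIA GPU support in part 1
docker run -it --name joeyllm-dev -v "$PWD":/workspace joeyllm/dev:latest bash

# 3. Inside the container: clone the repo and install Python dependencies
#    (URL and requirements file are assumed -- check the project README)
git clone https://github.com/Southern-Cross-AI/JoeyLLM.git
cd JoeyLLM
pip install -r requirements.txt
```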
Using Docker ensures:
- Reproducible, isolated environments
- No local dependency/version conflicts
- Compatibility with GPU clusters and inference pipelines
- Easy GPU acceleration (where available)
- A consistent dev workflow across all contributors
Important:
JoeyLLM is built to run at scale across shared infrastructure.
Docker ensures your local environment matches our GPU cluster setup.
Do I need a GPU? No, a GPU is not required.
- Basic testing works fine on CPU.
- Full training runs on shared GPU infrastructure.
- If you have an NVIDIA GPU, enable GPU support for faster dev/testing (optional); a quick check is sketched below.
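If you do enable GPU support and want to confirm the container can actually see your card, a quick check from inside the running container (assuming the image ships `nvidia-smi` and PyTorch) looks like:

```bash
# Shows the GPU if the container was started with --gpus all and the
# NVIDIA container toolkit is installed on the host
nvidia-smi

# Prints True when CUDA is usable, False on CPU-only setups -- both are
# fine for basic dev and testing (PyTorch itself is an assumption here)
python -c "import torch; print(torch.cuda.is_available())"
```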
Need help?
- Join our Discord
- Open an Issue