A repository to create a cluster to be used as a homelab. Below are some of the tools I find useful.
| Tool | Purpose |
|---|---|
| ansible | Configuration management tool and simple IT automation system |
| Renovate | Automatically finds new releases for the applications and issues corresponding PRs |
| TaskFile | Task runner/build tool that aims to be simpler and easier to use than GNU Make |
| pre-commit | A framework for managing and maintaining multi-language pre-commit hooks |
| kubesearch | Search how other people manage their self-hosted software in the k8s-at-home community |
| mkdocs material | Static website generator for all the docs in this repo |
| Age | Simple, modern and secure file encryption tool, format, and Go library |
| yamlfmt | Extensible command line tool or library to format YAML files |
| prettier | Opinionated code formatter that enforces a consistent style |
| markdownlint | Static analysis tool to enforce standards and consistency for Markdown files |
| super-linter | A collection of linters and code analyzers to help validate your source code |
Installation of pre-commit can be done using `pip install` or a package manager such as Homebrew.
```bash
# Install via homebrew
brew install pre-commit
```
Install the git-hook scripts; be sure the `.pre-commit-config.yaml` configuration file has been created in the root folder.
```bash
# Install
pre-commit install
```
Now `pre-commit` will run automatically on `git commit`.
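The hooks can also be run on demand, which is handy for checking the whole repository before pushing; this is standard `pre-commit` usage, not specific to this repo:

```bash
# Run all hooks against every file in the repository
pre-commit run --all-files

# Bump the hook versions pinned in .pre-commit-config.yaml
pre-commit autoupdate
```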
Installation of MkDocs can be done using `pip install` or a package manager such as Homebrew.
```bash
# Install via homebrew
brew install mkdocs
```
Initialize a project using `mkdocs`.
```bash
# Create a mkdocs project
mkdocs new .
```
Install the MkDocs plugin dependencies.
```bash
# Install dependencies (use --break-system-packages or create a python environment)
pip3 install -r .github/mkdocs/requirements.txt --break-system-packages
```
Serve the website content on a local server.
```bash
mkdocs serve

# Serve using specific config file location
mkdocs serve -f .github/mkdocs/mkdocs.yml
```
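To generate the static site instead of serving it (for example, to inspect what will be published), `mkdocs build` accepts the same config flag:

```bash
# Build the static site using the repo's config file;
# output goes to the site_dir configured in mkdocs.yml
# (by default, 'site' relative to the config file)
mkdocs build -f .github/mkdocs/mkdocs.yml
```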
Go to Renovate and install the app into the account.
Task offers many installation methods, so a package manager such as Homebrew can be used.
```bash
# Install via homebrew
brew install go-task
```
Get all current tasks
```bash
# List all available tasks to run
task -l

# 'task -l' can be set as the default task to run
task

# Combination of tasks for initialization
task init
```
Set up `pre-commit` using TaskFile.
```bash
# Init pre-commit hooks
task precommit:init

# Update pre-commit dependencies
task precommit:update
```
Check linting and formatting before committing.
```bash
# Format and Lint
task format:all
task lint:all
```
Run the following task to install dependencies using Homebrew.
```bash
# Run the 'init' task within the brew include file
task brew:init

# Install ansible dependencies
task ansible:init

# Check the inventory server statuses (staging)
task ansible:ping ANSIBLE_INVENTORY_ENV=staging

# Install kubernetes (staging)
task ansible:install ANSIBLE_INVENTORY_ENV=staging

# Merge Kube config
task ansible:config ANSIBLE_INVENTORY_ENV=staging

# Uninstall Kubernetes (staging)
task ansible:uninstall ANSIBLE_INVENTORY_ENV=staging
```
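These tasks wrap standard ansible commands. As a rough sketch, the ping task corresponds to an ad-hoc ansible ping like the one below; the inventory path is an assumption, so check the Taskfile for the real values:

```bash
# Ad-hoc connectivity check against all hosts in the staging inventory
# (inventory path is an assumption for illustration)
ansible all -m ping -i inventory/staging/hosts.yaml
```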
You can run `super-linter` outside GitHub Actions.
```bash
# Run docker image using linux/amd64, since there is no arm64 support
docker run \
  -e DEFAULT_WORKSPACE=/tmp/lint \
  -e LOG_LEVEL=DEBUG \
  -e RUN_LOCAL=true \
  -e SHELL=/bin/bash \
  -e DEFAULT_BRANCH=main \
  -e ANSIBLE_DIRECTORY=infrastructure/ansible \
  -e VALIDATE_ALL_CODEBASE=true \
  -e VALIDATE_YAML=true \
  -e VALIDATE_MARKDOWN=true \
  -e VALIDATE_JSON=true \
  -e VALIDATE_TERRAFORM_TFLINT=true \
  -e VALIDATE_RENOVATE=true \
  -e YAML_CONFIG_FILE=.yamllint.yaml \
  -v $PWD:/tmp/lint \
  --platform linux/amd64 \
  ghcr.io/super-linter/super-linter:slim-v6.3.0
```
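For faster local iterations, super-linter can check only the files that changed against the default branch by setting `VALIDATE_ALL_CODEBASE=false`; a minimal sketch, assuming the mounted workspace contains the full git history:

```bash
# Lint only files changed with respect to the default branch
docker run \
  -e RUN_LOCAL=true \
  -e DEFAULT_BRANCH=main \
  -e VALIDATE_ALL_CODEBASE=false \
  -v $PWD:/tmp/lint \
  --platform linux/amd64 \
  ghcr.io/super-linter/super-linter:slim-v6.3.0
```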
An environment file will be used to store secrets needed during the installation process and later by External Secrets. The following `.env` file must be created at the root.
```bash
# NOTE: Apply the changes into the cluster after a modification
# kubectl create secret generic -n security cluster-secrets --from-env-file=.env --dry-run=client -o yaml | kubectl apply -f -

# GoDaddy API key
GODADDY_API_KEY=
GODADDY_SECRET_KEY=

# Cloudflare API token
CLOUDFLARE_API_TOKEN=
CLOUDFLARE_ZONE_ID=

# GitHub repository and credentials
GITHUB_REPO=https://github.com/jsa4000
GITHUB_USERNAME=
GITHUB_PAT=

# Zitadel MasterKey
# LC_ALL=C tr -dc A-Za-z0-9 </dev/urandom | head -c 32
ZITADEL_MATERKEY=

# Homelab user and password
HOMELAB_USER=
HOMELAB_PASSWORD=

# Postgres passwords
POSTGRES_SUPER_PASS=
POSTGRES_USER_PASS=

# Argocd password (bcrypt-hashed admin password)
# htpasswd -nbBC 10 "" $HOMELAB_PASSWORD | tr -d ':\n' | sed 's/$2y/$2a/'
ARGOCD_PASSWORD=

# Servarr API key (prowlarr, radarr, sonarr, etc.)
SERVARR_APIKEY=

# Zigbee
# To create a new random network key use: 'shuf -i 0-255 -n 16 | paste -sd "," -'
ZIGBEE2MQTT_NETWORK_KEY=[]

# SpeedTest Tracker API key
# https://speedtest-tracker.dev/
SPEEDTEST_APP_KEY='base64:'
```
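After filling in the values, the secret can be created with the `kubectl` command from the note above, and then verified with standard `kubectl`:

```bash
# Create/update the secret from the .env file
kubectl create secret generic -n security cluster-secrets \
  --from-env-file=.env --dry-run=client -o yaml | kubectl apply -f -

# Verify the secret exists and list its keys (values are not shown)
kubectl describe secret cluster-secrets -n security
```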
To initialize the cluster, use this script to clean all DNS records from Cloudflare. After running it, newly created DNS records will take some time to replicate across the network (DNS servers).
```bash
# Source environment file into current session
source .env

# Go to infrastructure/cluster/scripts folder
cd infrastructure/cluster/scripts

# Run following script to clean all DNS records
source ./delete-cloulflare-dns.sh $CLOUDFLARE_API_TOKEN $CLOUDFLARE_ZONE_ID
```
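To confirm the zone is actually empty, the Cloudflare API can be queried directly; this uses the public `dns_records` endpoint and assumes `jq` is installed:

```bash
# List remaining DNS records in the zone (should be empty after the cleanup)
curl -s "https://api.cloudflare.com/client/v4/zones/$CLOUDFLARE_ZONE_ID/dns_records" \
  -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  -H "Content-Type: application/json" | jq '.result[].name'
```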
Create the local cluster where the homelab will be installed. The staging cluster will be at `192.168.205.1XX`.
```bash
# Go to infrastructure/cluster/scripts folder
cd infrastructure/cluster/scripts

# Run script to create the cluster (check resources such as memory, cpu, storage and output folder)
./create-qemu-cluster.sh

# Follow the instructions to set up the cluster
```
Install ansible dependencies and bootstrap the addons cluster with core services.
```bash
# Go to infrastructure/ansible folder
cd infrastructure/ansible

# Install ansible dependencies
task ansible:init

# Check hosts are available (use staging or pro)
task ansible:ping ANSIBLE_INVENTORY_ENV=staging

server-2 | SUCCESS => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python3"},"changed": false,"ping": "pong"}
server-1 | SUCCESS => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python3"},"changed": false,"ping": "pong"}
server-3 | SUCCESS => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python3"},"changed": false,"ping": "pong"}

# Run ansible using staging inventory file (use staging or pro)
task ansible:install ANSIBLE_INVENTORY_ENV=staging

PLAY RECAP *********************************************************************
server-1 : ok=67 changed=28 unreachable=0 failed=0 skipped=60 rescued=0 ignored=0
server-2 : ok=40 changed=12 unreachable=0 failed=0 skipped=53 rescued=0 ignored=0
server-3 : ok=40 changed=12 unreachable=0 failed=0 skipped=53 rescued=0 ignored=0
```
The following checklist should be fulfilled after the ansible initialization:
- All pods are running in all namespaces (*).
- Ensure all applications are synced in argocd.
- Oauth2-proxy is degraded (see the note below).
- Internal and external ingresses can be accessed.
- Check all the volumes are created and used.
- Check all the targets in the Prometheus dashboard are healthy.

(*) Some applications are intended to be in error or pending states.
The bootstrap process can take minutes or even hours depending on the internet connection. Wait until all the pods are in `Running` status. Some pods, such as `oauth2-proxy`, require manual tasks to be deployed successfully; sometimes their status will be `CreateContainerConfigError`. To check the status of the pods, you can use `kubectl` commands.
```bash
# Switch to current cluster kubernetes config (use staging or pro)
task ansible:config ANSIBLE_INVENTORY_ENV=staging

# Get all running pods in all namespaces (k is an alias of kubectl)
k get pods -A

NAMESPACE   NAME                                   READY   STATUS    RESTARTS   AGE
database    cnpg-cloudnative-pg-64bb9df9c8-fr95h   1/1     Running   0          72m
database    postgres-1                             1/1     Running   0          65m
database    postgres-tls-1                         1/1     Running   0          61m
gitops      argocd-application-controller-0        1/1     Running   0          5m31s
```
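To spot only the pods that are not yet ready, a field selector narrows the output; this is plain `kubectl`, nothing repo-specific:

```bash
# Show only pods that are not in the Running phase
kubectl get pods -A --field-selector=status.phase!=Running

# Watch pods across all namespaces until everything settles
kubectl get pods -A --watch
```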
Check that all applications deployed in the addons cluster are in sync.
```bash
# Run following command to open the argocd dashboard at http://localhost:8080 (admin,**)
task k8s:argocd

# Once traefik and cert-manager are running, the argocd dashboard can be accessed through 'https://argocd.staging.internal.javiersant.com/'
```
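If the admin password was not overridden through `ARGOCD_PASSWORD`, a stock Argo CD install keeps the initial one in a secret; a sketch assuming the `gitops` namespace seen in the pod listing above:

```bash
# Retrieve the initial admin password (only exists if it was not replaced)
kubectl -n gitops get secret argocd-initial-admin-secret \
  -o jsonpath='{.data.password}' | base64 -d
```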
Hooks in argocd are sometimes not triggered properly, so the sync hangs indefinitely. To solve this issue:

- Go into the apps that are in this state.
- Go into the `Syncing` option and click TERMINATE.
- Press the Sync button again and wait.

Applications that sometimes get stuck are argocd and zitadel.
Oauth2-proxy can be degraded because Zitadel was not properly initialized or because of a timeout trying to access the Zitadel URL. The Zitadel URL is available when:

- The Zitadel DNS record is already created, i.e. https://zitadel.staging.internal.javiersant.com
- The Zitadel app (pod) is in `Running` status (`kubectl get deployment zitadel -n iam`).
- Cert-manager has created the certificate, i.e. `*.javiersant.com`.
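Those checks can also be run quickly from the CLI; the certificate listing assumes cert-manager's CRDs are installed, and the URL is the staging one from above:

```bash
# Deployment status of zitadel (command from the checklist above)
kubectl get deployment zitadel -n iam

# Certificates issued by cert-manager across all namespaces
kubectl get certificates -A

# HTTP check against the staging Zitadel URL (-k skips TLS verification)
curl -skI https://zitadel.staging.internal.javiersant.com
```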
Once all the points above are checked, you must:

- Go to the argocd dashboard and search for the `oauth2-proxy` application.
- Delete the Job called `oauth2-proxy-zitadel-init` and wait until it completes (it can also be deleted from the CLI, as sketched below).
- Wait until the `oauth2-proxy` application is properly synced.
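A minimal CLI equivalent for the job deletion; the namespace below is an assumption, so locate the job first:

```bash
# Find the namespace of the init job
kubectl get jobs -A | grep oauth2-proxy-zitadel-init

# Delete it so argocd recreates it, then watch the new run complete
# (namespace is an assumption; use the one found above)
kubectl delete job oauth2-proxy-zitadel-init -n oauth2-proxy
kubectl get jobs -n oauth2-proxy --watch
```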
Check the following dashboards to verify that everything is working fine.
Bootstrap the apps cluster to install additional services and tools. This process can take a few minutes.
```bash
# Run following command
kubectl apply -f https://raw.githubusercontent.com/jsa4000/homelab-ops/refs/heads/staging/kubernetes/bootstrap/apps-appset.yaml
```
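Progress can be followed through the Argo CD Application resources; this assumes argocd runs in the `gitops` namespace seen earlier:

```bash
# List Argo CD applications and their sync/health status
kubectl get applications -n gitops

# Watch until all applications report Synced/Healthy
kubectl get applications -n gitops --watch
```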
Add the Home Assistant configuration to be accessed through the proxy, connect to a database, and create custom dashboards.
```bash
# Execute following command from root ./
source kubernetes/utils/home-assistant-init.sh
```
The following will configure the Servarr stack (Prowlarr, Radarr and Sonarr).
```bash
# Execute following command from root ./
# Use staging or pro
source kubernetes/utils/servarr-init.sh staging
```
Then go through the following steps:

- Add an indexer to Prowlarr (public).
- Search for a film in Radarr; use Interactive Search to find a custom one.
- Go to qbittorrent to check that it is being downloaded.
- Go to Jellyfin, create a new account (`admin`) and create a new Movies media folder from `/downloads/movies`.
The following checklist should be fulfilled after the apps bootstrap:

- All pods are running in all namespaces (*).
- Ensure all applications are synced in argocd.
- Internal and external ingresses can be accessed.
- Homepage widgets are showing information.

(*) Some applications are intended to be in error or pending states.
Check the following dashboards to verify that everything is working fine:
- Homepage
- IT-Tools
- PGAdmin [[email protected]]
- Redis Commander (admin)
- Speed Test [[email protected]]
- Home Assistant (Create user at start, admin)
- Prowlarr
- Radarr
- QBittorrent
- File Browser
- Jellyfin (Create user at start, admin)
- Open WebUI