Vagrantfile and Scripts to Automate Kubernetes Setup using Kubeadm [Practice Environment for CKA/CKAD and CKS Exams]
A fully automated setup for CKA, CKAD, and CKS practice labs is tested on the following systems:
- Windows
- Ubuntu Desktop
- Mac Intel-based systems
If you are a Mac Silicon user, please use the following repo.
As part of our commitment to helping the DevOps community save money on Kubernetes certifications, we continuously update the latest voucher codes from the Linux Foundation.
🚀 CKA, CKAD, CKS, or KCNA exam aspirants can save 50% today using code CYBER24CCTECHIES at https://kube.promo/devops. It is a limited-time offer from the Linux Foundation.
The following are the best bundles to save up to 65% (up to $1087 in savings) with code CYBER24BUNDLECT:
- KCNA + KCSA + CKA + CKAD + CKS (65% - $1087 Savings): kube.promo/kubestronaut
- CKA + CKAD + CKS Exam bundle (63% - $747 Savings): kube.promo/k8s-bundle
- CKA + CKS Bundle (63% - $500 Savings): kube.promo/bundle
- KCNA + CKA (68% - $338 Savings): kube.promo/kcka-bundle
- KCSA + CKS Exam Bundle (64% - $407 Savings): kube.promo/kcsa-cks
- KCNA + KCSA Exam Bundle (66% - $330 Savings): kube.promo/kcna-kcsa
Note: You have one year from the date of registration to take the certification exam.
Here is the high-level workflow.
Current Kubernetes version for the CKA, CKAD, and CKS exams: 1.30
The setup has been updated to cluster version 1.31.
Refer to this link for the full documentation: https://devopscube.com/kubernetes-cluster-vagrant/
- A working Vagrant setup using Vagrant + VirtualBox
- A workstation with 8+ GB of RAM, as the VMs use 3 vCPUs and 4+ GB of RAM
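To verify the tooling before you start, both CLIs report their versions (standard vagrant and VBoxManage commands, nothing repo-specific):

vagrant --version      # e.g. Vagrant 2.x
VBoxManage --version   # confirms the VirtualBox CLI is on PATH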
The latest versions of VirtualBox for Mac/Linux can cause network issues. To avoid them, create or edit the /etc/vbox/networks.conf file and add the following:
* 0.0.0.0/0 ::/0
Or run the following commands:
sudo mkdir -p /etc/vbox/
echo "* 0.0.0.0/0 ::/0" | sudo tee -a /etc/vbox/networks.conf
This allows host-only networks to use any range, not just 192.168.56.0/21, as described here: https://discuss.hashicorp.com/t/vagrant-2-2-18-osx-11-6-cannot-create-private-network/30984/23
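You can confirm the file was written correctly:

cat /etc/vbox/networks.conf
# expected output: * 0.0.0.0/0 ::/0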
To provision the cluster, execute the following commands.
git clone https://github.com/scriptcamp/vagrant-kubeadm-kubernetes.git
cd vagrant-kubeadm-kubernetes
vagrant up
Once the cluster is provisioned, point kubectl at the generated kubeconfig (the commands assume you start from the directory where you cloned the repo):

cd vagrant-kubeadm-kubernetes
cd configs
export KUBECONFIG=$(pwd)/config
Or copy the config file to the ~/.kube directory:
cp config ~/.kube/
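With KUBECONFIG set (or the file copied), a quick smoke test confirms the cluster is reachable:

kubectl get nodes -o wide   # every node should show STATUS Ready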
The dashboard is installed automatically by default, but you can skip it by commenting out the dashboard version in settings.yaml before running vagrant up.
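If you prefer doing this from the shell, a one-liner can comment the line out. This is a sketch: it assumes the version is set under a literal dashboard: key in settings.yaml.

# comment out the dashboard version line (key name assumed)
sed -i.bak 's/^\([[:space:]]*dashboard:\)/# \1/' settings.yaml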
If you skip the dashboard installation, you can deploy it later by enabling it in settings.yaml and running the following:
vagrant ssh -c "/vagrant/scripts/dashboard.sh" controlplane
To get the login token, copy it from config/token or run the following command:
kubectl -n kubernetes-dashboard get secret/admin-user -o go-template="{{.data.token | base64decode}}"
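If the admin-user secret is not present, newer clusters can issue a short-lived token on demand (kubectl create token is available since Kubernetes v1.24; this assumes the admin-user ServiceAccount created by the dashboard script):

kubectl -n kubernetes-dashboard create token admin-user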
Make the dashboard accessible:
kubectl proxy
Open the site in your browser:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
To shut down the cluster:

vagrant halt

To restart the cluster:

vagrant up

To destroy the cluster and delete the VMs:

vagrant destroy -f
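To checkpoint a freshly provisioned cluster and roll back between practice sessions, Vagrant's built-in snapshot commands also work with this multi-VM setup (a convenience suggestion, not part of the repo's scripts; the snapshot name fresh-cluster is arbitrary):

vagrant snapshot save fresh-cluster      # checkpoint all VMs
vagrant snapshot restore fresh-cluster   # roll back after an exercise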
+-------------------+
| External |
| Network/Internet |
+-------------------+
|
|
+-------------+--------------+
| Host Machine |
| (Internet Connection) |
+-------------+--------------+
|
| NAT
+-------------+--------------+
| K8s-NATNetwork |
| 192.168.99.0/24 |
+-------------+--------------+
|
|
+-------------+--------------+
| k8s-Switch (Internal) |
| 192.168.99.1/24 |
+-------------+--------------+
| | |
| | |
+-------+--+ +---+----+ +-+-------+
| Master | | Worker | | Worker |
| Node | | Node 1 | | Node 2 |
|192.168.99| |192.168.| |192.168. |
| .99 | | 99.81 | | 99.82 |
+----------+ +--------+ +---------+
This network diagram shows:
- The host machine connected to the external network/internet.
- The NAT network (K8s-NATNetwork) providing a bridge between the internal network and the external network.
- The internal Hyper-V switch (k8s-Switch) connecting all the Kubernetes nodes.
- The master node and two worker nodes, each with its own IP address, connected to the internal switch.
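To see the addresses a node actually received, you can query a VM directly from the host (the node name controlplane matches the dashboard step above; adjust it if your Vagrantfile uses different names):

vagrant ssh -c "ip -4 addr show" controlplane   # lists the VM's IPv4 addresses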