K3S-remote

A guide to install K3s on VM or Raspberry Pi with a custom Traefik Ingress Control.

Prerequisites

What's K3s?

You can find everything you need in the official K3s documentation.

1. Install K3s with k3sup

export IP=<HOST_IP>
k3sup install \
  --ip $IP \
  --user root \
  --ssh-key <SSH_PATH> \
  --merge \
  --local-path $HOME/.kube/config \
  --context my-k8s \
  --k3s-extra-args '--no-deploy traefik'

Options for install:

  • --cluster - start this server in clustering mode using embedded etcd (embedded HA)
  • --skip-install - if you already have k3s installed, you can just run this command to get the kubeconfig
  • --ssh-key - specify a specific path for the SSH key for remote login
  • --local-path - default is ./kubeconfig - set the file where you want to save your cluster's kubeconfig. By default this file will be overwritten.
  • --merge - Merge config into existing file instead of overwriting (e.g. to add config to the default kubectl config, use --local-path ~/.kube/config --merge).
  • --context - default is default - set the name of the kubeconfig context.
  • --ssh-port - default is 22, but you can specify an alternative port i.e. 2222
  • --k3s-extra-args - Optional extra arguments to pass to the k3s installer, wrapped in quotes, i.e. --k3s-extra-args '--no-deploy traefik' or --k3s-extra-args '--docker'. For multiple args, combine them within single quotes: --k3s-extra-args '--no-deploy traefik --docker'.
  • --k3s-version - set the specific version of k3s, i.e. v0.9.1
  • --ipsec - Enforces the optional extra argument for k3s: --flannel-backend option: ipsec
  • --print-command - Prints out the command, sent over SSH to the remote computer
  • --datastore - used to pass a SQL connection-string to the --datastore-endpoint flag of k3s. You must use the format required by k3s in the Rancher docs.

See even more install options by running k3sup install --help.

NOTE

Traefik can be configured by editing the traefik.yaml file. To prevent k3s from using or overwriting the modified version, deploy k3s with --no-deploy traefik and store the modified copy in the k3s/server/manifests directory. For more information, refer to the official Traefik Helm chart configuration parameters.
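For example, the customised manifest could be copied into k3s's auto-deploy directory on the server (the path below is the default documented by k3s; $IP as in the install step above):

```shell
# Copy the customised Traefik manifest to the server's auto-deploy directory.
# k3s applies any manifest it finds under /var/lib/rancher/k3s/server/manifests.
scp traefik.yaml root@$IP:/var/lib/rancher/k3s/server/manifests/traefik.yaml
```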

1.1 Extra

Let's build a 3-node Kubernetes cluster with Rancher's k3s project and k3sup, which uses ssh to make the whole process quick and painless.

Note

Running k3s/MicroK8s on some ARM hardware may run into difficulties because cgroups are not enabled by default.

This can be remedied on Ubuntu by editing the boot parameters:

sudo vi /boot/firmware/cmdline.txt

Note: In some Raspberry Pi Linux distributions the boot parameters are in /boot/cmdline.txt or /boot/firmware/nobtcmd.txt.

And adding the following:

cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

To address disk performance issues often present on Raspberry Pi, see the troubleshooting section.

More details are available at: https://microk8s.io/docs/install-alternatives#heading--arm
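The edit above can be sketched as an idempotent script (assuming the Ubuntu path; run as root):

```shell
# Append the cgroup flags to the kernel command line exactly once.
# CMDLINE defaults to the Ubuntu location; override it for other distros.
CMDLINE=${CMDLINE:-/boot/firmware/cmdline.txt}
FLAGS="cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory"
if [ -f "$CMDLINE" ] && ! grep -q "cgroup_enable=memory" "$CMDLINE"; then
  # cmdline.txt must remain a single line, so append to the existing line
  # rather than adding a new one.
  sed -i "s/\$/ $FLAGS/" "$CMDLINE"
fi
```

Reboot afterwards for the new kernel parameters to take effect.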

1. Create the server

In Kubernetes terminology, the server is often called the master.

$ export IP=<HOST_IP>

$ k3sup install \
  --ip $IP \
  --user root \
  --ssh-key <SSH_PATH> \
  --merge \
  --local-path $HOME/.kube/config \
  --context my-k8s \
  --k3s-extra-args '--no-deploy traefik'

k3s starts up so quickly that it may already be ready for use by the time the command completes.

Test it out:

$ export KUBECONFIG=$HOME/.kube/config
$ kubectl config use-context my-k8s

$ kubectl get node -o wide
NAME    STATUS   ROLES    AGE   VERSION         INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
master  Ready    master   15h   v1.19.15+k3s2   192.168.1.45    <none>        Ubuntu 20.04.3 LTS   5.4.0-1045-raspi    containerd://1.4.11-k3s1

2. Extend the cluster

You can add additional hosts to expand your available capacity.

Run the following:

$ k3sup join --ip <WORKER_X_IP> --server-ip <SERVER_IP> --user root --ssh-key <SSH_PATH>

Replace <WORKER_X_IP> with each worker's IP address.
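With several workers, the join step can be scripted; the addresses and key path below are placeholders:

```shell
# Placeholder addresses and key path; substitute your own values.
SERVER_IP=192.168.1.45
SSH_KEY=$HOME/.ssh/id_rsa
for WORKER_IP in 192.168.1.46 192.168.1.47; do
  k3sup join --ip "$WORKER_IP" --server-ip "$SERVER_IP" \
    --user root --ssh-key "$SSH_KEY"
done
```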

3. Control plane node isolation: taint

Unlike k8s, the master node here is eligible to run containers destined for worker nodes as it does not have the node-role.kubernetes.io/master=true:NoSchedule taint that's typically present.

Tainting your master node is recommended to prevent workloads from being scheduled on it, unless you are only running a single-node k3s cluster on a Raspberry Pi.

$ kubectl taint nodes <SERVER_NAME> node-role.kubernetes.io/master=true:NoSchedule

Replace <SERVER_NAME> with your k3s server node NAME shown in the kubectl get nodes output.

4. Optional labels

As you may have noticed, the nodes other than master show <none> as their role.

  • k3s by default does not label the agent nodes with the worker role, which k8s does. I prefer to label the agent nodes as worker just to make the visual experience as close as possible to k8s.
$ kubectl label node <WORKER_NAME> node-role.kubernetes.io/worker=''

Replace <WORKER_NAME> with the hostname of your nodes.
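With more than one agent this is easier as a loop (the hostnames below are placeholders):

```shell
# Placeholder hostnames; take the real ones from `kubectl get nodes`.
for NODE in worker-1 worker-2; do
  kubectl label node "$NODE" node-role.kubernetes.io/worker=''
done
```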

2. Install Ingress Controller

$ kubectl apply -f ./traefik.yml

Verify that everything is running with kubectl get pods --all-namespaces.


Alternatively, browse to http://<HOST_IP>/; you should see a "404 page not found" response.


3. Deploy a dummy app

Deploy a dummy app, based on the nicomincuzzi/go-webapp image, and its service by running:

$ kubectl apply -f dummy_app.yml

Verify your app responds correctly:

$ kubectl port-forward pod/<POD_NAME> <YOUR_LOCAL_PORT>:<POD_PORT>
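As a sketch, assuming the pod listens on port 8080 (check the container port in dummy_app.yml; the pod name below is a placeholder), the forward can be combined with a curl probe:

```shell
# Forward local port 8080 to the pod, probe it, then stop the forward.
kubectl port-forward pod/dummy-app-xxxxx 8080:8080 &
PF_PID=$!
sleep 2                        # give the forward a moment to establish
curl -i http://localhost:8080/
kill "$PF_PID"
```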

4. cert-manager

cert-manager is a native Kubernetes certificate management controller. It can help with issuing certificates from a variety of sources, such as Let’s Encrypt, HashiCorp Vault, Venafi, a simple signing key pair, or self signed.

Installing with Helm

As an alternative to the YAML manifests referenced above, we also provide an official Helm chart for installing cert-manager. Read more here.

Steps

In order to install the Helm chart, you must follow these steps:

Create the namespace for cert-manager:

$ kubectl create namespace cert-manager

Add the Jetstack Helm repository:

Warning: It is important that this repository is used to install cert-manager. The version residing in the helm stable repository is deprecated and should not be used.

$ helm repo add jetstack https://charts.jetstack.io

Update your local Helm chart repository cache:

$ helm repo update

cert-manager requires a number of CRD resources to be installed into your cluster as part of installation.

This can either be done manually, using kubectl, or using the installCRDs option when installing the Helm chart.

Note: If you’re using a helm version based on Kubernetes v1.18 or below (Helm v3.2) installCRDs will not work with cert-manager v0.16. For more info see the v0.16 upgrade notes

Option 1: installing CRDs with kubectl

Install the CustomResourceDefinition resources using kubectl:

$ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.2.0/cert-manager.crds.yaml

Option 2: install CRDs as part of the Helm release

To automatically install and manage the CRDs as part of your Helm release, add the --set installCRDs=true flag to your Helm installation command, as in the command below.

To install the cert-manager Helm chart:

$ helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v1.2.0 \
  --create-namespace \
  --set installCRDs=true

The default cert-manager configuration is good for the majority of users, but a full list of the available options can be found in the Helm chart README.

Verifying the installation

Once you’ve installed cert-manager, you can verify it is deployed correctly by checking the cert-manager namespace for running pods:

$ kubectl get pods --namespace cert-manager

NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-5c6866597-zw7kh               1/1     Running   0          2m
cert-manager-cainjector-577f6d9fd7-tr77l   1/1     Running   0          2m
cert-manager-webhook-787858fcdb-nlzsq      1/1     Running   0          2m

You should see the cert-manager, cert-manager-cainjector, and cert-manager-webhook pods in a Running state. It may take a minute or so to provision the TLS assets required for the webhook to function, so the webhook may take longer than the other pods to start for the first time. If you experience problems, please check the FAQ guide.
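As an optional smoke test (the namespace and resource names below are illustrative), you can ask cert-manager to issue a self-signed certificate and check that it is issued:

```shell
# Create a throwaway namespace with a self-signed Issuer and a Certificate.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager-test
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: test-selfsigned
  namespace: cert-manager-test
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: selfsigned-cert
  namespace: cert-manager-test
spec:
  dnsNames:
    - example.com
  secretName: selfsigned-cert-tls
  issuerRef:
    name: test-selfsigned
EOF

# The Status and Events sections should show the certificate was issued.
kubectl describe certificate selfsigned-cert -n cert-manager-test

# Clean up.
kubectl delete namespace cert-manager-test
```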

Configuration

In order to configure cert-manager to begin issuing certificates, first Issuer or ClusterIssuer resources must be created. These resources represent a particular signing authority and detail how the certificate requests are going to be honored. You can read more on the concept of Issuers here.

cert-manager supports multiple ‘in-tree’ issuer types, denoted by being in the cert-manager.io group. cert-manager also supports external issuers that can be installed into your cluster and that belong to other groups. These external issuer types behave no differently and are treated the same as in-tree issuer types.

When using ClusterIssuer resource types, ensure you understand the Cluster Resource Namespace where other Kubernetes resources will be referenced from.

Create ClusterIssuer resource:

kubectl apply -f cluster-issuer.yml
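For reference, cluster-issuer.yml might look something like the sketch below: a Let's Encrypt ACME issuer solving HTTP-01 challenges through the Traefik ingress class, with the e-mail address and names as placeholders. Here it is written to an example file so as not to clobber the real one:

```shell
# Write an example ClusterIssuer manifest; adapt and rename before applying.
cat > cluster-issuer.example.yml <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com        # placeholder: use a real address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: traefik
EOF
```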

Verify that everything is OK by running:

kubectl get clusterissuer


5. Expose the app externally via Ingress

Finally, expose your app externally by running the following command:

kubectl apply -f ingress.yml

How to Contribute

Make a pull request...

License

Distributed under the Apache-2.0 License; see the LICENSE file in the repository for more details.