tweaks part 1
mluukkai committed Apr 6, 2024
1 parent e50e48c commit 60f16b9
Showing 5 changed files with 45 additions and 45 deletions.
1 change: 1 addition & 0 deletions .nvmrc
@@ -0,0 +1 @@
v10.18.1
11 changes: 6 additions & 5 deletions data/part-1/1-first-deploy.md
@@ -267,7 +267,7 @@ Keep this in mind if you want to avoid doing more work than necessary.

Let's get started!

Create a web server that outputs "Server started in port NNNN" when it is started and deploy it into your Kubernetes cluster. Please make it so that an environment variable PORT can be used to choose that port. You will not have access to the port when it is running in Kuberetes yet. We will configure the access when we get to networking.
Create a web server that outputs "Server started in port NNNN" when it is started and deploy it into your Kubernetes cluster. Please make it so that an environment variable PORT can be used to choose that port. You will not have access to the port when it is running in Kubernetes yet. We will configure the access when we get to networking.

</exercise>

@@ -287,7 +287,7 @@ $ kubectl scale deployment/hashgenerator-dep --replicas=4
$ kubectl set image deployment/hashgenerator-dep dwk-app1=jakousa/dwk-app1:b7fc18de2376da80ff0cfc72cf581a9f94d10e64
```

Things start to get really cumbersome. It is hard to imagine how someone in their right mind could be maintaining multiple applications like that. Thankfully we will now use a declarative approach where we define how things should be rather than how they should change. This is more sustainable in the long term than the iterative approach and will let us keep our sanity.
Things start to get really cumbersome. It is hard to imagine how someone in their right mind could be maintaining multiple applications like that. Thankfully we will now use a _declarative_ approach where we define how things should be rather than how they should change. This is more sustainable in the long run than the imperative approach and will let us keep our sanity.

Before redoing the previous steps via the declarative approach, let's take the existing deployment down.
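
For reference, the teardown can be as simple as this (a sketch, assuming the deployment name we used earlier):

```console
$ kubectl delete deployment hashgenerator-dep
deployment.apps "hashgenerator-dep" deleted
```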

@@ -320,11 +320,12 @@ spec:
image: jakousa/dwk-app1:b7fc18de2376da80ff0cfc72cf581a9f94d10e64
```
<text-box name="Text editor of choice" variant="hint">
I personally use vscode to create these yaml files. It has helpful autofill, definitions, and syntax check for Kubernetes with the extension Kubernetes by Microsoft. Even now it helpfully warns us that we haven't defined resource limitations. I won't care about that warning yet, but you can figure it out if you want to.
I personally use Visual Studio Code to create these yaml files. It has helpful autocompletion, definitions, and syntax checking for Kubernetes with the Kubernetes extension by Microsoft. Even now it helpfully warns us that we haven't defined resource limitations. I won't care about that warning yet, but you can figure it out if you want to.
</text-box>
This looks a lot like the docker-compose.yamls we have previously written. Let's ignore what we don't know for now, which is mainly labels, and focus on the things that we know:
This looks a lot like the docker-compose.yaml files we have previously written. Let's ignore what we don't know for now, which is mainly labels, and focus on the things that we know:
- We're declaring what kind it is (kind: Deployment)
- We're giving it a name in the metadata (name: hashgenerator-dep)
@@ -350,7 +351,7 @@ $ kubectl apply -f https://raw.githubusercontent.com/kubernetes-hy/material-exam

Woah! The fact that you can apply a manifest from the internet just like that will come in handy.

Instead of deleting the deployment we could just apply a modified deployment on top of what we already have. Kubernetes will take care of rolling out a new version. By using tags (e.g. `dwk/image:tag`) with the deployments each time we update the image we can modify and apply the new deployment yaml. Previously you may have always used the 'latest' tag, or not thought about tags at all. From the tag Kubernetes will know that the image is a new one and pulls it.
Instead of deleting the deployment, we could just apply a modified deployment on top of what we already have. Kubernetes will take care of rolling out a new version. By using tags (e.g. `dwk/image:tag`) in the deployments, each time we update the image we can modify and apply the new deployment yaml. Previously you may have always used the 'latest' tag, or not thought about tags at all. From the tag Kubernetes will know that the image is a new one and pulls it.
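
As a sketch of that workflow (the path manifests/deployment.yaml is just an assumption about where you keep the manifest): edit the image tag in the yaml and re-apply it.

```console
$ kubectl apply -f manifests/deployment.yaml
deployment.apps/hashgenerator-dep configured
```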

When updating anything in Kubernetes, using delete is actually an anti-pattern and you should resort to it only as the last option. As long as you don't delete the resource, Kubernetes will do a rolling update, ensuring minimal (or no) downtime for the application. On the topic of anti-patterns: you should also always avoid doing anything imperatively! If your files don't tell Kubernetes and your team what the state should be, and instead you run commands that edit the state, you are just lowering the [bus factor](https://en.wikipedia.org/wiki/Bus_factor) for your cluster and application.

20 changes: 9 additions & 11 deletions data/part-1/2-introduction-to-debugging.md
@@ -14,7 +14,7 @@ After this section you

</text-box>

Kubernetes is a "self-healing" system, and we'll get back to what Kubernetes consists of and how it actually works in part 5. But at this stage "self-healing" is an excellent concept: Often you (the maintainer or developer) don't have to do anything in case something goes wrong with a pod or a container.
Kubernetes is a "self-healing" system, and we'll get back to what Kubernetes consists of and how it actually works in part 5. But at this stage "self-healing" is an excellent concept: usually, you (the maintainer or the developer) don't have to do anything in case something goes wrong with a pod or a container.

Sometimes you need to interfere, or you might have problems with your own configuration. As you are trying to find bugs in your configuration, start by eliminating all possibilities one by one. The key is to be systematic and **to question everything**. Here are the preliminary tools to solve problems.

@@ -39,7 +39,7 @@ $ kubectl apply -f https://raw.githubusercontent.com/kubernetes-hy/material-exam
$ kubectl describe deployment hashgenerator-dep
Name: hashgenerator-dep
Namespace: default
CreationTimestamp: Wed, 16 Sep 2020 16:17:39 +0300
CreationTimestamp: Fri, 05 Apr 2024 10:42:30 +0300
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=hashgenerator
@@ -70,7 +70,7 @@ $ kubectl describe deployment hashgenerator-dep
Normal ScalingReplicaSet 8m39s deployment-controller Scaled up replica set hashgenerator-dep-75bdcc94c to 1
```

There's a lot of information we are not ready to evaluate yet. But take a moment to read through everything. There're at least a few key information pieces we know, mostly because we defined them earlier in the yaml. The events are often the place to look for errors.
There's a lot of information we are not ready to evaluate yet. Take a moment to read through everything. There are at least a few key pieces of information that we recognize, mostly because we defined them earlier in the yaml. The _Events_ section is quite often the place to look for errors.

The command `describe` can be used for other resources as well. Let's see the pod next:

@@ -80,14 +80,14 @@ $ kubectl describe pod hashgenerator-dep-75bdcc94c-whwsm
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15m default-scheduler Successfully assigned default/hashgenerator-dep-75bdcc94c-whwsm to k3d-k3s-default-agent-0
Normal Pulling 15m kubelet, k3d-k3s-default-agent-0 Pulling image "jakousa/dwk-app1:b7fc18de2376da80ff0cfc72cf581a9f94d10e64"
Normal Pulled 15m kubelet, k3d-k3s-default-agent-0 Successfully pulled image "jakousa/dwk-app1:b7fc18de2376da80ff0cfc72cf581a9f94d10e64"
Normal Created 15m kubelet, k3d-k3s-default-agent-0 Created container hashgenerator
Normal Started 15m kubelet, k3d-k3s-default-agent-0 Started container hashgenerator
Normal Scheduled 26s default-scheduler Successfully assigned default/hashgenerator-dep-7877df98df-qmck9 to k3d-k3s-default-server-0
Normal Pulling 15m kubelet Pulling image "jakousa/dwk-app1:b7fc18de2376da80ff0cfc72cf581a9f94d10e64"
Normal Pulled 26s kubelet Container image "jakousa/dwk-app1:b7fc18de2376da80ff0cfc72cf581a9f94d10e64"
Normal Created 26s kubelet Created container hashgenerator
Normal Started 26s kubelet Started container hashgenerator
```

There's again a lot of information but let's focus on the events this time. Here we can see everything that happened. Scheduler put the pod to the node with the name "k3d-k3s-default-agent-0" successfully pulled the image and started the container. Everything is working as intended, excellent. The application is running.
There's again a lot of information but let's focus on the events this time. Here we can see everything that happened: the scheduler assigned the pod to the node called "k3d-k3s-default-server-0", the image was pulled, and the container was created and started. Everything is working as intended, excellent. The application is running.

Next, let's check that the application is actually doing what it should by reading the logs.
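
A sketch of how that could look, using the pod name from the describe output above (the `-f` flag keeps following the log stream):

```console
$ kubectl logs -f hashgenerator-dep-75bdcc94c-whwsm
```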

@@ -120,5 +120,3 @@ The view shows us the same information as was in the description. But the GUI of
In addition, at the bottom, you can open a terminal with the correct context.

"The best feature in my opinion is that when I do kubectl get pod in the terminal, the dashboard you are looking at is always in the right context. Additionally, I don't need to worry about working with stale information because everything is real-time." - [Matti Paksula](https://github.com/matti)

<quiz id="2dc3ffa9-6a47-4c08-857b-f87f87b9dd9e"></quiz>
32 changes: 15 additions & 17 deletions data/part-1/3-introduction-to-networking.md
@@ -52,26 +52,26 @@ Now we can view the response from http://localhost:3003 and confirm that it is w

</exercise>

External connections with docker used the flag -p `-p 3003:3000` or in docker-compose ports declaration. Unfortunately, Kubernetes isn't as simple. We're going to use either a *Service* resource or an *Ingress* resource.
External connections with Docker used the `-p` flag, e.g. `-p 3003:3000`, or the ports declaration in Docker Compose. Unfortunately, Kubernetes isn't as simple. We're going to use either a *Service* resource or an *Ingress* resource.

#### Before anything else ####

Because we are running our cluster inside docker with k3d we will have to do some preparations.
Because we are running our cluster inside Docker with k3d we will have to do some preparations.

Opening a route from outside of the cluster to the pod will not be enough if we have no means of accessing the cluster inside the containers!

```console
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b60f6c246ebb rancher/k3d-proxy:v3.0.0 "/bin/sh -c nginx-pr…" 2 hours ago Up 2 hours 80/tcp, 0.0.0.0:58264->6443/tcp k3d-k3s-default-serverlb
553041f96fc6 rancher/k3s:latest "/bin/k3s agent" 2 hours ago Up 2 hours k3d-k3s-default-agent-1
aebd23c2ef99 rancher/k3s:latest "/bin/k3s agent" 2 hours ago Up 2 hours k3d-k3s-default-agent-0
a34e49184d37 rancher/k3s:latest "/bin/k3s server --t…" 2 hours ago Up 2 hours k3d-k3s-default-server-0
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b60f6c246ebb ghcr.io/k3d-io/k3d-proxy:5 "/bin/sh -c nginx-pr…" 2 hours ago Up 2 hours 80/tcp, 0.0.0.0:58264->6443/tcp k3d-k3s-default-serverlb
553041f96fc6 rancher/k3s:latest "/bin/k3s agent" 2 hours ago Up 2 hours k3d-k3s-default-agent-1
aebd23c2ef99 rancher/k3s:latest "/bin/k3s agent" 2 hours ago Up 2 hours k3d-k3s-default-agent-0
a34e49184d37 rancher/k3s:latest "/bin/k3s server --t…" 2 hours ago Up 2 hours k3d-k3s-default-server-0
```

K3d has helpfully prepared us a port to access the API in 6443 and, in addition, has opened a port to 80. All requests to the load balancer here will be proxied to the same ports of all server nodes of the cluster. However, for testing purposes, we'll want an individual port open for a single node. Let's delete our old cluster and create a new one with port some ports open.
By scrolling a bit to the right we see that k3d has helpfully prepared the port 6443 for us to access the API. The port 80 is also open. All requests to the load balancer here will be proxied to the same ports of all server nodes of the cluster. However, for testing purposes, we'll want _an individual port open for a single node_. Let's delete our old cluster and create a new one with some ports open.

K3d documentation tells us how the ports are opened, we'll open local 8081 to 80 in k3d-k3s-default-serverlb and local 8082 to 30080 in k3d-k3s-default-agent-0. The 30080 is chosen almost completely randomly, but needs to be a value between 30000-32767 for the next step:
The k3d [documentation](https://k3d.io/v5.3.0/usage/commands/k3d_cluster_create/) tells us how the ports are opened: we'll open local port 8081 to 80 on k3d-k3s-default-serverlb and local port 8082 to 30080 on k3d-k3s-default-agent-0. The 30080 is chosen almost completely randomly, but it needs to be a value between 30000-32767 for the next step:

```console
$ k3d cluster delete
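# A sketch of the create command that follows (k3d v5 syntax, the exact flags used in the material may differ):
$ k3d cluster create --port 8082:30080@agent:0 -p 8081:80@loadbalancer --agents 2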
@@ -104,7 +104,7 @@ Your OS may support using the host network so no ports need to be opened. Howeve

#### What is a Service? ####

As *Deployment* resources took care of deployments for us. *Service* resource will take care of serving the application to connections from outside of the cluster.
As *Deployment* resources took care of deployments for us, *Service* resources will take care of serving the application to connections from outside (and also inside!) of the cluster.

Create a file service.yaml in the manifests folder; we need the service to do the following things:

@@ -141,7 +141,7 @@ $ kubectl apply -f manifests/service.yaml

As we've mapped local port 8082 to the node port 30080, we can now access it via http://localhost:8082.

We've now defined a nodeport with `type: NodePort`. *NodePorts* simply ports that are opened by Kubernetes to **all of the nodes** and the service will handle requests in that port. NodePorts are not flexible and require you to assign a different port for every application. As such NodePorts are not used in production but are helpful to know about.
We've now defined a nodeport with `type: NodePort`. [NodePorts](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) are simply ports that are opened by Kubernetes to **all of the nodes**, and the service will handle requests on that port. NodePorts are not flexible and require you to assign a different port for every application. As such, NodePorts are not used in production but are helpful to know about.
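
Putting the pieces of this section together, a NodePort Service of roughly this shape would match what is described here (a sketch: the selector label and the internal port 2345 are assumptions, the actual manifest in the material may differ):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hashresponse-svc
spec:
  type: NodePort
  selector:
    app: hashresponse   # assumed label, must match the labels in the deployment's pod template
  ports:
    - name: http
      protocol: TCP
      port: 2345        # port for traffic inside the cluster
      targetPort: 3000  # port the application listens on in the container
      nodePort: 30080   # port opened on every node, must be in the range 30000-32767
```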

What we'd want to use instead of NodePort is a *LoadBalancer* type Service, but this "only" works with cloud providers as it configures a, possibly costly, load balancer for the application. We'll get to know them in part 3.

@@ -155,9 +155,9 @@ There's one additional resource that will help us with serving the application,

#### What is an Ingress? ####

Incoming Network Access resource *Ingress* is a completely different type of resource from *Services*. If you've got your OSI model memorized, it works in layer 7 while services work on layer 4. You could see these used together: first the aforementioned *LoadBalancer* and then Ingress to handle routing. In our case, as we don't have a load balancer available we can use the Ingress as the first stop. If you're familiar with reverse proxies like Nginx, Ingress should seem familiar.
Incoming Network Access resource [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) is a completely different type of resource from *Services*. If you've got your [OSI model](https://en.wikipedia.org/wiki/OSI_model) memorized, it works in layer 7 while services work on layer 4. You could see these used together: first the aforementioned *LoadBalancer* and then Ingress to handle routing. In our case, as we don't have a load balancer available we can use the Ingress as the first stop. If you're familiar with reverse proxies like Nginx, Ingress should seem familiar.

Ingresses are implemented by various different "controllers". This means that ingresses do not automatically work in a cluster, but gives you the freedom of choosing which ingress controller works for you the best. K3s has [Traefik](https://containo.us/traefik/) installed already. Other options include Istio and Nginx Ingress Controller, [more here](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/).
Ingresses are implemented by various different "controllers". This means that ingresses do not automatically work in a cluster, but give you the freedom of choosing which ingress controller works best for you. K3s has [Traefik](https://containo.us/traefik/) installed already. Other options include Istio and Nginx Ingress Controller, [more here](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/).

Switching to Ingress will require us to create an Ingress resource. The Ingress will route incoming traffic forward to a *Service*, but the old *NodePort* Service won't do.

@@ -166,7 +166,7 @@ $ kubectl delete -f manifests/service.yaml
service "hashresponse-svc" deleted
```

A ClusterIP type Service resource gives the Service an internal IP that'll be accessible in the cluster.
A [ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/#type-clusterip) type Service resource gives the Service an internal IP that'll be accessible within the cluster.

The following will route TCP traffic from port 2345 to port 3000.

@@ -187,7 +187,7 @@ spec:
targetPort: 3000
```
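
For comparison, a complete ClusterIP Service of that shape might look roughly like this (a sketch, the selector label is again an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hashresponse-svc
spec:
  type: ClusterIP
  selector:
    app: hashresponse   # assumed label
  ports:
    - port: 2345        # port for traffic inside the cluster
      protocol: TCP
      targetPort: 3000  # port the application listens on in the container
```
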
For resource 2 the new *Ingress*.
The second resource we need is the new *Ingress*.
1. Declare that it should be an Ingress
2. And route all traffic to our service (a sketch of such an Ingress follows below)
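
A sketch of an Ingress doing that (the resource name is hypothetical, and the manifest in the material may differ in details):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dwk-material-ingress   # hypothetical name
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hashresponse-svc   # the ClusterIP Service defined above
                port:
                  number: 2345
```
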
@@ -256,5 +256,3 @@ We can see that the ingress is listening on port 80. As we already opened port t
The ping-pong application will need to listen for requests on '/pingpong', so you may have to make changes to its code. This can be avoided by configuring the ingress to rewrite the path, but we will leave that as an optional exercise. You can check out https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource

</exercise>

<quiz id="3ac75f4c-037e-4319-b581-0545bd3b76d9"></quiz>