diff --git a/.nvmrc b/.nvmrc
new file mode 100644
index 000000000..b97bf89b9
--- /dev/null
+++ b/.nvmrc
@@ -0,0 +1 @@
+v10.18.1
\ No newline at end of file
diff --git a/data/part-1/1-first-deploy.md b/data/part-1/1-first-deploy.md
index 46509261f..485070e4f 100644
--- a/data/part-1/1-first-deploy.md
+++ b/data/part-1/1-first-deploy.md
@@ -267,7 +267,7 @@ Keep this in mind if you want to avoid doing more work than necessary.
Let's get started!
-Create a web server that outputs "Server started in port NNNN" when it is started and deploy it into your Kubernetes cluster. Please make it so that an environment variable PORT can be used to choose that port. You will not have access to the port when it is running in Kuberetes yet. We will configure the access when we get to networking.
+Create a web server that outputs "Server started in port NNNN" when it is started and deploy it into your Kubernetes cluster. Please make it so that an environment variable PORT can be used to choose that port. You will not have access to the port when it is running in Kubernetes yet. We will configure the access when we get to networking.
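+
+If you want a starting point, here is a minimal sketch of such a server in Go (the default port, the response body, and the handler are illustrative assumptions, not requirements of the exercise):
+
+```go
+package main
+
+import (
+	"fmt"
+	"log"
+	"net/http"
+	"os"
+)
+
+func main() {
+	// Read the port from the PORT environment variable, falling back to a default.
+	port := os.Getenv("PORT")
+	if port == "" {
+		port = "3000" // assumed default; pick any port you like
+	}
+
+	fmt.Printf("Server started in port %s\n", port)
+
+	// A handler is not strictly required by the exercise, but makes the server useful later.
+	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
+		fmt.Fprintln(w, "hello from the server")
+	})
+	log.Fatal(http.ListenAndServe(":"+port, nil))
+}
+```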
@@ -287,7 +287,7 @@ $ kubectl scale deployment/hashgenerator-dep --replicas=4
$ kubectl set image deployment/hashgenerator-dep dwk-app1=jakousa/dwk-app1:b7fc18de2376da80ff0cfc72cf581a9f94d10e64
```
-Things start to get really cumbersome. It is hard to imagine how someone in their right mind could be maintaining multiple applications like that. Thankfully we will now use a declarative approach where we define how things should be rather than how they should change. This is more sustainable in the long term than the iterative approach and will let us keep our sanity.
+Things start to get really cumbersome. It is hard to imagine how someone in their right mind could be maintaining multiple applications like that. Thankfully we will now use a _declarative_ approach where we define how things should be rather than how they should change. This is more sustainable in the long run than the imperative approach and will let us keep our sanity.
Before redoing the previous steps via the declarative approach, let's take the existing deployment down.
@@ -320,11 +320,12 @@ spec:
image: jakousa/dwk-app1:b7fc18de2376da80ff0cfc72cf581a9f94d10e64
```
+
- I personally use vscode to create these yaml files. It has helpful autofill, definitions, and syntax check for Kubernetes with the extension Kubernetes by Microsoft. Even now it helpfully warns us that we haven't defined resource limitations. I won't care about that warning yet, but you can figure it out if you want to.
+ I personally use Visual Studio Code to create these yaml files. It has helpful autocompletion, definitions, and syntax checking for Kubernetes with the Kubernetes extension by Microsoft. Even now it helpfully warns us that we haven't defined resource limitations. I won't worry about that warning yet, but you can figure it out if you want to.
-This looks a lot like the docker-compose.yamls we have previously written. Let's ignore what we don't know for now, which is mainly labels, and focus on the things that we know:
+This looks a lot like the docker-compose.yaml files we have previously written. Let's ignore what we don't know for now, which is mainly labels, and focus on the things that we know:
- We're declaring what kind it is (kind: Deployment)
- We're giving it a name in its metadata (name: hashgenerator-dep)
@@ -350,7 +351,7 @@ $ kubectl apply -f https://raw.githubusercontent.com/kubernetes-hy/material-exam
Woah! The fact that you can apply a manifest from the internet just like that will come in handy.
-Instead of deleting the deployment we could just apply a modified deployment on top of what we already have. Kubernetes will take care of rolling out a new version. By using tags (e.g. `dwk/image:tag`) with the deployments each time we update the image we can modify and apply the new deployment yaml. Previously you may have always used the 'latest' tag, or not thought about tags at all. From the tag Kubernetes will know that the image is a new one and pulls it.
+Instead of deleting the deployment, we could just apply a modified deployment on top of what we already have. Kubernetes will take care of rolling out a new version. By using tags (e.g. `dwk/image:tag`) in the deployments, each time we update the image we only need to change the tag in the deployment yaml and apply it again. Previously you may have always used the 'latest' tag, or not thought about tags at all. From the tag Kubernetes will know that the image is a new one and will pull it.
When updating anything in Kubernetes the usage of delete is actually an anti-pattern and you should use it only as a last resort. As long as you don't delete the resource Kubernetes will do a rolling update, ensuring minimal (or zero) downtime for the application. On the topic of anti-patterns: you should also always avoid doing anything imperatively! If your files don't tell Kubernetes and your team what the state should be and instead you run commands that edit the state, you are just lowering the [bus factor](https://en.wikipedia.org/wiki/Bus_factor) for your cluster and application.
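+
+As a sketch of the declarative workflow: after bumping the image tag in the yaml, applying the file and following the rollout covers the whole update (both are standard kubectl commands; output omitted):
+
+```console
+$ kubectl apply -f manifests/deployment.yaml
+$ kubectl rollout status deployment hashgenerator-dep
+```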
diff --git a/data/part-1/2-introduction-to-debugging.md b/data/part-1/2-introduction-to-debugging.md
index 2725591ee..c9ecc3dc8 100644
--- a/data/part-1/2-introduction-to-debugging.md
+++ b/data/part-1/2-introduction-to-debugging.md
@@ -14,7 +14,7 @@ After this section you
-Kubernetes is a "self-healing" system, and we'll get back to what Kubernetes consists of and how it actually works in part 5. But at this stage "self-healing" is an excellent concept: Often you (the maintainer or developer) don't have to do anything in case something goes wrong with a pod or a container.
+Kubernetes is a "self-healing" system, and we'll get back to what Kubernetes consists of and how it actually works in part 5. But at this stage "self-healing" is an excellent concept: usually, you (the maintainer or the developer) don't have to do anything in case something goes wrong with a pod or a container.
Sometimes you need to intervene, or you might have problems with your own configuration. As you are trying to find bugs in your configuration, start by eliminating all possibilities one by one. The key is to be systematic and **to question everything**. Here are the preliminary tools to solve problems.
@@ -39,7 +39,7 @@ $ kubectl apply -f https://raw.githubusercontent.com/kubernetes-hy/material-exam
$ kubectl describe deployment hashgenerator-dep
Name: hashgenerator-dep
Namespace: default
- CreationTimestamp: Wed, 16 Sep 2020 16:17:39 +0300
+ CreationTimestamp: Fri, 05 Apr 2024 10:42:30 +0300
Labels:
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=hashgenerator
@@ -70,7 +70,7 @@ $ kubectl describe deployment hashgenerator-dep
Normal ScalingReplicaSet 8m39s deployment-controller Scaled up replica set hashgenerator-dep-75bdcc94c to 1
```
-There's a lot of information we are not ready to evaluate yet. But take a moment to read through everything. There're at least a few key information pieces we know, mostly because we defined them earlier in the yaml. The events are often the place to look for errors.
+There's a lot of information we are not ready to evaluate yet. Take a moment to read through everything. There are at least a few pieces of information we recognize, mostly because we defined them earlier in the yaml. The _Events_ section is quite often the place to look for errors.
The command `describe` can be used for other resources as well. Let's see the pod next:
@@ -80,14 +80,14 @@ $ kubectl describe pod hashgenerator-dep-75bdcc94c-whwsm
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
- Normal Scheduled 15m default-scheduler Successfully assigned default/hashgenerator-dep-75bdcc94c-whwsm to k3d-k3s-default-agent-0
- Normal Pulling 15m kubelet, k3d-k3s-default-agent-0 Pulling image "jakousa/dwk-app1:b7fc18de2376da80ff0cfc72cf581a9f94d10e64"
- Normal Pulled 15m kubelet, k3d-k3s-default-agent-0 Successfully pulled image "jakousa/dwk-app1:b7fc18de2376da80ff0cfc72cf581a9f94d10e64"
- Normal Created 15m kubelet, k3d-k3s-default-agent-0 Created container hashgenerator
- Normal Started 15m kubelet, k3d-k3s-default-agent-0 Started container hashgenerator
+ Normal Scheduled 26s default-scheduler Successfully assigned default/hashgenerator-dep-7877df98df-qmck9 to k3d-k3s-default-server-0
+  Normal  Pulling    26s   kubelet            Pulling image "jakousa/dwk-app1:b7fc18de2376da80ff0cfc72cf581a9f94d10e64"
+  Normal  Pulled     26s   kubelet            Successfully pulled image "jakousa/dwk-app1:b7fc18de2376da80ff0cfc72cf581a9f94d10e64"
+ Normal Created 26s kubelet Created container hashgenerator
+ Normal Started 26s kubelet Started container hashgenerator
```
-There's again a lot of information but let's focus on the events this time. Here we can see everything that happened. Scheduler put the pod to the node with the name "k3d-k3s-default-agent-0" successfully pulled the image and started the container. Everything is working as intended, excellent. The application is running.
+There's again a lot of information, but let's focus on the events this time. Here we can see everything that happened. The scheduler assigned the pod to the node "k3d-k3s-default-server-0", where the kubelet pulled the image and started the container. Everything is working as intended, excellent. The application is running.
Next, let's check that the application is actually doing what it should by reading the logs.
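+
+For example, with the pod name from the describe output above (your pod's generated suffix will differ):
+
+```console
+$ kubectl logs -f hashgenerator-dep-75bdcc94c-whwsm
+```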
@@ -120,5 +120,3 @@ The view shows us the same information as was in the description. But the GUI of
In addition, at the bottom, you can open a terminal with the correct context.
"The best feature in my opinion is that when I do kubectl get pod in the terminal, the dashboard you are looking at is always in the right context. Additionally, I don't need to worry about working with stale information because everything is real-time." - [Matti Paksula](http://github.com/matti)
-
-
diff --git a/data/part-1/3-introduction-to-networking.md b/data/part-1/3-introduction-to-networking.md
index 0d07418c8..09fafda2a 100644
--- a/data/part-1/3-introduction-to-networking.md
+++ b/data/part-1/3-introduction-to-networking.md
@@ -52,26 +52,26 @@ Now we can view the response from http://localhost:3003 and confirm that it is w
-External connections with docker used the flag -p `-p 3003:3000` or in docker-compose ports declaration. Unfortunately, Kubernetes isn't as simple. We're going to use either a *Service* resource or an *Ingress* resource.
+External connections with Docker used the -p flag (`-p 3003:3000`) or the ports declaration in Docker Compose. Unfortunately, Kubernetes isn't as simple. We're going to use either a *Service* resource or an *Ingress* resource.
#### Before anything else ####
-Because we are running our cluster inside docker with k3d we will have to do some preparations.
+Because we are running our cluster inside Docker with k3d we will have to do some preparations.
Opening a route from outside of the cluster to the pod will not be enough if we have no means of reaching the cluster that is running inside the containers!
```console
$ docker ps
- CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
- b60f6c246ebb rancher/k3d-proxy:v3.0.0 "/bin/sh -c nginx-pr…" 2 hours ago Up 2 hours 80/tcp, 0.0.0.0:58264->6443/tcp k3d-k3s-default-serverlb
- 553041f96fc6 rancher/k3s:latest "/bin/k3s agent" 2 hours ago Up 2 hours k3d-k3s-default-agent-1
- aebd23c2ef99 rancher/k3s:latest "/bin/k3s agent" 2 hours ago Up 2 hours k3d-k3s-default-agent-0
- a34e49184d37 rancher/k3s:latest "/bin/k3s server --t…" 2 hours ago Up 2 hours k3d-k3s-default-server-0
+ CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+ b60f6c246ebb ghcr.io/k3d-io/k3d-proxy:5 "/bin/sh -c nginx-pr…" 2 hours ago Up 2 hours 80/tcp, 0.0.0.0:58264->6443/tcp k3d-k3s-default-serverlb
+ 553041f96fc6 rancher/k3s:latest "/bin/k3s agent" 2 hours ago Up 2 hours k3d-k3s-default-agent-1
+ aebd23c2ef99 rancher/k3s:latest "/bin/k3s agent" 2 hours ago Up 2 hours k3d-k3s-default-agent-0
+ a34e49184d37 rancher/k3s:latest "/bin/k3s server --t…" 2 hours ago Up 2 hours k3d-k3s-default-server-0
```
-K3d has helpfully prepared us a port to access the API in 6443 and, in addition, has opened a port to 80. All requests to the load balancer here will be proxied to the same ports of all server nodes of the cluster. However, for testing purposes, we'll want an individual port open for a single node. Let's delete our old cluster and create a new one with port some ports open.
+By scrolling a bit to the right we see that K3d has helpfully prepared port 6443 for us to access the API. In addition, port 80 is open. All requests to the load balancer here will be proxied to the same ports of all server nodes of the cluster. However, for testing purposes, we'll want _an individual port open for a single node_. Let's delete our old cluster and create a new one with some ports open.
-K3d documentation tells us how the ports are opened, we'll open local 8081 to 80 in k3d-k3s-default-serverlb and local 8082 to 30080 in k3d-k3s-default-agent-0. The 30080 is chosen almost completely randomly, but needs to be a value between 30000-32767 for the next step:
+K3d [documentation](https://k3d.io/v5.3.0/usage/commands/k3d_cluster_create/) tells us how the ports are opened: we'll open local 8081 to 80 in k3d-k3s-default-serverlb and local 8082 to 30080 in k3d-k3s-default-agent-0. Port 30080 is chosen almost completely at random, but it needs to be a value between 30000-32767 for the next step:
```console
$ k3d cluster delete
@@ -104,7 +104,7 @@ Your OS may support using the host network so no ports need to be opened. Howeve
#### What is a Service? ####
-As *Deployment* resources took care of deployments for us. *Service* resource will take care of serving the application to connections from outside of the cluster.
+As *Deployment* resources took care of deployments for us, *Service* resources will take care of serving the application to connections from outside (and also inside!) of the cluster.
Create a file service.yaml in the manifests folder; we need the service to do the following things:
@@ -141,7 +141,7 @@ $ kubectl apply -f manifests/service.yaml
As we've published 8082 as 30080 we can access it now via http://localhost:8082.
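+
+A quick way to verify from the command line (assuming the application responds to plain HTTP GET requests):
+
+```console
+$ curl http://localhost:8082
+```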
-We've now defined a nodeport with `type: NodePort`. *NodePorts* simply ports that are opened by Kubernetes to **all of the nodes** and the service will handle requests in that port. NodePorts are not flexible and require you to assign a different port for every application. As such NodePorts are not used in production but are helpful to know about.
+We've now defined a nodeport with `type: NodePort`. [NodePorts](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) are simply ports that are opened by Kubernetes on **all of the nodes**, and the service will handle requests on that port. NodePorts are not flexible and require you to assign a different port for every application. As such NodePorts are not used in production but are helpful to know about.
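+
+For reference, such a NodePort service could look roughly like this (the label app: hashresponse, the service port 1234, and the container port 3000 are assumptions for this sketch):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: hashresponse-svc
+spec:
+  type: NodePort
+  selector:
+    app: hashresponse # the service routes to pods with this label
+  ports:
+    - name: http
+      nodePort: 30080 # the port opened on every node
+      protocol: TCP
+      port: 1234 # the port the service itself listens on
+      targetPort: 3000 # the port the container listens on
+```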
What we'd want to use instead of NodePort would be a *LoadBalancer* type service but this "only" works with cloud providers as it configures a, possibly costly, load balancer for it. We'll get to know them in part 3.
@@ -155,9 +155,9 @@ There's one additional resource that will help us with serving the application,
#### What is an Ingress? ####
-Incoming Network Access resource *Ingress* is a completely different type of resource from *Services*. If you've got your OSI model memorized, it works in layer 7 while services work on layer 4. You could see these used together: first the aforementioned *LoadBalancer* and then Ingress to handle routing. In our case, as we don't have a load balancer available we can use the Ingress as the first stop. If you're familiar with reverse proxies like Nginx, Ingress should seem familiar.
+Incoming Network Access resource [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) is a completely different type of resource from *Services*. If you've got your [OSI model](https://en.wikipedia.org/wiki/OSI_model) memorized, it works in layer 7 while services work on layer 4. You could see these used together: first the aforementioned *LoadBalancer* and then Ingress to handle routing. In our case, as we don't have a load balancer available we can use the Ingress as the first stop. If you're familiar with reverse proxies like Nginx, Ingress should seem familiar.
-Ingresses are implemented by various different "controllers". This means that ingresses do not automatically work in a cluster, but gives you the freedom of choosing which ingress controller works for you the best. K3s has [Traefik](https://containo.us/traefik/) installed already. Other options include Istio and Nginx Ingress Controller, [more here](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/).
+Ingresses are implemented by various different "controllers". This means that ingresses do not automatically work in a cluster, but give you the freedom of choosing which ingress controller works for you the best. K3s has [Traefik](https://containo.us/traefik/) installed already. Other options include Istio and Nginx Ingress Controller, [more here](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/).
Switching to Ingress will require us to create an Ingress resource. Ingress will route incoming traffic forward to a *Service*, but the old *NodePort* Service won't do.
@@ -166,7 +166,7 @@ $ kubectl delete -f manifests/service.yaml
service "hashresponse-svc" deleted
```
-A ClusterIP type Service resource gives the Service an internal IP that'll be accessible in the cluster.
+A [ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/#type-clusterip) type Service resource gives the Service an internal IP that'll be accessible within the cluster.
The following will route TCP traffic from port 2345 to port 3000.
@@ -187,7 +187,7 @@ spec:
targetPort: 3000
```
-For resource 2 the new *Ingress*.
+The second resource we need is the new *Ingress*.
1. Declare that it should be an Ingress
2. And route all traffic to our service
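+
+A sketch of such an ingress.yaml, assuming the modern networking.k8s.io/v1 schema and the ClusterIP service above (the metadata name is a placeholder; the service name and port come from the service manifest):
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: dwk-material-ingress
+spec:
+  rules:
+    - http:
+        paths:
+          # Route all traffic under / to the hashresponse service
+          - path: /
+            pathType: Prefix
+            backend:
+              service:
+                name: hashresponse-svc
+                port:
+                  number: 2345
+```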
@@ -256,5 +256,3 @@ We can see that the ingress is listening on port 80. As we already opened port t
The ping-pong application will need to listen for requests on '/pingpong', so you may have to make changes to its code. This can be avoided by configuring the ingress to rewrite the path, but we will leave that as an optional exercise. You can check out https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
-
-
diff --git a/data/part-1/4-introduction-to-storage.md b/data/part-1/4-introduction-to-storage.md
index 2375d0b4f..4e8d01aad 100644
--- a/data/part-1/4-introduction-to-storage.md
+++ b/data/part-1/4-introduction-to-storage.md
@@ -22,15 +22,15 @@ There are multiple types of volumes and we'll get started with two of them.
### Simple Volume ###
-Where in docker and docker-compose it would essentially mean that we had something persistent, here that is not the case. *emptyDir* volumes are shared filesystems inside a pod, this means that their lifecycle is tied to a pod. When the pod is destroyed the data is lost. In addition, simply moving the pod from another node will destroy the contents of the volume as the space is reserved from the node the pod is running on. Even with the limitations it may be used as a cache as it persists between container restarts or it can be used to share files between two containers in a pod.
+A [volume](https://docs.docker.com/storage/volumes/) in Docker and Docker Compose is the way to persist data that the containers use. With Kubernetes' [simple volumes](https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/), that is not quite the case.
-Before we can get started with this, we need an application that shares data with another application. In this case, it will work as a method to share simple log files with each other. We'll need to develop the apps:
+The simple Kubernetes volumes, in technical terms *emptyDir* volumes, are shared filesystems _inside a pod_. This means that their lifecycle is tied to the pod: when the pod is destroyed the data is lost. In addition, simply moving the pod to another node will destroy the contents of the volume, as the space is reserved from the node the pod is running on. So you certainly should not use [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) volumes e.g. for backing up a database. Even with these limitations it may be used as a cache, as it persists between container restarts, or it can be used to share files between two containers in a pod.
-App 1 will check if /usr/src/app/files/image.jpg exists and if not download a random image and save it as image.jpg. Any HTTP request will trigger a new image generation.
+Before we can get started with this, we need an application that shares data with another application. In this case, the volume will work as a method for sharing simple files between the two. We'll need to develop:
+- App 1 that will check if /usr/src/app/files/image.jpg exists and, if not, downloads a random image and saves it as image.jpg. Any HTTP request will trigger a new image generation.
+- App 2 that will check for the file /usr/src/app/files/image.jpg and shows it if it is available.
-App 2 will check for /usr/src/app/files/image.jpg and show it if it is available.
-
-They share a deployment so that both of them are inside the same pod. My version is available for you to use [here](https://github.com/kubernetes-hy/material-example/blob/b9ff709b4af7ca13643635e07df7367b54f5c575/app3/manifests/deployment.yaml). The example includes ingress and service to access the application.
+The apps share a deployment so that both of them are **inside the same pod**. My version is available for you to use [here](https://github.com/kubernetes-hy/material-example/blob/b9ff709b4af7ca13643635e07df7367b54f5c575/app3/manifests/deployment.yaml). The example includes ingress and service to access the application.
**deployment.yaml**
@@ -65,7 +65,7 @@ spec:
mountPath: /usr/src/app/files
```
-As the display is dependant on the volume we can confirm that it works by accessing the image-response and getting the image. The provided ingress used the previously opened port 8081
+As the display is dependent on the volume, we can confirm that it works by accessing image-response and getting the image. The provided ingress uses the previously opened port 8081.
Note that all data is lost when the pod goes down.
@@ -79,13 +79,15 @@ Note that all data is lost when the pod goes down.
Either application can generate the hash. The reader or the writer.
+ You may find [this](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_logs/) helpful now since there is more than one container running inside a pod.
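+
+ For example, with hypothetical pod and container names, the -c flag picks one container's logs:
+
+ ```console
+ $ kubectl logs -f dwk-app3-dep-xxxxx -c image-finder
+ ```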
+
### Persistent Volumes ###
This type of storage is what you probably had in mind when we started talking about volumes. Unfortunately, we're quite limited with the options here and will return to *PersistentVolumes* briefly in Part 2 and again in Part 3 with GKE.
-The reason for the difficulty is because you should not store data with the application or create a dependency on the filesystem by the application. Kubernetes supports cloud providers very well and you can run your own storage system. During this course, we are not going to run our own storage system as that would be a huge undertaking and most likely "in real life" you are going to use something hosted by a cloud provider. This topic would probably be a part of its own, but let's scratch the surface and try something you can use to run something at home.
+The reason for the difficulty is that you should not store data with the application or create a dependency on the filesystem by the application. Kubernetes supports cloud providers very well and you can run your own storage system. During this course, we are not going to run our own storage system as that would be a huge undertaking and most likely "in real life" you are going to use something hosted by a cloud provider. This topic would probably be a part of its own, but let's scratch the surface and try something you can use to run something at home.
A *local* volume is a *PersistentVolume* that binds a path from the node to use as a storage. This ties the volume to the node.
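+
+A sketch of what a local PersistentVolume manifest could look like (the path, capacity, storage class, and node name are assumptions for a k3d setup; local volumes require the nodeAffinity block to pin them to a node):
+
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: example-pv
+spec:
+  storageClassName: my-example-pv
+  capacity:
+    storage: 1Gi # enough for the example images
+  accessModes:
+    - ReadWriteOnce
+  local:
+    path: /tmp/kube # this directory must exist on the node
+  nodeAffinity:
+    required:
+      nodeSelectorTerms:
+        - matchExpressions:
+            - key: kubernetes.io/hostname
+              operator: In
+              values:
+                - k3d-k3s-default-agent-0
+```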
@@ -166,7 +168,7 @@ And apply it with persistentvolume.yaml and persistentvolumeclaim.yaml.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-hy/material-example/master/app3/manifests/deployment-persistent.yaml
```
-With the previous service and ingress we can access it from http://localhost:8081. To confirm that the data is persistent we can run
+With the previous service and ingress, we can access it from http://localhost:8081. To confirm that the data is persistent we can run
```console
$ kubectl delete -f https://raw.githubusercontent.com/kubernetes-hy/material-example/master/app3/manifests/deployment-persistent.yaml
@@ -210,19 +212,19 @@ If you are interested in learning more about running your own storage you can ch
Make sure to cache the image in a volume so that a new image isn't fetched from the API every time we access the application or the container crashes.
- Best way to test what happens when your container shuts down is likely by shutting down the container, so you can add logic for that as well, for testing purposes.
+ The best way to test what happens when your container shuts down is likely by shutting down the container, so you can add logic for that as well, for testing purposes.
- For the project we'll need to do some coding to start seeing results in the next part.
+ For the project, we'll need to do some coding to start seeing results in the next part.
1. Add an input field. The input should not take todos that are over 140 characters long.
2. Add a send button. It does not have to send the todo yet.
- 3. Add a list for the existing todos with some hardcoded todos.
+ 3. Add a list of the existing todos with some hardcoded todos.
Maybe something similar to this: