From 05ada8acd575d4a5d10ec981fa0e52763a95b62e Mon Sep 17 00:00:00 2001 From: "Lubomir I. Ivanov" Date: Thu, 14 Mar 2019 23:15:21 +0200 Subject: [PATCH 1/4] kubeadm: update the 1.14 HA guide --- .../setup/independent/high-availability.md | 323 +++++++++--------- 1 file changed, 169 insertions(+), 154 deletions(-) diff --git a/content/en/docs/setup/independent/high-availability.md b/content/en/docs/setup/independent/high-availability.md index 10e0e2b32ce37..66a50b5b863f2 100644 --- a/content/en/docs/setup/independent/high-availability.md +++ b/content/en/docs/setup/independent/high-availability.md @@ -19,15 +19,12 @@ control plane nodes and etcd members are separated. Before proceeding, you should carefully consider which approach best meets the needs of your applications and environment. [This comparison topic](/docs/setup/independent/ha-topology/) outlines the advantages and disadvantages of each. -Your clusters must run Kubernetes version 1.12 or later. You should also be aware that -setting up HA clusters with kubeadm is still experimental and will be further simplified -in future versions. You might encounter issues with upgrading your clusters, for example. +You should also be aware that setting up HA clusters with kubeadm is still experimental and will be further +simplified in future versions. You might encounter issues with upgrading your clusters, for example. We encourage you to try either approach, and provide us with feedback in the kubeadm [issue tracker](https://github.com/kubernetes/kubeadm/issues/new). -Note that the alpha feature gate `HighAvailability` is deprecated in v1.12 and removed in v1.13. - -See also [The HA upgrade documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-ha-1-13). +See also [The HA upgrade documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-ha-1-14). {{< caution >}} This page does not address running your cluster on a cloud provider. In a cloud @@ -57,28 +54,12 @@ For the external etcd cluster only, you also need: - Three additional machines for etcd members -{{< note >}} -The following examples run Calico as the Pod networking provider. If you run another -networking provider, make sure to replace any default values as needed. -{{< /note >}} - {{% /capture %}} {{% capture steps %}} ## First steps for both methods -{{< note >}} -**Note**: All commands on any control plane or etcd node should be -run as root. -{{< /note >}} - -- Some CNI network plugins like Calico require a CIDR such as `192.168.0.0/16` and - some like Weave do not. See the [CNI network - documentation](/docs/setup/independent/create-cluster-kubeadm/#pod-network). - To add a pod CIDR set the `podSubnet: 192.168.0.0/16` field under - the `networking` object of `ClusterConfiguration`. - ### Create load balancer for kube-apiserver {{< note >}} @@ -119,38 +100,6 @@ option. Your cluster requirements may need a different configuration. 1. Add the remaining control plane nodes to the load balancer target group. -### Configure SSH - -SSH is required if you want to control all nodes from a single machine. - -1. Enable ssh-agent on your main device that has access to all other nodes in - the system: - - ``` - eval $(ssh-agent) - ``` - -1. Add your SSH identity to the session: - - ``` - ssh-add ~/.ssh/path_to_private_key - ``` - -1. SSH between nodes to check that the connection is working correctly. 
-   - When you SSH to any node, make sure to add the `-A` flag:
-
-     ```
-     ssh -A 10.0.0.7
-     ```
-
-   - When using sudo on any node, make sure to preserve the environment so SSH
-     forwarding works:
-
-     ```
-     sudo -E -s
-     ```
-
 ## Stacked control plane and etcd nodes
 
 ### Steps for the first control plane node
 
@@ -160,9 +109,6 @@ SSH is required if you want to control all nodes from a single machine.
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    kubernetesVersion: stable
-   apiServer:
-     certSANs:
-     - "LOAD_BALANCER_DNS"
    controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
 
   - `kubernetesVersion` should be set to the Kubernetes version to use. This
@@ -170,131 +116,119 @@ SSH is required if you want to control all nodes from a single machine.
     example uses `stable`.
   - `controlPlaneEndpoint` should match the address or DNS and port of the load balancer.
   - It's recommended that the versions of kubeadm, kubelet, kubectl and Kubernetes match.

-1. Make sure that the node is in a clean state:
+   {{< note >}}
+   Some CNI network plugins like Calico require a CIDR such as `192.168.0.0/16` and
+   some like Weave do not. See the [CNI network
+   documentation](/docs/setup/independent/create-cluster-kubeadm/#pod-network).
+   To add a pod CIDR set the `podSubnet: 192.168.0.0/16` field under
+   the `networking` object of `ClusterConfiguration`.
+   {{< /note >}}
+
+1. Initialize the control plane:

   ```sh
-   sudo kubeadm init --config=kubeadm-config.yaml
+   sudo kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs
   ```

-   You should see something like:
+   - The `--experimental-upload-certs` flag is used to upload the certificates that should be shared
+     across all the control-plane instances to the cluster. If you prefer to copy certs across
+     control-plane nodes manually or using automation tools, remove this flag and refer to the [Manual
+     certificate distribution](#manual-certs) section below.
+
+   After the command completes you should see something like this:

   ```sh
   ...
-   You can now join any number of machines by running the following on each node
-   as root:
+   You can now join any number of control-plane node by running the following command on each as a root:
+   kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --experimental-control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07

-   kubeadm join 192.168.0.200:6443 --token j04n3m.octy8zely83cy2ts --discovery-token-ca-cert-hash sha256:84938d2a22203a8e56a787ec0c6ddad7bc7dbd52ebabc62fd5f4dbea72b14d1f
-   ```
+   Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
+   As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward.

-1. Copy this output to a text file. You will need it later to join other control plane nodes to the
-   cluster.
+   Then you can join any number of worker nodes by running the following on each as root:
+   kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
+   ```

-1. Apply the Weave CNI plugin:
+   - Copy this output to a text file. You will need it later to join control plane and worker nodes to the cluster.
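+   - If you also want to keep the full init output for later reference, one option is to capture it
+     while running the command. A minimal sketch, assuming the purely illustrative file name `kubeadm-init.out`:
+
+     ```sh
+     sudo kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.out
+     ```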
+ - When `--experimental-upload-certs` is used with `kubeadm init`, the certificates of the primary control plane + are encrypted and uploaded in the `kubeadm-certs` Secret. + - Please note that the `kubeadm-certs` Secret and decryption key expire after two hours. To re-upload the certificates + and generate a new decryption key, use the following command on a control plane that is already joined the cluster: - ```sh - kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" - ``` + ```sh + sudo kubeadm init phase upload-certs --experimental-upload-certs + ``` -1. Type the following and watch the pods of the components get started: + {{< caution >}} + As stated in the command output, please note that the certificate-key gives access to cluster sensitive data, keep it secret! + {{< /caution >}} - ```sh - kubectl get pod -n kube-system -w - ``` +1. Apply the CNI plugin of your choice: - - It's recommended that you join new control plane nodes only after the first node has finished initializing. + [Follow these instructions](/docs/setup/independent/create-cluster-kubeadm/#pod-network) to install + the pod network. Make sure this corresponds to whichever pod CIDR you provided in the kubeadm configuration file. -1. Copy the certificate files from the first control plane node to the rest: + In this example we are using Weave Net: - In the following example, replace `CONTROL_PLANE_IPS` with the IP addresses of the - other control plane nodes. ```sh - USER=ubuntu # customizable - CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8" - for host in ${CONTROL_PLANE_IPS}; do - scp /etc/kubernetes/pki/ca.crt "${USER}"@$host: - scp /etc/kubernetes/pki/ca.key "${USER}"@$host: - scp /etc/kubernetes/pki/sa.key "${USER}"@$host: - scp /etc/kubernetes/pki/sa.pub "${USER}"@$host: - scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host: - scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host: - scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt - scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key - scp /etc/kubernetes/admin.conf "${USER}"@$host: - done + kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" ``` -{{< caution >}} -Copy only the certificates in the above list. kubeadm will take care of generating the rest of the certificates -with the required SANs for the joining control-plane instances. If you copy all the certificates by mistake, -the creation of additional nodes could fail due to a lack of required SANs. -{{< /caution >}} - -### Steps for the rest of the control plane nodes - -1. Move the files created by the previous step where `scp` was used: +1. Type the following and watch the pods of the control plane components get started: ```sh - USER=ubuntu # customizable - mkdir -p /etc/kubernetes/pki/etcd - mv /home/${USER}/ca.crt /etc/kubernetes/pki/ - mv /home/${USER}/ca.key /etc/kubernetes/pki/ - mv /home/${USER}/sa.pub /etc/kubernetes/pki/ - mv /home/${USER}/sa.key /etc/kubernetes/pki/ - mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/ - mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/ - mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt - mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key - mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf + kubectl get pod -n kube-system -w ``` - This process writes all the requested files in the `/etc/kubernetes` folder. - -1. 
Start `kubeadm join` on this node using the join command that was previously given to you by `kubeadm init` on
-   the first node. It should look something like this:
+### Steps for the rest of the control plane nodes

-   ```sh
-   sudo kubeadm join 192.168.0.200:6443 --token j04n3m.octy8zely83cy2ts --discovery-token-ca-cert-hash sha256:84938d2a22203a8e56a787ec0c6ddad7bc7dbd52ebabc62fd5f4dbea72b14d1f --experimental-control-plane
-   ```
+{{< caution >}}
+You must join new control plane nodes sequentially, only after the first node has finished initializing.
+{{< /caution >}}

-   - Notice the addition of the `--experimental-control-plane` flag. This flag automates joining this
-     control plane node to the cluster.
+For each additional control plane node you should:

-1. Type the following and watch the pods of the components get started:
+1. Execute the join command that was previously
+   given to you by `kubeadm init` on the first node. It should look something like this:

   ```sh
-   kubectl get pod -n kube-system -w
+   kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --experimental-control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
   ```

-1. Repeat these steps for the rest of the control plane nodes.
+   - The `--experimental-control-plane` flag tells `kubeadm join` to create a new control plane.
+   - The `--certificate-key ...` will cause the control plane certificates to be downloaded
+     from the `kubeadm-certs` Secret in the cluster and be decrypted using the given key.
 
 ## External etcd nodes
 
+Setting up a cluster with external etcd nodes is similar to the procedure used for stacked etcd
+with the exception that you should set up etcd first, and you should pass the etcd information
+in the kubeadm config file.
+
 ### Set up the etcd cluster
 
-- Follow [these instructions](/docs/setup/independent/setup-ha-etcd-with-kubeadm/)
-  to set up the etcd cluster.
+1. Follow [these instructions](/docs/setup/independent/setup-ha-etcd-with-kubeadm/)
+   to set up the etcd cluster.

-### Set up the first control plane node
+1. Set up SSH as described [here](#manual-certs).

-1. Copy the following files from any node from the etcd cluster to this node:
+1. Copy the following files from any etcd node in the cluster to the first control plane node:

   ```sh
   export CONTROL_PLANE="ubuntu@10.0.0.7"
-
-+scp /etc/kubernetes/pki/etcd/ca.crt "${CONTROL_PLANE}":
-
-+scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${CONTROL_PLANE}":
-
-+scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}":
+   scp /etc/kubernetes/pki/etcd/ca.crt "${CONTROL_PLANE}":
+   scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${CONTROL_PLANE}":
+   scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}":
   ```

-   - Replace the value of `CONTROL_PLANE` with the `user@host` of this machine.
+   - Replace the value of `CONTROL_PLANE` with the `user@host` of the first control plane machine.
+
+### Set up the first control plane node
 
 1. Create a file called `kubeadm-config.yaml` with the following contents:
 
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    kubernetesVersion: stable
-   apiServer:
-     certSANs:
-     - "LOAD_BALANCER_DNS"
    controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
    etcd:
        external:
            endpoints:
            - https://ETCD_0_IP:2379
            - https://ETCD_1_IP:2379
            - https://ETCD_2_IP:2379
            caFile: /etc/kubernetes/pki/etcd/ca.crt
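            # caFile is the etcd cluster's CA, while certFile and keyFile below are the
            # client certificate and key the kube-apiserver uses to authenticate to the
            # external etcd cluster; these are the files copied in the previous step.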
certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
            keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key

-   - The difference between stacked etcd and external etcd here is that we are using the `external` field for `etcd` in the kubeadm config. In the case of the stacked etcd topology this is managed automatically.
+   {{< note >}}
+   The difference between stacked etcd and external etcd here is that we are using
+   the `external` field for `etcd` in the kubeadm config. In the case of the stacked
+   etcd topology this is managed automatically.
+   {{< /note >}}

-   - Replace the following variables in the template with the appropriate values for your cluster:
+   - Replace the following variables in the config template with the appropriate values for your cluster:

    - `LOAD_BALANCER_DNS`
    - `LOAD_BALANCER_PORT`
    - `ETCD_0_IP`
    - `ETCD_1_IP`
    - `ETCD_2_IP`

-1. Run `kubeadm init --config kubeadm-config.yaml` on this node.
+The following steps are exactly the same as for the stacked etcd setup:

-1. Write the join command that is returned to a text file for later use.
+1. Run `kubeadm init --config kubeadm-config.yaml --experimental-upload-certs` on this node.

-1. Apply the Weave CNI plugin:
+1. Write the output join commands that are returned to a text file for later use.
+
+1. Apply the CNI plugin of your choice. The given example is for Weave Net:

   ```sh
   kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
   ```
 
 ### Steps for the rest of the control plane nodes
 
-To add the rest of the control plane nodes, follow [these instructions](#steps-for-the-rest-of-the-control-plane-nodes).
-The steps are the same as for the stacked etcd setup, with the exception that a local
-etcd member is not created.
-
-To summarize:
+The steps are the same as for the stacked etcd setup:
 
 - Make sure the first control plane node is fully initialized.
-- Copy certificates between the first control plane node and the other control plane nodes.
-- Join each control plane node with the join command you saved to a text file, plus add the `--experimental-control-plane` flag.
+- Join each control plane node with the join command you saved to a text file. It's recommended
+to join the control plane nodes one at a time.
+- Don't forget that the decryption key from `--certificate-key` expires after two hours, by default.
 
 ## Common tasks after bootstrapping control plane
 
-### Install a pod network
+### Install workers
 
-[Follow these instructions](/docs/setup/independent/create-cluster-kubeadm/#pod-network) to install
-the pod network. Make sure this corresponds to whichever pod CIDR you provided
-in the master configuration file.
+Worker nodes can be joined to the cluster with the command you stored previously
+as the output from the `kubeadm init` command:
 
-### Install workers
+```sh
+kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
+```
 
-Each worker node can now be joined to the cluster with the command returned from any of the
-`kubeadm init` commands. The flag `--experimental-control-plane` should not be added to worker nodes.
+## Manual certificate distribution {#manual-certs}
+
+If you choose not to use `kubeadm init` with the `--experimental-upload-certs` flag, you must
+manually copy the certificates from the primary control plane node to the
+joining control plane nodes.
+
+There are many ways to do this.
In the following example we are using `ssh` and `scp`: + +SSH is required if you want to control all nodes from a single machine. + +1. Enable ssh-agent on your main device that has access to all other nodes in + the system: + + ``` + eval $(ssh-agent) + ``` + +1. Add your SSH identity to the session: + + ``` + ssh-add ~/.ssh/path_to_private_key + ``` + +1. SSH between nodes to check that the connection is working correctly. + + - When you SSH to any node, make sure to add the `-A` flag: + + ``` + ssh -A 10.0.0.7 + ``` + + - When using sudo on any node, make sure to preserve the environment so SSH + forwarding works: + + ``` + sudo -E -s + ``` + +1. After configuring SSH on all the nodes you should run the following script on the first control plane node after + running `kubeadm init. This script will copy the certificates from the first control plane node to the other + control plane nodes: + + In the following example, replace `CONTROL_PLANE_IPS` with the IP addresses of the + other control plane nodes. + ```sh + USER=ubuntu # customizable + CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8" + for host in ${CONTROL_PLANE_IPS}; do + scp /etc/kubernetes/pki/ca.crt "${USER}"@$host: + scp /etc/kubernetes/pki/ca.key "${USER}"@$host: + scp /etc/kubernetes/pki/sa.key "${USER}"@$host: + scp /etc/kubernetes/pki/sa.pub "${USER}"@$host: + scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host: + scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host: + scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt + scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key + done + ``` -Each worker node can now be joined to the cluster with the command returned from any of the -`kubeadm init` commands. The flag `--experimental-control-plane` should not be added to worker nodes. +{{< caution >}} +Copy only the certificates in the above list. kubeadm will take care of generating the rest of the certificates +with the required SANs for the joining control-plane instances. If you copy all the certificates by mistake, +the creation of additional nodes could fail due to a lack of required SANs. +{{< /caution >}} + +1. Then on each joining control plane node you have to run the following script before running `kubeadm join`: + + ```sh + USER=ubuntu # customizable + mkdir -p /etc/kubernetes/pki/etcd + mv /home/${USER}/ca.crt /etc/kubernetes/pki/ + mv /home/${USER}/ca.key /etc/kubernetes/pki/ + mv /home/${USER}/sa.pub /etc/kubernetes/pki/ + mv /home/${USER}/sa.key /etc/kubernetes/pki/ + mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/ + mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/ + mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt + mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key + ``` {{% /capture %}} From 137833cc2294e6a33ac78249a039040101de9079 Mon Sep 17 00:00:00 2001 From: "Lubomir I. Ivanov" Date: Fri, 15 Mar 2019 13:53:55 +0200 Subject: [PATCH 2/4] kubeadm: try to fix note/caution indent in HA page --- .../setup/independent/high-availability.md | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/content/en/docs/setup/independent/high-availability.md b/content/en/docs/setup/independent/high-availability.md index 66a50b5b863f2..0194932b0f3be 100644 --- a/content/en/docs/setup/independent/high-availability.md +++ b/content/en/docs/setup/independent/high-availability.md @@ -116,13 +116,13 @@ option. Your cluster requirements may need a different configuration. 
- `controlPlaneEndpoint` should match the address or DNS and port of the load balancer. - It's recommended that the versions of kubeadm, kubelet, kubectl and Kubernetes match. - {{< note >}} - Some CNI network plugins like Calico require a CIDR such as `192.168.0.0/16` and - some like Weave do not. See the [CNI network - documentation](/docs/setup/independent/create-cluster-kubeadm/#pod-network). - To add a pod CIDR set the `podSubnet: 192.168.0.0/16` field under - the `networking` object of `ClusterConfiguration`. - {{< /note >}} +{{< note >}} +Some CNI network plugins like Calico require a CIDR such as `192.168.0.0/16` and +some like Weave do not. See the [CNI network +documentation](/docs/setup/independent/create-cluster-kubeadm/#pod-network). +To add a pod CIDR set the `podSubnet: 192.168.0.0/16` field under +the `networking` object of `ClusterConfiguration`. +{{< /note >}} 1. Initialize the control plane: @@ -158,9 +158,9 @@ option. Your cluster requirements may need a different configuration. sudo kubeadm init phase upload-certs --experimental-upload-certs ``` - {{< caution >}} - As stated in the command output, please note that the certificate-key gives access to cluster sensitive data, keep it secret! - {{< /caution >}} +{{< caution >}} +As stated in the command output, please note that the certificate-key gives access to cluster sensitive data, keep it secret! +{{< /caution >}} 1. Apply the CNI plugin of your choice: @@ -240,11 +240,11 @@ in the kubeadm config file. certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key - {{< note >}} - The difference between stacked etcd and external etcd here is that we are using - the `external` field for `etcd` in the kubeadm config. In the case of the stacked - etcd topology this is managed automatically. - {{< /note >}} +{{< note >}} +The difference between stacked etcd and external etcd here is that we are using +the `external` field for `etcd` in the kubeadm config. In the case of the stacked +etcd topology this is managed automatically. +{{< /note >}} - Replace the following variables in the config template with the appropriate values for your cluster: From a56bdf63315498606d605a5cd26f4be45fe33a22 Mon Sep 17 00:00:00 2001 From: "Lubomir I. Ivanov" Date: Fri, 15 Mar 2019 14:01:15 +0200 Subject: [PATCH 3/4] kubeadm: fix missing sudo and minor amends in HA doc --- .../docs/setup/independent/high-availability.md | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-) diff --git a/content/en/docs/setup/independent/high-availability.md b/content/en/docs/setup/independent/high-availability.md index 0194932b0f3be..ca7c6e3838a97 100644 --- a/content/en/docs/setup/independent/high-availability.md +++ b/content/en/docs/setup/independent/high-availability.md @@ -187,11 +187,11 @@ You must join new control plane nodes sequentially, only after the first node ha For each additional control plane node you should: -1. Execute the join command that was previously - given to you by `kubeadm init` on the first node. It should look something like this: +1. Execute the join command that was previously given to you by the `kubeadm init` output on the first node. 
It should look something like this:

   ```sh
-   kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --experimental-control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
+   sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --experimental-control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
   ```
 
   - The `--experimental-control-plane` flag tells `kubeadm join` to create a new control plane.
@@ -256,7 +256,7 @@ etcd topology this is managed automatically.
 
 The following steps are exactly the same as for the stacked etcd setup:
 
-1. Run `kubeadm init --config kubeadm-config.yaml --experimental-upload-certs` on this node.
+1. Run `sudo kubeadm init --config kubeadm-config.yaml --experimental-upload-certs` on this node.
 
 1. Write the output join commands that are returned to a text file for later use.
 
@@ -283,7 +283,7 @@ Worker nodes can be joined to the cluster with the command you stored previously
 as the output from the `kubeadm init` command:
 
 ```sh
-kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
+sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
 ```
 
 ## Manual certificate distribution {#manual-certs}
 
@@ -325,7 +325,7 @@ SSH is required if you want to control all nodes from a single machine.
 
 1. After configuring SSH on all the nodes you should run the following script on the first control plane node after
-   running `kubeadm init. This script will copy the certificates from the first control plane node to the other
+   running `kubeadm init`. This script will copy the certificates from the first control plane node to the other
   control plane nodes:
 
   In the following example, replace `CONTROL_PLANE_IPS` with the IP addresses of the
@@ -351,7 +351,8 @@ with the required SANs for the joining control-plane instances. If you copy all
 the creation of additional nodes could fail due to a lack of required SANs.
 {{< /caution >}}
 
-1. Then on each joining control plane node you have to run the following script before running `kubeadm join`:
+1. Then on each joining control plane node you have to run the following script before running `kubeadm join`.
+   This script will move the previously copied certificates from the home directory to `/etc/kubernetes/pki`:
 
   ```sh
   USER=ubuntu # customizable

From 5eea3234b4c75561dbd7cc1b93340f7df2a94b3e Mon Sep 17 00:00:00 2001
From: "Lubomir I. Ivanov"
Date: Sat, 16 Mar 2019 05:16:40 +0200
Subject: [PATCH 4/4] kubeadm: apply latest amends to the HA doc for 1.14

---
 .../docs/setup/independent/high-availability.md | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/content/en/docs/setup/independent/high-availability.md b/content/en/docs/setup/independent/high-availability.md
index ca7c6e3838a97..38f0425dfc530 100644
--- a/content/en/docs/setup/independent/high-availability.md
+++ b/content/en/docs/setup/independent/high-availability.md
@@ -24,7 +24,7 @@ simplified in future versions.
You might encounter issues with upgrading your cl We encourage you to try either approach, and provide us with feedback in the kubeadm [issue tracker](https://github.com/kubernetes/kubeadm/issues/new). -See also [The HA upgrade documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-ha-1-14). +See also [The upgrade documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14). {{< caution >}} This page does not address running your cluster on a cloud provider. In a cloud @@ -151,21 +151,26 @@ the `networking` object of `ClusterConfiguration`. - Copy this output to a text file. You will need it later to join control plane and worker nodes to the cluster. - When `--experimental-upload-certs` is used with `kubeadm init`, the certificates of the primary control plane are encrypted and uploaded in the `kubeadm-certs` Secret. - - Please note that the `kubeadm-certs` Secret and decryption key expire after two hours. To re-upload the certificates - and generate a new decryption key, use the following command on a control plane that is already joined the cluster: + - To re-upload the certificates and generate a new decryption key, use the following command on a control plane + node that is already joined to the cluster: ```sh sudo kubeadm init phase upload-certs --experimental-upload-certs ``` +{{< note >}} +The `kubeadm-certs` Secret and decryption key expire after two hours. +{{< /note >}} + {{< caution >}} -As stated in the command output, please note that the certificate-key gives access to cluster sensitive data, keep it secret! +As stated in the command output, the certificate-key gives access to cluster sensitive data, keep it secret! {{< /caution >}} 1. Apply the CNI plugin of your choice: [Follow these instructions](/docs/setup/independent/create-cluster-kubeadm/#pod-network) to install - the pod network. Make sure this corresponds to whichever pod CIDR you provided in the kubeadm configuration file. + the CNI provider. Make sure the configuration corresponds to the Pod CIDR specified in the kubeadm + configuration file if applicable. In this example we are using Weave Net: