diff --git a/docs/content/how-to/kubevirt/create-kubevirt-cluster.md b/docs/content/how-to/kubevirt/create-kubevirt-cluster.md
index 2313c50c604..ba42c6840f6 100644
--- a/docs/content/how-to/kubevirt/create-kubevirt-cluster.md
+++ b/docs/content/how-to/kubevirt/create-kubevirt-cluster.md
@@ -18,20 +18,28 @@ Install a nested OCP cluster running on VMs within a management OCP cluster

Before creating a guest cluster, the Hypershift operator and hypershift cli
tool must be installed.

+### Build the HyperShift CLI
+
The command below builds the latest hypershift cli tool from source and places
the cli tool within the /usr/local/bin directory.

+!!! note
+
+    The command below works the same with docker; simply substitute `docker` for `podman`.
+
```shell
podman run --rm --privileged -it -v \
$PWD:/output docker.io/library/golang:1.18 /bin/bash -c \
'git clone https://github.com/openshift/hypershift.git && \
cd hypershift/ && \
make hypershift && \
mv bin/hypershift /output/hypershift'

-sudo mv $PWD/hypershift /usr/local/bin
+sudo install -m 0755 -o root -g root $PWD/hypershift /usr/local/bin/hypershift
+rm $PWD/hypershift
```

+## Deploy the HyperShift Operator
+
Use the hypershift cli tool to install the HyperShift operator into the
@@ -39,6 +47,16 @@ management cluster.

```shell
hypershift install
```

+You will see the operator running in the `hypershift` namespace:
+
+```sh
+oc -n hypershift get pods
+
+NAME                        READY   STATUS    RESTARTS   AGE
+operator-755d587f44-lrtrq   1/1     Running   0          114s
+operator-755d587f44-qj6pz   1/1     Running   0          114s
+```
+
## Create a HostedCluster

Once all the [prerequisites](#prerequisites) are met, and the HyperShift
@@ -47,6 +65,11 @@ operator is installed, it is now possible to create a guest cluster.

Below is an example of how to create a guest cluster using environment
variables and the `hypershift` cli tool.

+!!! note
+
+    The `--release-image` flag can be used to provision the HostedCluster with a specific OpenShift release (each version of the hypershift operator has a support matrix of OpenShift releases it can deploy).
+
```shell linenums="1"
export CLUSTER_NAME=example
export PULL_SECRET="$HOME/pull-secret"
export MEM="6Gi"
export CPU="2"
@@ -56,7 +79,7 @@ export WORKER_COUNT="2"

hypershift create cluster kubevirt \
--name $CLUSTER_NAME \
---node-pool-replicas=$WORKER_COUNT \
+--node-pool-replicas $WORKER_COUNT \
--pull-secret $PULL_SECRET \
--memory $MEM \
--cores $CPU
@@ -67,17 +90,37 @@ hypershift create cluster kubevirt \
```

A default NodePool will be created for the cluster with 2 VM worker replicas
per the `--node-pool-replicas` flag.

+After a few moments we should see our hosted control plane pods up and running:
+
+```sh
+oc -n clusters-$CLUSTER_NAME get pods
+
+NAME                                           READY   STATUS    RESTARTS   AGE
+capi-provider-5cc7b74f47-n5gkr                 1/1     Running   0          3m
+catalog-operator-5f799567b7-fd6jw              2/2     Running   0          69s
+certified-operators-catalog-784b9899f9-mrp6p   1/1     Running   0          66s
+cluster-api-6bbc867966-l4dwl                   1/1     Running   0          66s
+.
+.
+.
+redhat-operators-catalog-9d5fd4d44-z8qqk       1/1     Running   0          66s
+```
+
A guest cluster backed by KubeVirt virtual machines typically takes around
10-15 minutes to fully provision.

The status of the guest cluster can be seen by viewing the corresponding
HostedCluster resource.
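+Rather than polling manually, you can also block until the control plane
+reports ready. A minimal sketch using `oc wait` against the HostedCluster's
+`Available` condition (adjust the timeout to taste):
+
+```shell
+# Wait up to 30 minutes for the hosted control plane to report Available.
+oc wait --namespace clusters hostedcluster/$CLUSTER_NAME \
+  --for=condition=Available --timeout=30m
+```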
For example, the output below reflects what a fully provisioned HostedCluster
object looks like.

```shell linenums="1"
oc get --namespace clusters hostedclusters
-NAMESPACE   NAME      VERSION   KUBECONFIG                 PROGRESS    AVAILABLE   PROGRESSING   MESSAGE
-clusters    example   4.12.2    example-admin-kubeconfig   Completed   True        False         The hosted control plane is available
+
+NAME      VERSION   KUBECONFIG                 PROGRESS    AVAILABLE   PROGRESSING   MESSAGE
+example   4.12.7    example-admin-kubeconfig   Completed   True        False         The hosted control plane is available
```

+## Accessing the HostedCluster
+
CLI access to the guest cluster is gained by retrieving the guest cluster's
kubeconfig. Below is an example of how to retrieve the guest cluster's
@@ -86,56 +129,114 @@ kubeconfig using the hypershift cli.

```shell
hypershift create kubeconfig --name $CLUSTER_NAME > $CLUSTER_NAME-kubeconfig
```

-## Delete a HostedCluster
+If we access the cluster, we will see that we have two nodes.

-To delete a HostedCluster:
+```shell
+oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes
+
+NAME            STATUS   ROLES    AGE   VERSION
+example-n6prw   Ready    worker   32m   v1.25.4+18eadca
+example-nc6g4   Ready    worker   32m   v1.25.4+18eadca
+```
+
+We can also check the ClusterVersion:

```shell
-hypershift destroy cluster kubevirt --name $CLUSTER_NAME
+oc --kubeconfig $CLUSTER_NAME-kubeconfig get clusterversion
+
+NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
+version   4.12.7    True        False         5m39s   Cluster version is 4.12.7
```

-## Optional MetalLB Configuration Steps
+## Scaling an existing NodePool

-A load balancer is required. If MetalLB is in use, here are some example steps
-outlining how to configure MetalLB after [installing MetalLB](https://docs.openshift.com/container-platform/4.12/networking/metallb/metallb-operator-install.html).
+Manually scale a NodePool using the `oc scale` command:
+
+```shell linenums="1"
+NODEPOOL_NAME=$CLUSTER_NAME
+NODEPOOL_REPLICAS=5
+
+oc scale nodepool/$NODEPOOL_NAME --namespace clusters --replicas=$NODEPOOL_REPLICAS
+```
+
+After a while, this is what we will see in our hosted cluster:
+
+```shell
+oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes
+
+NAME            STATUS   ROLES    AGE     VERSION
+example-9jvnf   Ready    worker   97s     v1.25.4+18eadca
+example-n6prw   Ready    worker   116m    v1.25.4+18eadca
+example-nc6g4   Ready    worker   117m    v1.25.4+18eadca
+example-thp29   Ready    worker   4m17s   v1.25.4+18eadca
+example-twxns   Ready    worker   88s     v1.25.4+18eadca
+```
+
+## Adding Additional NodePools
+
+Create additional NodePools for a guest cluster by specifying a name, number of
+replicas, and any additional information such as memory and CPU requirements.
+
+For example, let's create a NodePool with more CPUs assigned to the VMs (4 vs 2):
+
+```shell linenums="1"
+export NODEPOOL_NAME=$CLUSTER_NAME-extra-cpu
+export WORKER_COUNT="2"
+export MEM="6Gi"
+export CPU="4"
+export DISK="16"
+
+hypershift create nodepool kubevirt \
+  --cluster-name $CLUSTER_NAME \
+  --name $NODEPOOL_NAME \
+  --node-count $WORKER_COUNT \
+  --memory $MEM \
+  --cores $CPU \
+  --root-volume-size $DISK
+```
+
+Check the status of the NodePool by listing `nodepool` resources in the
+`clusters` namespace:
+
+```shell
+oc get nodepools --namespace clusters
+
+NAME                CLUSTER   DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
+example             example   5               5               False         False        4.12.7
+example-extra-cpu   example   2                               False         False                  True              True             Minimum availability requires 2 replicas, current 0 available
+```
+
+After a while, this is what we will see in our hosted cluster:
+
+```shell
+oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes
+
+NAME                      STATUS   ROLES    AGE    VERSION
+example-9jvnf             Ready    worker   22m    v1.25.4+18eadca
+example-n6prw             Ready    worker   137m   v1.25.4+18eadca
+example-nc6g4             Ready    worker   138m   v1.25.4+18eadca
+example-thp29             Ready    worker   25m    v1.25.4+18eadca
+example-twxns             Ready    worker   22m    v1.25.4+18eadca
+example-extra-cpu-nkwsw   Ready    worker   2m8s   v1.25.4+18eadca
+example-extra-cpu-vsfmq   Ready    worker   2m5s   v1.25.4+18eadca
+```
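+Each of these worker nodes is backed by a KubeVirt virtual machine running on
+the management cluster. As a quick sanity check, you can list the
+VirtualMachineInstances in the hosted control plane namespace (assuming the
+default `clusters-<cluster_name>` namespace layout used throughout this guide):
+
+```shell
+# Each guest cluster node corresponds to a VirtualMachineInstance (vmi)
+# in the hosted control plane namespace on the management cluster.
+oc -n clusters-$CLUSTER_NAME get vmi
+```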
-**Step 1. Create a MetalLB instance**
-
-```shell
-oc create -f - <<EOF
-apiVersion: metallb.io/v1beta1
-kind: MetalLB
-metadata:
-  name: metallb
-  namespace: metallb-system
-EOF
-```
-
-**Step 2. Create a LoadBalancer Service**
-
-Create a LoadBalancer Service to route ingress traffic to the KubeVirt VMs
-that are acting as nodes for the guest cluster.
+## Create a HostedCluster with a custom base domain
+
+If we create the HostedCluster specifying a custom base domain, we become
+responsible for providing the load balancer and DNS records that ingress
+traffic requires.
+
+### Step 1 - Create the HostedCluster specifying the base domain
+
+```shell linenums="1"
+export CLUSTER_NAME=example
+export PULL_SECRET="$HOME/pull-secret"
+export MEM="6Gi"
+export CPU="2"
+export WORKER_COUNT="2"
+export BASE_DOMAIN=hypershift.lab
+
+hypershift create cluster kubevirt \
+--name $CLUSTER_NAME \
+--node-pool-replicas $WORKER_COUNT \
+--pull-secret $PULL_SECRET \
+--memory $MEM \
+--cores $CPU \
+--base-domain $BASE_DOMAIN
+```
+
+This time, the HostedCluster will not finish the deployment (it will remain in
+`Partial` progress), unlike in the previous section. Since we have configured a
+base domain, we need to make sure that the required DNS records and load
+balancer are in place:

-The guest cluster must be inspected in order to learn what port to use as the
-target port when routing to the KubeVirt VMs. The target port can be discovered
-by using the kubeconfig for the new Hypershift KubeVirt cluster to retrieve the
-default router's NodePort service.
+```shell linenums="1"
+oc get --namespace clusters hostedclusters
+
+NAME      VERSION   KUBECONFIG                 PROGRESS   AVAILABLE   PROGRESSING   MESSAGE
+example             example-admin-kubeconfig   Partial    True        False         The hosted control plane is available
+```

-Below is a combination of cli commands that will automatically detect the target
-port of the guest cluster and store it in an environment variable.
+If we access the HostedCluster, this is what we will see:

```shell
hypershift create kubeconfig --name $CLUSTER_NAME > $CLUSTER_NAME-kubeconfig
-export HTTPS_NODEPORT=$(oc --kubeconfig $CLUSTER_NAME-kubeconfig get services -n openshift-ingress router-nodeport-default -o wide --no-headers | sed -E 's|.*443:(.....).*$|\1|' | tr -d '[:space:]')
```

-After the target port is discovered, create a LoadBalancer service to route
-traffic to the guest cluster's KubeVirt VMs.
-
-```shell linenums="1"
-export CLUSTER_NAME=example
-export CLUSTER_NAMESPACE=clusters-${CLUSTER_NAME}
-
-cat << EOF > apps-LB-service.yaml
-apiVersion: v1
-kind: Service
-metadata:
-  labels:
-    app: ${CLUSTER_NAME}
-  name: ${CLUSTER_NAME}
-  namespace: ${CLUSTER_NAMESPACE}
-spec:
-  ports:
-  - name: https-443
-    port: 443
-    protocol: TCP
-    targetPort: ${HTTPS_NODEPORT}
-  selector:
-    kubevirt.io: virt-launcher
-  type: LoadBalancer
-EOF
-
-oc create -f apps-LB-service.yaml
+```shell
+oc --kubeconfig $CLUSTER_NAME-kubeconfig get co
+
+NAME      VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
+console   4.12.7    False       False         False      30m     RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.example.hypershift.lab): Get "https://console-openshift-console.apps.example.hypershift.lab": dial tcp: lookup console-openshift-console.apps.example.hypershift.lab on 172.31.0.10:53: no such host
+.
+.
+.
+ingress   4.12.7    True        False         True       28m     The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing)
```

-**Step 3. Configure Wildcard DNS**
+In the next section we will fix that.

-Once the LoadBalancer is created, configure a wildcard DNS A record or CNAME
-that references the LoadBalancer service's external IP.
+### Step 2 - Set up the LoadBalancer

-Get the LoadBalancer's external IP.
-```shell
-export EXTERNAL_IP=$(oc get service -n $KUBEVIRT_CLUSTER_NAMESPACE $KUBEVIRT_CLUSTER_NAME | grep $KUBEVIRT_CLUSTER_NAME | awk '{ print $4 }' | tr -d '[:space:]')
-```
+!!! note

-Configure a wildcard `*.apps.<cluster_name>.<base_domain>.` DNS entry
-referencing the IP stored in the $EXTERNAL_IP environment variable that is
-routable both internally and externally of the cluster.
+    If your cluster is on bare metal, you may need MetalLB to provision
+    functional LoadBalancer services. Take a look at the
+    [Optional MetalLB Configuration Steps](#optional-metallb-configuration-steps) section.

-## Scaling an existing NodePool
+This option requires configuring a new LoadBalancer service that routes to the
+KubeVirt VMs, as well as assigning a wildcard DNS entry that points to the
+LoadBalancer's IP address.

-Manually scale a NodePool using the `oc scale` command:
+First, we need to create a LoadBalancer Service that routes ingress traffic to
+the KubeVirt VMs.

-```shell linenums="1"
-NODEPOOL_NAME=${CLUSTER_NAME}-work
-NODEPOOL_REPLICAS=5
-
-oc scale nodepool/$NODEPOOL_NAME --namespace clusters --replicas=$NODEPOOL_REPLICAS
-```
+A NodePort Service exposing the HostedCluster ingress already exists, so we
+will grab the NodePorts and create the LoadBalancer Service targeting these
+ports.

-## Adding Additional NodePools
+1. Grab the NodePorts:
+
+    ```sh
+    export HTTP_NODEPORT=$(oc --kubeconfig $CLUSTER_NAME-kubeconfig get services -n openshift-ingress router-nodeport-default -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')
+    export HTTPS_NODEPORT=$(oc --kubeconfig $CLUSTER_NAME-kubeconfig get services -n openshift-ingress router-nodeport-default -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
+    ```

-Create additional NodePools for a guest cluster by specifying a name, number of
-replicas, and any additional information such as memory and cpu requirements.
+2. Create the LoadBalancer Service:

-Create a NodePool:
+    ```sh
+    cat << EOF | oc apply -f -
+    apiVersion: v1
+    kind: Service
+    metadata:
+      labels:
+        app: $CLUSTER_NAME
+      name: $CLUSTER_NAME-apps
+      namespace: clusters-$CLUSTER_NAME
+    spec:
+      ports:
+      - name: https-443
+        port: 443
+        protocol: TCP
+        targetPort: ${HTTPS_NODEPORT}
+      - name: http-80
+        port: 80
+        protocol: TCP
+        targetPort: ${HTTP_NODEPORT}
+      selector:
+        kubevirt.io: virt-launcher
+      type: LoadBalancer
+    EOF
+    ```

-```shell linenums="1"
-export NODEPOOL_NAME=${CLUSTER_NAME}-workers
-export WORKER_COUNT="2"
-export MEM="6Gi"
-export CPU="2"
-
-hypershift create nodepool kubevirt \
-  --cluster-name $CLUSTER_NAME \
-  --name $NODEPOOL_NAME \
-  --node-count $WORKER_COUNT \
-  --memory $MEM \
-  --cores $CPU
-```
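+Before moving on, it is worth verifying that the Service obtained an external
+IP from your load balancer provider; the `EXTERNAL-IP` column stays in
+`<pending>` state until one is assigned:
+
+```shell
+# Check that the LoadBalancer Service created above received an external IP.
+oc -n clusters-$CLUSTER_NAME get service $CLUSTER_NAME-apps
+```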
+### Step 3 - Set up a wildcard DNS record for `*.apps`
+
+Now that we have the ingress exposed, the next step is to configure a wildcard
+DNS A record or CNAME that references the LoadBalancer Service's external IP.
+
+1. Get the external IP:
+
+    ```shell
+    export EXTERNAL_IP=$(oc -n clusters-$CLUSTER_NAME get service $CLUSTER_NAME-apps -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+    ```
+
+2. Configure a wildcard `*.apps.<cluster_name>.<base_domain>.` DNS entry
+   referencing the IP stored in `$EXTERNAL_IP` that is routable both internally
+   and externally of the cluster.
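+If you manage DNS with dnsmasq, a single configuration line is enough to create
+such a wildcard record; the configuration file path below is only an example:
+
+```shell
+# Map *.apps.example.hypershift.lab (and the apex) to the LoadBalancer IP.
+echo "address=/apps.$CLUSTER_NAME.$BASE_DOMAIN/$EXTERNAL_IP" | \
+  sudo tee /etc/dnsmasq.d/hypershift-apps.conf
+sudo systemctl restart dnsmasq
+```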
+With the record in place, for an external IP value of `192.168.20.30`, DNS
+resolution for the cluster used in this example will look like this:
+
+```sh
+dig +short test.apps.example.hypershift.lab
+
+192.168.20.30
+```
+
+### Checking HostedCluster status after fixing the ingress
+
+Now that we have fixed the ingress, the HostedCluster progress should move from
+`Partial` to `Completed`:
+
+```shell linenums="1"
+oc get --namespace clusters hostedclusters
+
+NAME      VERSION   KUBECONFIG                 PROGRESS    AVAILABLE   PROGRESSING   MESSAGE
+example   4.12.7    example-admin-kubeconfig   Completed   True        False         The hosted control plane is available
+```
+
+## Optional MetalLB Configuration Steps
+
+LoadBalancer type services are required. If MetalLB is in use, here are some
+example steps outlining how to configure MetalLB after
+[installing MetalLB](https://docs.openshift.com/container-platform/4.12/networking/metallb/metallb-operator-install.html).
+
+1. Create a MetalLB instance:
+
+    ```shell
+    oc create -f - <<EOF
+    apiVersion: metallb.io/v1beta1
+    kind: MetalLB
+    metadata:
+      name: metallb
+      namespace: metallb-system
+    EOF
+    ```
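+2. Create an `IPAddressPool` with a range of IP addresses routable from the
+   node network. The resource name and the address range below are example
+   values; adjust them to your environment:
+
+    ```shell
+    oc create -f - <<EOF
+    apiVersion: metallb.io/v1beta1
+    kind: IPAddressPool
+    metadata:
+      name: metallb
+      namespace: metallb-system
+    spec:
+      addresses:
+      # Example range on the node network; must contain free IPs.
+      - 192.168.20.20-192.168.20.60
+    EOF
+    ```
+
+3. Advertise the address pool using the L2 protocol (again, the resource name
+   is only an example):
+
+    ```shell
+    oc create -f - <<EOF
+    apiVersion: metallb.io/v1beta1
+    kind: L2Advertisement
+    metadata:
+      name: l2advertisement
+      namespace: metallb-system
+    spec:
+      ipAddressPools:
+      # Must match the IPAddressPool name created above.
+      - metallb
+    EOF
+    ```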