Updated kubevirt docs #2318
Conversation
@davidvossel could you take a look?
✅ Deploy Preview for hypershift-docs ready!
Force-pushed from 1e12240 to d39a0b8.
```diff
     --memory $MEM \
-    --cores $CPU
+    --cores $CPU \
+    #--release-image quay.io/openshift-release-dev/ocp-release:$OCP_RELEASE \
```
i'd rather omit the release-image from the example command. we want hypershift cli to pick the best default for us. if someone is using the latest hypershift operator, the default will be accurate
I will remove it and add a note stating that this flag could be used to provision the cluster with a specific OCP release. Let me know if this is good enough or if we want to completely omit this information.
From my point of view it is useful for testing. Let's say you're running a hypershift version that supports deploying 4.11 to 4.13 hosted clusters: being able to specify a given release may be helpful when troubleshooting issues, reproducing a bug, testing something in a specific release, etc.
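As a sketch of that troubleshooting use case: pinning a release could look like the following. All values here (cluster name, replica count, release tag) are illustrative placeholders, not taken from this PR, and the command is only echoed, since a real invocation needs a live management cluster plus additional flags such as a pull secret.

```shell
# Hypothetical release tag used when reproducing a bug in a specific OCP version.
OCP_RELEASE="4.12.10-x86_64"

# Build the create command with the optional --release-image pin.
# Echoed rather than executed: running it requires a management cluster.
CMD="hypershift create cluster kubevirt \
  --name example \
  --node-pool-replicas 2 \
  --memory 6Gi \
  --cores 2 \
  --release-image quay.io/openshift-release-dev/ocp-release:${OCP_RELEASE}"
echo "$CMD"
```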
```shell
# You may want to run this instead
hypershift install --hypershift-image quay.io/hypershift/hypershift-operator:4.12
```
i think this has the potential to lead to more confusion. if someone uses a hypershift operator pinned to a specific release (and not the latest one) then we have to start talking about release payload versioning and other stuff, which isn't well defined at the moment. i'd rather we stick to defaults for the documentation examples.
Got it, will remove that from the doc.
> In order for the hosted cluster to complete its deployment, we need to fix the Ingress. We will do that in the next sections.
the default settings are meant to avoid people needing to consider ingress
Okay, will reorganize the document so the default example doesn't require ingress.
```shell
export MEM="6Gi"
export CPU="2"
export WORKER_COUNT="2"
export BASE_DOMAIN=hypershift.lab
```
We don't want to specify a base domain by default. That's the advanced case that requires someone to set up DNS entries.
Changed this.
4. Create a ClusterIP service and an OCP Route in the management cluster to provide ingress to the hosted cluster. (One limitation is that we can only expose one of the router ports; in this case we will expose the HTTPS one.)

```shell
export HTTPS_NODEPORT=$(oc --kubeconfig $CLUSTER_NAME-kubeconfig get services -n openshift-ingress router-nodeport-default -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
cat << EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  labels:
    app: $CLUSTER_NAME
  name: $CLUSTER_NAME-apps
  namespace: clusters-$CLUSTER_NAME
spec:
  ports:
  - name: https-443
    port: 443
    protocol: TCP
    targetPort: ${HTTPS_NODEPORT}
  selector:
    kubevirt.io: virt-launcher
  type: ClusterIP
EOF
cat << EOF | oc apply -f -
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: $CLUSTER_NAME-443
  namespace: clusters-$CLUSTER_NAME
spec:
  host: data.apps.$CLUSTER_NAME.$BASE_DOMAIN
  wildcardPolicy: Subdomain
  tls:
    termination: passthrough
    insecureEdgeTerminationPolicy: Redirect
  port:
    targetPort: https-443
  to:
    kind: Service
    name: $CLUSTER_NAME-apps
    weight: 100
EOF
```

## Checking HostedCluster status after fixing the ingress

Now that we have fixed the ingress, we should see the HostedCluster progress move from `Partial` to `Completed`.
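The `jsonpath` filter in that snippet selects the nodePort of the port named `https`. As an offline illustration of what it matches (the Service JSON and port numbers below are made up, not from a real cluster), the same selection can be reproduced with `jq`:

```shell
# Fake Service object mimicking the shape of router-nodeport-default.
cat <<'EOF' > /tmp/sample-service.json
{
  "spec": {
    "ports": [
      {"name": "http",  "nodePort": 30080},
      {"name": "https", "nodePort": 30443}
    ]
  }
}
EOF

# jq equivalent of jsonpath '{.spec.ports[?(@.name=="https")].nodePort}'
jq -r '.spec.ports[] | select(.name=="https") | .nodePort' /tmp/sample-service.json
# → 30443
```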
This is done automatically now by the latest hypershift operator, as long as guest clusters are using 4.12.2 or greater.
Correct, removed.
FYI @lahinson

@davidvossel After these updates are approved and merged, should I add this info to the content we're working on in stolostron/rhacm-docs#4803?

hey @davidvossel I modified the doc, could you review again? Thanks.

/area hypershift-operator
…e annotation

bee450b (ci-operator: expose ephemeral cluster versions based on parents, 2020-08-11, openshift#1098) taught assembleReleaseStep to consume the release.openshift.io/config annotation on an ImageStream to find a version prefix that actually reflects the 4.y release branch (instead of using 0.0.1-0 as the prefix). Those annotations are set on the app.ci ImageStreams, for example:

```shell
$ oc whoami -c default/api-ci-l2s4-p1-openshiftapps-com:6443/wking
$ oc -n ocp get -o json imagestream 4.13 | jq -r '.metadata.annotations["release.openshift.io/config"] | fromjson | .name'
4.13.0-0.ci
```

But they are not set in ImageStreams contained within release images:

```shell
$ oc adm release info -o json registry.ci.openshift.org/ocp/release:4.13.0-0.ci-2023-03-29-224346 | jq '.references | {kind, metadata}'
{
  "kind": "ImageStream",
  "metadata": {
    "name": "4.13.0-0.ci-2023-03-29-224346",
    "creationTimestamp": "2023-03-29T22:52:54Z",
    "annotations": {
      "release.openshift.io/from-image-stream": "ocp/4.13-2023-03-29-224346"
    }
  }
}
```

With this commit, I'm taking the name out of the imported release ImageStream and trying to parse it as a Semantic Version. If it parses, I'm constructing a release.openshift.io/config annotation to set just the 'name' property to the MAJOR.MINOR.PATCH from that parsed name. This should allow cluster-bot runs like:

```
launch 4.13.0-0.ci,openshift/cluster-version-operator#918,openshift/hypershift#2318 hypershift-hosted
```

to build releases named 4.13.0-0-... instead of their current 0.0.1-0-...:

```shell
$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/release-openshift-origin-installer-launch-hypershift-hosted/1641294936391290880/artifacts/release/artifacts/release-payload-latest/image-references | jq -r .metadata.name
0.0.1-0.test-2023-03-30-043404-ci-ln-h9dwcbk-latest
```

which is failing to run with [1]:

```
Release image is not valid: {
  "lastTransitionTime": "2023-03-30T04:36:29Z",
  "message": "releases before 4.8 are not supported",
  "observedGeneration": 3,
  "reason": "InvalidImage",
  "status": "False",
  "type": "ValidReleaseImage"
}
```

[1]: https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-origin-installer-launch-hypershift-hosted/1641294936391290880#1:build-log.txt%3A203-210
Force-pushed from 1ad115e to 5596268.
jparrill
left a comment
/lgtm
~~~sh
podman cp $(podman create --name hypershift --rm --pull always quay.io/hypershift/hypershift-operator:latest):/usr/bin/hypershift /tmp/hypershift && podman rm -f hypershift

sudo install -m 0755 -o root -g root /tmp/hypershift /usr/local/bin/hypershift
~~~
> A load balancer is required. If MetalLB is in use, here are some example steps outlining how to configure MetalLB after [installing MetalLB](https://docs.openshift.com/container-platform/4.12/networking/metallb/metallb-operator-install.html).
>
> If we access the cluster we will see we have two nodes and that there are some cluster operators failing because Ingress has not been properly configured.
s/If we access the cluster/If we access the Hosted Cluster/
```shell
  namespace: metallb-system
EOF
```
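The fragment above is the tail end of a MetalLB resource. For context, a minimal MetalLB address pool plus L2 advertisement typically looks like the sketch below; the address range is a placeholder for IPs routable in your lab network, and in practice you would `oc apply -f` the file on the management cluster:

```shell
# Hedged sketch: minimal MetalLB IPAddressPool + L2Advertisement.
# Addresses are illustrative; apply with: oc apply -f /tmp/metallb-pool.yaml
cat << 'EOF' > /tmp/metallb-pool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb
  namespace: metallb-system
spec:
  addresses:
  - 192.168.216.32-192.168.216.122
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - metallb
EOF
echo "wrote $(grep -c '^kind:' /tmp/metallb-pool.yaml) resources"
# → wrote 2 resources
```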
```shell
oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes
```
Could you clarify in the var name, which cluster are you referring to?
This var is configured at the beginning of the scenario and does not change.
Force-pushed from 75b47fb to a16a47d.
I just squashed all three commits. I think this is now ready for a final review. Thanks.
jparrill
left a comment
/lgtm
davidvossel
left a comment
Looks great, I had a couple of suggestions.
```shell
podman run --rm --privileged -it -v \
$PWD:/output docker.io/library/golang:1.18 /bin/bash -c \
'git clone https://github.com/openshift/hypershift.git && \
cd hypershift/ && \
make hypershift && \
mv bin/hypershift /output/hypershift'

sudo mv $PWD/hypershift /usr/local/bin
```

=== "Podman"

    ```shell
    podman run --rm --privileged -it -v \
    $PWD:/output docker.io/library/golang:1.18 /bin/bash -c \
    'git clone https://github.com/openshift/hypershift.git && \
    cd hypershift/ && \
    make hypershift && \
    mv bin/hypershift /output/hypershift'

    sudo install -m 0755 -o root -g root $PWD/hypershift /usr/local/bin/hypershift
    rm $PWD/hypershift
    ```

=== "Docker"

    ```shell
    docker run --rm --privileged -it -v \
    $PWD:/output docker.io/library/golang:1.18 /bin/bash -c \
    'git clone https://github.com/openshift/hypershift.git && \
    cd hypershift/ && \
    make hypershift && \
    mv bin/hypershift /output/hypershift'

    sudo install -m 0755 -o root -g root $PWD/hypershift /usr/local/bin/hypershift
    rm $PWD/hypershift
    ```
the podman and docker command are identical except for the use of podman or docker as the binary, right? If so I'd suggest documenting this just with podman, and saying the command is the same for docker.
Added note.
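One hedged way to express "same command for docker" in a doc note is a small engine-detection variable; the `CONTAINER_ENGINE` name below is our own convention, not something from the PR:

```shell
# Pick whichever container engine is installed, preferring podman.
CONTAINER_ENGINE=$(command -v podman || command -v docker)

# The build command is then identical for both engines, e.g.:
#   $CONTAINER_ENGINE run --rm --privileged -it -v $PWD:/output \
#       docker.io/library/golang:1.18 /bin/bash -c '...'
echo "Using: ${CONTAINER_ENGINE:-none found}"
```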
```diff
-export CLUSTER_NAME=example
+export HOSTEDCLUSTER_NAME=example
```
The reason we use CLUSTER_NAME instead of HOSTEDCLUSTER_NAME is that CLUSTER_NAME is consistent with documentation for all the other hypershift platforms. I'd prefer to remain consistent. So if we're going to change this env var, i'd want to change it across the board.
Changed back to CLUSTER_NAME.
## Advanced Ingress and DNS

This process involves three steps.

Every OpenShift cluster comes set up with a default application ingress controller, which is expected to have an external DNS record associated with it. For example, if a HyperShift cluster named `example` with the base domain `hypershift.lab` is created, then it is expected that the wildcard domain `*.apps.example.hypershift.lab` can be resolved.

**Step 1. Cluster Creation**

Create a Hypershift KubeVirt cluster with a custom base domain you control. This can be achieved using the `--base-domain` cli argument during cluster creation. There are two ways to set this up:

* Default: Reuse the management cluster's wildcard DNS routing and make the Hypershift KubeVirt cluster a subdomain of the management cluster. This is what we have seen in the previous section.
* Advanced: Set up a new LoadBalancer service and wildcard DNS record for the HostedCluster `*.apps` domain.

Below is an example.

### Deploying the HostedCluster specifying our base domain
Previously we had a Default Ingress and DNS Behavior section which clearly outlined how ingress works out of the box. then we have a Customized Ingress and DNS Behavior which went into the details around how someone can set up their own basedomain. I'd like to keep those two sections.
Can the Default Ingress and DNS Behavior be re-introduced?
Reintroduced and reworded some sections to match with the 3-steps proposed by the original doc:
- Cluster creation
- LoadBalancer creation
- Wildcard DNS configuration
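A hedged sketch of the LoadBalancer-creation step in that list: the Service below mirrors the names, ports, and `virt-launcher` selector of the ClusterIP example earlier in this PR, but `CLUSTER_NAME` and `HTTPS_NODEPORT` defaults are placeholders, and the manifest is only written to a file here (in practice you would `oc apply` it on the management cluster):

```shell
# Placeholder values; in the real flow these are exported earlier.
CLUSTER_NAME=${CLUSTER_NAME:-example}
HTTPS_NODEPORT=${HTTPS_NODEPORT:-30443}

cat << EOF > /tmp/${CLUSTER_NAME}-apps-lb.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: $CLUSTER_NAME
  name: $CLUSTER_NAME-apps
  namespace: clusters-$CLUSTER_NAME
spec:
  ports:
  - name: https-443
    port: 443
    protocol: TCP
    targetPort: $HTTPS_NODEPORT
  selector:
    kubevirt.io: virt-launcher
  type: LoadBalancer
EOF

# Wildcard DNS step: point *.apps.$CLUSTER_NAME.$BASE_DOMAIN at the
# EXTERNAL-IP the load balancer assigns (visible via `oc get svc`).
echo "manifest written: /tmp/${CLUSTER_NAME}-apps-lb.yaml"
```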
@lahinson yes, i think it would make sense to update the official docs once these changes land. These changes definitely add further clarity to the procedures.
Signed-off-by: Mario Vazquez <mavazque@redhat.com>

Incorporated feedback

Signed-off-by: Mario Vazquez <mavazque@redhat.com>

Added Juan P. feedback

Signed-off-by: Mario Vazquez <mavazque@redhat.com>
Force-pushed from a16a47d to 51ba765.
@davidvossel I just pushed some changes addressing your suggestions. Ready for review when you're back.
davidvossel
left a comment
/lgtm
/approve
this is great! thanks for improving the docs
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: davidvossel, mvazquezc

@mvazquezc: all tests passed!


What this PR does / why we need it:
It updates the kubevirt configuration as of the latest release (03/23/2023).
Checklist