7 changes: 6 additions & 1 deletion _topic_maps/_topic_map.yml
@@ -604,8 +604,13 @@ Topics:
- Name: Updating clusters overview
File: index
- Name: Understanding OpenShift updates
File: understanding-openshift-updates
Dir: understanding_updates
Distros: openshift-enterprise
Topics:
- Name: Introduction to OpenShift updates
File: intro-to-updates
- Name: How cluster updates work
File: how-updates-work
- Name: Understanding update channels and releases
File: understanding-upgrade-channels-release
Distros: openshift-enterprise
Binary file added images/update-runlevels.png
2 changes: 1 addition & 1 deletion modules/update-common-terms.adoc
@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * updating/understanding-openshift-updates.adoc
// * updating/understanding_updates/intro-to-updates.adoc

:_content-type: REFERENCE
[id="update-common-terms_{context}"]
114 changes: 114 additions & 0 deletions modules/update-evaluate-availability.adoc
@@ -0,0 +1,114 @@
// Module included in the following assemblies:
//
// * updating/understanding_updates/how-updates-work.adoc

:_content-type: CONCEPT
[id="update-evaluate-availability_{context}"]
= Evaluation of update availability

The Cluster Version Operator (CVO) periodically queries the OpenShift Update Service (OSUS) for the most recent data about update possibilities.
This data is based on the cluster's subscribed channel.
The CVO then saves information about update recommendations into either the `availableUpdates` or `conditionalUpdates` field of its `ClusterVersion` resource.

The CVO periodically checks the conditional updates for update risks.
These risks are conveyed through the data served by the OSUS, which contains information for each version about known issues that might affect a cluster updated to that version.
Most risks are limited to clusters with specific characteristics, such as clusters with a certain size or clusters that are deployed in a particular cloud platform.

The CVO continuously evaluates its cluster characteristics against the conditional risk information for each conditional update. If the CVO finds that the cluster matches the criteria, the CVO stores this information in the `conditionalUpdates` field of its `ClusterVersion` resource.
If the CVO finds that the cluster does not match the risks of an update, or that there are no risks associated with the update, it stores the target version in the `availableUpdates` field of its `ClusterVersion` resource.

The user interface, either the web console or the OpenShift CLI (`oc`), presents this information in sectioned headings to the administrator.
Each *supported but not recommended* update recommendation contains a link to further resources about the risk so that the administrator can make an informed decision about the update.

You can inspect all available updates with the following command:

[source,terminal]
----
$ oc adm upgrade --include-not-recommended
----

The additional `--include-not-recommended` parameter includes updates that are available but not recommended due to a known risk that applies to the cluster.

.Example output
[source,terminal]
----
Cluster version is 4.10.22

Upstream is unset, so the cluster will use an appropriate default.
Channel: fast-4.11 (available channels: candidate-4.10, candidate-4.11, eus-4.10, fast-4.10, fast-4.11, stable-4.10)

Recommended updates:

VERSION IMAGE
4.10.26 quay.io/openshift-release-dev/ocp-release@sha256:e1fa1f513068082d97d78be643c369398b0e6820afab708d26acda2262940954
4.10.25 quay.io/openshift-release-dev/ocp-release@sha256:ed84fb3fbe026b3bbb4a2637ddd874452ac49c6ead1e15675f257e28664879cc
4.10.24 quay.io/openshift-release-dev/ocp-release@sha256:aab51636460b5a9757b736a29bc92ada6e6e6282e46b06e6fd483063d590d62a
4.10.23 quay.io/openshift-release-dev/ocp-release@sha256:e40e49d722cb36a95fa1c03002942b967ccbd7d68de10e003f0baa69abad457b

Supported but not recommended updates:

Version: 4.11.0
Image: quay.io/openshift-release-dev/ocp-release@sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4
Recommended: False
Reason: RPMOSTreeTimeout
Message: Nodes with substantial numbers of containers and CPU contention may not reconcile machine configuration https://bugzilla.redhat.com/show_bug.cgi?id=2111817#c22
----

One way to inspect the underlying availability data created by the CVO is to query the `ClusterVersion` resource with the following command:

[source,terminal]
----
$ oc get clusterversion version -o json | jq '.status.availableUpdates'
----

.Example output
[source,terminal]
----
[
{
"channels": [
"candidate-4.11",
"candidate-4.12",
"fast-4.11",
"fast-4.12"
],
"image": "quay.io/openshift-release-dev/ocp-release@sha256:400267c7f4e61c6bfa0a59571467e8bd85c9188e442cbd820cc8263809be3775",
"url": "https://access.redhat.com/errata/RHBA-2023:3213",
"version": "4.11.41"
},
...
]
----

You can check conditional updates with a similar command:

[source,terminal]
----
$ oc get clusterversion version -o json | jq '.status.conditionalUpdates'
----

.Example output
[source,terminal]
----
[
{
"conditions": [
{
"lastTransitionTime": "2023-05-30T16:28:59Z",
"message": "The 4.11.36 release only resolves an installation issue https://issues.redhat.com//browse/OCPBUGS-11663 , which does not affect already running clusters. 4.11.36 does not include fixes delivered in recent 4.11.z releases and therefore upgrading from these versions would cause fixed bugs to reappear. Red Hat does not recommend upgrading clusters to 4.11.36 version for this reason. https://access.redhat.com/solutions/7007136",
"reason": "PatchesOlderRelease",
"status": "False",
"type": "Recommended"
}
],
"release": {
"channels": [...],
"image": "quay.io/openshift-release-dev/ocp-release@sha256:8c04176b771a62abd801fcda3e952633566c8b5ff177b93592e8e8d2d1f8471d",
"url": "https://access.redhat.com/errata/RHBA-2023:1733",
"version": "4.11.36"
},
"risks": [...]
},
...
]
----
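
To summarize only the recommendation status of each conditional update, you can filter the same data with `jq`. The following pipeline is a convenience sketch that assumes the field layout shown in the previous output:

[source,terminal]
----
$ oc get clusterversion version -o json | jq '[.status.conditionalUpdates[]? | {version: .release.version, recommended: (.conditions[] | select(.type == "Recommended") | .status)}]'
----

.Example output
[source,terminal]
----
[
  {
    "version": "4.11.36",
    "recommended": "False"
  },
  ...
]
----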
61 changes: 61 additions & 0 deletions modules/update-manifest-application.adoc
@@ -0,0 +1,61 @@
// Module included in the following assemblies:
//
// * updating/understanding_updates/how-updates-work.adoc

:_content-type: CONCEPT
[id="update-manifest-application_{context}"]
= Understanding how manifests are applied during an update

Some manifests supplied in a release image must be applied in a certain order because of the dependencies between them.
For example, the `CustomResourceDefinition` resource must be created before the matching custom resources.
Additionally, there is a logical order in which the individual cluster Operators must be updated to minimize disruption in the cluster.
The Cluster Version Operator (CVO) implements this logical order through the concept of Runlevels.

These dependencies are encoded in the filenames of the manifests in the release image:

[source,terminal]
----
0000_<runlevel>_<component>_<manifest-name>.yaml
----

For example:

[source,terminal]
----
0000_03_config-operator_01_proxy.crd.yaml
----

The CVO internally builds a dependency graph for the manifests, where the CVO obeys the following rules:

* During an update, manifests at a lower Runlevel are applied before those at a higher Runlevel.

* Within one Runlevel, manifests for different components can be applied in parallel.

* Within one Runlevel, manifests for a single component are applied in lexicographic order.

The CVO then applies manifests following the generated dependency graph.
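
For example, one way to see how the manifests in a release payload group into Runlevels is to extract the payload and list the distinct Runlevel prefixes. This is an illustrative sketch; the release image tag and the destination directory are only examples:

[source,terminal]
----
$ oc adm release extract --to=/tmp/manifests quay.io/openshift-release-dev/ocp-release:4.12.6-x86_64
$ ls /tmp/manifests | grep '^0000_' | cut -d_ -f2 | sort -u
----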

[NOTE]
====
For some resource types, the CVO monitors the resource after its manifest is applied, and considers it to be successfully updated only after the resource reaches a stable state.
Achieving this stable state can take some time.
This is especially true for cluster Operators, which might perform their own update actions in the cluster after the CVO deploys their new versions.
While the additional update actions take place, these cluster Operators temporarily set their `Progressing` condition to `True`.
====

The CVO waits until all cluster Operators in the Runlevel meet the following conditions before it proceeds to the next Runlevel:

* The cluster Operators have an `Available=True` condition.

* The cluster Operators have a `Degraded=False` condition.

* The cluster Operators declare they have achieved the desired version in their `ClusterOperator` resource.

Some actions can take significant time to finish. The CVO waits for the actions to complete in order to ensure the subsequent Runlevels can proceed safely.
The process of applying all manifests is expected to take 60 to 120 minutes in total; see *Understanding {product-title} update duration* for more information about factors that influence update duration.

image::update-runlevels.png[A diagram displaying the sequence of Runlevels and the manifests of components within each level]

In the previous example diagram, the CVO is waiting until all work is completed at Runlevel 20.
The CVO has applied all manifests for the Operators in the Runlevel, but the `kube-apiserver-operator` ClusterOperator performs some actions after its new version is deployed. The `kube-apiserver-operator` ClusterOperator declares this progress through the `Progressing=True` condition and by not declaring the new version as reconciled in its `status.versions`.
The CVO waits until the ClusterOperator reports an acceptable status, and then it starts applying manifests at Runlevel 25.
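
You can observe this kind of progress during an update by checking the conditions of the relevant ClusterOperator. The command is standard `oc` usage; the values in the following output are illustrative:

[source,terminal]
----
$ oc get clusteroperators kube-apiserver
----

.Example output
[source,terminal]
----
NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
kube-apiserver   4.10.22   True        True          False      5d2h
----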
59 changes: 59 additions & 0 deletions modules/update-mco-process.adoc
@@ -0,0 +1,59 @@
// Module included in the following assemblies:
//
// * updating/understanding_updates/how-updates-work.adoc

:_content-type: CONCEPT
[id="mco-update-process_{context}"]
= Understanding how the Machine Config Operator updates nodes
The Machine Config Operator (MCO) applies a new machine configuration to each control plane node and compute node. During the machine configuration update, control plane nodes and compute nodes are organized into their own machine config pools, where the pools of machines are updated in parallel. The `.spec.maxUnavailable` parameter, which has a default value of `1`, determines how many nodes in a machine config pool can simultaneously undergo the update process.
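
If your capacity allows more parallelism, you can raise `.spec.maxUnavailable` on a machine config pool. The following patch is only a sketch; `3` is an example value, and you should choose one that matches your disruption tolerance:

[source,terminal]
----
$ oc patch machineconfigpool/worker --type merge --patch '{"spec":{"maxUnavailable":3}}'
----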

When the machine configuration update process begins, the MCO checks the number of currently unavailable nodes in a pool. If there are fewer unavailable nodes than the value of `.spec.maxUnavailable`, the MCO initiates the following sequence of actions on available nodes in the pool:

. Cordon and drain the node
+
[NOTE]
====
When a node is cordoned, workloads cannot be scheduled to it.
====

. Update the system configuration and operating system (OS) of the node

. Reboot the node

. Uncordon the node

A node undergoing this process is unavailable until it is uncordoned and workloads can be scheduled to it again. The MCO begins updating nodes until the number of unavailable nodes is equal to the value of `.spec.maxUnavailable`.

As a node completes its update and becomes available, the number of unavailable nodes in the machine config pool is once again fewer than `.spec.maxUnavailable`. If there are remaining nodes that need to be updated, the MCO initiates the update process on a node until the `.spec.maxUnavailable` limit is once again reached. This process repeats until each control plane node and compute node has been updated.

The following example workflow describes how this process might occur in a machine config pool with 5 nodes, where `.spec.maxUnavailable` is 3 and all nodes are initially available:

. The MCO cordons nodes 1, 2, and 3, and begins to drain them.

. Node 2 finishes draining, reboots, and becomes available again. The MCO cordons node 4 and begins draining it.

. Node 1 finishes draining, reboots, and becomes available again. The MCO cordons node 5 and begins draining it.

. Node 3 finishes draining, reboots, and becomes available again.

. Node 5 finishes draining, reboots, and becomes available again.

. Node 4 finishes draining, reboots, and becomes available again.

Because the update process for each node is independent of other nodes, some nodes in the previous example finish their update out of the order in which they were cordoned by the MCO.
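
While nodes are being updated in this way, you can see which nodes are cordoned by inspecting their status. The node names and versions in the following output are illustrative:

[source,terminal]
----
$ oc get nodes
----

.Example output
[source,terminal]
----
NAME                          STATUS                     ROLES    AGE   VERSION
ip-10-0-137-44.ec2.internal   Ready                      worker   22h   v1.23.5+012e945
ip-10-0-150-12.ec2.internal   Ready,SchedulingDisabled   worker   22h   v1.23.5+012e945
----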

You can check the status of the machine configuration update by running the following command:

[source,terminal]
----
$ oc get mcp
----

.Example output

[source,terminal]
----
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h
worker rendered-worker-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h
----
47 changes: 47 additions & 0 deletions modules/update-process-workflow.adoc
@@ -0,0 +1,47 @@
// Module included in the following assemblies:
//
// * updating/understanding_updates/how-updates-work.adoc

:_content-type: CONCEPT
[id="update-process-workflow_{context}"]
= Update process workflow

The following steps represent a detailed workflow of the {product-title} (OCP) update process:

. The target version is stored in the `spec.desiredUpdate.version` field of the `ClusterVersion` resource, which you can manage through the web console or the CLI.

. The Cluster Version Operator (CVO) detects that the `desiredUpdate` in the `ClusterVersion` resource differs from the current cluster version.
Using graph data from the OpenShift Update Service, the CVO resolves the desired cluster version to a pull spec for the release image.

. The CVO validates the integrity and authenticity of the release image.
Red Hat publishes cryptographically signed statements about published release images at predefined locations by using image SHA digests as unique and immutable release image identifiers.
The CVO uses a list of built-in public keys to validate the presence and signatures of the statement matching the checked release image.

. The CVO creates a job named `version-$version-$hash` in the `openshift-cluster-version` namespace.
This job uses containers that run the release image, so the cluster downloads the image through the container runtime.
The job then extracts the manifests and metadata from the release image to a shared volume that is accessible to the CVO.

. The CVO validates the extracted manifests and metadata.

. The CVO checks some preconditions to ensure that no problematic condition is detected in the cluster.
Certain conditions can prevent updates from proceeding.
These conditions are either determined by the CVO itself or reported by individual cluster Operators that detect a state of the cluster that they consider problematic for the update.

. The CVO records the accepted release in `status.desired` and creates a `status.history` entry about the new update, as shown in the example after this workflow.

. The CVO begins applying the manifests from the release image.
Cluster Operators are updated in separate stages called Runlevels, and the CVO ensures that all Operators in a Runlevel finish updating before it proceeds to the next level.

. Manifests for the CVO itself are applied early in the process.
When the CVO deployment is applied, the current CVO pod terminates, and a CVO pod using the new version starts.
The new CVO proceeds to apply the remaining manifests.

. The update proceeds until the entire control plane is updated to the new version.
Individual cluster Operators might perform update tasks on their domain of the cluster, and while they do so, they report their state through the `Progressing=True` condition.

. The Machine Config Operator (MCO) manifests are applied toward the end of the process.
The updated MCO then begins updating the system configuration and operating system of every node.
Each node might be drained, updated, and rebooted before it starts to accept workloads again.

The cluster reports as updated after the control plane update is finished, usually before all nodes are updated.
After the update, the CVO maintains all cluster resources to match the state delivered in the release image.
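
As mentioned in the workflow, the CVO records each accepted update in the `status.history` field of the `ClusterVersion` resource. You can inspect this record with a query such as the following; the selected fields are a partial sketch of what each history entry contains:

[source,terminal]
----
$ oc get clusterversion version -o json | jq '.status.history[] | {version, state, startedTime, completionTime}'
----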
40 changes: 40 additions & 0 deletions modules/update-release-images.adoc
@@ -0,0 +1,40 @@
// Module included in the following assemblies:
//
// * updating/understanding_updates/how-updates-work.adoc

:_content-type: CONCEPT
[id="update-release-images_{context}"]
= Release images

A release image is the delivery mechanism for a specific {product-title} (OCP) version.
It contains the release metadata, a Cluster Version Operator (CVO) binary matching the release version, every manifest needed to deploy individual OpenShift cluster Operators, and a list of SHA digest-versioned references to all container images that make up this OpenShift version.

You can inspect the content of a specific release image by running the following command:

[source,terminal]
----
$ oc adm release extract <release image>
----

.Example output
[source,terminal]
----
$ oc adm release extract quay.io/openshift-release-dev/ocp-release:4.12.6-x86_64
Extracted release payload from digest sha256:800d1e39d145664975a3bb7cbc6e674fbf78e3c45b5dde9ff2c5a11a8690c87b created at 2023-03-01T12:46:29Z

$ ls
0000_03_authorization-openshift_01_rolebindingrestriction.crd.yaml
0000_03_config-operator_01_proxy.crd.yaml
0000_03_marketplace-operator_01_operatorhub.crd.yaml
0000_03_marketplace-operator_02_operatorhub.cr.yaml
0000_03_quota-openshift_01_clusterresourcequota.crd.yaml <1>
...
0000_90_service-ca-operator_02_prometheusrolebinding.yaml <2>
0000_90_service-ca-operator_03_servicemonitor.yaml
0000_99_machine-api-operator_00_tombstones.yaml
image-references <3>
release-metadata
----
<1> Manifest for `ClusterResourceQuota` CRD, to be applied on Runlevel 03
<2> Manifest for `PrometheusRoleBinding` resource for the `service-ca-operator`, to be applied on Runlevel 90
<3> List of SHA digest-versioned references to all required images
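
You can also inspect the release metadata and the referenced component images without extracting files by using the related `oc adm release info` command. The image tag here matches the earlier extraction example:

[source,terminal]
----
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.12.6-x86_64
----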
2 changes: 1 addition & 1 deletion modules/update-service-overview.adoc
@@ -1,7 +1,7 @@
// Module included in the following assemblies:
//
// * architecture/architecture-installation.adoc
// * updating/understanding-openshift-updates.adoc
// * updating/understanding_updates/intro-to-updates.adoc

:_content-type: CONCEPT
[id="update-service-about_{context}"]
2 changes: 1 addition & 1 deletion security/container_security/security-hosts-vms.adoc
@@ -28,7 +28,7 @@ include::modules/security-hosts-vms-rhcos.adoc[leveloffset=+1]
* xref:../../installing/install_config/installing-customizing.adoc#installation-special-config-kmod_installing-customizing[Kernel modules]
* xref:../../installing/install_config/installing-customizing.adoc#installation-special-config-encrypt-disk_installing-customizing[Disk encryption]
* xref:../../installing/install_config/installing-customizing.adoc#installation-special-config-chrony_installing-customizing[Chrony time service]
* xref:../../updating/understanding-openshift-updates.adoc#update-service-about_understanding-openshift-updates[About the OpenShift Update Service]
* xref:../../updating/understanding_updates/intro-to-updates.adoc#update-service-about_understanding-openshift-updates[About the OpenShift Update Service]
////
ifndef::openshift-origin[]
* xref:../../installing/installing-fips.adoc#installing-fips[FIPS cryptography]
2 changes: 1 addition & 1 deletion updating/index.adoc
@@ -11,7 +11,7 @@ You can update an {product-title} 4 cluster with a single operation by using the
ifndef::openshift-origin[]
[id="updating-clusters-overview-understanding-container-platform-updates"]
== Understanding OpenShift Container Platform updates
xref:../updating/understanding-openshift-updates.adoc#update-service-about_understanding-openshift-updates[About the OpenShift Update Service]: For clusters with internet access, Red Hat provides over-the-air updates by using an {product-title} update service as a hosted service located behind public APIs.
xref:../updating/understanding_updates/intro-to-updates.adoc#update-service-about_understanding-openshift-updates[About the OpenShift Update Service]: For clusters with internet access, Red Hat provides over-the-air updates by using an {product-title} update service as a hosted service located behind public APIs.
endif::openshift-origin[]

[id="updating-clusters-overview-upgrade-channels-and-releases"]