The `clusterctl upgrade` command can be used to upgrade the version of the Cluster API providers (CRDs, controllers)
installed into a management cluster.

## Background info: management groups

The upgrade procedure is designed to ensure all the providers in a *management group* use the same
API Version of Cluster API (contract), e.g. the v1alpha3 Cluster API contract.

A management group is a group of providers composed of a CoreProvider and a set of Bootstrap/ControlPlane/Infrastructure
providers watching objects in the same namespace.

Usually, in a management cluster there is only one management group, but in case of [n-core multi tenancy](init.md#multi-tenancy)
there can be more than one.
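
To see which provider instances are installed and, for each of them, which namespace it is watching, one option is to
list the inventory objects that `clusterctl init` creates. This is a sketch and assumes the clusterctl `providers`
custom resources are available in the management cluster:

```shell
# List the clusterctl provider inventory across all namespaces; a CoreProvider and the
# Bootstrap/ControlPlane/Infrastructure providers watching the same namespace form a management group.
kubectl get providers --all-namespaces
```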

# upgrade plan

The `clusterctl upgrade plan` command can be used to identify possible targets for upgrades.
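
For example, assuming the kubeconfig currently in use points at the management cluster, the command can be run without
additional flags:

```shell
clusterctl upgrade plan
```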

...

When specifying per-provider upgrade targets with `clusterctl upgrade apply --management-group capi-system/cluster-api`,
all the providers' versions must be explicitly stated.

## Upgrading a Multi-tenancy management cluster

[Multi-tenancy](init.md#multi-tenancy) for Cluster API means a management cluster where multiple instances of the same
provider are installed. This is achieved by multiple calls to `clusterctl init`, in most cases each one with different
environment variables for customizing the provider instances.

In order to upgrade a multi-tenancy management cluster while preserving the instance-specific settings, you should do
the same during upgrades and execute multiple calls to `clusterctl upgrade apply`, each one with different environment
variables.

For instance, in case of a management cluster with n>1 instances of an infrastructure provider, and only one instance
of the Cluster API core provider, bootstrap provider and control plane provider, you should:

Run `clusterctl upgrade apply` once for the core provider, the bootstrap provider and the control plane provider;
this can be achieved by using the `--core`, `--bootstrap` and `--control-plane` flags, each followed by the upgrade
target for the corresponding provider, e.g.

```shell
clusterctl upgrade apply --management-group capi-system/cluster-api \
  --core capi-system/cluster-api:v0.3.1 \
  --bootstrap capi-kubeadm-bootstrap-system/kubeadm:v0.3.1 \
  --control-plane capi-kubeadm-control-plane-system/kubeadm:v0.3.1
```

Run `clusterctl upgrade apply` for each infrastructure provider instance, using the `--infrastructure` flag and
taking care to provide different environment variables for each call (as in the initial setup), e.g.

Set the environment variables for instance 1 and then run:

```shell
clusterctl upgrade apply --management-group capi-system/cluster-api \
  --infrastructure instance1/docker:v0.3.1
```

Afterwards, set the environment variables for instance 2 and then run:

```shell
clusterctl upgrade apply --management-group capi-system/cluster-api \
  --infrastructure instance2/docker:v0.3.1
```

And so on, until all the infrastructure provider instances are upgraded.

<aside class="note warning">

<h1>Tips</h1>

As an alternative to using multiple sets of environment variables, it is possible to use multiple config files and
pass them to the different `clusterctl upgrade apply` calls using the `--config` flag, as shown in the sketch below.

</aside>
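
A minimal sketch of that approach; the file names `instance1.yaml` and `instance2.yaml` are placeholders for your own
per-instance clusterctl configuration files:

```shell
# Upgrade the first infrastructure provider instance using its own clusterctl config file
clusterctl upgrade apply --management-group capi-system/cluster-api \
  --infrastructure instance1/docker:v0.3.1 \
  --config instance1.yaml

# Upgrade the second instance with a different config file
clusterctl upgrade apply --management-group capi-system/cluster-api \
  --infrastructure instance2/docker:v0.3.1 \
  --config instance2.yaml
```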