
Commit c615219

Merge pull request #4074 from fabriziopandini/document-multi-tenancy-contract
📖 Document multi-tenancy contract
2 parents 97436cc + bb54134 commit c615219

File tree: 8 files changed, +78 -131 lines

docs/book/src/SUMMARY.md (+2)

@@ -43,6 +43,8 @@
    - [MachineHealthCheck](./developer/architecture/controllers/machine-health-check.md)
    - [Control Plane](./developer/architecture/controllers/control-plane.md)
    - [MachinePool](./developer/architecture/controllers/machine-pool.md)
+   - [Multi-tenancy](./developer/architecture/controllers/multi-tenancy.md)
+   - [Support multiple instances](./developer/architecture/controllers/support-multiple-instances.md)
  - [Provider Implementers](./developer/providers/implementers.md)
    - [v1alpha1 to v1alpha2](./developer/providers/v1alpha1-to-v1alpha2.md)
    - [v1alpha2 to v1alpha3](./developer/providers/v1alpha2-to-v1alpha3.md)

docs/book/src/clusterctl/commands/init.md (-60)

@@ -125,66 +125,6 @@ same namespace.

  </aside>
- #### Multi-tenancy
-
- *Multi-tenancy* for Cluster API means a management cluster where multiple instances of the same provider are installed.
-
- The user can achieve multi-tenancy configurations with `clusterctl` by a combination of:
-
- - Multiple calls to `clusterctl init`;
- - Usage of the `--target-namespace` flag;
- - Usage of the `--watching-namespace` flag.
-
- The `clusterctl` command officially supports the following multi-tenancy configurations:
-
- {{#tabs name:"tab-multi-tenancy" tabs:"n-Infra, n-Core"}}
- {{#tab n-Infra}}
- A management cluster with <em>n (n>1)</em> instances of an infrastructure provider, and <em>only one</em> instance
- of the Cluster API core provider, bootstrap provider, and control plane provider (optional).
-
- For example:
-
- * The Cluster API core provider installed in the `capi-system` namespace, watching objects in all namespaces;
- * The kubeadm bootstrap provider in `capbpk-system`, watching all namespaces;
- * The kubeadm control plane provider in `cacpk-system`, watching all namespaces;
- * The `aws` infrastructure provider in `aws-system1`, watching objects in `aws-system1` only;
- * The `aws` infrastructure provider in `aws-system2`, watching objects in `aws-system2` only;
- * etc. (more instances of the `aws` provider)
-
- {{#/tab }}
- {{#tab n-Core}}
- A management cluster with <em>n (n>1)</em> instances of the Cluster API core provider, each one with <em>a dedicated</em>
- instance of infrastructure provider, bootstrap provider, and control plane provider (optional).
-
- For example:
-
- * A Cluster API core provider installed in the `capi-system1` namespace, watching objects in `capi-system1` only, and with:
-   * The kubeadm bootstrap provider in `capi-system1`, watching `capi-system1`;
-   * The kubeadm control plane provider in `capi-system1`, watching `capi-system1`;
-   * The `aws` infrastructure provider in `capi-system1`, watching objects in `capi-system1`;
- * A Cluster API core provider installed in the `capi-system2` namespace, watching objects in `capi-system2` only, and with:
-   * The kubeadm bootstrap provider in `capi-system2`, watching `capi-system2`;
-   * The kubeadm control plane provider in `capi-system2`, watching `capi-system2`;
-   * The `aws` infrastructure provider in `capi-system2`, watching objects in `capi-system2`;
- * etc. (more instances of the Cluster API core provider and the dedicated providers)
-
- {{#/tab }}
- {{#/tabs }}
-
- <aside class="note warning">
-
- <h1>Warning</h1>
-
- It is possible to achieve many other multi-tenancy configurations with `clusterctl`.
-
- However, the user should be aware that configurations not listed above are not verified by the `clusterctl` tests,
- and support will be provided at best effort only.
-
- </aside>
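For reference, a minimal sketch of the `clusterctl init` calls behind the n-Infra layout described in the removed section; the namespace names, the choice of the `aws` provider, and the credentials variable are illustrative assumptions only:

```shell
# Shared core, bootstrap, and control plane providers, watching all namespaces
# (the default when --watching-namespace is not set).
clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm

# First aws provider instance, deployed in and watching aws-system1 only.
clusterctl init --infrastructure aws \
    --target-namespace aws-system1 \
    --watching-namespace aws-system1

# Second aws provider instance with its own credentials (e.g. a different
# AWS_B64ENCODED_CREDENTIALS in the environment), scoped to aws-system2.
clusterctl init --infrastructure aws \
    --target-namespace aws-system2 \
    --watching-namespace aws-system2
```

The n-Core layout would follow the same pattern, repeating the core/bootstrap/control-plane calls once per namespace.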
  ## Provider repositories

  To access provider specific information, such as the components YAML to be used for installing a provider,

docs/book/src/clusterctl/commands/upgrade.md (-64)

@@ -3,17 +3,6 @@
  The `clusterctl upgrade` command can be used to upgrade the version of the Cluster API providers (CRDs, controllers)
  installed into a management cluster.
- ## Background info: management groups
-
- The upgrade procedure is designed to ensure all the providers in a *management group* use the same
- API version of Cluster API (contract), e.g. the v1alpha3 Cluster API contract.
-
- A management group is a group of providers composed of a CoreProvider and a set of Bootstrap/ControlPlane/Infrastructure
- providers watching objects in the same namespace.
-
- Usually, in a management cluster there is only one management group, but in case of [n-core multi tenancy](init.md#multi-tenancy)
- there can be more than one.

  # upgrade plan

  The `clusterctl upgrade plan` command can be used to identify possible targets for upgrades.
@@ -106,56 +95,3 @@ clusterctl upgrade apply --management-group capi-system/cluster-api \

  In this case, all the providers' versions must be explicitly stated.

  </aside>
- ## Upgrading a multi-tenancy management cluster
-
- [Multi-tenancy](init.md#multi-tenancy) for Cluster API means a management cluster where multiple instances of the same
- provider are installed. This is achieved by multiple calls to `clusterctl init`, in most cases each one with
- different environment variables for customizing the provider instances.
-
- In order to upgrade a multi-tenancy management cluster while preserving the instance-specific settings, you should do
- the same during upgrades and execute multiple calls to `clusterctl upgrade apply`, each one with different environment
- variables.
-
- For instance, in case of a management cluster with n>1 instances of an infrastructure provider, and only one instance
- of the Cluster API core provider, bootstrap provider, and control plane provider, you should:
-
- Run `clusterctl upgrade apply` once for the core provider, the bootstrap provider, and the control plane provider;
- this can be achieved by using the `--core`, `--bootstrap` and `--control-plane` flags followed by the upgrade target
- for each one of those providers, e.g.
-
- ```shell
- clusterctl upgrade apply --management-group capi-system/cluster-api \
-     --core capi-system/cluster-api:v0.3.1 \
-     --bootstrap capi-kubeadm-bootstrap-system/kubeadm:v0.3.1 \
-     --control-plane capi-kubeadm-control-plane-system/kubeadm:v0.3.1
- ```
-
- Run `clusterctl upgrade apply` for each infrastructure provider instance, using the `--infrastructure` flag,
- taking care to provide different environment variables for each call (as in the initial setup), e.g.
-
- Set the environment variables for instance 1 and then run:
-
- ```shell
- clusterctl upgrade apply --management-group capi-system/cluster-api \
-     --infrastructure instance1/docker:v0.3.1
- ```
-
- Afterwards, set the environment variables for instance 2 and then run:
-
- ```shell
- clusterctl upgrade apply --management-group capi-system/cluster-api \
-     --infrastructure instance2/docker:v0.3.1
- ```
-
- etc.
-
- <aside class="note warning">
-
- <h1>Tip</h1>
-
- As an alternative to using multiple sets of environment variables, it is possible to use
- multiple config files and pass them to the different `clusterctl upgrade apply` calls
- using the `--config` flag.
-
- </aside>
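A minimal sketch of the config-file alternative mentioned in the tip above; the file names are hypothetical, and each file would carry the variables previously set in the environment:

```shell
# Upgrade instance 1 using its dedicated clusterctl config file.
clusterctl upgrade apply --config instance1-clusterctl.yaml \
    --management-group capi-system/cluster-api \
    --infrastructure instance1/docker:v0.3.1

# Upgrade instance 2 with its own config file.
clusterctl upgrade apply --config instance2-clusterctl.yaml \
    --management-group capi-system/cluster-api \
    --infrastructure instance2/docker:v0.3.1
```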

docs/book/src/clusterctl/provider-contract.md (+1 -3)

@@ -283,8 +283,6 @@ Provider authors should be aware of the following transformations that `clusterctl`
  * Enforcement of target namespace:
    * The name of the namespace object is set;
    * The namespace field of all the objects is set (with the exception of cluster-wide objects, e.g. ClusterRoles);
-   * ClusterRole and ClusterRoleBinding are renamed by adding a "${namespace}-" prefix to the name; this change reduces the risk
-     of conflicts between several instances of the same provider in case of multi-tenancy;
  * Enforcement of watching namespace;
  * All components are labeled;

@@ -307,7 +305,7 @@ If, for any reason, the provider authors/YAML designers decide not to comply with
  * implement link to external objects from a cluster template (e.g. secrets, configMaps NOT included in the cluster template)

  The provider authors/YAML designers should be aware that it is their responsibility to ensure the proper
- functioning of all the `clusterctl` features both in single-tenancy and multi-tenancy scenarios, and/or document known limitations.
+ functioning of `clusterctl` when using non-compliant component YAML or cluster templates.

  ### Move

docs/book/src/developer/architecture/controllers/multi-tenancy.md (new file, +13)

@@ -0,0 +1,13 @@
+ # Multi tenancy
+
+ Multi tenancy in Cluster API defines the capability of an infrastructure provider to manage different credentials,
+ each corresponding to an infrastructure tenant.
+
+ ## Contract
+
+ In order to support multi tenancy, the following rules apply:
+
+ - Infrastructure providers MUST be able to manage different sets of credentials (if any);
+ - Providers SHOULD deploy and run any kind of webhook (validation, admission, conversion)
+   following Cluster API codebase best practices for the same release;
+ - Providers MUST create and publish a `{type}-component.yaml` accordingly.
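As an illustration of the first rule (all names here are hypothetical, not part of the contract), a single provider instance can serve multiple tenants by resolving a per-tenant credentials Secret:

```shell
# One Secret per infrastructure tenant, all readable by the same controller;
# each infrastructure cluster object would then reference the Secret for its tenant.
kubectl create secret generic tenant-a-credentials \
    --namespace infra-provider-system --from-literal=credentials="$TENANT_A_CREDS"
kubectl create secret generic tenant-b-credentials \
    --namespace infra-provider-system --from-literal=credentials="$TENANT_B_CREDS"
```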
docs/book/src/developer/architecture/controllers/support-multiple-instances.md (new file, +41)

@@ -0,0 +1,41 @@
+ # Support running multiple instances of the same provider
+
+ Up until v1alpha3, the need to support [multiple credentials](../../../reference/glossary.md#multi-tenancy) was addressed by running multiple
+ instances of the same provider, each one with its own set of credentials while watching different namespaces.
+
+ However, running multiple instances of the same provider proved to be complicated for several reasons:
+
+ - Complexity in packaging providers: CustomResourceDefinitions (CRDs) are global resources, and they may reference
+   a service that can be used to convert between CRD versions (conversion webhooks). Only one of these services should
+   be running at any given time; this requirement previously led us to split the webhook code into a separate deployment
+   and namespace.
+ - Complexity in deploying providers, due to the requirement to ensure consistency of the management cluster, e.g.
+   controllers watching the same namespaces.
+ - The introduction of the concept of management groups in clusterctl, with impacts on the user experience/documentation.
+ - Complexity in managing the co-existence of different versions of the same provider, while there could be only
+   one version of CRDs and webhooks. Please note that this constraint generates a risk, because some versions of the provider
+   were de facto forced to run with CRDs and webhooks deployed from a different version.
+
+ Nevertheless, we want to make it possible for users to choose to deploy multiple instances of the same provider,
+ in case the above limitations/extra complexity are acceptable for them.
+
+ ## Contract
+
+ In order to make it possible for users to deploy multiple instances of the same provider:
+
+ - Providers MUST support the `--namespace` flag in their controllers.
+
+ ⚠️ Users selecting this deployment model, please be aware:
+
+ - Support should be considered best-effort.
+ - Cluster API (incl. every provider managed under `kubernetes-sigs`) won't release a specialized components file
+   supporting the scenario described above; however, users should be able to create such a deployment model from
+   the `/config` folder.
+ - Cluster API (incl. every provider managed under `kubernetes-sigs`) testing infrastructure won't run test cases
+   with multiple instances of the same provider.
+
+ In conclusion, given the increasingly complex task of managing multiple instances of the same controllers,
+ the Cluster API community may only provide best-effort support for users that choose this model.
+
+ As always, if some members of the community would like to take on the responsibility of managing this model,
+ please reach out through the usual communication channels, and we'll make sure to guide you down the right path.
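For illustration, the contract's `--namespace` flag is what lets each instance scope itself to its own namespace; a sketch, where the binary name and namespaces are hypothetical:

```shell
# Instance 1: deployed in, and watching, provider-system1 only.
./manager --namespace provider-system1

# Instance 2: the same controller binary with different credentials,
# scoped to provider-system2.
./manager --namespace provider-system2
```

Each instance would also need its own copies of the namespaced RBAC and Deployment objects, which is what customizing the `/config` folder as mentioned above would produce.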

docs/book/src/developer/providers/v1alpha3-to-v1alpha4.md (+13)

@@ -41,3 +41,16 @@ the delegating client by default under the hood, so this can now be removed.
  - The functions `fake.NewFakeClientWithScheme` and `fake.NewFakeClient` have been deprecated.
  - Switch to `fake.NewClientBuilder().WithObjects().Build()` instead, which provides a cleaner interface
    to create a new fake client with objects, lists, or a scheme.
+
+ ## Multi tenancy
+
+ Up until v1alpha3, the need to support multiple credentials was addressed by running multiple
+ instances of the same provider, each one with its own set of credentials while watching different namespaces.
+
+ Starting from v1alpha4, we are instead going to require that an infrastructure provider manage different credentials,
+ each corresponding to an infrastructure tenant.
+
+ See [Multi-tenancy](../architecture/controllers/multi-tenancy.md) and [Support multiple instances](../architecture/controllers/support-multiple-instances.md) for
+ more details.
+
+ Specific changes related to this topic will be detailed in this document.

docs/book/src/reference/glossary.md (+8 -4)

@@ -142,11 +142,15 @@ Perform create, scale, upgrade, or destroy operations on the cluster.

  The cluster where one or more Infrastructure Providers run, and where resources (e.g. Machines) are stored. Typically referred to when you are provisioning multiple workload clusters.

- ### Management group
+ ### Multi-tenancy

- A management group is a group of providers composed of a CoreProvider and a set of Bootstrap/ControlPlane/Infrastructure providers
- watching objects in the same namespace. For example, a management group can be used for upgrades, in order to ensure all the providers
- in a management group support the same Cluster API version.
+ Multi tenancy in Cluster API defines the capability of an infrastructure provider to manage different credentials,
+ each corresponding to an infrastructure tenant.
+
+ Please note that up until v1alpha3 this concept had a different meaning, referring to the capability to run multiple
+ instances of the same provider, each one with its own credentials; starting from v1alpha4 we are disambiguating the two concepts.
+
+ See [Multi-tenancy](../developer/architecture/controllers/multi-tenancy.md) and [Support multiple instances](../developer/architecture/controllers/support-multiple-instances.md).

  # N