2 changes: 2 additions & 0 deletions _topic_map.yml
@@ -179,6 +179,8 @@ Topics:
File: installing-gcp-private
- Name: Installing a cluster on GCP using Deployment Manager templates
File: installing-gcp-user-infra
- Name: Installing a cluster on GCP using Deployment Manager templates and a shared VPC
File: installing-gcp-user-infra-vpc
- Name: Restricted network GCP installation
File: installing-restricted-networks-gcp
- Name: Uninstalling a cluster on GCP
137 changes: 137 additions & 0 deletions installing/installing_gcp/installing-gcp-user-infra-vpc.adoc
@@ -0,0 +1,137 @@
[id="installing-gcp-user-infra-vpc"]
= Installing a cluster with a shared VPC on user-provisioned infrastructure in GCP by using Deployment Manager templates
include::modules/common-attributes.adoc[]
:context: installing-gcp-user-infra-vpc

toc::[]

In {product-title} version {product-version}, you can install a cluster into a shared Virtual Private Cloud (VPC) on
Google Cloud Platform (GCP) by using infrastructure that you provide.

This document outlines the steps for installing a cluster with user-provisioned infrastructure into a shared VPC. Several
link:https://cloud.google.com/deployment-manager/docs[Deployment Manager] templates are provided to assist in
completing these steps or to help model your own. You can also create
the required resources through other methods; the templates are only an
example.
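
The templates in this guide generally follow the same pattern: you save the provided Python template, create a small YAML configuration that supplies the template properties, and then create a deployment from that configuration. The following sketch only illustrates that pattern; the file names, resource name, and properties are placeholders, and the later steps in this guide define the real files and values:

----
$ cat <<EOF >example.yaml
imports:
- path: example_template.py
resources:
- name: cluster-example
  type: example_template.py
  properties:
    infra_id: '${INFRA_ID}'
    region: '${REGION}'
EOF
$ gcloud deployment-manager deployments create ${INFRA_ID}-example --config example.yaml
----

Each step that uses a template specifies the exact configuration file to create and the properties that its template requires.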

.Prerequisites

* Review details about the
xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update]
processes.
* If you use a firewall and plan to use telemetry, you must
xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configure the firewall to allow the sites] that your cluster requires access to.
+
[NOTE]
====
Be sure to also review this site list if you are configuring a proxy.
====

[id="csr-management-gcp-vpc"]
== Certificate signing request management

Because your cluster has limited access to automatic machine management when you
use infrastructure that you provision, you must provide a mechanism for approving
cluster certificate signing requests (CSRs) after installation. The
`kube-controller-manager` only approves the kubelet client CSRs. The
`machine-approver` cannot guarantee the validity of a serving certificate
that is requested by using kubelet credentials because it cannot confirm that
the correct machine issued the request. You must determine and implement a
method of verifying the validity of the kubelet serving certificate requests
and approving them.
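
A brief sketch of that workflow, which a later step in this document covers in detail (`<csr_name>` is a placeholder for the name of a request that you have verified):

----
$ oc get csr                               # list pending client and serving certificate requests
$ oc describe csr <csr_name>               # confirm that the request came from an expected node
$ oc adm certificate approve <csr_name>    # approve only the requests that you have verified
----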

[id="installation-gcp-user-infra-config-project-vpc"]
== Configuring the GCP project that hosts your cluster

Before you can install {product-title}, you must configure a Google Cloud
Platform (GCP) project to host it.

include::modules/installation-gcp-project.adoc[leveloffset=+2]
include::modules/installation-gcp-enabling-api-services.adoc[leveloffset=+2]
include::modules/installation-gcp-limits.adoc[leveloffset=+2]
include::modules/installation-gcp-service-account.adoc[leveloffset=+2]
include::modules/installation-gcp-permissions.adoc[leveloffset=+3]
include::modules/installation-gcp-regions.adoc[leveloffset=+2]
include::modules/installation-gcp-install-cli.adoc[leveloffset=+2]

include::modules/installation-gcp-user-infra-config-host-project-vpc.adoc[leveloffset=+1]
include::modules/installation-gcp-dns.adoc[leveloffset=+2]
include::modules/installation-creating-gcp-vpc.adoc[leveloffset=+2]
include::modules/installation-deployment-manager-vpc.adoc[leveloffset=+3]

include::modules/installation-user-infra-generate.adoc[leveloffset=+1]

include::modules/installation-initializing-manual.adoc[leveloffset=+2]

include::modules/installation-gcp-user-infra-shared-vpc-config-yaml.adoc[leveloffset=+2]

include::modules/installation-configure-proxy.adoc[leveloffset=+2]

//include::modules/installation-three-node-cluster.adoc[leveloffset=+2]

include::modules/installation-user-infra-generate-k8s-manifest-ignition.adoc[leveloffset=+2]
.Additional resources

[id="installation-gcp-user-infra-exporting-common-variables-vpc"]
== Exporting common variables

include::modules/installation-extracting-infraid.adoc[leveloffset=+2]
include::modules/installation-user-infra-exporting-common-variables.adoc[leveloffset=+2]

include::modules/installation-creating-gcp-lb.adoc[leveloffset=+1]
include::modules/installation-deployment-manager-ext-lb.adoc[leveloffset=+2]
include::modules/installation-deployment-manager-int-lb.adoc[leveloffset=+2]

include::modules/installation-creating-gcp-private-dns.adoc[leveloffset=+1]
include::modules/installation-deployment-manager-private-dns.adoc[leveloffset=+2]

include::modules/installation-creating-gcp-firewall-rules-vpc.adoc[leveloffset=+1]
include::modules/installation-deployment-manager-firewall-rules.adoc[leveloffset=+2]

include::modules/installation-creating-gcp-iam-shared-vpc.adoc[leveloffset=+1]
include::modules/installation-deployment-manager-iam-shared-vpc.adoc[leveloffset=+2]

include::modules/installation-gcp-user-infra-rhcos.adoc[leveloffset=+1]

include::modules/installation-creating-gcp-bootstrap.adoc[leveloffset=+1]
include::modules/installation-deployment-manager-bootstrap.adoc[leveloffset=+2]

include::modules/installation-creating-gcp-control-plane.adoc[leveloffset=+1]
include::modules/installation-deployment-manager-control-plane.adoc[leveloffset=+2]

include::modules/installation-gcp-user-infra-wait-for-bootstrap.adoc[leveloffset=+1]

include::modules/installation-creating-gcp-worker.adoc[leveloffset=+1]
include::modules/installation-deployment-manager-worker.adoc[leveloffset=+2]

include::modules/cli-installing-cli.adoc[leveloffset=+1]

include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1]

include::modules/installation-approve-csrs.adoc[leveloffset=+1]

include::modules/installation-gcp-user-infra-adding-ingress.adoc[leveloffset=+1]

[id="installation-gcp-user-infra-vpc-adding-firewall-rules"]
== Adding ingress firewall rules

The cluster requires several firewall rules. If you do not use a shared VPC, the Ingress Controller creates these rules through the GCP cloud provider. When you use a shared VPC, you can either create cluster-wide firewall rules for all services now or create each rule based on events, when the cluster requests access. By creating each rule when the cluster requests access, you know exactly which firewall rules are required. By creating cluster-wide firewall rules, you can apply the same rule set across multiple clusters.

If you choose to create each rule based on events, you must create firewall rules after you provision the cluster and during the life of the cluster, when the console notifies you that rules are missing. The console displays events that are similar to the following event, and you must add the firewall rules that each event requires:

----
Firewall change required by security admin: `gcloud compute firewall-rules create k8s-fw-a26e631036a3f46cba28f8df67266d55 --network example-network --description "{\"kubernetes.io/service-name\":\"openshift-ingress/router-default\", \"kubernetes.io/service-ip\":\"35.237.236.234\"}" --allow tcp:443,tcp:80 --source-ranges 0.0.0.0/0 --target-tags exampl-fqzq7-master,exampl-fqzq7-worker --project example-project`
----

If you encounter issues when you create these event-based firewall rules, you can configure the cluster-wide firewall rules while your cluster is running.
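
For example, a cluster-wide rule that opens the default ingress ports to all sources might look similar to the following command. The rule name, network, ports, and tags in this sketch are assumptions; the module that follows defines the rules that your cluster actually requires.

----
$ gcloud compute firewall-rules create ${INFRA_ID}-ingress-http-https \
    --network="${CLUSTER_NETWORK}" \
    --allow=tcp:80,tcp:443 \
    --source-ranges=0.0.0.0/0 \
    --target-tags="${INFRA_ID}-master,${INFRA_ID}-worker" \
    --project=${HOST_PROJECT} --account=${HOST_PROJECT_ACCOUNT}
----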

include::modules/installation-creating-gcp-shared-vpc-cluster-wide-firewall-rules.adoc[leveloffset=+2]

//include::modules/installation-creating-gcp-shared-vpc-ingress-firewall-rules.adoc[leveloffset=+1]

include::modules/installation-gcp-user-infra-completing.adoc[leveloffset=+1]

.Next steps

* xref:../../installing/install_config/customizations.adoc#customizations[Customize your cluster].
* If necessary, you can
xref:../../support/remote_health_monitoring/opting-out-of-remote-health-reporting.adoc#opting-out-remote-health-reporting_opting-out-remote-health-reporting[opt out of remote health reporting].
2 changes: 1 addition & 1 deletion installing/installing_gcp/installing-gcp-user-infra.adoc
@@ -1,5 +1,5 @@
[id="installing-gcp-user-infra"]
= Installing a cluster on GCP using Deployment Manager templates
= Installing a cluster on user-provisioned infrastructure in GCP by using Deployment Manager templates
include::modules/common-attributes.adoc[]
:context: installing-gcp-user-infra

52 changes: 49 additions & 3 deletions modules/installation-creating-gcp-bootstrap.adoc
@@ -3,6 +3,10 @@
// * installing/installing_gcp/installing-gcp-user-infra.adoc
// * installing/installing_gcp/installing-restricted-networks-gcp.adoc

ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
:shared-vpc:
endif::[]

[id="installation-creating-gcp-bootstrap_{context}"]
= Creating the bootstrap machine in GCP

@@ -32,15 +36,37 @@ have to contact Red Hat support with your installation logs.
section of this topic and save it as `04_bootstrap.py` on your computer. This
template describes the bootstrap machine that your cluster requires.

. Export the following variables required by the resource definition:
. Export the variables that the deployment template uses:
//You need these variables before you deploy the load balancers for the shared VPC case, so the export statements that are if'd out for shared-vpc are in the load balancer module.
.. Export the control plane subnet location:
+
ifndef::shared-vpc[]
----
$ export CONTROL_SUBNET=`gcloud compute networks subnets describe ${INFRA_ID}-master-subnet --region=${REGION} --format json | jq -r .selfLink`
----
endif::shared-vpc[]

.. Export the location of the {op-system-first} image that the installation program requires:
+
----
$ export CLUSTER_IMAGE=`gcloud compute images describe ${INFRA_ID}-rhcos-image --format json | jq -r .selfLink`
----

ifndef::shared-vpc[]
.. Export each zone that the cluster uses:
+
----
$ export ZONE_0=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`
----
+
----
$ export ZONE_1=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`
----
+
----
$ export ZONE_2=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`
----
endif::shared-vpc[]

. Create a bucket and upload the `bootstrap.ign` file:
+
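A minimal sketch of this step, assuming a dedicated bucket that is named after the infrastructure ID; the bucket name and file path are placeholders rather than values that this procedure defines:
+
----
$ gsutil mb gs://${INFRA_ID}-bootstrap-ignition
$ gsutil cp <installation_directory>/bootstrap.ign gs://${INFRA_ID}-bootstrap-ignition/
----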
@@ -82,8 +108,8 @@ resources:
EOF
----
<1> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
<2> `region` is the region to deploy the cluster into, for example `us-east1`.
<3> `zone` is the zone to deploy the bootstrap instance into, for example `us-east1-b`.
<2> `region` is the region to deploy the cluster into, for example `us-central1`.
<3> `zone` is the zone to deploy the bootstrap instance into, for example `us-central1-b`.
<4> `cluster_network` is the `selfLink` URL to the cluster network.
<5> `control_subnet` is the `selfLink` URL to the control subnet.
<6> `image` is the `selfLink` URL to the {op-system} image.
@@ -96,6 +122,7 @@ EOF
$ gcloud deployment-manager deployments create ${INFRA_ID}-bootstrap --config 04_bootstrap.yaml
----

ifndef::shared-vpc[]
. The templates do not manage load balancer membership due to limitations of Deployment
Manager, so you must add the bootstrap machine manually:
+
@@ -105,3 +132,22 @@ $ gcloud compute target-pools add-instances \
$ gcloud compute target-pools add-instances \
${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-bootstrap
----
endif::shared-vpc[]

ifdef::shared-vpc[]
. Add the bootstrap instance to the internal load balancer instance group:
+
----
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-bootstrap-instance-group --zone=${ZONE_0} --instances=${INFRA_ID}-bootstrap
----

. Add the bootstrap instance group to the internal load balancer backend service:
+
----
$ gcloud compute backend-services add-backend ${INFRA_ID}-api-internal-backend-service --region=${REGION} --instance-group=${INFRA_ID}-bootstrap-instance-group --instance-group-zone=${ZONE_0}
----
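
Optionally, you can confirm that the bootstrap instance is registered as a backend. This check is an addition to the procedure, not part of it, and the backend can report `UNHEALTHY` until the bootstrap API server starts responding:
+
----
$ gcloud compute backend-services get-health ${INFRA_ID}-api-internal-backend-service --region=${REGION}
----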
endif::shared-vpc[]

ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
:!shared-vpc:
endif::[]
59 changes: 57 additions & 2 deletions modules/installation-creating-gcp-control-plane.adoc
@@ -2,6 +2,11 @@
//
// * installing/installing_gcp/installing-gcp-user-infra.adoc
// * installing/installing_gcp/installing-restricted-networks-gcp.adoc
// * installing/installing_gcp/installing-gcp-user-infra-vpc.adoc

ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
:shared-vpc:
endif::[]

[id="installation-creating-gcp-control-plane_{context}"]
= Creating the control plane machines in GCP
@@ -68,8 +73,8 @@ resources:
EOF
----
<1> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
<2> `region` is the region to deploy the cluster into, for example `us-east1`.
<3> `zones` are the zones to deploy the bootstrap instance into, for example `us-east1-b`, `us-east1-c`, and `us-east1-d`.
<2> `region` is the region to deploy the cluster into, for example `us-central1`.
<3> `zones` are the zones to deploy the bootstrap instance into, for example `us-central1-a`, `us-central1-b`, and `us-central1-c`.
<4> `control_subnet` is the `selfLink` URL to the control subnet.
<5> `image` is the `selfLink` URL to the {op-system} image.
<6> `machine_type` is the machine type of the instance, for example `n1-standard-4`.
@@ -85,6 +90,7 @@
. The templates do not manage DNS entries due to limitations of Deployment
Manager, so you must add the etcd entries manually:
+
ifndef::shared-vpc[]
----
$ export MASTER0_IP=`gcloud compute instances describe ${INFRA_ID}-m-0 --zone ${ZONE_0} --format json | jq -r .networkInterfaces[0].networkIP`
$ export MASTER1_IP=`gcloud compute instances describe ${INFRA_ID}-m-1 --zone ${ZONE_1} --format json | jq -r .networkInterfaces[0].networkIP`
@@ -101,7 +107,27 @@ $ gcloud dns record-sets transaction add \
--name _etcd-server-ssl._tcp.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type SRV --zone ${INFRA_ID}-private-zone
$ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone
----
endif::shared-vpc[]
ifdef::shared-vpc[]
----
$ export MASTER0_IP=`gcloud compute instances describe ${INFRA_ID}-m-0 --zone ${ZONE_0} --format json | jq -r .networkInterfaces[0].networkIP`
$ export MASTER1_IP=`gcloud compute instances describe ${INFRA_ID}-m-1 --zone ${ZONE_1} --format json | jq -r .networkInterfaces[0].networkIP`
$ export MASTER2_IP=`gcloud compute instances describe ${INFRA_ID}-m-2 --zone ${ZONE_2} --format json | jq -r .networkInterfaces[0].networkIP`
if [ -f transaction.yaml ]; then rm transaction.yaml; fi
$ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
$ gcloud dns record-sets transaction add ${MASTER0_IP} --name etcd-0.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
$ gcloud dns record-sets transaction add ${MASTER1_IP} --name etcd-1.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
$ gcloud dns record-sets transaction add ${MASTER2_IP} --name etcd-2.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
$ gcloud dns record-sets transaction add \
"0 10 2380 etcd-0.${CLUSTER_NAME}.${BASE_DOMAIN}." \
"0 10 2380 etcd-1.${CLUSTER_NAME}.${BASE_DOMAIN}." \
"0 10 2380 etcd-2.${CLUSTER_NAME}.${BASE_DOMAIN}." \
--name _etcd-server-ssl._tcp.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type SRV --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
$ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
----
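
Optionally, verify that the records exist in the host project's private zone. This verification is an added suggestion and is not part of the original procedure:
+
----
$ gcloud dns record-sets list --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} | grep etcd
----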
endif::shared-vpc[]

ifndef::shared-vpc[]
. The templates do not manage load balancer membership due to limitations of Deployment
Manager, so you must add the control plane machines manually:
+
@@ -113,3 +139,32 @@ $ gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instan
$ gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_1}" --instances=${INFRA_ID}-m-1
$ gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_2}" --instances=${INFRA_ID}-m-2
----
endif::shared-vpc[]

ifdef::shared-vpc[]
. The templates do not manage load balancer membership due to limitations of Deployment
Manager, so you must add the control plane machines manually.
** For an internal cluster, use the following commands:
+
----
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_0}-instance-group --zone=${ZONE_0} --instances=${INFRA_ID}-m-0
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_1}-instance-group --zone=${ZONE_1} --instances=${INFRA_ID}-m-1
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_2}-instance-group --zone=${ZONE_2} --instances=${INFRA_ID}-m-2
----

** For an external cluster, use the following commands:
+
----
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_0}-instance-group --zone=${ZONE_0} --instances=${INFRA_ID}-m-0
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_1}-instance-group --zone=${ZONE_1} --instances=${INFRA_ID}-m-1
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_2}-instance-group --zone=${ZONE_2} --instances=${INFRA_ID}-m-2

$ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-m-0
$ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_1}" --instances=${INFRA_ID}-m-1
$ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_2}" --instances=${INFRA_ID}-m-2
----
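
Optionally, confirm the membership of each instance group. This check is an added suggestion, not part of the original procedure; repeat it for each zone:
+
----
$ gcloud compute instance-groups unmanaged list-instances ${INFRA_ID}-master-${ZONE_0}-instance-group --zone=${ZONE_0}
----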
endif::shared-vpc[]

ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
:!shared-vpc:
endif::[]
21 changes: 19 additions & 2 deletions modules/installation-creating-gcp-dns.adoc
@@ -3,6 +3,10 @@
// * installing/installing_gcp/installing-gcp-user-infra.adoc
// * installing/installing_gcp/installing-restricted-networks-gcp.adoc

ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
:shared-vpc:
endif::[]

[id="installation-creating-gcp-dns_{context}"]
= Creating networking and load balancing components in GCP

@@ -33,9 +37,18 @@ requires.

. Export the following variable required by the resource definition:
+
ifndef::shared-vpc[]
----
$ export CLUSTER_NETWORK=`gcloud compute networks describe ${INFRA_ID}-network --format json | jq -r .selfLink`
----
endif::shared-vpc[]
ifdef::shared-vpc[]
----
$ export CLUSTER_NETWORK=`gcloud compute networks describe ${INFRA_ID}-network --format json | jq -r .selfLink`
$ export CLUSTER_NETWORK=`gcloud compute networks describe ${HOST_PROJECT_NETWORK} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`
----
+
In this command, `${HOST_PROJECT_NETWORK}` is the name of the network that hosts the shared VPC.
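
Optionally, confirm that the variable resolved to the network's `selfLink` URL in the host project. The expected output format shown here is an assumption based on how GCP constructs `selfLink` URLs:
+
----
$ echo ${CLUSTER_NETWORK}
https://www.googleapis.com/compute/v1/projects/<host_project>/global/networks/<network_name>
----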
endif::shared-vpc[]

. Create a `02_infra.yaml` resource definition file:
+
@@ -56,7 +69,7 @@
EOF
----
<1> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
<2> `region` is the region to deploy the cluster into, for example `us-east1`.
<2> `region` is the region to deploy the cluster into, for example `us-central1`.
<3> `cluster_domain` is the domain for the cluster, for example `openshift.example.com`.
<4> `cluster_network` is the `selfLink` URL to the cluster network.

@@ -93,3 +106,7 @@ $ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api.${CLUSTER_NAME
$ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api-int.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone
$ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone
----

ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
:!shared-vpc:
endif::[]