diff --git a/_topic_map.yml b/_topic_map.yml index 31de06ee88d4..0cfb596e71dd 100644 --- a/_topic_map.yml +++ b/_topic_map.yml @@ -179,6 +179,8 @@ Topics: File: installing-gcp-private - Name: Installing a cluster on GCP using Deployment Manager templates File: installing-gcp-user-infra + - Name: Installing a cluster on GCP using Deployment Manager templates and a shared VPC + File: installing-gcp-user-infra-vpc - Name: Restricted network GCP installation File: installing-restricted-networks-gcp - Name: Uninstalling a cluster on GCP diff --git a/installing/installing_gcp/installing-gcp-user-infra-vpc.adoc b/installing/installing_gcp/installing-gcp-user-infra-vpc.adoc new file mode 100644 index 000000000000..1c2d2092b8bb --- /dev/null +++ b/installing/installing_gcp/installing-gcp-user-infra-vpc.adoc @@ -0,0 +1,137 @@ +[id="installing-gcp-user-infra-vpc"] += Installing a cluster with shared VPC on user-provisioned infrastructure in GCP by using Deployment Manager templates +include::modules/common-attributes.adoc[] +:context: installing-gcp-user-infra-vpc + +toc::[] + +In {product-title} version {product-version}, you can install a cluster into a shared Virtual Private Cloud (VPC) on +Google Cloud Platform (GCP) that uses infrastructure that you provide. + +The steps for performing a user-provided infrastructure installation into a shared VPC are outlined here. Several +link:https://cloud.google.com/deployment-manager/docs[Deployment Manager] templates are provided to assist in +completing these steps or to help model your own. You are also free to create +the required resources through other methods; the templates are just an +example. + +.Prerequisites + +* Review details about the +xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update] +processes. +* If you use a firewall and plan to use telemetry, you must +xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configure the firewall to allow the sites] that your cluster requires access to. ++ +[NOTE] +==== +Be sure to also review this site list if you are configuring a proxy. +==== + +[id="csr-management-gcp-vpc"] +== Certificate signing requests management + +Because your cluster has limited access to automatic machine management when you +use infrastructure that you provision, you must provide a mechanism for approving +cluster certificate signing requests (CSRs) after installation. The +`kube-controller-manager` only approves the kubelet client CSRs. The +`machine-approver` cannot guarantee the validity of a serving certificate +that is requested by using kubelet credentials because it cannot confirm that +the correct machine issued the request. You must determine and implement a +method of verifying the validity of the kubelet serving certificate requests +and approving them. + +[id="installation-gcp-user-infra-config-project-vpc"] +== Configuring the GCP project that hosts your cluster + +Before you can install {product-title}, you must configure a Google Cloud +Platform (GCP) project to host it. 
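+
+As a hedged illustration of the starting point that the following modules assume (this example is not part of the original change, and `<project_id>` is a placeholder), you select the project that hosts the cluster with the `gcloud` CLI before you run the commands in these modules:
+
+----
+$ gcloud projects list
+$ gcloud config set project <project_id>
+----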
+
+include::modules/installation-gcp-project.adoc[leveloffset=+2]
+include::modules/installation-gcp-enabling-api-services.adoc[leveloffset=+2]
+include::modules/installation-gcp-limits.adoc[leveloffset=+2]
+include::modules/installation-gcp-service-account.adoc[leveloffset=+2]
+include::modules/installation-gcp-permissions.adoc[leveloffset=+3]
+include::modules/installation-gcp-regions.adoc[leveloffset=+2]
+include::modules/installation-gcp-install-cli.adoc[leveloffset=+2]
+
+include::modules/installation-gcp-user-infra-config-host-project-vpc.adoc[leveloffset=+1]
+include::modules/installation-gcp-dns.adoc[leveloffset=+2]
+include::modules/installation-creating-gcp-vpc.adoc[leveloffset=+2]
+include::modules/installation-deployment-manager-vpc.adoc[leveloffset=+3]
+
+include::modules/installation-user-infra-generate.adoc[leveloffset=+1]
+
+include::modules/installation-initializing-manual.adoc[leveloffset=+2]
+
+include::modules/installation-gcp-user-infra-shared-vpc-config-yaml.adoc[leveloffset=+2]
+
+include::modules/installation-configure-proxy.adoc[leveloffset=+2]
+
+//include::modules/installation-three-node-cluster.adoc[leveloffset=+2]
+
+include::modules/installation-user-infra-generate-k8s-manifest-ignition.adoc[leveloffset=+2]
+
+[id="installation-gcp-user-infra-exporting-common-variables-vpc"]
+== Exporting common variables
+
+include::modules/installation-extracting-infraid.adoc[leveloffset=+2]
+include::modules/installation-user-infra-exporting-common-variables.adoc[leveloffset=+2]
+
+include::modules/installation-creating-gcp-lb.adoc[leveloffset=+1]
+include::modules/installation-deployment-manager-ext-lb.adoc[leveloffset=+2]
+include::modules/installation-deployment-manager-int-lb.adoc[leveloffset=+2]
+
+include::modules/installation-creating-gcp-private-dns.adoc[leveloffset=+1]
+include::modules/installation-deployment-manager-private-dns.adoc[leveloffset=+2]
+
+include::modules/installation-creating-gcp-firewall-rules-vpc.adoc[leveloffset=+1]
+include::modules/installation-deployment-manager-firewall-rules.adoc[leveloffset=+2]
+
+include::modules/installation-creating-gcp-iam-shared-vpc.adoc[leveloffset=+1]
+include::modules/installation-deployment-manager-iam-shared-vpc.adoc[leveloffset=+2]
+
+include::modules/installation-gcp-user-infra-rhcos.adoc[leveloffset=+1]
+
+include::modules/installation-creating-gcp-bootstrap.adoc[leveloffset=+1]
+include::modules/installation-deployment-manager-bootstrap.adoc[leveloffset=+2]
+
+include::modules/installation-creating-gcp-control-plane.adoc[leveloffset=+1]
+include::modules/installation-deployment-manager-control-plane.adoc[leveloffset=+2]
+
+include::modules/installation-gcp-user-infra-wait-for-bootstrap.adoc[leveloffset=+1]
+
+include::modules/installation-creating-gcp-worker.adoc[leveloffset=+1]
+include::modules/installation-deployment-manager-worker.adoc[leveloffset=+2]
+
+include::modules/cli-installing-cli.adoc[leveloffset=+1]
+
+include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1]
+
+include::modules/installation-approve-csrs.adoc[leveloffset=+1]
+
+include::modules/installation-gcp-user-infra-adding-ingress.adoc[leveloffset=+1]
+
+[id="installation-gcp-user-infra-vpc-adding-firewall-rules"]
+== Adding ingress firewall rules
+The cluster requires several firewall rules. If you do not use a shared VPC, these rules are created by the ingress controller via the GCP cloud provider.
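+
+As an illustrative aside, the per-service rules that the cloud provider creates in a non-shared VPC can be listed with a name filter. This example is a sketch; the `k8s-fw` prefix is inferred from the event example that follows, not confirmed by this change:
+
+----
+$ gcloud compute firewall-rules list --filter="name~k8s-fw"
+----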
+When you use a shared VPC, you can either create cluster-wide firewall rules for all services now or create each rule based on events as the cluster requests access. By creating each rule when the cluster requests access, you know exactly which firewall rules are required. By creating cluster-wide firewall rules, you can apply the same rule set across multiple clusters.
+
+If you choose to create each rule based on events, you must create firewall rules after you provision the cluster and during the life of the cluster when the console notifies you that rules are missing. Events that are similar to the following example are displayed, and you must add the firewall rules that they require:
+
+----
+Firewall change required by security admin: `gcloud compute firewall-rules create k8s-fw-a26e631036a3f46cba28f8df67266d55 --network example-network --description "{\"kubernetes.io/service-name\":\"openshift-ingress/router-default\", \"kubernetes.io/service-ip\":\"35.237.236.234\"}" --allow tcp:443,tcp:80 --source-ranges 0.0.0.0/0 --target-tags exampl-fqzq7-master,exampl-fqzq7-worker --project example-project`
+----
+
+If you encounter issues when creating firewall rules based on these events, you can configure the cluster-wide firewall rules while your cluster is running.
+
+include::modules/installation-creating-gcp-shared-vpc-cluster-wide-firewall-rules.adoc[leveloffset=+2]
+
+//include::modules/installation-creating-gcp-shared-vpc-ingress-firewall-rules.adoc[leveloffset=+1]
+
+include::modules/installation-gcp-user-infra-completing.adoc[leveloffset=+1]
+
+.Next steps
+
+* xref:../../installing/install_config/customizations.adoc#customizations[Customize your cluster].
+* If necessary, you can
+xref:../../support/remote_health_monitoring/opting-out-of-remote-health-reporting.adoc#opting-out-remote-health-reporting_opting-out-remote-health-reporting[opt out of remote health reporting].
diff --git a/installing/installing_gcp/installing-gcp-user-infra.adoc b/installing/installing_gcp/installing-gcp-user-infra.adoc
index df3c6525659e..82c1e09ac143 100644
--- a/installing/installing_gcp/installing-gcp-user-infra.adoc
+++ b/installing/installing_gcp/installing-gcp-user-infra.adoc
@@ -1,5 +1,5 @@
 [id="installing-gcp-user-infra"]
-= Installing a cluster on GCP using Deployment Manager templates
+= Installing a cluster on user-provisioned infrastructure in GCP by using Deployment Manager templates
 include::modules/common-attributes.adoc[]
 :context: installing-gcp-user-infra
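As a companion sketch to the event-driven firewall workflow that the assembly above describes (not part of the diff; the exact event wording can vary), the required-rule notifications can be surfaced from a running cluster with standard tooling:

----
$ oc get events --all-namespaces | grep -i "firewall change required"
----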
diff --git a/modules/installation-creating-gcp-bootstrap.adoc b/modules/installation-creating-gcp-bootstrap.adoc
index 6cc2c665ca7c..423f002804df 100644
--- a/modules/installation-creating-gcp-bootstrap.adoc
+++ b/modules/installation-creating-gcp-bootstrap.adoc
@@ -3,6 +3,10 @@
 // * installing/installing_gcp/installing-gcp-user-infra.adoc
 // * installing/installing_gcp/installing-restricted-networks-gcp.adoc

+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:shared-vpc:
+endif::[]
+
 [id="installation-creating-gcp-bootstrap_{context}"]
 = Creating the bootstrap machine in GCP

@@ -32,15 +36,37 @@ have to contact Red Hat support with your installation logs.
 section of this topic and save it as `04_bootstrap.py` on your computer. This
 template describes the bootstrap machine that your cluster requires.

-. Export the following variables required by the resource definition:
+. Export the variables that the deployment template uses:
+//You need these variables before you deploy the load balancers for the shared VPC case, so the export statements that are if'd out for shared-vpc are in the load balancer module.
+ifndef::shared-vpc[]
+.. Export the control plane subnet location:
++
 ----
 $ export CONTROL_SUBNET=`gcloud compute networks subnets describe ${INFRA_ID}-master-subnet --region=${REGION} --format json | jq -r .selfLink`
+----
+endif::shared-vpc[]
+
+.. Export the location of the {op-system-first} image that the installation program requires:
++
+----
 $ export CLUSTER_IMAGE=`gcloud compute images describe ${INFRA_ID}-rhcos-image --format json | jq -r .selfLink`
+----
+
+ifndef::shared-vpc[]
+.. Export each zone that the cluster uses:
++
+----
 $ export ZONE_0=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`
+----
++
+----
 $ export ZONE_1=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`
+----
++
+----
 $ export ZONE_2=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`
 ----
+endif::shared-vpc[]

 . Create a bucket and upload the `bootstrap.ign` file:
 +
@@ -82,8 +108,8 @@ resources:
 EOF
 ----
 <1> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
-<2> `region` is the region to deploy the cluster into, for example `us-east1`.
-<3> `zone` is the zone to deploy the bootstrap instance into, for example `us-east1-b`.
+<2> `region` is the region to deploy the cluster into, for example `us-central1`.
+<3> `zone` is the zone to deploy the bootstrap instance into, for example `us-central1-b`.
 <4> `cluster_network` is the `selfLink` URL to the cluster network.
 <5> `control_subnet` is the `selfLink` URL to the control subnet.
 <6> `image` is the `selfLink` URL to the {op-system} image.
@@ -96,6 +122,7 @@ EOF
 $ gcloud deployment-manager deployments create ${INFRA_ID}-bootstrap --config 04_bootstrap.yaml
 ----

+ifndef::shared-vpc[]
 . The templates do not manage load balancer membership due to limitations of Deployment
 Manager, so you must add the bootstrap machine manually:
 +
 ----
 $ gcloud compute target-pools add-instances \
  ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-bootstrap
@@ -105,3 +132,22 @@
 $ gcloud compute target-pools add-instances \
  ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-bootstrap
 ----
+endif::shared-vpc[]
+
+ifdef::shared-vpc[]
+. Add the bootstrap instance to the internal load balancer instance group:
++
+----
+$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-bootstrap-instance-group --zone=${ZONE_0} --instances=${INFRA_ID}-bootstrap
+----
+
+. Add the bootstrap instance group to the internal load balancer backend service:
++
+----
+$ gcloud compute backend-services add-backend ${INFRA_ID}-api-internal-backend-service --region=${REGION} --instance-group=${INFRA_ID}-bootstrap-instance-group --instance-group-zone=${ZONE_0}
+----
+endif::shared-vpc[]
+
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:!shared-vpc:
+endif::[]
diff --git a/modules/installation-creating-gcp-control-plane.adoc b/modules/installation-creating-gcp-control-plane.adoc
index ece42e3a47bb..d4da739a181c 100644
--- a/modules/installation-creating-gcp-control-plane.adoc
+++ b/modules/installation-creating-gcp-control-plane.adoc
@@ -2,6 +2,11 @@
 //
 // * installing/installing_gcp/installing-gcp-user-infra.adoc
 // * installing/installing_gcp/installing-restricted-networks-gcp.adoc
+// * installing/installing_gcp/installing-gcp-user-infra-vpc.adoc
+
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:shared-vpc:
+endif::[]

 [id="installation-creating-gcp-control-plane_{context}"]
 = Creating the control plane machines in GCP

@@ -68,8 +73,8 @@ resources:
 EOF
 ----
 <1> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
-<2> `region` is the region to deploy the cluster into, for example `us-east1`.
-<3> `zones` are the zones to deploy the bootstrap instance into, for example `us-east1-b`, `us-east1-c`, and `us-east1-d`.
+<2> `region` is the region to deploy the cluster into, for example `us-central1`.
+<3> `zones` are the zones to deploy the control plane instances into, for example `us-central1-a`, `us-central1-b`, and `us-central1-c`.
 <4> `control_subnet` is the `selfLink` URL to the control subnet.
 <5> `image` is the `selfLink` URL to the {op-system} image.
 <6> `machine_type` is the machine type of the instance, for example `n1-standard-4`.
@@ -85,6 +90,7 @@ $ gcloud deployment-manager deployments create ${INFRA_ID}-control-plane --confi
 . The templates do not manage DNS entries due to limitations of Deployment
 Manager, so you must add the etcd entries manually:
 +
+ifndef::shared-vpc[]
 ----
 $ export MASTER0_IP=`gcloud compute instances describe ${INFRA_ID}-m-0 --zone ${ZONE_0} --format json | jq -r .networkInterfaces[0].networkIP`
 $ export MASTER1_IP=`gcloud compute instances describe ${INFRA_ID}-m-1 --zone ${ZONE_1} --format json | jq -r .networkInterfaces[0].networkIP`
@@ -101,7 +107,27 @@ $ gcloud dns record-sets transaction add \
 --name _etcd-server-ssl._tcp.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type SRV --zone ${INFRA_ID}-private-zone
 $ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone
 ----
+endif::shared-vpc[]
+ifdef::shared-vpc[]
+----
+$ export MASTER0_IP=`gcloud compute instances describe ${INFRA_ID}-m-0 --zone ${ZONE_0} --format json | jq -r .networkInterfaces[0].networkIP`
+$ export MASTER1_IP=`gcloud compute instances describe ${INFRA_ID}-m-1 --zone ${ZONE_1} --format json | jq -r .networkInterfaces[0].networkIP`
+$ export MASTER2_IP=`gcloud compute instances describe ${INFRA_ID}-m-2 --zone ${ZONE_2} --format json | jq -r .networkInterfaces[0].networkIP`
+$ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
+$ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
+$ gcloud dns record-sets transaction add ${MASTER0_IP} --name etcd-0.${CLUSTER_NAME}.${BASE_DOMAIN}. 
--ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} +$ gcloud dns record-sets transaction add ${MASTER1_IP} --name etcd-1.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} +$ gcloud dns record-sets transaction add ${MASTER2_IP} --name etcd-2.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} +$ gcloud dns record-sets transaction add \ + "0 10 2380 etcd-0.${CLUSTER_NAME}.${BASE_DOMAIN}." \ + "0 10 2380 etcd-1.${CLUSTER_NAME}.${BASE_DOMAIN}." \ + "0 10 2380 etcd-2.${CLUSTER_NAME}.${BASE_DOMAIN}." \ + --name _etcd-server-ssl._tcp.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type SRV --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} +$ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} +---- +endif::shared-vpc[] +ifndef::shared-vpc[] . The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the control plane machines manually: + @@ -113,3 +139,32 @@ $ gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instan $ gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_1}" --instances=${INFRA_ID}-m-1 $ gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_2}" --instances=${INFRA_ID}-m-2 ---- +endif::shared-vpc[] + +ifdef::shared-vpc[] +. The templates do not manage load balancer membership due to limitations of Deployment +Manager, so you must add the control plane machines manually. 
+** For an internal cluster, use the following commands:
++
+----
+$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_0}-instance-group --zone=${ZONE_0} --instances=${INFRA_ID}-m-0
+$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_1}-instance-group --zone=${ZONE_1} --instances=${INFRA_ID}-m-1
+$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_2}-instance-group --zone=${ZONE_2} --instances=${INFRA_ID}-m-2
+----
+
+** For an external cluster, use the following commands:
++
+----
+$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_0}-instance-group --zone=${ZONE_0} --instances=${INFRA_ID}-m-0
+$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_1}-instance-group --zone=${ZONE_1} --instances=${INFRA_ID}-m-1
+$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_2}-instance-group --zone=${ZONE_2} --instances=${INFRA_ID}-m-2
+
+$ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-m-0
+$ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_1}" --instances=${INFRA_ID}-m-1
+$ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_2}" --instances=${INFRA_ID}-m-2
+----
+endif::shared-vpc[]
+
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:!shared-vpc:
+endif::[]
diff --git a/modules/installation-creating-gcp-dns.adoc b/modules/installation-creating-gcp-dns.adoc
index 28a30520ddc8..5faa02c49500 100644
--- a/modules/installation-creating-gcp-dns.adoc
+++ b/modules/installation-creating-gcp-dns.adoc
@@ -3,6 +3,10 @@
 // * installing/installing_gcp/installing-gcp-user-infra.adoc
 // * installing/installing_gcp/installing-restricted-networks-gcp.adoc

+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:shared-vpc:
+endif::[]
+
 [id="installation-creating-gcp-dns_{context}"]
 = Creating networking and load balancing components in GCP

@@ -33,9 +37,18 @@ requires.

 . Export the following variable required by the resource definition:
 +
+ifndef::shared-vpc[]
+----
+$ export CLUSTER_NETWORK=`gcloud compute networks describe ${INFRA_ID}-network --format json | jq -r .selfLink`
+----
+endif::shared-vpc[]
+ifdef::shared-vpc[]
 ----
-$ export CLUSTER_NETWORK=`gcloud compute networks describe ${INFRA_ID}-network --format json | jq -r .selfLink`
+$ export CLUSTER_NETWORK=`gcloud compute networks describe ${HOST_PROJECT_NETWORK} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`
 ----
++
+Where `${HOST_PROJECT_NETWORK}` is the name of the network that hosts the shared VPC.
+endif::shared-vpc[]

 . Create a `02_infra.yaml` resource definition file:
 +
@@ -56,7 +69,7 @@ resources:
 EOF
 ----
 <1> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
-<2> `region` is the region to deploy the cluster into, for example `us-east1`.
+<2> `region` is the region to deploy the cluster into, for example `us-central1`.
 <3> `cluster_domain` is the domain for the cluster, for example `openshift.example.com`.
 <4> `cluster_network` is the `selfLink` URL to the cluster network.
@@ -93,3 +106,7 @@ $ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api.${CLUSTER_NAME
 $ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api-int.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone
 $ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone
 ----
+
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:!shared-vpc:
+endif::[]
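An optional verification sketch for the DNS transactions above (not part of the module): list the records in the private zone and confirm that the `api` and `api-int` entries exist. The host-project flags apply to the shared VPC case only:

----
$ gcloud dns record-sets list --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
----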
diff --git a/modules/installation-creating-gcp-firewall-rules-vpc.adoc b/modules/installation-creating-gcp-firewall-rules-vpc.adoc
new file mode 100644
index 000000000000..057fb9c15b18
--- /dev/null
+++ b/modules/installation-creating-gcp-firewall-rules-vpc.adoc
@@ -0,0 +1,59 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp/installing-gcp-user-infra-vpc.adoc
+
+[id="installation-creating-gcp-firewall-rules-vpc_{context}"]
+= Creating firewall rules in GCP
+
+You must create firewall rules in Google Cloud Platform (GCP) for your
+{product-title} cluster to use. One way to create these components is
+to modify the provided Deployment Manager template.
+
+[NOTE]
+====
+If you do not use the provided Deployment Manager template to create your GCP
+infrastructure, you must review the provided information and manually create
+the infrastructure. If your cluster does not initialize correctly, you might
+have to contact Red Hat support with your installation logs.
+====
+
+.Prerequisites
+
+* Configure a GCP account.
+* Generate the Ignition config files for your cluster.
+* Create and configure a VPC and associated subnets in GCP.
+
+.Procedure
+
+. Copy the template from the
+*Deployment Manager template for firewall rules*
+section of this topic and save it as `03_firewall.py` on your computer. This
+template describes the firewall rules that your cluster requires.
+
+. Create a `03_firewall.yaml` resource definition file:
++
+----
+$ cat <<EOF >03_firewall.yaml
+imports:
+- path: 03_firewall.py
+
+resources:
+- name: cluster-firewall
+  type: 03_firewall.py
+  properties:
+    allowed_external_cidr: '0.0.0.0/0' <1>
+    infra_id: '${INFRA_ID}' <2>
+    cluster_network: '${CLUSTER_NETWORK}' <3>
+    network_cidr: '${NETWORK_CIDR}' <4>
+EOF
+----
+<1> `allowed_external_cidr` is the CIDR range that can access the cluster API and SSH to the bootstrap host. For an internal cluster, set this value to `${NETWORK_CIDR}`.
+<2> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
+<3> `cluster_network` is the `selfLink` URL to the cluster network.
+<4> `network_cidr` is the CIDR of the VPC network, for example `10.0.0.0/16`.
+
+. Create the deployment by using the `gcloud` CLI:
++
+----
+$ gcloud deployment-manager deployments create ${INFRA_ID}-firewall --config 03_firewall.yaml --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
+----
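Before continuing, an optional check (a sketch, not part of the new module) confirms that the firewall deployment converged without errors:

----
$ gcloud deployment-manager deployments describe ${INFRA_ID}-firewall --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
----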
diff --git a/modules/installation-creating-gcp-iam-shared-vpc.adoc b/modules/installation-creating-gcp-iam-shared-vpc.adoc
new file mode 100644
index 000000000000..7c79102139d7
--- /dev/null
+++ b/modules/installation-creating-gcp-iam-shared-vpc.adoc
@@ -0,0 +1,116 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp/installing-gcp-user-infra-vpc.adoc
+
+[id="installation-creating-gcp-iam-shared-vpc_{context}"]
+= Creating IAM roles in GCP
+
+You must create IAM roles in Google Cloud Platform (GCP) for your
+{product-title} cluster to use. One way to create these components is
+to modify the provided Deployment Manager template.
+
+[NOTE]
+====
+If you do not use the provided Deployment Manager template to create your GCP
+infrastructure, you must review the provided information and manually create
+the infrastructure. If your cluster does not initialize correctly, you might
+have to contact Red Hat support with your installation logs.
+====
+
+.Prerequisites
+
+* Configure a GCP account.
+* Generate the Ignition config files for your cluster.
+* Create and configure a VPC and associated subnets in GCP.
+
+.Procedure
+
+. Copy the template from the
+*Deployment Manager template for IAM roles*
+section of this topic and save it as `03_iam.py` on your computer. This
+template describes the IAM roles that your cluster requires.
+
+. Create a `03_iam.yaml` resource definition file:
++
+----
+$ cat <<EOF >03_iam.yaml
+imports:
+- path: 03_iam.py
+resources:
+- name: cluster-iam
+  type: 03_iam.py
+  properties:
+    infra_id: '${INFRA_ID}' <1>
+EOF
+----
+<1> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
+
+. Create the deployment by using the `gcloud` CLI:
++
+----
+$ gcloud deployment-manager deployments create ${INFRA_ID}-iam --config 03_iam.yaml
+----
+
+. Export the variable for the master service account:
++
+----
+$ export MASTER_SA=`gcloud iam service-accounts list | grep "^${INFRA_ID}-master-node " | awk '{print $2}'`
+----
+
+. Export the variable for the worker service account:
++
+----
+$ export WORKER_SA=`gcloud iam service-accounts list | grep "^${INFRA_ID}-worker-node " | awk '{print $2}'`
+----
+
+. Assign the permissions that the installation program requires to the service accounts for the subnets that host the control plane and compute machines:
+
+.. Grant the `networkViewer` role of the project that hosts your shared VPC to the master service account:
++
+----
+$ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} projects add-iam-policy-binding ${HOST_PROJECT} --member "serviceAccount:${MASTER_SA}" --role "roles/compute.networkViewer"
+----
+
+.. Grant the `networkUser` role to the master service account for the control plane subnet:
++
+----
+$ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} compute networks subnets add-iam-policy-binding "${HOST_PROJECT_CONTROL_SUBNET}" --member "serviceAccount:${MASTER_SA}" --role "roles/compute.networkUser" --region ${REGION}
+----
+
+.. Grant the `networkUser` role to the worker service account for the control plane subnet:
++
+----
+$ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} compute networks subnets add-iam-policy-binding "${HOST_PROJECT_CONTROL_SUBNET}" --member "serviceAccount:${WORKER_SA}" --role "roles/compute.networkUser" --region ${REGION}
+----
+
+.. Grant the `networkUser` role to the master service account for the compute subnet:
++
+----
+$ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} compute networks subnets add-iam-policy-binding "${HOST_PROJECT_COMPUTE_SUBNET}" --member "serviceAccount:${MASTER_SA}" --role "roles/compute.networkUser" --region ${REGION}
+----
+
+.. Grant the `networkUser` role to the worker service account for the compute subnet:
++
+----
+$ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} compute networks subnets add-iam-policy-binding "${HOST_PROJECT_COMPUTE_SUBNET}" --member "serviceAccount:${WORKER_SA}" --role "roles/compute.networkUser" --region ${REGION}
+----
+
+. 
The templates do not create the policy bindings due to limitations of Deployment +Manager, so you must create them manually: ++ +---- +$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/compute.instanceAdmin" +$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/compute.networkAdmin" +$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/compute.securityAdmin" +$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/iam.serviceAccountUser" +$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/storage.admin" + +$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SA}" --role "roles/compute.viewer" +$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SA}" --role "roles/storage.admin" +---- + +. Create a service account key and store it locally for later use: ++ +---- +$ gcloud iam service-accounts keys create service-account-key.json --iam-account=${MASTER_SA} +---- diff --git a/modules/installation-creating-gcp-lb.adoc b/modules/installation-creating-gcp-lb.adoc new file mode 100644 index 000000000000..61de9230bb01 --- /dev/null +++ b/modules/installation-creating-gcp-lb.adoc @@ -0,0 +1,140 @@ +// Module included in the following assemblies: +// +// * installing/installing_gcp/installing-gcp-user-infra-vpc.adoc + +[id="installation-creating-gcp-lb_{context}"] += Creating load balancers in GCP + +You must configure load balancers in Google Cloud Platform (GCP) for your +{product-title} cluster to use. One way to create these components is +to modify the provided Deployment Manager template. + +[NOTE] +==== +If you do not use the provided Deployment Manager template to create your GCP +infrastructure, you must review the provided information and manually create +the infrastructure. If your cluster does not initialize correctly, you might +have to contact Red Hat support with your installation logs. +==== + +.Prerequisites + +* Configure a GCP account. +* Generate the Ignition config files for your cluster. +* Create and configure a VPC and associated subnets in GCP. + +.Procedure + +. Copy the template from the *Deployment Manager template for the internal load balancer* +section of this topic and save it as `02_lb_int.py` on your computer. This +template describes the internal load balancing objects that your cluster +requires. + +. For an external cluster, also copy the template from the *Deployment Manager template for the external load balancer* +section of this topic and save it as `02_lb_ext.py` on your computer. This +template describes the external load balancing objects that your cluster +requires. + +. Export the variables that the deployment template uses: + +.. Export the cluster network location: ++ +---- +$ export CLUSTER_NETWORK=`gcloud compute networks describe ${HOST_PROJECT_NETWORK} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink` +---- + +.. Export the control plane subnet location: ++ +---- +$ export CONTROL_SUBNET=`gcloud compute networks subnets describe ${HOST_PROJECT_CONTROL_SUBNET} --region=${REGION} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink` +---- + +.. 
Export each zone that the cluster uses:
++
+----
+$ export ZONE_0=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`
+----
++
+----
+$ export ZONE_1=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`
+----
++
+----
+$ export ZONE_2=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`
+----
+
+. Create a `02_lb.yaml` resource definition file:
+** For an internal cluster, run the following command:
++
+----
+$ cat <<EOF >02_lb.yaml
+imports:
+- path: 02_lb_int.py
+
+resources:
+- name: cluster-lb-int
+  type: 02_lb_int.py
+  properties:
+    cluster_network: '${CLUSTER_NETWORK}' <1>
+    control_subnet: '${CONTROL_SUBNET}' <2>
+    infra_id: '${INFRA_ID}' <3>
+    region: '${REGION}' <4>
+    zones:
+    - '${ZONE_0}'
+    - '${ZONE_1}'
+    - '${ZONE_2}'
+EOF
+----
+<1> `cluster_network` is the `selfLink` URL to the cluster network.
+<2> `control_subnet` is the `selfLink` URL to the control subnet.
+<3> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
+<4> `region` is the region to deploy the cluster into, for example `us-central1`.
+
+** For an external cluster, run the following command:
++
+----
+$ cat <<EOF >02_lb.yaml
+imports:
+- path: 02_lb_ext.py
+- path: 02_lb_int.py
+resources:
+- name: cluster-lb-ext
+  type: 02_lb_ext.py
+  properties:
+    infra_id: '${INFRA_ID}' <1>
+    region: '${REGION}' <2>
+- name: cluster-lb-int
+  type: 02_lb_int.py
+  properties:
+    cluster_network: '${CLUSTER_NETWORK}' <3>
+    control_subnet: '${CONTROL_SUBNET}' <4>
+    infra_id: '${INFRA_ID}' <1>
+    region: '${REGION}' <2>
+    zones:
+    - '${ZONE_0}'
+    - '${ZONE_1}'
+    - '${ZONE_2}'
+EOF
+----
+<1> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
+<2> `region` is the region to deploy the cluster into, for example `us-central1`.
+<3> `cluster_network` is the `selfLink` URL to the cluster network.
+<4> `control_subnet` is the `selfLink` URL to the control subnet.
+
+. Create the deployment by using the `gcloud` CLI:
++
+----
+$ gcloud deployment-manager deployments create ${INFRA_ID}-lb --config 02_lb.yaml
+----
+
+. Export the cluster IP address:
++
+----
+$ export CLUSTER_IP=$(gcloud compute addresses describe ${INFRA_ID}-cluster-ip --region=${REGION} --format json | jq -r .address)
+----
+
+. For an external cluster, also export the cluster public IP address:
++
+----
+$ export CLUSTER_PUBLIC_IP=$(gcloud compute addresses describe ${INFRA_ID}-cluster-public-ip --region=${REGION} --format json | jq -r .address)
+----
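As an optional verification sketch (not part of the new module), the cluster IP addresses that the load balancer deployment reserves can be listed before you create DNS records:

----
$ gcloud compute addresses list --filter="name~${INFRA_ID}" --regions=${REGION}
----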
diff --git a/modules/installation-creating-gcp-private-dns.adoc b/modules/installation-creating-gcp-private-dns.adoc
new file mode 100644
index 000000000000..6f401a744ae5
--- /dev/null
+++ b/modules/installation-creating-gcp-private-dns.adoc
@@ -0,0 +1,79 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp/installing-gcp-user-infra-vpc.adoc
+
+[id="installation-creating-gcp-private-dns_{context}"]
+= Creating a private DNS zone in GCP
+
+You must configure a private DNS zone in Google Cloud Platform (GCP) for your
+{product-title} cluster to use. One way to create this component is
+to modify the provided Deployment Manager template.
+
+[NOTE]
+====
+If you do not use the provided Deployment Manager template to create your GCP
+infrastructure, you must review the provided information and manually create
+the infrastructure. If your cluster does not initialize correctly, you might
+have to contact Red Hat support with your installation logs.
+====
+
+.Prerequisites
+
+* Configure a GCP account.
+* Generate the Ignition config files for your cluster.
+* Create and configure a VPC and associated subnets in GCP.
+
+.Procedure
+
+. Copy the template from the *Deployment Manager template for the private DNS*
+section of this topic and save it as `02_dns.py` on your computer. This
+template describes the private DNS objects that your cluster
+requires.
+
+. Create a `02_dns.yaml` resource definition file:
++
+----
+$ cat <<EOF >02_dns.yaml
+imports:
+- path: 02_dns.py
+
+resources:
+- name: cluster-dns
+  type: 02_dns.py
+  properties:
+    infra_id: '${INFRA_ID}' <1>
+    cluster_domain: '${CLUSTER_NAME}.${BASE_DOMAIN}' <2>
+    cluster_network: '${CLUSTER_NETWORK}' <3>
+EOF
+----
+<1> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
+<2> `cluster_domain` is the domain for the cluster, for example `openshift.example.com`.
+<3> `cluster_network` is the `selfLink` URL to the cluster network.
+
+. Create the deployment by using the `gcloud` CLI:
++
+----
+$ gcloud deployment-manager deployments create ${INFRA_ID}-dns --config 02_dns.yaml --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
+----
+
+. The templates do not create DNS entries due to limitations of Deployment
+Manager, so you must create them manually:
+
+.. Add the internal DNS entries:
++
+----
+$ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
+$ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
+$ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
+$ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api-int.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
+$ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
+----
+
+.. For an external cluster, also add the external DNS entries:
++
+----
+$ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
+$ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME}
+$ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} dns record-sets transaction add ${CLUSTER_PUBLIC_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${BASE_DOMAIN_ZONE_NAME}
+$ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}
+----
diff --git a/modules/installation-creating-gcp-security.adoc b/modules/installation-creating-gcp-security.adoc
index 88669677854b..756bf8e1aef2 100644
--- a/modules/installation-creating-gcp-security.adoc
+++ b/modules/installation-creating-gcp-security.adoc
@@ -48,21 +48,22 @@ resources:
 - name: cluster-security
   type: 03_security.py
   properties:
-    infra_id: '${INFRA_ID}' <1>
-    region: '${REGION}' <2>
-
-    cluster_network: '${CLUSTER_NETWORK}' <3>
-    network_cidr: '${NETWORK_CIDR}' <4>
-    master_nat_ip: '${MASTER_NAT_IP}' <5>
-    worker_nat_ip: '${WORKER_NAT_IP}' <6>
+    allowed_external_cidr: '0.0.0.0/0' <1>
+    infra_id: '${INFRA_ID}' <2>
+    region: '${REGION}' <3>
+    cluster_network: '${CLUSTER_NETWORK}' <4>
+    network_cidr: '${NETWORK_CIDR}' <5>
+    master_nat_ip: '${MASTER_NAT_IP}' <6>
+    worker_nat_ip: '${WORKER_NAT_IP}' <7>
 EOF
 ----
-<1> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
-<2> `region` is the region to deploy the cluster into, for example `us-east1`.
-<3> `cluster_network` is the `selfLink` URL to the cluster network.
-<4> `network_cidr` is the CIDR of the VPC network, for example `10.0.0.0/16`.
-<5> `master_nat_ip` is the IP address of the master NAT, for example `34.94.100.1`.
-<6> `worker_nat_ip` is the IP address of the worker NAT, for example `34.94.200.1`.
+<1> `allowed_external_cidr` is the CIDR range that can access the cluster API and SSH to the bootstrap host. For an internal cluster, set this value to `${NETWORK_CIDR}`.
+<2> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
+<3> `region` is the region to deploy the cluster into, for example `us-central1`.
+<4> `cluster_network` is the `selfLink` URL to the cluster network.
+<5> `network_cidr` is the CIDR of the VPC network, for example `10.0.0.0/16`.
+<6> `master_nat_ip` is the IP address of the master NAT, for example `34.94.100.1`.
+<7> `worker_nat_ip` is the IP address of the worker NAT, for example `34.94.200.1`.

 . Create the deployment by using the `gcloud` CLI:
 +
@@ -70,18 +71,28 @@ EOF
 $ gcloud deployment-manager deployments create ${INFRA_ID}-security --config 03_security.yaml
 ----

+. Export the variable for the master service account:
++
+----
+$ export MASTER_SA=${INFRA_ID}-m@${PROJECT_NAME}.iam.gserviceaccount.com
+----
+
+. Export the variable for the worker service account:
++
+----
+$ export WORKER_SA=${INFRA_ID}-w@${PROJECT_NAME}.iam.gserviceaccount.com
+----
+
 . The templates do not create the policy bindings due to limitations of Deployment
 Manager, so you must create them manually:
 +
 ----
-$ export MASTER_SA=${INFRA_ID}-m@${PROJECT_NAME}.iam.gserviceaccount.com
 $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/compute.instanceAdmin"
 $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/compute.networkAdmin"
 $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/compute.securityAdmin"
 $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/iam.serviceAccountUser"
 $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/storage.admin"

-$ export WORKER_SA=${INFRA_ID}-w@${PROJECT_NAME}.iam.gserviceaccount.com
 $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SA}" --role "roles/compute.viewer"
 $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SA}" --role "roles/storage.admin"
 ----
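A hedged verification sketch for the bindings created above (not part of the module): filter the project IAM policy for the master service account and confirm that each role is present:

----
$ gcloud projects get-iam-policy ${PROJECT_NAME} --flatten="bindings[].members" --filter="bindings.members:serviceAccount:${MASTER_SA}" --format="table(bindings.role)"
----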
diff --git a/modules/installation-creating-gcp-shared-vpc-cluster-wide-firewall-rules.adoc b/modules/installation-creating-gcp-shared-vpc-cluster-wide-firewall-rules.adoc
new file mode 100644
index 000000000000..123ad9a051b5
--- /dev/null
+++ b/modules/installation-creating-gcp-shared-vpc-cluster-wide-firewall-rules.adoc
@@ -0,0 +1,43 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp/installing-gcp-user-infra-vpc.adoc
+
+[id="installation-creating-gcp-shared-vpc-cluster-wide-firewall-rules_{context}"]
+= Creating cluster-wide firewall rules for a shared VPC in GCP
+
+You can create cluster-wide firewall rules to allow the access that the {product-title} cluster requires.
+
+[WARNING]
+====
+If you do not create firewall rules based on cluster events, you must create cluster-wide firewall rules.
+====
+
+.Prerequisites
+
+* You exported the variables that the Deployment Manager templates require to deploy your cluster.
+* You created the networking and load balancing components in GCP that your cluster requires.
+
+.Procedure
+
+. Add a single firewall rule to allow the Google Compute Engine health checks to access all of the services. This rule enables the ingress load balancers to determine the health status of their instances.
++
+----
+$ gcloud compute firewall-rules create --allow='tcp:30000-32767,udp:30000-32767' --network="${CLUSTER_NETWORK}" --source-ranges='130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22' --target-tags="${INFRA_ID}-master,${INFRA_ID}-worker" ${INFRA_ID}-ingress-hc --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT}
+----
+
+. Add a single firewall rule to allow access to all cluster services:
++
+--
+** For an external cluster:
++
+----
+$ gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network="${CLUSTER_NETWORK}" --source-ranges="0.0.0.0/0" --target-tags="${INFRA_ID}-master,${INFRA_ID}-worker" ${INFRA_ID}-ingress --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT}
+----
+** For a private cluster:
++
+----
+$ gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network="${CLUSTER_NETWORK}" --source-ranges=${NETWORK_CIDR} --target-tags="${INFRA_ID}-master,${INFRA_ID}-worker" ${INFRA_ID}-ingress --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT}
+----
+--
++
+Because this rule only allows traffic on TCP ports `80` and `443`, ensure that you add rules for any other ports that your services use.
diff --git a/modules/installation-creating-gcp-shared-vpc-ingress-firewall-rules.adoc b/modules/installation-creating-gcp-shared-vpc-ingress-firewall-rules.adoc
new file mode 100644
index 000000000000..3861e982e9bb
--- /dev/null
+++ b/modules/installation-creating-gcp-shared-vpc-ingress-firewall-rules.adoc
@@ -0,0 +1,32 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp/installing-gcp-user-infra-vpc.adoc
+
+[id="installation-creating-gcp-shared-vpc-ingress-firewall-rules_{context}"]
+= Creating ingress firewall rules for a shared VPC in GCP
+
+You must create ingress firewall rules to allow the access that the {product-title} cluster requires.
+
+.Prerequisites
+
+* You exported the variables that the Deployment Manager templates require to deploy your cluster.
+* You created the networking and load balancing components in GCP that your cluster requires.
+
+.Procedure
+
+* Add ingress firewall rules:
+** For an external cluster:
++
+----
+$ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} compute firewall-rules create --allow='tcp:30000-32767,udp:30000-32767' --network="${CLUSTER_NETWORK}" --source-ranges='130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22' --target-tags="${INFRA_ID}-master,${INFRA_ID}-worker" ${INFRA_ID}-ingress-hc

+$ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} compute firewall-rules create --allow='tcp:80,tcp:443' --network="${CLUSTER_NETWORK}" --source-ranges="0.0.0.0/0" --target-tags="${INFRA_ID}-master,${INFRA_ID}-worker" ${INFRA_ID}-ingress
+----

+** For an internal cluster:
++
+----
+$ gcloud compute firewall-rules create --allow='tcp:30000-32767,udp:30000-32767' --network="${CLUSTER_NETWORK}" --source-ranges='130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22' --target-tags="${INFRA_ID}-master,${INFRA_ID}-worker" ${INFRA_ID}-ingress-hc

+$ gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network="${CLUSTER_NETWORK}" --source-ranges="${NETWORK_CIDR}" --target-tags="${INFRA_ID}-master,${INFRA_ID}-worker" ${INFRA_ID}-ingress
+----
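As a final optional check for the rules that this module creates (a sketch, not part of the diff), list the cluster's rules in the host project by infrastructure ID:

----
$ gcloud compute firewall-rules list --filter="name~${INFRA_ID}" --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
----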
diff --git a/modules/installation-creating-gcp-vpc.adoc b/modules/installation-creating-gcp-vpc.adoc
index c0911452ab02..1475a55c7b78 100644
--- a/modules/installation-creating-gcp-vpc.adoc
+++ b/modules/installation-creating-gcp-vpc.adoc
@@ -2,6 +2,11 @@
 //
 // * installing/installing_gcp/installing-gcp-user-infra.adoc
 // * installing/installing_gcp/installing-restricted-networks-gcp.adoc
+// * installing/installing_gcp/installing-gcp-user-infra-vpc.adoc
+
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:shared-vpc:
+endif::[]

 [id="installation-creating-gcp-vpc_{context}"]
 = Creating a VPC in GCP

@@ -21,7 +26,9 @@ have to contact Red Hat support with your installation logs.
 .Prerequisites

 * Configure a GCP account.
+ifndef::shared-vpc[]
 * Generate the Ignition config files for your cluster.
+endif::shared-vpc[]

 .Procedure

@@ -29,6 +36,40 @@ have to contact Red Hat support with your installation logs.
 section of this topic and save it as `01_vpc.py` on your computer. This
 template describes the VPC that your cluster requires.

+ifdef::shared-vpc[]
+. Export the following variables required by the resource definition:
+
+.. Export the control plane CIDR:
++
+----
+$ export MASTER_SUBNET_CIDR='10.0.0.0/19'
+----
+
+.. Export the compute CIDR:
++
+----
+$ export WORKER_SUBNET_CIDR='10.0.32.0/19'
+----
+
+.. Export the region to deploy the VPC network and cluster into:
++
+----
+$ export REGION='<region>'
+----
+
+. Export the variable for the ID of the project that hosts the shared VPC:
++
+----
+$ export HOST_PROJECT=<host_project_id>
+----
+
+. Export the variable for the email of the service account that belongs to the host project:
++
+----
+$ export HOST_PROJECT_ACCOUNT=<host_project_account_email>
+----
+endif::shared-vpc[]
+
 . Create a `01_vpc.yaml` resource definition file:
 +
 ----
@@ -40,19 +81,58 @@ resources:
 - name: cluster-vpc
   type: 01_vpc.py
   properties:
+ifndef::shared-vpc[]
     infra_id: '${INFRA_ID}' <1>
+endif::shared-vpc[]
+ifdef::shared-vpc[]
+    infra_id: '<prefix>' <1>
+endif::shared-vpc[]
     region: '${REGION}' <2>
     master_subnet_cidr: '${MASTER_SUBNET_CIDR}' <3>
     worker_subnet_cidr: '${WORKER_SUBNET_CIDR}' <4>
 EOF
 ----
+ifndef::shared-vpc[]
 <1> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
-<2> `region` is the region to deploy the cluster into, for example `us-east1`.
+endif::shared-vpc[]
+ifdef::shared-vpc[]
+<1> `infra_id` is the prefix of the network name.
+endif::shared-vpc[]
+<2> `region` is the region to deploy the cluster into, for example `us-central1`.
 <3> `master_subnet_cidr` is the CIDR for the master subnet, for example `10.0.0.0/19`.
 <4> `worker_subnet_cidr` is the CIDR for the worker subnet, for example `10.0.32.0/19`.

 . Create the deployment by using the `gcloud` CLI:
 +
+ifndef::shared-vpc[]
 ----
 $ gcloud deployment-manager deployments create ${INFRA_ID}-vpc --config 01_vpc.yaml
 ----
+endif::shared-vpc[]
+ifdef::shared-vpc[]
+----
+$ gcloud deployment-manager deployments create <vpc_deployment_name> --config 01_vpc.yaml --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} <1>
+----
+<1> For `<vpc_deployment_name>`, specify the name of the VPC to deploy.
+
+. Export the VPC variables that other components require:
+.. Export the name of the host project network:
++
+----
+$ export HOST_PROJECT_NETWORK=<vpc_network>
+----
+.. Export the name of the host project control plane subnet:
++
+----
+$ export HOST_PROJECT_CONTROL_SUBNET=<control_plane_subnet>
+----
+.. Export the name of the host project compute subnet:
++
+----
+$ export HOST_PROJECT_COMPUTE_SUBNET=<compute_subnet>
+----
+endif::shared-vpc[]
+
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:!shared-vpc:
+endif::[]
diff --git a/modules/installation-creating-gcp-worker.adoc b/modules/installation-creating-gcp-worker.adoc
index 3c359c8edcd6..ac8c39065554 100644
--- a/modules/installation-creating-gcp-worker.adoc
+++ b/modules/installation-creating-gcp-worker.adoc
@@ -3,6 +3,10 @@
 // * installing/installing_gcp/installing-gcp-user-infra.adoc
 // * installing/installing_gcp/installing-restricted-networks-gcp.adoc

+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:shared-vpc:
+endif::[]
+
 [id="installation-creating-gcp-worker_{context}"]
 = Creating additional worker machines in GCP

@@ -39,11 +43,29 @@ have to contact Red Hat support with your installation logs.
 section of this topic and save it as `06_worker.py` on your computer. This
 template describes the worker machines that your cluster requires.

-. Export the following variables needed by the resource definition:
+. Export the variables that the resource definition uses:
+.. Export the subnet that hosts the compute machines:
 +
+ifndef::shared-vpc[]
 ----
 $ export COMPUTE_SUBNET=`gcloud compute networks subnets describe ${INFRA_ID}-worker-subnet --region=${REGION} --format json | jq -r .selfLink`
+----
+endif::shared-vpc[]
+ifdef::shared-vpc[]
+----
+$ export COMPUTE_SUBNET=$(gcloud compute networks subnets describe ${HOST_PROJECT_COMPUTE_SUBNET} --region=${REGION} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink)
+----
+endif::shared-vpc[]
+
+.. Export the email address for your service account:
++
+----
 $ export WORKER_SERVICE_ACCOUNT_EMAIL=`gcloud iam service-accounts list | grep "^${INFRA_ID}-worker-node " | awk '{print $2}'`
+----
+
+.. Export the location of the compute machine Ignition config file:
++
+----
 $ export WORKER_IGNITION=`cat worker.ign`
 ----

@@ -73,8 +95,8 @@ EOF
 ----
 <1> `name` is the name of the worker machine, for example `w-a-0`.
 <2> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
-<3> `region` is the region to deploy the cluster into, for example `us-east1`.
-<4> `zone` is the zone to deploy the worker machine into, for example `us-east1-b`.
+<3> `region` is the region to deploy the cluster into, for example `us-central1`.
+<4> `zone` is the zone to deploy the worker machine into, for example `us-central1-a`.
 <5> `compute_subnet` is the `selfLink` URL to the compute subnet.
 <6> `image` is the `selfLink` URL to the {op-system} image.
 <7> `machine_type` is the machine type of the instance, for example `n1-standard-4`.
@@ -90,3 +112,7 @@ file.
---- $ gcloud deployment-manager deployments create ${INFRA_ID}-worker --config 06_worker.yaml ---- + +ifeval::["{context}" == "installing-gcp-user-infra-vpc"] +:!shared-vpc: +endif::[] diff --git a/modules/installation-deployment-manager-bootstrap.adoc b/modules/installation-deployment-manager-bootstrap.adoc index c2b0e54b3b96..97818debca89 100644 --- a/modules/installation-deployment-manager-bootstrap.adoc +++ b/modules/installation-deployment-manager-bootstrap.adoc @@ -20,18 +20,6 @@ def GenerateConfig(context): 'properties': { 'region': context.properties['region'] } - }, { - 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', - 'type': 'compute.v1.firewall', - 'properties': { - 'network': context.properties['cluster_network'], - 'allowed': [{ - 'IPProtocol': 'tcp', - 'ports': ['22'] - }], - 'sourceRanges': ['0.0.0.0/0'], - 'targetTags': [context.properties['infra_id'] + '-bootstrap'] - } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', @@ -65,6 +53,22 @@ def GenerateConfig(context): }, 'zone': context.properties['zone'] } + }, { + 'name': context.properties['infra_id'] + '-bootstrap-instance-group', + 'type': 'compute.v1.instanceGroup', + 'properties': { + 'namedPorts': [ + { + 'name': 'ignition', + 'port': 22623 + }, { + 'name': 'https', + 'port': 6443 + } + ], + 'network': context.properties['cluster_network'], + 'zone': context.properties['zone'] + } }] return {'resources': resources} diff --git a/modules/installation-deployment-manager-dns.adoc b/modules/installation-deployment-manager-dns.adoc index 006eb63c283d..78e333aa7ab4 100644 --- a/modules/installation-deployment-manager-dns.adoc +++ b/modules/installation-deployment-manager-dns.adoc @@ -82,6 +82,5 @@ def GenerateConfig(context): } } }] - return {'resources': resources} ---- diff --git a/modules/installation-deployment-manager-ext-lb.adoc b/modules/installation-deployment-manager-ext-lb.adoc new file mode 100644 index 000000000000..a5376884cb7c --- /dev/null +++ b/modules/installation-deployment-manager-ext-lb.adoc @@ -0,0 +1,48 @@ +// Module included in the following assemblies: +// +// * installing/installing_gcp/installing-gcp-user-infra-vpc.adoc + +[id="installation-deployment-manager-ext-lb_{context}"] += Deployment Manager template for the external load balancer + +You can use the following Deployment Manager template to deploy the external load balancer that you need for your {product-title} cluster: + +.`02_lb_ext.py` Deployment Manager template +[source,python] +---- +def GenerateConfig(context): + + resources = [{ + 'name': context.properties['infra_id'] + '-cluster-public-ip', + 'type': 'compute.v1.address', + 'properties': { + 'region': context.properties['region'] + } + }, { + 'name': context.properties['infra_id'] + '-api-http-health-check', + 'type': 'compute.v1.httpHealthCheck', + 'properties': { + 'port': 6080, + 'requestPath': '/readyz' + } + }, { + 'name': context.properties['infra_id'] + '-api-target-pool', + 'type': 'compute.v1.targetPool', + 'properties': { + 'region': context.properties['region'], + 'healthChecks': ['$(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], + 'instances': [] + } + }, { + 'name': context.properties['infra_id'] + '-api-forwarding-rule', + 'type': 'compute.v1.forwardingRule', + 'properties': { + 'region': context.properties['region'], + 'IPAddress': '$(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', + 'target': '$(ref.' 
+ context.properties['infra_id'] + '-api-target-pool.selfLink)',
+            'portRange': '6443'
+        }
+    }]
+
+    return {'resources': resources}
+----
diff --git a/modules/installation-deployment-manager-firewall-rules.adoc b/modules/installation-deployment-manager-firewall-rules.adoc
new file mode 100644
index 000000000000..35059f177644
--- /dev/null
+++ b/modules/installation-deployment-manager-firewall-rules.adoc
@@ -0,0 +1,137 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp/installing-gcp-user-infra-vpc.adoc
+
+[id="installation-deployment-manager-firewall-rules_{context}"]
+= Deployment Manager template for firewall rules
+
+You can use the following Deployment Manager template to deploy the firewall rules that you need for your {product-title} cluster:
+
+.`03_firewall.py` Deployment Manager template
+[source,python]
+----
+def GenerateConfig(context):
+
+    resources = [{
+        'name': context.properties['infra_id'] + '-bootstrap-in-ssh',
+        'type': 'compute.v1.firewall',
+        'properties': {
+            'network': context.properties['cluster_network'],
+            'allowed': [{
+                'IPProtocol': 'tcp',
+                'ports': ['22']
+            }],
+            'sourceRanges': [context.properties['allowed_external_cidr']],
+            'targetTags': [context.properties['infra_id'] + '-bootstrap']
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-api',
+        'type': 'compute.v1.firewall',
+        'properties': {
+            'network': context.properties['cluster_network'],
+            'allowed': [{
+                'IPProtocol': 'tcp',
+                'ports': ['6443']
+            }],
+            'sourceRanges': [context.properties['allowed_external_cidr']],
+            'targetTags': [context.properties['infra_id'] + '-master']
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-health-checks',
+        'type': 'compute.v1.firewall',
+        'properties': {
+            'network': context.properties['cluster_network'],
+            'allowed': [{
+                'IPProtocol': 'tcp',
+                'ports': ['6080', '6443', '22624']
+            }],
+            'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'],
+            'targetTags': [context.properties['infra_id'] + '-master']
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-etcd',
+        'type': 'compute.v1.firewall',
+        'properties': {
+            'network': context.properties['cluster_network'],
+            'allowed': [{
+                'IPProtocol': 'tcp',
+                'ports': ['2379-2380']
+            }],
+            'sourceTags': [context.properties['infra_id'] + '-master'],
+            'targetTags': [context.properties['infra_id'] + '-master']
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-control-plane',
+        'type': 'compute.v1.firewall',
+        'properties': {
+            'network': context.properties['cluster_network'],
+            'allowed': [{
+                'IPProtocol': 'tcp',
+                'ports': ['10257']
+            },{
+                'IPProtocol': 'tcp',
+                'ports': ['10259']
+            },{
+                'IPProtocol': 'tcp',
+                'ports': ['22623']
+            }],
+            'sourceTags': [
+                context.properties['infra_id'] + '-master',
+                context.properties['infra_id'] + '-worker'
+            ],
+            'targetTags': [context.properties['infra_id'] + '-master']
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-internal-network',
+        'type': 'compute.v1.firewall',
+        'properties': {
+            'network': context.properties['cluster_network'],
+            'allowed': [{
+                'IPProtocol': 'icmp'
+            },{
+                'IPProtocol': 'tcp',
+                'ports': ['22']
+            }],
+            'sourceRanges': [context.properties['network_cidr']],
+            'targetTags': [
+                context.properties['infra_id'] + '-master',
+                context.properties['infra_id'] + '-worker'
+            ]
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-internal-cluster',
+        'type': 'compute.v1.firewall',
+        'properties': {
+            'network': context.properties['cluster_network'],
+            'allowed': [{
+                'IPProtocol': 'udp',
+                'ports': ['4789', '6081']
+            },{
+                'IPProtocol': 'tcp',
+                'ports': ['9000-9999']
+            },{
+                'IPProtocol': 'udp',
+                'ports': ['9000-9999']
+            },{
+                'IPProtocol': 'tcp',
+                'ports': ['10250']
+            },{
+                'IPProtocol': 'tcp',
+                'ports': ['30000-32767']
+            },{
+                'IPProtocol': 'udp',
+                'ports': ['30000-32767']
+            }],
+            'sourceTags': [
+                context.properties['infra_id'] + '-master',
+                context.properties['infra_id'] + '-worker'
+            ],
+            'targetTags': [
+                context.properties['infra_id'] + '-master',
+                context.properties['infra_id'] + '-worker'
+            ]
+        }
+    }]
+
+    return {'resources': resources}
+----
diff --git a/modules/installation-deployment-manager-iam-shared-vpc.adoc b/modules/installation-deployment-manager-iam-shared-vpc.adoc
new file mode 100644
index 000000000000..a890743253db
--- /dev/null
+++ b/modules/installation-deployment-manager-iam-shared-vpc.adoc
@@ -0,0 +1,32 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp/installing-gcp-user-infra-vpc.adoc
+
+[id="installation-deployment-manager-iam-shared-vpc_{context}"]
+= Deployment Manager template for IAM roles
+
+You can use the following Deployment Manager template to deploy the IAM roles that you need for your {product-title} cluster:
+
+.`03_iam.py` Deployment Manager template
+[source,python]
+----
+def GenerateConfig(context):
+
+    resources = [{
+        'name': context.properties['infra_id'] + '-master-node-sa',
+        'type': 'iam.v1.serviceAccount',
+        'properties': {
+            'accountId': context.properties['infra_id'] + '-m',
+            'displayName': context.properties['infra_id'] + '-master-node'
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-worker-node-sa',
+        'type': 'iam.v1.serviceAccount',
+        'properties': {
+            'accountId': context.properties['infra_id'] + '-w',
+            'displayName': context.properties['infra_id'] + '-worker-node'
+        }
+    }]
+
+    return {'resources': resources}
+----
diff --git a/modules/installation-deployment-manager-int-lb.adoc b/modules/installation-deployment-manager-int-lb.adoc
new file mode 100644
index 000000000000..39b63980fb81
--- /dev/null
+++ b/modules/installation-deployment-manager-int-lb.adoc
@@ -0,0 +1,83 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp/installing-gcp-user-infra-vpc.adoc
+
+[id="installation-deployment-manager-int-lb_{context}"]
+= Deployment Manager template for the internal load balancer
+
+You can use the following Deployment Manager template to deploy the internal load balancer that you need for your {product-title} cluster:
+
+.`02_lb_int.py` Deployment Manager template
+[source,python]
+----
+def GenerateConfig(context):
+
+    backends = []
+    for zone in context.properties['zones']:
+        backends.append({
+            'group': '$(ref.' + context.properties['infra_id'] + '-master-' + zone + '-instance-group' + '.selfLink)'
+        })
+
+    resources = [{
+        'name': context.properties['infra_id'] + '-cluster-ip',
+        'type': 'compute.v1.address',
+        'properties': {
+            'addressType': 'INTERNAL',
+            'region': context.properties['region'],
+            'subnetwork': context.properties['control_subnet']
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-api-internal-health-check',
+        'type': 'compute.v1.healthCheck',
+        'properties': {
+            'httpsHealthCheck': {
+                'port': 6443,
+                'requestPath': '/readyz'
+            },
+            'type': 'HTTPS'
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-api-internal-backend-service',
+        'type': 'compute.v1.regionBackendService',
+        'properties': {
+            'backends': backends,
+            'healthChecks': ['$(ref.'
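+            # '$(ref.<name>.selfLink)' is a Deployment Manager reference that
+            # resolves at deployment time to the URL of the health check
+            # defined above.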
+ context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], + 'loadBalancingScheme': 'INTERNAL', + 'region': context.properties['region'], + 'protocol': 'TCP', + 'timeoutSec': 120 + } + }, { + 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', + 'type': 'compute.v1.forwardingRule', + 'properties': { + 'backendService': '$(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', + 'IPAddress': '$(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', + 'loadBalancingScheme': 'INTERNAL', + 'ports': ['6443','22623'], + 'region': context.properties['region'], + 'subnetwork': context.properties['control_subnet'] + } + }] + + for zone in context.properties['zones']: + resources.append({ + 'name': context.properties['infra_id'] + '-master-' + zone + '-instance-group', + 'type': 'compute.v1.instanceGroup', + 'properties': { + 'namedPorts': [ + { + 'name': 'ignition', + 'port': 22623 + }, { + 'name': 'https', + 'port': 6443 + } + ], + 'network': context.properties['cluster_network'], + 'zone': zone + } + }) + + return {'resources': resources} +---- diff --git a/modules/installation-deployment-manager-private-dns.adoc b/modules/installation-deployment-manager-private-dns.adoc new file mode 100644 index 000000000000..fd6ca550d14f --- /dev/null +++ b/modules/installation-deployment-manager-private-dns.adoc @@ -0,0 +1,31 @@ +// Module included in the following assemblies: +// +// * installing/installing_gcp/installing-gcp-user-infra-vpc.adoc + +[id="installation-deployment-manager-private-dns_{context}"] += Deployment Manager template for the private DNS + +You can use the following Deployment Manager template to deploy the private DNS that you need for your {product-title} cluster: + +.`02_dns.py` Deployment Manager template +[source,python] +---- +def GenerateConfig(context): + + resources = [{ + 'name': context.properties['infra_id'] + '-private-zone', + 'type': 'dns.v1.managedZone', + 'properties': { + 'description': '', + 'dnsName': context.properties['cluster_domain'] + '.', + 'visibility': 'private', + 'privateVisibilityConfig': { + 'networks': [{ + 'networkUrl': context.properties['cluster_network'] + }] + } + } + }] + + return {'resources': resources} +---- diff --git a/modules/installation-deployment-manager-vpc.adoc b/modules/installation-deployment-manager-vpc.adoc index 0b701ce98eb4..cfa0b06e4947 100644 --- a/modules/installation-deployment-manager-vpc.adoc +++ b/modules/installation-deployment-manager-vpc.adoc @@ -37,18 +37,6 @@ def GenerateConfig(context): 'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } - }, { - 'name': context.properties['infra_id'] + '-master-nat-ip', - 'type': 'compute.v1.address', - 'properties': { - 'region': context.properties['region'] - } - }, { - 'name': context.properties['infra_id'] + '-worker-nat-ip', - 'type': 'compute.v1.address', - 'properties': { - 'region': context.properties['region'] - } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', @@ -57,8 +45,7 @@ def GenerateConfig(context): 'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', - 'natIpAllocateOption': 'MANUAL_ONLY', - 'natIps': ['$(ref.' 
+ context.properties['infra_id'] + '-master-nat-ip.selfLink)'],
+            'natIpAllocateOption': 'AUTO_ONLY',
             'minPortsPerVm': 7168,
             'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS',
             'subnetworks': [{
@@ -67,9 +54,8 @@
           }]
         }, {
           'name': context.properties['infra_id'] + '-nat-worker',
-          'natIpAllocateOption': 'MANUAL_ONLY',
-          'natIps': ['$(ref.' + context.properties['infra_id'] + '-worker-nat-ip.selfLink)'],
-          'minPortsPerVm': 128,
+          'natIpAllocateOption': 'AUTO_ONLY',
+          'minPortsPerVm': 512,
           'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS',
           'subnetworks': [{
             'name': '$(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)',
diff --git a/modules/installation-extracting-infraid.adoc b/modules/installation-extracting-infraid.adoc
index 241c5a40fa2f..b89ffd4edff6 100644
--- a/modules/installation-extracting-infraid.adoc
+++ b/modules/installation-extracting-infraid.adoc
@@ -31,6 +31,12 @@ ifeval::["{context}" == "installing-gcp-user-infra"]
 :cp-template: Deployment Manager
 :gcp:
 endif::[]
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:cp-first: Google Cloud Platform
+:cp: GCP
+:cp-template: Deployment Manager
+:gcp:
+endif::[]
 ifeval::["{context}" == "installing-restricted-networks-gcp"]
 :cp-first: Google Cloud Platform
 :cp: GCP
@@ -67,7 +73,7 @@ endif::azure[]
 metadata, run the following command:
 +
 ----
-$ jq -r .infraID /<installation_directory>/metadata.json <1>
+$ jq -r .infraID <installation_directory>/metadata.json <1>
 openshift-vw9j6 <2>
 ----
 <1> For `<installation_directory>`, specify the path to the directory that you stored the
@@ -99,6 +105,12 @@ ifeval::["{context}" == "installing-gcp-user-infra"]
 :!cp-template:
 :!gcp:
 endif::[]
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:!cp-first:
+:!cp:
+:!cp-template:
+:!gcp:
+endif::[]
 ifeval::["{context}" == "installing-restricted-networks-gcp"]
 :!cp-first:
 :!cp:
diff --git a/modules/installation-gcp-dns.adoc b/modules/installation-gcp-dns.adoc
index 8dc999c87650..acbd310df420 100644
--- a/modules/installation-gcp-dns.adoc
+++ b/modules/installation-gcp-dns.adoc
@@ -4,12 +4,22 @@
 // * installing/installing_gcp/installing-gcp-user-infra.adoc
 // * installing/installing_gcp/installing-restricted-networks-gcp.adoc
 
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:user-infra-vpc:
+endif::[]
+
 [id="installation-gcp-dns_{context}"]
 = Configuring DNS for GCP
 
 To install {product-title}, the Google Cloud Platform (GCP) account you use must
-have a dedicated public hosted zone in the same project that you host the
-{product-title} cluster. This zone must be authoritative for the domain. The
+have a dedicated public hosted zone
+ifndef::user-infra-vpc[]
+in the same project that you host the {product-title} cluster.
+endif::user-infra-vpc[]
+ifdef::user-infra-vpc[]
+in the project that hosts the shared VPC that you install the cluster into.
+endif::user-infra-vpc[]
+This zone must be authoritative for the domain. The
 DNS service provides cluster DNS resolution and name lookup for external
 connections to the cluster.
 
@@ -46,3 +56,7 @@ link:https://support.google.com/domains/answer/3290309?hl=en[How to switch to cu
 . If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See link:https://cloud.google.com/dns/docs/migrating[Migrating to Cloud DNS] in the GCP documentation.
 . If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain.
This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. + +ifeval::["{context}" == "installing-gcp-user-infra-vpc"] +:!user-infra-vpc: +endif::[] diff --git a/modules/installation-gcp-enabling-api-services.adoc b/modules/installation-gcp-enabling-api-services.adoc index dc9849fa7d78..4152d07faeb1 100644 --- a/modules/installation-gcp-enabling-api-services.adoc +++ b/modules/installation-gcp-enabling-api-services.adoc @@ -7,6 +7,9 @@ ifeval::["{context}" == "installing-gcp-user-infra"] :template: endif::[] +ifeval::["{context}" == "installing-gcp-user-infra-vpc"] +:template: +endif::[] ifeval::["{context}" == "installing-gcp-restricted-networks"] :template: endif::[] @@ -73,6 +76,9 @@ endif::template[] ifeval::["{context}" == "installing-gcp-user-infra"] :!template: endif::[] +ifeval::["{context}" == "installing-gcp-user-infra-vpc"] +:!template: +endif::[] ifeval::["{context}" == "installing-gcp-restricted-networks"] :!template: endif::[] diff --git a/modules/installation-gcp-limits.adoc b/modules/installation-gcp-limits.adoc index 0da3d542b743..765e11b5aabe 100644 --- a/modules/installation-gcp-limits.adoc +++ b/modules/installation-gcp-limits.adoc @@ -7,6 +7,9 @@ ifeval::["{context}" == "installing-gcp-user-infra"] :template: endif::[] +ifeval::["{context}" == "installing-gcp-user-infra-vpc"] +:template: +endif::[] ifeval::["{context}" == "installing-restricted-networks-gcp"] :template: endif::[] @@ -90,9 +93,13 @@ If you plan to deploy your cluster in one of the following regions, you will exc * us-west2 You can increase resource quotas from the link:https://console.cloud.google.com/iam-admin/quotas[GCP console], but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your {product-title} cluster. 
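+
+For example, you can review a region's current quota limits and usage with the `gcloud` CLI before you install. The region name is only an example:
+
+----
+$ gcloud compute regions describe us-central1
+----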
+ ifeval::["{context}" == "installing-gcp-user-infra"] :!template: endif::[] +ifeval::["{context}" == "installing-gcp-user-infra-vpc"] +:!template: +endif::[] ifeval::["{context}" == "installing-restricted-networks-gcp"] :!template: endif::[] diff --git a/modules/installation-gcp-permissions.adoc b/modules/installation-gcp-permissions.adoc index 6e4499710e0e..3c1060791ff8 100644 --- a/modules/installation-gcp-permissions.adoc +++ b/modules/installation-gcp-permissions.adoc @@ -10,6 +10,9 @@ endif::[] ifeval::["{context}" == "installing-restricted-networks-gcp"] :template: endif::[] +ifeval::["{context}" == "installing-gcp-user-infra-vpc"] +:template: +endif::[] [id="installation-gcp-permissions_{context}"] = Required GCP permissions @@ -69,3 +72,6 @@ endif::[] ifeval::["{context}" == "installing-restricted-networks-gcp"] :!template: endif::[] +ifeval::["{context}" == "installing-gcp-user-infra-vpc"] +:!template: +endif::[] diff --git a/modules/installation-gcp-user-infra-adding-ingress.adoc b/modules/installation-gcp-user-infra-adding-ingress.adoc index 9b8e941bf89e..5550536cb0d7 100644 --- a/modules/installation-gcp-user-infra-adding-ingress.adoc +++ b/modules/installation-gcp-user-infra-adding-ingress.adoc @@ -2,9 +2,19 @@ // // * installing/installing_gcp/installing-gcp-user-infra.adoc // * installing/installing_gcp/installing-restricted-networks-gcp.adoc +// * installing/installing_gcp/installing-gcp-user-infra-vpc.adoc + +ifeval::["{context}" == "installing-gcp-user-infra-vpc"] +:shared-vpc: +endif::[] [id="installation-gcp-user-infra-adding-ingress_{context}"] +ifndef::shared-vpc[] = Optional: Adding the ingress DNS records +endif::shared-vpc[] +ifdef::shared-vpc[] += Adding the ingress DNS records +endif::shared-vpc[] If you removed the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at @@ -34,23 +44,31 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98 ---- -. Add the A record to your public and private zones: +. Add the A record to your zones: +** To use A records: +... Export the variable for the router IP address: + ---- $ export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'` - -$ if [ -f transaction.yaml ]; then rm transaction.yaml; fi -$ gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME} -$ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${BASE_DOMAIN_ZONE_NAME} -$ gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME} - +---- +... Add the A record to the private zones: ++ +---- $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi -$ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone -$ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${INFRA_ID}-private-zone -$ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone +$ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} +$ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. 
--ttl 300 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
+$ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
 ----
+... For an external cluster, also add the A record to the public zones:
 +
-If you prefer to add explicit domains instead of using a wildcard, you can
+----
+$ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
+$ gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
+$ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${BASE_DOMAIN_ZONE_NAME} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
+$ gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
+----
+
+** To add explicit domains instead of using a wildcard, create entries for each of the cluster's current routes:
 +
 ----
@@ -62,3 +80,7 @@ alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com
 grafana-openshift-monitoring.apps.your.cluster.domain.example.com
 prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com
 ----
+
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:!shared-vpc:
+endif::[]
diff --git a/modules/installation-gcp-user-infra-config-host-project-vpc.adoc b/modules/installation-gcp-user-infra-config-host-project-vpc.adoc
new file mode 100644
index 000000000000..25bf96309927
--- /dev/null
+++ b/modules/installation-gcp-user-infra-config-host-project-vpc.adoc
@@ -0,0 +1,41 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp/installing-gcp-user-infra-vpc.adoc
+
+[id="installation-gcp-user-infra-config-host-project-vpc_{context}"]
+= Configuring the GCP project that hosts your shared VPC network
+
+If you use a shared Virtual Private Cloud (VPC) to host your {product-title} cluster in Google Cloud Platform (GCP), you must configure the project that hosts it.
+
+[NOTE]
+====
+If you already have a project that hosts the shared VPC network, review this section to ensure that the project meets all of the requirements to install a {product-title} cluster.
+====
+
+.Procedure
+
+. Create a project to host the shared VPC for your {product-title} cluster. See
+link:https://cloud.google.com/resource-manager/docs/creating-managing-projects[Creating and Managing Projects] in the GCP documentation.
+
+. Create a service account in the project that hosts your shared VPC. See
+link:https://cloud.google.com/iam/docs/creating-managing-service-accounts#creating_a_service_account[Creating a service account]
+in the GCP documentation.
+
+. Grant the service account the appropriate permissions. You can either
+grant the individual permissions that follow or assign the `Owner` role to it.
+See link:https://cloud.google.com/iam/docs/granting-roles-to-service-accounts#granting_access_to_a_service_account_for_a_resource[Granting roles to a service account for specific resources].
++
+[NOTE]
+====
+While making the service account an Owner of the project is the easiest way to gain the required permissions, it means that the service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable.
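+
+For example, you can grant one of the individual roles with a command of the following form, where the project ID and service account address are placeholders:
+
+----
+$ gcloud projects add-iam-policy-binding <host_project> --member "serviceAccount:<service_account_email>" --role "roles/compute.networkUser"
+----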
+
+The service account for the project that hosts the shared VPC network requires the following roles:
+
+* Compute Network User
+* Compute Security Admin
+* Deployment Manager Editor
+* DNS Administrator
+* Security Admin
+* Network Management Admin
+====
+
+. Set up the shared VPC. See link:https://cloud.google.com/vpc/docs/provisioning-shared-vpc#setting_up[Setting up Shared VPC] in the GCP documentation.
diff --git a/modules/installation-gcp-user-infra-shared-vpc-config-yaml.adoc b/modules/installation-gcp-user-infra-shared-vpc-config-yaml.adoc
new file mode 100644
index 000000000000..e093f71b45dd
--- /dev/null
+++ b/modules/installation-gcp-user-infra-shared-vpc-config-yaml.adoc
@@ -0,0 +1,77 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp/installing-gcp-user-infra-vpc.adoc
+
+[id="installation-gcp-user-infra-shared-vpc-config-yaml_{context}"]
+= Sample customized `install-config.yaml` file for GCP
+
+You can customize the `install-config.yaml` file to specify more details about your {product-title} cluster's platform or modify the values of the required parameters.
+
+[IMPORTANT]
+====
+This sample YAML file is provided for reference only. You must obtain your `install-config.yaml` file by using the installation program and modify it.
+====
+
+[source,yaml]
+----
+apiVersion: v1
+baseDomain: example.com <1>
+controlPlane: <2>
+  hyperthreading: Enabled <3> <4>
+  name: master
+  platform:
+    gcp:
+      type: n2-standard-4
+      zones:
+      - us-central1-a
+      - us-central1-c
+  replicas: 3
+compute: <2>
+- hyperthreading: Enabled <3>
+  name: worker
+  platform:
+    gcp:
+      type: n2-standard-4
+      zones:
+      - us-central1-a
+      - us-central1-c
+  replicas: 0
+metadata:
+  name: test-cluster
+networking:
+  clusterNetwork:
+  - cidr: 10.128.0.0/14
+    hostPrefix: 23
+  machineNetwork:
+  - cidr: 10.0.0.0/16
+  networkType: OpenShiftSDN
+  serviceNetwork:
+  - 172.30.0.0/16
+platform:
+  gcp:
+    projectID: openshift-production
+    region: us-central1 <5>
+pullSecret: '{"auths": ...}'
+fips: false <6>
+sshKey: ssh-ed25519 AAAA... <7>
+publish: Internal <8>
+----
+<1> Specify the public DNS zone in the host project.
+<2> If you do not provide these parameters and values, the installation program provides the default value.
+<3> The `controlPlane` section is a single mapping, but the `compute` section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the `compute` section must begin with a hyphen, `-`, and the first line of the `controlPlane` section must not. Although both sections currently define a single machine pool, it is possible that future versions of {product-title} will support defining multiple compute pools during installation. Only one control plane pool is used.
+<4> Whether to enable or disable simultaneous multithreading, or `hyperthreading`. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to `Disabled`. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
++
+[IMPORTANT]
+====
+If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as `n1-standard-8`, for your machines if you disable simultaneous multithreading.
+====
+<5> Specify the region that your VPC network is in.
+<6> Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled.
If FIPS mode is enabled, the {op-system-first} machines that {product-title} runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with {op-system} instead.
+<7> You can optionally provide the `sshKey` value that you use to access the machines in your cluster.
++
+[NOTE]
+====
+For production {product-title} clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses.
+====
+<8> How to publish the user-facing endpoints of your cluster. Set `publish` to `Internal` to deploy a private cluster, which cannot be accessed from the internet. The default value is `External`.
+To use a shared VPC in a cluster that uses infrastructure that you provision, you must set `publish` to `Internal`. The installation program will no longer be able to access the public DNS zone for the base domain in the host project.
diff --git a/modules/installation-gcp-user-infra-wait-for-bootstrap.adoc b/modules/installation-gcp-user-infra-wait-for-bootstrap.adoc
index 8717b8fc014f..37c3c9c88d12 100644
--- a/modules/installation-gcp-user-infra-wait-for-bootstrap.adoc
+++ b/modules/installation-gcp-user-infra-wait-for-bootstrap.adoc
@@ -1,6 +1,11 @@
 // Module included in the following assemblies:
 //
 // * installing/installing_gcp/installing-gcp-user-infra.adoc
+// * installing/installing_gcp/installing-gcp-user-infra-vpc.adoc
+
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:shared-vpc:
+endif::[]
 
 [id="installation-gcp-user-infra-wait-for-bootstrap_{context}"]
 = Wait for bootstrap completion and remove bootstrap resources in GCP
@@ -38,6 +43,7 @@ If the command exits without a `FATAL` warning, your production control plane
 has initialized.
 
 . Delete the bootstrap resources:
+ifndef::shared-vpc[]
 +
 ----
 $ gcloud compute target-pools remove-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-bootstrap
@@ -46,3 +52,17 @@ $ gsutil rm gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign
 $ gsutil rb gs://${INFRA_ID}-bootstrap-ignition
 $ gcloud deployment-manager deployments delete ${INFRA_ID}-bootstrap
 ----
+endif::shared-vpc[]
+ifdef::shared-vpc[]
++
+----
+$ gcloud compute backend-services remove-backend ${INFRA_ID}-api-internal-backend-service --region=${REGION} --instance-group=${INFRA_ID}-bootstrap-instance-group --instance-group-zone=${ZONE_0}
+$ gsutil rm gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign
+$ gsutil rb gs://${INFRA_ID}-bootstrap-ignition
+$ gcloud deployment-manager deployments delete -q ${INFRA_ID}-bootstrap
+----
+endif::shared-vpc[]
+
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:!shared-vpc:
+endif::[]
diff --git a/modules/installation-initializing-manual.adoc b/modules/installation-initializing-manual.adoc
index cbc1d6d0013b..2c88bb3cf32c 100644
--- a/modules/installation-initializing-manual.adoc
+++ b/modules/installation-initializing-manual.adoc
@@ -17,7 +17,7 @@ endif::[]
 = Manually creating the installation configuration file
 
 For installations of {product-title} that use user-provisioned
-infrastructure, you must manually generate your installation configuration file.
+infrastructure, you manually generate your installation configuration file.
.Prerequisites
diff --git a/modules/installation-initializing.adoc b/modules/installation-initializing.adoc
index 994b20f70e10..7f5cca8b9adf 100644
--- a/modules/installation-initializing.adoc
+++ b/modules/installation-initializing.adoc
@@ -57,6 +57,9 @@ endif::[]
 ifeval::["{context}" == "installing-gcp-user-infra"]
 :gcp:
 endif::[]
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:gcp:
+endif::[]
 ifeval::["{context}" == "installing-restricted-networks-gcp"]
 :gcp:
 endif::[]
@@ -351,6 +354,9 @@ endif::[]
 ifeval::["{context}" == "installing-gcp-user-infra"]
 :!gcp:
 endif::[]
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:!gcp:
+endif::[]
 ifeval::["{context}" == "installing-openstack-installer-custom"]
 :!osp:
 endif::[]
diff --git a/modules/installation-user-infra-exporting-common-variables.adoc b/modules/installation-user-infra-exporting-common-variables.adoc
index f23138ed09d6..8f32b8c6cc7d 100644
--- a/modules/installation-user-infra-exporting-common-variables.adoc
+++ b/modules/installation-user-infra-exporting-common-variables.adoc
@@ -15,6 +15,19 @@ ifeval::["{context}" == "installing-restricted-networks-gcp"]
 :cp-template: Deployment Manager
 endif::[]
 
+ifeval::["{context}" == "installing-restricted-networks-gcp-vpc"]
+:cp-first: Google Cloud Platform
+:cp: GCP
+:cp-template: Deployment Manager
+endif::[]
+
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:cp-first: Google Cloud Platform
+:cp: GCP
+:cp-template: Deployment Manager
+:shared-vpc:
+endif::[]
+
 [id="installation-user-infra-exporting-common-variables_{context}"]
 = Exporting common variables for {cp-template} templates
 
@@ -36,9 +49,10 @@ variables, which are detailed in their related procedures.
 
 .Procedure
 
-* Export the following common variables to be used by the provided {cp-template}
+. Export the following common variables to be used by the provided {cp-template}
 templates:
 +
+ifndef::shared-vpc[]
 ----
 $ export BASE_DOMAIN='<base_domain>'
 $ export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>'
@@ -53,6 +67,23 @@ $ export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.js
 $ export REGION=`jq -r .gcp.region <installation_directory>/metadata.json`
 ----
 <1> For `<installation_directory>`, specify the path to the directory that you stored the installation files in.
+endif::shared-vpc[]
+//you need some of these variables for the VPC, and you do that
+
+ifdef::shared-vpc[]
+----
+$ export BASE_DOMAIN='<base_domain>' <1>
+$ export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' <1>
+$ export NETWORK_CIDR='10.0.0.0/16'
+
+$ export KUBECONFIG=<installation_directory>/auth/kubeconfig <2>
+$ export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json`
+$ export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json`
+$ export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json`
+----
+<1> Supply the values for the host project.
+<2> For `<installation_directory>`, specify the path to the directory that you stored the installation files in.
+endif::shared-vpc[]
 
 ifeval::["{context}" == "installing-gcp-user-infra"]
 :!cp-first:
@@ -65,3 +96,16 @@ ifeval::["{context}" == "installing-restricted-networks-gcp"]
 :!cp:
 :!cp-template:
 endif::[]
+
+ifeval::["{context}" == "installing-restricted-networks-gcp-vpc"]
+:!cp-first:
+:!cp:
+:!cp-template:
+endif::[]
+
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:!cp-first:
+:!cp:
+:!cp-template:
+:!shared-vpc:
+endif::[]
diff --git a/modules/installation-user-infra-generate-k8s-manifest-ignition.adoc b/modules/installation-user-infra-generate-k8s-manifest-ignition.adoc
index e514f6cb319e..6453f20d9771 100644
--- a/modules/installation-user-infra-generate-k8s-manifest-ignition.adoc
+++ b/modules/installation-user-infra-generate-k8s-manifest-ignition.adoc
@@ -36,6 +36,10 @@ endif::[]
 ifeval::["{context}" == "installing-gcp-user-infra"]
 :gcp:
 endif::[]
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:gcp:
+:user-infra-vpc:
+endif::[]
 ifeval::["{context}" == "installing-restricted-networks-gcp"]
 :gcp:
 :restricted:
@@ -101,12 +105,14 @@ By removing these files, you prevent the cluster from automatically generating c
 endif::aws,azure,gcp[]
 
 ifdef::gcp[]
+ifndef::user-infra-vpc[]
 . Optional: If you do not want the cluster to provision compute machines, remove
 the Kubernetes manifest files that define the worker machines:
+endif::user-infra-vpc[]
 endif::gcp[]
-ifdef::aws,azure[]
+ifdef::aws,azure,user-infra-vpc[]
 . Remove the Kubernetes manifest files that define the worker machines:
-endif::aws,azure[]
+endif::aws,azure,user-infra-vpc[]
 ifdef::aws,azure,gcp[]
 +
 ----
@@ -150,10 +156,16 @@ Currently, due to a link:https://github.com/kubernetes/kubernetes/issues/65618[K
 ====
 
 ifdef::gcp,aws,azure[]
+ifndef::user-infra-vpc[]
 . Optional: If you do not want link:https://github.com/openshift/cluster-ingress-operator[the Ingress Operator] to create DNS records on your behalf, remove the `privateZone` and `publicZone` sections from the `<installation_directory>/manifests/cluster-dns-02-config.yml` DNS configuration file:
+endif::user-infra-vpc[]
+ifdef::user-infra-vpc[]
+. Remove the `privateZone`
+section from the `<installation_directory>/manifests/cluster-dns-02-config.yml` DNS configuration file:
+endif::user-infra-vpc[]
 +
 [source,yaml]
 ----
@@ -166,15 +178,69 @@ spec:
   baseDomain: example.openshift.com
   privateZone: <1>
     id: mycluster-100419-private-zone
+ifndef::user-infra-vpc[]
   publicZone: <1>
     id: example.openshift.com
+endif::user-infra-vpc[]
 status: {}
 ----
-<1> Remove these sections completely.
+<1> Remove this section completely.
+
+ifndef::user-infra-vpc[]
 If you do so, you must add ingress DNS records manually in a later step.
+endif::user-infra-vpc[]
 endif::gcp,aws,azure[]
 
+ifdef::user-infra-vpc[]
+. Configure the cloud provider for your VPC.
++
+--
+.. Open the `<installation_directory>/manifests/cloud-provider-config.yaml` file.
+.. Add the `network-project-id` parameter and set its value to the ID of the project that hosts the shared VPC network.
+.. Add the `network-name` parameter and set its value to the name of the shared VPC network that hosts the {product-title} cluster.
+.. Replace the value of the `subnetwork-name` parameter with the name of the shared VPC subnet that hosts your compute machines.
++
+--
+The contents of the `<installation_directory>/manifests/cloud-provider-config.yaml` file resemble the following example:
++
+----
+config: |+
+  [global]
+  project-id = example-project
+  regional = true
+  multizone = true
+  node-tags = opensh-ptzzx-master
+  node-tags = opensh-ptzzx-worker
+  node-instance-prefix = opensh-ptzzx
+  external-instance-groups-prefix = opensh-ptzzx
+  network-project-id = example-shared-vpc
+  network-name = example-network
+  subnetwork-name = example-worker-subnet
+----
+
+. If you deploy a cluster that is not on a private network, open the `<installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml` file and replace the value of the `scope` parameter with `External`. The contents of the file resemble the following example:
++
+[source,yaml]
+----
+apiVersion: operator.openshift.io/v1
+kind: IngressController
+metadata:
+  creationTimestamp: null
+  name: default
+  namespace: openshift-ingress-operator
+spec:
+  endpointPublishingStrategy:
+    loadBalancer:
+      scope: External
+    type: LoadBalancerService
+status:
+  availableReplicas: 0
+  domain: ''
+  selector: ''
+----
+
+endif::user-infra-vpc[]
+
 ifdef::azure-user-infra[]
 . When configuring Azure on user-provisioned infrastructure, you must export
 some common variables defined in the manifest files to use later in the Azure
@@ -233,6 +299,10 @@ endif::[]
 ifeval::["{context}" == "installing-gcp-user-infra"]
 :!gcp:
 endif::[]
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:!gcp:
+:!user-infra-vpc:
+endif::[]
 ifeval::["{context}" == "installing-restricted-networks-vsphere"]
 :!restricted:
 endif::[]
diff --git a/modules/installation-user-infra-generate.adoc b/modules/installation-user-infra-generate.adoc
index 8c66cf47beb4..cb277b2d3a05 100644
--- a/modules/installation-user-infra-generate.adoc
+++ b/modules/installation-user-infra-generate.adoc
@@ -28,6 +28,11 @@ ifeval::["{context}" == "installing-gcp-user-infra"]
 :cp: GCP
 :gcp:
 endif::[]
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:cp-first: Google Cloud Platform
+:cp: GCP
+:gcp:
+endif::[]
 ifeval::["{context}" == "installing-restricted-networks-gcp"]
 :cp-first: Google Cloud Platform
 :cp: GCP
@@ -81,6 +86,11 @@ ifeval::["{context}" == "installing-gcp-user-infra"]
 :!cp:
 :!gcp:
 endif::[]
+ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
+:!cp-first:
+:!cp:
+:!gcp:
+endif::[]
 ifeval::["{context}" == "installing-restricted-networks-gcp"]
 :!cp-first:
 :!cp:
@@ -93,4 +103,4 @@ endif::[]
 ifeval::["{context}" == "installing-openstack-user-kuryr"]
 :!cp-first: Red Hat OpenStack Platform
 :!cp: RHOSP
-endif::[]
\ No newline at end of file
+endif::[]
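
For reference, each of the Deployment Manager templates above is consumed by a small configuration file that supplies the `context.properties` values that the template reads, such as `infra_id` and `cluster_network` for `03_firewall.py`, and is then deployed with the `gcloud` CLI. A minimal sketch, assuming an illustrative configuration file name:

----
$ gcloud deployment-manager deployments create ${INFRA_ID}-firewall --config 03_firewall.yaml
----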