diff --git a/installing/installing_aws_user_infra/installing-aws-user-infra.adoc b/installing/installing_aws_user_infra/installing-aws-user-infra.adoc
index a92df7ca360d..b2534dcdd1a5 100644
--- a/installing/installing_aws_user_infra/installing-aws-user-infra.adoc
+++ b/installing/installing_aws_user_infra/installing-aws-user-infra.adoc
@@ -116,8 +116,13 @@ include::modules/registry-configuring-storage-aws-user-infra.adoc[leveloffset=+3
 
 include::modules/installation-registry-storage-non-production.adoc[leveloffset=+3]
 
+include::modules/installation-aws-user-infra-delete-bootstrap.adoc[leveloffset=+1]
+
+include::modules/installation-create-ingress-dns-records.adoc[leveloffset=+1]
+
 include::modules/installation-aws-user-infra-installation.adoc[leveloffset=+1]
+
 
 .Next steps
 
 * xref:../../installing/install_config/customizations.adoc#customizations[Customize your cluster].
diff --git a/installing/installing_restricted_networks/installing-restricted-networks-aws.adoc b/installing/installing_restricted_networks/installing-restricted-networks-aws.adoc
index 67ad70809c4b..9aef7fb350af 100644
--- a/installing/installing_restricted_networks/installing-restricted-networks-aws.adoc
+++ b/installing/installing_restricted_networks/installing-restricted-networks-aws.adoc
@@ -133,8 +133,13 @@ include::modules/registry-configuring-storage-aws-user-infra.adoc[leveloffset=+3
 
 include::modules/installation-registry-storage-non-production.adoc[leveloffset=+3]
 
+include::modules/installation-aws-user-infra-delete-bootstrap.adoc[leveloffset=+1]
+
+include::modules/installation-create-ingress-dns-records.adoc[leveloffset=+1]
+
 include::modules/installation-aws-user-infra-installation.adoc[leveloffset=+1]
+
 
 .Next steps
 
 * xref:../../installing/install_config/customizations.adoc#customizations[Customize your cluster].
diff --git a/modules/installation-aws-user-infra-delete-bootstrap.adoc b/modules/installation-aws-user-infra-delete-bootstrap.adoc
new file mode 100644
index 000000000000..31064cc75333
--- /dev/null
+++ b/modules/installation-aws-user-infra-delete-bootstrap.adoc
@@ -0,0 +1,23 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_aws_user_infra/installing-aws-user-infra.adoc
+// * installing/installing_restricted_networks/installing-restricted-networks-aws.adoc
+
+[id="installation-aws-user-infra-delete-bootstrap_{context}"]
+= Deleting the bootstrap resources
+
+After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS).
+
+.Prerequisites
+
+* You completed the initial Operator configuration for your cluster.
+
+.Procedure
+
+. Delete the bootstrap resources. If you used the CloudFormation template,
+link:https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html[delete its stack]:
++
+----
+$ aws cloudformation delete-stack --stack-name <name> <1>
+----
+<1> `<name>` is the name of your bootstrap stack.
diff --git a/modules/installation-aws-user-infra-installation.adoc b/modules/installation-aws-user-infra-installation.adoc
index df5f57d5e84e..4494433ba514 100644
--- a/modules/installation-aws-user-infra-installation.adoc
+++ b/modules/installation-aws-user-infra-installation.adoc
@@ -11,25 +11,16 @@ endif::[]
 = Completing an AWS installation on user-provisioned infrastructure
 
 After you start the {product-title} installation on Amazon Web Service (AWS)
-user-provisioned infrastructure, remove the bootstrap node, and wait for
-installation to complete.
+user-provisioned infrastructure, monitor the deployment to completion.
 
 .Prerequisites
 
-* Deploy the bootstrap node for an {product-title} cluster on user-provisioned AWS infrastructure.
+* You removed the bootstrap node for an {product-title} cluster on user-provisioned AWS infrastructure.
 * Install the `oc` CLI and log in.
 
 .Procedure
 
-. Delete the bootstrap resources. If you used the CloudFormation template,
-link:https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html[delete its stack]:
-+
-----
-$ aws cloudformation delete-stack --stack-name <name> <1>
-----
-<1> `<name>` is the name of your bootstrap stack.
-
-. Complete the cluster installation:
+* Complete the cluster installation:
 +
 ----
 $ ./openshift-install --dir=<installation_directory> wait-for install-complete <1>
diff --git a/modules/installation-create-ingress-dns-records.adoc b/modules/installation-create-ingress-dns-records.adoc
new file mode 100644
index 000000000000..a57a51d3c247
--- /dev/null
+++ b/modules/installation-create-ingress-dns-records.adoc
@@ -0,0 +1,117 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_aws_user_infra/installing-aws-user-infra.adoc
+// * installing/installing_restricted_networks/installing-restricted-networks-aws.adoc
+
+[id="installation-create-ingress-dns-records_{context}"]
+= Creating the Ingress DNS records
+
+If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer.
+You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias.
+
+.Prerequisites
+
+* You deployed an {product-title} cluster on Amazon Web Services (AWS) by using infrastructure that you provisioned.
+* Install the OpenShift Command-line Interface (CLI), commonly known as `oc`.
+* Install the `jq` package.
+* Download the AWS CLI and install it on your computer. See
+link:https://docs.aws.amazon.com/cli/latest/userguide/install-bundle.html[Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix)].
+
+.Procedure
+
+. Determine the routes to create.
+** To create a wildcard record, use `*.apps.<cluster_name>.<domain>`, where `<cluster_name>` is your cluster name, and `<domain>` is the Route53 base domain for your {product-title} cluster.
+** To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command:
++
+----
+$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes
+oauth-openshift.apps.<cluster_name>.<domain>
+console-openshift-console.apps.<cluster_name>.<domain>
+downloads-openshift-console.apps.<cluster_name>.<domain>
+alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain>
+grafana-openshift-monitoring.apps.<cluster_name>.<domain>
+prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain>
+----
+
+. Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the `EXTERNAL-IP` column:
++
+----
+$ oc -n openshift-ingress get service router-default
+NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)                      AGE
+router-default   LoadBalancer   172.30.62.215   ab3...28.us-east-2.elb.amazonaws.com   80:31499/TCP,443:30693/TCP   5m
+----
+
+. Locate the hosted zone ID for the load balancer:
++
+----
+$ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' <1>
+
+Z3AADJGX6KTTL2
+----
+<1> For `<external_ip>`, specify the value of the external IP address of the Ingress Operator load balancer that you obtained.
++
+The output of this command is the load balancer hosted zone ID.
+
+. Obtain the public hosted zone ID for your cluster's domain:
++
+----
+$ aws route53 list-hosted-zones-by-name \
+    --dns-name "<domain_name>" \ <1>
+    --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' <1>
+    --output text
+
+/hostedzone/Z3URY6TWQ91KVV
+----
+<1> For `<domain_name>`, specify the Route53 base domain for your {product-title} cluster.
++
+The public hosted zone ID for your domain is shown in the command output. In this example, it is `Z3URY6TWQ91KVV`.
+
+. Add the alias records to your private zone:
++
+----
+$ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ <1>
+>   "Changes": [
+>     {
+>       "Action": "CREATE",
+>       "ResourceRecordSet": {
+>         "Name": "\\052.apps.<cluster_domain>", <2>
+>         "Type": "A",
+>         "AliasTarget":{
+>           "HostedZoneId": "<hosted_zone_id>", <3>
+>           "DNSName": "<external_ip>.", <4>
+>           "EvaluateTargetHealth": false
+>         }
+>       }
+>     }
+>   ]
+> }'
+----
+<1> For `<private_hosted_zone_id>`, specify the value from the output of the CloudFormation template for DNS and load balancing.
+<2> For `<cluster_domain>`, specify the domain or subdomain that you use with your {product-title} cluster.
+<3> For `<hosted_zone_id>`, specify the public hosted zone ID for the load balancer that you obtained.
+<4> For `<external_ip>`, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (`.`) in this parameter value.
+
+. Add the records to your public zone:
++
+----
+$ aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{ <1>
+>   "Changes": [
+>     {
+>       "Action": "CREATE",
+>       "ResourceRecordSet": {
+>         "Name": "\\052.apps.<cluster_domain>", <2>
+>         "Type": "A",
+>         "AliasTarget":{
+>           "HostedZoneId": "<hosted_zone_id>", <3>
+>           "DNSName": "<external_ip>.", <4>
+>           "EvaluateTargetHealth": false
+>         }
+>       }
+>     }
+>   ]
+> }'
+----
+<1> For `<public_hosted_zone_id>`, specify the public hosted zone for your domain.
+<2> For `<cluster_domain>`, specify the domain or subdomain that you use with your {product-title} cluster.
+<3> For `<hosted_zone_id>`, specify the public hosted zone ID for the load balancer that you obtained.
+<4> For `<external_ip>`, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (`.`) in this parameter value.
diff --git a/modules/installation-user-infra-generate-k8s-manifest-ignition.adoc b/modules/installation-user-infra-generate-k8s-manifest-ignition.adoc
index 18b4d10a6d3e..256aa7f17869 100644
--- a/modules/installation-user-infra-generate-k8s-manifest-ignition.adoc
+++ b/modules/installation-user-infra-generate-k8s-manifest-ignition.adoc
@@ -103,7 +103,7 @@ endif::aws,gcp[]
 Currently, due to a link:https://github.com/kubernetes/kubernetes/issues/65618[Kubernetes limitation], router Pods running on control plane machines will not be reachable by the ingress load balancer. This step might not be required in a future minor version of {product-title}.
 ====
 
-ifdef::gcp[]
+ifdef::gcp,aws[]
 . Optional: If you do not want
 link:https://github.com/openshift/cluster-ingress-operator[the Ingress Operator]
 to create DNS records on your behalf, remove the `privateZone` and `publicZone`
@@ -127,7 +127,7 @@ status: {}
 <1> Remove these sections completely.
 +
 If you do so, you must add ingress DNS records manually in a later step.
-endif::gcp[]
+endif::gcp,aws[]
 
 . Obtain the Ignition config files:
 +