AWS UPI draft #14241
@@ -0,0 +1,78 @@
| [id="installing-aws-upi"] | ||
| = Installing a cluster on AWS using CloudFormation templates | ||
| include::modules/common-attributes.adoc[] | ||
| :context: installing-aws-upi | ||
|
|
||
| toc::[] | ||
|
|
||
| In {product-title} version {product-version}, you can install a | ||
| cluster on Amazon Web Services (AWS) using infrastructure that you provide. | ||
|
|
||
| One way to create this infrastructure is to use the provided | ||
| CloudFormation templates. You can modify the templates to customize your | ||
| infrastructure or use the information that they contain to create AWS objects | ||
| according to your company's policies. | ||
|
|
||
| .Prerequisites | ||
|
|
||
| * xref:../../installing/installing_aws/installing-aws-account.adoc[Configure an AWS account] | ||
| to host the cluster. | ||
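
For context, each CloudFormation template in this flow is typically launched with a
single AWS CLI call. The following is an illustrative sketch only; the stack name,
template file, and parameter file names are placeholders rather than files that ship
with the installer:

----
$ aws cloudformation create-stack \
    --stack-name <cluster_name>-vpc \
    --template-body file://<template_file>.yaml \
    --parameters file://<parameters_file>.json \
    --capabilities CAPABILITY_NAMED_IAM <1>
----
<1> Templates that create IAM resources require an explicit `--capabilities` acknowledgement; omit this option for templates that do not create IAM resources.

You can then poll the stack status with `aws cloudformation describe-stacks --stack-name <cluster_name>-vpc` until it reports `CREATE_COMPLETE`.
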
include::modules/installation-overview.adoc[leveloffset=+1]

include::modules/installation-aws-upi-requirements.adoc[leveloffset=+1]

include::modules/installation-aws-permissions.adoc[leveloffset=+2]

include::modules/installation-obtaining-installer.adoc[leveloffset=+1]

include::modules/installation-provide-credentials.adoc[leveloffset=+1]

include::modules/installation-generate-aws-upi.adoc[leveloffset=+1]

include::modules/installation-extracting-infraid.adoc[leveloffset=+1]

include::modules/installation-creating-aws-vpc.adoc[leveloffset=+1]

include::modules/installation-cloudformation-vpc.adoc[leveloffset=+2]

include::modules/installation-creating-aws-dns.adoc[leveloffset=+1]

include::modules/installation-cloudformation-dns.adoc[leveloffset=+2]

include::modules/installation-creating-aws-security.adoc[leveloffset=+1]

include::modules/installation-cloudformation-security.adoc[leveloffset=+2]

include::modules/installation-aws-upi-rhcos-ami.adoc[leveloffset=+1]

include::modules/installation-creating-aws-bootstrap.adoc[leveloffset=+1]

include::modules/installation-cloudformation-bootstrap.adoc[leveloffset=+2]

include::modules/installation-creating-aws-control-plane.adoc[leveloffset=+1]

include::modules/installation-cloudformation-control-plane.adoc[leveloffset=+2]

include::modules/installation-aws-upi-bootstrap.adoc[leveloffset=+1]

////
[id="installing-workers-aws-upi"]
== Creating worker nodes

You can either manually create worker nodes or use a MachineSet to create worker
nodes after the cluster deploys. If you use a MachineSet to create and maintain
the workers, you can allow the cluster to manage them. This allows you to easily
scale, manage, and upgrade your workers.
////

include::modules/installation-creating-aws-worker.adoc[leveloffset=+2]

include::modules/installation-cloudformation-worker.adoc[leveloffset=+3]

include::modules/cli-install.adoc[leveloffset=+1]

include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1]

include::modules/installation-aws-upi-installation.adoc[leveloffset=+1]

@@ -0,0 +1,34 @@
// Module included in the following assemblies:
//
// * installing/installing_aws_upi/installing-aws-upi.adoc

[id="installation-aws-upi-bootstrap-{context}"]
= Initializing the bootstrap node on AWS UPI

After you create all of the required infrastructure in Amazon Web Services (AWS),
you can install the cluster.

.Prerequisites

* Configure an AWS account.
* Generate the Ignition config files for your cluster.
* Create and configure a VPC and associated subnets in AWS.
* Create and configure DNS, load balancers, and listeners in AWS.
* Create control plane and compute roles.
* Create the bootstrap machine.
* Create the control plane machines.
* If you plan to manually manage the worker machines, create the worker machines.

.Procedure

. Change to the directory that contains the installation program and run the
following command:
+
----
$ ./openshift-install wait-for bootstrap-complete --dir=<installation-directory> <1>
----
<1> For `<installation-directory>`, specify the path to the directory in which you
generated the `install-config.yaml` file for your cluster.
+
If the command exits without a `FATAL` warning, your production control plane
has initialized.
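
If the command instead times out or exits with an error, one way to see what the
bootstrap machine is doing is to watch the bootstrap service logs over SSH. This is a
minimal sketch only; the address is a placeholder for however you reach the bootstrap
machine in your network:

----
$ ssh core@<bootstrap_machine_address> <1>
$ journalctl -b -f -u bootkube.service <2>
----
<1> `<bootstrap_machine_address>` is a placeholder for the bootstrap machine's public IP address or host name.
<2> Run this command on the bootstrap machine; it follows the `bootkube.service` unit, which drives control plane bootstrapping.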

@@ -0,0 +1,90 @@
// Module included in the following assemblies:
//
// * installing/installing_aws_upi/installing-aws-upi.adoc

[id="installation-aws-upi-installation-{context}"]
= Completing an AWS UPI installation

After you start the {product-title} installation on Amazon Web Services (AWS)
user-provisioned infrastructure, remove the bootstrap node, reconcile the default
Machine and MachineSet definitions, and delete unused nodes.

.Prerequisites

* Deploy the bootstrap node for an {product-title} cluster on user-provisioned AWS infrastructure.
* Install the `oc` CLI and log in.

.Procedure

. Delete the bootstrap resources. If you used the CloudFormation template,
link:https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html[delete its stack]:
+
----
$ aws cloudformation delete-stack --stack-name <name> <1>
----
<1> `<name>` is the name of your bootstrap stack.
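+
If you want to block until the deletion finishes before you continue, a minimal option
is the CloudFormation waiter; `<name>` is the same bootstrap stack name as above:
+
----
$ aws cloudformation wait stack-delete-complete --stack-name <name>
----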

////
. View the list of machines in the `openshift-machine-api` namespace:
+
----
$ oc get machines --namespace openshift-machine-api
NAME                                 INSTANCE   STATE   TYPE        REGION      ZONE         AGE
test-tkh7l-master-0                                     m4.xlarge   us-east-2   us-east-2a   9m22s
test-tkh7l-master-1                                     m4.xlarge   us-east-2   us-east-2b   9m22s
test-tkh7l-master-2                                     m4.xlarge   us-east-2   us-east-2c   9m21s
test-tkh7l-worker-us-east-2a-qjcxq                      m4.large    us-east-2   us-east-2a   8m6s
test-tkh7l-worker-us-east-2b-nq8zs                      m4.large    us-east-2   us-east-2b   8m6s
test-tkh7l-worker-us-east-2c-ww6c6                      m4.large    us-east-2   us-east-2c   8m7s
----
+
Note the `NAME` of each node. Because you manually deployed control plane
nodes, the master machines are not controlled by the Machine API. Similarly,
the worker machines are not backed by AWS instances on your subnet. You delete
each of these machines.

. Delete each of the listed machines:
+
----
$ oc delete machine --namespace openshift-machine-api <node_name> <1>
machine.machine.openshift.io "<node_name>" deleted
----
<1> Specify the name of a master or worker node to delete.
////

. Review the pending certificate signing requests (CSRs) and ensure that the
requests are for the machines that you added to the cluster:
+
----
$ oc get csr

NAME        AGE     REQUESTOR                                                                   CONDITION
csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-b96j4   25s     system:node:ip-10-0-52-215.us-east-2.compute.internal                       Approved,Issued
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
----

. Approve the CSRs for your cluster machines.
** To approve them individually, run the following command for each valid
CSR:
+
----
$ oc adm certificate approve <csr_name> <1>
----
<1> `<csr_name>` is the name of a CSR from the list of current CSRs.

** If all the CSRs are valid, approve them all by running the following
command:
+
----
$ oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve
----

. Complete the cluster installation:
+
----
$ ./openshift-install wait-for install-complete
INFO Waiting up to 30m0s for the cluster to initialize...
----
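
When the command finishes, a quick way to confirm that the approved machines joined the
cluster and that the cluster has settled is to check the nodes and cluster Operators.
This is a minimal sketch only and assumes you are still logged in to `oc` as a cluster
administrator:

----
$ oc get nodes <1>
$ oc get clusteroperators <2>
----
<1> Every control plane and worker machine whose CSRs you approved should eventually report a `Ready` status.
<2> The cluster is healthy when every Operator reports `Available=True` and none report `Degraded=True`.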

I know there's no diff for this, but can you add a bullet point to test the aws cli under
"Configuring your computer for installation", after the `aws configure --profile=<profile_name>`
command? The aws cli can be very finicky and stops users in their tracks. The OCP on AWS
reference architecture contains a good test. See this link (sorry for the bad link).

The test cmd is basically...

aws sts get-caller-identity

@mazzystr, will you try that link again? I'm happy to add that bullet point.

https://access.redhat.com/documentation/en-us/reference_architectures/2018/html-single/deploying_and_managing_openshift_3.9_on_amazon_web_services/index#client_testing

Thank you @mazzystr! I've added the test.
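
For reference, the proposed check is a single AWS CLI call. The sketch below shows the
shape of the round trip; the profile name and the account values in the sample output
are placeholders, not real values:

----
$ aws sts get-caller-identity --profile <profile_name> <1>
{
    "UserId": "AIDAEXAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/<user_name>"
}
----
<1> If the configured credentials are valid, the command returns the account and user that the profile resolves to; if they are not, it fails immediately, which is exactly why it is useful to run before starting the installation.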