5 changes: 5 additions & 0 deletions _topic_map.yml
@@ -83,6 +83,11 @@ Topics:
File: installing-aws-network-customizations
- Name: Uninstalling a cluster on AWS
File: uninstalling-cluster-aws
- Name: Installing on AWS UPI
Dir: installing_aws_upi
Topics:
- Name: Installing a cluster on AWS using CloudFormation templates
File: installing-aws-upi
- Name: Installing on bare metal
Dir: installing_bare_metal
Topics:
2 changes: 0 additions & 2 deletions installing/installing_aws/installing-aws-customizations.adoc
@@ -15,8 +15,6 @@ to host the cluster.

include::modules/installation-overview.adoc[leveloffset=+1]

include::modules/installation-clouds.adoc[leveloffset=+1]

include::modules/installation-provide-credentials.adoc[leveloffset=+1]
@mazzystr commented on Apr 17, 2019

I know there's no diff for this, but can you add a bullet point to test the aws CLI under Configuring your computer for installation, after the aws configure --profile=<profile_name> command? The aws CLI can be very finicky and stops users in their tracks.

The OCP on AWS reference architecture contains a good test. See this link (sorry for the bad link).

The test cmd is basically...
aws sts get-caller-identity
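
For anyone following along, a minimal sanity check might look like the following; the profile name is a placeholder and the JSON values are illustrative, but UserId, Account, and Arn are the standard fields in the STS response:

----
# If this command fails, the credentials or profile are misconfigured.
$ aws sts get-caller-identity --profile=<profile_name>
{
    "UserId": "AIDAEXAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/<user_name>"
}
----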


@mazzystr, will you try that link again? I'm happy to add that bullet point.


Thank you @mazzystr! I've added the test.


include::modules/installation-obtaining-installer.adoc[leveloffset=+1]
2 changes: 0 additions & 2 deletions installing/installing_aws/installing-aws-default.adoc
@@ -15,8 +15,6 @@ to host the cluster.

include::modules/installation-overview.adoc[leveloffset=+1]

include::modules/installation-clouds.adoc[leveloffset=+1]

include::modules/installation-provide-credentials.adoc[leveloffset=+1]

include::modules/installation-obtaining-installer.adoc[leveloffset=+1]
78 changes: 78 additions & 0 deletions installing/installing_aws_upi/installing-aws-upi.adoc
@@ -0,0 +1,78 @@
[id="installing-aws-upi"]
= Installing a cluster on AWS using CloudFormation templates
include::modules/common-attributes.adoc[]
:context: installing-aws-upi

toc::[]

In {product-title} version {product-version}, you can install a
cluster on Amazon Web Services (AWS) using infrastructure that you provide.

One way to create this infrastructure is to use the provided
CloudFormation templates. You can modify the templates to customize your
infrastructure or use the information that they contain to create AWS objects
according to your company's policies.
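
For example, each of these templates can be launched with the AWS CLI. The following is only a sketch; the stack name, template file, and parameter key shown here are placeholders rather than values from the provided templates:

----
# Create a stack from one of the provided CloudFormation templates.
# <cluster_name>, <template_file>, and the parameter key and value are placeholders.
$ aws cloudformation create-stack \
    --stack-name <cluster_name>-vpc \
    --template-body file://<template_file>.yaml \
    --parameters ParameterKey=<parameter_name>,ParameterValue=<value>

# Wait for stack creation to finish before you create resources that depend on it.
$ aws cloudformation wait stack-create-complete --stack-name <cluster_name>-vpc
----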

.Prerequisites

* xref:../../installing/installing_aws/installing-aws-account.adoc[Configure an AWS account]
to host the cluster.

include::modules/installation-overview.adoc[leveloffset=+1]

include::modules/installation-aws-upi-requirements.adoc[leveloffset=+1]

include::modules/installation-aws-permissions.adoc[leveloffset=+2]

include::modules/installation-obtaining-installer.adoc[leveloffset=+1]

include::modules/installation-provide-credentials.adoc[leveloffset=+1]

include::modules/installation-generate-aws-upi.adoc[leveloffset=+1]

include::modules/installation-extracting-infraid.adoc[leveloffset=+1]

include::modules/installation-creating-aws-vpc.adoc[leveloffset=+1]

include::modules/installation-cloudformation-vpc.adoc[leveloffset=+2]

include::modules/installation-creating-aws-dns.adoc[leveloffset=+1]

include::modules/installation-cloudformation-dns.adoc[leveloffset=+2]

include::modules/installation-creating-aws-security.adoc[leveloffset=+1]

include::modules/installation-cloudformation-security.adoc[leveloffset=+2]

include::modules/installation-aws-upi-rhcos-ami.adoc[leveloffset=+1]

include::modules/installation-creating-aws-bootstrap.adoc[leveloffset=+1]

include::modules/installation-cloudformation-bootstrap.adoc[leveloffset=+2]

include::modules/installation-creating-aws-control-plane.adoc[leveloffset=+1]

include::modules/installation-cloudformation-control-plane.adoc[leveloffset=+2]

include::modules/installation-aws-upi-bootstrap.adoc[leveloffset=+1]

////
[id="installing-workers-aws-upi"]
== Creating worker nodes

You can either manually create worker nodes or use a MachineSet to create worker
nodes after the cluster deploys. If you use a MachineSet to create and maintain
the workers, you can allow the cluster to manage them. This allows you to easily
scale, manage, and upgrade your workers.
////
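
If you do let a MachineSet manage the worker machines, scaling them later is a single command. This is only a sketch, and the MachineSet name is a placeholder:

----
# List the worker MachineSets in the cluster.
$ oc get machinesets --namespace openshift-machine-api

# Scale a MachineSet to the desired number of worker machines.
$ oc scale machineset <machineset_name> --namespace openshift-machine-api --replicas=3
----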


include::modules/installation-creating-aws-worker.adoc[leveloffset=+2]

include::modules/installation-cloudformation-worker.adoc[leveloffset=+3]

include::modules/cli-install.adoc[leveloffset=+1]

include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1]

include::modules/installation-aws-upi-installation.adoc[leveloffset=+1]
2 changes: 1 addition & 1 deletion modules/installation-aws-permissions.adoc
@@ -32,7 +32,7 @@ cluster, the IAM user requires the following permissions:
|`bootstrap`
|`s3:GetObject`
|Yes
|Allows fetching Ignition configs from installation bucket?
|Allows fetching Ignition config files from the installation bucket

|`master`
|`elasticloadbalancing:*`
34 changes: 34 additions & 0 deletions modules/installation-aws-upi-bootstrap.adoc
@@ -0,0 +1,34 @@
// Module included in the following assemblies:
//
// * installing/installing_aws_upi/installing-aws-upi.adoc

[id="installation-aws-upi-bootstrap-{context}"]
= Initializing the bootstrap node on AWS UPI

After you create all of the required infrastructure in Amazon Web Services (AWS),
you can install the cluster.

.Prerequisites

* Configure an AWS account.
* Generate the Ignition config files for your cluster.
* Create and configure a VPC and associated subnets in AWS.
* Create and configure DNS, load balancers, and listeners in AWS.
* Create control plane and compute roles.
* Create the bootstrap machine.
* Create the control plane machines.
* If you plan to manually manage the worker machines, create the worker machines.

.Procedure

. Change to the directory that contains the installation program and run the
following command:
+
----
$ ./openshift-install wait-for bootstrap-complete --dir=<installation-directory> <1>
----
<1> Specify the directory in which you generated the `install-config.yaml` file for
your cluster.
+
If the command exits without a `FATAL` warning, your production control plane
has initialized.
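
If you want more detail while you wait, you can increase the installer's log verbosity, assuming your installer build supports the `--log-level` flag; for example, a sketch of the same command at debug level:

----
$ ./openshift-install wait-for bootstrap-complete --dir=<installation-directory> --log-level=debug
----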
90 changes: 90 additions & 0 deletions modules/installation-aws-upi-installation.adoc
@@ -0,0 +1,90 @@
// Module included in the following assemblies:
//
// * installing/installing_aws_upi/installing-aws-upi.adoc

[id="installation-aws-upi-installation-{context}"]
= Completing an AWS UPI installation

After you start the {product-title} installation on Amazon Web Services (AWS)
user-provisioned infrastructure, remove the bootstrap node, reconcile the default
Machine and MachineSet definitions, and delete unused nodes.

.Prerequisites

* Deploy the bootstrap node for an {product-title} cluster on user-provisioned AWS infrastructure.
* Install the `oc` CLI and log in.

.Procedure

. Delete the bootstrap resources. If you used the CloudFormation template,
link:https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html[delete its stack]:
+
----
$ aws cloudformation delete-stack --stack-name <name> <1>
----
<1> `<name>` is the name of your bootstrap stack.

////
. View the list of machines in the `openshift-machine-api` namespace:

I think they're working on it, but I'll track the bug and follow up when it's resolved.

+
----
$ oc get machines --namespace openshift-machine-api
NAME INSTANCE STATE TYPE REGION ZONE AGE
test-tkh7l-master-0 m4.xlarge us-east-2 us-east-2a 9m22s
test-tkh7l-master-1 m4.xlarge us-east-2 us-east-2b 9m22s
test-tkh7l-master-2 m4.xlarge us-east-2 us-east-2c 9m21s
test-tkh7l-worker-us-east-2a-qjcxq m4.large us-east-2 us-east-2a 8m6s
test-tkh7l-worker-us-east-2b-nq8zs m4.large us-east-2 us-east-2b 8m6s
test-tkh7l-worker-us-east-2c-ww6c6 m4.large us-east-2 us-east-2c 8m7s
----
+
Note the `NAME` of each node. Because you manually deployed control plane
nodes, the master machines are not controlled by the Machine API. Similarly,
the worker machines are not backed by AWS instances on your subnet. You delete
each of these machines.

. Delete each of the listed machines:
+
----
$ oc delete machine --namespace openshift-machine-api <node_name> <1>
machine.machine.openshift.io "<node_name>" deleted
----
<1> Specify the name of a master or worker node to delete.
////

. Review the pending certificate signing requests (CSRs) and ensure that the
requests are for the machines that you added to the cluster:
+
----
$ oc get csr

NAME AGE REQUESTOR CONDITION
csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-b96j4 25s system:node:ip-10-0-52-215.us-east-2.compute.internal Approved,Issued
csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending
csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending
----

. Approve the CSRs for your cluster machines.
** To approve them individually, run the following command for each valid
CSR:
+
----
$ oc adm certificate approve <csr_name> <1>
----
<1> `<csr_name>` is the name of a CSR from the list of current CSRs.

** If all the CSRs are valid, approve them all by running the following
command:
+
----
$ oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve
----

. Complete the cluster installation:
+
----
$ ./openshift-install wait-for install-complete
INFO Waiting up to 30m0s for the cluster to initialize...
----
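
After the installation completes, one way to confirm that the cluster is healthy is to run a few standard `oc` queries. This is only a sketch; these commands are not taken from this procedure:

----
# Confirm that all nodes are in the Ready state.
$ oc get nodes

# Confirm that all cluster Operators report Available.
$ oc get clusteroperators
----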