2 changes: 1 addition & 1 deletion images/installer/Dockerfile.upi.ci
@@ -23,7 +23,7 @@ RUN yum install --setopt=tsflags=nodocs -y \
openssh-clients && \
yum update -y && \
yum install --setopt=tsflags=nodocs -y \
-    unzip gzip jq awscli util-linux && \
+    unzip gzip awscli util-linux && \
Contributor:
Can you provide a reason why we are dropping jq?

Contributor (author):
CI can't install epel-release and has an issue with internal repos, so the upi-installer build fails with jq in the package list.

@stevekuznetsov is looking at it afaik

Contributor:

@smarterclayton where do the devel-4.0 rpm repos come from?

Contributor:

Builds for this started seeing them as unreachable.

yum clean all && rm -rf /var/cache/yum/*

ENV TERRAFORM_VERSION=0.11.11
35 changes: 11 additions & 24 deletions upi/vsphere/README.md
@@ -19,37 +19,24 @@ sshKey: YOUR_SSH_KEY
3. Fill out a terraform.tfvars file with the ignition configs generated.
There is an example terraform.tfvars file in this directory named terraform.tfvars.example. The example file is set up for use with the dev cluster running at vcsa.vmware.devcluster.openshift.com. At a minimum, you need to set values for `cluster_id`, `cluster_domain`, `vsphere_user`, `vsphere_password`, `bootstrap_ignition_url`, `control_plane_ignition`, and `compute_ignition`.
The bootstrap ignition config must be placed in a location that will be accessible by the bootstrap machine. For example, you could store the bootstrap ignition config in a gist.
-Initially, the `bootstrap_complete` variable must be false, the `bootstrap_ip` variable must be an empty string, and the `control_plane_ips` variable must be an empty list.
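For illustration, a minimal terraform.tfvars sketch with placeholder values (every value below is hypothetical; see terraform.tfvars.example for the full set of options):

```
cluster_id             = "mycluster"
cluster_domain         = "mycluster.devcluster.openshift.com"
vsphere_user           = "YOUR_VSPHERE_USER"
vsphere_password       = "YOUR_VSPHERE_PASSWORD"
bootstrap_ignition_url = "https://example.com/bootstrap.ign"
control_plane_ignition = "CONTENTS_OF_MASTER_IGN_FILE"
compute_ignition       = "CONTENTS_OF_WORKER_IGN_FILE"
```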

4. Run `terraform init`.

-5. Run `terraform apply -auto-approve`.
+5. Ensure that you have your AWS profile set and a region specified. The installation will create AWS route53 resources for routing to the OpenShift cluster.

-6. Find the IP address of the bootstrap machine.
-If you provided an extra user, you can use that user to log into the bootstrap machine via the vSphere web console.
-Alternatively, you could iterate through the IP addresses in the 139.178.89.192/26 block looking for one that has the expected hostname, which is bootstrap-0.{cluster_domain}. For example, `ssh -i ~/.ssh/libra.pem -o StrictHostKeyChecking=no -q [email protected] hostname`
+6. Run `terraform apply -auto-approve -var 'step=1'`.
+This will create the bootstrap VM.

-7. Update the terraform.tfvars file with the IP address of the bootstrap machine.
+7. Run `terraform apply -auto-approve -var 'step=2'`.
+This will create the control-plane and compute VMs.

-8. Run `terraform apply -auto-approve`.
-From this point forward, route53 resources will be managed by terraform. You will need to have your AWS profile set and a region specified.
+8. Run `openshift-install upi bootstrap-complete`. Wait for the bootstrapping to complete.

-9. Find the IP addresses of the control plane machines. See step 6 for examples of how to do this. The expected hostnames are control-plane-{0,1,2}.{cluster_domain}. The control plane machines will change their IP addresses once. You need the final IP addresses. If you happen to use the first set of IP addresses, you can later update the IP addresses in the terraform.tfvars file and re-run terraform.
+9. Run `terraform apply -auto-approve -var 'step=3'`.
+This will destroy the bootstrap VM.

-10. Update the terraform.tfvars file with the IP addresses of the control plane machines.
+10. Run `openshift-install upi finish`. Wait for the cluster install to finish.

-11. Run `terraform apply -auto-approve`.
+11. Enjoy your new OpenShift cluster.

-12. Run `openshift-install user-provided-infrastructure`. Wait for the bootstrapping to complete.
-You *may* need to log into each of the control plane machines. It would seem that, for some reason, the etcd-member pod does not start until the machine is logged into.
-
-13. Update the terraform.tfvars file to set the `bootstrap_complete` variable to "true".
-
-14. Run `terraform apply -auto-approve`.
-
-15. Run `openshift-install user-provided-infrastructure finish`. Wait for the cluster install to finish.
-Currently, the cluster install does not finish. There is an outstanding issue with the openshift-console operator not installing successfully. The cluster should still be usable save for the console, however.
-
-16. Enjoy your new OpenShift cluster.
-
-17. Run `terraform destroy -auto-approve`.
+12. Run `terraform destroy -auto-approve -var 'step=3'`.
9 changes: 4 additions & 5 deletions upi/vsphere/machine/main.tf
@@ -34,6 +34,7 @@ data "ignition_user" "extra_users" {

name = "${var.extra_user_names[count.index]}"
password_hash = "${var.extra_user_password_hashes[count.index]}"
groups = ["sudo"]
}

data "ignition_config" "ign" {
@@ -53,17 +54,14 @@ data "ignition_config" "ign" {
resource "vsphere_virtual_machine" "vm" {
count = "${var.instance_count}"

-name = "${var.name}-${count.index}"
+name = "${var.cluster_id}-${var.name}-${count.index}"
resource_pool_id = "${var.resource_pool_id}"
datastore_id = "${data.vsphere_datastore.datastore.id}"
num_cpus = "4"
memory = "8192"
guest_id = "other26xLinux64Guest"
folder = "${var.folder}"

-wait_for_guest_net_timeout = 0
-wait_for_guest_net_routable = false

network_interface {
network_id = "${data.vsphere_network.network.id}"
}
@@ -80,7 +78,8 @@ resource "vsphere_virtual_machine" "vm" {

vapp {
properties {
-"guestinfo.coreos.config.data" = "${data.ignition_config.ign.*.rendered[count.index]}"
+"guestinfo.ignition.config.data" = "${base64encode(data.ignition_config.ign.*.rendered[count.index])}"
+"guestinfo.ignition.config.data.encoding" = "base64"
}
}
}
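A rough Python sketch (illustrative only) of the encoding contract the two vApp properties above establish: the config is base64-encoded, and the companion `.encoding` property tells Ignition how to decode it. The payload here is a hypothetical minimal config; real ones are produced by openshift-install.

```python
import base64
import json

# Hypothetical minimal ignition payload, stand-in for the rendered config.
ignition = json.dumps({"ignition": {"version": "2.2.0"}})

# Mirror of what the Terraform base64encode() interpolation produces.
vapp_properties = {
    "guestinfo.ignition.config.data": base64.b64encode(ignition.encode()).decode(),
    "guestinfo.ignition.config.data.encoding": "base64",
}

# The guest side decodes the property back to the original config.
decoded = base64.b64decode(vapp_properties["guestinfo.ignition.config.data"]).decode()
```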
3 changes: 3 additions & 0 deletions upi/vsphere/machine/outputs.tf
@@ -0,0 +1,3 @@
output "ip_addresses" {
value = ["${vsphere_virtual_machine.vm.*.default_ip_address}"]
}
4 changes: 4 additions & 0 deletions upi/vsphere/machine/variables.tf
@@ -51,3 +51,7 @@ variable "datacenter_id" {
variable "template" {
type = "string"
}

variable "cluster_id" {
type = "string"
}
29 changes: 22 additions & 7 deletions upi/vsphere/main.tf
@@ -1,3 +1,12 @@
locals {
bootstrap_needed = "${var.step < 3}"
nodes_needed = "${var.step > 1}"
dns_needed = "${var.step >= 2}"

control_plane_count = "${local.nodes_needed ? var.control_plane_instance_count : 0}"
compute_count = "${local.nodes_needed ? var.compute_instance_count : 0}"
}
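The step gating in these locals can be sketched as follows (Python, illustrative only; the instance counts of 3 are assumed defaults, not values from this PR):

```python
def resources_for(step, control_plane_count=3, compute_count=3):
    # Mirrors the Terraform locals: the bootstrap VM exists through step 2,
    # nodes exist from step 2 on, and DNS is managed from step 2 on.
    bootstrap_needed = step < 3
    nodes_needed = step > 1
    dns_needed = step >= 2
    return {
        "bootstrap": 1 if bootstrap_needed else 0,
        "control_plane": control_plane_count if nodes_needed else 0,
        "compute": compute_count if nodes_needed else 0,
        "dns_managed": dns_needed,
    }
```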

provider "vsphere" {
user = "${var.vsphere_user}"
password = "${var.vsphere_password}"
@@ -28,7 +37,7 @@ module "bootstrap" {
source = "./machine"

name = "bootstrap"
-instance_count = "${var.bootstrap_complete ? 0 : 1}"
+instance_count = "${local.bootstrap_needed ? 1 : 0}"
ignition_url = "${var.bootstrap_ignition_url}"
resource_pool_id = "${module.resource_pool.pool_id}"
datastore = "${var.vsphere_datastore}"
@@ -37,6 +46,7 @@
datacenter_id = "${data.vsphere_datacenter.dc.id}"
template = "${var.vm_template}"
cluster_domain = "${var.cluster_domain}"
cluster_id = "${var.cluster_id}"

extra_user_names = ["${var.extra_user_names}"]
extra_user_password_hashes = ["${var.extra_user_password_hashes}"]
@@ -46,7 +56,7 @@ module "control_plane" {
source = "./machine"

name = "control-plane"
-instance_count = "${var.control_plane_instance_count}"
+instance_count = "${local.nodes_needed ? var.control_plane_instance_count : 0}"
ignition = "${var.control_plane_ignition}"
resource_pool_id = "${module.resource_pool.pool_id}"
folder = "${module.folder.path}"
@@ -55,6 +65,7 @@
datacenter_id = "${data.vsphere_datacenter.dc.id}"
template = "${var.vm_template}"
cluster_domain = "${var.cluster_domain}"
cluster_id = "${var.cluster_id}"

extra_user_names = ["${var.extra_user_names}"]
extra_user_password_hashes = ["${var.extra_user_password_hashes}"]
@@ -64,7 +75,7 @@ module "compute" {
source = "./machine"

name = "compute"
-instance_count = "${var.compute_instance_count}"
+instance_count = "${local.nodes_needed ? var.compute_instance_count : 0}"
ignition = "${var.compute_ignition}"
resource_pool_id = "${module.resource_pool.pool_id}"
folder = "${module.folder.path}"
@@ -73,6 +84,7 @@
datacenter_id = "${data.vsphere_datacenter.dc.id}"
template = "${var.vm_template}"
cluster_domain = "${var.cluster_domain}"
cluster_id = "${var.cluster_id}"

extra_user_names = ["${var.extra_user_names}"]
extra_user_password_hashes = ["${var.extra_user_password_hashes}"]
@@ -81,8 +93,11 @@
module "dns" {
source = "./route53"

-base_domain = "${var.base_domain}"
-cluster_domain = "${var.cluster_domain}"
-bootstrap_ip = "${var.bootstrap_complete ? "" : var.bootstrap_ip}"
-control_plane_ips = "${var.control_plane_ips}"
+base_domain = "${var.base_domain}"
+cluster_domain = "${var.cluster_domain}"
+bootstrap_ip = ["${module.bootstrap.ip_addresses}"]
+control_plane_instance_count = "${local.dns_needed ? var.control_plane_instance_count : 0}"
+control_plane_ips = ["${module.control_plane.ip_addresses}"]
+compute_instance_count = "${local.dns_needed ? var.compute_instance_count : 0}"
+compute_ips = ["${module.compute.ip_addresses}"]
}
37 changes: 26 additions & 11 deletions upi/vsphere/route53/main.tf
@@ -1,14 +1,13 @@
locals {
-route53_zone_count = "${length(var.control_plane_ips) + length(var.bootstrap_ip) == "0" ? "0" : "1"}"
+control_plane_ips = "${var.control_plane_ips}"
+compute_ips = "${var.compute_ips}"
}

data "aws_route53_zone" "base" {
name = "${var.base_domain}"
}

resource "aws_route53_zone" "cluster" {
-count = "${local.route53_zone_count}"
-
name = "${var.cluster_domain}"
force_destroy = true

@@ -18,8 +17,6 @@ resource "aws_route53_zone" "cluster" {
}

resource "aws_route53_record" "name_server" {
-count = "${local.route53_zone_count}"
-
name = "${var.cluster_domain}"
type = "NS"
ttl = "300"
@@ -28,32 +25,30 @@
}

resource "aws_route53_record" "api" {
-count = "${local.route53_zone_count}"
-
type = "A"
ttl = "60"
zone_id = "${aws_route53_zone.cluster.zone_id}"
name = "api.${var.cluster_domain}"
set_identifier = "api"
-records = "${compact(concat(list(var.bootstrap_ip), var.control_plane_ips))}"
+records = ["${concat(var.bootstrap_ip, var.control_plane_ips)}"]

weighted_routing_policy {
weight = 90
}
}
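Illustrative Python for how the `api` record set above is assembled from the bootstrap and control-plane IPs (all addresses are made up):

```python
# While the bootstrap VM exists, its IP is served alongside the
# control-plane IPs; after step 3 the bootstrap list is empty and
# only the control plane answers for api.{cluster_domain}.
bootstrap_ip = ["10.0.0.5"]  # hypothetical; empty list after step 3
control_plane_ips = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]

api_records = bootstrap_ip + control_plane_ips          # during bootstrap
api_records_after = [] + control_plane_ips              # after teardown
```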

resource "aws_route53_record" "etcd_a_nodes" {
-count = "${length(var.control_plane_ips)}"
+count = "${var.control_plane_instance_count}"

type = "A"
ttl = "60"
zone_id = "${aws_route53_zone.cluster.zone_id}"
name = "etcd-${count.index}.${var.cluster_domain}"
-records = ["${var.control_plane_ips[count.index]}"]
+records = ["${local.control_plane_ips[count.index]}"]
}

resource "aws_route53_record" "etcd_cluster" {
-count = "${length(var.control_plane_ips) == "0" ? "0" : "1"}"
+count = "${var.control_plane_instance_count == "0" ? "0" : "1"}"

type = "SRV"
ttl = "60"
@@ -62,6 +57,26 @@
records = ["${formatlist("0 10 2380 %s", aws_route53_record.etcd_a_nodes.*.fqdn)}"]
}
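A rough Python equivalent of the `formatlist` call above, showing the `priority weight port target` shape of the resulting SRV records (the domain name is made up):

```python
# Mirrors formatlist("0 10 2380 %s", aws_route53_record.etcd_a_nodes.*.fqdn):
# priority 0, weight 10, etcd peer port 2380, then the node FQDN.
fqdns = ["etcd-%d.mycluster.example.com" % i for i in range(3)]
srv_records = ["0 10 2380 %s" % fqdn for fqdn in fqdns]
```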

resource "aws_route53_record" "control_plane_nodes" {
count = "${var.control_plane_instance_count}"

type = "A"
ttl = "60"
zone_id = "${aws_route53_zone.cluster.zone_id}"
name = "control-plane-${count.index}.${var.cluster_domain}"
records = ["${local.control_plane_ips[count.index]}"]
}

resource "aws_route53_record" "compute_nodes" {
count = "${var.compute_instance_count}"

type = "A"
ttl = "60"
zone_id = "${aws_route53_zone.cluster.zone_id}"
name = "compute-${count.index}.${var.cluster_domain}"
records = ["${local.compute_ips[count.index]}"]
}

resource "aws_route53_record" "ingress" {
count = "${var.compute_instance_count == "0" ? "0" : "1"}"

Expand Down
12 changes: 12 additions & 0 deletions upi/vsphere/route53/variables.tf
@@ -4,13 +4,25 @@ variable "cluster_domain" {
}

variable "bootstrap_ip" {
type = "list"
}

variable "control_plane_instance_count" {
type = "string"
}

variable "control_plane_ips" {
type = "list"
}

variable "compute_instance_count" {
type = "string"
}

variable "compute_ips" {
type = "list"
}

variable "base_domain" {
description = "The base domain used for public records."
type = "string"
19 changes: 2 additions & 17 deletions upi/vsphere/terraform.tfvars.example
@@ -1,14 +1,3 @@
-// Set to true once the bootstrapping is complete. The bootstrap machine will be destroyed if this variable is set to "true".
-//bootstrap_complete = true
-
-// The IP address of the bootstrap node.
-// If using the dev vSphere cluster, this IP will be in the 139.178.89.192/26 block.
-//bootstrap_ip = "139.178.89.xxx"
-
-// The IP addresses of the control plane nodes.
-// If using the dev vSphere cluster, these IPs will be in the 139.178.89.192/26 block.
-//control_plane_ips = ["139.178.89.xxx","139.178.89.xxx","139.178.89.xxx"]

// ID identifying the cluster to create. Use your username so that resources created can be tracked back to you.
cluster_id = "example-cluster"

@@ -40,12 +29,8 @@ vsphere_datacenter = "dc1"
// Name of the vSphere data store to use for the VMs. The dev cluster uses "nvme-ds1".
vsphere_datastore = "nvme-ds1"

-// Name of the VM template to clone to create VMs for the cluster. The dev cluster has templates named "rhcos-latest" and "rhcos-davis-no-ig".
-// The "rhcos-latest" template is a recent version of rhcos. There is an issue running the journald gateway on the bootstrap machine with the rhel8 rhcos.
-// If you want to use the latest rhcos, you should remove the systemd-journal-gatewayd systemd units from the bootstrap ignition config for the
-// time being.
-// The "rhcos-davis-no-ig" template is a rhel7 rhcos.
-vm_template = "rhcos-davis-no-ig"
+// Name of the VM template to clone to create VMs for the cluster. The dev cluster has a template named "rhcos-latest".
+vm_template = "rhcos-latest"

// URL of the bootstrap ignition. This needs to be publicly accessible so that the bootstrap machine can pull the ignition.
bootstrap_ignition_url = "URL_FOR_YOUR_BOOTSTRAP_IGNITION"
17 changes: 2 additions & 15 deletions upi/vsphere/variables.tf
@@ -80,15 +80,8 @@ variable "bootstrap_ignition_url" {
type = "string"
}

-variable "bootstrap_complete" {
-  type = "string"
-  default = "false"
-}
-
-variable "bootstrap_ip" {
-  type = "string"
-  description = "The IP address in the machine_cidr to apply to the bootstrap."
-  default = ""
-}
+variable "step" {
+  type = "string"
+}

///////////
@@ -105,12 +98,6 @@ variable "control_plane_ignition" {
type = "string"
}

-variable "control_plane_ips" {
-  type = "list"
-  description = "The IP addresses in the machine_cidr to apply to the control plane machines."
-  default = []
-}

//////////
// Compute machine variables
//////////