8 changes: 8 additions & 0 deletions _topic_map.yml
@@ -268,6 +268,14 @@ Topics:
      File: ipi-install-expanding-the-cluster
    - Name: Troubleshooting
      File: ipi-install-troubleshooting
  - Name: Deploying installer-provisioned clusters on IBM Cloud
    Dir: installing_ibm_cloud
    Distros: openshift-origin,openshift-enterprise
    Topics:
    - Name: Prerequisites
      File: install-ibm-cloud-prerequisites
    - Name: Installation workflow
      File: install-ibm-cloud-installation-workflow
  - Name: Installing with z/VM on IBM Z and LinuxONE
    Dir: installing_ibm_z
    Distros: openshift-enterprise
1 change: 1 addition & 0 deletions installing/installing_ibm_cloud/images
26 changes: 26 additions & 0 deletions installing/installing_ibm_cloud/install-ibm-cloud-installation-workflow.adoc
@@ -0,0 +1,26 @@
[id="install-ibm-cloud-installation-workflow"]
= Setting up the environment for an {product-title} installation
include::modules/common-attributes.adoc[]
:context: install-ibm-cloud-installation-workflow

toc::[]

include::modules/install-ibm-cloud-preparing-the-provisioner-node.adoc[leveloffset=+1]

include::modules/install-ibm-cloud-configuring-the-public-subnet.adoc[leveloffset=+1]

include::modules/ipi-install-retrieving-the-openshift-installer.adoc[leveloffset=+1]

include::modules/ipi-install-extracting-the-openshift-installer.adoc[leveloffset=+1]

include::modules/install-ibm-cloud-configuring-the-install-config-file.adoc[leveloffset=+1]

include::modules/ipi-install-additional-install-config-parameters.adoc[leveloffset=+1]

include::modules/ipi-install-root-device-hints.adoc[leveloffset=+1]

include::modules/ipi-install-creating-the-openshift-manifests.adoc[leveloffset=+1]

include::modules/ipi-install-deploying-the-cluster-via-the-openshift-installer.adoc[leveloffset=+1]

include::modules/ipi-install-following-the-installation.adoc[leveloffset=+1]
24 changes: 24 additions & 0 deletions installing/installing_ibm_cloud/install-ibm-cloud-prerequisites.adoc
@@ -0,0 +1,24 @@
[id="install-ibm-cloud-prerequisites"]
= Prerequisites
include::modules/common-attributes.adoc[]
:context: install-ibm-cloud

toc::[]

You can use installer-provisioned installation to install {product-title} on IBM Cloud® nodes. This document describes the prerequisites and procedures for installing {product-title} on IBM Cloud nodes.

[IMPORTANT]
====
Red Hat supports IPMI and PXE on the `provisioning` network only. Red Hat has not tested Redfish, virtual media, or other complementary technologies such as Secure Boot on IBM Cloud deployments. The `provisioning` network is required.
====

Installer-provisioned installation of {product-title} requires:

* One provisioner node with {op-system-first} 8.x installed
* Three control plane nodes
* One routable network
* One network for provisioning nodes

Before starting an installer-provisioned installation of {product-title} on IBM Cloud, address the following prerequisites and requirements.
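
For example, you can confirm the operating system version on the provisioner node before you begin. This is an optional check that assumes only shell access to the node; the output should report an 8.x release:

[source,terminal]
----
$ cat /etc/redhat-release
----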

include::modules/install-ibm-cloud-setting-up-ibm-cloud-infrastructure.adoc[leveloffset=+1]
1 change: 1 addition & 0 deletions installing/installing_ibm_cloud/modules
110 changes: 110 additions & 0 deletions modules/install-ibm-cloud-configuring-the-install-config-file.adoc
@@ -0,0 +1,110 @@
// This is included in the following assemblies:
//
// installing_ibm_cloud/install-ibm-cloud-installing-on-ibm-cloud.adoc

[id="configuring-the-install-config-file_{context}"]
= Configuring the install-config.yaml file

The `install-config.yaml` file requires some additional details. Most of the information teaches the installer and the resulting cluster enough about the available IBM Cloud® hardware so that it can fully manage it. The material difference between installing on bare metal and installing on IBM Cloud is that you must explicitly set the privilege level for IPMI in the BMC section of the `install-config.yaml` file.
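
Before you edit the file, you can optionally confirm that each BMC accepts IPMI sessions at the `OPERATOR` privilege level. The following is a minimal check, assuming `ipmitool` is installed on the provisioner node; the `-L` flag sets the session privilege level:

[source,terminal]
----
$ ipmitool -I lanplus -L OPERATOR -U <user> -P <password> -H <out_of_band_ip> power status
----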

.Procedure

. Configure `install-config.yaml`. Change the appropriate variables to match the environment, including `pullSecret` and `sshKey`.
+
[source,yaml]
----
apiVersion: v1
baseDomain: <domain>
metadata:
  name: <cluster_name>
networking:
  machineCIDR: <public_cidr>
  networkType: OVNKubernetes
compute:
- name: worker
  replicas: 2
controlPlane:
  name: master
  replicas: 3
  platform:
    baremetal: {}
platform:
  baremetal:
    apiVIP: <api_ip>
    ingressVIP: <wildcard_ip>
    provisioningNetworkInterface: <NIC1>
    provisioningNetworkCIDR: <CIDR>
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: ipmi://10.196.130.145?privilegelevel=OPERATOR <1>
          username: root
          password: <password>
        bootMACAddress: 00:e0:ed:6a:ca:b4 <2>
        rootDeviceHints:
          deviceName: "/dev/sda"
      - name: openshift-worker-0
        role: worker
        bmc:
          address: ipmi://<out-of-band-ip>?privilegelevel=OPERATOR <1>
          username: <user>
          password: <password>
        bootMACAddress: <NIC1_mac_address> <2>
        rootDeviceHints:
          deviceName: "/dev/sda"
pullSecret: '<pull_secret>'
sshKey: '<ssh_pub_key>'
----
+
<1> The `bmc.address` provides a `privilegelevel` configuration setting with the value set to `OPERATOR`. This is required for IBM Cloud.
<2> Add the MAC address of the private `provisioning` network NIC for the corresponding node.
+
[NOTE]
====
You can use the `ibmcloud` command-line utility to retrieve the password.

[source,terminal]
----
$ ibmcloud sl hardware detail <id> --output JSON | \
  jq '"\(.networkManagementIpAddress) \(.remoteManagementAccounts[0].password)"'
----

Replace `<id>` with the ID of the node.
====

. Create a directory to store the cluster configuration:
+
[source,terminal]
----
$ mkdir ~/clusterconfigs
----

. Copy the `install-config.yaml` file into the directory:
+
[source,terminal]
----
$ cp install-config.yaml ~/clusterconfigs
----

. Ensure all bare metal nodes are powered off prior to installing the {product-title} cluster:
+
[source,terminal]
----
$ ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off
----
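+
Optionally, you can verify the power state of each node afterward; this check assumes the same credentials and management address as the power-off command:
+
[source,terminal]
----
$ ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power status
----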

. Remove old bootstrap resources if any are left over from a previous deployment attempt:
+
[source,bash]
----
for i in $(sudo virsh list | tail -n +3 | grep bootstrap | awk '{print $2}');
do
  sudo virsh destroy $i;
  sudo virsh undefine $i;
  sudo virsh vol-delete $i --pool $i;
  sudo virsh vol-delete $i.ign --pool $i;
  sudo virsh pool-destroy $i;
  sudo virsh pool-undefine $i;
done
----
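+
To confirm that the cleanup succeeded, you can check that no bootstrap domains remain; an empty result is the expected outcome:
+
[source,terminal]
----
$ sudo virsh list --all | grep bootstrap
----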
191 changes: 191 additions & 0 deletions modules/install-ibm-cloud-configuring-the-public-subnet.adoc
@@ -0,0 +1,191 @@
// This is included in the following assemblies:
//
// installing_ibm_cloud/install-ibm-cloud-installing-on-ibm-cloud.adoc

[id="configuring-the-public-subnet_{context}"]
= Configuring the public subnet

All of the {product-title} cluster nodes must be on the public subnet. IBM Cloud® does not provide a DHCP server on the public subnet, so you must set one up separately on the provisioner node.

You must reset the BASH variables that you defined when preparing the provisioner node, because rebooting the provisioner node after preparing it deletes them.
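
For example, if your preparation steps exported network interface variables, set them again after any reboot. The variable names shown here are illustrative only; reuse the exact names and values from your own preparation procedure:

[source,terminal]
----
$ export PUB_CONN=<public_nic>
$ export PRIV_CONN=<private_nic>
----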

.Procedure

. Install `dnsmasq`:
+
[source,terminal]
----
$ sudo dnf install dnsmasq
----

. Open the `dnsmasq` configuration file:
+
[source,terminal]
----
$ sudo vi /etc/dnsmasq.conf
----

. Add the following configuration to the `dnsmasq` configuration file:
+
[source,text]
----
interface=baremetal
except-interface=lo
bind-dynamic
log-dhcp

dhcp-range=<ip_addr>,<ip_addr>,<pub_cidr> <1>
dhcp-option=baremetal,121,0.0.0.0/0,<pub_gateway>,<prvn_priv_ip>,<prvn_pub_ip> <2>

dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile
----
+
<1> Set the DHCP range. Replace both instances of `<ip_addr>` with one unused IP address from the public subnet so that the `dhcp-range` for the `baremetal` network begins and ends with the same IP address. Replace `<pub_cidr>` with the CIDR of the public subnet.
+
<2> Set the DHCP option. Replace `<pub_gateway>` with the IP address of the gateway for the `baremetal` network. Replace `<prvn_priv_ip>` with the IP address of the provisioner node's private IP address on the `provisioning` network. Replace `<prvn_pub_ip>` with the IP address of the provisioner node's public IP address on the `baremetal` network.
+
To retrieve the value for `<pub_cidr>`, execute:
+
[source,terminal]
----
$ ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .cidr
----
+
Replace `<publicsubnetid>` with the ID of the public subnet.
+
To retrieve the value for `<pub_gateway>`, execute:
+
[source,terminal]
----
$ ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .gateway -r
----
+
Replace `<publicsubnetid>` with the ID of the public subnet.
+
To retrieve the value for `<prvn_priv_ip>`, execute:
+
[source,terminal]
----
$ ibmcloud sl hardware detail <id> --output JSON | \
jq .primaryBackendIpAddress -r
----
+
Replace `<id>` with the ID of the provisioner node.
+
To retrieve the value for `<prvn_pub_ip>`, execute:
+
[source,terminal]
----
$ ibmcloud sl hardware detail <id> --output JSON | jq .primaryIpAddress -r
----
+
Replace `<id>` with the ID of the provisioner node.
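+
As an illustration only, a completed configuration might look like the following. Every value here is an example, not a default; substitute the values you retrieved with the commands above:
+
[source,text]
----
interface=baremetal
except-interface=lo
bind-dynamic
log-dhcp

dhcp-range=141.125.65.219,141.125.65.219,141.125.65.192/26
dhcp-option=baremetal,121,0.0.0.0/0,141.125.65.193,10.196.130.144,141.125.65.214

dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile
----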

. Obtain the list of hardware for the cluster:
+
[source,terminal]
----
$ ibmcloud sl hardware list
----

. Obtain the MAC addresses and IP addresses for each node:
+
[source,terminal]
----
$ ibmcloud sl hardware detail <id> --output JSON | \
  jq '.networkComponents[] | "\(.primaryIpAddress) \(.macAddress)"' | grep -v null
----
+
Replace `<id>` with the ID of the node.
+
.Example output
[source,terminal]
----
"10.196.130.144 00:e0:ed:6a:ca:b4"
"141.125.65.215 00:e0:ed:6a:ca:b5"
----
+
Make a note of the MAC address and IP address of the public network. Make a separate note of the MAC address of the private network, which you will use later in the `install-config.yaml` file. Repeat this procedure for each node until you have all the public MAC and IP addresses for the public `baremetal` network, and the MAC addresses of the private `provisioning` network.
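+
If you have many nodes, you can collect the address pairs in one pass with a short loop. This is a sketch; it assumes that `ibmcloud sl hardware list` accepts the `--output JSON` flag and returns an array of objects with an `id` field:
+
[source,bash]
----
# Illustrative: print "<ip> <mac>" pairs for every hardware node in the account.
for id in $(ibmcloud sl hardware list --output JSON | jq -r '.[].id'); do
  ibmcloud sl hardware detail $id --output JSON | \
    jq '.networkComponents[] | "\(.primaryIpAddress) \(.macAddress)"' | grep -v null
done
----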

. Add the MAC and IP address pair of the public `baremetal` network for each node into the `dnsmasq.hostsfile` file:
+
[source,terminal]
----
$ sudo vim /var/lib/dnsmasq/dnsmasq.hostsfile
----
+
.Example input
[source,text]
----
00:e0:ed:6a:ca:b5,141.125.65.215,master-0
<mac>,<ip>,master-1
<mac>,<ip>,master-2
<mac>,<ip>,worker-0
<mac>,<ip>,worker-1
...
----
+
Replace `<mac>,<ip>` with the public MAC address and public IP address of the corresponding node.

. Start `dnsmasq`:
+
[source,terminal]
----
$ sudo systemctl start dnsmasq
----

. Enable `dnsmasq` so that it starts when booting the node:
+
[source,terminal]
----
$ sudo systemctl enable dnsmasq
----

. Verify `dnsmasq` is running:
+
[source,terminal]
----
$ sudo systemctl status dnsmasq
----
+
.Example output
[source,terminal]
----
● dnsmasq.service - DNS caching server.
  Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled)
  Active: active (running) since Tue 2021-10-05 05:04:14 CDT; 49s ago
 Main PID: 3101 (dnsmasq)
    Tasks: 1 (limit: 204038)
   Memory: 732.0K
   CGroup: /system.slice/dnsmasq.service
           └─3101 /usr/sbin/dnsmasq -k
----

. Open ports `53` and `67` with UDP protocol:
+
[source,terminal]
----
$ sudo firewall-cmd --add-port 53/udp --permanent
----
+
[source,terminal]
----
$ sudo firewall-cmd --add-port 67/udp --permanent
----

. Add `provisioning` to the external zone with masquerade:
+
[source,terminal]
----
$ sudo firewall-cmd --change-zone=provisioning --zone=external --permanent
----
+
This step ensures network address translation for IPMI calls to the management subnet.

. Reload the `firewalld` configuration:
+
[source,terminal]
----
$ sudo firewall-cmd --reload
----
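+
As an optional verification, you can confirm that masquerading is active on the `external` zone and that the UDP ports are open. The second command assumes that you did not pass `--zone` when opening the ports, so they were added to the default zone:
+
[source,terminal]
----
$ sudo firewall-cmd --zone=external --query-masquerade
----
+
[source,terminal]
----
$ sudo firewall-cmd --list-ports
----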