18 changes: 18 additions & 0 deletions modules/installation-osp-api-octavia.adoc
@@ -0,0 +1,18 @@
// Module included in the following assemblies:
//
// * networking/openstack/load-balancing-openstack.adoc

[id="installation-osp-api-octavia_{context}"]
= Scaling clusters for application traffic by using Octavia

{product-title} clusters that run on {rh-openstack-first} can use the Octavia
load balancing service to distribute traffic across multiple VMs or floating IP
addresses. This feature mitigates the bottleneck that single machines or
addresses create.

If your cluster uses Kuryr, the Cluster Network Operator created an internal
Octavia load balancer at deployment. You can use this load balancer for
application network scaling.

If your cluster does not use Kuryr, you must create your own Octavia load
balancer to use it for application network scaling.
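
One way to confirm that Octavia is available in your deployment, assuming that the `python-octaviaclient` OpenStack CLI plugin is installed and a recent Octavia release is deployed, is to list the available load balancer providers:

[source,terminal]
----
$ openstack loadbalancer provider list
----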
82 changes: 82 additions & 0 deletions modules/installation-osp-api-scaling.adoc
@@ -0,0 +1,82 @@
// Module included in the following assemblies:
//
// * networking/openstack/load-balancing-openstack.adoc

[id="installation-osp-api-scaling_{context}"]
= Scaling clusters by using Octavia

If you want to use multiple API load balancers, or if your cluster does not use Kuryr, create an Octavia load balancer and then configure your cluster to use it.

.Prerequisites

* Octavia is available on your {rh-openstack} deployment.

.Procedure

. From a command line, create an Octavia load balancer that uses the Amphora driver:
+
[source,terminal]
----
$ openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet>
----
+
You can use a name of your choice instead of `API_OCP_CLUSTER`.
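+
Before you proceed, you can confirm that the load balancer is ready by checking its provisioning status, for example:
+
[source,terminal]
----
$ openstack loadbalancer show API_OCP_CLUSTER -c provisioning_status -f value
----
+
The status reads `ACTIVE` when the load balancer is ready.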

. After the load balancer becomes active, create a listener on port 6443:
+
[source,terminal]
----
$ openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS --protocol-port 6443 API_OCP_CLUSTER
----
+
[NOTE]
====
To view the load balancer's status, enter `openstack loadbalancer list`.
====

. Create a pool that uses the round robin algorithm and has session persistence enabled:
+
[source,terminal]
----
$ openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=SOURCE_IP --listener API_OCP_CLUSTER_6443 --protocol HTTPS
----

. To ensure that control plane machines are available, create a health monitor:
+
[source,terminal]
----
$ openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443
----

. Add the control plane machines as members of the load balancer pool:
+
[source,terminal]
----
$ for SERVER in MASTER-0-IP MASTER-1-IP MASTER-2-IP
do
  openstack loadbalancer member create --address $SERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443
done
----
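+
To confirm that all of the control plane machines were added, you can list the members of the pool:
+
[source,terminal]
----
$ openstack loadbalancer member list API_OCP_CLUSTER_pool_6443
----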

. Optional: To reuse the cluster API floating IP address, unset it:
+
[source,terminal]
----
$ openstack floating ip unset $API_FIP
----
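+
If `API_FIP` is not set in your shell, you can find the address by listing the floating IP addresses in your project, for example:
+
[source,terminal]
----
$ openstack floating ip list
----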

. Assign either the unset `API_FIP` or a new floating IP address to the created load balancer VIP:
+
[source,terminal]
----
$ openstack floating ip set --port $(openstack loadbalancer show -c vip_port_id -f value API_OCP_CLUSTER) $API_FIP
----
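+
As a quick check that the API responds through the new VIP, you can query the version endpoint, assuming that `<cluster_name>` and `<base_domain>` are replaced with your cluster values:
+
[source,terminal]
----
$ curl -k https://api.<cluster_name>.<base_domain>:6443/version
----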

Your cluster now uses Octavia for load balancing.

[NOTE]
====
If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora VM.

You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck.
====
40 changes: 40 additions & 0 deletions modules/installation-osp-kuryr-api-scaling.adoc
@@ -0,0 +1,40 @@
// Module included in the following assemblies:
//
// * networking/openstack/load-balancing-openstack.adoc

[id="installation-osp-kuryr-api-scaling_{context}"]
= Scaling clusters that use Kuryr by using Octavia

If your cluster uses Kuryr, associate your cluster's API floating IP address
with the pre-existing Octavia load balancer.

.Prerequisites

* Your {product-title} cluster uses Kuryr.

* Octavia is available on your {rh-openstack} deployment.

.Procedure

. Optional: To reuse the cluster API floating IP address, unset it from a command line:
+
[source,terminal]
----
$ openstack floating ip unset $API_FIP
----

. Assign either the unset `API_FIP` or a new floating IP address to the created load balancer VIP:
+
[source,terminal]
----
$ openstack floating ip set --port $(openstack loadbalancer show -c vip_port_id -f value ${OCP_CLUSTER}-kuryr-api-loadbalancer) $API_FIP
----
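+
To confirm the association, you can inspect the floating IP address, for example:
+
[source,terminal]
----
$ openstack floating ip show $API_FIP
----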

Your cluster now uses Octavia for load balancing.

[NOTE]
====
If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora VM.

You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck.
====
122 changes: 122 additions & 0 deletions modules/installation-osp-kuryr-ingress-scaling.adoc
@@ -0,0 +1,122 @@
// Module included in the following assemblies:
//
// * networking/openstack/load-balancing-openstack.adoc

[id="installation-osp-kuryr-octavia-scale_{context}"]
= Scaling for ingress traffic by using {rh-openstack} Octavia

You can use Octavia load balancers to scale Ingress controllers on clusters that use Kuryr.

.Prerequisites

* Your {product-title} cluster uses Kuryr.

* Octavia is available on your {rh-openstack} deployment.

.Procedure

. From a command line, copy the current internal router service definition to the file `external_router.yaml`:
+
[source,terminal]
----
$ oc -n openshift-ingress get svc router-internal-default -o yaml > external_router.yaml
----

. In the file `external_router.yaml`, change the value of `metadata.name` to a descriptive name, such as `router-external-default`, and the value of `spec.type` to `LoadBalancer`.
+
[source,yaml]
.Example router file
----
apiVersion: v1
kind: Service
metadata:
  labels:
    ingresscontroller.operator.openshift.io/owning-ingresscontroller: default
  name: router-external-default <1>
  namespace: openshift-ingress
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  - name: metrics
    port: 1936
    protocol: TCP
    targetPort: 1936
  selector:
    ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
  sessionAffinity: None
  type: LoadBalancer <2>
----
<1> Ensure that this value is descriptive, like `router-external-default`.
<2> Ensure that this value is `LoadBalancer`.

[NOTE]
====
You can delete timestamps and other information that is irrelevant to load balancing.
====

. From a command line, create a service from the `external_router.yaml` file:
+
[source,terminal]
----
$ oc apply -f external_router.yaml
----

. Verify that the service's external IP address is the same as the one that is associated with the load balancer:
.. On a command line, retrieve the service's external IP address:
+
[source,terminal]
----
$ oc -n openshift-ingress get svc
----
+
.Example output
[source,terminal]
----
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
router-external-default LoadBalancer 172.30.235.33 10.46.22.161 80:30112/TCP,443:32359/TCP,1936:30317/TCP 3m38s
router-internal-default ClusterIP 172.30.115.123 <none> 80/TCP,443/TCP,1936/TCP 22h
----

.. Retrieve the load balancer's IP address:
+
[source,terminal]
----
$ openstack loadbalancer list | grep router-external
----
+
.Example output
[source,terminal]
----
| 21bf6afe-b498-4a16-a958-3229e83c002c | openshift-ingress/router-external-default | 66f3816acf1b431691b8d132cc9d793c | 172.30.235.33 | ACTIVE | octavia |
----

.. Verify that the addresses you retrieved in the previous steps are associated with each other in the floating IP list:
+
[source,terminal]
----
$ openstack floating ip list | grep 172.30.235.33
----
+
.Example output
[source,terminal]
----
| e2f80e97-8266-4b69-8636-e58bacf1879e | 10.46.22.161 | 172.30.235.33 | 655e7122-806a-4e0a-a104-220c6e17bda6 | a565e55a-99e7-4d15-b4df-f9d7ee8c9deb | 66f3816acf1b431691b8d132cc9d793c |
----

You can now use the value of `EXTERNAL-IP` as the new Ingress address.
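
As a quick check, you can send a request to the new address with a `Host` header for one of your routes; the host name here is a placeholder for a route in your cluster. Any HTTP response, including a redirect, confirms that traffic reaches the router:

[source,terminal]
----
$ curl -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://10.46.22.161
----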


[NOTE]
====
If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora VM.

You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck.
====
6 changes: 5 additions & 1 deletion networking/load-balancing-openstack.adoc
@@ -5,4 +5,8 @@ include::modules/common-attributes.adoc[]

toc::[]

include::modules/installation-osp-kuryr-octavia-upgrade.adoc[leveloffset=+1]
include::modules/installation-osp-api-octavia.adoc[leveloffset=+1]
include::modules/installation-osp-api-scaling.adoc[leveloffset=+2]
include::modules/installation-osp-kuryr-api-scaling.adoc[leveloffset=+2]
include::modules/installation-osp-kuryr-ingress-scaling.adoc[leveloffset=+1]