Merged
66 changes: 66 additions & 0 deletions modules/nw-ingress-setting-internal-lb.adoc
@@ -0,0 +1,66 @@
// Module included in the following assemblies:
//
// * networking/ingress-operator.adoc

[id="nw-ingress-setting-internal-lb_{context}"]
= Configuring an Ingress Controller to use an internal load balancer

When creating an Ingress Controller on cloud platforms, the Ingress Controller is published by a public cloud load balancer by default.
As an administrator, you can create an Ingress Controller that uses an internal cloud load balancer.

You can configure the `default` Ingress Controller for your cluster to be internal by deleting and recreating it.

[WARNING]
====
If your cloud provider is Azure, you must have at least one public load balancer that points to your nodes.
If you do not, all of your nodes will lose egress connectivity to the Internet.
====
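One hedged way to satisfy this Azure requirement (the Service name and port below are hypothetical, not from this PR) is to keep a dummy public `LoadBalancer` Service in the cluster, similar to the dummy public load balancer the installer creates for internal clusters:

```yaml
# Hypothetical placeholder Service. Any Service of type LoadBalancer that is
# not annotated as internal causes the cloud provider to attach the cluster
# nodes to a public load balancer, which preserves egress connectivity on Azure.
apiVersion: v1
kind: Service
metadata:
  name: public-lb-placeholder
  namespace: default
spec:
  type: LoadBalancer
  ports:
  - port: 27627 # arbitrary unused port
    protocol: TCP
```

The Service intentionally has no selector; it exists only so that a public load balancer with the nodes in its backend pool remains in place.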

[IMPORTANT]
====
If you want to change the `scope` for an `IngressController` object, you must delete and then recreate that `IngressController` object. You cannot change the `.spec.endpointPublishingStrategy.loadBalancer.scope` parameter after the Custom Resource (CR) is created.
====

See the link:https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer[Kubernetes Services documentation]
for implementation details.
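For context, `scope: Internal` maps onto the cloud-provider mechanisms described in that documentation. As a hedged illustration (not part of this module), an ordinary Kubernetes Service is made internal on Azure with an annotation:

```yaml
# Illustration only: requesting an internal load balancer for a plain Service
# on Azure, per the linked Kubernetes documentation. Names are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: internal-app
  ports:
  - port: 80
```

The `IngressController` API hides this provider-specific detail behind the single `scope` field.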

.Prerequisites

* Install the OpenShift Command-line Interface (CLI), commonly known as `oc`.
* Log in as a user with `cluster-admin` privileges.

.Procedure

. Create an `IngressController` Custom Resource (CR) in a file named `<name>-ingress-controller.yaml`, such as in the following example:
+
[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  namespace: openshift-ingress-operator
  name: <name> <1>
spec:
  domain: <domain> <2>
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal <3>
----
<1> Replace `<name>` with a name for the `IngressController` object.
<2> Specify the `domain` for the application published by the controller.
If the `name` for the controller is `default` and you do not specify the `domain` parameter, the default cluster domain is used.
<3> Specify a value of `Internal` to use an internal load balancer.

Review comments on this hunk:

Contributor: `.spec.domain` is required unless we're talking about the `default` resource (which can omit `.spec.domain` and claim the default domain).

Contributor: @ironcladlou I'm helping out to get this PR merged and just want to verify - this is the only update this PR needs? Thanks! (I'll also likely have follow-up questions related to "User defined IngressControllers at installation".)
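The manifest from the step above can also be written from the command line; this is a hedged sketch with hypothetical values (`internal-apps`, `apps.internal.example.com`) substituted for `<name>` and `<domain>`:

```shell
# Sketch: write the IngressController manifest from the procedure to a file,
# using hypothetical values in place of <name> and <domain>.
NAME=internal-apps
DOMAIN=apps.internal.example.com

cat > "${NAME}-ingress-controller.yaml" <<EOF
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  namespace: openshift-ingress-operator
  name: ${NAME}
spec:
  domain: ${DOMAIN}
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal
EOF

echo "wrote ${NAME}-ingress-controller.yaml"
```

The resulting file is what you pass to `oc create -f` in the next step.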

. Create the Ingress Controller defined in the previous step by running the following command:
+
[source,terminal]
----
$ oc create -f <name>-ingress-controller.yaml <1>
----
<1> Replace `<name>` with the name of the `IngressController` object.

. Optional: Confirm that the Ingress Controller was created by running the following command:
+
[source,terminal]
----
$ oc get ingresscontrollers --all-namespaces
----
Review comments on this step:

Contributor: @jboxman, this restriction came up on the Group G arch call:
> 4.2.0 warning on switching ingress to Internal can cause all worker nodes to lose internet connectivity. The only way to prevent that would be to have at least one public k8s service even if it doesn't do anything

I think that we might need a step to confirm that you have such a service.

Contributor: I'm surprised to learn there is such a restriction, and it seems like a very serious defect. Is there an associated Bugzilla report?

Contributor: @abhinavdahiya, was there a bug for the need to have at least one public k8s service if you switch the ingress controller to private?

Contributor: It should be noted that the restriction applies specifically to Azure, not to other cloud platforms. (It has already been noted in the Group G arch call, but I'm repeating it here for the benefit of anyone who learns of the issue from seeing this thread.)

Contributor: I'm going to talk with @Miciah, @abhinavdahiya, and @wking about how we solve this transparently. I think we should treat the issue as a blocker bug that we'll solve before release and remove the note here entirely.

Contributor: Sounds like the installer already sets up egress for private clusters in 4.3 by installing a public LB, so there's nothing to note here after all?

Contributor: The installer only sets up egress by creating a dummy public standard load balancer when users request Internal clusters. So for public cluster users who want to make their ingress private as Day-2, there should be a warning that all nodes of the cluster will lose egress to the internet if there is no public LB pointing to those nodes.

Author: @abhinavdahiya, is that for any public cloud provider user, or just for Azure? As I'm still trying to internalize this, my first pass at this is simply the following:
> If your cloud provider is Azure, you must have at least one public load balancer that points to your nodes. If you do not, all of your nodes will lose egress connectivity to the Internet.

But if I include this as part of the procedure, is there a way to confirm that there is at least one public load balancer pointing to all nodes? Doesn't having a public load balancer defeat the purpose of private ingress? Thanks. cc @ironcladlou @Miciah

Author: Hi @ironcladlou @Miciah, any thoughts on this ^^? Thanks.

2 changes: 1 addition & 1 deletion modules/nw-ingress-sharding-namespace-labels.adoc
@@ -4,7 +4,7 @@
// * ingress-operator.adoc

[id="nw-ingress-sharding-namespace-labels_{context}"]
-= Configuring ingress controller sharding by using namespace labels
+= Configuring Ingress Controller sharding by using namespace labels

Ingress Controller sharding by using namespace labels means that the Ingress
Controller serves any route in any namespace that is selected by the namespace
4 changes: 2 additions & 2 deletions modules/nw-ingress-sharding-route-labels.adoc
@@ -1,10 +1,10 @@
// Module included in the following assemblies:
//
// * configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-ingress-controller.adoc
-// * ingress-operator.adoc
+// * networking/ingress-operator.adoc

[id="nw-ingress-sharding-route-labels_{context}"]
-= Configuring ingress controller sharding by using route labels
+= Configuring Ingress Controller sharding by using route labels

Ingress Controller sharding by using route labels means that the Ingress
Controller serves any route in any namespace that is selected by the route
4 changes: 3 additions & 1 deletion networking/ingress-operator.adoc
@@ -7,7 +7,7 @@ toc::[]

The Ingress Operator implements the `ingresscontroller` API and is the
component responsible for enabling external access to {product-title}
-cluster services. The operator makes this possible by deploying and
+cluster services. The Operator makes this possible by deploying and
managing one or more HAProxy-based
link:https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/[Ingress Controllers]
to handle routing. You can use the Ingress Operator to route traffic by
@@ -31,4 +31,6 @@ include::modules/nw-ingress-sharding-route-labels.adoc[leveloffset=+1]

include::modules/nw-ingress-sharding-namespace-labels.adoc[leveloffset=+1]

+include::modules/nw-ingress-setting-internal-lb.adoc[leveloffset=+1]

//include::modules/nw-ingress-select-route.adoc[leveloffset=+1]