Apply suggestions from code review
Co-authored-by: Tim Bannister <[email protected]>
aojea and sftim committed Nov 28, 2023
1 parent fcef7ce commit 4e8ced0
25 changes: 14 additions & 11 deletions content/en/docs/tasks/network/extend-service-ip-ranges.md

This document shares how to extend the existing Service IP range assigned to a cluster.

## API

Kubernetes clusters that are created with the `MultiCIDRServiceAllocator`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) will create a new ServiceCIDR object that takes the well-known name `kubernetes`, and that uses an IP address range
based on the value of the `--service-cluster-ip-range` command line argument to kube-apiserver.
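
A minimal sketch of how that flag and feature gate might be set on kube-apiserver; the exact invocation, and whether you also need `--runtime-config` to serve the alpha networking API group, depends on your Kubernetes version and deployment tooling:

```sh
# Sketch only: the two settings relevant to this page, alongside whatever
# other kube-apiserver flags your cluster already uses. Depending on the
# release, you may also need --runtime-config to enable the networking
# API group version that serves ServiceCIDR and IPAddress objects.
kube-apiserver \
  --service-cluster-ip-range=10.96.0.0/28 \
  --feature-gates=MultiCIDRServiceAllocator=true
```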

```sh
kubectl get servicecidr
NAME         CIDRS          AGE
kubernetes   10.96.0.0/28   17d
```
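
To read only the configured ranges, a sketch using a standard kubectl output option (the `spec.cidrs` field of a ServiceCIDR holds its list of CIDRs):

```sh
# Print just the CIDRs configured in the default ServiceCIDR object
kubectl get servicecidr kubernetes -o jsonpath='{.spec.cidrs}'
```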

The well-known `kubernetes` Service, which exposes the kube-apiserver endpoint to the Pods, takes the
first IP address from the default ServiceCIDR range and uses that IP address as its
cluster IP address.
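
For example, with the default 10.96.0.0/28 range shown above, the first usable IP address is 10.96.0.1, so that is the cluster IP you should expect the `kubernetes` Service to report.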

```sh
kubectl get service kubernetes
NAME        PARENTREF
10.96.0.1   services/default/kubernetes
```
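
The `NAME` and `PARENTREF` columns above are printed for IPAddress objects: each allocated cluster IP is backed by an IPAddress that records the Service it belongs to. A minimal sketch for listing them, assuming the IPAddress API is enabled in your cluster:

```sh
# List every allocated IP address together with the object that owns it
kubectl get ipaddresses
```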

The ServiceCIDRs are protected with {{<glossary_tooltip text="finalizers" term_id="finalizer">}}, to avoid leaving Service ClusterIPs orphaned;
the finalizer is only removed if there is another subnet that contains the existing IPAddresses or
there are no IPAddresses belonging to the subnet.
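
You can check which finalizers are present on a ServiceCIDR by reading its metadata; a sketch that relies only on standard object metadata:

```sh
# Show the finalizers recorded on the default ServiceCIDR
kubectl get servicecidr kubernetes -o jsonpath='{.metadata.finalizers}'
```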

There are cases where users will need to increase the number of IP addresses available to Services.

### Adding a new ServiceCIDR

On a cluster with a 10.96.0.0/28 range for Services, there are only 2^(32-28) - 2 = 14 IP addresses available. The `kubernetes.default` Service is always created; in this example, that leaves you with only 13 possible Services.

```sh
for i in $(seq 1 13); do kubectl create service clusterip "test-$i" --tcp 80 -o json | jq -r .spec.clusterIP; done
error: failed to create ClusterIP service: Internal error occurred: failed to allocate a serviceIP: range is full
```

You can increase the number of IP addresses available for Services by creating a new ServiceCIDR
that extends or adds new IP address ranges.

```sh
cat <<EOF | kubectl apply -f -
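# Illustrative manifest body: the object name matches the output shown below,
# but the apiVersion and the CIDR value are assumptions. Use the
# networking.k8s.io version served by your cluster and a range that fits
# your network plan.
apiVersion: networking.k8s.io/v1alpha1
kind: ServiceCIDR
metadata:
  name: newcidr1
spec:
  cidrs:
  - 10.96.0.0/24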
EOF
servicecidr.networking.k8s.io/newcidr1 created
```

This allows you to create new Services with ClusterIPs that will be picked from this new range.

```sh
for i in $(seq 13 16); do kubectl create service clusterip "test-$i" --tcp 80 -o json | jq -r .spec.clusterIP; done
```

### Deleting a ServiceCIDR

You cannot delete a ServiceCIDR if there are IPAddresses that depend on the ServiceCIDR.
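
To see which IPAddresses (and therefore which Services) are holding up a deletion, you can list the IPAddress objects and their parent references; a sketch that assumes the `parentRef` fields exposed by the IPAddress API:

```sh
# Show each allocated IP address and the Service it belongs to
kubectl get ipaddresses -o custom-columns='IP:.metadata.name,NAMESPACE:.spec.parentRef.namespace,SERVICE:.spec.parentRef.name'
```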

```sh
kubectl delete servicecidr newcidr1
servicecidr.networking.k8s.io "newcidr1" deleted
```

Kubernetes uses a finalizer on the ServiceCIDR to track this dependent relationship.
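
Although `kubectl delete` reported success, the object is kept while the finalizer is present; a quick way to confirm this, using only standard object metadata:

```sh
# A ServiceCIDR pending deletion keeps a deletionTimestamp and its finalizers
kubectl get servicecidr newcidr1 -o jsonpath='{.metadata.deletionTimestamp} {.metadata.finalizers}'
```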

```sh
kubectl get servicecidr newcidr1 -o yaml
```
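
To let the deletion finish, remove the Services whose cluster IPs come from that range. A minimal sketch, assuming the `test-13` through `test-16` Services created earlier are the only consumers of the new range:

```sh
# Delete the Services holding cluster IPs allocated from the ServiceCIDR pending deletion
for i in $(seq 13 16); do kubectl delete service "test-$i"; done
```
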
service "test-16" deleted
```

The control plane notices the removal of those Services. The control plane then removes its finalizer,
so that the ServiceCIDR that was pending deletion is actually removed.

```sh
kubectl get servicecidr newcidr1
```
