Describe the bug
If the Ingress Controller pod stops being the leader, it cannot become the leader again. This is especially problematic when only one replica is running: once the pod stops leading, it stops reporting statuses, so no statuses are reported at all until the pod is restarted.
To Reproduce
This problem was originally observed with NGINX Gateway Fabric (nginxinc/nginx-gateway-fabric#1100) when the pod lost connectivity with the API server, but it can be reproduced on NIC with the following steps:
1. Deploy the Ingress Controller with log level 3 and 1 replica.
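For reference, one way to deploy this configuration is via Helm, using the `my-release` release name and `nginx-ingress` namespace that appear in the logs below. The chart location and value names are assumptions and may differ across chart versions:

```shell
# Sketch only: chart URL and value names assumed; check your chart version's values.
helm install my-release oci://ghcr.io/nginxinc/charts/nginx-ingress \
  --namespace nginx-ingress --create-namespace \
  --set controller.logLevel=3 \
  --set controller.replicaCount=1
```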
Remove permissions to leases by editing the Ingress Controller clusterrole and removing the following section:
```yaml
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - get
  - list
  - watch
  - update
  - create
```
This forces an API server error when the Ingress Controller tries to renew its lease.
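For example, the ClusterRole can be edited in place; the name below is assumed to match the service account name seen in the logs (`my-release-nginx-ingress`):

```shell
# Opens the ClusterRole in an editor; delete the coordination.k8s.io/leases
# rule shown above, then save to apply.
kubectl edit clusterrole my-release-nginx-ingress
```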
3. Check the Ingress Controller logs and grep for "leader":
```
I1010 22:34:52.856615       1 leaderelection.go:250] attempting to acquire leader lease nginx-ingress/my-release-nginx-ingress-leader-election...
I1010 22:34:52.863761       1 leaderelection.go:260] successfully acquired lease nginx-ingress/my-release-nginx-ingress-leader-election
I1010 22:34:52.864055       1 leader.go:59] started leading
I1010 22:34:52.864083       1 leader.go:63] Updating status for 0 Ingresses
I1010 22:34:52.864088       1 leader.go:72] updating VirtualServer and VirtualServerRoutes status
E1010 22:37:08.053994       1 leaderelection.go:332] error retrieving resource lock nginx-ingress/my-release-nginx-ingress-leader-election: leases.coordination.k8s.io "my-release-nginx-ingress-leader-election" is forbidden: User "system:serviceaccount:nginx-ingress:my-release-nginx-ingress" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "nginx-ingress"
E1010 22:37:15.555731       1 leaderelection.go:332] error retrieving resource lock nginx-ingress/my-release-nginx-ingress-leader-election: leases.coordination.k8s.io "my-release-nginx-ingress-leader-election" is forbidden: User "system:serviceaccount:nginx-ingress:my-release-nginx-ingress" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "nginx-ingress"
I1010 22:37:23.031760       1 leaderelection.go:285] failed to renew lease nginx-ingress/my-release-nginx-ingress-leader-election: timed out waiting for the condition
I1010 22:37:23.031886       1 leader.go:96] stopped leading
```
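The log lines above can be collected with something like the following (the deployment name is assumed from the release name):

```shell
# Grep the controller logs for leader-election activity.
kubectl logs -n nginx-ingress deploy/my-release-nginx-ingress | grep -i leader
```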
4. Once you see the "stopped leading" message, restore the permissions to leases by editing the Ingress Controller's ClusterRole again and adding back the section removed in step 2.
5. Check the logs again and observe that the pod never starts leading again.
Expected behavior
The Ingress Controller Pod can become the leader again after the leader lease is lost.
Your environment
Version of the Ingress Controller: NGINX Ingress Controller Version=3.3.0 Commit=f255b03122e9a1e556c227172086b854cff6e4c3 Date=2023-09-26T18:32:05Z DirtyState=false Arch=linux/amd64 Go=go1.21.1
Version of Kubernetes: 1.28.0
Kubernetes platform (e.g. Minikube or GCP): kind
Using NGINX or NGINX Plus: NGINX nginx/1.25.2