[Nginx ingress controller] Downtime of service exposed via Ingress while doing upgrade. #2098
I believe a fix for this can be found at kubernetes/ingress-nginx#322 (comment).
Is this actually fixed? I still see this behaviour, but I'm using a relatively old version of the nginx ingress controller (0.8.3).
I'm seeing exactly the same thing, I set
Try this in your Deployment manifest:
Kubernetes sends the TERM signal to your container and, at the same time, sends the request to remove it from service to the apiserver. It takes a little while for the container to be removed from the proxies; until that has happened (usually ~1 s) the container will keep receiving new connections. This is due to the distributed nature of the system. Sadly, there is no state machine that guarantees a container is removed from service before the TERM signal is sent, and the Kubernetes maintainers have spoken out against adding one for now. Another problem is that, in the worst case, the nginx configuration only gets updated every 10 seconds. The preStop hook delays the TERM signal and gives the proxies and nginx time to update their configurations.
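The suggestion above boils down to adding a `lifecycle.preStop` hook that sleeps before termination. A minimal sketch of such a Deployment fragment (the container name, image, and the 15-second delay are illustrative, not taken from the linked comment):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: weather-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: weather-demo
  template:
    metadata:
      labels:
        app: weather-demo
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        ports:
        - containerPort: 80
        lifecycle:
          preStop:
            exec:
              # Delay the TERM signal so kube-proxy and the ingress
              # controller have time to drop this pod from their configs.
              command: ["/bin/sh", "-c", "sleep 15"]
      # Must be longer than the preStop sleep, or the pod is force-killed.
      terminationGracePeriodSeconds: 30
```

The sleep length is a trade-off: it should comfortably exceed the proxy/nginx reload interval mentioned above, while staying under `terminationGracePeriodSeconds`.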
/reopen |
@deviluper: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Test System:

Steps to reproduce

- Create a `Deployment` consisting of a simple service like a bare nginx, a `Service`, and an `Ingress` resource.
- Perform an update by simply increasing the `attempt` label and running `kubectl apply -f deploy.yaml`,
- while in a terminal fetching the NGINX web page every second with `watch -n 1 "curl http://weather-demo.internal.ch"`.
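The Service and Ingress from the steps above might look like the following minimal sketch. Only the host is taken from the curl command; the object names, ports, and selector are illustrative, and the manifest uses the current `networking.k8s.io/v1` API, which postdates the original issue:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: weather-demo
spec:
  selector:
    app: weather-demo
  ports:
  - port: 80          # port exposed by the Service
    targetPort: 80    # port the nginx container listens on
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: weather-demo
spec:
  rules:
  - host: weather-demo.internal.ch
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: weather-demo
            port:
              number: 80
```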
Expected results
The Deployment is updated with no downtime
Actual results
curl returns errors during the rollout, even though the new pods reach the `Running` state. If at the same time I expose the service via NodePort and curl there, the service is always available.