Unable to upgrade an API-Platform/FrankenPHP/Mercure Docker Swarm service without downtime #898
Is anyone able to give me any pointers on this one? We're now approaching a production launch, and I'd prefer not to have to do all our deploys out of hours just so we can have a brief outage for the update. Any pointers/thoughts/ideas are very welcome. Thank you.
I guess that Docker starts the new container before stopping the existing one. This is an issue when using the Bolt transport, because BoltDB relies on an exclusive file lock: the first container must release the lock before the second one can take it. One option is to upgrade to the (paid) on-premise version, which doesn't have this issue because, unlike Bolt, Redis supports concurrent connections. Another option would be to check whether Docker sends a signal to the existing container before starting the new one, catch that signal in the Bolt transport, and close the connection to the Bolt DB immediately (which would release the lock).
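A deployment-side workaround, assuming the service is currently using Swarm's `start-first` update order (the new task is started while the old one still holds the BoltDB lock): switching to `stop-first` makes Swarm stop the old task, releasing the lock, before the replacement starts. A minimal sketch of the relevant `deploy` block in a compose file (`app` and the image name are placeholders):

```yaml
services:
  app:
    image: my-app:latest   # hypothetical image name
    deploy:
      replicas: 1
      update_config:
        order: stop-first     # old task stops (lock released) before the new one starts
        failure_action: rollback
```

Note that `stop-first` trades the lock conflict for a brief window with no running task, so it does not give zero-downtime updates on its own.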
This issue seems to confirm this theory: influxdata/influxdb#24320
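The signal-handling option described above could be sketched as follows. Docker sends `SIGTERM` to a container's main process before stopping it, so a handler that closes the Bolt store on that signal would release the file lock promptly. This is a self-contained illustration, not Mercure's actual code: `transport` here is a stand-in for the real Bolt transport, and the goroutine that sends `SIGTERM` to the current process only simulates Docker for demonstration.

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

// transport is a stand-in for Mercure's Bolt transport; in the real
// transport, Close would close the bbolt database and release its
// exclusive file lock.
type transport struct{}

func (t *transport) Close() error {
	fmt.Println("bolt transport closed, lock released")
	return nil
}

func main() {
	t := &transport{}

	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)

	// Simulate Docker sending SIGTERM to the old container before the
	// replacement starts (demonstration only; Docker does this for real).
	go func() {
		p, _ := os.FindProcess(os.Getpid())
		p.Signal(syscall.SIGTERM)
	}()

	<-sigs
	// Release the lock before exiting so the new container can
	// acquire it without waiting for the stop timeout.
	t.Close()
}
```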
I fixed it. I had multiple Mercure processes running, which caused a couple of problems. It looks like it's fixed now.
Thanks for the info. Sorry for the delay in responding. We've just scheduled our releases to be out of hours. I'm thinking of simply removing the Mercure block from the Caddyfile for now, as we're not currently using Mercure, and working out how to re-add it in the future.
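For reference, the block in question would look roughly like the following in an API Platform-style Caddyfile (the directive options shown are illustrative, not copied from this project's actual configuration); commenting it out disables the hub while keeping the rest of the server configuration intact:

```
# mercure {
#     # Publisher/subscriber JWT keys read from the environment
#     publisher_jwt {env.MERCURE_PUBLISHER_JWT_KEY}
#     subscriber_jwt {env.MERCURE_SUBSCRIBER_JWT_KEY}
# }
```

With the hub disabled, no BoltDB file is opened, so the update-time lock conflict should no longer occur.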
I think that this is a Mercure issue, but please correct me if I'm wrong…
We have just deployed an API-Platform based project to a Docker Swarm and it's working nicely, however when we attempt to update the services, the first update attempt seems to always fail, with the following error appearing in the logs…
If we re-run the same `docker stack update` command, the existing service appears to stop, the API goes offline for a brief period while the new service starts up, and then everything works again. Is this caused by some form of locking on the Mercure data store? Is there a way around this?
I've briefly looked at the High Availability docs today, and how you can build a custom transport, but I'm not very familiar with Go, so would not know where to start with this. Any pointers on this, if it would help resolve this issue would be very much appreciated.
Thanks for all your great work on this project.