We currently use a timestamp annotation in our deployment template (similar to the suggestion in https://stackoverflow.com/a/64151965) to ensure that the `helm upgrade` run by GitHub Actions always recreates the pods. However, terminating a D2 pod sometimes takes up to a minute, during which both the old and the new instance may respond to user commands.
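For reference, the pattern from the linked answer looks roughly like this in the pod template metadata (a minimal sketch; the annotation key `rollme` is illustrative, not necessarily what our chart uses):

```yaml
spec:
  template:
    metadata:
      annotations:
        # Rendered to a new value on every `helm upgrade`,
        # so the pod spec changes and the pod is recreated.
        rollme: {{ now | unixEpoch | quote }}
```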
Ideally, we could find a way to either:

- wait for the old pod to fully terminate, with the downside of ~1 minute of downtime per deployment (see the sketch below),
- coordinate termination so that the terminating pod no longer responds (perhaps even by using Discord's sharding mechanism?), or
- restart the pod directly after running `helm upgrade`.
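For the first option, Kubernetes' built-in `Recreate` deployment strategy would implement the wait directly: the old pod is terminated completely before the new one is started, trading availability for the guarantee that only one instance responds at a time. A minimal sketch (the deployment name is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: d2  # illustrative name
spec:
  strategy:
    # Unlike the default RollingUpdate, Recreate kills the existing
    # pod and waits for graceful termination before creating the new one.
    type: Recreate
  # ... rest of the deployment spec unchanged
```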