This repository has been archived by the owner on Apr 4, 2023. It is now read-only.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug

What happened:
The cluster went unavailable after a scale-down of master pods.
What you expected to happen:
Reducing the number of masters should result in the new minimum master count being applied to all nodes before the scaled-down masters are terminated.
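For context (a sketch, not stated in this issue): in Elasticsearch 6.x the split-brain guard is `discovery.zen.minimum_master_nodes`, which should be kept at a strict majority of master-eligible nodes, floor(N / 2) + 1, and can be changed at runtime through the cluster settings API (the `elasticsearch:9200` endpoint below is a placeholder):

```shell
# Majority quorum for N master-eligible nodes in ES 6.x: floor(N / 2) + 1
masters=3
echo "quorum for $masters masters: $(( masters / 2 + 1 ))"

# Updating the setting at runtime via the ES 6.x cluster settings API
# (placeholder endpoint, shown as an illustration only):
# curl -XPUT 'http://elasticsearch:9200/_cluster/settings' \
#   -H 'Content-Type: application/json' \
#   -d '{"transient": {"discovery.zen.minimum_master_nodes": 2}}'
```

For 3 masters the quorum is 2 (matching the "minimum 2" in this report); for 6 masters it is 4.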
How to reproduce it (as minimally and precisely as possible):
Create a cluster with 3 masters (minimum master count 2). Add 3 more masters (bad practice, I know, but it was for a simple configuration change, so it should have been quick). After the 3 new masters are up and functional, scale the old set down to 0.
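The steps above can be sketched with kubectl (the StatefulSet names `es-master-old`/`es-master-new` are hypothetical; the real names depend on the deployment), along with the presumed quorum arithmetic behind the outage:

```shell
# Hypothetical reproduction (resource names are placeholders):
# kubectl scale statefulset es-master-new --replicas=3   # 6 masters total
# kubectl rollout status statefulset es-master-new       # wait until all are up
# kubectl scale statefulset es-master-old --replicas=0   # back to 3 masters

# Presumed failure mode: minimum_master_nodes was raised to the quorum for
# 6 masters but not lowered again before the old set was terminated.
required=$(( 6 / 2 + 1 ))   # quorum while 6 masters existed: 4
remaining=3                 # masters left after the old set is scaled to 0
[ "$remaining" -lt "$required" ] && echo "no quorum: $remaining < $required"
```

With only 3 master-eligible nodes against a required 4, no master can be elected until `minimum_master_nodes` is lowered, which would explain the unavailability.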
Anything else we need to know?:
Elasticsearch 6.3.1
Environment:
Kubernetes version (use kubectl version): 1.9.6
Cloud provider or hardware configuration: Azure
Install tools:
Others: