
Scale down of masters results in unavailable cluster #370

Open

cehoffman opened this issue Jul 20, 2018 · 2 comments
@cehoffman

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

The cluster became unavailable after a scale-down of master pods.

What you expected to happen:

Reducing the number of masters should result in the new minimum master count being applied to all nodes before the scaled-down masters are terminated.
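
For reference, this expectation follows the standard zen discovery quorum rule in ES 6.x, sketched below with the counts from the reproduction steps:

```
# zen discovery quorum rule (ES 6.x):
#   minimum_master_nodes = (master_eligible_nodes / 2) + 1
# 6 master-eligible nodes -> (6 / 2) + 1 = 4
# 3 master-eligible nodes -> (3 / 2) + 1 = 2
```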

How to reproduce it (as minimally and precisely as possible):

Create a cluster with 3 masters (minimum 2). Add 3 more masters (bad, I know, but it was for a simple configuration change so it should have been quick). After the 3 new masters are up and functional (one way to verify is sketched below), scale the old set to 0.
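
A hedged sketch for checking that all 6 masters have joined before scaling down; `localhost:9200` is an assumption for a port-forwarded connection to the cluster:

```sh
# List each node's name, roles, and whether it is the elected master ('*').
curl 'localhost:9200/_cat/nodes?v&h=name,node.role,master'
```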

Anything else we need to know?: ES 6.3.1

Environment:

  • Kubernetes version (use kubectl version): 1.9.6
  • Cloud provider or hardware configuration: Azure
  • Install tools:
  • Others:
@cehoffman (Author)

I worked around this using the cluster settings API (httpie syntax):

http put localhost:9200/_cluster/settings\?flat_settings=true persistent:='{"discovery.zen.minimum_master_nodes": 2}'
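
For anyone without httpie, a curl equivalent of the same call (`flat_settings` only affects how the response is printed):

```sh
# Persistently lower the quorum so the remaining masters can form a cluster.
curl -XPUT 'localhost:9200/_cluster/settings?flat_settings=true' \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"discovery.zen.minimum_master_nodes": 2}}'
```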

@cehoffman (Author)

The above fix doesn't persist across restarts and will still result in an unavailable cluster, likely because each node uses the discovery.zen.minimum_master_nodes value from its own elasticsearch.yml during master election at startup, before the persisted cluster settings are applied.
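
A durable fix would set the value statically on every master-eligible node, which is presumably what the operator should render into each pod's config on scale-down (a sketch of the relevant ES 6.x setting; how the operator injects it is an assumption):

```
# elasticsearch.yml on each master-eligible node
# quorum for 3 master-eligible nodes: (3 / 2) + 1 = 2
discovery.zen.minimum_master_nodes: 2
```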
