
Member storage scalability #873

Closed · davidquarles opened this issue Mar 8, 2017 · 1 comment

davidquarles commented Mar 8, 2017

As it stands, each member uses an emptyDir volume for storage, which for most users lands on the node's root partition. My primary concern is that on GCE (I'm unsure whether this is doable in other cloud providers) we can't resize the root partition online, so etcd-induced disk pressure forces us to change our instance template and manually cycle through and upgrade nodes. It also puts pressure on nodefs and makes node GC cycles more frequent.
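For concreteness, here is a minimal sketch of an emptyDir-backed member pod built with the k8s.io/api/core/v1 types; the pod name, image tag, and mount path are illustrative, not necessarily what the operator actually generates:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// memberPod sketches an etcd member pod whose data volume is an
// emptyDir. The kubelet materializes emptyDir volumes under
// /var/lib/kubelet/pods/<pod-uid>/volumes/... on the node, which by
// default sits on the root partition (nodefs).
func memberPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "etcd-member-0"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "etcd",
				Image: "quay.io/coreos/etcd:v3.1.5", // illustrative tag
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "etcd-data",
					MountPath: "/var/etcd", // illustrative mount path
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "etcd-data",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
		},
	}
}
```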

My first thought was to amend our instance templates and mount a disk on each node at /var/lib/kubelet/pods. I can imagine hostPath- and persistent-storage-based solutions as well, but I'm not entirely sure what the scope is here. Any insight would be very much appreciated.
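If the operator instead consumed a PersistentVolumeClaim per member, only the volume source would need to change; a minimal sketch, with a hypothetical claim name:

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// pvcDataVolume swaps the emptyDir source above for a reference to a
// per-member PersistentVolumeClaim, so the data lives on whatever the
// claim binds to (e.g. a GCE PD) rather than on the node's root disk.
func pvcDataVolume() corev1.Volume {
	return corev1.Volume{
		Name: "etcd-data",
		VolumeSource: corev1.VolumeSource{
			PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
				ClaimName: "etcd-member-0-data", // hypothetical claim name
			},
		},
	}
}
```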

See #552 for previous discussion.

xiang90 (Collaborator) commented Apr 25, 2017

This is basically identical to #957.

Let's continue the discussion there. In short, we need Kubernetes support for local PVs (local persistent volumes) to move this forward.
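For context, a local PV exposes a node-local disk through the usual PV/PVC machinery. A sketch of such an object as it later landed in Kubernetes, with illustrative name, path, capacity, and hostname (local volumes also carry node affinity so consumers are scheduled onto the node that owns the disk):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localPV sketches a local PersistentVolume backed by a dedicated disk
// mounted on one node, keeping etcd data off the root partition.
func localPV() *corev1.PersistentVolume {
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "etcd-local-pv-0"}, // illustrative name
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("10Gi"), // illustrative size
			},
			AccessModes: []corev1.PersistentVolumeAccessMode{
				corev1.ReadWriteOnce,
			},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				// A dedicated disk mounted on the node, outside nodefs.
				Local: &corev1.LocalVolumeSource{Path: "/mnt/disks/etcd-0"},
			},
			// Local volumes must be pinned to the node that owns the disk.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"node-1"}, // illustrative hostname
						}},
					}},
				},
			},
		},
	}
}
```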

xiang90 closed this as completed Apr 25, 2017