This repository has been archived by the owner on Mar 28, 2020. It is now read-only.
As it stands, each member uses an `emptyDir` volume for storage, which consumes the root partition for most users. My primary concern is that on GCE (I'm unsure whether this applies to other cloud providers) we can't do an online resize of the root partition, so whenever we experience etcd-induced disk pressure we have to change our instance template and manually cycle through and upgrade nodes. This also puts pressure on nodefs and increases the frequency of node GC cycles.

My first thought was to amend our instance templates and mount a disk on each node at `/var/lib/kubelet/pods`. I can imagine `hostPath`- and persistent-storage-based solutions as well, but am not entirely sure what the scope is here. Any insight would be very much appreciated.

See #552 for previous discussion.