K8s: Storage PD | PV | Partition | UseCase #248

AmitKumarDas opened this issue Oct 16, 2019 · 0 comments

We're currently using PVCs here, and:

  • We frequently encounter GCP zonal capacity issues.
    • Either a node or a GCE persistent disk is unavailable in a region.
    • This often results in unschedulable replicas in that region.
  • Local persistent disks are all or nothing.
    • A 10GB claim will consume an entire 375GB local SSD (see the PVC sketch after this list).
    • If we want different sizes, we need to pre-partition them.
    • This would also require some post-boot but pre-"Readiness" scripting on our GKE nodes.
    • I don't believe such a facility exists in GKE node pools specifically.
  • Our workloads tend to use less than 5% of their "burstable" maximum.
    • Being able to overcommit, e.g. a 20GB request with a 100GB limit using ephemeral storage, is appealing (see the Pod sketch after this list).
  • Cost: local SSDs on preemptible instances cost roughly one-fifth as much per GB as a GCE persistent disk.
  • PVs tend to induce significant API traffic in GCP.
    • We blew through our read request quota, and it took 48 hours for Google to raise it.
    • Once they did, our problems largely went away.
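
To make the "all or nothing" point concrete, here is a minimal sketch using the Python kubernetes client of the kind of claim involved. The StorageClass name `local-ssd`, the claim name, and the namespace are placeholders, not anything from our actual manifests:

```python
from kubernetes import client, config

# Assumes a reachable cluster and a local kubeconfig.
config.load_kube_config()
core = client.CoreV1Api()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "data-claim"},       # placeholder name
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "local-ssd",      # placeholder local-PV StorageClass
        # Only 10Gi is requested, but if the matching PV is a raw local SSD,
        # the entire 375GB device ends up bound to this one claim.
        "resources": {"requests": {"storage": "10Gi"}},
    },
}
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```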
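
And a sketch of the overcommit idea: a Pod that requests 20Gi of ephemeral storage (what the scheduler reserves) but is allowed to burst up to a 100Gi limit. The image and names are illustrative only:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "burstable-scratch-demo"},   # placeholder name
    "spec": {
        "containers": [{
            "name": "app",
            "image": "busybox",                        # illustrative image
            "command": ["sleep", "3600"],
            "resources": {
                # The scheduler reserves only the request; the limit caps how
                # far the container can burst into node-local scratch space.
                "requests": {"ephemeral-storage": "20Gi"},
                "limits": {"ephemeral-storage": "100Gi"},
            },
        }]
    },
}
core.create_namespaced_pod(namespace="default", body=pod)
```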