
Dynamic volume provisioning support #118

Closed
rimusz opened this issue Nov 16, 2018 · 72 comments
Assignees
Labels
kind/design Categorizes issue or PR as related to design. kind/feature Categorizes issue or PR as related to a new feature. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
Milestone

Comments

@rimusz
Contributor

rimusz commented Nov 16, 2018

Dynamic volume provisioning support would be handy for testing apps which need persistence.

@BenTheElder
Member

So we do have a default storage class (host-path), though I haven't really tested it out yet. This is required for some conformance tests.

const defaultStorageClassManifest = `# host-path based default storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  namespace: kube-system
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: kubernetes.io/host-path`
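
For anyone poking at this, the quickest way to see which class a cluster actually has (and which one is marked default) is a couple of read-only kubectl commands, assuming the class name "standard" from the manifest above:

# List storage classes; the default one is flagged in the output.
kubectl get storageclass

# Inspect kind's bundled class: provisioner, annotations, reclaim policy.
kubectl describe storageclass standard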

@BenTheElder
Member

I think we just need to document this
/kind documentation

If not, I'll update this issue with what else is required and follow-up.
/help
/priority important-longterm

@k8s-ci-robot k8s-ci-robot added kind/documentation Categorizes issue or PR as related to documentation. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Nov 16, 2018
@alejandrox1
Contributor

alejandrox1 commented Nov 16, 2018

I can help document this and test it out

@rimusz
Contributor Author

rimusz commented Nov 17, 2018

Yes, there is a default storage class, but it doesn't work for dynamic volume provisioning; deployments get stuck waiting for PVCs.
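
For illustration, a claim like the one below (names made up) is the sort of object that just sits in Pending against the stock class, since the kubernetes.io/host-path provisioner never actually creates a PV on its own:

# Hypothetical PVC against kind's bundled "standard" class.
# Nothing dynamically provisions a PV for it, so it stays Pending,
# and any deployment mounting it gets stuck waiting.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi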

@BenTheElder
Member

BenTheElder commented Nov 17, 2018 via email

@BenTheElder BenTheElder added kind/feature Categorizes issue or PR as related to a new feature. and removed kind/documentation Categorizes issue or PR as related to documentation. labels Nov 17, 2018
@davidz627

hostpath doesn't have dynamic provisioning, as a hostpath volume is tightly tied to the final location of the pod (since you're exposing the host machine's storage). Therefore it is only available for use in pre-provisioned/inline volumes.

For Dynamic Volume Provisioning without a cloud provider you can try nfs: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs

If you're on a cloud provider it would probably be easiest to use the cloud volumes.
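
For what it's worth, hooking up any external provisioner boils down to a StorageClass whose provisioner field matches the name that provisioner registers with; a rough sketch for the NFS one (the example.com/nfs name is an assumption and has to match how the provisioner was deployed):

# Sketch only: the provisioner string must match the deployed NFS
# provisioner's registered name (assumed here to be example.com/nfs).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic
provisioner: example.com/nfs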

/cc @msau42

@BenTheElder
Member

BenTheElder commented Nov 19, 2018 via email

@msau42

msau42 commented Nov 19, 2018

Sorry can someone explain some context to me? Is this for testing only, or do we actually want to run real production workloads? If it's testing only, there's a hostpath dynamic provisioner that uses the new volume topology feature to schedule correctly to nodes. However, it doesn't handle anything like capacity isolation or accounting.

I forgot, someone at rancher was working on this project. I can't remember the name at the moment though :(

@BenTheElder
Member

BenTheElder commented Nov 19, 2018 via email

@msau42

msau42 commented Nov 19, 2018

I found it @yasker https://github.com/rancher/local-path-provisioner

@rimusz
Contributor Author

rimusz commented Nov 20, 2018

OK, been messing with storage:

  1. https://github.com/kubernetes-incubator/external-storage/tree/master/nfs fails to work in kind
  2. https://github.com/rancher/local-path-provisioner / https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume did not get me anywhere either.

I had luck with https://github.com/rimusz/hostpath-provisioner which is based on https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/tree/master/examples/hostpath-provisioner

  1. deleted the default storage class which comes with kind
  2. installed the hostpath-provisioner helm chart, which installs a new default storage class for hostpath (rough commands are sketched at the end of this comment)
    Then installed 3 releases of postgres with helm to test that it can handle multiple pods, did the same for mysql, all worked fine:
mysql         mysql-5dbd494d67-fw7g6                         1/1     Running   0          4m
mysql2        mysql2-67976cdbc9-zd59h                        1/1     Running   0          4m
mysql3        mysql3-c79b9d5dd-tfkgp                         1/1     Running   0          4m
mysql4        mysql4-66d69d4ffc-l2c87                        1/1     Running   0          38s
pg            pg-postgresql-0                                1/1     Running   0          34m
pg2           pg2-postgresql-0                               1/1     Running   0          31m
pg3           pg3-postgresql-0                               1/1     Running   0          28m
$ k get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   REASON   AGE
pvc-14f6a7e0-ecc8-11e8-906e-02425a7d6bdf   8Gi        RWO            Delete           Bound    pg3/data-pg3-postgresql-0   hostpath                27m
pvc-46e3201b-ecc7-11e8-906e-02425a7d6bdf   8Gi        RWO            Delete           Bound    pg/data-pg-postgresql-0     hostpath                33m
pvc-6116f05a-eccb-11e8-906e-02425a7d6bdf   8Gi        RWO            Delete           Bound    mysql/mysql                 hostpath                4m
pvc-7027dff3-eccb-11e8-906e-02425a7d6bdf   8Gi        RWO            Delete           Bound    mysql2/mysql2               hostpath                3m
pvc-7668b7ae-eccb-11e8-906e-02425a7d6bdf   8Gi        RWO            Delete           Bound    mysql3/mysql3               hostpath                3m
pvc-99b0f805-ecc7-11e8-906e-02425a7d6bdf   8Gi        RWO            Delete           Bound    pg2/data-pg2-postgresql-0   hostpath                31m
pvc-f191c00d-eccb-11e8-906e-02425a7d6bdf   8Gi        RWO            Delete           Bound    mysql4/mysql4               hostpath                11s

How it looks inside the kind container:

$ docker exec -it a21a27399140 bash
root@kind-1-control-plane:/# ls -alh /var/kubernetes
total 36K
drwxr-xr-x 9 root root   4.0K Nov 20 13:55 .
drwxr-xr-x 1 root root   4.0K Nov 20 13:14 ..
drwxrwxrwx 5  999 docker 4.0K Nov 20 13:51 mysql-mysql-pvc-6116f05a-eccb-11e8-906e-02425a7d6bdf
drwxrwxrwx 5  999 docker 4.0K Nov 20 13:52 mysql2-mysql2-pvc-7027dff3-eccb-11e8-906e-02425a7d6bdf
drwxrwxrwx 5  999 docker 4.0K Nov 20 13:52 mysql3-mysql3-pvc-7668b7ae-eccb-11e8-906e-02425a7d6bdf
drwxrwxrwx 5  999 docker 4.0K Nov 20 13:55 mysql4-mysql4-pvc-f191c00d-eccb-11e8-906e-02425a7d6bdf
drwxrwxrwx 3 1001   1001 4.0K Nov 20 13:22 pg-data-pg-postgresql-0-pvc-46e3201b-ecc7-11e8-906e-02425a7d6bdf
drwxrwxrwx 3 1001   1001 4.0K Nov 20 13:24 pg2-data-pg2-postgresql-0-pvc-99b0f805-ecc7-11e8-906e-02425a7d6bdf
drwxrwxrwx 3 1001   1001 4.0K Nov 20 13:28 pg3-data-pg3-postgresql-0-pvc-14f6a7e0-ecc8-11e8-906e-02425a7d6bdf

I think for the time being I will stick with this solution; it's easy to install and it works very well :-)

It should not be too difficult to port it to kind, @munnerz can do it in the blink of an eye :-)

Also docker4mac uses a HostPath-based provisioner, which is easier to implement compared to the local-volume one.
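
For anyone wanting to try the same thing, the steps above come down to roughly these commands (the chart repo URL is an assumption; check the rimusz/hostpath-provisioner README for the canonical install instructions):

# 1. remove kind's bundled default storage class
kubectl delete storageclass standard

# 2. install the hostpath-provisioner chart, which ships its own default class
#    (repo URL assumed; Helm 2 era syntax)
helm repo add rimusz https://charts.rimusz.net
helm install --name hostpath-provisioner rimusz/hostpath-provisioner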

@yasker

yasker commented Nov 20, 2018

@rimusz What's the issue with https://github.com/rancher/local-path-provisioner? Just curious.

@rimusz
Contributor Author

rimusz commented Nov 20, 2018

It did not work for me; the PV did not get created, so I did not spend too much time digging into why.
hostpath-provisioner worked for me straight away :-)

@yasker

yasker commented Nov 20, 2018

@rimusz Weird... If you've got time, can you open an issue with the log? You can see how to get the log using kubectl -n local-path-storage logs -f local-path-provisioner-d744ccf98-xfcbk (as seen in the doc). Though if you don't have time, I totally understand.

@rimusz
Contributor Author

rimusz commented Nov 21, 2018

@yasker next time I get free cycles I will take a look at local-path-provisioner again.
But when kind supports multi-node we will need something else, or maybe local-path-provisioner can be used there too :)

@msau42

msau42 commented Nov 21, 2018

Oh, if you are currently only supporting single node, then the in-tree hostpath provisioner should have worked fine. It's the same one that local-up-cluster.sh uses.

@BenTheElder
Member

Ah, yeah, currently only single-node, and it should indeed look like hack/local-up-cluster.sh's default storage. Multi-node will happen in the near future I suspect; an implementation exists but is not in yet and may take a bit.

@BenTheElder BenTheElder added this to the 2019 goals milestone Dec 18, 2018
@phisco
Contributor

phisco commented Jan 10, 2019

I can confirm @rimusz's solution is working for me too.

@BenTheElder
Member

exciting! perhaps we should ship this by default then :-)

@ks2211

ks2211 commented Jan 17, 2019

Confirming this solution works for me as well. If anyone is interested in using this solution without going through helm, I converted the chart to k8s resource YAML:

kubectl create -f filebelow.yaml

---
# Source: hostpath-provisioner/templates/storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.3
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Tiller
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: hostpath

---
# Source: hostpath-provisioner/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-name-hostpath-provisioner
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.3
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Tiller
---
# Source: hostpath-provisioner/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: release-name-hostpath-provisioner
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.3
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Tiller
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
# Source: hostpath-provisioner/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: release-name-hostpath-provisioner
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.3
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: release-name-hostpath-provisioner
subjects:
  - kind: ServiceAccount
    name: release-name-hostpath-provisioner
    namespace: default
---
# Source: hostpath-provisioner/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: release-name-hostpath-provisioner-leader-locking
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.3
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Tiller
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["list", "watch", "create"]
---
# Source: hostpath-provisioner/templates/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: release-name-hostpath-provisioner-leader-locking
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.3
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: release-name-hostpath-provisioner-leader-locking
subjects:
  - kind: ServiceAccount
    name: release-name-hostpath-provisioner
    namespace: default
---
# Source: hostpath-provisioner/templates/deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: release-name-hostpath-provisioner
  labels:
    app.kubernetes.io/name: hostpath-provisioner
    helm.sh/chart: hostpath-provisioner-0.2.3
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Tiller
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: hostpath-provisioner
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: hostpath-provisioner
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: release-name-hostpath-provisioner
      containers:
        - name: hostpath-provisioner
          image: "quay.io/rimusz/hostpath-provisioner:v0.2.1"
          imagePullPolicy: IfNotPresent
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: pv-volume
              mountPath: /mnt/hostpath
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
            
      volumes:
        - name: pv-volume
          hostPath:
            path: /mnt/hostpath
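
To smoke-test the provisioner after applying the manifest above, a claim against the new default "hostpath" class plus a throwaway pod should be enough; the names below are made up:

# Hypothetical claim: the "hostpath" class above is the default, so the
# provisioner should create and bind a PV for it automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: hostpath
  resources:
    requests:
      storage: 1Gi
---
# Throwaway pod that mounts the claim to confirm it actually binds.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo ok > /data/ok && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-claim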

@BenTheElder
Member

Very cool, I haven't managed to dig too deep into this yet (just starting to look more now) - is it feasible to adapt to multi-node at all? (I'd guess not so much...)
If not, perhaps we should check in documentation / example config for how to do this in the user guide(s).

@BenTheElder
Member

I do really think we should try to offer a solution to this, and FWIW I think single-node clusters will be most common, but multi-node exists in limited capacity now and will be important for some CI scenarios.

jglick added a commit to jglick/kubernetes-plugin that referenced this issue Oct 8, 2019
@BenTheElder BenTheElder modified the milestones: v0.6.0, 1.0 Oct 15, 2019
@Xtigyro

Xtigyro commented Oct 29, 2019

One limitation of Rancher's provisioner - it does not support selector.

For example:
Warning ProvisioningFailed 5s (x2 over 20s) rancher.io/local-path_local-path-provisioner-ccbdd96dc-796tn_f1095857-fa93-11e9-a90a-4a79b109c4fc failed to provision volume with StorageClass "local-path": claim.Spec.Selector is not supported

@aojea
Contributor

aojea commented Oct 31, 2019

One limitation of Rancher's provisioner - it does not support selector.

For example:
Warning ProvisioningFailed 5s (x2 over 20s) rancher.io/local-path_local-path-provisioner-ccbdd96dc-796tn_f1095857-fa93-11e9-a90a-4a79b109c4fc failed to provision volume with StorageClass "local-path": claim.Spec.Selector is not supported

I'm sure they would love the feedback 😄
cc: @yasker

@yasker

yasker commented Oct 31, 2019

@aojea sure, :)

@Xtigyro I might be wrong on this, but I am not sure what a provisioner needs to do to support claim.Spec.Selector. This selector is used by a PVC to select an existing PV. Even though the request already reached the provisioner, I am not sure creating a new PV is the right thing to do. If there is a PV that matches the selector, Kubernetes should already handle it before it reaches the provisioner. Btw, I haven't seen a provisioner support that field yet.

And if you're looking for a way to specify a PV for a PVC, you can use pvc.spec.volumeName instead.
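
For reference, pinning a claim to a specific pre-existing volume with volumeName looks roughly like this (the volume name is made up):

# Hypothetical example: bind the PVC directly to a known PV by name
# instead of relying on spec.selector.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pinned-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""        # empty class skips dynamic provisioning
  volumeName: my-existing-pv
  resources:
    requests:
      storage: 8Gi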

@Xtigyro

Xtigyro commented Oct 31, 2019

If there is a PV matched the selector, Kubernetes should already handle it before it reaches the provisioner. Btw, I haven't seen a provisioner support the field yet.

And if you're looking for a way to specify a PV for PVC, you can use pvc.spec.volumeName instead.

@yasker Are we talking about dynamic provisioning? Because that's why I need it - so that the PVC can match the dynamically provisioned PV.

An example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: puppet-hiera-claim
  labels:
    {{- include "puppetserver.hiera.labels" . | nindent 4 }}
{{- if .Values.storage.annotations }}
  annotations:
{{ toYaml .Values.storage.annotations | nindent 4 }}
{{- end }}
spec:
  {{- if .Values.storage.selector }}
  selector:
    matchLabels:
      {{- include "puppetserver.hiera.matchLabels" . | nindent 6 }}
  {{- end }}
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.storage.size | quote }}
{{- if .Values.storage.storageClass }}
{{- if (eq "-" .Values.storage.storageClass) }}
  storageClassName: ""
{{- else }}
  storageClassName: "{{ .Values.storage.storageClass }}"
{{- end }}
{{- end }}

The standard K8s hostPath provisioner supports it, though you're right - the one from AWS (AWS EBS), for example, does not support it either. I haven't tried the others from GCP/Azure/etc.

@yasker

yasker commented Oct 31, 2019

@Xtigyro I am not sure why you need the selector in the case above. The provisioners are always doing dynamic provisioning. You don't need a selector to ensure the matching between PVC and PV, since the PV is always created based on the spec of the PVC.

As for host path, is this the one you mentioned? https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/blob/master/examples/hostpath-provisioner/hostpath-provisioner.go#L64

I didn't see any usage of the selector in it; it seems to simply be ignored.

@Xtigyro

Xtigyro commented Nov 2, 2019

@Xtigyro I am not sure why you need the selector in the case above. The provisioners are always doing dynamic provisionings. You don't need to have a selector to ensure the matching between PVC and PV, since PV is always created based on the spec of PVC.

The official explanation: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#selector

In my case it appears that if multiple pods and their corresponding PVCs are created on one or more nodes in a multi-node cluster (using AWS EKS), sometimes without a selector the PVCs cannot bind to the correct PVs.

@yasker

yasker commented Nov 4, 2019

@Xtigyro As the Kubernetes documentation says, the selector is used to select existing volumes. The provisioner is used to create new volumes.

Since the provisioner is always creating a new volume, it should match the spec of the PVC 100% of the time (the new volume was created according to the requirements specified in the PVC spec); otherwise, it sounds like a Kubernetes bug.

If you can provide steps to reproduce the issue, we can look into it further.

@averri

averri commented Nov 25, 2019

I found it @yasker https://github.com/rancher/local-path-provisioner

This saved my day, thanks.

@yashbhutwala
Contributor

@BenTheElder seems many issues have crept up with regards to storage and volume provisioning, I know I've had some too 😃 (#830, #430, buildpacks-community/kpack#217, buildpacks-community/kpack#201). Is making rancher/local-path-provisioner the default up for consideration, at least for the near future until upstream csi-driver stabilizes? Especially since local-path-provisioner also works well for multi-node setups 😄

@BenTheElder
Member

@BenTheElder seems many issues have crept up with regards to storage and volume provisioning, I know I've had some too 😃 (#830, #430, buildpacks-community/kpack#217, buildpacks-community/kpack#201). Is making rancher/local-path-provisioner the default up for consideration, at least for the near future until upstream csi-driver stabilizes?

It is but I'm not sure if we'd be leaving some people on other architectures SOL, and right now it should be ~two commands to swap out kind's for this 😅

If we add it upstream, it becomes an API 🙃

I'm also not sure the CSI driver is less stable. I'll take another look next week. Right now looking at the host restart issue again finally 🙃

Especially since local-path-provisioner also works well for multi-node setups 😄

AFAICT local-path-provisioner "works" for multi-node setups to the same degree as the CSI driver...? I.e. all volumes will wind up on the same node where the single driver instance is running, which is not super acceptable for Kubernetes testing purposes. The CSI driver should "soon" actually support multi-node; the driver basically does, but the CSI provisioner needs to support running as a daemonset.

kind would also need to drop support for old kubernetes versions, which is not something we've done yet.
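
For reference, those couple of commands would look roughly like this (manifest URL and class name are taken from the local-path-provisioner README as I remember it; verify before relying on them):

# Drop kind's bundled default class, install Rancher's provisioner,
# and mark its class as the new default.
kubectl delete storageclass standard
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
kubectl annotate storageclass local-path storageclass.kubernetes.io/is-default-class=true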

@yasker

yasker commented Dec 6, 2019

@BenTheElder Just one thing. Local Path Provisioner is able to provision the volume on any node from the beginning, not only the node running the provisioner. For example, you can configure it at https://github.com/rancher/local-path-provisioner#configuration

If there is anything I can do to help make Local Path Provisioner better suited to kind's usage, let me know.
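
For context on that configuration, the node-to-path mapping lives in the provisioner's ConfigMap as a config.json value; a rough sketch (the ConfigMap name, namespace, and paths below follow its README as I recall it and should be verified against the current docs):

# Sketch of the config local-path-provisioner reads; "worker-node-1" and the
# paths are placeholders, DEFAULT_PATH_FOR_NON_LISTED_NODES is the catch-all.
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |
    {
      "nodePathMap": [
        { "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES", "paths": ["/opt/local-path-provisioner"] },
        { "node": "worker-node-1", "paths": ["/data/local-path"] }
      ]
    }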

@BenTheElder
Member

Thanks @yasker, I looked into this deeper and we are shipping this with some customization 😅 #1157 is WIP to ship it.

See also #1151 (comment)

Kubernetes testing is going OK at the moment, so I am focusing on fixing local development issues: this, host restarts, user-defined networks, and registries are on my near-term list, in that order, other than ongoing maintenance stuff etc.

I appreciate everyone's patience with getting this right and juggling priorities.

@BenTheElder
Member

Fixed in #1157; kind at HEAD should now work fully with dynamic PVCs. Thanks @yasker. https://kubernetes.slack.com/archives/C92G08FGD/p1576098305111500

alejandroEsc added a commit to mesosphere/charts that referenced this issue Apr 14, 2020
As of Dec 11, 2019: kubernetes-sigs/kind#118 Kind supports PVC, as such we do not need to install a new storage class. In addition the new storage class we do install is not set as default, which is a bug.
makkes pushed a commit to mesosphere/charts that referenced this issue Apr 14, 2020
As of Dec 11, 2019: kubernetes-sigs/kind#118 Kind supports PVC, as such we do not need to install a new storage class. In addition the new storage class we do install is not set as default, which is a bug.
stg-0 pushed a commit to stg-0/kind that referenced this issue Jun 20, 2023
[EOS-11269] [validations] error when specifying a group with az, zone_distribution=balanced and quantity<3