Dynamic volume provisioning support #118
Comments
So we do have a default storage class (host-path), though I haven't really tested it out yet. This is required for some conformance tests (see lines 21 to 31 in 4d7dded).
I think we just need to document this. If not, I'll update this issue with what else is required and follow up.
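For reference, a default storage class of this kind is just a StorageClass object carrying the default-class annotation. The manifest below is a rough sketch of that shape (the name and provisioner are illustrative, not copied from the linked lines):

```yaml
# Sketch of a default host-path StorageClass (illustrative; not the exact manifest kind ships).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    # Marks this class as the cluster default, so PVCs with no storageClassName use it.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/host-path
```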
I can help document this and test it out.
Yes, there is a default storage class, but it doesn't work for dynamic volume provisioning; deployments get stuck waiting for PVCs.
Ok, thanks. We'll need to look into options for providing something more suitable. Any ideas @davidz627?
hostpath doesn't have dynamic provisioning, as a hostPath volume is highly tied to the final location of the pod (since you're exposing the host machine's storage). Therefore it is only available for use in pre-provisioned/inline volumes.
For dynamic volume provisioning without a cloud provider you can try NFS: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs
If you're on a cloud provider it would probably be easiest to use the cloud volumes.
/cc @msau42
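To illustrate what "pre-provisioned" means here: an admin creates a hostPath PV by hand pointing at a directory on a node, and a claim binds to it; nothing is created on demand. A minimal sketch, with made-up names, path, and sizes:

```yaml
# Hypothetical pre-provisioned hostPath PV plus a claim that binds to it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-hostpath-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/example-data   # directory on the node's filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-hostpath-claim
spec:
  storageClassName: ""        # empty class opts out of dynamic provisioning
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```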
We're inherently not on a cloud provider (these clusters are local, like minikube etc.) - is NFS the only option? Could we write one that handed out "local disks" that are just dirs or docker volumes or something..? I'm not the most experienced with stateful Kubernetes yet, admittedly, but it seems like we could come up with something. 🤔
Sorry, can someone explain some context to me? Is this for testing only, or do we actually want to run real production workloads? If it's testing only, there's a hostpath dynamic provisioner that uses the new volume topology feature to schedule correctly to nodes. However, it doesn't handle anything like capacity isolation or accounting. I forgot, someone at Rancher was working on this project. I can't remember the name at the moment though :(
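The volume topology feature mentioned here is usually surfaced through the StorageClass binding mode: provisioning is delayed until a pod using the claim is scheduled, so the volume can be placed on that pod's node. A minimal sketch (the class name and provisioner string are placeholders, not the provisioner discussed above):

```yaml
# Sketch of a topology-aware StorageClass: WaitForFirstConsumer delays binding and
# provisioning until a consuming pod is scheduled. The provisioner name is a placeholder.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware-example
provisioner: example.com/hostpath
volumeBindingMode: WaitForFirstConsumer
```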
Just testing. If anyone uses this project for production workloads they're crazy 😆 I'll try to look around for that; if you remember or have any other ideas that would be appreciated though. I wonder if minikube or kinvolk have tried to sort this out yet as well.
OK, been messing with storage:
I had luck with https://github.com/rimusz/hostpath-provisioner, which is based on https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/tree/master/examples/hostpath-provisioner
I think for the time being I will stick with this solution; it's easy to install and it works very well :-) It should not be too difficult to port it to …
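A quick way to check that dynamic provisioning works once a provisioner like this is installed is to create a throwaway claim against its storage class and watch it go from Pending to Bound. The class name below is an assumption about what the chart installs; adjust it to whatever `kubectl get storageclass` reports:

```yaml
# Throwaway PVC to verify dynamic provisioning; the storageClassName is an assumption.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-dynamic-claim
spec:
  storageClassName: hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
```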
@rimusz What's the issue with https://github.com/rancher/local-path-provisioner? Just curious.
It did not work for me; the PV did not get created, so I did not spend too much time digging into why.
@rimusz Weird... If you have time, can you open an issue with the log? You can see how to get the log using …
@yasker Next time I get free cycles I will take a look.
Oh, if you are currently only supporting single node, then the in-tree hostpath provisioner should have worked fine. It's the same one that localup.sh uses.
Ah, yeah, currently only single-node, and it should indeed look like …
Confirming @rimusz's solution is working for me too.
Exciting! Perhaps we should ship this by default then :-)
Confirming this solution works for me as well. If anyone is interested in using this solution without going through Helm, I converted the chart to k8s resource YAML.
Very cool, I haven't managed to dig too deep into this yet (just starting to look more now). Is it feasible to adapt to multi-node at all? (I'd guess not so much...)
I do really think we should try to offer a solution to this, and FWIW I think single-node clusters will be most common, but multi-node exists in limited capacity now and will be important for some CI scenarios.
One limitation of Rancher's provisioner: it does not support `selector.matchLabels`. For example:
I'm sure they would love the feedback 😄
@aojea sure :) @Xtigyro I might be wrong on this, but I am not sure what a provisioner needs to do to support `matchLabels`. And if you're looking for a way to specify a PV for a PVC, you can use …
@yasker Are we talking about dynamic provisioning? Because that's why I need it, so that the PVC can match the dynamically provisioned PV. An example: …
The standard K8s …
@Xtigyro I am not sure why you need the selector in the case above. The provisioners are always doing dynamic provisioning. You don't need a selector to ensure the matching between PVC and PV, since the PV is always created based on the spec of the PVC. As for host path, is this the one you mentioned? https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/blob/master/examples/hostpath-provisioner/hostpath-provisioner.go#L64 I didn't see any usage of the selector in it; it seems to just ignore it.
The official explanation: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#selector In my case it appears that if multiple pods and their common multiple PVCs are created on one or multiple nodes in a multi-node cluster (using AWS EKS), sometimes without a selector …
@Xtigyro As the Kubernetes documentation says, the selector is used to select existing PVs. Since the provisioner is always creating a new volume, it should match the spec of the PVC 100% of the time (since the new volume was created according to the requirements specified in the PVC spec); otherwise, it sounds like a Kubernetes bug. If you can provide reproduction steps for the issue, we can look more into it.
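For context on the selector being discussed: it matches labels on existing, pre-created PVs, which is why dynamic provisioners can ignore it. A sketch with invented names and labels:

```yaml
# Hypothetical PVC using a selector to pick among existing labeled PVs;
# dynamic provisioners generally ignore this field.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: labeled-claim
spec:
  storageClassName: ""          # opt out of dynamic provisioning; bind to an existing PV
  selector:
    matchLabels:
      app: my-database          # only PVs carrying this label are candidates
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```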
This saved my day, thanks.
@BenTheElder It seems many issues have crept up with regards to storage and volume provisioning; I know I've had some too 😃 (#830, #430, buildpacks-community/kpack#217, buildpacks-community/kpack#201). Is making rancher/local-path-provisioner the default up for consideration, at least for the near future until upstream …?
It is, but I'm not sure if we'd be leaving some people on other architectures SOL, and right now it should be ~two commands to swap out kind's for this 😅 If we add it upstream, it becomes an API 🙃 I'm also not sure the CSI driver is less stable. I'll take another look next week. Right now I'm looking at the host restart issue again, finally 🙃
AFAICT local-path-provisioner "works" for multi-node setups to the same degree as the CSI driver...? I.e. all volumes will wind up on the same node where the single driver instance is running, which is not super acceptable for Kubernetes testing purposes. The CSI driver should "soon" actually support multi-node; the driver basically does, but the CSI provisioner needs to support a DaemonSet. kind would also need to drop support for old Kubernetes versions, which is not something we've done yet.
@BenTheElder Just one thing: Local Path Provisioner has been able to provision the volume on any node from the beginning, not only the node running the provisioner. For example, you can configure it via https://github.com/rancher/local-path-provisioner#configuration If there is anything I can do to help make Local Path Provisioner better suit kind's usage, let me know.
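The configuration referenced there is a ConfigMap holding a small JSON document mapping nodes to host paths; roughly this shape (values are examples, see the linked README for the authoritative format):

```yaml
# Rough sketch of local-path-provisioner's node-to-path configuration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |
    {
      "nodePathMap": [
        {
          "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths": ["/opt/local-path-provisioner"]
        }
      ]
    }
```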
Thanks @yasker, I looked into this deeper and we are shipping this with some customization 😅 #1157 is WIP to ship it. See also #1151 (comment). Kubernetes testing is going OK at the moment, so I am focusing on … to fix local development issues. I appreciate everyone's patience with getting this right and juggling priorities.
Fixed in #1157, kind at HEAD should work fully with dynamic PVCs. Thanks @yasker. https://kubernetes.slack.com/archives/C92G08FGD/p1576098305111500
As of Dec 11, 2019 (kubernetes-sigs/kind#118): Kind supports PVCs, so we do not need to install a new storage class. In addition, the new storage class we do install is not set as default, which is a bug.
Dynamic volume provisioning support would be handy for testing apps which need persistence.