Persistent Volumes Clarification #13038

Closed
miaucl opened this issue Nov 29, 2021 · 17 comments · Fixed by #13040
Labels: addon/storage-provisioner, co/none-driver, kind/documentation, kind/support

Comments

miaucl commented Nov 29, 2021

I am using minikube to run my cluster and ran across a problem. I am not sure whether it belongs to the docs or the source code.

The documentation gives an example for PVs and how to define them here. Using persistent volume claims, you do not have to specify the host path and a generic one gets generated. BUT, why do you choose the tmp folder by default?! Please put a warning in the docs that using PVCs with the default minikube provisioner does, in fact, only store your data for a limited time.

Alternatively, change the default location to somewhere really persistent!
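
For illustration, a minimal claim like the one below (the name my-claim is made up) is enough to trigger the default provisioner, which backs it with a hostPath under /tmp/hostpath-provisioner:

    # sketch: pvc.yaml -- a claim against minikube's default "standard" class
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

    # kubectl apply -f pvc.yaml
    # the bound PV is a hostPath like /tmp/hostpath-provisioner/default/my-claim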

afbjorklund (Collaborator) commented Nov 29, 2021

There is a persistent directory mounted on that /tmp path, by the automounter. The actual disk or volume depends on the driver.

The files are not (supposed to be) stored in the temporary directory.

miaucl (Author) commented Nov 29, 2021

By default, minikube with driver none mounts the persistent volumes in the /tmp folder on the host machine, whatever that folder on the host machine happens to be. Everything inside minikube is fine, but WHY are you using the /tmp folder on the host machine instead of something else?

afbjorklund (Collaborator) commented Nov 29, 2021

The mount point was hardcoded in the original driver. If you run your own VM, you can use bind mounts.

This was supposed to be documented and eventually changed.

In the default minikube images everything is ephemeral, not only /tmp. So it didn't really matter.

Wherever the mount points lived, the data still had to be persisted to real storage (a data disk or a docker volume).

miaucl (Author) commented Nov 29, 2021

So the only real solution to have something persistent is to create my own PVs and PVCs and use those claims?

afbjorklund (Collaborator) commented Nov 29, 2021

Minikube uses /data as the top directory for custom host path volumes, but both that and the local path provisioner are supposed to be mounted from another (persistent) device. It depends on what disk options are available to the host.

If you only use one device, you can use symlinks or bind mounts...

e.g. /data -> /mnt/sdb1/data (etc)

/data -> /var/lib/data
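
Concretely, either variant could be set up like this (device and directory names are only examples):

    # symlink variant: point /data at a directory on the persistent disk
    mkdir -p /mnt/sdb1/data
    ln -s /mnt/sdb1/data /data

    # bind-mount variant: /var/lib lives on the (persistent) OS disk
    mkdir -p /var/lib/data /data
    mount --bind /var/lib/data /data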

miaucl (Author) commented Nov 29, 2021

Minikube uses /data as the top directory for custom host path volumes

I do not get this. When I install minikube with no driver, the dynamic volume provisioner stores the data of the volume claims requested by the k8s resources in /tmp on my host machine, not in the /data folder.

I am not sure where my misunderstanding lies here.

Do I correctly understand that a volume claim for a pod triggers the provisioner to provide a volume my-volume, which then gets mounted into the pod at /my-path and stored on the host machine under /tmp/hostpath-provisioner/my-namespace/my-volume, or am I missing a step in-between?
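
One way to check this, using the hypothetical names above, is to follow the claim to its bound volume:

    # find the PV bound to the claim, then see where it lives on the host
    PV=$(kubectl get pvc my-volume -n my-namespace -o jsonpath='{.spec.volumeName}')
    kubectl get pv "$PV" -o jsonpath='{.spec.hostPath.path}'
    # typically prints /tmp/hostpath-provisioner/my-namespace/my-volume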

afbjorklund added the addon/storage-provisioner, kind/support, and co/none-driver labels Nov 29, 2021
afbjorklund (Collaborator) commented Nov 29, 2021

On your VM/host, you need to mount some persistent storage on the mount points. They were not supposed to be used as-is.

I meant all three of them; /data was just shorter to type than /tmp/hostpath... (sorry about that). It should be documented better.

$ findmnt /tmp/hostpath-provisioner
TARGET                    SOURCE                           FSTYPE OPTIONS
/tmp/hostpath-provisioner /dev/vda1[/hostpath-provisioner] ext4   rw,relatime
docker@docker:~$ findmnt --notrunc /tmp/hostpath-provisioner
TARGET SOURCE                                                 FSTYPE OPTIONS
/tmp/hostpath-provisioner
       /dev/nvme0n1p2[/var/lib/docker/volumes/docker/_data/hostpath-provisioner]
                                                              ext4   rw,relatime,errors=remount-ro

Assuming /mnt/$PARTNAME is the persistent location:

    mkdir -p /mnt/$PARTNAME/data
    mkdir /data
    mount --bind /mnt/$PARTNAME/data /data

    mkdir -p /mnt/$PARTNAME/hostpath_pv
    mkdir /tmp/hostpath_pv
    mount --bind /mnt/$PARTNAME/hostpath_pv /tmp/hostpath_pv

    mkdir -p /mnt/$PARTNAME/hostpath-provisioner
    mkdir /tmp/hostpath-provisioner
    mount --bind /mnt/$PARTNAME/hostpath-provisioner /tmp/hostpath-provisioner
$ findmnt /mnt/vda1         
TARGET    SOURCE    FSTYPE OPTIONS
/mnt/vda1 /dev/vda1 ext4   rw,relatime

You could also use /var on the OS disk, instead of a data disk.

see https://github.com/kubernetes/minikube/tree/master/deploy/kicbase/automount
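
To make such bind mounts survive a reboot on a none-driver host, one option (a sketch; the partition name is an example, and systemd is assumed to create the mount point at boot) is an /etc/fstab entry:

    # /etc/fstab -- bind the persistent directory over the provisioner path
    /mnt/sda1/hostpath-provisioner  /tmp/hostpath-provisioner  none  bind  0  0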

afbjorklund added the kind/documentation label Nov 29, 2021
skol101 commented Dec 28, 2021

The issue as I understand it is pretty simple -- when Linux is restarted, it removes the 'hostpath-provisioner' directory, and users lose all data without warning.

Also, your suggestion should be updated with
    sudo chown 1001:1001 -R /mnt/permanent/hostpath-provisioner

afbjorklund (Collaborator) commented Dec 28, 2021

when Linux is restarted, it removes the 'hostpath-provisioner' directory, and users lose all data without warning.

As long as the data is being stored in a persistent location, there should be no data loss with the regular drivers?

The empty directories in /tmp or in /mnt just serve as mount points; the real data is in an image or in a volume.

It could have been fixed sooner for the docker driver, and it could have been documented better for the none driver.

skol101 commented Dec 29, 2021

I'm using driver=none because of an Nvidia GPU. I am also using Bitnami/MariaDB, which creates its 'persistent volume' under /tmp/hostpath-provisioner, which then gets deleted on machine reboot.

Is there a way to start minikube cluster with a different hostpath-provisioner location?

afbjorklund (Collaborator) commented Dec 29, 2021

Is there a way to start minikube cluster with a different hostpath-provisioner location?

There should have been, but for now you will have to set up the symlinks or bind mounts.

For the minikube OS, we do this with a service that is started at launch (the automount).

Added some minimal docs: https://minikube.sigs.k8s.io/docs/drivers/none/#persistent-storage
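
On a none-driver host, a comparable boot-time service could look roughly like this (a sketch modeled on minikube's automount; the unit name and the /mnt/data path are made up):

    # /etc/systemd/system/hostpath-automount.service (hypothetical name)
    [Unit]
    Description=Bind persistent storage over /tmp/hostpath-provisioner
    After=local-fs.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/bin/sh -c 'mkdir -p /mnt/data/hostpath-provisioner /tmp/hostpath-provisioner && mount --bind /mnt/data/hostpath-provisioner /tmp/hostpath-provisioner'

    [Install]
    WantedBy=multi-user.target

Enable it with systemctl enable --now hostpath-automount.service.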

skol101 commented Dec 29, 2021

I symlinked /tmp/hostpath-provisioner to /data/hostpath-provisioner, where /data is a non-temporary dir.

0 lrwxrwxrwx 1 sk sk 26 Dec 29 12:22 hostpath-provisioner -> /data/hostpath-provisioner

When I try 'helm install mdb bitnami/mariadb' I observe this in the minikube logs:

I1229 10:25:14.806860       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-mdb-mariadb-0", UID:"27fde345-1390-4283-ac28-c1e222a009c6", APIVersion:"v1", ResourceVersion:"8653", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/data-mdb-mariadb-0"
I1229 10:25:14.806797       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    14609b5e-fba7-4e26-b1d2-8800299fc63a 304 0 2021-12-29 08:43:04 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2021-12-29 08:43:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-27fde345-1390-4283-ac28-c1e222a009c6 &PersistentVolumeClaim{ObjectMeta:{data-mdb-mariadb-0  default  27fde345-1390-4283-ac28-c1e222a009c6 8653 0 2021-12-29 10:25:14 +0000 UTC <nil> <nil> map[app.kubernetes.io/component:primary app.kubernetes.io/instance:mdb app.kubernetes.io/name:mariadb] map[volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2021-12-29 10:25:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volume.beta.kubernetes.io/storage-provisioner":{}},"f:labels":{".":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/name":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{8589934592 0} {<nil>}  BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/data-mdb-mariadb-0
W1229 10:25:14.807115       1 controller.go:961] Retrying syncing claim "27fde345-1390-4283-ac28-c1e222a009c6" because failures 0 < threshold 15
E1229 10:25:14.807137       1 controller.go:981] error syncing claim "27fde345-1390-4283-ac28-c1e222a009c6": failed to provision volume with StorageClass "standard": mkdir /tmp/hostpath-provisioner: file exists
I1229 10:25:14.807139       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-mdb-mariadb-0", UID:"27fde345-1390-4283-ac28-c1e222a009c6", APIVersion:"v1", ResourceVersion:"8653", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "standard": mkdir /tmp/hostpath-provisioner: file exists

What should I do?

Also, I'm losing all my custom deployments on host reboot -- the cluster is recreated as if it didn't exist. /mnt is empty on my machine; no directories are actually mounted there.

Running findmnt /tmp/hostpath-provisioner finds nothing. The 'hostpath-provisioner' directory is created only after I run 'helm install mdb bitnami/mariadb'.

skol101 commented Dec 29, 2021

Anyway, I switched to microk8s, and it seems to be working as needed.

Sergiodcm00 commented
I symlinked /tmp/hostpath-provisioner to /data/hostpath-provisioner, where /data is a non-temporary dir.

0 lrwxrwxrwx 1 sk sk 26 Dec 29 12:22 hostpath-provisioner -> /data/hostpath-provisioner

When I try 'helm install mdb bitnami/mariadb' I observe this in the minikube logs

You can't use links, because the hostpath provisioner tries to mkdir /tmp/hostpath-provisioner (if the dir does not exist), and it fails if a link exists. A link is not a dir.
You must use bind mounts, but for me that does not work either.
The hostpath-provisioner puts the data behind the bind mount (I don't understand how), so the pod can't see it. I'm looking for solutions.

afbjorklund (Collaborator) commented Dec 30, 2021

because the hostpath provisioner tries to mkdir /tmp/hostpath-provisioner (if the dir does not exist), and it fails if a link exists

The shell command mkdir -p works fine; it is the Go implementation that throws the error. My bad.
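
A quick sketch of the difference (paths are illustrative; the Go-side error is the one from the logs above):

    # in the shell, a symlink that points at a real directory behaves like one
    mkdir -p /data/hostpath-provisioner
    ln -s /data/hostpath-provisioner /tmp/hostpath-provisioner
    mkdir -p /tmp/hostpath-provisioner/default   # succeeds, mkdir -p follows the link

    # the provisioner's Go-side mkdir instead reports:
    #   mkdir /tmp/hostpath-provisioner: file exists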

A link is not a dir.

Actually it is both.

Sergiodcm00 commented Dec 31, 2021

I'm speaking about the provisioner, not about the shell.

With minikube driver=none the volumes are stored under /tmp (both the mount point and the data). It is not possible to use symlinks to move the content to another path, and it seems that bind mounts do not work either (at least in my case, with surprising behavior).
Using /tmp is not a good idea, neither for the mount point nor for the data. As has been commented in this or some other thread, the /tmp path is hardcoded in the provisioner: https://github.com/kubernetes/minikube/blob/v0.30.0/pkg/storage/storage_provisioner.go#L49

In addition, the minikube doc is wrong in this regard: https://minikube.sigs.k8s.io/docs/handbook/persistent_volumes/ since it does not describe the scenario one encounters with minikube driver=none, and it says nothing about the OS cleanup of /tmp affecting those directories. So you will find (if you forget to account for that) after 10 days of inactivity that your volumes have been deleted (#9926).

There are some threads about this:
Set default location for PV mounts => #3318
Driver: none host path should be /var/tmp for PVCs => #7511
Feature request: Allow customizing host path used for dynamically provisioned PersistentVolumes => #5144

The SOLUTION we have found (thanks to Fran) is to replace minikube's hostpath provisioner with rancher's local-path-provisioner: https://github.com/rancher/local-path-provisioner, changing "paths": ["/opt/local-path-provisioner"]. See the sketch below.
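
For reference, a sketch of that swap (the manifest URL and StorageClass name follow the project's README; treat the exact steps as assumptions to verify):

    # install rancher's local-path-provisioner from the upstream deploy manifest
    kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

    # make its StorageClass the default instead of minikube's "standard"
    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
    kubectl patch storageclass standard -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

    # the storage directory is configured in the local-path-config ConfigMap,
    # e.g. "paths": ["/opt/local-path-provisioner"]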
