Make the persistent storage actually survive reboots #8151

Closed
afbjorklund opened this issue May 14, 2020 · 9 comments · Fixed by #8780
Labels

addon/storage-provisioner: Issues relating to storage provisioner addon
co/docker-driver: Issues related to kubernetes in container
co/none-driver
co/podman-driver: podman driver issues
kind/bug: Categorizes issue or PR as related to a bug.
kind/documentation: Categorizes issue or PR as related to documentation.
priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

afbjorklund (Collaborator) commented May 14, 2020

For the KIC (and for "none"), we need to make sure that the persistent directories are preserved:

https://minikube.sigs.k8s.io/docs/handbook/persistent_volumes/

These are persisted today:

/var/lib/minikube
/var/lib/docker
/var/lib/containers

(the third one is for crio and for podman)

These also need to be kept:

/data
/tmp/hostpath_pv
/tmp/hostpath-provisioner

(I don't think the third one is used anymore)

We can put them on the docker/podman volume, and at least mention them in the "none" docs...
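(For reference, with the docker driver the node's /var already lives on a named volume; roughly something like the following, where the volume name, image name and flags are illustrative rather than the exact command minikube runs:)

    # illustrative only: the node container gets a named volume mounted at /var,
    # so anything placed under /var survives container restarts
    docker volume create minikube
    docker run -d --name minikube --privileged -v minikube:/var <kicbase-image>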

They are normally bind mounts to some place that is persistent (like a disk image or a docker volume):

TARGET                    SOURCE                           FSTYPE OPTIONS
/tmp/hostpath_pv          /dev/sda1[/hostpath_pv]          ext4   rw,relatime
/tmp/hostpath-provisioner /dev/sda1[/hostpath-provisioner] ext4   rw,relatime
/mnt/sda1                 /dev/sda1                        ext4   rw,relatime
/var/lib/docker           /dev/sda1[/var/lib/docker]       ext4   rw,relatime
/var/lib/containers       /dev/sda1[/var/lib/containers]   ext4   rw,relatime
/data                     /dev/sda1[/data]                 ext4   rw,relatime
/var/lib/minikube         /dev/sda1[/var/lib/minikube]     ext4   rw,relatime
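(A table like the one above can be reproduced inside the VM or node container with findmnt; the grep filter below is just for narrowing the output:)

    # list mounts with the same columns as the table above
    findmnt -l -o TARGET,SOURCE,FSTYPE,OPTIONS | grep -E '/data|hostpath|/var/lib'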

A nice change to go with this one would be to stop volume-mounting all of /var (at the top level).

It conflicts with existing paths in the ubuntu image, like /var/lib/dpkg/alternatives.

See #8056 and #8100

The kicbase image doesn't boot without the /var mount, though.

afbjorklund added the addon/storage-provisioner, co/docker-driver, co/podman-driver, co/none-driver, kind/bug and kind/documentation labels on May 14, 2020
afbjorklund (Collaborator, Author) commented May 14, 2020

The suggestion is to add a compatibility symlink for the existing docker/podman volumes:

lib -> var/lib

And then move the content down a level before doing so. The same goes for "tmp" and "log" (see the sketch below).

tmp -> var/tmp
log -> var/log
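(A minimal sketch of that one-time migration, run against the root of the existing volume; the volume path here is a placeholder:)

    # one-time migration inside the existing docker/podman volume (sketch)
    cd /path/to/volume-root
    mkdir -p var
    # move the existing content down one level, then add compatibility symlinks
    mv lib var/lib && ln -s var/lib lib
    mv tmp var/tmp && ln -s var/tmp tmp
    mv log var/log && ln -s var/log log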

afbjorklund (Collaborator, Author):

We don't have to care about the other stuff in the current volume:

backups cache lib local lock log mail opt run spool tmp

Most of it is empty anyway.

backups/

0 directories, 0 files
cache/
├── apt
│   └── archives
│       ├── lock
│       └── partial
├── debconf
│   ├── config.dat
│   ├── passwords.dat
│   └── templates.dat
├── ldconfig
│   └── aux-cache
└── private

6 directories, 5 files
local/

0 directories, 0 files

lock -> /run/lock

mail/

0 directories, 0 files
opt/

0 directories, 0 files

run -> /run

spool/
└── mail -> ../mail

1 directory, 0 files

afbjorklund (Collaborator, Author) commented May 14, 2020

I might have missed some stuff above, like "containerd" and "boot2docker". Imagine those there.
(they should be added as well, but are not used very often and not important to the discussion)

afbjorklund (Collaborator, Author) commented May 14, 2020

We also discussed this briefly for the "none" driver, in the #7511 issue about the mountpoint.
(basically the ultimate location of the persistent storage is up to the setup of the host itself)

medyagh added the priority/important-soon label on May 15, 2020
medyagh (Member) commented May 15, 2020

symlink could work !

klingenm commented Jun 8, 2020

Hello!

Not sure if this is the same issue.

My setup:

  • minikube 1.11
  • --driver=docker
  • docker-desktop for docker daemon.

I have a PV, provisioned by minikube-hostpath:

Name:            pvc-033468d9-fb2a-4e7d-977c-4295422ae691
Labels:          <none>
Annotations:     hostPathProvisionerIdentity: 00670d1d-a643-11ea-8344-02429dc827cb
                 pv.kubernetes.io/provisioned-by: k8s.io/minikube-hostpath
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    standard
Status:          Bound
Claim:           core-local/db-data-local-db-0
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        5Gi
Node Affinity:   <none>
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/hostpath-provisioner/pvc-033468d9-fb2a-4e7d-977c-4295422ae691
    HostPathType:
Events:            <none>

@afbjorklund wrote:

> /data
> /tmp/hostpath_pv
> /tmp/hostpath-provisioner
> (I don't think the third one is used anymore)

As you can see from the provisioned host path, the third one is still used.

Looking at the mount table in the minikube container, I can only see /tmp being mounted as a tmpfs, and the hostpath-provisioner directory being an ordinary directory in it, thus disappearing every time I run minikube stop.
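(For reference, this can be checked from the host with something like the following; findmnt prints nothing when the path is not a mount point of its own:)

    minikube ssh -- findmnt /tmp
    minikube ssh -- findmnt /tmp/hostpath-provisioner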

Should my use case for "docker-desktop + minikube start --driver=docker" be covered by this issue?

afbjorklund (Collaborator, Author) commented Jun 8, 2020

@klingenm
Looks like the same issue; I don't think anything has changed yet. Only /var is mounted.
We also currently have an issue where we mount it twice and run into race conditions... #8100

afbjorklund (Collaborator, Author):

> (I don't think the third one is used anymore)

> As you can see from the provisioned host path, the third one is still used

Yeah, maybe it was the second one that is gone? Let's persist both of them.

afbjorklund (Collaborator, Author):

I think we will leave the /var volume as-is, since it comes from the KIC image.
So that means we will still have /var as the implicit top-level in the volume...

We probably don't want to use:
/var/hostpath_pv
/var/hostpath-provisioner
/var/data

But maybe that is the easiest? (I don't really believe in using /var/tmp for this.)

Either way, we need to make an automount service for the kicbase image.
It will be similar, but not identical, to the one that we are using for the iso image.

    # create backing directories on the persistent partition, then bind-mount
    # them at the paths that the provisioner and the docs expect
    mkdir -p /mnt/$PARTNAME/data
    mkdir -p /data
    mount --bind /mnt/$PARTNAME/data /data

    mkdir -p /mnt/$PARTNAME/hostpath_pv
    mkdir -p /tmp/hostpath_pv
    mount --bind /mnt/$PARTNAME/hostpath_pv /tmp/hostpath_pv

    mkdir -p /mnt/$PARTNAME/hostpath-provisioner
    mkdir -p /tmp/hostpath-provisioner
    mount --bind /mnt/$PARTNAME/hostpath-provisioner /tmp/hostpath-provisioner

The "/mnt/$PARTNAME" will need to be replaced with "/var" (or a subdirectory)
