Cannot write data to local PVC #3704

Closed
profhase opened this issue Jul 23, 2021 · 30 comments

@profhase

Environmental Info:
K3s Version:

k3s version v1.21.3+k3s1 (1d1f220f)
go version go1.16.6

Node(s) CPU architecture, OS, and Version:

Linux debian-8gb-nbg1-1 4.19.0-17-amd64 #1 SMP Debian 4.19.194-2 (2021-06-21) x86_64 GNU/Linux

Cluster Configuration:
Single node

Describe the bug:
Postgres does not come up due to mkdir: cannot create directory ‘/var/lib/postgresql/data’: Permission denied

    Container ID:   containerd://fb0246e6a5aa94fe5f14c5c387a2609616d0c198d8a5c5606a41a4792b2c90aa
    Image:          postgres:12
...
    Mounts:
      /var/lib/postgresql/data from postgres (rw,path="data")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7jkg4 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  postgres:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  postgres-awx-postgres-0
    ReadOnly:   false
  kube-api-access-7jkg4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true

Steps To Reproduce:

Expected behavior:
postgres comes up

Actual behavior:
postgres crashes

Additional context / logs:
mkdir: cannot create directory ‘/var/lib/postgresql/data’: Permission denied

@eli-kaplan

Experiencing the same issue with this latest version.

The issue appears to be a result of the following change:

Directories created for local-path-provisioner now have more restrictive permissions (#3548)

As a result, local-path persistent volumes appear to now only be writable by containers running as root, unless you explicitly change their permissions out-of-band or with a root initContainer.

Further, the local-path provisioner by design seems to not support securityContext.fsGroup in order to mitigate a possible privilege escalation (see: rancher/local-path-provisioner#7 (comment)), so we can't simply tell it to create the volume with the correct permissions.

Unless I am missing something, it seems this latest feature introduces a hard requirement of running a root container (either as an initContainer or the service container itself) in order to use local-path persistent volumes, which is not ideal.
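
For reference, a minimal sketch of the root-initContainer workaround. The pod name is a placeholder, and 999:999 assumes the UID/GID of the postgres user in the official image; adjust both for your own workload. The claim name matches the one from the original report.

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: postgres-with-init
spec:
  initContainers:
    - name: fix-perms
      image: busybox
      # Runs as root and hands the volume over to the postgres UID/GID (assumed 999:999)
      command: ["sh", "-c", "chown -R 999:999 /var/lib/postgresql/data"]
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  containers:
    - name: postgres
      image: postgres:12
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: postgres-awx-postgres-0
EOF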

@ChristianCiach

ChristianCiach commented Jul 23, 2021

Yeah, it looks like the 0700 permissions are applied to every volume directory, even though the original plan was to apply these permissions to the parent storage folder (--default-local-storage-path) only.

#2348 (comment)

I'm pretty sure that giving /var/lib/rancher/k3s/storage (and maybe /var/lib/rancher/k3s/data?) permissions 700 would prevent non-root users from accessing the volumes while still allowing them to be used by containers (no matter what user the container runs as).

rancher/local-path-provisioner#182

So the volume itself is 0777, but the parent directory is secured with 0700 and accessible by root only.

I wonder why the PR that got merged didn't implement it this way.
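
On the node, the intended layout would look something like this (volume directory names elided):

$ sudo stat -c '%a %n' /var/lib/rancher/k3s/storage
700 /var/lib/rancher/k3s/storage
$ sudo stat -c '%a %n' /var/lib/rancher/k3s/storage/pvc-*
777 /var/lib/rancher/k3s/storage/pvc-<uid>_<namespace>_<claim>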

@brandond
Member

brandond commented Jul 26, 2021

@dereknola can you take a look at this? It appears that with the permissions change, LocalStorage no longer supports containers that don't run as root.

@dereknola
Member

Yeah, I'll take a look.

@georglauterbach

I can confirm this behavior. This problem cripples all deployments with PVCs and non-root containers, which make up about 60% of my total workload.

Is the only workaround ATM to use an init-container?

@ChristianCiach

@georglauterbach You could also downgrade to K3s 1.21.2 until this is fixed.
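
For anyone else needing the downgrade, one way to pin the version (assuming the standard get.k3s.io install method; adjust for your setup):

$ curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.21.2+k3s1" sh -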

@georglauterbach

georglauterbach commented Aug 4, 2021

@ChristianCiach how do I do that in the best way possible? :)

Btw, thanks for the fast reply :D

PS: I figured it out. Thanks for the hint nevertheless :)

@rancher-max
Contributor

Validated on master branch commit 338f9cae3f5004e8a00489bf865025b76484b510

  • storage directory now has 701 permissions:
$ stat -c %a /var/lib/rancher/k3s/storage/
701
  • subdirectory is 777:
$ sudo stat -c %a /var/lib/rancher/k3s/storage/pvc-35801d3f-b6fc-45a8-b3e3-e7aba21343ba_default_postgres-awx-demo-postgres-0
777
  • Permissions behave as expected for a non-root user on the node:
ubuntu@maxnode:/var/lib/rancher/k3s/storage$ cd /var/lib/rancher/k3s/storage/pvc-35801d3f-b6fc-45a8-b3e3-e7aba21343ba_default_postgres-awx-demo-postgres-0 && ls
data
ubuntu@maxnode:/var/lib/rancher/k3s/storage/pvc-35801d3f-b6fc-45a8-b3e3-e7aba21343ba_default_postgres-awx-demo-postgres-0$ cd /var/lib/rancher/k3s/storage && ls
ls: cannot open directory '.': Permission denied
  • Can successfully write data to the local PVC as a non-root container, such as awx:
$ kubectl get pods -l "app.kubernetes.io/managed-by=awx-operator"
NAME                       READY   STATUS    RESTARTS   AGE
awx-demo-postgres-0        1/1     Running   0          4m25s
awx-demo-9975db9b6-x9zdw   4/4     Running   0          4m15s
$ kubectl get svc -l "app.kubernetes.io/managed-by=awx-operator"
NAME                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
awx-demo-postgres   ClusterIP   None         <none>        5432/TCP       4m29s
awx-demo-service    NodePort    10.43.2.11   <none>        80:30474/TCP   4m20s

$ k get pv,pvc -A
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                  STORAGECLASS   REASON   AGE
persistentvolume/pvc-a762866b-aa11-4477-a4c9-1e55a8a7767c   8Gi        RWO            Delete           Bound    default/postgres-awx-demo-postgres-0   local-path              43s

NAMESPACE   NAME                                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     persistentvolumeclaim/postgres-awx-demo-postgres-0   Bound    pvc-a762866b-aa11-4477-a4c9-1e55a8a7767c   8Gi        RWO            local-path     45s
  • Confirmed root containers continue to work as before, and the subdirectories now also have 777 permissions as expected.

@samip5

samip5 commented Aug 9, 2021

I take it that this will be available in v1.21.4 and not in v1.21.3? How would we get this fix before the .4 release?

@ChristianCiach

ChristianCiach commented Aug 9, 2021

I am a bit surprised about that, too. I think this bug is bad enough to justify an early v1.21.3+k3s2 bugfix release.

@flokli

flokli commented Aug 9, 2021

I personally don't care too much whether it'd be in a v1.21.3+k3s2 or a v1.21.4 release, but right now, since the release (18 days ago), the (only) default storage class is broken - so everyone not explicitly pinning a version will get a broken cluster.

@dereknola
Member

@samip5 Yes, upstream meaning Kubernetes.

@samip5

samip5 commented Aug 11, 2021

> @samip5 Yes, upstream meaning Kubernetes.

So how does that help us with the broken provisioner on k3s?

@dereknola
Member

The k3s release schedule is generally in lock-step with upstream Kubernetes, so the k3s v1.21.4 release comes after Kubernetes has released v1.21.4. K3s releases integrate the changes made upstream.

@brandond
Member

You can revert to the previous release, or wait until v1.21.4 is released within the next day or two.

@dereknola
Member

K3s v1.21.4 is now out with the fix for this issue.
https://github.com/k3s-io/k3s/releases/tag/v1.21.4%2Bk3s1

@andrewwebber

Is there a matching k3d release, as k3d seems to also be affected?

@brandond
Member

K3d can run any k3s release; just use the --image flag to specify the image and tag you want.
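
For example (the cluster name is a placeholder; note that the Docker image tag uses a dash where the k3s version string has a +):

$ k3d cluster create mycluster --image rancher/k3s:v1.21.4-k3s1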

@andrewwebber

andrewwebber commented Aug 26, 2021

@brandond fantastic, I will use that workaround, thanks. I usually just like to use the latest version k3d recommends as the default. This is what ultimately broke our CI.

phlogistonjohn added a commit to phlogistonjohn/samba-operator that referenced this issue Sep 23, 2021
See: k3s-io/k3s#3704

Figuring out what to actually put on the --image option was far harder
than it seems it should have been. :-\

Signed-off-by: John Mulligan <[email protected]>
phlogistonjohn added a commit to samba-in-kubernetes/samba-operator that referenced this issue Sep 24, 2021
See: k3s-io/k3s#3704

Figuring out what to actually put on the --image option was far harder
than it seems it should have been. :-\

Signed-off-by: John Mulligan <[email protected]>
@antonioberben

Hi, I now have what seems to be the same issue with v1.22.2-rc1+k3s2.

My cluster is a k3s cluster, and inside it I deploy a virtual cluster with vcluster, which is again another k3s. This cluster creates a PVC, and local-path creates the PV, but as read-only.

Here is the error I get:

vcluster time="2021-12-28xxxxxxxx" level=fatal msg="failed to evacuate root cgroup: mkdir /sys/fs/cgroup/init: read-only file system"

More details in this other issue: loft-sh/vcluster#264 (comment)

Any idea if it will be fixed in the next releases?

Thank you

@brandond
Member

@antonioberben that appears to be a completely different problem, related to cgroups. Can you open a new issue?

@smolinari

smolinari commented Aug 12, 2023

Hi,

I'm running into the same problem as noted in this issue's OP. I could fix the permissions issue by running:

chmod 777 /var/lib/rancher/k3s/storage/*

The permissions were previously set to 755.

I'm running v1.25.12+k3s1.

Is this a regression? If not, what could cause the storage to be set to 755 instead of 777?

Scott

@brandond
Member

The /var/lib/rancher/k3s/storage/ directory should be 700. Subdirectories should be 777. These permissions are set when the LocalPath volume is created, and older releases of K3s used different permissions. Confirm that you're on an up-to-date release of K3s and that your local-path-config configmap shows the correct permissions in the setup script.
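
A quick way to check both (paths assume the default data directory):

$ kubectl -n kube-system get configmap local-path-config -o yaml
$ sudo stat -c '%a' /var/lib/rancher/k3s/storage        # expect 700 (the validation above showed 701)
$ sudo stat -c '%a' /var/lib/rancher/k3s/storage/pvc-*  # expect 777 on each volume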

@smolinari

smolinari commented Aug 14, 2023

@brandond - Yea, I meant the folders under /storage/. They were all set to 755. As I also noted, I'm running v1.25.12+k3s1. In the end, it was a fresh install, as I wiped out the node, reinstalled Ubuntu 22.04 and then k3s.

This is what is in the local-path-config configMap.

#!/bin/sh
while getopts "m:s:p:" opt
do
    case $opt in
        p)
        absolutePath=$OPTARG
        ;;
        s)
        sizeInBytes=$OPTARG
        ;;
        m)
        volMode=$OPTARG
        ;;
    esac
done
mkdir -m 0777 -p ${absolutePath}
chmod 700 ${absolutePath}/..

Looks correct to me?

What would happen if the user doing the k3s install isn't root? Could that also possibly cause this issue? I didn't want to mess with testing it, as now everything is working, thus my "laziness". 🤷

Scott

@brandond
Member

If you're not root when running the install script, the script will use sudo to become root. You can check out the docs section on rootless operation if you are curious about running it as an unprivileged user - but what you're asking about wouldn't cause this.

@brandond
Member

Old releases of K3s used different permissions. As I mentioned, the permissions are set when the volumes are created, so it's likely they were created on a different version of K3s?

@smolinari

> they were created on a different version of K3s

Yes, I believe originally they were created with v1.23x.

Scott
