
[BUG] local-path-provisioner creating directory with insufficient rights for postgresql #743

Closed
FP-Guitar opened this issue Sep 15, 2021 · 7 comments
Labels
bug (Something isn't working), k3s (This is likely an issue with k3s, not k3d itself)

Comments


FP-Guitar commented Sep 15, 2021

What did you do

I started a three-server/three-agent cluster on Ubuntu Server and deployed the bitnami/postgresql Helm chart as part of a complex Pulumi setup. The setup failed with:

mkdir: cannot create directory ‘/bitnami/postgresql’: Permission denied

Similar errors appeared for other components. The same deployment works fine on Azure.

  • How was the cluster created?
k3d cluster create -a 3 -s 3 --api-port x.x.x.x:6443  -p "x.x.x.x:80@loadbalancer"
  • What did you do afterwards?
    Used kubectl describe to find the mount path inside the server container, then:
1. sudo docker exec -it k3d-k3s-default-server-0 /bin/sh
2. cd /var/lib/rancher/k3s/storage
3. chmod 777 pvc-fdc3227f-3027-4953-8922-a1c6eed2bb8a_xxxxxxx_data-workpiece-storage-postgresql-0

-> PostgreSQL now working as expected.
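
For reference, the same manual workaround as a single shell sketch (the node name is the one from the steps above; the pvc-* wildcard is a simplification covering all provisioned claim directories, not just the one listed):

# open a shell in the k3d server node that holds the volume
sudo docker exec -it k3d-k3s-default-server-0 /bin/sh
# inside the node: loosen permissions on the provisioned PVC directories
cd /var/lib/rancher/k3s/storage
chmod 0777 pvc-*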

Investigated the provided local-path-config using:

kubectl get configmap local-path-config -o=json
and compared it to the working version on my local machine.

The problem is in the line just before teardown: it should read
mkdir -m 0777 -p ${absolutePath}
instead of
mkdir -m 0700

This differs from the example config at https://github.com/rancher/local-path-provisioner.
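
To dump just the provisioner's setup script for that comparison, a minimal sketch (assuming the script is stored under the setup key of the ConfigMap's data, which lives in kube-system on k3s):

kubectl -n kube-system get configmap local-path-config -o jsonpath='{.data.setup}'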

What did you expect to happen

PostgreSQL coming up without errors.

Which OS & Architecture

cat /etc/os-release 
NAME="Ubuntu"
VERSION="20.04.3 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
uname -m
x86_64

Which version of k3d

k3d version v4.4.8
k3s version v1.21.3-k3s1 (default)

Which version of docker

Client: Docker Engine - Community
 Version:           20.10.8
 API version:       1.41
 Go version:        go1.16.6
 Git commit:        3967b7d
 Built:             Fri Jul 30 19:54:27 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.8
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.6
  Git commit:       75249d8
  Built:            Fri Jul 30 19:52:33 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.9
  GitCommit:        e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version:          1.0.1
  GitCommit:        v1.0.1-0-g4144b63
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.6.1-docker)
  scan: Docker Scan (Docker Inc., v0.8.0)

Server:
 Containers: 8
  Running: 7
  Paused: 0
  Stopped: 1
 Images: 3
 Server Version: 20.10.8
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: e25210fe30a0a703442421b0f60afac609f950a3
 runc version: v1.0.1-0-g4144b63
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-84-generic
 Operating System: Ubuntu 20.04.3 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 15.54GiB
 Name: ubuntu-server
 ID: 25VB:RB5Z:K65E:YERG:HV23:F4IB:JDMC:HKSL:GADE:WIWA:V7YG:YXM4
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

FP-Guitar added the bug (Something isn't working) label on Sep 15, 2021

hcoatanhay commented Sep 15, 2021

I think I have the same permission issue with all my charts since I updated from 4.4.7 to 4.4.8.

As a workaround I can manually change the permissions of the pvc-* folders in the volume configured for k3s storage.

k3d was installed with the option --volume /data/k3s-storage:/var/lib/rancher/k3s/storage.

All folders created in /data/k3s-storage before September 14th were rwx for everybody. Since I updated to 4.4.8, they are only rwx for root.
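
A rough host-side equivalent of that workaround, assuming the PVC directories are created directly inside the mounted /data/k3s-storage path:

# loosen permissions on all provisioned PVC directories on the host
sudo chmod 0777 /data/k3s-storage/pvc-*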


FP-Guitar commented Sep 15, 2021

Hi @hcoatanhay, as a non-manual workaround I edited the ConfigMap:

kubectl edit configmap local-path-config --namespace=kube-system

Then change the permissions in the line just above teardown:

    mkdir -m 0777 -p ${absolutePath} <---change here
  teardown: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
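
For context, the corrected setup script then looks roughly like this; the script body is reconstructed from the example config in the rancher/local-path-provisioner repository, so the exact option handling may differ from what your cluster ships:

  setup: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
        case $opt in
            p)
            absolutePath=$OPTARG
            ;;
            s)
            sizeInBytes=$OPTARG
            ;;
            m)
            volMode=$OPTARG
            ;;
        esac
    done
    # 0777 lets non-root containers (e.g. Bitnami images) write to the volume
    mkdir -m 0777 -p ${absolutePath}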

@hcoatanhay

@FP-Guitar thanks for the tip, but it seems that the configmap is overwritten at cluster startup.
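
One way to re-apply the change after each cluster start, as a rough sketch (assuming the overwritten script contains the mkdir -m 0700 line and that your kube context points at the k3d cluster):

kubectl -n kube-system get configmap local-path-config -o yaml \
  | sed 's/mkdir -m 0700/mkdir -m 0777/' \
  | kubectl apply -f -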

@iwilltry42
Member

Hi @FP-Guitar , thanks for opening this issue and also providing a workaround already! 👍
Unfortunately, this is related to a change in K3s and the local-path-provisioner, which is out of scope for k3d.
The change was introduced in https://github.com/k3s-io/k3s/releases/tag/v1.21.3%2Bk3s1, which happens to be the default K3s version for k3d as of v4.4.8 (use the --image flag to use a different version).
Be aware that the change will thus be present in all future versions of K3s, which will become the new defaults in k3d at build time.

Feel free to reopen this issue if you feel like k3d could do something here.
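
For example, pinning a different K3s version at cluster creation looks roughly like this; the tag shown is an assumption for illustration, pick any published rancher/k3s tag:

k3d cluster create -a 3 -s 3 --image rancher/k3s:v1.21.2-k3s1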

@FP-Guitar
Author

Thanks for answering that fast.
Although it is an "upstream" change out of your direct scope, I think it will render k3d useless for anything that uses persistent volume claims, and thus for any test of "real production" deployments.

I wasn't aware that k3d and K3s are completely separate things.


iwilltry42 commented Sep 20, 2021

@FP-Guitar, I actually went through everything again and it seems like the bug was fixed in K3s v1.21.4 already: https://github.com/k3s-io/k3s/releases/tag/v1.21.4%2Bk3s1
Related line in the release notes:

Containers not running as root can once again use volumes created by the local-path-provisioner (#3721)
Link from there: k3s-io/k3s#3721

Bitnami-built images typically run as non-root; that's why deployments from there were the first to fail with this.

Here's the full path to follow the train of thought:

I wasn't aware that k3d and K3s are completely separate things.

k3d just leverages K3s in containers, but it's a community-driven project, so it's not involved in any planning on the K3s side and there are no full-time Rancher/SUSE employees assigned to k3d.
That's why we rely on community input :)
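
To verify whether a given K3s image already carries the fix, a quick check (the image tag is an assumption, and this assumes the same ConfigMap layout discussed above):

k3d cluster create --image rancher/k3s:v1.21.4-k3s1
kubectl -n kube-system get configmap local-path-config -o yaml | grep mkdir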

iwilltry42 added the k3s (This is likely an issue with k3s, not k3d itself) label on Sep 20, 2021
@FP-Guitar
Author

Thank you very much.
I really appreciate your work and use it almost every day.
Complaining is always very easy and not appropriate when using the "free" work of others.

So thanks again for taking a second look and gathering the information for me. This is good news for me.
