[BUG] Pod securityContext fsGroup is not taking effect #814

Closed
lulf opened this issue Nov 15, 2019 · 4 comments
Labels
kind/bug: Something isn't working
status/stale: Issue went stale; did not receive attention or no reply from the OP

Comments

lulf commented Nov 15, 2019

General information

  • OS: Linux / macOS / Windows
  • Hypervisor: KVM / Hyper-V / VirtualBox / hyperkit
  • Did you run crc setup before starting it (Yes/No)?

With CRC, attaching a persistentVolumeClaim to a pod that has securityContext.fsGroup set does not result in the expected permissions on the persistent volume. With OpenShift 4.2 on AWS, the permissions are set as expected.

CRC version

crc version: 1.1.0+95966a9

CRC status

CRC VM:          Running
OpenShift:       Running (v4.2.2)
Disk Usage:      12.93GB of 32.2GB (Inside the CRC VM)
Cache Usage:     9.467GB
Cache Directory: /home/lulf/.crc/cache

CRC config

Host Operating System

NAME=Fedora
VERSION="30 (Workstation Edition)"
ID=fedora
VERSION_ID=30
VERSION_CODENAME=""
PLATFORM_ID="platform:f30"
PRETTY_NAME="Fedora 30 (Workstation Edition)"
ANSI_COLOR="0;34"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:30"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f30/system-administrators-guide/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=30
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=30
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Workstation Edition"
VARIANT_ID=workstation

Steps to reproduce

  1. Create a deployment with securityContext fsGroup: 1234 set and mount a persistentVolumeClaim in the deployment, e.g. at /mnt/data (see reproducer.txt for an example; a minimal sketch is shown after these steps)
  2. oc rsh reproducer ls -la /mnt/data
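
The reproducer.txt attachment is not reproduced in this thread; purely as an illustrative stand-in (the original may have used a Deployment, and the claim name and storage size below are made up, not taken from the attachment), a minimal manifest exercising fsGroup with a PVC could look like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: reproducer-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: reproducer
spec:
  securityContext:
    fsGroup: 1234
  containers:
  - name: reproducer
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: reproducer-claim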

Expected

On OpenShift 4.2 on AWS:

$ oc rsh reproducer ls -la /mnt/data
total 20
drwxrwsr-x    3 root     1234          4096 Nov 15 18:49 .
drwxr-xr-x    3 root     root            18 Nov 15 18:49 ..

Actual

On CRC:

$ oc rsh reproducer ls -la /mnt/data
total 0
drwxrwx---    2 root     root             6 Nov 15 15:14 .
drwxr-xr-x    3 root     root            18 Nov 15 18:43 ..
lulf added the kind/bug label on Nov 15, 2019
praveenkumar (Member) commented

@lulf This is because the Kubernetes hostPath way of provisioning PVs doesn't work with securityContext; have a look at rancher/local-path-provisioner#7 (comment). If you really want to try out the security context on CRC, it is better to use an emptyDir instead of a PVC, since on CRC the PVs are hostPath directories.

$ cat reproducer.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: reproducer
spec:
  securityContext:
    fsGroup: 1234
  containers:
  - name: reproducer
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  volumes:
  - name: data
    emptyDir: {}

$ oc create -f reproducer.yaml
pod/reproducer created

$ oc get pods
NAME                    READY   STATUS    RESTARTS   AGE
reproducer              1/1     Running   0          24s

$ oc rsh reproducer ls -l /mnt/
total 0
drwxrwsrwx    2 root     1234             6 Nov 19 08:40 data

PV specification on CRC:

$ oc get pv pv0002 -oyaml
apiVersion: v1
kind: PersistentVolume
metadata:
  creationTimestamp: "2019-10-30T08:52:25Z"
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    volume: pv0002
  name: pv0002
  resourceVersion: "14377"
  selfLink: /api/v1/persistentvolumes/pv0002
  uid: 9828d18e-faf2-11e9-976d-525400d602f2
spec:
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  - ReadOnlyMany
  capacity:
    storage: 100Gi
  hostPath:
    path: /mnt/pv-data/pv0002
    type: ""
  persistentVolumeReclaimPolicy: Recycle
  volumeMode: Filesystem
status:
  phase: Available

Hope this helps explain why the behavior is different here.

lulf (Author) commented Nov 20, 2019

@praveenkumar Thanks for the explanation. Since what we are testing are end-user examples that use persistentVolumeClaims, it's not something we can change. Is there a workaround we can apply to CRC to make this work?
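
No workaround was confirmed in this thread before it went stale; purely as an illustration of the kind of workaround being asked about, a generic Kubernetes pattern on clusters where the volume plugin ignores fsGroup is an initContainer that chowns the mount point before the main container starts. The names below are illustrative, and this was not suggested by the maintainers here:

apiVersion: v1
kind: Pod
metadata:
  name: reproducer
spec:
  securityContext:
    fsGroup: 1234
  initContainers:
  - name: fix-perms
    image: busybox
    # Runs as root so it can change ownership of the hostPath-backed volume.
    securityContext:
      runAsUser: 0
    command: ['sh', '-c', 'chown -R 0:1234 /mnt/data && chmod -R g+rwX /mnt/data']
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  containers:
  - name: reproducer
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: reproducer-claim

Note that on OpenShift the init container would also need to be allowed to run as root (for example via the anyuid SCC), which is itself a deviation from the default security policy.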

stale bot commented Feb 11, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the status/stale label on Feb 11, 2020
stale bot closed this as completed on Feb 25, 2020
cfergeau reopened this on May 4, 2020
stale bot removed the status/stale label on May 4, 2020
stale bot commented Jul 3, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the status/stale label on Jul 3, 2020
stale bot closed this as completed on Jul 17, 2020