Unable to mount Openstack Cinder Volume in a pod #16

Closed
dims opened this issue Mar 22, 2018 · 3 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@dims
Member

dims commented Mar 22, 2018

From @walteraa on March 2, 2018 20:56

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug

What happened:
Using the OpenStack cloud provider, I am not able to mount my Cinder volumes in pods. I am getting this error in my pod events:

Events:
  Type     Reason                 Age   From                               Message
  ----     ------                 ----  ----                               -------
  Normal   Scheduled              16s   default-scheduler                  Successfully assigned mongo-controller-5sktj to walter-atmosphere-minion
  Normal   SuccessfulMountVolume  16s   kubelet, walter-atmosphere-minion  MountVolume.SetUp succeeded for volume "default-token-7cx2x"
  Warning  FailedMount            16s   kubelet, walter-atmosphere-minion  MountVolume.SetUp failed for volume "walter-test" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/7df75a03-1e58-11e8-93a7-fa163ec86641/volumes/kubernetes.io~cinder/walter-test --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/cinder/mounts/ea7e96fe-24cb-40f3-9fb3-420ac7ac1752 /var/lib/kubelet/pods/7df75a03-1e58-11e8-93a7-fa163ec86641/volumes/kubernetes.io~cinder/walter-test
Output: Running scope as unit run-r488c59ffc9324542af0c41f646f6ff99.scope.
mount: special device /var/lib/kubelet/plugins/kubernetes.io/cinder/mounts/ea7e96fe-24cb-40f3-9fb3-420ac7ac1752 does not exist
  Warning  FailedMount  15s  kubelet, walter-atmosphere-minion  MountVolume.SetUp failed for volume "walter-test" : mount failed: exit status 32
Mounting command: systemd-run
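
(One way to confirm where this is failing is to look directly on the node; these are generic diagnostics under the assumption that the device was never attached, not commands from the original report:)

# on walter-atmosphere-minion
lsblk                                                      # is the Cinder device attached to the node at all?
ls /var/lib/kubelet/plugins/kubernetes.io/cinder/mounts/   # the bind-mount source kubelet says does not exist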

My openstack-cloud-provider is showing the following error:

ERROR: logging before flag.Parse: I0302 20:36:33.026783       1 openstack_instances.go:46] Claiming to support Instances
ERROR: logging before flag.Parse: I0302 20:36:38.029334       1 openstack_instances.go:46] Claiming to support Instances
ERROR: logging before flag.Parse: I0302 20:36:43.035928       1 openstack_instances.go:46] Claiming to support Instances
(...)

It is important to know that, before this error, I was getting another error:

ERROR: logging before flag.Parse: E0302 18:34:19.759493       1 reflector.go:205] git.openstack.org/openstack/openstack-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/cloud/pvlcontroller.go:109: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:serviceaccount:kube-system:pvl-controller" cannot list persistentvolumes at the cluster scope

Then, I worked around it by running the following commands:

kubectl create clusterrolebinding --user system:serviceaccount:kube-system:default kube-system-cluster-admin-1 --clusterrole cluster-admin
kubectl create clusterrolebinding --user system:serviceaccount:kube-system:pvl-controller kube-system-cluster-admin-2 --clusterrole cluster-admin
kubectl create clusterrolebinding --user system:serviceaccount:kube-system:cloud-node-controller kube-system-cluster-admin-3 --clusterrole cluster-admin
kubectl create clusterrolebinding --user system:serviceaccount:kube-system:cloud-controller-manager kube-system-cluster-admin-4 --clusterrole cluster-admin
kubectl create clusterrolebinding --user system:serviceaccount:kube-system:shared-informers kube-system-cluster-admin-5 --clusterrole cluster-admin
kubectl create clusterrolebinding --user system:kube-controller-manager  kube-system-cluster-admin-6 --clusterrole cluster-admin
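
(A less permissive alternative to binding cluster-admin would be a ClusterRole scoped to PersistentVolumes; this is only a sketch based on the "cannot list persistentvolumes" error above — the role name and verb list are assumptions:)

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pvl-controller-pv   # illustrative name
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pvl-controller-pv   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pvl-controller-pv
subjects:
- kind: ServiceAccount
  name: pvl-controller
  namespace: kube-system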

What you expected to happen:
I expect my OpenStack Cinder volume to be mounted in my pod.

How to reproduce it (as minimally and precisely as possible):

  • Deploy the openstack-cloud-provider in your cluster by running the command kubectl create -f https://raw.githubusercontent.com/dims/openstack-cloud-controller-manager/master/manifests/controller-manager/openstack-cloud-controller-manager-ds.yaml
    • I made sure it works by creating an internal LoadBalancer Service, which works fine for me.
    • I had to apply the workaround mentioned above (creating the permissive bindings), because my controller wasn't able to access the PersistentVolume API.
  • Create a volume in OpenStack
    • I created it by running the command openstack volume create walter-test --size 10, which gave me a volume:
+---------------------+------------------------------------------------------------------+
| Field               | Value                                                            |
+---------------------+------------------------------------------------------------------+
| attachments         | []                                                               |
| availability_zone   | cinderAZ_1                                                       |
| bootable            | false                                                            |
| consistencygroup_id | None                                                             |
| created_at          | 2018-03-02T20:17:31.408441                                       |
| description         | None                                                             |
| encrypted           | False                                                            |
| id                  | ea7e96fe-24cb-40f3-9fb3-420ac7ac1752                             |
| multiattach         | False                                                            |
| name                | walter-test                                                      |
| properties          |                                                                  |
| replication_status  | disabled                                                         |
| size                | 10                                                               |
| snapshot_id         | None                                                             |
| source_volid        | None                                                             |
| status              | creating                                                         |
| type                | None                                                             |
| updated_at          | None                                                             |
| user_id             | f0cc6d2bcea9d6fe9c2b68264e7d9343c537323c0243d068a0eb119c05fc3c45 |
+---------------------+------------------------------------------------------------------+
  • I've created the following resources:
---

apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "walter-test"
spec:
  storageClassName: cinder
  capacity:
    storage: "5Gi"
  accessModes:
    - "ReadWriteOnce"
  cinder:
    fsType: ext4
    volumeID: ea7e96fe-24cb-40f3-9fb3-420ac7ac1752

---

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: atmosphere-pv-claim
spec:
  storageClassName: cinder
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

---

apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: mongo
  name: mongo-controller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
    spec:
      volumes:
        - name: atmosphere-storage
          persistentVolumeClaim:
            claimName: atmosphere-pv-claim
      containers:
      - image: mongo
        name: mongo
        ports:
        - name: mongo
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
        - name: atmosphere-storage
          mountPath: /data/db
  • Now you can see the pod stuck in "ContainerCreating" status (a quick way to inspect it follows the output below):
ubuntu@walter-atmosphere:~$ kubectl get pods
NAME                     READY     STATUS              RESTARTS   AGE
mongo-controller-5sktj   0/1       ContainerCreating   0          21m
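
(To pull up the mount events shown at the top of this report for the stuck pod, a generic check is:)

kubectl describe pod mongo-controller-5sktj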

Anything else we need to know?:

Environment:

  • openstack-cloud-controller-manager version: dims/openstack-cloud-controller-manager:0.1.0
  • OS (e.g. from /etc/os-release): Ubuntu
  • Kernel (e.g. uname -a): Linux walter-atmosphere 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: kubeadm
  • Others:

Copied from original issue: dims/openstack-cloud-controller-manager#81

@k8s-ci-robot added the kind/bug label Mar 22, 2018
@dims
Member Author

dims commented Apr 9, 2018

Need to set the --cloud-config flag to a specific/exact path: /etc/kubernetes/cloud-config
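
(For reference, a minimal sketch of what that can look like in the controller manager container spec; the flags are real cloud-controller-manager flags, but the volume name and the rest of the spec here are assumptions, not copied from the linked manifest:)

      containers:
      - name: openstack-cloud-controller-manager
        image: dims/openstack-cloud-controller-manager:0.1.0
        args:
        - --cloud-provider=openstack
        - --cloud-config=/etc/kubernetes/cloud-config   # the exact path mentioned above
        volumeMounts:
        - name: cloud-config
          mountPath: /etc/kubernetes/cloud-config
          readOnly: true
      volumes:
      - name: cloud-config
        hostPath:
          path: /etc/kubernetes/cloud-config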

@dims
Member Author

dims commented Apr 9, 2018

@dims
Member Author

dims commented Apr 9, 2018

/close

Fedosin pushed a commit to Fedosin/cloud-provider-openstack that referenced this issue Jan 3, 2020
powellchristoph pushed a commit to powellchristoph/cloud-provider-openstack that referenced this issue Jan 19, 2022