diff --git a/examples/kubernetes/README.md b/examples/kubernetes/README.md
index 1eef1bd7d..d462859b3 100644
--- a/examples/kubernetes/README.md
+++ b/examples/kubernetes/README.md
@@ -3,7 +3,24 @@ Ceph on Kubernetes
 This Guide will take you through the process of deploying a Ceph cluster on to a Kubernetes cluster.
 
 Sigil is required for template handling and must be installed in system PATH. Instructions can be found here: [https://github.com/gliderlabs/sigil](https://github.com/gliderlabs/sigil)
 
-# Generate keys and configuration
+
+# Quickstart
+
+If you're feeling confident:
+
+```
+./create_ceph_cluster.sh
+kubectl create -f ceph-cephfs-test.yaml --namespace=ceph
+kubectl get all --namespace=ceph
+```
+
+This will most likely not work on your setup; see the rest of the guide if you encounter errors.
+
+We will be working on making this setup more agnostic, especially with regard to the network IP ranges.
+
+# Tutorial
+
+### Generate keys and configuration
 
 Run the following commands to generate the required configuration and keys.
 
@@ -24,7 +41,7 @@ cd ..
 
 Please note that you should save the output files of this command, they will overwrite existing keys and configuration. If you lose these files they can still be retrieved from Kubernetes via `kubectl get secret`.
 
-# Deploy Ceph Components
+### Deploy Ceph Components
 
 With the secrets created, you can now deploy ceph.
 
@@ -52,12 +69,12 @@ ceph-mds-6kz0n          0/1       Pending   0          24s
 ceph-mon-check-deek9    1/1       Running   0          24s
 ```
 
-# Label your storage nodes
+### Label your storage nodes
 
 You must label your storage nodes in order to run Ceph pods on them.
 
 ```
-kubectl label node node-type=storage
+kubectl label node <nodename> node-type=storage
 ```
 
 If you want all nodes in your Kubernetes cluster to be a part of your Ceph cluster, label them all.
@@ -83,7 +100,7 @@ ceph-osd-ieio7          1/1       Running   2          2m
 ceph-osd-j1gyd          1/1       Running   2          2m
 ```
 
-# Mounting CephFS in a pod
+### Mounting CephFS in a pod
 
 First you must add the admin client key to your current namespace (or the namespace of your pod).
 
@@ -94,7 +111,13 @@ kubectl create secret generic ceph-client-key --from-file=./generator/ceph-clien
 Now, if skyDNS is set as a resolver for your host nodes:
 
 ```
-kubectl create -f ceph-mount-test.yaml --namespace=ceph
+kubectl create -f ceph-cephfs-test.yaml --namespace=ceph
+```
+
+You should now be able to see the filesystem mounted:
+
+```
+kubectl exec -it --namespace=ceph ceph-cephfs-test -- df -h
 ```
 
 Otherwise you must edit the file and replace `ceph-mon.ceph` with a Pod IP. It is highly reccomended that you place skyDNS as a resolver, otherwise your configuration WILL eventually stop working as Mons are rescheduled.
@@ -113,12 +136,46 @@ nameserver
 
 If your pod has issues mounting, make sure mount.ceph is installed on all nodes.
 
+For Debian-based distros:
+
+```
+apt-get install ceph-fs-common ceph-common
+```
+
+For Red Hat-based distros:
+
+```
+yum install ceph
 ```
-apt-get install ceph-fs-common
+
+### Mounting a Ceph RBD in a pod
+
+First we have to create an RBD volume.
+
 ```
+# This picks the first MON pod.
+export PODNAME=`kubectl get pods --selector="app=ceph,daemon=mon" --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}" --namespace=ceph`
+
+kubectl exec -it $PODNAME --namespace=ceph -- rbd create ceph-rbd-test --size 20G
+
+kubectl exec -it $PODNAME --namespace=ceph -- rbd info ceph-rbd-test
+```
+
+The same caveats apply to RBDs as to CephFS volumes. Edit the pod accordingly.
+Once you're set:
+
+```
+kubectl create -f ceph-rbd-test.yaml --namespace=ceph
+```
+
+And again you should see your mount, but with 20G free:
+
+```
+kubectl exec -it --namespace=ceph ceph-rbd-test -- df -h
+```
 
-# Common Modifications
+### Common Modifications
 
 By default `emptyDir` is used for everything. If you have durable storage on your nodes, replace the emptyDirs with a `hostPath` to that storage.
 
-Also, 10.244.0.0/16 is used for the default network settings, change these in the Kubernetes yaml objects and the sigil templates to reflect your network.
+Also, 10.244.0.0/16 is used for the default network settings; change these in the Kubernetes YAML objects and the sigil templates to reflect your network.
\ No newline at end of file
diff --git a/examples/kubernetes/ceph-mount-test.yaml b/examples/kubernetes/ceph-cephfs-test.yaml
similarity index 95%
rename from examples/kubernetes/ceph-mount-test.yaml
rename to examples/kubernetes/ceph-cephfs-test.yaml
index 675a967e6..f60b544e7 100644
--- a/examples/kubernetes/ceph-mount-test.yaml
+++ b/examples/kubernetes/ceph-cephfs-test.yaml
@@ -1,7 +1,7 @@
 apiVersion: v1
 kind: Pod
 metadata:
-  name: ceph-mount-test
+  name: ceph-cephfs-test
 spec:
   containers:
   - name: cephfs-rw
diff --git a/examples/kubernetes/ceph-rbd-test.yaml b/examples/kubernetes/ceph-rbd-test.yaml
new file mode 100644
index 000000000..7001c8afa
--- /dev/null
+++ b/examples/kubernetes/ceph-rbd-test.yaml
@@ -0,0 +1,27 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: ceph-rbd-test
+spec:
+  containers:
+  - name: cephfs-rw
+    image: busybox
+    command:
+    - sh
+    - -c
+    - while true; do sleep 1; done
+    volumeMounts:
+    - mountPath: "/mnt/cephfs"
+      name: cephrbd
+  volumes:
+  - name: cephrbd
+    rbd:
+      monitors:
+# This only works if you have skyDNS resolvable from the Kubernetes node. Otherwise you must manually put in one or more MON pod IPs.
+      - ceph-mon.ceph:6789
+      user: admin
+      image: ceph-rbd-test
+      pool: rbd
+      secretRef:
+        name: ceph-client-key
+# keyring: AQBEbzpXAAAAABAAaTBhNokmayB4DFhirwMU7w==
diff --git a/examples/kubernetes/create_ceph_cluster.sh b/examples/kubernetes/create_ceph_cluster.sh
index b72bb3b9f..05c14227b 100755
--- a/examples/kubernetes/create_ceph_cluster.sh
+++ b/examples/kubernetes/create_ceph_cluster.sh
@@ -1,5 +1,18 @@
 #!/bin/bash
 
+cd generator
+./generate_secrets.sh all `./generate_secrets.sh fsid`
+
+kubectl create namespace ceph
+
+kubectl create secret generic ceph-conf-combined --from-file=ceph.conf --from-file=ceph.client.admin.keyring --from-file=ceph.mon.keyring --namespace=ceph
+kubectl create secret generic ceph-bootstrap-rgw-keyring --from-file=ceph.keyring=ceph.rgw.keyring --namespace=ceph
+kubectl create secret generic ceph-bootstrap-mds-keyring --from-file=ceph.keyring=ceph.mds.keyring --namespace=ceph
+kubectl create secret generic ceph-bootstrap-osd-keyring --from-file=ceph.keyring=ceph.osd.keyring --namespace=ceph
+kubectl create secret generic ceph-client-key --from-file=ceph-client-key --namespace=ceph
+
+cd ..
+
 kubectl create \
 -f ceph-mds-v1-rc.yaml \
 -f ceph-mon-v1-svc.yaml \
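A note on the `kubectl label node <nodename> node-type=storage` step in the README above: the label only has an effect because the Ceph pod templates created by `create_ceph_cluster.sh` select nodes by it. Those manifests are not part of this diff, so the snippet below is only a minimal sketch of how such a `nodeSelector` typically looks in a Kubernetes pod spec, not the contents of any file in this repository:

```
# Illustrative sketch only, not a file from this repo.
# A pod with this nodeSelector is scheduled only onto nodes carrying the
# label applied by: kubectl label node <nodename> node-type=storage
apiVersion: v1
kind: Pod
metadata:
  name: node-selector-example
spec:
  nodeSelector:
    node-type: storage
  containers:
  - name: busybox
    image: busybox
    command:
    - sh
    - -c
    - while true; do sleep 1; done
```

A node without the label never receives such a pod, which is why the guide tells you to label every node you want participating in the Ceph cluster.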