Add a storage-provisioner-gluster addon #3521

Merged · 3 commits · Jan 16, 2019
10 changes: 8 additions & 2 deletions cmd/minikube/cmd/config/config.go
@@ -144,7 +144,7 @@ var settings = []Setting{
         name:        "default-storageclass",
         set:         SetBool,
         validations: []setFn{IsValidAddon},
-        callbacks:   []setFn{EnableOrDisableAddon},
+        callbacks:   []setFn{EnableOrDisableStorageClasses},
     },
     {
         name:        "heapster",
@@ -186,14 +186,20 @@ var settings = []Setting{
         name:        "default-storageclass",
         set:         SetBool,
         validations: []setFn{IsValidAddon},
-        callbacks:   []setFn{EnableOrDisableDefaultStorageClass},
+        callbacks:   []setFn{EnableOrDisableStorageClasses},
     },
     {
         name:        "storage-provisioner",
         set:         SetBool,
         validations: []setFn{IsValidAddon},
         callbacks:   []setFn{EnableOrDisableAddon},
     },
+    {
+        name:        "storage-provisioner-gluster",
+        set:         SetBool,
+        validations: []setFn{IsValidAddon},
+        callbacks:   []setFn{EnableOrDisableStorageClasses},
+    },
     {
         name:        "metrics-server",
         set:         SetBool,
23 changes: 18 additions & 5 deletions cmd/minikube/cmd/config/util.go
@@ -26,6 +26,7 @@ import (
     "k8s.io/minikube/pkg/minikube/assets"
     "k8s.io/minikube/pkg/minikube/cluster"
     "k8s.io/minikube/pkg/minikube/config"
+    "k8s.io/minikube/pkg/minikube/constants"
     "k8s.io/minikube/pkg/minikube/machine"
     "k8s.io/minikube/pkg/minikube/storageclass"
 )
@@ -138,18 +139,30 @@ func EnableOrDisableAddon(name string, val string) error {
     return nil
 }
 
-func EnableOrDisableDefaultStorageClass(name, val string) error {
+func EnableOrDisableStorageClasses(name, val string) error {
     enable, err := strconv.ParseBool(val)
     if err != nil {
         return errors.Wrap(err, "Error parsing boolean")
     }
 
-    // Special logic to disable the default storage class
-    if !enable {
-        err := storageclass.DisableDefaultStorageClass()
+    class := constants.DefaultStorageClassProvisioner
+    if name == "storage-provisioner-gluster" {
+        class = "glusterfile"
+    }
+
+    if enable {
+        // Only StorageClass for 'name' should be marked as default
+        err := storageclass.SetDefaultStorageClass(class)
         if err != nil {
-            return errors.Wrap(err, "Error disabling default storage class")
+            return errors.Wrapf(err, "Error making %s the default storage class", class)
+        }
+    } else {
+        // Unset the StorageClass as default
+        err := storageclass.DisableDefaultStorageClass(class)
+        if err != nil {
+            return errors.Wrapf(err, "Error disabling %s as the default storage class", class)
         }
     }
+
     return EnableOrDisableAddon(name, val)
 }
141 changes: 141 additions & 0 deletions deploy/addons/storage-provisioner-gluster/README.md
@@ -0,0 +1,141 @@
## storage-provisioner-gluster addon
[Gluster](https://gluster.org/) is a scalable network filesystem. This addon uses Gluster to provide dynamic provisioning of PersistentVolumeClaims.

### Starting Minikube
This addon works within Minikube, without any additional configuration.

```shell
$ minikube start
```

### Enabling storage-provisioner-gluster
To enable this addon, simply run:

```shell
$ minikube addons enable storage-provisioner-gluster
```
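
To double-check that the change was registered, `minikube addons list` prints the state of every addon, including this one (the exact output format varies between minikube versions):

```shell
$ minikube addons list
```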

Within one minute, the addon manager should pick up the change, and you should see several Pods in the `storage-gluster` namespace:

```shell
$ kubectl -n storage-gluster get pods
NAME                                      READY     STATUS              RESTARTS   AGE
glusterfile-provisioner-dbcbf54fc-726vv   1/1       Running             0          1m
glusterfs-rvdmz                           0/1       Running             0          40s
heketi-79997b9d85-42c49                   0/1       ContainerCreating   0          40s
```

Some of the Pods need a little more time to get up and running than others, but within a few minutes everything should be deployed and all Pods should be `READY`:

```shell
$ kubectl -n storage-gluster get pods
NAME                                      READY     STATUS    RESTARTS   AGE
glusterfile-provisioner-dbcbf54fc-726vv   1/1       Running   0          5m
glusterfs-rvdmz                           1/1       Running   0          4m
heketi-79997b9d85-42c49                   1/1       Running   1          4m
```

Once the Pods have status `Running`, the `glusterfile` StorageClass should have been marked as `default`:

```shell
$ kubectl get sc
NAME                    PROVISIONER               AGE
glusterfile (default)   gluster.org/glusterfile   3m
```
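
The `(default)` marker corresponds to the standard `storageclass.kubernetes.io/is-default-class` annotation (older clusters may use the `beta`-prefixed variant), which is what the `SetDefaultStorageClass` call in this PR manipulates. You can inspect it directly, for example:

```shell
$ kubectl get sc glusterfile -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'
true
```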

### Creating PVCs
The storage in the Gluster environment is limited to 10 GiB, because the data is stored inside the Minikube VM in a sparse file (`/srv/fake-disk.img`).
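
You can see the sparse allocation from inside the VM; `ls -s` prints the blocks actually allocated next to the apparent file size (this assumes your minikube version accepts a command after `minikube ssh --`):

```shell
$ minikube ssh -- ls -lsh /srv/fake-disk.img
```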

The following `yaml` creates a PVC, starts a CentOS developer Pod that generates a website, and deploys an NGINX webserver that serves the website:

```yaml
---
#
# Minimal PVC where a developer can build a website.
#
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: website
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Mi
  storageClassName: glusterfile
---
#
# This pod will just download a fortune phrase and store it (as plain text) in
# index.html on the PVC. This is how we create websites?
#
# The root of the website stored on the above PVC is mounted on /mnt.
#
apiVersion: v1
kind: Pod
metadata:
  name: centos-webdev
spec:
  containers:
  - image: centos:latest
    name: centos
    args:
    - curl
    - -o/mnt/index.html
    - https://api.ef.gy/fortune
    volumeMounts:
    - mountPath: /mnt
      name: website
  # once the website is created, the pod will exit
  restartPolicy: Never
  volumes:
  - name: website
    persistentVolumeClaim:
      claimName: website
---
#
# Start a NGINX webserver with the website.
# We'll skip creating a service, to keep things minimal.
#
apiVersion: v1
kind: Pod
metadata:
  name: website-nginx
spec:
  containers:
  - image: gcr.io/google_containers/nginx-slim:0.8
    name: nginx
    ports:
    - containerPort: 80
      name: web
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: website
  volumes:
  - name: website
    persistentVolumeClaim:
      claimName: website
```
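
Save the manifest to a file (the name `website.yaml` here is only an example) and apply it:

```shell
$ kubectl apply -f website.yaml
persistentvolumeclaim/website created
pod/centos-webdev created
pod/website-nginx created
```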

Because the PVC has been created with the `ReadWriteMany` accessMode, both Pods can access the PVC at the same time. Other website developer Pods can use the same PVC to update the contents of the site.
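
For example, once `centos-webdev` has completed, you can read the generated page through the still-running NGINX Pod, since both mount the same volume:

```shell
$ kubectl exec website-nginx -- cat /usr/share/nginx/html/index.html
```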

The above configuration does not expose the website on the Minikube VM. One way to see the contents of the website is to SSH into the Minikube VM and fetch the website there:

```shell
$ kubectl get pods -o wide
NAME            READY     STATUS      RESTARTS   AGE       IP           NODE
centos-webdev   0/1       Completed   0          1m        172.17.0.9   minikube
website-nginx   1/1       Running     0          24s       172.17.0.9   minikube
$ minikube ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ curl http://172.17.0.9
I came, I saw, I deleted all your files.
$
```
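
Alternatively, `kubectl port-forward` can expose the webserver on your host without SSHing into the VM:

```shell
$ kubectl port-forward pod/website-nginx 8080:80 &
$ curl http://localhost:8080
```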

138 changes: 138 additions & 0 deletions deploy/addons/storage-provisioner-gluster/glusterfs-daemonset.yaml
@@ -0,0 +1,138 @@
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  namespace: storage-gluster
  name: glusterfs
  labels:
    glusterfs: daemonset
    k8s-app: storage-provisioner-gluster
    kubernetes.io/minikube-addons: storage-provisioner-gluster
    addonmanager.kubernetes.io/mode: Reconcile
  annotations:
    description: GlusterFS DaemonSet
    tags: glusterfs
spec:
  selector:
    matchLabels:
      k8s-app: storage-provisioner-gluster
  template:
    metadata:
      namespace: storage-gluster
      name: glusterfs
      labels:
        glusterfs: pod
        glusterfs-node: pod
        k8s-app: storage-provisioner-gluster
    spec:
      #nodeSelector:
      #  kubernetes.io/hostname: minikube
      hostNetwork: true
      containers:
      - image: quay.io/nixpanic/glusterfs-server:pr_fake-disk
        imagePullPolicy: IfNotPresent
        name: glusterfs
        env:
        - name: USE_FAKE_DISK
          value: "enabled"
        #- name: USE_FAKE_FILE
        #  value: "/srv/fake-disk.img"
        #- name: USE_FAKE_SIZE
        #  value: "10G"
        #- name: USE_FAKE_DEV
        #  value: "/dev/fake"
        resources:
          requests:
            memory: 100Mi
            cpu: 100m
        volumeMounts:
        # default location for fake-disk.img, it needs to be persistent
        - name: fake-disk
          mountPath: /srv
        # the fstab for the bricks is under /var/lib/heketi
        - name: glusterfs-heketi
          mountPath: "/var/lib/heketi"
        - name: glusterfs-run
          mountPath: "/run"
        - name: glusterfs-lvm
          mountPath: "/run/lvm"
        #- name: glusterfs-etc
        #  mountPath: "/etc/glusterfs"
        - name: glusterfs-logs
          mountPath: "/var/log/glusterfs"
        - name: glusterfs-config
          mountPath: "/var/lib/glusterd"
        - name: glusterfs-dev
          mountPath: "/dev"
        # glusterfind uses /var/lib/misc/glusterfsd, yuck
        - name: glusterfs-misc
          mountPath: "/var/lib/misc/glusterfsd"
        - name: glusterfs-cgroup
          mountPath: "/sys/fs/cgroup"
          readOnly: true
        - name: glusterfs-ssl
          mountPath: "/etc/ssl"
          readOnly: true
        - name: kernel-modules
          mountPath: "/usr/lib/modules"
          readOnly: true
        securityContext:
          capabilities: {}
          privileged: true
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - systemctl status glusterd.service
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - systemctl status glusterd.service
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
      volumes:
      - name: fake-disk
        hostPath:
          path: /srv
      - name: glusterfs-heketi
        hostPath:
          path: "/var/lib/heketi"
      - name: glusterfs-run
      - name: glusterfs-lvm
        hostPath:
          path: "/run/lvm"
      - name: glusterfs-etc
        hostPath:
          path: "/etc/glusterfs"
      - name: glusterfs-logs
        hostPath:
          path: "/var/log/glusterfs"
      - name: glusterfs-config
        hostPath:
          path: "/var/lib/glusterd"
      - name: glusterfs-dev
        hostPath:
          path: "/dev"
      - name: glusterfs-misc
        hostPath:
          path: "/var/lib/misc/glusterfsd"
      - name: glusterfs-cgroup
        hostPath:
          path: "/sys/fs/cgroup"
      - name: glusterfs-ssl
        hostPath:
          path: "/etc/ssl"
      - name: kernel-modules
        hostPath:
          path: "/usr/lib/modules"