documentation and minor Makefile changes for EKS support
addresses issue bluek8s#132
joel-bluedata committed May 22, 2019
1 parent e09b99a commit 6fdb611
Showing 5 changed files with 98 additions and 2 deletions.
5 changes: 4 additions & 1 deletion Makefile
@@ -221,7 +221,7 @@ undeploy:
 @echo \* Deleting headless service...
 -kubectl delete svc/${project_name}
 @echo
-@echo -n \* Waiting for all resources to finish cleanup...
+@echo -n \* Waiting for all cluster resources to finish cleanup...
 @set -e; \
 retries=100; \
 while [ $$retries ]; do \
@@ -241,6 +241,9 @@ undeploy:
 done
 @echo
 @echo
+@echo \* Deleting any storage class labelled kubedirector-support...
+-kubectl delete storageclass -l kubedirector-support=true
+@echo
 @echo done
 @echo

1 change: 1 addition & 0 deletions README.md
@@ -25,6 +25,7 @@ The [wiki](https://github.com/bluek8s/kubedirector/wiki) describes KubeDirector
 See the files in the "doc" directory for information about deploying and using KubeDirector:
 * [quickstart.md](doc/quickstart.md): deploying a pre-built KubeDirector image
 * [gke-notes.md](doc/gke-notes.md): important information if you intend to deploy KubeDirector using Google Kubernetes Engine
+* [eks-notes.md](doc/eks-notes.md): important information if you intend to deploy KubeDirector using Amazon Elastic Container Service for Kubernetes
 * [virtual-clusters.md](doc/virtual-clusters.md): creating and managing virtual clusters with KubeDirector
 * [app-authoring.md](doc/app-authoring.md): creating app definitions for KubeDirector virtual clusters
 * [kubedirector-development.md](doc/kubedirector-development.md): building KubeDirector from source
31 changes: 31 additions & 0 deletions deploy/example_config/eks-gp2-for-kd.yaml
@@ -0,0 +1,31 @@
# This is an example of one solution for the default StorageClass on EKS
# not setting its volumeBindingMode to WaitForFirstConsumer. It creates
# another storage class, which is identical to the "gp2" class except with
# the necessary setting for volumeBindingMode, and then configures KD to use
# that storage class (even though it is not the K8s default storage class).

# The kubedirector-support label ensures that this storage class will be
# removed by "make teardown". This is purely a behavior of the Makefile in
# this repo and is not a fundamental KubeDirector functionality.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-for-kd
  labels:
    kubedirector-support: "true"
parameters:
  fsType: ext4
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

---

apiVersion: "kubedirector.bluedata.io/v1alpha1"
kind: "KubeDirectorConfig"
metadata:
name: "kd-global-config"
spec:
defaultStorageClassName: "gp2-for-kd"
61 changes: 61 additions & 0 deletions doc/eks-notes.md
@@ -0,0 +1,61 @@
#### KUBERNETES SETUP

If you intend to deploy KubeDirector on EKS, you will need AWS credentials. You must also have kubectl, the aws CLI, and (for aws CLI versions before 1.16.156) the aws-iam-authenticator utility ready to use.

The [Getting Started with Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) guide will walk you through all first-time setup as well as the process of creating a cluster. Both the AWS Management Console (web UI) process and the eksctl (command-line) process will work fine, but we recommend becoming familiar with the eksctl process if you will be repeatedly deploying EKS clusters.

Two important notes when creating an EKS cluster (an example eksctl command that satisfies both follows this list):
* Be sure to specify Kubernetes version 1.12 or later.
* Choose a worker [instance type](https://aws.amazon.com/ec2/instance-types/) with enough resources to host at least one virtual cluster member. The example type t3.medium is probably too small; consider using t3.xlarge or an m5 instance type.
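
As a concrete sketch, an eksctl invocation along these lines covers both notes; the cluster name, region, and node count are placeholders to adjust for your environment:

```bash
eksctl create cluster \
  --name kd-demo \
  --version 1.12 \
  --region us-west-2 \
  --node-type t3.xlarge \
  --nodes 3
```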

Use of eksctl and the AWS Management Console can be somewhat intermixed, since in the end both are just manipulating standard AWS resources, but this doc assumes you are using one process or the other.

#### KUBECTL SETUP

In the AWS Management Console process, step 2 of [the guide](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html) describes how to update your kubectl config using the aws CLI. The guide then walks you through using kubectl to add workers to the EKS cluster, so by the time you have a complete cluster you will know that your kubectl is correctly configured.

In the eksctl process, your kubectl config will be automatically updated as a consequence of the EKS cluster creation.

In either case, kubectl will now access your EKS cluster as a member of the system:masters group that is granted the cluster-admin role.
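
If you later need to regenerate that kubectl config (for example, on a different workstation), the aws CLI can write it directly. A minimal sketch, with the cluster name and region as placeholders:

```bash
# Add or refresh the kubeconfig entry for the EKS cluster.
aws eks update-kubeconfig --name kd-demo --region us-west-2

# Sanity-check that kubectl now points at the expected cluster.
kubectl config current-context
kubectl get nodes
```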

#### DEPLOYING KUBEDIRECTOR

From here you can proceed to deploy KubeDirector as described in [quickstart.md](quickstart.md).

#### CONFIGURING KUBEDIRECTOR

After deploying KubeDirector but before creating virtual clusters, you may wish to create a KubeDirectorConfig object as described in [quickstart.md](quickstart.md).

This is particularly useful to address [an issue with storage classes](https://github.com/kubernetes/kubernetes/issues/34583) that is peculiar to EKS. In EKS, a storage class that will be used for container persistent storage must have its volumeBindingMode property set to the value "WaitForFirstConsumer". However, the "gp2" storage class that is the default in EKS clusters is not currently configured this way.

The volumeBindingMode property of an existing storage class cannot be modified, so to deal with this issue you must create another storage class and then either set it as the default or else explicitly configure KubeDirector to use it.

A YAML file is available in the "deploy/example_config" subdirectory to address this issue. It creates a storage class with the necessary property, and also creates a KubeDirectorConfig to direct KubeDirector to use that storage class. You can use kubectl to apply this solution:
```bash
kubectl create -f deploy/example_config/eks-gp2-for-kd.yaml
```
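
To confirm the solution took effect, you can inspect the new storage class and the config object; note that the `kubedirectorconfig` resource type in the second command assumes the KubeDirector CRDs are already deployed:

```bash
# Should print: WaitForFirstConsumer
kubectl get storageclass gp2-for-kd -o jsonpath='{.volumeBindingMode}'

# Verify that the KubeDirectorConfig object exists.
kubectl get kubedirectorconfig kd-global-config
```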

If you tear down and then re-deploy KubeDirector, you will need to repeat this step before using persistent storage.

Note: if that command fails because the storage class creation is rejected, your EKS cluster may not be running Kubernetes version 1.12 or later (as required).

#### WORKING WITH KUBEDIRECTOR

The process of creating and managing virtual clusters is described in [virtual-clusters.md](virtual-clusters.md).

#### TEARDOWN

When you're finished working with KubeDirector, you can tear down your KubeDirector deployment:
```bash
make teardown
```

If you now want to completely delete your EKS cluster, you can do so.

If you are using the AWS Management Console process, you should delete the cluster in the Amazon EKS console UI and delete the CloudFormation stack used to create the worker nodes. You can also delete the CloudFormation stack used to create the cluster VPC, or you can leave it for re-use with future clusters.

If you are using the eksctl process, the `eksctl delete cluster` command should clean up all resources it created.
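
For example, if the cluster was created with eksctl as sketched earlier (name and region are placeholders):

```bash
eksctl delete cluster --name kd-demo --region us-west-2
```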

The "eksctl delete cluster" command will also delete the related context from your kubectl config, but if you are using the AWS Management Console process you will need to do this cleanup yourself. You can use "kubectl config get-contexts" to see which contexts exist, and then use "kubectl config delete-context" to remove the context associated with the deleted cluster.

If you have some other kubectl context that you wish to return to at this point, run `kubectl config get-contexts` to see which contexts exist, and then use `kubectl config use-context` to select one.
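
A minimal sketch of that context cleanup, with placeholder context names:

```bash
kubectl config get-contexts
kubectl config delete-context <context-of-deleted-cluster>
kubectl config use-context <context-to-resume-using>
```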
2 changes: 1 addition & 1 deletion doc/quickstart.md
@@ -2,7 +2,7 @@

 You will need a K8s (Kubernetes) cluster for deploying KubeDirector and KubeDirector-managed virtual clusters. Currently we require using K8s version 1.9 or later, with 1.11 or later recommended simply because our testing is focussed on the more recent versions. (It is likely that a near-future KubeDirector release will raise the minimum supported K8s version to 1.11.)

-We usually run KubeDirector on Google Kubernetes Engine; see [gke-notes.md](gke-notes.md) for GKE-specific elaborations on the various steps in this document. We have also run it on DigitalOcean Kubernetes without issues. Other K8s cloud providers may also work, although Amazon Elastic Container Service for Kubernetes deployments have [known issues with using persistent storage](https://github.com/bluek8s/kubedirector/issues/132).
+We usually run KubeDirector on Google Kubernetes Engine; see [gke-notes.md](gke-notes.md) for GKE-specific elaborations on the various steps in this document. Or if you would rather use Amazon Elastic Container Service for Kubernetes, see [eks-notes.md](eks-notes.md). We have also run it on DigitalOcean Kubernetes without issues.

 We have also run KubeDirector on a local K8s installation created with RPMs from kubernetes.io, so this is another possible approach. If you are going this route, you will need to ensure that [admission webhooks](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites) are enabled and that root-user containers are allowed. If you are using [kubeadm](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/) for setting up K8s version 1.10 or later, you shouldn't have to explicitly worry about those requirements -- its default configuration should be good.

