Update minikube docs post rename (#1332)
The docs for setting up minikube were using the namespaces and
resource names from elafros instead of knative. The naming changed
slightly, e.g. a knative controller is now called `controller`
instead of `knative-serving-controller`, so one of the loops had
to be broken into 2 statements.

Added steps about redeploying pods after setting up GCR
secrets because there is a chicken-and-egg problem: the namespaces
must exist before you can set up the secrets, but the secrets must
exist before the images can be pulled.

The PR that enabled `MutatingAdmissionWebhook` by default
(kubernetes/minikube#2547) was merged, but
the latest minikube (0.28.0) still did not enable this option
by default because providing any arguments overrides all of the defaults,
so we must still set it explicitly.

Made it clear in the Knative Serving setup docs that the cluster
admin binding is required for Knative itself, not just for istio.

Use a `NodePort` instead of a `LoadBalancer`
(see kubernetes/minikube#384) - another
step along the road to #608.
bobcatfish authored and google-prow-robot committed Jun 27, 2018
1 parent 772ed64 commit 8752b6b
Showing 2 changed files with 117 additions and 49 deletions.
20 changes: 17 additions & 3 deletions DEVELOPMENT.md
Once you reach this point you are ready to do a full build and deploy as described below.

## Starting Knative Serving

Once you've [set up your development environment](#getting-started), stand up
`Knative Serving`:
1. [Setup cluster admin](#setup-cluster-admin)
1. [Deploy istio](#deploy-istio)
1. [Deploy build](#deploy-build)
1. [Deploy Knative Serving](#deploy-knative-serving)
1. [Enable log and metric collection](#enable-log-and-metric-collection)
### Setup cluster admin
Your `$K8S_USER_OVERRIDE` must be a cluster admin to perform
the setup needed for Knative:
```shell
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin \
--user="${K8S_USER_OVERRIDE}"
```
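To double-check that the binding took effect (this check is not in the original doc and assumes `kubectl` already points at your cluster):

```shell
# Should print "yes" once the cluster-admin binding is active.
kubectl auth can-i '*' '*' --as="${K8S_USER_OVERRIDE}"
```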
### Deploy Istio
```shell
kubectl apply -f ./third_party/istio-0.8.0/istio.yaml
```
Optionally label namespaces with `istio-injection=enabled`:
```shell
kubectl label namespace default istio-injection=enabled
```
146 changes: 100 additions & 46 deletions docs/creating-a-kubernetes-cluster.md
1. Install a VM driver: `kvm2` on Linux or `hyperkit` on macOS.
1. [Create a cluster](https://github.com/kubernetes/minikube#quickstart) with
version 1.10 or greater and your chosen VM driver.
_Until minikube [enables it by
default](https://github.com/kubernetes/minikube/pull/2547), the
`MutatingAdmissionWebhook` plugin must be manually enabled._
The following commands will set up a cluster with `8192 MB` of memory and `4`
CPUs. If you want to [enable metric and log collection](./DEVELOPMENT.md#enable-log-and-metric-collection),
bump the memory to `12288 MB`.
_Providing any admission control plugins overrides the default set provided
by minikube, so we must explicitly list all plugins we want enabled._
_Until minikube [makes this the
default](https://github.com/kubernetes/minikube/issues/1647), the
certificate controller must be told where to find the cluster CA certs on
the VM._
For Linux use:
```shell
minikube start --memory=8192 --cpus=4 \
--kubernetes-version=v1.10.4 \
--vm-driver=kvm2 \
--bootstrapper=kubeadm \
--extra-config=controller-manager.cluster-signing-cert-file="/var/lib/localkube/certs/ca.crt" \
--extra-config=controller-manager.cluster-signing-key-file="/var/lib/localkube/certs/ca.key" \
--extra-config=apiserver.admission-control="DenyEscalatingExec,LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook"
```
For macOS use:
```shell
minikube start --memory=8192 --cpus=4 \
--kubernetes-version=v1.10.4 \
--vm-driver=hyperkit \
--bootstrapper=kubeadm \
--extra-config=controller-manager.cluster-signing-cert-file="/var/lib/localkube/certs/ca.crt" \
--extra-config=controller-manager.cluster-signing-key-file="/var/lib/localkube/certs/ca.key" \
--extra-config=apiserver.admission-control="DenyEscalatingExec,LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook"
```
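Because passing `--extra-config=apiserver.admission-control=…` replaces minikube's entire default plugin list, it is easy to drop a plugin by accident. A small guard you could run before `minikube start` (the `PLUGINS` variable here is illustrative, not part of the original doc):

```shell
PLUGINS="DenyEscalatingExec,LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook"

# Fail early if the plugin Knative needs is missing from the list.
case ",${PLUGINS}," in
  *",MutatingAdmissionWebhook,"*) echo "admission-control list OK" ;;
  *) echo "MutatingAdmissionWebhook missing" >&2; exit 1 ;;
esac
```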
1. [Configure your shell environment](../DEVELOPMENT.md#environment-setup)
to use your minikube cluster:
```shell
export K8S_CLUSTER_OVERRIDE='minikube'
# When using Minikube, the K8s user is your local user.
export K8S_USER_OVERRIDE=$USER
```
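If you script your cluster setup, a fail-fast check that both overrides are populated can save a confusing failure later; this guard is a sketch and is not part of the original doc:

```shell
# Same overrides as above; `id -un` is a fallback when $USER is unset.
export K8S_CLUSTER_OVERRIDE='minikube'
export K8S_USER_OVERRIDE=${USER:-$(id -un)}

# Fail fast if either override ended up empty.
[ -n "$K8S_CLUSTER_OVERRIDE" ] && [ -n "$K8S_USER_OVERRIDE" ] && echo "overrides set"
```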
You need to deploy Knative itself a bit differently if you would like to have
routable `Route` endpoints, and if you would like to set up secrets for
accessing an image registry from within your cluster, e.g.
[GCR](#minikube-with-gcr):
1. [Deploy istio](../DEVELOPMENT.md#deploy-istio):
By default istio uses a `LoadBalancer`, which is not available in Minikube
but is required for Knative to function properly (`Route` endpoints must
be available via a routable IP before they will be marked as ready),
so deploy istio with `LoadBalancer` replaced by `NodePort`:
```bash
sed 's/LoadBalancer/NodePort/' third_party/istio-0.8.0/istio.yaml | kubectl apply -f -
```
(Then optionally [enable istio injection](../DEVELOPMENT.md#deploy-istio).)
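The `sed` rewrite above is a plain textual substitution, so you can sanity-check it on a stand-in manifest fragment (this snippet is illustrative and does not touch the real `istio.yaml`):

```shell
# Stand-in for a Service spec line in istio.yaml.
echo "  type: LoadBalancer" | sed 's/LoadBalancer/NodePort/'
# → "  type: NodePort"
```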
1. [Deploy build](../DEVELOPMENT.md#deploy-build):
```shell
kubectl apply -f ./third_party/config/build/release.yaml
```
1. [Deploy Knative Serving](../DEVELOPMENT.md#deploy-knative-serving):
1. Create the namespaces and service accounts you'll be modifying:
```bash
ko apply -f config/100-namespace.yaml
ko apply -f config/200-serviceaccount.yaml
```
1. Create the secrets you need and patch the service accounts to use them, e.g.
[to setup access to GCR](#minikube-with-gcr).
1. Deploy the rest of Knative:
```bash
ko apply -f config/
```
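Once `ko apply -f config/` finishes, you can watch the deployment come up. Per the chicken-and-egg note in the commit message, pods created before the secrets existed may be stuck pulling images; deleting them forces a redeploy with the patched service account. A sketch (assumes the `knative-serving` namespace from this doc):

```shell
# Watch until the controller and webhook pods report Running.
kubectl get pods -n knative-serving

# If a pod is stuck in ImagePullBackOff, delete it so it is recreated
# now that the imagePullSecrets are in place:
kubectl delete pod -n knative-serving --all
```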

1. [Enable log and metric collection](../DEVELOPMENT.md#enable-log-and-metric-collection)
(Note: this requires a cluster with 12 GB of memory; see the commands above.)


### Minikube with GCR

_This is only necessary if you are not using public Knative Serving and Build
images._
1. Create a Kubernetes secret in the `knative-serving` and `build-system` namespaces:
```shell
export [email protected]
kubectl create secret docker-registry "knative-serving-gcr" \
--docker-server=$GCR_DOMAIN \
--docker-username=_json_key \
--docker-password="$(cat minikube-gcr-key.json)" \
--docker-email=$DOCKER_EMAIL \
-n "knative-serving"
kubectl create secret docker-registry "build-gcr" \
--docker-server=$GCR_DOMAIN \
--docker-username=_json_key \
--docker-password="$(cat minikube-gcr-key.json)" \
--docker-email=$DOCKER_EMAIL \
-n "build-system"
```
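To confirm both secrets landed in the right namespaces (a quick check that is not in the original doc):

```shell
kubectl get secret knative-serving-gcr -n knative-serving
kubectl get secret build-gcr -n build-system
```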
_The secret must be created in the same namespace as the pod or service
account that uses it._

1. Patch the `controller` and
`build-controller` service accounts to use these secrets:
```shell
kubectl patch serviceaccount "build-controller" \
-p '{"imagePullSecrets": [{"name": "build-gcr"}]}' \
-n "build-system"
kubectl patch serviceaccount "controller" \
-p '{"imagePullSecrets": [{"name": "knative-serving-gcr"}]}' \
-n "knative-serving"
```
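The `-p` argument is a JSON merge patch. If you script this for several service accounts, a tiny helper keeps the JSON in one place; the function name here is made up for illustration:

```shell
# Build the imagePullSecrets patch body for a given secret name.
make_pull_secret_patch() {
  printf '{"imagePullSecrets": [{"name": "%s"}]}' "$1"
}

make_pull_secret_patch "knative-serving-gcr"
# → {"imagePullSecrets": [{"name": "knative-serving-gcr"}]}
```

You would then call it as, e.g., `kubectl patch serviceaccount "controller" -p "$(make_pull_secret_patch knative-serving-gcr)" -n knative-serving`.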


Use the same procedure to add imagePullSecrets to service accounts in any
namespace. Use the `default` service account for pods that do not specify a
service account.
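For example, to let pods that run under the `default` service account in the `default` namespace pull from GCR (the secret name `gcr` here is illustrative, not from the original doc):

```shell
# Create the registry secret in the pod's namespace...
kubectl create secret docker-registry "gcr" \
  --docker-server=$GCR_DOMAIN \
  --docker-username=_json_key \
  --docker-password="$(cat minikube-gcr-key.json)" \
  --docker-email=$DOCKER_EMAIL \
  -n default

# ...then attach it to the namespace's default service account.
kubectl patch serviceaccount "default" \
  -p '{"imagePullSecrets": [{"name": "gcr"}]}' \
  -n default
```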
