Refer to knative/docs cluster setup instead of duplicating
The DEVELOPMENT.md in knative/serving referred to a doc on setting up a
kubernetes cluster (either in GKE or with minikube) which had fallen out
of date with very similar installation docs in knative/docs.

I ran into this when trying to figure out the correct scopes to use for
creating a cluster which could pass the knative/build-pipeline kaniko
integration test (tektoncd/pipeline#150)
and it turned out that the `--scopes` in the doc referenced in this
repo are different from the `--scopes` in the knative/docs repo. (I
worked around my problem by using `storage-full`, which isn't used in
either set of docs, but that's a different story!)

The minikube docs that were in this repo also contained args for
specifying the location of the cluster CA certs, but I'm assuming this
is no longer needed since knative/docs doesn't have this and
kubernetes/minikube#1647 is resolved.
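For context, the workaround looked roughly like this (a sketch, not taken from either set of docs; the other cluster-creation flags from the docs are omitted here, and `--scopes=storage-full` is the value I used, which neither doc lists):

```shell
# Hypothetical sketch: create the cluster with full storage scope so kaniko
# can push and pull images during the build-pipeline integration test.
gcloud container clusters create knative-demo \
  --zone=us-east1-d \
  --scopes=storage-full
```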
bobcatfish committed Nov 6, 2018
1 parent 81ef6ec commit 3ec3c75
Showing 2 changed files with 108 additions and 194 deletions.
7 changes: 5 additions & 2 deletions DEVELOPMENT.md
@@ -14,7 +14,7 @@ to `Knative Serving`. Also take a look at:
1. Setup [GitHub access via
SSH](https://help.github.com/articles/connecting-to-github-with-ssh/)
1. Install [requirements](#requirements)
-1. [Set up a kubernetes cluster](./docs/creating-a-kubernetes-cluster.md)
+1. [Set up a kubernetes cluster](docs/creating-a-kubernetes-cluster.md)
1. [Set up a docker repository you can push
to](./docs/setting-up-a-docker-registry.md)
1. Set up your [shell environment](#environment-setup)
@@ -62,7 +62,10 @@ export PATH="${PATH}:${GOPATH}/bin"
export KO_DOCKER_REPO='gcr.io/my-gcloud-project-name'
export DOCKER_REPO_OVERRIDE="${KO_DOCKER_REPO}"
export K8S_CLUSTER_OVERRIDE='my-k8s-cluster-name'
-export K8S_USER_OVERRIDE='my-k8s-user'
+# When using GKE, the K8s user is your GCP user.
+export K8S_USER_OVERRIDE=$(gcloud config get-value core/account)
+# When using Minikube, the K8s user is your local user.
+export K8S_USER_OVERRIDE=$USER
```
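As a quick sanity check before running `ko` or the test scripts, a small sketch like this (values are illustrative, not the real project or cluster names) confirms all four override variables are set:

```shell
# Illustrative values; substitute your own project, cluster, and user names.
export KO_DOCKER_REPO='gcr.io/my-gcloud-project-name'
export DOCKER_REPO_OVERRIDE="${KO_DOCKER_REPO}"
export K8S_CLUSTER_OVERRIDE='my-k8s-cluster-name'
export K8S_USER_OVERRIDE="${USER:-my-k8s-user}"

# Report any override variable that is still unset or empty.
missing=""
for v in KO_DOCKER_REPO DOCKER_REPO_OVERRIDE K8S_CLUSTER_OVERRIDE K8S_USER_OVERRIDE; do
  [ -n "$(printenv "$v")" ] || missing="$missing $v"
done
echo "missing:${missing:- none}"
```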

Make sure to configure [authentication](
295 changes: 103 additions & 192 deletions docs/creating-a-kubernetes-cluster.md
@@ -1,144 +1,54 @@
-# Creating a Kubernetes Cluster for Knative Serving
+# Creating a Kubernetes cluster

-Two options:
+This doc describes two options for creating a k8s cluster:

* Setup a [GKE cluster](#gke)
* Run [minikube](#minikube) locally

## GKE

To use a k8s cluster running in GKE:
1. [Install required tools and setup GCP project](https://github.com/knative/docs/blob/master/install/Knative-with-GKE.md#before-you-begin)
   (You may find it useful to save the ID of the project in an environment variable, e.g. `PROJECT_ID`.)
1. [Create a GKE cluster for knative](https://github.com/knative/docs/blob/master/install/Knative-with-GKE.md#creating-a-kubernetes-cluster)
1. Add `K8S_USER_OVERRIDE` to your .bashrc (see [Environment Setup](#environment-setup)):

1. Install `gcloud` using [the instructions for your
platform](https://cloud.google.com/sdk/downloads).
```shell
# When using GKE, the K8s user is your GCP user.
export K8S_USER_OVERRIDE=$(gcloud config get-value core/account)
```

1. Create a GCP project (or use an existing project if you've already created
one) at http://console.cloud.google.com/home/dashboard. Set the ID of the
project in an environment variable (e.g. `PROJECT_ID`).
_If you are a new GCP user, you might be eligible for a trial credit, making
your GKE cluster and other resources free for a short time. Otherwise, any
GCP resources you create will cost money._

If you have an existing GKE cluster you'd like to use, you can fetch your credentials with:
1. Enable the k8s API:

```shell
gcloud --project=$PROJECT_ID services enable container.googleapis.com
```

1. Create a k8s cluster (version 1.10 or greater):

```shell
gcloud --project=$PROJECT_ID container clusters create \
--cluster-version=latest \
--zone=us-east1-d \
--scopes=cloud-platform \
--machine-type=n1-standard-4 \
--enable-autoscaling --min-nodes=1 --max-nodes=3 \
knative-demo
```

* Version 1.10+ is required
* Change this to whichever zone you choose
* cloud-platform scope is required to access GCB
* Knative Serving currently requires 4-cpu nodes to run conformance tests.
Changing the machine type from the default may cause failures.
* Autoscale from 1 to 3 nodes. Adjust this for your use case
* Change this to your preferred cluster name

You can see the list of supported cluster versions in a particular zone by
running:

```shell
# Get the list of valid versions in us-east1-d
gcloud container get-server-config --zone us-east1-d
```

1. **Alternately**, if you wish to re-use an already-created cluster,
you can fetch the credentials to your local machine with:

```shell
# Load credentials for the new cluster in us-east1-d
gcloud container clusters get-credentials --zone us-east1-d knative-demo
```

1. If you haven't installed `kubectl` yet, you can install it now with
`gcloud`:
```shell
gcloud components install kubectl
```
## Minikube
1. [Install required tools](https://github.com/knative/docs/blob/master/install/Knative-with-Minikube.md#before-you-begin)
1. [Create a Kubernetes cluster with minikube](https://github.com/knative/docs/blob/master/install/Knative-with-Minikube.md#creating-a-kubernetes-cluster)
1. [Configure your shell environment](../DEVELOPMENT.md#environment-setup)
to use your minikube cluster:
```shell
export K8S_CLUSTER_OVERRIDE='minikube'
# When using Minikube, the K8s user is your local user.
export K8S_USER_OVERRIDE=$USER
```
1. [Install and configure
minikube](https://github.com/kubernetes/minikube#minikube) with a [VM
driver](https://github.com/kubernetes/minikube#requirements), e.g. `kvm2` on
Linux or `hyperkit` on macOS.
1. [Create a cluster](https://github.com/kubernetes/minikube#quickstart) with
version 1.10 or greater and your chosen VM driver.
The following commands will set up a cluster with `8192 MB` of memory and `4`
CPUs. If you want to [enable metric and log collection](./DEVELOPMENT.md#enable-log-and-metric-collection),
bump the memory to `12288 MB`.
_Providing any admission control plugins overrides the default set provided
by minikube, so we must explicitly list all plugins we want enabled._
_Until minikube [makes this the
default](https://github.com/kubernetes/minikube/issues/1647), the
certificate controller must be told where to find the cluster CA certs on
the VM._
For Linux use:
```shell
minikube start --memory=8192 --cpus=4 \
--kubernetes-version=v1.10.5 \
--vm-driver=kvm2 \
--bootstrapper=kubeadm \
--extra-config=controller-manager.cluster-signing-cert-file="/var/lib/localkube/certs/ca.crt" \
--extra-config=controller-manager.cluster-signing-key-file="/var/lib/localkube/certs/ca.key" \
--extra-config=apiserver.admission-control="LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook"
```
For macOS use:
```shell
minikube start --memory=8192 --cpus=4 \
--kubernetes-version=v1.10.5 \
--vm-driver=hyperkit \
--bootstrapper=kubeadm \
--extra-config=controller-manager.cluster-signing-cert-file="/var/lib/localkube/certs/ca.crt" \
--extra-config=controller-manager.cluster-signing-key-file="/var/lib/localkube/certs/ca.key" \
--extra-config=apiserver.admission-control="LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook"
```
1. Install Knative Serving
Before installing knative on minikube, we need to do two things:
1. [Workaround Minikube's lack of support for `LoadBalancer` type services](#loadbalancer-support-in-minikube)
1. [Configure `ko` for local publishing](#minikube-with-ko)

After doing those, you can [deploy the Knative Serving
components](../DEVELOPMENT.md#starting-knative-serving) to
Minikube the same way you would any other Kubernetes cluster.
1. Take note of the workarounds required for:
* [Installing Istio](https://github.com/knative/docs/blob/master/install/Knative-with-Minikube.md#installing-istio)
* [Installing Serving](https://github.com/knative/docs/blob/master/install/Knative-with-Minikube.md#installing-knative-serving)
* [Loadbalancer support](#loadbalancer-support-in-minikube)
* [`ko`](#minikube-with-ko)
* [Images](#enabling-knative-to-use-images-in-minikube)
* [GCR](#minikube-with-gcr)
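With those workarounds in place, deployment itself is the normal flow described in DEVELOPMENT.md; as a sketch (assumes `ko` is installed and `KO_DOCKER_REPO` is already exported):

```shell
# From the root of your knative/serving checkout:
ko apply -f config/
```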
### `LoadBalancer` Support in Minikube
@@ -194,47 +104,47 @@ docker tag gcr.io/knative-samples/primer:latest dev.local/knative-samples/primer
You can use Google Container Registry as the registry for a Minikube cluster.
-1. [Set up a GCR repo](setting-up-a-docker-registry.md). Export the environment
-variable `PROJECT_ID` as the name of your project. Also export `GCR_DOMAIN`
-as the domain name of your GCR repo. This will be either `gcr.io` or a
-region-specific variant like `us.gcr.io`.
+1. [Set up a GCR repo](docs/setting-up-a-docker-registry.md). Export the environment
+variable `PROJECT_ID` as the name of your project. Also export `GCR_DOMAIN`
+as the domain name of your GCR repo. This will be either `gcr.io` or a
+region-specific variant like `us.gcr.io`.
```shell
export PROJECT_ID=knative-demo-project
export GCR_DOMAIN=gcr.io
```
To publish builds to GCR, set `KO_DOCKER_REPO` or
`DOCKER_REPO_OVERRIDE` to the GCR repo's URL.

```shell
export KO_DOCKER_REPO="${GCR_DOMAIN}/${PROJECT_ID}"
export DOCKER_REPO_OVERRIDE="${KO_DOCKER_REPO}"
```
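Combining the two exports above, the resulting repo URL comes out as follows (the `echo` is added only to show the value):

```shell
export PROJECT_ID=knative-demo-project
export GCR_DOMAIN=gcr.io
export KO_DOCKER_REPO="${GCR_DOMAIN}/${PROJECT_ID}"
export DOCKER_REPO_OVERRIDE="${KO_DOCKER_REPO}"
echo "${KO_DOCKER_REPO}"   # prints gcr.io/knative-demo-project
```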

1. Create a GCP service account:

```shell
gcloud iam service-accounts create minikube-gcr \
--display-name "Minikube GCR Pull" \
--project $PROJECT_ID
```

1. Give your service account the `storage.objectViewer` role:

```shell
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member "serviceAccount:minikube-gcr@${PROJECT_ID}.iam.gserviceaccount.com" \
--role roles/storage.objectViewer
```

1. Create a key credential file for the service account:

```shell
gcloud iam service-accounts keys create \
--iam-account "minikube-gcr@${PROJECT_ID}.iam.gserviceaccount.com" \
minikube-gcr-key.json
```

Now you can use the `minikube-gcr-key.json` file to create image pull secrets
and link them to Kubernetes service accounts. _A secret must be created and
@@ -244,40 +154,41 @@ For example, use these steps to allow Minikube to pull Knative Serving and Build
from GCR as published in our development flow (`ko apply -f config/`).
_This is only necessary if you are not using public Knative Serving and Build images._

1. Create a Kubernetes secret in the `knative-serving` and `knative-build` namespace:

```shell
export [email protected]
kubectl create secret docker-registry "knative-serving-gcr" \
--docker-server=$GCR_DOMAIN \
--docker-username=_json_key \
--docker-password="$(cat minikube-gcr-key.json)" \
--docker-email=$DOCKER_EMAIL \
-n "knative-serving"
kubectl create secret docker-registry "build-gcr" \
--docker-server=$GCR_DOMAIN \
--docker-username=_json_key \
--docker-password="$(cat minikube-gcr-key.json)" \
--docker-email=$DOCKER_EMAIL \
-n "knative-build"
```

_The secret must be created in the same namespace as the pod or service
account._

1. Add the secret as an imagePullSecret to the `controller` and
`build-controller` service accounts:

```shell
kubectl patch serviceaccount "build-controller" \
-p '{"imagePullSecrets": [{"name": "build-gcr"}]}' \
-n "knative-build"
kubectl patch serviceaccount "controller" \
-p '{"imagePullSecrets": [{"name": "knative-serving-gcr"}]}' \
-n "knative-serving"
```

Use the same procedure to add imagePullSecrets to service accounts in any
namespace. Use the `default` service account for pods that do not specify a
service account.
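The same pattern works for the `default` service account. As a sketch (the namespace name is hypothetical), the patch payload can be built and inspected before applying it:

```shell
# Build the imagePullSecrets patch for an arbitrary secret name.
secret_name="knative-serving-gcr"
patch=$(printf '{"imagePullSecrets": [{"name": "%s"}]}' "$secret_name")
echo "$patch"   # prints {"imagePullSecrets": [{"name": "knative-serving-gcr"}]}

# Against a real cluster (hypothetical namespace):
# kubectl patch serviceaccount "default" -p "$patch" -n "my-namespace"
```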

-See also the [private-repo sample README](./../sample/private-repos/README.md).
+See also the [private-repo sample README](/sample/private-repos/README.md).
