Two options:

- Set up a GKE cluster
- Run minikube locally
To use a k8s cluster running in GKE:
- Install `gcloud` using the instructions for your platform.

- Create a GCP project (or use an existing project if you've already created one) at http://console.cloud.google.com/home/dashboard. Set the ID of the project in an environment variable (e.g. `PROJECT_ID`).

  If you are a new GCP user, you might be eligible for a trial credit, making your GKE cluster and other resources free for a short time. Otherwise, any GCP resources you create will cost money.
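As a concrete sketch (the project ID below is hypothetical; substitute your own):

```shell
# Hypothetical project ID; substitute the ID of your own GCP project.
export PROJECT_ID=my-knative-project
echo "Using GCP project: ${PROJECT_ID}"

# Optionally make it the gcloud default as well:
#   gcloud config set project $PROJECT_ID
```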
- Enable the k8s API:

  ```shell
  gcloud --project=$PROJECT_ID services enable container.googleapis.com
  ```
- Create a k8s cluster (version 1.10 or greater):

  ```shell
  gcloud --project=$PROJECT_ID container clusters create \
    --cluster-version=latest \
    --zone=us-east1-d \
    --scopes=cloud-platform \
    --machine-type=n1-standard-4 \
    --enable-autoscaling --min-nodes=1 --max-nodes=3 \
    knative-demo
  ```

  - `--cluster-version=latest`: version 1.10+ is required.
  - `--zone=us-east1-d`: change this to whichever zone you choose.
  - `--scopes=cloud-platform`: the cloud-platform scope is required to access GCB.
  - `--machine-type=n1-standard-4`: Knative Serving currently requires 4-CPU nodes to run conformance tests. Changing the machine type from the default may cause failures.
  - `--enable-autoscaling --min-nodes=1 --max-nodes=3`: autoscale from 1 to 3 nodes. Adjust this for your use case.
  - `knative-demo`: change this to your preferred cluster name.

  You can see the list of supported cluster versions in a particular zone by running:

  ```shell
  # Get the list of valid versions in us-east1-d
  gcloud container get-server-config --zone us-east1-d
  ```
- Alternately, if you wish to re-use an already-created cluster, you can fetch the credentials to your local machine with:

  ```shell
  # Load credentials for the new cluster in us-east1-d
  gcloud container clusters get-credentials --zone us-east1-d knative-demo
  ```
- If you haven't installed `kubectl` yet, you can install it now with `gcloud`:

  ```shell
  gcloud components install kubectl
  ```
- Add to your `.bashrc`:

  ```shell
  # When using GKE, the K8s user is your GCP user.
  export K8S_USER_OVERRIDE=$(gcloud config get-value core/account)
  ```
To run a local k8s cluster with minikube:

- Install and configure minikube with a VM driver, e.g. `kvm2` on Linux or `hyperkit` on macOS.
- Create a cluster with version 1.10 or greater and your chosen VM driver.

  The following commands will set up a cluster with `8192 MB` of memory and `4` CPUs. If you want to enable metric and log collection, bump the memory to `12288 MB`.

  Providing any admission control plugins overrides the default set provided by minikube, so we must explicitly list all plugins we want enabled. Until minikube makes this the default, the certificate controller must be told where to find the cluster CA certs on the VM.

  For Linux use:

  ```shell
  minikube start --memory=8192 --cpus=4 \
    --kubernetes-version=v1.10.5 \
    --vm-driver=kvm2 \
    --bootstrapper=kubeadm \
    --extra-config=controller-manager.cluster-signing-cert-file="/var/lib/localkube/certs/ca.crt" \
    --extra-config=controller-manager.cluster-signing-key-file="/var/lib/localkube/certs/ca.key" \
    --extra-config=apiserver.admission-control="LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook"
  ```

  For macOS use:

  ```shell
  minikube start --memory=8192 --cpus=4 \
    --kubernetes-version=v1.10.5 \
    --vm-driver=hyperkit \
    --bootstrapper=kubeadm \
    --extra-config=controller-manager.cluster-signing-cert-file="/var/lib/localkube/certs/ca.crt" \
    --extra-config=controller-manager.cluster-signing-key-file="/var/lib/localkube/certs/ca.key" \
    --extra-config=apiserver.admission-control="LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook"
  ```
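Since supplying `--extra-config=apiserver.admission-control` replaces minikube's default plugin set wholesale, it can help to keep the list in a shell variable and sanity-check it before starting the cluster. A minimal sketch (the variable name is illustrative):

```shell
# Keep the full admission-control plugin list in one place; passing any list
# to minikube overrides its defaults, so every required plugin must appear.
ADMISSION_PLUGINS="LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook"

# Sanity check: count the plugins before passing the list to minikube start.
echo "$ADMISSION_PLUGINS" | tr ',' '\n' | wc -l
```

The variable can then be interpolated into either the Linux or macOS `minikube start` invocation, so the two commands differ only in `--vm-driver`.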
- Configure your shell environment to use your minikube cluster:

  ```shell
  export K8S_CLUSTER_OVERRIDE='minikube'
  # When using Minikube, the K8s user is your local user.
  export K8S_USER_OVERRIDE=$USER
  ```
- Install Knative Serving
Before installing Knative on minikube, we need to do two things:

- Work around Minikube's lack of support for `LoadBalancer`-type services
- Configure `ko` for local publishing

After doing those, you can deploy the Knative Serving components to Minikube the same way you would on any other Kubernetes cluster.
By default istio uses a `LoadBalancer` service, which is not yet supported by Minikube but is required for Knative to function properly (`Route` endpoints MUST be available via a routable IP before they will be marked as ready). One possible workaround is to install a custom controller that provisions an external IP using the service's ClusterIP, which must be made routable on the minikube host. These two commands accomplish this, and should be run once whenever you start a new minikube cluster:

```shell
sudo ip route add $(cat ~/.minikube/profiles/minikube/config.json | jq -r ".KubernetesConfig.ServiceCIDR") via $(minikube ip)
kubectl run minikube-lb-patch --replicas=1 --image=elsonrodriguez/minikube-lb-patch:0.1 --namespace=kube-system
```
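To make concrete what the `ip route` command above computes, here is a sketch with stand-in values (on a real host, the CIDR comes from minikube's profile config and the gateway from `minikube ip`):

```shell
# Stand-in values for illustration; the real values come from
# ~/.minikube/profiles/minikube/config.json and `minikube ip`.
SERVICE_CIDR="10.96.0.0/12"
MINIKUBE_IP="192.168.39.100"

# The workaround routes the cluster's Service CIDR via the minikube VM,
# making ClusterIPs reachable from the host:
echo "sudo ip route add ${SERVICE_CIDR} via ${MINIKUBE_IP}"
```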
You can instruct `ko` to sideload images into your Docker daemon instead of publishing them to a registry by setting `KO_DOCKER_REPO=ko.local`:

```shell
# Use the minikube docker daemon (among other things)
eval $(minikube docker-env)

# Switch the current kubectl context to minikube
kubectl config use-context minikube

# Set KO_DOCKER_REPO to a sentinel value for ko to sideload into the daemon.
export KO_DOCKER_REPO="ko.local"
```
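As an illustration of what the sentinel does (the Go import path below is hypothetical): with `KO_DOCKER_REPO=ko.local`, `ko` names each image after the binary's import path and loads it into the local daemon instead of pushing it.

```shell
# Hypothetical import path; ko derives the image name from KO_DOCKER_REPO
# plus the Go import path of the binary being built.
export KO_DOCKER_REPO="ko.local"
IMPORT_PATH="github.com/example/app/cmd/server"
echo "${KO_DOCKER_REPO}/${IMPORT_PATH}"
```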
In order to have Knative access an image in Minikube's Docker daemon, you should prefix your image name with the `dev.local` registry. This will cause Knative to use the cached image. You must not tag your image as `latest`, since this causes Kubernetes to always attempt a pull.

For example:

```shell
eval $(minikube docker-env)
docker pull gcr.io/knative-samples/primer:latest
docker tag gcr.io/knative-samples/primer:latest dev.local/knative-samples/primer:v1
```
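Because a `:latest` tag makes Kubernetes default the image pull policy to `Always` (defeating the `dev.local` cache), a small guard like the following sketch can catch the mistake before deploying (the image name is hypothetical):

```shell
# Refuse to use a dev.local image tagged :latest, since Kubernetes would
# then always attempt a pull instead of using the cached image.
IMAGE="dev.local/knative-samples/primer:v1"
TAG="${IMAGE##*:}"
if [ "$TAG" = "latest" ]; then
  echo "refusing ${IMAGE}: do not tag dev.local images as latest"
else
  echo "ok: ${IMAGE}"
fi
```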
You can use Google Container Registry as the registry for a Minikube cluster.
- Set up a GCR repo. Export the environment variable `PROJECT_ID` as the name of your project. Also export `GCR_DOMAIN` as the domain name of your GCR repo. This will be either `gcr.io` or a region-specific variant like `us.gcr.io`.

  ```shell
  export PROJECT_ID=knative-demo-project
  export GCR_DOMAIN=gcr.io
  ```

  To publish builds to GCR, set `KO_DOCKER_REPO` or `DOCKER_REPO_OVERRIDE` to the GCR repo's URL.

  ```shell
  export KO_DOCKER_REPO="${GCR_DOMAIN}/${PROJECT_ID}"
  export DOCKER_REPO_OVERRIDE="${KO_DOCKER_REPO}"
  ```
- Create a GCP service account:

  ```shell
  gcloud iam service-accounts create minikube-gcr \
    --display-name "Minikube GCR Pull" \
    --project $PROJECT_ID
  ```
- Give your service account the `storage.objectViewer` role:

  ```shell
  gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member "serviceAccount:minikube-gcr@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role roles/storage.objectViewer
  ```
- Create a key credential file for the service account:

  ```shell
  gcloud iam service-accounts keys create \
    --iam-account "minikube-gcr@${PROJECT_ID}.iam.gserviceaccount.com" \
    minikube-gcr-key.json
  ```
Now you can use the `minikube-gcr-key.json` file to create image pull secrets and link them to Kubernetes service accounts. A secret must be created and linked to a service account in each namespace that will pull images from GCR.

For example, use these steps to allow Minikube to pull Knative Serving and Build images from GCR as published in our development flow (`ko apply -f config/`). This is only necessary if you are not using public Knative Serving and Build images.
- Create a Kubernetes secret in the `knative-serving` and `knative-build` namespaces:

  ```shell
  export DOCKER_EMAIL=[email protected]
  kubectl create secret docker-registry "knative-serving-gcr" \
    --docker-server=$GCR_DOMAIN \
    --docker-username=_json_key \
    --docker-password="$(cat minikube-gcr-key.json)" \
    --docker-email=$DOCKER_EMAIL \
    -n "knative-serving"
  kubectl create secret docker-registry "build-gcr" \
    --docker-server=$GCR_DOMAIN \
    --docker-username=_json_key \
    --docker-password="$(cat minikube-gcr-key.json)" \
    --docker-email=$DOCKER_EMAIL \
    -n "knative-build"
  ```

  The secret must be created in the same namespace as the pod or service account.
- Add the secret as an imagePullSecret to the `controller` and `build-controller` service accounts:

  ```shell
  kubectl patch serviceaccount "build-controller" \
    -p '{"imagePullSecrets": [{"name": "build-gcr"}]}' \
    -n "knative-build"
  kubectl patch serviceaccount "controller" \
    -p '{"imagePullSecrets": [{"name": "knative-serving-gcr"}]}' \
    -n "knative-serving"
  ```
Use the same procedure to add imagePullSecrets to service accounts in any namespace. Use the `default` service account for pods that do not specify a service account.
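The patch bodies above are JSON strategic-merge patches; to adapt them for another namespace, only the secret name, service account, and namespace change. A sketch that just builds and prints the patch shape (the secret name is illustrative):

```shell
# Build the imagePullSecrets patch for an arbitrary secret name; the same
# shape works for any service account in any namespace.
SECRET_NAME="my-gcr-secret"   # illustrative secret name
PATCH="{\"imagePullSecrets\": [{\"name\": \"${SECRET_NAME}\"}]}"
echo "$PATCH"
```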
See also the private-repo sample README.