diff --git a/.travis.yml b/.travis.yml index b6863c4301d03..4f150089441b2 100644 --- a/.travis.yml +++ b/.travis.yml @@ -1,10 +1,13 @@ language: go go: - - 1.10.2 + - 1.11.5 jobs: include: - name: "Testing examples" + cache: + directories: + - $HOME/.cache/go-build # Don't want default ./... here: install: - export PATH=$GOPATH/bin:$PATH @@ -13,7 +16,7 @@ jobs: # Make sure we are testing against the correct branch - pushd $GOPATH/src/k8s.io && git clone https://github.com/kubernetes/kubernetes && popd - - pushd $GOPATH/src/k8s.io/kubernetes && git checkout release-1.11 && popd + - pushd $GOPATH/src/k8s.io/kubernetes && git checkout release-1.13 && make generated_files && popd - cp -L -R $GOPATH/src/k8s.io/kubernetes/vendor/ $GOPATH/src/ - rm -r $GOPATH/src/k8s.io/kubernetes/vendor/ diff --git a/Makefile b/Makefile index cfa69df40cb43..f44e04b46eb62 100644 --- a/Makefile +++ b/Makefile @@ -36,7 +36,7 @@ sass-develop: scripts/sass.sh develop serve: ## Boot the development server. - hugo server --ignoreCache --buildFuture + hugo server --buildFuture docker-image: $(DOCKER) build . --tag $(DOCKER_IMAGE) --build-arg HUGO_VERSION=$(HUGO_VERSION) diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index ecf6d21378fad..8ee41f9a7268d 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -137,6 +137,40 @@ aliases: - stewart-yu - xiangpengzhao - zhangxiaoyu-zidif + sig-docs-fr-owners: #Team: Documentation; GH: sig-docs-fr-owners + - sieben + - perriea + - rekcah78 + - lledru + - yastij + - smana + - rbenzair + - abuisine + - erickhun + - jygastaud + - awkif + - oussemos + sig-docs-fr-reviews: #Team: Documentation; GH: sig-docs-fr-reviews + - sieben + - perriea + - rekcah78 + - lledru + - yastij + - smana + - rbenzair + - abuisine + - erickhun + - jygastaud + - awkif + - oussemos + sig-docs-it-owners: #Team: Italian docs localization; GH: sig-docs-it-owners + - rlenferink + - lledru + - micheleberardi + sig-docs-it-reviews: #Team: Italian docs PR reviews; GH:sig-docs-it-reviews + - rlenferink + - lledru + - micheleberardi sig-docs-ja-owners: #Team: Japanese docs localization; GH: sig-docs-ja-owners - cstoku - nasa9084 @@ -169,6 +203,7 @@ aliases: - markthink - tengqm - xiangpengzhao + - xichengliudui - zacharysarah - zhangxiaoyu-zidif sig-docs-zh-reviews: #Team Chinese docs reviews; GH: sig-docs-zh-reviews @@ -177,6 +212,7 @@ aliases: - markthink - tengqm - xiangpengzhao + - xichengliudui - zhangxiaoyu-zidif - pigletfly sig-federation: #Team: Federation; e.g. Federated Clusters diff --git a/README-fr.md b/README-fr.md new file mode 100644 index 0000000000000..cca46595ad023 --- /dev/null +++ b/README-fr.md @@ -0,0 +1,83 @@ +# Documentation de Kubernetes + +[![Build Status](https://api.travis-ci.org/kubernetes/website.svg?branch=master)](https://travis-ci.org/kubernetes/website) +[![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest) + +Bienvenue ! +Ce référentiel contient toutes les informations nécessaires à la construction du site web et de la documentation de Kubernetes. +Nous sommes très heureux que vous vouliez contribuer ! + +## Contribuer à la rédaction des docs + +Vous pouvez cliquer sur le bouton **Fork** en haut à droite de l'écran pour créer une copie de ce dépôt dans votre compte GitHub. +Cette copie s'appelle un *fork*. +Faites tous les changements que vous voulez dans votre fork, et quand vous êtes prêt à nous envoyer ces changements, allez dans votre fork et créez une nouvelle pull request pour nous le faire savoir. 
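À titre d'illustration seulement (le compte `VOTRE_UTILISATEUR` et la branche `ma-modification` sont des exemples à adapter), un enchaînement de commandes typique ressemble à ceci :

```bash
# Cloner votre fork en local
git clone https://github.com/VOTRE_UTILISATEUR/website.git
cd website

# Créer une branche pour vos modifications
git checkout -b ma-modification

# Valider puis pousser les modifications vers votre fork
git add .
git commit -m "Corriger une page de la documentation"
git push origin ma-modification
# Ouvrez ensuite une pull request vers kubernetes/website depuis l'interface GitHub
```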
+
+Une fois votre pull request créée, un examinateur de Kubernetes se chargera de vous fournir une revue claire et exploitable.
+En tant que propriétaire de la pull request, **il est de votre responsabilité de modifier votre pull request pour tenir compte des commentaires qui vous ont été fournis par l'examinateur de Kubernetes.**
+Notez également que vous pourriez vous retrouver avec plus d'un examinateur de Kubernetes pour vous fournir des commentaires, ou recevoir des commentaires d'un examinateur différent de celui qui vous a été initialement affecté.
+De plus, dans certains cas, l'un de vos examinateurs peut demander un examen technique à un [examinateur technique de Kubernetes](https://github.com/kubernetes/website/wiki/Tech-reviewers) au besoin.
+Les examinateurs feront de leur mieux pour fournir une revue rapidement, mais le temps de réponse peut varier selon les circonstances.
+
+Pour plus d'informations sur la contribution à la documentation Kubernetes, voir :
+
+* [Commencez à contribuer](https://kubernetes.io/docs/contribute/start/)
+* [Aperçu des modifications apportées à votre documentation](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
+* [Utilisation des modèles de page](http://kubernetes.io/docs/contribute/style/page-templates/)
+* [Guide de style de la documentation](http://kubernetes.io/docs/contribute/style/style-guide/)
+* [Traduction de la documentation Kubernetes](https://kubernetes.io/docs/contribute/localization/)
+
+## Exécuter le site localement en utilisant Docker
+
+La façon recommandée d'exécuter le site web Kubernetes localement est d'utiliser une image [Docker](https://docker.com) spécialisée qui inclut le générateur de site statique [Hugo](https://gohugo.io).
+
+> Si vous êtes sous Windows, vous aurez besoin de quelques outils supplémentaires que vous pouvez installer avec [Chocolatey](https://chocolatey.org). `choco install make`
+
+> Si vous préférez exécuter le site web localement sans Docker, voir [Exécuter le site localement en utilisant Hugo](#exécuter-le-site-localement-en-utilisant-hugo) ci-dessous.
+
+Si vous avez Docker [opérationnel](https://www.docker.com/get-started), construisez l'image Docker `kubernetes-hugo` localement :
+
+```bash
+make docker-image
+```
+
+Une fois l'image construite, vous pouvez exécuter le site localement :
+
+```bash
+make docker-serve
+```
+
+Ouvrez votre navigateur à l'adresse http://localhost:1313 pour voir le site.
+Lorsque vous apportez des modifications aux fichiers sources, Hugo met à jour le site et force le navigateur à rafraîchir la page.
+
+## Exécuter le site localement en utilisant Hugo
+
+Voir la [documentation officielle de Hugo](https://gohugo.io/getting-started/installing/) pour les instructions d'installation de Hugo.
+Assurez-vous d'installer la version de Hugo spécifiée par la variable d'environnement `HUGO_VERSION` dans le fichier [`netlify.toml`](netlify.toml#L9).
+
+Pour exécuter le site localement lorsque vous avez Hugo installé :
+
+```bash
+make serve
+```
+
+Le serveur Hugo local démarrera sur le port 1313.
+Ouvrez votre navigateur à l'adresse http://localhost:1313 pour voir le site.
+Lorsque vous apportez des modifications aux fichiers sources, Hugo met à jour le site et force le navigateur à rafraîchir la page.
+
+## Communauté, discussion, contribution et assistance
+
+Apprenez comment vous engager avec la communauté Kubernetes sur la [page communauté](http://kubernetes.io/community/).
+ +Vous pouvez joindre les responsables de ce projet à l'adresse : + +- [Slack](https://kubernetes.slack.com/messages/sig-docs) +- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs) + +### Code de conduite + +La participation à la communauté Kubernetes est régie par le [Code de conduite de Kubernetes](code-of-conduct.md). + +## Merci ! + +Kubernetes prospère grâce à la participation de la communauté, et nous apprécions vraiment vos contributions à notre site et à notre documentation ! diff --git a/README-ko.md b/README-ko.md index 6b1075d4a3a3c..5ac8d1e61721a 100644 --- a/README-ko.md +++ b/README-ko.md @@ -1,5 +1,8 @@ # 쿠버네티스 문서화 +[![Build Status](https://api.travis-ci.org/kubernetes/website.svg?branch=master)](https://travis-ci.org/kubernetes/website) +[![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest) + 환영합니다! 이 저장소는 쿠버네티스 웹사이트 및 문서화를 만드는 데 필요로 하는 모든 asset에 대한 공간을 제공합니다. 여러 분이 기여를 원한다는 사실에 매우 기쁩니다! ## 문서에 기여하기 diff --git a/README.md b/README.md index b66fec640e042..fd22b3342c853 100644 --- a/README.md +++ b/README.md @@ -3,7 +3,7 @@ [![Build Status](https://api.travis-ci.org/kubernetes/website.svg?branch=master)](https://travis-ci.org/kubernetes/website) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest) -Welcome! This repository houses all of the assets required to build the Kubernetes website and documentation. We're very pleased that you want to contribute! +Welcome! This repository houses all of the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're very pleased that you want to contribute! ## Contributing to the docs diff --git a/config.toml b/config.toml index 98bf0d4a33d42..071c527e98a16 100644 --- a/config.toml +++ b/config.toml @@ -154,15 +154,49 @@ contentDir = "content/ko" time_format_blog = "2006.01.02" language_alternatives = ["en"] +[languages.ja] +title = "Kubernetes" +description = "Production-Grade Container Orchestration" +languageName = "日本語 Japanese" +weight = 4 +contentDir = "content/ja" + +[languages.ja.params] +time_format_blog = "2006.01.02" +language_alternatives = ["en"] + +[languages.fr] +title = "Kubernetes" +description = "Production-Grade Container Orchestration" +languageName ="Français" +weight = 5 +contentDir = "content/fr" + +[languages.fr.params] +time_format_blog = "02.01.2006" +# A list of language codes to look for untranslated content, ordered from left to right. +language_alternatives = ["en"] + +[languages.it] +title = "Kubernetes" +description = "Production-Grade Container Orchestration" +languageName ="Italian" +weight = 6 +contentDir = "content/it" + +[languages.it.params] +time_format_blog = "02.01.2006" +# A list of language codes to look for untranslated content, ordered from left to right. +language_alternatives = ["en"] + [languages.no] title = "Kubernetes" description = "Production-Grade Container Orchestration" languageName ="Norsk" -weight = 4 +weight = 7 contentDir = "content/no" [languages.no.params] time_format_blog = "02.01.2006" # A list of language codes to look for untranslated content, ordered from left to right. 
language_alternatives = ["en"] - diff --git a/content/en/_index.html b/content/en/_index.html index 0fccb173cb070..57a8b5b8baada 100644 --- a/content/en/_index.html +++ b/content/en/_index.html @@ -8,7 +8,7 @@ {{< blocks/section id="oceanNodes" >}} {{% blocks/feature image="flower" %}} -### [Kubernetes (k8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) is an open-source system for automating deployment, scaling, and management of containerized applications. +### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon [15 years of experience of running production workloads at Google](http://queue.acm.org/detail.cfm?id=2898444), combined with best-of-breed ideas and practices from the community. {{% /blocks/feature %}} diff --git a/content/en/blog/_posts/2016-08-00-Security-Best-Practices-Kubernetes-Deployment.md b/content/en/blog/_posts/2016-08-00-Security-Best-Practices-Kubernetes-Deployment.md index 74bb85843ad28..28b9a2ccb77fe 100644 --- a/content/en/blog/_posts/2016-08-00-Security-Best-Practices-Kubernetes-Deployment.md +++ b/content/en/blog/_posts/2016-08-00-Security-Best-Practices-Kubernetes-Deployment.md @@ -4,6 +4,8 @@ date: 2016-08-31 slug: security-best-practices-kubernetes-deployment url: /blog/2016/08/Security-Best-Practices-Kubernetes-Deployment --- +_Note: some of the recommendations in this post are no longer current. Current cluster hardening options are described in this [documentation](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/)._ + _Editor’s note: today’s post is by Amir Jerbi and Michael Cherny of Aqua Security, describing security best practices for Kubernetes deployments, based on data they’ve collected from various use-cases seen in both on-premises and cloud deployments._ Kubernetes provides many controls that can greatly improve your application security. Configuring them requires intimate knowledge with Kubernetes and the deployment’s security requirements. The best practices we highlight here are aligned to the container lifecycle: build, ship and run, and are specifically tailored to Kubernetes deployments. We adopted these best practices in [our own SaaS deployment](http://blog.aquasec.com/running-a-security-service-in-google-cloud-real-world-example) that runs Kubernetes on Google Cloud Platform. diff --git a/content/en/blog/_posts/2017-07-00-How-Watson-Health-Cloud-Deploys.md b/content/en/blog/_posts/2017-07-00-How-Watson-Health-Cloud-Deploys.md index 2917f55e79394..0d6e6481cd2cc 100644 --- a/content/en/blog/_posts/2017-07-00-How-Watson-Health-Cloud-Deploys.md +++ b/content/en/blog/_posts/2017-07-00-How-Watson-Health-Cloud-Deploys.md @@ -19,7 +19,7 @@ I was able to run more processes on a single physical server than I could using -To orchestrate container deployment, we are using[Armada infrastructure](https://console.bluemix.net/containers-kubernetes/launch), a Kubernetes implementation by IBM for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure. 
+To orchestrate container deployment, we are using [IBM Cloud Kubernetes Service infrastructure](https://cloud.ibm.com/containers-kubernetes/landing), a Kubernetes implementation by IBM for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure. @@ -39,7 +39,7 @@ Here is a snapshot of Watson Care Manager, running inside a Kubernetes cluster: -Before deploying an app, a user must create a worker node cluster. I can create a cluster using the kubectl cli commands or create it from[a Bluemix](http://bluemix.net/) dashboard. +Before deploying an app, a user must create a worker node cluster. I can create a cluster using the kubectl cli commands or create it from the [IBM Cloud](https://cloud.ibm.com/) dashboard. @@ -107,16 +107,16 @@ If needed, run a rolling update to update the existing pod. -Deploying the application in Armada: +Deploying the application in IBM Cloud Kubernetes Service: -Provision a cluster in Armada with \ worker nodes. Create Kubernetes controllers for deploying the containers in worker nodes, the Armada infrastructure pulls the Docker images from IBM Bluemix Docker registry to create containers. We tried deploying an application container and running a logmet agent (see Reading and displaying logs using logmet container, below) inside the containers that forwards the application logs to an IBM cloud logging service. As part of the process, YAML files are used to create a controller resource for the UrbanCode Deploy (UCD). UCD agent is deployed as a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) controller, which is used to connect to the UCD server. The whole process of deployment of application happens in UCD. To support the application for public access, we created a service resource to interact between pods and access container services. For storage support, we created persistent volume claims and mounted the volume for the containers. +Provision a cluster in IBM Cloud Kubernetes Service with \ worker nodes. Create Kubernetes controllers for deploying the containers in worker nodes, the IBM Cloud Kubernetes Service infrastructure pulls the Docker images from IBM Cloud Container Registry to create containers. We tried deploying an application container and running a logmet agent (see Reading and displaying logs using logmet container, below) inside the containers that forwards the application logs to an IBM Cloud logging service. As part of the process, YAML files are used to create a controller resource for the UrbanCode Deploy (UCD). UCD agent is deployed as a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) controller, which is used to connect to the UCD server. The whole process of deployment of application happens in UCD. To support the application for public access, we created a service resource to interact between pods and access container services. For storage support, we created persistent volume claims and mounted the volume for the containers. | ![](https://lh6.googleusercontent.com/iFKlbBX8rjWTuygIfjImdxP8R7xXuvaaoDwldEIC3VRL03XIehxagz8uePpXllYMSxoyai5a6N-0NB4aTGK9fwwd8leFyfypxtbmaWBK-b2Kh9awcA76-_82F7ZZl7lgbf0gyFN7) | -| UCD: IBM UrbanCode Deploy is a tool for automating application deployments through your environments. Armada: Kubernetes implementation of IBM. WH Docker Registry: Docker Private image registry. Common agent containers: We expect to configure our services to use the WHC mandatory agents. 
We deployed all ion containers. | +| UCD: IBM UrbanCode Deploy is a tool for automating application deployments through your environments. IBM Cloud Kubernetes Service: Kubernetes implementation of IBM. WH Docker Registry: Docker Private image registry. Common agent containers: We expect to configure our services to use the WHC mandatory agents. We deployed all ion containers. | @@ -142,7 +142,7 @@ Exposing services with Ingress: -To expose our services to outside the cluster, we used Ingress. In Armada, if we create a paid cluster, an Ingress controller is automatically installed for us to use. We were able to access services through Ingress by creating a YAML resource file that specifies the service path. +To expose our services to outside the cluster, we used Ingress. In IBM Cloud Kubernetes Service, if we create a paid cluster, an Ingress controller is automatically installed for us to use. We were able to access services through Ingress by creating a YAML resource file that specifies the service path. diff --git a/content/en/blog/_posts/2018-05-04-Announcing-Kubeflow-0-1.md b/content/en/blog/_posts/2018-05-04-Announcing-Kubeflow-0-1.md index 2b23ac523b961..9bc8ceb2c9cf7 100644 --- a/content/en/blog/_posts/2018-05-04-Announcing-Kubeflow-0-1.md +++ b/content/en/blog/_posts/2018-05-04-Announcing-Kubeflow-0-1.md @@ -94,7 +94,7 @@ If you’d like to try out Kubeflow, we have a number of options for you: 1. You can use sample walkthroughs hosted on [Katacoda](https://www.katacoda.com/kubeflow) 2. You can follow a guided tutorial with existing models from the [examples repository](https://github.com/kubeflow/examples). These include the [Github Issue Summarization](https://github.com/kubeflow/examples/tree/master/github_issue_summarization), [MNIST](https://github.com/kubeflow/examples/tree/master/mnist) and [Reinforcement Learning with Agents](https://github.com/kubeflow/examples/tree/master/agents). -3. You can start a cluster on your own and try your own model. Any Kubernetes conformant cluster will support Kubeflow including those from contributors [Caicloud](https://www.prnewswire.com/news-releases/caicloud-releases-its-kubernetes-based-cluster-as-a-service-product-claas-20-and-the-first-tensorflow-as-a-service-taas-11-while-closing-6m-series-a-funding-300418071.html), [Canonical](https://jujucharms.com/canonical-kubernetes/), [Google](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-container-cluster), [Heptio](https://heptio.com/products/kubernetes-subscription/), [Mesosphere](https://github.com/mesosphere/dcos-kubernetes-quickstart), [Microsoft](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough), [IBM](https://console.bluemix.net/docs/containers/cs_tutorials.html#cs_cluster_tutorial), [Red Hat/Openshift ](https://docs.openshift.com/container-platform/3.3/install_config/install/quick_install.html#install-config-install-quick-install)and [Weaveworks](https://www.weave.works/product/cloud/). +3. You can start a cluster on your own and try your own model. 
Any Kubernetes conformant cluster will support Kubeflow including those from contributors [Caicloud](https://www.prnewswire.com/news-releases/caicloud-releases-its-kubernetes-based-cluster-as-a-service-product-claas-20-and-the-first-tensorflow-as-a-service-taas-11-while-closing-6m-series-a-funding-300418071.html), [Canonical](https://jujucharms.com/canonical-kubernetes/), [Google](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-container-cluster), [Heptio](https://heptio.com/products/kubernetes-subscription/), [Mesosphere](https://github.com/mesosphere/dcos-kubernetes-quickstart), [Microsoft](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough), [IBM](https://cloud.ibm.com/docs/containers?topic=containers-cs_cluster_tutorial#cs_cluster_tutorial), [Red Hat/Openshift ](https://docs.openshift.com/container-platform/3.3/install_config/install/quick_install.html#install-config-install-quick-install)and [Weaveworks](https://www.weave.works/product/cloud/). There were also a number of sessions at KubeCon + CloudNativeCon EU 2018 covering Kubeflow. The links to the talks are here; the associated videos will be posted in the coming days. diff --git a/content/en/blog/_posts/2019-02-11-runc-CVE-2019-5736.md b/content/en/blog/_posts/2019-02-11-runc-CVE-2019-5736.md new file mode 100644 index 0000000000000..84482daf797cf --- /dev/null +++ b/content/en/blog/_posts/2019-02-11-runc-CVE-2019-5736.md @@ -0,0 +1,96 @@ +--- +title: Runc and CVE-2019-5736 +date: 2019-02-11 +--- + +This morning [a container escape vulnerability in runc was announced](https://www.openwall.com/lists/oss-security/2019/02/11/2). We wanted to provide some guidance to Kubernetes users to ensure everyone is safe and secure. + +## What Is Runc? + +Very briefly, runc is the low-level tool which does the heavy lifting of spawning a Linux container. Other tools like Docker, Containerd, and CRI-O sit on top of runc to deal with things like data formatting and serialization, but runc is at the heart of all of these systems. + +Kubernetes in turn sits on top of those tools, and so while no part of Kubernetes itself is vulnerable, most Kubernetes installations are using runc under the hood. + +### What Is The Vulnerability? + +While full details are still embargoed to give people time to patch, the rough version is that when running a process as root (UID 0) inside a container, that process can exploit a bug in runc to gain root privileges on the host running the container. This then allows them unlimited access to the server as well as any other containers on that server. + +If the process inside the container is either trusted (something you know is not hostile) or is not running as UID 0, then the vulnerability does not apply. It can also be prevented by SELinux, if an appropriate policy has been applied. RedHat Enterprise Linux and CentOS both include appropriate SELinux permissions with their packages and so are believed to be unaffected if SELinux is enabled. + +The most common source of risk is attacker-controller container images, such as unvetted images from public repositories. + +### What Should I Do? + +As with all security issues, the two main options are to mitigate the vulnerability or upgrade your version of runc to one that includes the fix. + +As the exploit requires UID 0 within the container, a direct mitigation is to ensure all your containers are running as a non-0 user. 
This can be set within the container image, or via your pod specification: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: run-as-uid-1000 +spec: + securityContext: + runAsUser: 1000 + # ... +``` + +This can also be enforced globally using a PodSecurityPolicy: + +```yaml +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: non-root +spec: + privileged: false + allowPrivilegeEscalation: false + runAsUser: + # Require the container to run without root privileges. + rule: 'MustRunAsNonRoot' +``` + +Setting a policy like this is highly encouraged given the overall risks of running as UID 0 inside a container. + +Another potential mitigation is to ensure all your container images are vetted and trusted. This can be accomplished by building all your images yourself, or by vetting the contents of an image and then pinning to the image version hash (`image: external/someimage@sha256:7832659873hacdef`). + +Upgrading runc can generally be accomplished by upgrading the package `runc` for your distribution or by upgrading your OS image if using immutable images. This is a list of known safe versions for various distributions and platforms: + +* Ubuntu - [`runc 1.0.0~rc4+dfsg1-6ubuntu0.18.10.1`](https://people.canonical.com/~ubuntu-security/cve/2019/CVE-2019-5736.html) +* Debian - [`runc 1.0.0~rc6+dfsg1-2`](https://security-tracker.debian.org/tracker/CVE-2019-5736) +* RedHat Enterprise Linux - [`docker 1.13.1-91.git07f3374.el7`](https://access.redhat.com/security/vulnerabilities/runcescape) (if SELinux is disabled) +* Amazon Linux - [`docker 18.06.1ce-7.25.amzn1.x86_64`](https://alas.aws.amazon.com/ALAS-2019-1156.html) +* CoreOS - Stable: [`1967.5.0`](https://coreos.com/releases/#1967.5.0) / Beta: [`2023.2.0`](https://coreos.com/releases/#2023.2.0) / Alpha: [`2051.0.0`](https://coreos.com/releases/#2051.0.0) +* Kops Debian - [in progress](https://github.com/kubernetes/kops/pull/6460) (see [advisory](https://github.com/kubernetes/kops/blob/master/docs/advisories/cve_2019_5736.md) for how to address until Kops Debian is patched) +* Docker - [`18.09.2`](https://github.com/docker/docker-ce/releases/tag/v18.09.2) + +Some platforms have also posted more specific instructions: + +#### Google Container Engine (GKE) + +Google has issued a [security bulletin](https://cloud.google.com/kubernetes-engine/docs/security-bulletins#february-11-2019-runc) with more detailed information but in short, if you are using the default GKE node image then you are safe. If you are using an Ubuntu node image then you will need to mitigate or upgrade to an image with a fixed version of runc. + +#### Amazon Elastic Container Service for Kubernetes (EKS) + +Amazon has also issued a [security bulletin](https://aws.amazon.com/security/security-bulletins/AWS-2019-002/) with more detailed information. All EKS users should mitigate the issue or upgrade to a new node image. + +#### Azure Kubernetes Service (AKS) + +Microsoft has issued a [security bulletin](https://azure.microsoft.com/en-us/updates/cve-2019-5736-and-runc-vulnerability/) with detailed information on mitigating the issue. Microsoft recommends all AKS users to upgrade their cluster to mitigate the issue. + +#### Kops + +Kops has issued an [advisory](https://github.com/kubernetes/kops/blob/master/docs/advisories/cve_2019_5736.md) with detailed information on mitigating this issue. + +### Docker + +We don't have specific confirmation that Docker for Mac and Docker for Windows are vulnerable, however it seems likely. 
Docker has released a fix in [version 18.09.2](https://github.com/docker/docker-ce/releases/tag/v18.09.2) and it is recommended you upgrade to it. This also applies to other deploy systems using Docker under the hood. + +If you are unable to upgrade Docker, the Rancher team has provided backports of the fix for many older versions at [github.com/rancher/runc-cve](https://github.com/rancher/runc-cve). + +## Getting More Information + +If you have any further questions about how this vulnerability impacts Kubernetes, please join us at [discuss.kubernetes.io](https://discuss.kubernetes.io/). + +If you would like to get in contact with the [runc team](https://github.com/opencontainers/org/blob/master/README.md#communications), you can reach them on [Google Groups](https://groups.google.com/a/opencontainers.org/forum/#!forum/dev) or `#opencontainers` on Freenode IRC. diff --git a/content/en/blog/_posts/2019-02-12-building-a-kubernetes-edge-control-plane-for-envoy-v2.md b/content/en/blog/_posts/2019-02-12-building-a-kubernetes-edge-control-plane-for-envoy-v2.md new file mode 100644 index 0000000000000..7751712f0e703 --- /dev/null +++ b/content/en/blog/_posts/2019-02-12-building-a-kubernetes-edge-control-plane-for-envoy-v2.md @@ -0,0 +1,107 @@ +--- +title: Building a Kubernetes Edge (Ingress) Control Plane for Envoy v2 +date: 2019-02-12 +slug: building-a-kubernetes-edge-control-plane-for-envoy-v2 +--- + + +**Author:** +Daniel Bryant, Product Architect, Datawire; +Flynn, Ambassador Lead Developer, Datawire; +Richard Li, CEO and Co-founder, Datawire + + +Kubernetes has become the de facto runtime for container-based microservice applications, but this orchestration framework alone does not provide all of the infrastructure necessary for running a distributed system. Microservices typically communicate through Layer 7 protocols such as HTTP, gRPC, or WebSockets, and therefore having the ability to make routing decisions, manipulate protocol metadata, and observe at this layer is vital. However, traditional load balancers and edge proxies have predominantly focused on L3/4 traffic. This is where the [Envoy Proxy](https://www.envoyproxy.io/) comes into play. + +Envoy proxy was designed as a [universal data plane](https://blog.envoyproxy.io/the-universal-data-plane-api-d15cec7a) from the ground-up by the Lyft Engineering team for today's distributed, L7-centric world, with broad support for L7 protocols, a real-time API for managing its configuration, first-class observability, and high performance within a small memory footprint. However, Envoy's vast feature set and flexibility of operation also makes its configuration highly complicated -- this is evident from looking at its rich but verbose [control plane](https://blog.envoyproxy.io/service-mesh-data-plane-vs-control-plane-2774e720f7fc) syntax. + +With the open source [Ambassador API Gateway](https://www.getambassador.io), we wanted to tackle the challenge of creating a new control plane that focuses on the use case of deploying Envoy as an forward-facing edge proxy within a Kubernetes cluster, in a way that is idiomatic to Kubernetes operators. In this article, we'll walk through two major iterations of the Ambassador design, and how we integrated Ambassador with Kubernetes. 
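To give a concrete sense of the verbosity referred to above, a minimal hand-written static Envoy v2 configuration that routes a single URL prefix to one upstream cluster looks roughly like the sketch below (illustrative only; the listener, route, and cluster names are placeholders, and a real Ambassador-generated configuration is considerably larger):

```yaml
# One HTTP listener that routes /my-service/ to a single upstream cluster.
static_resources:
  listeners:
  - name: ingress_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/my-service/" }
                route: { cluster: my_service }
          http_filters:
          - name: envoy.router
  clusters:
  - name: my_service
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    hosts:
    - socket_address: { address: my-service, port_value: 80 }
```

Ambassador's job, in effect, is to generate and manage configuration of this shape from the much smaller annotation shown in the next section.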
+ + +## Ambassador pre-2019: Envoy v1 APIs, Jinja Template Files, and Hot Restarts + +Ambassador itself is deployed within a container as a Kubernetes service, and uses annotations added to Kubernetes Services as its [core configuration model](https://www.getambassador.io/reference/configuration). This approach [enables application developers to manage routing](https://www.getambassador.io/concepts/developers) as part of the Kubernetes service definition. We explicitly decided to go down this route because of [limitations](https://blog.getambassador.io/kubernetes-ingress-nodeport-load-balancers-and-ingress-controllers-6e29f1c44f2d) in the current [Ingress API spec](https://kubernetes.io/docs/concepts/services-networking/ingress/), and we liked the simplicity of extending Kubernetes services, rather than introducing another custom resource type. An example of an Ambassador annotation can be seen here: + + +``` +kind: Service +apiVersion: v1 +metadata: + name: my-service + annotations: + getambassador.io/config: | + --- + apiVersion: ambassador/v0 + kind: Mapping + name: my_service_mapping + prefix: /my-service/ + service: my-service +spec: + selector: + app: MyApp + ports: + - protocol: TCP + port: 80 + targetPort: 9376 +``` + + +Translating this simple Ambassador annotation config into valid [Envoy v1](https://www.envoyproxy.io/docs/envoy/v1.6.0/configuration/overview/v1_overview) config was not a trivial task. By design, Ambassador's configuration isn't based on the same conceptual model as Envoy's configuration -- we deliberately wanted to aggregate and simplify operations and config. Therefore, translating between one set of concepts to the other involves a fair amount of logic within Ambassador. + +In this first iteration of Ambassador we created a Python-based service that watched the Kubernetes API for changes to Service objects. When new or updated Ambassador annotations were detected, these were translated from the Ambassador syntax into an intermediate representation (IR) which embodied our core configuration model and concepts. Next, Ambassador translated this IR into a representative Envoy configuration which was saved as a file within pods associated with the running Ambassador k8s Service. Ambassador then "hot-restarted" the Envoy process running within the Ambassador pods, which triggered the loading of the new configuration. + +There were many benefits with this initial implementation. The mechanics involved were fundamentally simple, the transformation of Ambassador config into Envoy config was reliable, and the file-based hot restart integration with Envoy was dependable. + +However, there were also notable challenges with this version of Ambassador. First, although the hot restart was effective for the majority of our customers' use cases, it was not very fast, and some customers (particularly those with huge application deployments) found it was limiting the frequency with which they could change their configuration. Hot restart can also drop connections, especially long-lived connections like WebSockets or gRPC streams. + +More crucially, though, the first implementation of the IR allowed rapid prototyping but was primitive enough that it proved very difficult to make substantial changes. While this was a pain point from the beginning, it became a critical issue as Envoy shifted to the [Envoy v2 API](https://www.envoyproxy.io/docs/envoy/latest/configuration/overview/v2_overview). 
It was clear that the v2 API would offer Ambassador many benefits -- as Matt Klein outlined in his blog post, "[The universal data plane API](https://blog.envoyproxy.io/the-universal-data-plane-api-d15cec7a)" -- including access to new features and a solution to the connection-drop problem noted above, but it was also clear that the existing IR implementation was not capable of making the leap. + + +## Ambassador >= v0.50: Envoy v2 APIs (ADS), Testing with KAT, and Golang + +In consultation with the [Ambassador community](http://d6e.co/slack), the [Datawire](www.datawire.io) team undertook a redesign of the internals of Ambassador in 2018. This was driven by two key goals. First, we wanted to integrate Envoy's v2 configuration format, which would enable the support of features such as [SNI](https://www.getambassador.io/user-guide/sni/), [rate limiting](https://www.getambassador.io/user-guide/rate-limiting) and [gRPC authentication APIs](https://www.getambassador.io/user-guide/auth-tutorial). Second, we also wanted to do much more robust semantic validation of Envoy configuration due to its increasing complexity (particularly when operating with large-scale application deployments). + + +### Initial stages + +We started by restructuring the Ambassador internals more along the lines of a multipass compiler. The class hierarchy was made to more closely mirror the separation of concerns between the Ambassador configuration resources, the IR, and the Envoy configuration resources. Core parts of Ambassador were also redesigned to facilitate contributions from the community outside Datawire. We decided to take this approach for several reasons. First, Envoy Proxy is a very fast moving project, and we realized that we needed an approach where a seemingly minor Envoy configuration change didn't result in days of reengineering within Ambassador. In addition, we wanted to be able to provide semantic verification of configuration. + +As we started working more closely with Envoy v2, a testing challenge was quickly identified. As more and more features were being supported in Ambassador, more and more bugs appeared in Ambassador's handling of less common but completely valid combinations of features. This drove to creation of a new testing requirement that meant Ambassador's test suite needed to be reworked to automatically manage many combinations of features, rather than relying on humans to write each test individually. Moreover, we wanted the test suite to be fast in order to maximize engineering productivity. + +Thus, as part of the Ambassador rearchitecture, we introduced the [Kubernetes Acceptance Test (KAT)](https://github.com/datawire/ambassador/tree/master/kat) framework. KAT is an extensible test framework that: + + + +1. Deploys a bunch of services (along with Ambassador) to a Kubernetes cluster +1. Run a series of verification queries against the spun up APIs +1. Perform a bunch of assertions on those query results + +KAT is designed for performance -- it batches test setup upfront, and then runs all the queries in step 3 asynchronously with a high performance client. The traffic driver in KAT runs locally using [Telepresence](https://www.telepresence.io), which makes it easier to debug issues. + +### Introducing Golang to the Ambassador Stack + +With the KAT test framework in place, we quickly ran into some issues with Envoy v2 configuration and hot restart, which presented the opportunity to switch to use Envoy’s Aggregated Discovery Service (ADS) APIs instead of hot restart. 
This completely eliminated the requirement for a restart on configuration changes, which we found could lead to dropped connections under high loads or with long-lived connections.
+
+However, we faced an interesting question as we considered the move to the ADS. The ADS is not as simple as one might expect: there are explicit ordering dependencies when sending updates to Envoy. The Envoy project has reference implementations of the ordering logic, but only in Go and Java, where Ambassador was primarily in Python. We agonized a bit, and decided that the simplest way forward was to accept the polyglot nature of our world, and do our ADS implementation in Go.
+
+We also found, with KAT, that our testing had reached the point where Python’s performance with many network connections was a limitation, so we took advantage of Go here as well, writing KAT’s querying and backend services primarily in Go. After all, what’s another Golang dependency when you’ve already taken the plunge?
+
+With a new test framework, a new IR generating valid Envoy v2 configuration, and the ADS, we thought we were done with the major architectural changes in Ambassador 0.50. Alas, we hit one more issue. On the Azure Kubernetes Service, Ambassador annotation changes were no longer being detected.
+
+Working with the highly responsive AKS engineering team, we were able to identify the issue -- namely, the Kubernetes API server in AKS is exposed through a chain of proxies, requiring clients to be updated to understand how to connect using the FQDN of the API server, which is provided through a mutating webhook in AKS. Unfortunately, support for this feature was not available in the official Kubernetes Python client, so this was the third spot where we chose to switch to Go instead of Python.
+
+This raises the interesting question of, “why not ditch all the Python code, and just rewrite Ambassador entirely in Go?” It’s a valid question. The main concern with a rewrite is that Ambassador and Envoy operate at different conceptual levels rather than simply expressing the same concepts with different syntax. Being certain that we’ve expressed the conceptual bridges in a new language is not a trivial challenge, and not something to undertake without already having really excellent test coverage in place.
+
+At this point, we use Go to cover very specific, well-contained functions that can be verified for correctness much more easily than we could verify a complete Golang rewrite. In the future, who knows? But for 0.50.0, this functional split let us both take advantage of Golang’s strengths and retain more confidence about all the changes already in 0.50.
+
+## Lessons Learned
+
+We've learned a lot in the process of building [Ambassador 0.50](https://blog.getambassador.io/ambassador-0-50-ga-release-notes-sni-new-authservice-and-envoy-v2-support-3b30a4d04c81). Some of our key takeaways:
+
+* Kubernetes and Envoy are very powerful frameworks, but they are also extremely fast-moving targets -- there is sometimes no substitute for reading the source code and talking to the maintainers (who are fortunately all quite accessible!).
+* The best-supported libraries in the Kubernetes / Envoy ecosystem are written in Go. While we love Python, we have had to adopt Go so that we're not forced to maintain too many components ourselves.
+* Redesigning a test harness is sometimes necessary to move your software forward.
+* The real cost in redesigning a test harness is often in porting your old tests to the new harness implementation. +* Designing (and implementing) an effective control plane for the edge proxy use case has been challenging, and the feedback from the open source community around Kubernetes, Envoy and Ambassador has been extremely useful. + +Migrating Ambassador to the Envoy v2 configuration and ADS APIs was a long and difficult journey that required lots of architecture and design discussions and plenty of coding, but early feedback from results have been positive. [Ambassador 0.50 is available now](https://blog.getambassador.io/announcing-ambassador-0-50-8dffab5b05e0), so you can take it for a test run and share your feedback with the community on our [Slack channel](http://d6e.co/slack) or on [Twitter](https://www.twitter.com/getambassadorio). diff --git a/content/en/blog/_posts/2019-02-28-automate-operations-on-your-cluster-with-operatorhub.md b/content/en/blog/_posts/2019-02-28-automate-operations-on-your-cluster-with-operatorhub.md new file mode 100644 index 0000000000000..a8f91716e72b5 --- /dev/null +++ b/content/en/blog/_posts/2019-02-28-automate-operations-on-your-cluster-with-operatorhub.md @@ -0,0 +1,67 @@ +--- +title: Automate Operations on your Cluster with OperatorHub.io +date: 2019-02-28 +--- + +**Author:** +Diane Mueller, Director of Community Development, Cloud Platforms, Red Hat + +One of the important challenges facing developers and Kubernetes administrators has been a lack of ability to quickly find common services that are operationally ready for Kubernetes. Typically, the presence of an Operator for a specific service - a pattern that was introduced in 2016 and has gained momentum - is a good signal for the operational readiness of the service on Kubernetes. However, there has to date not existed a registry of Operators to simplify the discovery of such services. + +To help address this challenge, today Red Hat is launching OperatorHub.io in collaboration with AWS, Google Cloud and Microsoft. OperatorHub.io enables developers and Kubernetes administrators to find and install curated Operator-backed services with a base level of documentation, active maintainership by communities or vendors, basic testing, and packaging for optimized life-cycle management on Kubernetes. + +The Operators currently in OperatorHub.io are just the start. We invite the Kubernetes community to join us in building a vibrant community for Operators by developing, packaging, and publishing Operators on OperatorHub.io. + +## What does OperatorHub.io provide? + +OperatorHub.io is designed to address the needs of both Kubernetes developers and users. For the former it provides a common registry where they can publish their Operators alongside with descriptions, relevant details like version, image, code repository and have them be readily packaged for installation. They can also update already published Operators to new versions when they are released. + + +Users get the ability to discover and download Operators at a central location, that has content which has been screened for the previously mentioned criteria and scanned for known vulnerabilities. In addition, developers can guide users of their Operators with prescriptive examples of the `CustomResources` that they introduce to interact with the application. + +## What is an Operator? 
+ +Operators were first introduced in 2016 by CoreOS and have been used by Red Hat and the Kubernetes community as a way to package, deploy and manage a Kubernetes-native application. A Kubernetes-native application is an application that is both deployed on Kubernetes and managed using the Kubernetes APIs and well-known tooling, like kubectl. + +An Operator is implemented as a custom controller that watches for certain Kubernetes resources to appear, be modified or deleted. These are typically `CustomResourceDefinitions` that the Operator “owns.” In the spec properties of these objects the user declares the desired state of the application or the operation. The Operator’s reconciliation loop will pick these up and perform the required actions to achieve the desired state. For example, the intent to create a highly available etcd cluster could be expressed by creating an new resource of type `EtcdCluster`: + +``` +apiVersion: "etcd.database.coreos.com/v1beta2" +kind: "EtcdCluster" +metadata: + name: "my-etcd-cluster" +spec: + size: 3 + version: "3.3.12" +``` + +The `EtcdOperator` would be responsible for creating a 3-node etcd cluster running version v3.3.12 as a result. Similarly, an object of type `EtcdBackup` could be defined to express the intent to create a consistent backup of the etcd database to an S3 bucket. + +## How do I create and run an Operator? + +One way to get started is with the [Operator Framework](https://github.com/operator-framework), an open source toolkit that provides an SDK, lifecycle management, metering and monitoring capabilities. It enables developers to build, test, and package Operators. Operators can be implemented in several programming and automation languages, including Go, Helm, and Ansible, all three of which are supported directly by the SDK. + +If you are interested in creating your own Operator, we recommend checking out the Operator Framework to [get started](https://github.com/operator-framework/getting-started). + +Operators vary in where they fall along [the capability spectrum](https://github.com/operator-framework/operator-sdk/blob/master/doc/images/operator-maturity-model.png) ranging from basic functionality to having specific operational logic for an application to automate advanced scenarios like backup, restore or tuning. Beyond basic installation, advanced Operators are designed to handle upgrades more seamlessly and react to failures automatically. Currently, Operators on OperatorHub.io span the maturity spectrum, but we anticipate their continuing maturation over time. + +While Operators on OperatorHub.io don’t need to be implemented using the SDK, they are packaged for deployment through the [Operator Lifecycle Manager](https://github.com/operator-framework/operator-lifecycle-manager) (OLM). The format mainly consists of a YAML manifest referred to as `[ClusterServiceVersion]`(https://github.com/operator-framework/operator-lifecycle-manager/blob/master/Documentation/design/building-your-csv.md) which provides information about the `CustomResourceDefinitions` the Operator owns or requires, which RBAC definition it needs, where the image is stored, etc. This file is usually accompanied by additional YAML files which define the Operators’ own CRDs. This information is processed by OLM at the time a user requests to install an Operator to provide dependency resolution and automation. + +## What does listing of an Operator on OperatorHub.io mean? 
+ +To be listed, Operators must successfully show cluster lifecycle features, be packaged as a CSV to be maintained through OLM, and have acceptable documentation for its intended users. + +Some examples of Operators that are currently listed on OperatorHub.io include: Amazon Web Services Operator, Couchbase Autonomous Operator, CrunchyData’s PostgreSQL, etcd Operator, Jaeger Operator for Kubernetes, Kubernetes Federation Operator, MongoDB Enterprise Operator, Percona MySQL Operator, PlanetScale’s Vitess Operator, Prometheus Operator, and Redis Operator. + +## Want to add your Operator to OperatorHub.io? Follow these steps + +If you have an existing Operator, follow the [contribution guide](https://www.operatorhub.io/contribute) using a fork of the [community-operators](https://github.com/operator-framework/community-operators/) repository. Each contribution contains the CSV, all of the `CustomResourceDefinitions`, access control rules and references to the container image needed to install and run your Operator, plus other info like a description of its features and supported Kubernetes versions. A complete example, including multiple versions of the Operator, can be found with the [EtcdOperator](https://github.com/operator-framework/community-operators/tree/master/community-operators/etcd). + +After testing out your Operator on your own cluster, submit a PR to the [community repository](https://github.com/operator-framework/community-operators) with all of YAML files following [this directory structure](https://github.com/operator-framework/community-operators#adding-your-operator). Subsequent versions of the Operator can be published in the same way. At first this will be reviewed manually, but automation is on the way. After it’s merged by the maintainers, it will show up on OperatorHub.io along with its documentation and a convenient installation method. + +## Want to learn more? + +- Attend one of the upcoming Kubernetes Operator Framework hands-on workshops at [ScaleX](https://www.socallinuxexpo.org/scale/17x/presentations/workshop-kubernetes-operator-framework) in Pasadena on March 7 and at the [OpenShift Commons Gathering on Operating at Scale in Santa Clara on March 11](https://commons.openshift.org/gatherings/Santa_Clara_2019.html) +- Listen to this [OpenShift Commons Briefing on “The State of Operators” with Daniel Messer and Diane Mueller](https://www.youtube.com/watch?v=GgEKEYH9MMM&feature=youtu.be) +- Join in on the online conversations in the community [Kubernetes-Operator Slack Channel](https://kubernetes.slack.com/messages/CAW0GV7A5) and the [Operator Framework Google Group](https://groups.google.com/forum/#!forum/operator-framework) +- Finally, read up on how to add your Operator to OperatorHub.io: https://operatorhub.io/contribute diff --git a/content/en/blog/_posts/2019-03-07-raw-block-volume-support-to-beta.md b/content/en/blog/_posts/2019-03-07-raw-block-volume-support-to-beta.md new file mode 100644 index 0000000000000..fc08eadf833f2 --- /dev/null +++ b/content/en/blog/_posts/2019-03-07-raw-block-volume-support-to-beta.md @@ -0,0 +1,131 @@ +--- +title: Raw Block Volume support to Beta +date: 2019-03-07 +--- + +**Authors:** +Ben Swartzlander (NetApp), Saad Ali (Google) + +Kubernetes v1.13 moves raw block volume support to beta. This feature allows persistent volumes to be exposed inside containers as a block device instead of as a mounted file system. + +## What are block devices? + +Block devices enable random access to data in fixed-size blocks. 
Hard drives, SSDs, and CD-ROM drives are all examples of block devices.
+
+Typically persistent storage is implemented in a layered manner, with a file system (like ext4) on top of a block device (like a spinning disk or SSD). Applications then read and write files instead of operating on blocks. The operating system takes care of reading and writing files, using the specified filesystem, to the underlying device as blocks.
+
+It's worth noting that while whole disks are block devices, so are disk partitions, and so are LUNs from a storage area network (SAN) device.
+
+## Why add raw block volumes to Kubernetes?
+
+There are some specialized applications that require direct access to a block device because, for example, the file system layer introduces unneeded overhead. The most common case is databases, which prefer to organize their data directly on the underlying storage. Raw block devices are also commonly used by any software which itself implements some kind of storage service (software-defined storage systems).
+
+From a programmer's perspective, a block device is a very large array of bytes, usually with some minimum granularity for reads and writes, often 512 bytes, but frequently 4K or larger.
+
+As it becomes more common to run database software and storage infrastructure software inside of Kubernetes, the need for raw block device support in Kubernetes becomes more important.
+
+## Which volume plugins support raw blocks?
+
+As of the publishing of this blog, the following in-tree volume types support raw blocks:
+
+- AWS EBS
+- Azure Disk
+- Cinder
+- Fibre Channel
+- GCE PD
+- iSCSI
+- Local volumes
+- RBD (Ceph)
+- vSphere
+
+Out-of-tree [CSI volume drivers](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/) may also support raw block volumes. Kubernetes CSI support for raw block volumes is currently alpha. See the documentation [here](https://kubernetes-csi.github.io/docs/raw-block.html).
+
+## Kubernetes raw block volume API
+
+Raw block volumes share a lot in common with ordinary volumes. Both are requested by creating `PersistentVolumeClaim` objects which bind to `PersistentVolume` objects, and are attached to Pods in Kubernetes by including them in the volumes array of the `PodSpec`.
+
+There are two important differences, however. First, to request a raw block `PersistentVolumeClaim`, you must set `volumeMode = "Block"` in the `PersistentVolumeClaimSpec`. Leaving `volumeMode` blank is the same as specifying `volumeMode = "Filesystem"`, which results in the traditional behavior. `PersistentVolumes` also have a `volumeMode` field in their `PersistentVolumeSpec`, and `"Block"` type PVCs can only bind to `"Block"` type PVs, while `"Filesystem"` PVCs can only bind to `"Filesystem"` PVs.
+
+Secondly, when using a raw block volume in your Pods, you must specify a `VolumeDevice` in the Container portion of the `PodSpec` rather than a `VolumeMount`. `VolumeDevices` have `devicePaths` instead of `mountPaths`, and inside the container, applications will see a device at that path instead of a mounted file system.
+
+Applications open, read, and write to the device node inside the container just like they would interact with any block device on a system in a non-containerized or virtualized context.
+
+## Creating a new raw block PVC
+
+First, ensure that the provisioner associated with the storage class you choose is one that supports raw blocks. Then create the PVC.
+
+```
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-pvc
+spec:
+  accessModes:
+    - ReadWriteMany
+  volumeMode: Block
+  storageClassName: my-sc
+  resources:
+    requests:
+      storage: 1Gi
+```
+
+## Using a raw block PVC
+
+When you use the PVC in a pod definition, you get to choose the device path for the block device rather than the mount path for the file system.
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: my-pod
+spec:
+  containers:
+    - name: my-container
+      image: busybox
+      command:
+        - sleep
+        - "3600"
+      volumeDevices:
+        - devicePath: /dev/block
+          name: my-volume
+      imagePullPolicy: IfNotPresent
+  volumes:
+    - name: my-volume
+      persistentVolumeClaim:
+        claimName: my-pvc
+```
+
+## As a storage vendor, how do I add support for raw block devices to my CSI plugin?
+
+Raw block support for CSI plugins is still alpha, but support can be added today. The [CSI specification](https://github.com/container-storage-interface/spec/blob/master/spec.md) details how to handle requests for volumes that have the `BlockVolume` capability instead of the `MountVolume` capability. CSI plugins can support both kinds of volumes, or one or the other. For more details see [documentation here](https://kubernetes-csi.github.io/docs/raw-block.html).
+
+## Issues/gotchas
+
+Because block devices are actually devices, it’s possible to do low-level actions on them from inside containers that wouldn’t be possible with file system volumes. For example, block devices that are actually SCSI disks support sending SCSI commands to the device using Linux ioctls.
+
+By default, though, Linux won’t allow containers to send SCSI commands to disks. In order to do so, you must grant the `SYS_RAWIO` capability in the container security context. See documentation [here](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-capabilities-for-a-container).
+
+Also, while Kubernetes is guaranteed to deliver a block device to the container, there’s no guarantee that it’s actually a SCSI disk or any other kind of disk for that matter. The user must either ensure that the desired disk type is used with their pods, or only deploy applications that can handle a variety of block device types.
+
+## How can I learn more?
+
+Check out additional documentation on raw block volume support here: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#raw-block-volume-support
+
+## How do I get involved?
+
+Join the Kubernetes storage SIG and the CSI community and help us add more great features and improve existing ones like raw block storage!

https://github.com/kubernetes/community/tree/master/sig-storage
https://github.com/container-storage-interface/community/blob/master/README.md

Special thanks to all the contributors who helped add block volume support to Kubernetes, including:

- Ben Swartzlander (https://github.com/bswartz)
- Brad Childs (https://github.com/childsb)
- Erin Boyd (https://github.com/erinboyd)
- Masaki Kimura (https://github.com/mkimuram)
- Matthew Wong (https://github.com/wongma7)
- Michelle Au (https://github.com/msau42)
- Mitsuhiro Tanino (https://github.com/mtanino)
- Saad Ali (https://github.com/saad-ali)
diff --git a/content/en/blog/_posts/Kubernetes-setup-using-Ansible-and-Vagrant.md b/content/en/blog/_posts/Kubernetes-setup-using-Ansible-and-Vagrant.md
new file mode 100644
index 0000000000000..0ea0892d783e1
--- /dev/null
+++ b/content/en/blog/_posts/Kubernetes-setup-using-Ansible-and-Vagrant.md
@@ -0,0 +1,253 @@
---
layout: blog
title: Kubernetes Setup Using Ansible and Vagrant
date: 2019-03-15
---

**Author:** Naresh L J (Infosys)

## Objective
This blog post describes the steps required to set up a multi-node Kubernetes cluster for development purposes. This setup provides a production-like cluster that can be set up on your local machine.

## Why do we require a multi-node cluster setup?
Multi-node Kubernetes clusters offer a production-like environment, which has various advantages. Even though Minikube provides an excellent platform for getting started, it doesn't provide the opportunity to work with multi-node clusters, which can help solve problems or bugs that are related to application design and architecture. For instance, Ops can reproduce an issue in a multi-node cluster environment, and Testers can deploy multiple versions of an application for executing test cases and verifying changes. These benefits enable teams to resolve issues faster, which makes them more agile.

## Why use Vagrant and Ansible?
Vagrant is a tool that will allow us to create a virtual environment easily, and it eliminates pitfalls that cause the works-on-my-machine phenomenon. It can be used with multiple providers such as Oracle VirtualBox, VMware, Docker, and so on. It allows us to create a disposable environment by making use of configuration files.

Ansible is an infrastructure automation engine that automates software configuration management. It is agentless and allows us to use SSH keys for connecting to remote machines. Ansible playbooks are written in YAML and offer inventory management in simple text files.

### Prerequisites
- Vagrant should be installed on your machine. Installation binaries can be found [here](https://www.vagrantup.com/downloads.html).
- Oracle VirtualBox can be used as a Vagrant provider, or you can make use of similar providers as described in Vagrant's official [documentation](https://www.vagrantup.com/docs/providers/).
- Ansible should be installed on your machine. Refer to the [Ansible installation guide](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) for platform-specific installation.

## Setup overview
We will be setting up a Kubernetes cluster that will consist of one master and two worker nodes. All the nodes will run the Ubuntu Xenial 64-bit OS, and Ansible playbooks will be used for provisioning.

### Step 1: Creating a Vagrantfile
Use the text editor of your choice and create a file named `Vagrantfile`, inserting the code below.
The value of N denotes the number of nodes present in the cluster; it can be modified accordingly. In the below example, we are setting the value of N as 2.

```ruby
IMAGE_NAME = "bento/ubuntu-16.04"
N = 2

Vagrant.configure("2") do |config|
    config.ssh.insert_key = false

    config.vm.provider "virtualbox" do |v|
        v.memory = 1024
        v.cpus = 2
    end

    config.vm.define "k8s-master" do |master|
        master.vm.box = IMAGE_NAME
        master.vm.network "private_network", ip: "192.168.50.10"
        master.vm.hostname = "k8s-master"
        master.vm.provision "ansible" do |ansible|
            ansible.playbook = "kubernetes-setup/master-playbook.yml"
        end
    end

    (1..N).each do |i|
        config.vm.define "node-#{i}" do |node|
            node.vm.box = IMAGE_NAME
            node.vm.network "private_network", ip: "192.168.50.#{i + 10}"
            node.vm.hostname = "node-#{i}"
            node.vm.provision "ansible" do |ansible|
                ansible.playbook = "kubernetes-setup/node-playbook.yml"
            end
        end
    end
end
```

### Step 2: Create an Ansible playbook for the Kubernetes master.
Create a directory named `kubernetes-setup` in the same directory as the `Vagrantfile`. Create two files named `master-playbook.yml` and `node-playbook.yml` in the directory `kubernetes-setup`.

In the file `master-playbook.yml`, add the code below.

#### Step 2.1: Install Docker and its dependent components.

We will be installing the following packages, and then adding a user named "vagrant" to the "docker" group.
- docker-ce
- docker-ce-cli
- containerd.io

```yaml
---
- hosts: all
  become: true
  tasks:
  - name: Install packages that allow apt to be used over HTTPS
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg-agent
      - software-properties-common

  - name: Add an apt signing key for Docker
    apt_key:
      url: https://download.docker.com/linux/ubuntu/gpg
      state: present

  - name: Add apt repository for stable version
    apt_repository:
      repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
      state: present

  - name: Install docker and its dependencies
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - docker-ce
      - docker-ce-cli
      - containerd.io
    notify:
      - docker status

  - name: Add vagrant user to docker group
    user:
      name: vagrant
      group: docker
```

#### Step 2.2: Kubelet will not start if the system has swap enabled, so we are disabling swap using the below code.

```yaml
  - name: Remove swapfile from /etc/fstab
    mount:
      name: "{{ item }}"
      fstype: swap
      state: absent
    with_items:
      - swap
      - none

  - name: Disable swap
    command: swapoff -a
    when: ansible_swaptotal_mb > 0
```

#### Step 2.3: Installing kubelet, kubeadm and kubectl using the below code.

```yaml
  - name: Add an apt signing key for Kubernetes
    apt_key:
      url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
      state: present

  - name: Adding apt repository for Kubernetes
    apt_repository:
      repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
      state: present
      filename: kubernetes.list

  - name: Install Kubernetes binaries
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
        - kubelet
        - kubeadm
        - kubectl
```

#### Step 2.4: Initialize the Kubernetes cluster with kubeadm using the below code (applicable only on the master node).

```yaml
  - name: Initialize the Kubernetes cluster using kubeadm
    command: kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --node-name k8s-master --pod-network-cidr=192.168.0.0/16
```

#### Step 2.5: Setup the kube config file for the vagrant user to access the Kubernetes cluster using the below code.

```yaml
  - name: Setup kubeconfig for vagrant user
    command: "{{ item }}"
    with_items:
     - mkdir -p /home/vagrant/.kube
     - cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
     - chown vagrant:vagrant /home/vagrant/.kube/config
```

#### Step 2.6: Setup the container networking provider and the network policy engine using the below code.

```yaml
  - name: Install calico pod network
    become: false
    command: kubectl create -f https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/calico.yaml
```

#### Step 2.7: Generate the kube join command for joining the nodes to the Kubernetes cluster and store the command in the file named `join-command`.

```yaml
  - name: Generate join command
    command: kubeadm token create --print-join-command
    register: join_command

  - name: Copy join command to local file
    local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="./join-command"
```

#### Step 2.8: Setup a handler for checking the Docker daemon using the below code.

```yaml
  handlers:
    - name: docker status
      service: name=docker state=started
```

### Step 3: Create the Ansible playbook for the Kubernetes node.
Create a file named `node-playbook.yml` in the directory `kubernetes-setup`.

Add the code below into `node-playbook.yml`.

#### Step 3.1: Start adding the code from Steps 2.1 till 2.3.

#### Step 3.2: Join the nodes to the Kubernetes cluster using the below code.

```yaml
  - name: Copy the join command to server location
    copy: src=join-command dest=/tmp/join-command.sh mode=0777

  - name: Join the node to cluster
    command: sh /tmp/join-command.sh
```

#### Step 3.3: Add the code from Step 2.8 to finish this playbook.

### Step 4: Upon completing the Vagrantfile and playbooks, follow the below steps.

```shell
$ cd /path/to/Vagrantfile
$ vagrant up
```

Upon completion of all the above steps, the Kubernetes cluster should be up and running.
We can log in to the master or worker nodes using Vagrant as follows:

```shell
$ ## Accessing master
$ vagrant ssh k8s-master
vagrant@k8s-master:~$ kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   18m     v1.13.3
node-1       Ready    <none>   12m     v1.13.3
node-2       Ready    <none>   6m22s   v1.13.3

$ ## Accessing nodes
$ vagrant ssh node-1
$ vagrant ssh node-2
```
diff --git a/content/en/case-studies/netease/index.html b/content/en/case-studies/netease/index.html
new file mode 100644
index 0000000000000..4b699a1fcd5da
--- /dev/null
+++ b/content/en/case-studies/netease/index.html
@@ -0,0 +1,86 @@
---
title: NetEase Case Study
case_study_styles: true
cid: caseStudies
css: /css/style_case_studies.css
---

+

CASE STUDY:
How NetEase Leverages Kubernetes to Support Internet Business Worldwide

+ +
+ +
+ Company  NetEase     Location  Hangzhou, China     Industry  Internet technology +
+ +
+
+
+
+

Challenge

+ Its gaming business is one of the largest in the world, but that’s not all that NetEase provides to Chinese consumers. The company also operates e-commerce, advertising, music streaming, online education, and email platforms; the last of which serves almost a billion users with free email services through sites like 163.com. In 2015, the NetEase Cloud team providing the infrastructure for all of these systems realized that their R&D process was slowing down developers. “Our users needed to prepare all of the infrastructure by themselves,” says Feng Changjian, Architect for NetEase Cloud and Container Service. “We were eager to provide the infrastructure and tools for our users automatically via serverless container service.” +

+

Solution

+ After considering building its own orchestration solution, NetEase decided to base its private cloud platform on Kubernetes. The fact that the technology came out of Google gave the team confidence that it could keep up with NetEase’s scale. “After our 2-to-3-month evaluation, we believed it could satisfy our needs,” says Feng. The team started working with Kubernetes in 2015, before it was even 1.0. Today, the NetEase internal cloud platform—which also leverages the CNCF projects Prometheus, Envoy, Harbor, gRPC, and Helm—runs 10,000 nodes in a production cluster and can support up to 30,000 nodes in a cluster. Based on its learnings from its internal platform, the company introduced a Kubernetes-based cloud and microservices-oriented PaaS product, NetEase Qingzhou Microservice, to outside customers. + + +

+

Impact

+ The NetEase team reports that Kubernetes has increased R&D efficiency by more than 100%. Deployment efficiency has improved by 280%. “In the past, if we wanted to do upgrades, we needed to work with other teams, even in other departments,” says Feng. “We needed special staff to prepare everything, so it took about half an hour. Now we can do it in only 5 minutes.” The new platform also allows for mixed deployments using GPU and CPU resources. “Before, if we put all the resources toward the GPU, we won’t have spare resources for the CPU. But now we have improvements thanks to the mixed deployments,” he says. Those improvements have also brought an increase in resource utilization. +
+
+ +
+
+
+ "The system can support 30,000 nodes in a single cluster. In production, we have gotten the data of 10,000 nodes in a single cluster. The whole internal system is using this system for development, test, and production."

— Zeng Yuxing, Architect, NetEase
+
+
+
+
+

Its gaming business is the fifth-largest in the world, but that’s not all that NetEase provides consumers.

The company also operates e-commerce, advertising, music streaming, online education, and email platforms in China; the last of which serves almost a billion users with free email services through popular sites like 163.com and 126.com. With that kind of scale, the NetEase Cloud team providing the infrastructure for all of these systems realized in 2015 that their R&D process was making it hard for developers to keep up with demand. “Our users needed to prepare all of the infrastructure by themselves,” says Feng Changjian, Architect for NetEase Cloud and Container Service. “We were eager to provide the infrastructure and tools for our users automatically via serverless container service.”

+ After considering building its own orchestration solution, NetEase decided to base its private cloud platform on Kubernetes. The fact that the technology came out of Google gave the team confidence that it could keep up with NetEase’s scale. “After our 2-to-3-month evaluation, we believed it could satisfy our needs,” says Feng. +
+
+
+
+ "We leveraged the programmability of Kubernetes so that we can build a platform to satisfy the needs of our internal customers for upgrades and deployment." +

- Feng Changjian, Architect for NetEase Cloud and Container Service, NetEase
+
+
+
+
+ The team started adopting Kubernetes in 2015, before it was even 1.0, because it was relatively easy to use and enabled DevOps at the company. “We abandoned some of the concepts of Kubernetes; we only wanted to use the standardized framework,” says Feng. “We leveraged the programmability of Kubernetes so that we can build a platform to satisfy the needs of our internal customers for upgrades and deployment.”

+ The team first focused on building the container platform to manage resources better, and then turned their attention to improving its support of microservices by adding internal systems such as monitoring. That has meant integrating the CNCF projects Prometheus, Envoy, Harbor, gRPC, and Helm. “We are trying to provide a simplified and standardized process, so our users and customers can leverage our best practices,” says Feng.

+ And the team is continuing to make improvements. For example, the e-commerce part of the business needs to leverage mixed deployments, which in the past required using two separate platforms: the infrastructure-as-a-service platform and the Kubernetes platform. More recently, NetEase has created a cross-platform application that enables using both with one-command deployment. +
+
+
+
+ "As long as a company has a mature team and enough developers, I think Kubernetes is a very good technology that can help them." +

- Li Lanqing, Kubernetes Developer, NetEase
+
+
+ + +
+
+ Today, the NetEase internal cloud platform “can support 30,000 nodes in a single cluster,” says Architect Zeng Yuxing. “In production, we have gotten the data of 10,000 nodes in a single cluster. The whole internal system is using this system for development, test, and production.”

+ The NetEase team reports that Kubernetes has increased R&D efficiency by more than 100%. Deployment efficiency has improved by 280%. “In the past, if we wanted to do upgrades, we needed to work with other teams, even in other departments,” says Feng. “We needed special staff to prepare everything, so it took about half an hour. Now we can do it in only 5 minutes.” The new platform also allows for mixed deployments using GPU and CPU resources. “Before, if we put all the resources toward the GPU, we won’t have spare resources for the CPU. But now we have improvements thanks to the mixed deployments.” Those improvements have also brought an increase in resource utilization. + +
+ +
+
+ "By engaging with this community, we can gain some experience from it and we can also benefit from it. We can see what are the concerns and the challenges faced by the community, so we can get involved."

- Li Lanqing, Kubernetes Developer, NetEase
+ +
+
+
+ Based on the results and learnings from using its internal platform, the company introduced a Kubernetes-based cloud and microservices-oriented PaaS product, NetEase Qingzhou Microservice, to outside customers. “The idea is that we can find the problems encountered by our game and e-commerce and cloud music providers, so we can integrate their experiences and provide a platform to satisfy the needs of our users,” says Zeng.

+ With or without the use of the NetEase product, the team encourages other companies to try Kubernetes. “As long as a company has a mature team and enough developers, I think Kubernetes is a very good technology that can help them,” says Kubernetes developer Li Lanqing.

+ As an end user as well as a vendor, NetEase has become more involved in the community, learning from other companies and sharing what they’ve done. The team has been contributing to the Harbor and Envoy projects, providing feedback as the technologies are being tested at NetEase scale. “We are a team focusing on addressing the challenges of microservices architecture,” says Feng. “By engaging with this community, we can gain some experience from it and we can also benefit from it. We can see what are the concerns and the challenges faced by the community, so we can get involved.” +
+
diff --git a/content/en/case-studies/netease/netease_featured_logo.png b/content/en/case-studies/netease/netease_featured_logo.png new file mode 100644 index 0000000000000..5700b940f34af Binary files /dev/null and b/content/en/case-studies/netease/netease_featured_logo.png differ diff --git a/content/en/docs/concepts/architecture/cloud-controller.md b/content/en/docs/concepts/architecture/cloud-controller.md index fce1fd7b8482c..24f685539ce7d 100644 --- a/content/en/docs/concepts/architecture/cloud-controller.md +++ b/content/en/docs/concepts/architecture/cloud-controller.md @@ -256,6 +256,7 @@ The following cloud providers have implemented CCMs: * [GCE](https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider/providers/gce) * [AWS](https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider/providers/aws) * [BaiduCloud](https://github.com/baidu/cloud-provider-baiducloud) +* [Linode](https://github.com/linode/linode-cloud-controller-manager) ## Cluster Administration diff --git a/content/en/docs/concepts/architecture/master-node-communication.md b/content/en/docs/concepts/architecture/master-node-communication.md index 7c6b3a9f1c874..be327630fe90d 100644 --- a/content/en/docs/concepts/architecture/master-node-communication.md +++ b/content/en/docs/concepts/architecture/master-node-communication.md @@ -77,7 +77,7 @@ To verify this connection, use the `--kubelet-certificate-authority` flag to provide the apiserver with a root certificate bundle to use to verify the kubelet's serving certificate. -If that is not possible, use [SSH tunneling](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) +If that is not possible, use [SSH tunneling](/docs/concepts/architecture/master-node-communication/#ssh-tunnels) between the apiserver and kubelet if required to avoid connecting over an untrusted or public network. @@ -95,4 +95,15 @@ connection will be encrypted, it will not provide any guarantees of integrity. These connections **are not currently safe** to run over untrusted and/or public networks. +### SSH Tunnels + +Kubernetes supports SSH tunnels to protect the Master -> Cluster communication +paths. In this configuration, the apiserver initiates an SSH tunnel to each node +in the cluster (connecting to the ssh server listening on port 22) and passes +all traffic destined for a kubelet, node, pod, or service through the tunnel. +This tunnel ensures that the traffic is not exposed outside of the network in +which the nodes are running. + +SSH tunnels are currently deprecated so you shouldn't opt to use them unless you know what you are doing. A replacement for this communication channel is being designed. + {{% /capture %}} diff --git a/content/en/docs/concepts/cluster-administration/addons.md b/content/en/docs/concepts/cluster-administration/addons.md index e6704223682d0..d0a8c3946cb8e 100644 --- a/content/en/docs/concepts/cluster-administration/addons.md +++ b/content/en/docs/concepts/cluster-administration/addons.md @@ -26,6 +26,7 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply * [Cilium](https://github.com/cilium/cilium) is a L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation mode are supported. * [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave. 
* [Contiv](http://contiv.github.io) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](http://github.com/contiv). The [installer](http://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
+* [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads.
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md) is an overlay network provider that can be used with Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) is a network solution supporting multiple networking in Kubernetes.
* [Multus](https://github.com/Intel-Corp/multus-cni) is a Multi plugin for multiple network support in Kubernetes to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes.
diff --git a/content/en/docs/concepts/cluster-administration/certificates.md b/content/en/docs/concepts/cluster-administration/certificates.md
index 93161bb8eeb2e..1a3ddf82639fc 100644
--- a/content/en/docs/concepts/cluster-administration/certificates.md
+++ b/content/en/docs/concepts/cluster-administration/certificates.md
@@ -231,8 +231,11 @@ refresh the local list for valid certificates.
On each client, perform the following operations:

```bash
-$ sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt
-$ sudo update-ca-certificates
+sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt
+sudo update-ca-certificates
+```
+
+```
Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d....
diff --git a/content/en/docs/concepts/cluster-administration/cloud-providers.md b/content/en/docs/concepts/cluster-administration/cloud-providers.md
index 68099874c8e4c..ff3df214b43ee 100644
--- a/content/en/docs/concepts/cluster-administration/cloud-providers.md
+++ b/content/en/docs/concepts/cluster-administration/cloud-providers.md
@@ -367,14 +367,21 @@ The `--hostname-override` parameter is ignored by the VSphere cloud provider.
## IBM Cloud Kubernetes Service

### Compute nodes
-By using the IBM Cloud Kubernetes Service provider, you can create clusters with a mixture of virtual and physical (bare metal) nodes in a single zone or across multiple zones in a region. For more information, see [Planning your cluster and worker node setup](https://console.bluemix.net/docs/containers/cs_clusters_planning.html#plan_clusters).
+By using the IBM Cloud Kubernetes Service provider, you can create clusters with a mixture of virtual and physical (bare metal) nodes in a single zone or across multiple zones in a region. For more information, see [Planning your cluster and worker node setup](https://cloud.ibm.com/docs/containers?topic=containers-plan_clusters#plan_clusters).

The name of the Kubernetes Node object is the private IP address of the IBM Cloud Kubernetes Service worker node instance.
### Networking -The IBM Cloud Kubernetes Service provider provides VLANs for quality network performance and network isolation for nodes. You can set up custom firewalls and Calico network policies to add an extra layer of security for your cluster, or connect your cluster to your on-prem data center via VPN. For more information, see [Planning in-cluster and private networking](https://console.bluemix.net/docs/containers/cs_network_cluster.html#planning). +The IBM Cloud Kubernetes Service provider provides VLANs for quality network performance and network isolation for nodes. You can set up custom firewalls and Calico network policies to add an extra layer of security for your cluster, or connect your cluster to your on-prem data center via VPN. For more information, see [Planning in-cluster and private networking](https://cloud.ibm.com/docs/containers?topic=containers-cs_network_cluster#cs_network_cluster). -To expose apps to the public or within the cluster, you can leverage NodePort, LoadBalancer, or Ingress services. You can also customize the Ingress application load balancer with annotations. For more information, see [Planning to expose your apps with external networking](https://console.bluemix.net/docs/containers/cs_network_planning.html#planning). +To expose apps to the public or within the cluster, you can leverage NodePort, LoadBalancer, or Ingress services. You can also customize the Ingress application load balancer with annotations. For more information, see [Planning to expose your apps with external networking](https://cloud.ibm.com/docs/containers?topic=containers-cs_network_planning#cs_network_planning). ### Storage -The IBM Cloud Kubernetes Service provider leverages Kubernetes-native persistent volumes to enable users to mount file, block, and cloud object storage to their apps. You can also use database-as-a-service and third-party add-ons for persistent storage of your data. For more information, see [Planning highly available persistent storage](https://console.bluemix.net/docs/containers/cs_storage_planning.html#storage_planning). +The IBM Cloud Kubernetes Service provider leverages Kubernetes-native persistent volumes to enable users to mount file, block, and cloud object storage to their apps. You can also use database-as-a-service and third-party add-ons for persistent storage of your data. For more information, see [Planning highly available persistent storage](https://cloud.ibm.com/docs/containers?topic=containers-storage_planning#storage_planning). + +## Baidu Cloud Container Engine + +### Node Name + +The Baidu cloud provider uses the private IP address of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object. +Note that the Kubernetes Node name must match the Baidu VM private IP. diff --git a/content/en/docs/concepts/cluster-administration/federation.md b/content/en/docs/concepts/cluster-administration/federation.md index 16fc92d1f7237..501568531bbec 100644 --- a/content/en/docs/concepts/cluster-administration/federation.md +++ b/content/en/docs/concepts/cluster-administration/federation.md @@ -6,7 +6,9 @@ weight: 80 {{% capture overview %}} -{{< include "federation-current-state.md" >}} +{{< deprecationfilewarning >}} +{{< include "federation-deprecation-warning-note.md" >}} +{{< /deprecationfilewarning >}} This page explains why and how to manage multiple Kubernetes clusters using federation. 
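
For example, a quick way to confirm that node names line up with the VM private IPs (a minimal check, assuming you have `kubectl` access to the cluster) is to compare the `NAME` and `INTERNAL-IP` columns:

```shell
# For the Baidu provider, NAME should match INTERNAL-IP (the VM private IP).
kubectl get nodes -o wide
```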
diff --git a/content/en/docs/concepts/cluster-administration/logging.md b/content/en/docs/concepts/cluster-administration/logging.md index 7b540f7baabc7..67fa8a5699dac 100644 --- a/content/en/docs/concepts/cluster-administration/logging.md +++ b/content/en/docs/concepts/cluster-administration/logging.md @@ -35,14 +35,14 @@ a container that writes some text to standard output once per second. To run this pod, use the following command: ```shell -$ kubectl create -f https://k8s.io/examples/debug/counter-pod.yaml +kubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml pod/counter created ``` To fetch the logs, use the `kubectl logs` command, as follows: ```shell -$ kubectl logs counter +kubectl logs counter 0: Mon Jan 1 00:00:00 UTC 2001 1: Mon Jan 1 00:00:01 UTC 2001 2: Mon Jan 1 00:00:02 UTC 2001 @@ -178,7 +178,9 @@ Now when you run this pod, you can access each log stream separately by running the following commands: ```shell -$ kubectl logs counter count-log-1 +kubectl logs counter count-log-1 +``` +``` 0: Mon Jan 1 00:00:00 UTC 2001 1: Mon Jan 1 00:00:01 UTC 2001 2: Mon Jan 1 00:00:02 UTC 2001 @@ -186,7 +188,9 @@ $ kubectl logs counter count-log-1 ``` ```shell -$ kubectl logs counter count-log-2 +kubectl logs counter count-log-2 +``` +``` Mon Jan 1 00:00:00 UTC 2001 INFO 0 Mon Jan 1 00:00:01 UTC 2001 INFO 1 Mon Jan 1 00:00:02 UTC 2001 INFO 2 diff --git a/content/en/docs/concepts/cluster-administration/manage-deployment.md b/content/en/docs/concepts/cluster-administration/manage-deployment.md index 0288c73efaac2..d5ef64ab99aaa 100644 --- a/content/en/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/en/docs/concepts/cluster-administration/manage-deployment.md @@ -26,23 +26,26 @@ Many applications require multiple resources to be created, such as a Deployment Multiple resources can be created the same way as a single resource: ```shell -$ kubectl create -f https://k8s.io/examples/application/nginx-app.yaml +kubectl apply -f https://k8s.io/examples/application/nginx-app.yaml +``` + +```shell service/my-nginx-svc created deployment.apps/my-nginx created ``` The resources will be created in the order they appear in the file. Therefore, it's best to specify the service first, since that will ensure the scheduler can spread the pods associated with the service as they are created by the controller(s), such as Deployment. -`kubectl create` also accepts multiple `-f` arguments: +`kubectl apply` also accepts multiple `-f` arguments: ```shell -$ kubectl create -f https://k8s.io/examples/application/nginx/nginx-svc.yaml -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml +kubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml ``` And a directory can be specified rather than or in addition to individual files: ```shell -$ kubectl create -f https://k8s.io/examples/application/nginx/ +kubectl apply -f https://k8s.io/examples/application/nginx/ ``` `kubectl` will read any files with suffixes `.yaml`, `.yml`, or `.json`. 
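
For reference, a multi-resource file like the `nginx-app.yaml` used above is simply several objects in one file separated by `---`. The sketch below illustrates the idea (the names and image tag are illustrative; the actual example file may differ), with the Service listed first as recommended:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```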
@@ -52,7 +55,10 @@ It is a recommended practice to put resources related to the same microservice o A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into github: ```shell -$ kubectl create -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx/nginx-deployment.yaml +kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx/nginx-deployment.yaml +``` + +```shell deployment.apps/my-nginx created ``` @@ -61,7 +67,10 @@ deployment.apps/my-nginx created Resource creation isn't the only operation that `kubectl` can perform in bulk. It can also extract resource names from configuration files in order to perform other operations, in particular to delete the same resources you created: ```shell -$ kubectl delete -f https://k8s.io/examples/application/nginx-app.yaml +kubectl delete -f https://k8s.io/examples/application/nginx-app.yaml +``` + +```shell deployment.apps "my-nginx" deleted service "my-nginx-svc" deleted ``` @@ -69,13 +78,16 @@ service "my-nginx-svc" deleted In the case of just two resources, it's also easy to specify both on the command line using the resource/name syntax: ```shell -$ kubectl delete deployments/my-nginx services/my-nginx-svc +kubectl delete deployments/my-nginx services/my-nginx-svc ``` For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using `-l` or `--selector`, to filter resources by their labels: ```shell -$ kubectl delete deployment,services -l app=nginx +kubectl delete deployment,services -l app=nginx +``` + +```shell deployment.apps "my-nginx" deleted service "my-nginx-svc" deleted ``` @@ -83,7 +95,10 @@ service "my-nginx-svc" deleted Because `kubectl` outputs resource names in the same syntax it accepts, it's easy to chain operations using `$()` or `xargs`: ```shell -$ kubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service) +kubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service) +``` + +```shell NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-nginx-svc LoadBalancer 10.0.0.208 80/TCP 0s ``` @@ -108,14 +123,20 @@ project/k8s/development By default, performing a bulk operation on `project/k8s/development` will stop at the first level of the directory, not processing any subdirectories. 
If we had tried to create the resources in this directory using the following command, we would have encountered an error: ```shell -$ kubectl create -f project/k8s/development +kubectl apply -f project/k8s/development +``` + +```shell error: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin) ``` Instead, specify the `--recursive` or `-R` flag with the `--filename,-f` flag as such: ```shell -$ kubectl create -f project/k8s/development --recursive +kubectl apply -f project/k8s/development --recursive +``` + +```shell configmap/my-config created deployment.apps/my-deployment created persistentvolumeclaim/my-pvc created @@ -126,7 +147,10 @@ The `--recursive` flag works with any operation that accepts the `--filename,-f` The `--recursive` flag also works when multiple `-f` arguments are provided: ```shell -$ kubectl create -f project/k8s/namespaces -f project/k8s/development --recursive +kubectl apply -f project/k8s/namespaces -f project/k8s/development --recursive +``` + +```shell namespace/development created namespace/staging created configmap/my-config created @@ -169,8 +193,11 @@ and The labels allow us to slice and dice our resources along any dimension specified by a label: ```shell -$ kubectl create -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml -$ kubectl get pods -Lapp -Ltier -Lrole +kubectl apply -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml +kubectl get pods -Lapp -Ltier -Lrole +``` + +```shell NAME READY STATUS RESTARTS AGE APP TIER ROLE guestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend guestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend @@ -180,7 +207,12 @@ guestbook-redis-slave-2q2yf 1/1 Running 0 1m guestboo guestbook-redis-slave-qgazl 1/1 Running 0 1m guestbook backend slave my-nginx-divi2 1/1 Running 0 29m nginx my-nginx-o0ef1 1/1 Running 0 29m nginx -$ kubectl get pods -lapp=guestbook,role=slave +``` + +```shell +kubectl get pods -lapp=guestbook,role=slave +``` +```shell NAME READY STATUS RESTARTS AGE guestbook-redis-slave-2q2yf 1/1 Running 0 3m guestbook-redis-slave-qgazl 1/1 Running 0 3m @@ -240,7 +272,10 @@ Sometimes existing pods and other resources need to be relabeled before creating For example, if you want to label all your nginx pods as frontend tier, simply run: ```shell -$ kubectl label pods -l app=nginx tier=fe +kubectl label pods -l app=nginx tier=fe +``` + +```shell pod/my-nginx-2035384211-j5fhi labeled pod/my-nginx-2035384211-u2c7e labeled pod/my-nginx-2035384211-u3t6x labeled @@ -250,7 +285,9 @@ This first filters all pods with the label "app=nginx", and then labels them wit To see the pods you just labeled, run: ```shell -$ kubectl get pods -l app=nginx -L tier +kubectl get pods -l app=nginx -L tier +``` +```shell NAME READY STATUS RESTARTS AGE TIER my-nginx-2035384211-j5fhi 1/1 Running 0 23m fe my-nginx-2035384211-u2c7e 1/1 Running 0 23m fe @@ -266,8 +303,10 @@ For more information, please see [labels](/docs/concepts/overview/working-with-o Sometimes you would want to attach annotations to resources. Annotations are arbitrary non-identifying metadata for retrieval by API clients such as tools, libraries, etc. This can be done with `kubectl annotate`. 
For example: ```shell -$ kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx' -$ kubectl get pods my-nginx-v4-9gw19 -o yaml +kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx' +kubectl get pods my-nginx-v4-9gw19 -o yaml +``` +```shell apiversion: v1 kind: pod metadata: @@ -283,14 +322,18 @@ For more information, please see [annotations](/docs/concepts/overview/working-w When load on your application grows or shrinks, it's easy to scale with `kubectl`. For instance, to decrease the number of nginx replicas from 3 to 1, do: ```shell -$ kubectl scale deployment/my-nginx --replicas=1 +kubectl scale deployment/my-nginx --replicas=1 +``` +```shell deployment.extensions/my-nginx scaled ``` Now you only have one pod managed by the deployment. ```shell -$ kubectl get pods -l app=nginx +kubectl get pods -l app=nginx +``` +```shell NAME READY STATUS RESTARTS AGE my-nginx-2035384211-j5fhi 1/1 Running 0 30m ``` @@ -298,7 +341,9 @@ my-nginx-2035384211-j5fhi 1/1 Running 0 30m To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do: ```shell -$ kubectl autoscale deployment/my-nginx --min=1 --max=3 +kubectl autoscale deployment/my-nginx --min=1 --max=3 +``` +```shell horizontalpodautoscaler.autoscaling/my-nginx autoscaled ``` @@ -320,7 +365,8 @@ Then, you can use [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-co This command will compare the version of the configuration that you're pushing with the previous version and apply the changes you've made, without overwriting any automated changes to properties you haven't specified. ```shell -$ kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml +kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml +```shell deployment.apps/my-nginx configured ``` @@ -330,27 +376,25 @@ Currently, resources are created without this annotation, so the first invocatio All subsequent calls to `kubectl apply`, and other commands that modify the configuration, such as `kubectl replace` and `kubectl edit`, will update the annotation, allowing subsequent calls to `kubectl apply` to detect and perform deletions using a three-way diff. -{{< note >}} -To use apply, always create resource initially with either `kubectl apply` or `kubectl create --save-config`. -{{< /note >}} - ### kubectl edit Alternatively, you may also update resources with `kubectl edit`: ```shell -$ kubectl edit deployment/my-nginx +kubectl edit deployment/my-nginx ``` This is equivalent to first `get` the resource, edit it in text editor, and then `apply` the resource with the updated version: ```shell -$ kubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml -$ vi /tmp/nginx.yaml +kubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml +vi /tmp/nginx.yaml # do some edit, and then save the file -$ kubectl apply -f /tmp/nginx.yaml + +kubectl apply -f /tmp/nginx.yaml deployment.apps/my-nginx configured -$ rm /tmp/nginx.yaml + +rm /tmp/nginx.yaml ``` This allows you to do more significant changes more easily. Note that you can specify the editor with your `EDITOR` or `KUBE_EDITOR` environment variables. @@ -370,7 +414,9 @@ and In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. 
In this case, you can simply modify your original configuration file: ```shell -$ kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force +kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force +``` +```shell deployment.apps/my-nginx deleted deployment.apps/my-nginx replaced ``` @@ -379,20 +425,21 @@ deployment.apps/my-nginx replaced At some point, you'll eventually need to update your deployed application, typically by specifying a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several update operations, each of which is applicable to different scenarios. -We'll guide you through how to create and update applications with Deployments. If your deployed application is managed by Replication Controllers, -you should read [how to use `kubectl rolling-update`](/docs/tasks/run-application/rolling-update-replication-controller/) instead. +We'll guide you through how to create and update applications with Deployments. Let's say you were running version 1.7.9 of nginx: ```shell -$ kubectl run my-nginx --image=nginx:1.7.9 --replicas=3 +kubectl run my-nginx --image=nginx:1.7.9 --replicas=3 +``` +```shell deployment.apps/my-nginx created ``` To update to version 1.9.1, simply change `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`, with the kubectl commands we learned above. ```shell -$ kubectl edit deployment/my-nginx +kubectl edit deployment/my-nginx ``` That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are being updated, and only a certain number of new replicas may be created above the desired number of pods. To learn more details about it, visit [Deployment page](/docs/concepts/workloads/controllers/deployment/). diff --git a/content/en/docs/concepts/cluster-administration/networking.md b/content/en/docs/concepts/cluster-administration/networking.md index d4fda1a89293d..bbd223e9def89 100644 --- a/content/en/docs/concepts/cluster-administration/networking.md +++ b/content/en/docs/concepts/cluster-administration/networking.md @@ -7,8 +7,9 @@ weight: 50 --- {{% capture overview %}} -Kubernetes approaches networking somewhat differently than Docker does by -default. There are 4 distinct networking problems to solve: +Networking is a central part of Kubernetes, but it can be challenging to +understand exactly how it is expected to work. There are 4 distinct networking +problems to address: 1. Highly-coupled container-to-container communications: this is solved by [pods](/docs/concepts/workloads/pods/pod/) and `localhost` communications. @@ -21,80 +22,56 @@ default. There are 4 distinct networking problems to solve: {{% capture body %}} -Kubernetes assumes that pods can communicate with other pods, regardless of -which host they land on. Every pod gets its own IP address so you do not -need to explicitly create links between pods and you almost never need to deal -with mapping container ports to host ports. This creates a clean, -backwards-compatible model where pods can be treated much like VMs or physical -hosts from the perspectives of port allocation, naming, service discovery, load -balancing, application configuration, and migration. - -There are requirements imposed on how you set up your cluster networking to -achieve this. 
- -## Docker model - -Before discussing the Kubernetes approach to networking, it is worthwhile to -review the "normal" way that networking works with Docker. By default, Docker -uses host-private networking. It creates a virtual bridge, called `docker0` by -default, and allocates a subnet from one of the private address blocks defined -in [RFC1918](https://tools.ietf.org/html/rfc1918) for that bridge. For each -container that Docker creates, it allocates a virtual Ethernet device (called -`veth`) which is attached to the bridge. The veth is mapped to appear as `eth0` -in the container, using Linux namespaces. The in-container `eth0` interface is -given an IP address from the bridge's address range. - -The result is that Docker containers can talk to other containers only if they -are on the same machine (and thus the same virtual bridge). Containers on -different machines can not reach each other - in fact they may end up with the -exact same network ranges and IP addresses. - -In order for Docker containers to communicate across nodes, there must -be allocated ports on the machine’s own IP address, which are then -forwarded or proxied to the containers. This obviously means that -containers must either coordinate which ports they use very carefully -or ports must be allocated dynamically. - -## Kubernetes model - -Coordinating ports across multiple developers is very difficult to do at -scale and exposes users to cluster-level issues outside of their control. +Kubernetes is all about sharing machines between applications. Typically, +sharing machines requires ensuring that two applications do not try to use the +same ports. Coordinating ports across multiple developers is very difficult to +do at scale and exposes users to cluster-level issues outside of their control. + Dynamic port allocation brings a lot of complications to the system - every application has to take ports as flags, the API servers have to know how to insert dynamic port numbers into configuration blocks, services have to know how to find each other, etc. Rather than deal with this, Kubernetes takes a different approach. +## The Kubernetes network model + +Every `Pod` gets its own IP address. This means you do not need to explicitly +create links between `Pods` and you almost never need to deal with mapping +container ports to host ports. This creates a clean, backwards-compatible +model where `Pods` can be treated much like VMs or physical hosts from the +perspectives of port allocation, naming, service discovery, load balancing, +application configuration, and migration. + Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies): - * all containers can communicate with all other containers without NAT - * all nodes can communicate with all containers (and vice-versa) without NAT - * the IP that a container sees itself as is the same IP that others see it as + * pods on a node can communicate with all pods on all nodes without NAT + * agents on a node (e.g. system daemons, kubelet) can communicate with all + pods on that node -What this means in practice is that you can not just take two computers -running Docker and expect Kubernetes to work. You must ensure that the -fundamental requirements are met. +Note: For those platforms that support `Pods` running in the host network (e.g. 
+Linux): + + * pods in the host network of a node can communicate with all pods on all + nodes without NAT This model is not only less complex overall, but it is principally compatible with the desire for Kubernetes to enable low-friction porting of apps from VMs to containers. If your job previously ran in a VM, your VM had an IP and could talk to other VMs in your project. This is the same basic model. -Until now this document has talked about containers. In reality, Kubernetes -applies IP addresses at the `Pod` scope - containers within a `Pod` share their -network namespaces - including their IP address. This means that containers -within a `Pod` can all reach each other's ports on `localhost`. This does imply -that containers within a `Pod` must coordinate port usage, but this is no -different than processes in a VM. This is called the "IP-per-pod" model. This -is implemented, using Docker, as a "pod container" which holds the network namespace -open while "app containers" (the things the user specified) join that namespace -with Docker's `--net=container:` function. - -As with Docker, it is possible to request host ports, but this is reduced to a -very niche operation. In this case a port will be allocated on the host `Node` -and traffic will be forwarded to the `Pod`. The `Pod` itself is blind to the -existence or non-existence of host ports. +Kubernetes IP addresses exist at the `Pod` scope - containers within a `Pod` +share their network namespaces - including their IP address. This means that +containers within a `Pod` can all reach each other's ports on `localhost`. This +also means that containers within a `Pod` must coordinate port usage, but this +is no different than processes in a VM. This is called the "IP-per-pod" model. + +How this is implemented is a detail of the particular container runtime in use. + +It is possible to request ports on the `Node` itself which forward to your `Pod` +(called host ports), but this is a very niche operation. How that forwarding is +implemented is also a detail of the container runtime. The `Pod` itself is +blind to the existence or non-existence of host ports. ## How to implement the Kubernetes networking model diff --git a/content/en/docs/concepts/configuration/assign-pod-node.md b/content/en/docs/concepts/configuration/assign-pod-node.md index 70ec7f2938ea5..99129cfa0f8a2 100644 --- a/content/en/docs/concepts/configuration/assign-pod-node.md +++ b/content/en/docs/concepts/configuration/assign-pod-node.md @@ -69,7 +69,7 @@ Then add a nodeSelector like so: {{< codenew file="pods/pod-nginx.yaml" >}} -When you then run `kubectl create -f https://k8s.io/examples/pods/pod-nginx.yaml`, +When you then run `kubectl apply -f https://k8s.io/examples/pods/pod-nginx.yaml`, the Pod will get scheduled on the node that you attached the label to. You can verify that it worked by running `kubectl get pods -o wide` and looking at the "NODE" that the Pod was assigned to. @@ -83,8 +83,8 @@ with a standard set of labels. As of Kubernetes v1.4 these labels are * `failure-domain.beta.kubernetes.io/zone` * `failure-domain.beta.kubernetes.io/region` * `beta.kubernetes.io/instance-type` -* `beta.kubernetes.io/os` -* `beta.kubernetes.io/arch` +* `kubernetes.io/os` +* `kubernetes.io/arch` {{< note >}} The value of these labels is cloud provider specific and is not guaranteed to be reliable. 
diff --git a/content/en/docs/concepts/configuration/manage-compute-resources-container.md b/content/en/docs/concepts/configuration/manage-compute-resources-container.md index b05b2e508d20a..34f332089317e 100644 --- a/content/en/docs/concepts/configuration/manage-compute-resources-container.md +++ b/content/en/docs/concepts/configuration/manage-compute-resources-container.md @@ -189,7 +189,9 @@ unscheduled until a place can be found. An event is produced each time the scheduler fails to find a place for the Pod, like this: ```shell -$ kubectl describe pod frontend | grep -A 3 Events +kubectl describe pod frontend | grep -A 3 Events +``` +``` Events: FirstSeen LastSeen Count From Subobject PathReason Message 36s 5s 6 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others @@ -210,7 +212,9 @@ You can check node capacities and amounts allocated with the `kubectl describe nodes` command. For example: ```shell -$ kubectl describe nodes e2e-test-minion-group-4lw4 +kubectl describe nodes e2e-test-minion-group-4lw4 +``` +``` Name: e2e-test-minion-group-4lw4 [ ... lines removed for clarity ...] Capacity: @@ -260,7 +264,9 @@ whether a Container is being killed because it is hitting a resource limit, call `kubectl describe pod` on the Pod of interest: ```shell -[12:54:41] $ kubectl describe pod simmemleak-hra99 +kubectl describe pod simmemleak-hra99 +``` +``` Name: simmemleak-hra99 Namespace: default Image(s): saadali/simmemleak @@ -304,7 +310,9 @@ You can call `kubectl get pod` with the `-o go-template=...` option to fetch the of previously terminated Containers: ```shell -[13:59:01] $ kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-hra99 +kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-hra99 +``` +``` Container Name: simmemleak LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]] ``` diff --git a/content/en/docs/concepts/configuration/overview.md b/content/en/docs/concepts/configuration/overview.md index b7d74b4abea8b..c900a06beef77 100644 --- a/content/en/docs/concepts/configuration/overview.md +++ b/content/en/docs/concepts/configuration/overview.md @@ -23,7 +23,7 @@ This is a living document. If you think of something that is not on this list bu - Group related objects into a single file whenever it makes sense. One file is often easier to manage than several. See the [guestbook-all-in-one.yaml](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/all-in-one/guestbook-all-in-one.yaml) file as an example of this syntax. -- Note also that many `kubectl` commands can be called on a directory. For example, you can call `kubectl create` on a directory of config files. +- Note also that many `kubectl` commands can be called on a directory. For example, you can call `kubectl apply` on a directory of config files. - Don't specify default values unnecessarily: simple, minimal configuration will make errors less likely. @@ -97,7 +97,7 @@ The caching semantics of the underlying image provider make even `imagePullPolic ## Using kubectl -- Use `kubectl apply -f ` or `kubectl create -f `. 
This looks for Kubernetes configuration in all `.yaml`, `.yml`, and `.json` files in `` and passes it to `apply` or `create`. +- Use `kubectl apply -f `. This looks for Kubernetes configuration in all `.yaml`, `.yml`, and `.json` files in `` and passes it to `apply`. - Use label selectors for `get` and `delete` operations instead of specific object names. See the sections on [label selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) and [using labels effectively](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively). diff --git a/content/en/docs/concepts/configuration/pod-priority-preemption.md b/content/en/docs/concepts/configuration/pod-priority-preemption.md index 7b92b1cd1267f..10b1b10c02e9d 100644 --- a/content/en/docs/concepts/configuration/pod-priority-preemption.md +++ b/content/en/docs/concepts/configuration/pod-priority-preemption.md @@ -309,7 +309,7 @@ When a Pod is preempted, there will be events recorded for the preempted Pod. Preemption should happen only when a cluster does not have enough resources for a Pod. In such cases, preemption happens only when the priority of the pending Pod (preemptor) is higher than the victim Pods. Preemption must not happen when -there is no pending Pod, or when the pending Pods have equal or higher priority +there is no pending Pod, or when the pending Pods have equal or lower priority than the victims. If preemption happens in such scenarios, please file an issue. #### Pods are preempted, but the preemptor is not scheduled diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md index cac58727d7b39..5b4c3c1b44b7c 100644 --- a/content/en/docs/concepts/configuration/secret.md +++ b/content/en/docs/concepts/configuration/secret.md @@ -13,10 +13,10 @@ weight: 50 {{% capture overview %}} -Objects of type `secret` are intended to hold sensitive information, such as -passwords, OAuth tokens, and ssh keys. Putting this information in a `secret` -is safer and more flexible than putting it verbatim in a `pod` definition or in -a docker image. See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) for more information. +Kubernetes `secret` objects let you store and manage sensitive information, such +as passwords, OAuth tokens, and ssh keys. Putting this information in a `secret` +is safer and more flexible than putting it verbatim in a +{{< glossary_tooltip term_id="pod" >}} definition or in a {{< glossary_tooltip text="container image" term_id="image" >}}. See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) for more information. {{% /capture %}} @@ -32,7 +32,8 @@ more control over how it is used, and reduces the risk of accidental exposure. Users can create secrets, and the system also creates some secrets. To use a secret, a pod needs to reference the secret. -A secret can be used with a pod in two ways: as files in a [volume](/docs/concepts/storage/volumes/) mounted on one or more of +A secret can be used with a pod in two ways: as files in a +{{< glossary_tooltip text="volume" term_id="volume" >}} mounted on one or more of its containers, or used by kubelet when pulling images for the pod. ### Built-in Secrets @@ -60,8 +61,8 @@ username and password that the pods should use is in the files ```shell # Create files needed for rest of example. 
-$ echo -n 'admin' > ./username.txt -$ echo -n '1f2d1e2e67df' > ./password.txt +echo -n 'admin' > ./username.txt +echo -n '1f2d1e2e67df' > ./password.txt ``` The `kubectl create secret` command @@ -69,18 +70,31 @@ packages these files into a Secret and creates the object on the Apiserver. ```shell -$ kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt +kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt +``` +``` secret "db-user-pass" created ``` +{{< note >}} +Special characters such as `$`, `\*`, and `!` require escaping. +If the password you are using has special characters, you need to escape them using the `\\` character. For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way: + kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password=S\\!B\\\*d\\$zDsb + You do not need to escape special characters in passwords from files (`--from-file`). +{{< /note >}} You can check that the secret was created like this: ```shell -$ kubectl get secrets +kubectl get secrets +``` +``` NAME TYPE DATA AGE db-user-pass Opaque 2 51s - -$ kubectl describe secrets/db-user-pass +``` +```shell +kubectl describe secrets/db-user-pass +``` +``` Name: db-user-pass Namespace: default Labels: @@ -94,11 +108,14 @@ password.txt: 12 bytes username.txt: 5 bytes ``` -Note that neither `get` nor `describe` shows the contents of the file by default. -This is to protect the secret from being exposed accidentally to someone looking +{{< note >}} +`kubectl get` and `kubectl describe` avoid showing the contents of a secret by +default. +This is to protect the secret from being exposed accidentally to an onlooker, or from being stored in a terminal log. +{{< /note >}} -See [decoding a secret](#decoding-a-secret) for how to see the contents. +See [decoding a secret](#decoding-a-secret) for how to see the contents of a secret. #### Creating a Secret Manually @@ -132,10 +149,12 @@ data: password: MWYyZDFlMmU2N2Rm ``` -Now create the Secret using [`kubectl create`](/docs/reference/generated/kubectl/kubectl-commands#create): +Now create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply): ```shell -$ kubectl create -f ./secret.yaml +kubectl apply -f ./secret.yaml +``` +``` secret "mysecret" created ``` @@ -171,7 +190,7 @@ stringData: ``` Your deployment tool could then replace the `{{username}}` and `{{password}}` -template variables before running `kubectl create`. +template variables before running `kubectl apply`. stringData is a write-only convenience field. It is never output when retrieving Secrets. For example, if you run the following command: @@ -241,12 +260,81 @@ using the `-b` option to split long lines. Conversely Linux users *should* add the option `-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if `-w` option is not available. +#### Creating a Secret from Generator +Kubectl supports [managing objects using Kustomize](/docs/concepts/overview/object-management-kubectl/kustomization/) +since 1.14. With this new feature, +you can also create a Secret from generators and then apply it to create the object on +the Apiserver. The generators +should be specified in a `kustomization.yaml` inside a directory. 
+
+For example, to generate a Secret from the files `./username.txt` and `./password.txt`:
+```shell
+# Create a kustomization.yaml file with SecretGenerator
+cat <<EOF >./kustomization.yaml
+secretGenerator:
+- name: db-user-pass
+  files:
+  - username.txt
+  - password.txt
+EOF
+```
+Apply the kustomization directory to create the Secret object.
+```shell
+$ kubectl apply -k .
+secret/db-user-pass-96mffmfh4k created
+```
+
+You can check that the secret was created like this:
+
+```shell
+$ kubectl get secrets
+NAME                      TYPE      DATA      AGE
+db-user-pass-96mffmfh4k   Opaque    2         51s
+
+$ kubectl describe secrets/db-user-pass-96mffmfh4k
+Name:            db-user-pass-96mffmfh4k
+Namespace:       default
+Labels:          <none>
+Annotations:     <none>
+
+Type:            Opaque
+
+Data
+====
+password.txt:    12 bytes
+username.txt:    5 bytes
+```
+
+Similarly, to generate a Secret from the literals `username=admin` and `password=secret`,
+you can specify the secret generator in `kustomization.yaml` as follows:
+```shell
+# Create a kustomization.yaml file with SecretGenerator
+cat <<EOF >./kustomization.yaml
+secretGenerator:
+- name: db-user-pass
+  literals:
+  - username=admin
+  - password=secret
+EOF
+```
+Apply the kustomization directory to create the Secret object.
+```shell
+$ kubectl apply -k .
+secret/db-user-pass-dddghtt9b5 created
+```
+{{< note >}}
+The generated Secret's name has a suffix appended by hashing the contents. This ensures that a new
+Secret is generated each time the contents are modified.
+{{< /note >}}
+
 #### Decoding a Secret

 Secrets can be retrieved via the `kubectl get secret` command. For example, to retrieve the secret created in the previous section:

 ```shell
-$ kubectl get secret mysecret -o yaml
+kubectl get secret mysecret -o yaml
+```
+```
 apiVersion: v1
 data:
   username: YWRtaW4=
@@ -265,14 +353,17 @@ type: Opaque
 Decode the password field:

 ```shell
-$ echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
+echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
+```
+```
 1f2d1e2e67df
 ```

 ### Using Secrets

-Secrets can be mounted as data volumes or be exposed as environment variables to
-be used by a container in a pod. They can also be used by other parts of the
+Secrets can be mounted as data volumes or be exposed as
+{{< glossary_tooltip text="environment variables" term_id="container-env-variables" >}}
+to be used by a container in a pod. They can also be used by other parts of the
 system, without being directly exposed to the pod. For example, they can hold
 credentials that other parts of the system should use to interact with external
 systems on your behalf.
@@ -424,12 +515,22 @@ This is the result of commands executed inside the container from the example above:

 ```shell
-$ ls /etc/foo/
+ls /etc/foo/
+```
+```
 username
 password
-$ cat /etc/foo/username
+```
+```shell
+cat /etc/foo/username
+```
+```
 admin
-$ cat /etc/foo/password
+```
+```shell
+cat /etc/foo/password
+```
+```
 1f2d1e2e67df
 ```

@@ -458,7 +559,8 @@ Secret updates.

 #### Using Secrets as Environment Variables

-To use a secret in an environment variable in a pod:
+To use a secret in an {{< glossary_tooltip text="environment variable" term_id="container-env-variables" >}}
+in a pod:

 1. Create a secret or use an existing one. Multiple pods can reference the same secret.
 1. Modify your Pod definition in each container that you wish to consume the value of a secret key to add an environment variable for each secret key you wish to consume. The environment variable that consumes the secret key should populate the secret's name and key in `env[].valueFrom.secretKeyRef`, as in the sketch below.
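A minimal sketch of such a Pod definition, assuming the `mysecret` Secret created earlier on this page (the Pod and container names here are only placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod        # illustrative name
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
    # Each variable pulls one key out of the referenced Secret.
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
  restartPolicy: Never
```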
@@ -496,9 +598,15 @@ normal environment variables containing the base-64 decoded values of the secret
 This is the result of commands executed inside the container from the example above:

 ```shell
-$ echo $SECRET_USERNAME
+echo $SECRET_USERNAME
+```
+```
 admin
-$ echo $SECRET_PASSWORD
+```
+```shell
+echo $SECRET_PASSWORD
+```
+```
 1f2d1e2e67df
 ```

@@ -534,10 +642,10 @@ Secret volume sources are validated to ensure that the specified object
 reference actually points to an object of type `Secret`. Therefore, a secret
 needs to be created before any pods that depend on it.

-Secret API objects reside in a namespace. They can only be referenced by pods
-in that same namespace.
+Secret API objects reside in a {{< glossary_tooltip text="namespace" term_id="namespace" >}}.
+They can only be referenced by pods in that same namespace.

-Individual secrets are limited to 1MB in size. This is to discourage creation
+Individual secrets are limited to 1MiB in size. This is to discourage creation
 of very large secrets which would exhaust apiserver and kubelet memory.
 However, creation of many smaller secrets could also exhaust memory. More
 comprehensive limits on memory usage due to secrets is a planned feature.

@@ -549,8 +657,8 @@ controller. It does not include pods created via the kubelets
 not common ways to create pods.)

 Secrets must be created before they are consumed in pods as environment
-variables unless they are marked as optional. References to Secrets that do not exist will prevent
-the pod from starting.
+variables unless they are marked as optional. References to Secrets that do
+not exist will prevent the pod from starting.

 References via `secretKeyRef` to keys that do not exist in a named Secret
 will prevent the pod from starting.

@@ -563,7 +671,9 @@ invalid keys that were skipped. The example shows a pod which refers to the
 default/mysecret that contains 2 invalid keys, 1badkey and 2alsobad.

 ```shell
-$ kubectl get events
+kubectl get events
+```
+```
 LASTSEEN   FIRSTSEEN   COUNT     NAME            KIND      SUBOBJECT  TYPE      REASON
 0s         0s          1         dapi-test-pod   Pod                  Warning   InvalidEnvironmentVariableNames   kubelet, 127.0.0.1      Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names.
 ```

@@ -583,10 +693,20 @@ start until all the pod's volumes are mounted.

 ### Use-Case: Pod with ssh keys

-Create a secret containing some ssh keys:
-
+Create a kustomization.yaml with a secretGenerator containing some ssh keys:
+```shell
+cp /path/to/.ssh/id_rsa ./id_rsa
+cp /path/to/.ssh/id_rsa.pub ./id_rsa.pub
+cat <<EOF >./kustomization.yaml
+secretGenerator:
+- name: ssh-key-secret
+  files:
+  - id_rsa
+  - id_rsa.pub
+EOF
+```
+Create the Secret object on the API server:
 ```shell
-$ kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub
+kubectl apply -k .
 ```

 {{< caution >}}
@@ -633,26 +753,25 @@
 This example illustrates a pod which consumes a secret containing prod
 credentials and another pod which consumes a secret with test environment
 credentials.
-Make the secrets:
-
+Make the kustomization.yaml with a secretGenerator:
 ```shell
-$ kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11
-secret "prod-db-secret" created
-$ kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests
-secret "test-db-secret" created
+cat <<EOF > kustomization.yaml
+secretGenerator:
+- name: prod-db-secret
+  literals:
+  - username=produser
+  - password=Y4nys7f11
+- name: test-db-secret
+  literals:
+  - username=testuser
+  - password=iluvtests
+EOF
 ```

-{{< note >}}
-Special characters such as `$`, `\*`, and `!` require escaping.
-If the password you are using has special characters, you need to escape them using the `\\` character. For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way:
-
-    kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password=S\\!B\\\*d\\$zDsb
-
-You do not need to escape special characters in passwords from files (`--from-file`).
-{{< /note >}}

 Now make the pods:

-```yaml
+```shell
+cat <<EOF > pod.yaml
 apiVersion: v1
 kind: List
 items:
@@ -692,6 +811,21 @@ items:
     - name: secret-volume
       readOnly: true
       mountPath: "/etc/secret-volume"
+EOF
+```
+
+Add the pods to the same kustomization.yaml:
+```shell
+cat <<EOF >> kustomization.yaml
+resources:
+- pod.yaml
+EOF
+```
+
+Apply all of these objects to the API server:
+
+```shell
+kubectl apply -k .
 ```

 Both containers will have the following files present on their filesystems with the values for each container's environment:
@@ -821,6 +955,7 @@ be available in future releases of Kubernetes.

 ## Security Properties

+
 ### Protections

 Because `secret` objects can be created independently of the `pods` that use
@@ -829,51 +964,52 @@ creating, viewing, and editing pods. The system can also take additional
 precautions with `secret` objects, such as avoiding writing them to disk where
 possible.

-A secret is only sent to a node if a pod on that node requires it. It is not
-written to disk. It is stored in a tmpfs. It is deleted once the pod that
-depends on it is deleted.
-
-On most Kubernetes-project-maintained distributions, communication between user
-to the apiserver, and from apiserver to the kubelets, is protected by SSL/TLS.
-Secrets are protected when transmitted over these channels.
-
-Secret data on nodes is stored in tmpfs volumes and thus does not come to rest
-on the node.
+A secret is only sent to a node if a pod on that node requires it.
+The kubelet stores the secret in a `tmpfs` so that the secret is not written
+to disk storage. Once the Pod that depends on the secret is deleted, the kubelet
+will delete its local copy of the secret data as well.

 There may be secrets for several pods on the same node. However, only the
 secrets that a pod requests are potentially visible within its containers.
-Therefore, one Pod does not have access to the secrets of another pod.
+Therefore, one Pod does not have access to the secrets of another Pod.

 There may be several containers in a pod. However, each container in a pod has
 to request the secret volume in its `volumeMounts` for it to be visible within
 the container. This can be used to construct useful [security partitions at the
 Pod level](#use-case-secret-visible-to-one-container-in-a-pod).

+On most Kubernetes-project-maintained distributions, communication from the user
+to the apiserver, and from the apiserver to the kubelets, is protected by SSL/TLS.
+Secrets are protected when transmitted over these channels. + +{{< feature-state for_k8s_version="v1.13" state="beta" >}} + +You can enable [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/) +for secret data, so that the secrets are not stored in the clear into {{< glossary_tooltip term_id="etcd" >}}. + ### Risks - - In the API server secret data is stored as plaintext in etcd; therefore: + - In the API server secret data is stored in {{< glossary_tooltip term_id="etcd" >}}; + therefore: + - Administrators should enable encryption at rest for cluster data (requires v1.13 or later) - Administrators should limit access to etcd to admin users - - Secret data in the API server is at rest on the disk that etcd uses; admins may want to wipe/shred disks - used by etcd when no longer in use + - Administrators may want to wipe/shred disks used by etcd when no longer in use + - If running etcd in a cluster, administrators should make sure to use SSL/TLS + for etcd peer-to-peer communication. - If you configure the secret through a manifest (JSON or YAML) file which has the secret data encoded as base64, sharing this file or checking it in to a - source repository means the secret is compromised. Base64 encoding is not an + source repository means the secret is compromised. Base64 encoding is _not_ an encryption method and is considered the same as plain text. - Applications still need to protect the value of secret after reading it from the volume, such as not accidentally logging it or transmitting it to an untrusted party. - A user who can create a pod that uses a secret can also see the value of that secret. Even if apiserver policy does not allow that user to read the secret object, the user could run a pod which exposes the secret. - - If multiple replicas of etcd are run, then the secrets will be shared between them. - By default, etcd does not secure peer-to-peer communication with SSL/TLS, though this can be configured. - - Currently, anyone with root on any node can read any secret from the apiserver, + - Currently, anyone with root on any node can read _any_ secret from the apiserver, by impersonating the kubelet. It is a planned feature to only send secrets to nodes that actually require them, to restrict the impact of a root exploit on a single node. -{{< note >}} -As of 1.7 [encryption of secret data at rest is supported](/docs/tasks/administer-cluster/encrypt-data/). -{{< /note >}} {{% capture whatsnext %}} diff --git a/content/en/docs/concepts/containers/container-lifecycle-hooks.md b/content/en/docs/concepts/containers/container-lifecycle-hooks.md index 3825868850c51..08d855732fabb 100644 --- a/content/en/docs/concepts/containers/container-lifecycle-hooks.md +++ b/content/en/docs/concepts/containers/container-lifecycle-hooks.md @@ -36,7 +36,7 @@ No parameters are passed to the handler. `PreStop` -This hook is called immediately before a container is terminated. +This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in terminated or completed state. It is blocking, meaning it is synchronous, so it must complete before the call to delete the container can be sent. No parameters are passed to the handler. 
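To make the hook wiring concrete, a `PreStop` handler is declared per container under `lifecycle`. The following is only an illustrative sketch; the Pod name, image, and drain command are assumptions rather than part of the change above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo           # illustrative name
spec:
  containers:
  - name: web
    image: nginx
    lifecycle:
      preStop:
        exec:
          # Ask the server to finish in-flight requests before the
          # container receives the termination signal.
          command: ["/bin/sh", "-c", "nginx -s quit; sleep 5"]
```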
diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md index 8885784e4c76f..6f22211cedc83 100644 --- a/content/en/docs/concepts/containers/images.md +++ b/content/en/docs/concepts/containers/images.md @@ -149,9 +149,9 @@ Once you have those variables filled in you can ### Using IBM Cloud Container Registry IBM Cloud Container Registry provides a multi-tenant private image registry that you can use to safely store and share your Docker images. By default, images in your private registry are scanned by the integrated Vulnerability Advisor to detect security issues and potential vulnerabilities. Users in your IBM Cloud account can access your images, or you can create a token to grant access to registry namespaces. -To install the IBM Cloud Container Registry CLI plug-in and create a namespace for your images, see [Getting started with IBM Cloud Container Registry](https://console.bluemix.net/docs/services/Registry/index.html#index). +To install the IBM Cloud Container Registry CLI plug-in and create a namespace for your images, see [Getting started with IBM Cloud Container Registry](https://cloud.ibm.com/docs/services/Registry?topic=registry-index#index). -You can use the IBM Cloud Container Registry to deploy containers from [IBM Cloud public images](https://console.bluemix.net/docs/services/RegistryImages/index.html#ibm_images) and your private images into the `default` namespace of your IBM Cloud Kubernetes Service cluster. To deploy a container into other namespaces, or to use an image from a different IBM Cloud Container Registry region or IBM Cloud account, create a Kubernetes `imagePullSecret`. For more information, see [Building containers from images](https://console.bluemix.net/docs/containers/cs_images.html#images). +You can use the IBM Cloud Container Registry to deploy containers from [IBM Cloud public images](https://cloud.ibm.com/docs/services/Registry?topic=registry-public_images#public_images) and your private images into the `default` namespace of your IBM Cloud Kubernetes Service cluster. To deploy a container into other namespaces, or to use an image from a different IBM Cloud Container Registry region or IBM Cloud account, create a Kubernetes `imagePullSecret`. For more information, see [Building containers from images](https://cloud.ibm.com/docs/containers?topic=containers-images#images). ### Configuring Nodes to Authenticate to a Private Registry @@ -205,7 +205,7 @@ example, run these on your desktop/laptop: Verify by creating a pod that uses a private image, e.g.: ```yaml -kubectl create -f - < ./kustomization.yaml +secretGenerator: +- name: myregistrykey + type: docker-registry + literals: + - docker-server=DOCKER_REGISTRY_SERVER + - docker-username=DOCKER_USER + - docker-password=DOCKER_PASSWORD + - docker-email=DOCKER_EMAIL +EOF + +kubectl apply -k . +secret/myregistrykey-66h7d4d986 created ``` -If you need access to multiple registries, you can create one secret for each registry. -Kubelet will merge any `imagePullSecrets` into a single virtual `.docker/config.json` -when pulling images for your Pods. +If you already have a Docker credentials file then, rather than using the above +command, you can import the credentials file as a Kubernetes secret. +[Create a Secret based on existing Docker credentials](/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials) explains how to set this up. 
+This is particularly useful if you are using multiple private container +registries, as `kubectl create secret docker-registry` creates a Secret that will +only work with a single private registry. +{{< note >}} Pods can only reference image pull secrets in their own namespace, so this process needs to be done one time per namespace. - -##### Bypassing kubectl create secrets - -If for some reason you need multiple items in a single `.docker/config.json` or need -control not given by the above command, then you can [create a secret using -json or yaml](/docs/user-guide/secrets/#creating-a-secret-manually). - -Be sure to: - -- set the name of the data item to `.dockerconfigjson` -- base64 encode the docker file and paste that string, unbroken - as the value for field `data[".dockerconfigjson"]` -- set `type` to `kubernetes.io/dockerconfigjson` - -Example: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: myregistrykey - namespace: awesomeapps -data: - .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== -type: kubernetes.io/dockerconfigjson -``` - -If you get the error message `error: no objects passed to create`, it may mean the base64 encoded string is invalid. -If you get an error message like `Secret "myregistrykey" is invalid: data[.dockerconfigjson]: invalid value ...`, it means -the data was successfully un-base64 encoded, but could not be parsed as a `.docker/config.json` file. +{{< /note >}} #### Referring to an imagePullSecrets on a Pod Now, you can create pods which reference that secret by adding an `imagePullSecrets` section to a pod definition. -```yaml +```shell +cat < pod.yaml apiVersion: v1 kind: Pod metadata: @@ -337,6 +324,12 @@ spec: image: janedoe/awesomeapp:v1 imagePullSecrets: - name: myregistrykey +EOF + +cat <> ./kustomization.yaml +resources: +- pod.yaml +EOF ``` This needs to be done for each pod that is using a private registry. @@ -377,3 +370,6 @@ common use cases and suggested solutions. - The tenant adds that secret to imagePullSecrets of each namespace. {{% /capture %}} + +If you need access to multiple registries, you can create one secret for each registry. +Kubelet will merge any `imagePullSecrets` into a single virtual `.docker/config.json` diff --git a/content/en/docs/concepts/containers/runtime-class.md b/content/en/docs/concepts/containers/runtime-class.md index 861ce41d9e4b6..c2ca8a830f537 100644 --- a/content/en/docs/concepts/containers/runtime-class.md +++ b/content/en/docs/concepts/containers/runtime-class.md @@ -9,10 +9,16 @@ weight: 20 {{% capture overview %}} -{{< feature-state for_k8s_version="v1.12" state="alpha" >}} +{{< feature-state for_k8s_version="v1.14" state="beta" >}} This page describes the RuntimeClass resource and runtime selection mechanism. +{{< warning >}} +RuntimeClass includes *breaking* changes in the beta upgrade in v1.14. If you were using +RuntimeClass prior to v1.14, see [Upgrading RuntimeClass from Alpha to +Beta](#upgrading-runtimeclass-from-alpha-to-beta). +{{< /warning >}} + {{% /capture %}} @@ -20,72 +26,51 @@ This page describes the RuntimeClass resource and runtime selection mechanism. ## Runtime Class -RuntimeClass is an alpha feature for selecting the container runtime configuration to use to run a -pod's containers. 
+RuntimeClass is a feature for selecting the container runtime configuration. The container runtime +configuration is used to run a Pod's containers. ### Set Up -As an early alpha feature, there are some additional setup steps that must be taken in order to use -the RuntimeClass feature: - -1. Enable the RuntimeClass feature gate (on apiservers & kubelets, requires version 1.12+) -2. Install the RuntimeClass CRD -3. Configure the CRI implementation on nodes (runtime dependent) -4. Create the corresponding RuntimeClass resources - -#### 1. Enable the RuntimeClass feature gate - -See [Feature Gates](/docs/reference/command-line-tools-reference/feature-gates/) for an explanation -of enabling feature gates. The `RuntimeClass` feature gate must be enabled on apiservers _and_ -kubelets. - -#### 2. Install the RuntimeClass CRD - -The RuntimeClass [CustomResourceDefinition][] (CRD) can be found in the addons directory of the -Kubernetes git repo: [kubernetes/cluster/addons/runtimeclass/runtimeclass_crd.yaml][runtimeclass_crd] - -Install the CRD with `kubectl apply -f runtimeclass_crd.yaml`. - -[CustomResourceDefinition]: /docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/ -[runtimeclass_crd]: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/runtimeclass/runtimeclass_crd.yaml +Ensure the RuntimeClass feature gate is enabled (it is by default). See [Feature +Gates](/docs/reference/command-line-tools-reference/feature-gates/) for an explanation of enabling +feature gates. The `RuntimeClass` feature gate must be enabled on apiservers _and_ kubelets. +1. Configure the CRI implementation on nodes (runtime dependent) +2. Create the corresponding RuntimeClass resources -#### 3. Configure the CRI implementation on nodes +#### 1. Configure the CRI implementation on nodes -The configurations to select between with RuntimeClass are CRI implementation dependent. See the -corresponding documentation for your CRI implementation for how to configure. As this is an alpha -feature, not all CRIs support multiple RuntimeClasses yet. +The configurations available through RuntimeClass are Container Runtime Interface (CRI) +implementation dependent. See the corresponding documentation ([below](#cri-documentation)) for your +CRI implementation for how to configure. {{< note >}} -RuntimeClass currently assumes a homogeneous node configuration across the cluster -(which means that all nodes are configured the same way with respect to container runtimes). Any heterogeneity (varying configurations) must be -managed independently of RuntimeClass through scheduling features (see [Assigning Pods to -Nodes](/docs/concepts/configuration/assign-pod-node/)). +RuntimeClass currently assumes a homogeneous node configuration across the cluster (which means that +all nodes are configured the same way with respect to container runtimes). Any heterogeneity +(varying configurations) must be managed independently of RuntimeClass through scheduling features +(see [Assigning Pods to Nodes](/docs/concepts/configuration/assign-pod-node/)). {{< /note >}} -The configurations have a corresponding `RuntimeHandler` name, referenced by the RuntimeClass. The -RuntimeHandler must be a valid DNS 1123 subdomain (alpha-numeric + `-` and `.` characters). +The configurations have a corresponding `handler` name, referenced by the RuntimeClass. The +handler must be a valid DNS 1123 label (alpha-numeric + `-` characters). -#### 4. Create the corresponding RuntimeClass resources +#### 2. 
Create the corresponding RuntimeClass resources -The configurations setup in step 3 should each have an associated `RuntimeHandler` name, which -identifies the configuration. For each RuntimeHandler (and optionally the empty `""` handler), -create a corresponding RuntimeClass object. +The configurations setup in step 1 should each have an associated `handler` name, which identifies +the configuration. For each handler, create a corresponding RuntimeClass object. The RuntimeClass resource currently only has 2 significant fields: the RuntimeClass name -(`metadata.name`) and the RuntimeHandler (`spec.runtimeHandler`). The object definition looks like this: +(`metadata.name`) and the handler (`handler`). The object definition looks like this: ```yaml -apiVersion: node.k8s.io/v1alpha1 # RuntimeClass is defined in the node.k8s.io API group +apiVersion: node.k8s.io/v1beta1 # RuntimeClass is defined in the node.k8s.io API group kind: RuntimeClass metadata: name: myclass # The name the RuntimeClass will be referenced by # RuntimeClass is a non-namespaced resource -spec: - runtimeHandler: myconfiguration # The name of the corresponding CRI configuration +handler: myconfiguration # The name of the corresponding CRI configuration ``` - {{< note >}} It is recommended that RuntimeClass write operations (create/update/patch/delete) be restricted to the cluster administrator. This is typically the default. See [Authorization @@ -116,4 +101,66 @@ error message. If no `runtimeClassName` is specified, the default RuntimeHandler will be used, which is equivalent to the behavior when the RuntimeClass feature is disabled. +### CRI Configuration + +For more details on setting up CRI runtimes, see [CRI installation](/docs/setup/cri/). + +#### dockershim + +Kubernetes built-in dockershim CRI does not support runtime handlers. + +#### [containerd](https://containerd.io/) + +Runtime handlers are configured through containerd's configuration at +`/etc/containerd/config.toml`. Valid handlers are configured under the runtimes section: + +``` +[plugins.cri.containerd.runtimes.${HANDLER_NAME}] +``` + +See containerd's config documentation for more details: +https://github.com/containerd/cri/blob/master/docs/config.md + +#### [cri-o](https://cri-o.io/) + +Runtime handlers are configured through cri-o's configuration at `/etc/crio/crio.conf`. Valid +handlers are configured under the [crio.runtime +table](https://github.com/kubernetes-sigs/cri-o/blob/master/docs/crio.conf.5.md#crioruntime-table): + +``` +[crio.runtime.runtimes.${HANDLER_NAME}] + runtime_path = "${PATH_TO_BINARY}" +``` + +See cri-o's config documentation for more details: +https://github.com/kubernetes-sigs/cri-o/blob/master/cmd/crio/config.go + + +### Upgrading RuntimeClass from Alpha to Beta + +The RuntimeClass Beta feature includes the following changes: + +- The `node.k8s.io` API group and `runtimeclasses.node.k8s.io` resource have been migrated to a + built-in API from a CustomResourceDefinition. +- The `spec` has been inlined in the RuntimeClass definition (i.e. there is no more + RuntimeClassSpec). +- The `runtimeHandler` field has been renamed `handler`. +- The `handler` field is now required in all API versions. This means the `runtimeHandler` field in + the Alpha API is also required. +- The `handler` field must be a valid DNS label ([RFC 1123](https://tools.ietf.org/html/rfc1123)), + meaning it can no longer contain `.` characters (in all versions). Valid handlers match the + following regular expression: `^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`. 
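For reference, the `myclass` example shown earlier on this page changes shape as follows when moving from the alpha API to the beta API (a sketch only; `myconfiguration` is the same placeholder handler name used above):

```yaml
# Alpha (pre-1.14) form: served from the runtimeclasses.node.k8s.io CRD
apiVersion: node.k8s.io/v1alpha1
kind: RuntimeClass
metadata:
  name: myclass
spec:
  runtimeHandler: myconfiguration
---
# Beta (v1.14+) form: built-in API, spec inlined, field renamed to handler
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: myclass
handler: myconfiguration
```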
+ +**Action Required:** The following actions are required to upgrade from the alpha version of the +RuntimeClass feature to the beta version: + +- RuntimeClass resources must be recreated *after* upgrading to v1.14, and the + `runtimeclasses.node.k8s.io` CRD should be manually deleted: + ``` + kubectl delete customresourcedefinitions.apiextensions.k8s.io runtimeclasses.node.k8s.io + ``` +- Alpha RuntimeClasses with an unspecified or empty `runtimeHandler` or those using a `.` character + in the handler are no longer valid, and must be migrated to a valid handler configuration (see + above). + {{% /capture %}} diff --git a/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md b/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md index e069d491c4f69..84f3c702097ee 100644 --- a/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md +++ b/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md @@ -9,40 +9,48 @@ weight: 20 {{% capture overview %}} -This page explains *custom resources*, which are extensions of the Kubernetes -API, including when to add a custom resource to your Kubernetes cluster and when -to use a standalone service. It describes the two methods for adding custom -resources and how to choose between them. +*Custom resources* are extensions of the Kubernetes API. This page discusses when to add a custom +resource to your Kubernetes cluster and when to use a standalone service. It describes the two +methods for adding custom resources and how to choose between them. {{% /capture %}} {{% capture body %}} ## Custom resources -A *resource* is an endpoint in the [Kubernetes API](/docs/reference/using-api/api-overview/) that stores a collection of [API objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/) of a certain kind. For example, the built-in *pods* resource contains a collection of Pod objects. +A *resource* is an endpoint in the [Kubernetes API](/docs/reference/using-api/api-overview/) that stores a collection of +[API objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/) of a certain kind. For example, the built-in *pods* resource contains a collection of Pod objects. -A *custom resource* is an extension of the Kubernetes API that is not necessarily available on every -Kubernetes cluster. -In other words, it represents a customization of a particular Kubernetes installation. +A *custom resource* is an extension of the Kubernetes API that is not necessarily available in a default +Kubernetes installation. It represents a customization of a particular Kubernetes installation. However, +many core Kubernetes functions are now built using custom resources, making Kubernetes more modular. Custom resources can appear and disappear in a running cluster through dynamic registration, and cluster admins can update custom resources independently of the cluster itself. -Once a custom resource is installed, users can create and access its objects with -[kubectl](/docs/user-guide/kubectl-overview/), just as they do for built-in resources like *pods*. +Once a custom resource is installed, users can create and access its objects using +[kubectl](/docs/user-guide/kubectl-overview/), just as they do for built-in resources like +*Pods*. -### Custom controllers +## Custom controllers On their own, custom resources simply let you store and retrieve structured data. -It is only when combined with a *controller* that they become a true declarative API. 
+When you combine a custom resource with a *custom controller*, custom resources +provide a true _declarative API_. + A [declarative API](/docs/concepts/overview/working-with-objects/kubernetes-objects/#understanding-kubernetes-objects) -allows you to _declare_ or specify the desired state of your resource and tries -to match the actual state to this desired state. -Here, the controller interprets the structured data as a record of the user's -desired state, and continually takes action to achieve and maintain this state. +allows you to _declare_ or specify the desired state of your resource and tries to +keep the current state of Kubernetes objects in sync with the desired state. +The controller interprets the structured data as a record of the user's +desired state, and continually maintains this state. -A *custom controller* is a controller that users can deploy and update on a running cluster, independently of the cluster's own lifecycle. Custom controllers can work with any kind of resource, but they are especially effective when combined with custom resources. The [Operator](https://coreos.com/blog/introducing-operators.html) pattern is one example of such a combination. It allows developers to encode domain knowledge for specific applications into an extension of the Kubernetes API. +You can deploy and update a custom controller on a running cluster, independently +of the cluster's own lifecycle. Custom controllers can work with any kind of resource, +but they are especially effective when combined with custom resources. The +[Operator pattern](https://coreos.com/blog/introducing-operators.html) combines custom +resources and custom controllers. You can use custom controllers to encode domain knowledge +for specific applications into an extension of the Kubernetes API. -### Should I add a custom resource to my Kubernetes Cluster? +## Should I add a custom resource to my Kubernetes Cluster? When creating a new API, consider whether to [aggregate your API with the Kubernetes cluster APIs](/docs/concepts/api-extension/apiserver-aggregation/) or let your API stand alone. @@ -56,7 +64,7 @@ When creating a new API, consider whether to [aggregate your API with the Kubern | Your resources are naturally scoped to a cluster or to namespaces of a cluster. | Cluster or namespace scoped resources are a poor fit; you need control over the specifics of resource paths. | | You want to reuse [Kubernetes API support features](#common-features). | You don't need those features. | -#### Declarative APIs +### Declarative APIs In a Declarative API, typically: @@ -80,7 +88,7 @@ Signs that your API might not be declarative include: - The API is not easily modeled as objects. - You chose to represent pending operations with an operation ID or an operation object. -### Should I use a configMap or a custom resource? +## Should I use a configMap or a custom resource? Use a ConfigMap if any of the following apply: @@ -126,13 +134,9 @@ This frees you from writing your own API server to handle the custom resource, but the generic nature of the implementation means you have less flexibility than with [API server aggregation](#api-server-aggregation). -Refer to the [Custom Controller example, which uses Custom Resources](https://github.com/kubernetes/sample-controller) -for a demonstration of how to register a new custom resource, work with instances of your new resource type, -and setup a controller to handle events. 
- -{{< note >}} -CRD is the successor to the deprecated *ThirdPartyResource* (TPR) API, and is available as of Kubernetes 1.7. -{{< /note >}} +Refer to the [custom controller example](https://github.com/kubernetes/sample-controller) +for an example of how to register a new custom resource, work with instances of your new resource type, +and use a controller to handle events. ## API server aggregation @@ -143,7 +147,7 @@ implementations for your custom resources by writing and deploying your own stan The main API server delegates requests to you for the custom resources that you handle, making them available to all of its clients. -### Choosing a method for adding custom resources +## Choosing a method for adding custom resources CRDs are easier to use. Aggregated APIs are more flexible. Choose the method that best meets your needs. @@ -152,7 +156,7 @@ Typically, CRDs are a good fit if: * You have a handful of fields * You are using the resource within your company, or as part of a small open-source project (as opposed to a commercial product) -#### Comparing ease of use +### Comparing ease of use CRDs are easier to create than Aggregated APIs. @@ -181,7 +185,7 @@ Aggregated APIs offer more advanced API features and customization of other feat | Protocol Buffers | The new resource supports clients that want to use Protocol Buffers | No | Yes | | OpenAPI Schema | Is there an OpenAPI (swagger) schema for the types that can be dynamically fetched from the server? Is the user protected from misspelling field names by ensuring only allowed fields are set? Are types enforced (in other words, don't put an `int` in a `string` field?) | No, but planned | Yes | -#### Common Features +### Common Features When you create a custom resource, either via a CRDs or an AA, you get many features for your API, compared to implementing it outside the Kubernetes platform: diff --git a/content/en/docs/concepts/overview/components.md b/content/en/docs/concepts/overview/components.md index c473a42df98a2..4373482ffcb3e 100644 --- a/content/en/docs/concepts/overview/components.md +++ b/content/en/docs/concepts/overview/components.md @@ -4,6 +4,9 @@ reviewers: title: Kubernetes Components content_template: templates/concept weight: 20 +card: + name: concepts + weight: 20 --- {{% capture overview %}} @@ -76,7 +79,8 @@ network rules on the host and performing connection forwarding. ### Container Runtime -The container runtime is the software that is responsible for running containers. Kubernetes supports several runtimes: [Docker](http://www.docker.com), [rkt](https://coreos.com/rkt/), [runc](https://github.com/opencontainers/runc) and any OCI [runtime-spec](https://github.com/opencontainers/runtime-spec) implementation. +The container runtime is the software that is responsible for running containers. +Kubernetes supports several runtimes: [Docker](http://www.docker.com), [containerd](https://containerd.io), [cri-o](https://cri-o.io/), [rktlet](https://github.com/kubernetes-incubator/rktlet) and any implementation of the [Kubernetes CRI (Container Runtime Interface)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md). 
## Addons diff --git a/content/en/docs/concepts/overview/kubernetes-api.md b/content/en/docs/concepts/overview/kubernetes-api.md index 179b471dcd891..2ef08ade264bb 100644 --- a/content/en/docs/concepts/overview/kubernetes-api.md +++ b/content/en/docs/concepts/overview/kubernetes-api.md @@ -4,6 +4,9 @@ reviewers: title: The Kubernetes API content_template: templates/concept weight: 30 +card: + name: concepts + weight: 30 --- {{% capture overview %}} diff --git a/content/en/docs/concepts/overview/object-management-kubectl/imperative-command.md b/content/en/docs/concepts/overview/object-management-kubectl/imperative-command.md index 38b194d2a4fbd..bc83bd6b03e95 100644 --- a/content/en/docs/concepts/overview/object-management-kubectl/imperative-command.md +++ b/content/en/docs/concepts/overview/object-management-kubectl/imperative-command.md @@ -73,7 +73,7 @@ that must be set: The `kubectl` command also supports update commands driven by an aspect of the object. Setting this aspect may set different fields for different object types: -- `set` : Set an aspect of an object. +- `set` ``: Set an aspect of an object. {{< note >}} In Kubernetes version 1.5, not every verb-driven command has an associated aspect-driven command. @@ -160,5 +160,3 @@ kubectl create --edit -f /tmp/srv.yaml - [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl/) - [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) {{% /capture %}} - - diff --git a/content/en/docs/concepts/overview/object-management-kubectl/kustomization.md b/content/en/docs/concepts/overview/object-management-kubectl/kustomization.md new file mode 100644 index 0000000000000..87ec48daea4f0 --- /dev/null +++ b/content/en/docs/concepts/overview/object-management-kubectl/kustomization.md @@ -0,0 +1,758 @@ +--- +title: Declarative Management of Kubernetes Objects Using Kustomize +content_template: templates/concept +weight: 40 +--- + +{{% capture overview %}} +[Kustomize](https://github.com/kubernetes-sigs/kustomize) is a standalone tool +to customize Kubernetes objects +through a [kustomization file](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/kustomization.yaml). +Since 1.14, Kubectl also +supports the management of Kubernetes objects using a kustomization file. +To view Resources found in a directory containing a kustomization file, run the following command: +```shell +kubectl kustomize +``` +To apply those Resources, run `kubectl apply` with `--kustomize` or `-k` flag: +```shell +kubectl apply -k +``` +{{% /capture %}} + +{{% capture body %}} + +## Overview of Kustomize +Kustomize is a tool for customizing Kubernetes configurations. It has the following features to manage application configuration files: + +* generating resources from other sources +* setting cross-cutting fields for resources +* composing and customizing collections of resources + +### Generating Resources +ConfigMap and Secret hold config or sensitive data that are used by other Kubernetes objects, such as Pods. The source +of truth of ConfigMap or Secret are usually from somewhere else, such as a `.properties` file or a ssh key file. +Kustomize has `secretGenerator` and `configMapGenerator`, which generate Secret and ConfigMap from files or literals. + + +#### configMapGenerator +To generate a ConfigMap from a file, add an entry to `files` list in `configMapGenerator`. Here is an example of generating a ConfigMap with a data item from a file content. 
+```shell +# Create a application.properties file +cat <application.properties +FOO=Bar +EOF + +cat <./kustomization.yaml +configMapGenerator: +- name: example-configmap-1 + files: + - application.properties +EOF +``` +The generated ConfigMap can be checked by the following command: +```shell +kubectl kustomize ./ +``` +The generated ConfigMap is +```yaml +apiVersion: v1 +data: + application.properties: | + FOO=Bar +kind: ConfigMap +metadata: + name: example-configmap-1-8mbdf7882g +``` + +ConfigMap can also be generated from literal key-value pairs. To generate a ConfigMap from a literal key-value pair, add an entry to `literals` list in configMapGenerator. Here is an example of generating a ConfigMap with a data item from a key-value pair. +```shell +cat <./kustomization.yaml +configMapGenerator: +- name: example-configmap-2 + literals: + - FOO=Bar +EOF +``` +The generated ConfigMap can be checked by the following command: +```shell +kubectl kustomize ./ +``` +The generated ConfigMap is +```yaml +apiVersion: v1 +data: + FOO: Bar +kind: ConfigMap +metadata: + name: example-configmap-2-g2hdhfc6tk +``` + +#### secretGenerator +Secret can also be generated from files or literal key-value pairs. To generate a Secret from a file, add an entry to `files` list in `secretGenerator`. Here is an example of generating a Secret with a data item from a file. +```shell +# Create a password.txt file +cat <./password.txt +username=admin +password=secret +EOF + +cat <./kustomization.yaml +secretGenerator: +- name: example-secret-1 + files: + - password.txt +EOF +``` +The generated Secret is as follows: +```yaml +apiVersion: v1 +data: + password.txt: dXNlcm5hbWU9YWRtaW4KcGFzc3dvcmQ9c2VjcmV0Cg== +kind: Secret +metadata: + name: example-secret-1-t2kt65hgtb +type: Opaque +``` +To generate a Secret from a literal key-value pair, add an entry to `literals` list in `secretGenerator`. Here is an example of generating a Secret with a data item from a key-value pair. +```shell +cat <./kustomization.yaml +secretGenerator: +- name: example-secret-2 + literals: + - username=admin + - password=secert +EOF +``` +The generated Secret is as follows: +```yaml +apiVersion: v1 +data: + password: c2VjZXJ0 + username: YWRtaW4= +kind: Secret +metadata: + name: example-secret-2-t52t6g96d8 +type: Opaque +``` + +#### generatorOptions +The generated ConfigMaps and Secrets have a suffix appended by hashing the contents. This ensures that a new ConfigMap or Secret is generated when the content is changed. To disable the behavior of appending a suffix, one can use `generatorOptions`. Besides that, it is also possible to specify cross-cutting options for generated ConfigMaps and Secrets. +```shell +cat <./kustomization.yaml +configMapGenerator: +- name: example-configmap-3 + literals: + - FOO=Bar +generatorOptions: + disableNameSuffixHash: true + labels: + type: generated + annotations: + note: generated +EOF +``` +Run`kubectl kustomize ./` to view the generated ConfigMap: +```yaml +apiVersion: v1 +data: + FOO: Bar +kind: ConfigMap +metadata: + annotations: + note: generated + labels: + type: generated + name: example-configmap-3 +``` + +### Setting cross-cutting fields +It is quite common to set cross-cutting fields for all Kubernetes resources in a project. 
+Some use cases for setting cross-cutting fields: + +* setting the same namespace for all Resource +* adding the same name prefix or suffix +* adding the same set of labels +* adding the same set of annotations + +Here is an example: +```shell +# Create a deployment.yaml +cat <./deployment.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + labels: + app: nginx +spec: + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx +EOF + +cat <./kustomization.yaml +namespace: my-namespace +namePrefix: dev- +nameSuffix: "-001" +commonLabels: + app: bingo +commonAnnotations: + oncallPager: 800-555-1212 +resources: +- deployment.yaml +EOF +``` +Run `kubectl kustomize ./` to view those fields are all set in the Deployment Resource: +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + annotations: + oncallPager: 800-555-1212 + labels: + app: bingo + name: dev-nginx-deployment-001 + namespace: my-namespace +spec: + selector: + matchLabels: + app: bingo + template: + metadata: + annotations: + oncallPager: 800-555-1212 + labels: + app: bingo + spec: + containers: + - image: nginx + name: nginx +``` + +### Composing and Customizing Resources +It is common to compose a set of Resources in a project and manage them inside +the same file or directory. +Kustomize offers composing Resources from different files and applying patches or other customization to them. + +#### Composing +Kustomize supports composition of different resources. The `resources` field, in the `kustomization.yaml` file, defines the list of resources to include in a configuration. Set the path to a resource's configuration file in the `resources` list. +Here is an example for an nginx application with a Deployment and a Service. +```shell +# Create a deployment.yaml file +cat < deployment.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: my-nginx +spec: + selector: + matchLabels: + run: my-nginx + replicas: 2 + template: + metadata: + labels: + run: my-nginx + spec: + containers: + - name: my-nginx + image: nginx + ports: + - containerPort: 80 +EOF + +# Create a service.yaml file +cat < service.yaml +apiVersion: v1 +kind: Service +metadata: + name: my-nginx + labels: + run: my-nginx +spec: + ports: + - port: 80 + protocol: TCP + selector: + run: my-nginx +EOF + +# Create a kustomization.yaml composing them +cat <./kustomization.yaml +resources: +- deployment.yaml +- service.yaml +EOF +``` +The Resources from `kubectl kustomize ./` contains both the Deployment and the Service objects. + +#### Customizing +On top of Resources, one can apply different customizations by applying patches. Kustomize supports different patching +mechanisms through `patchesStrategicMerge` and `patchesJson6902`. `patchesStrategicMerge` is a list of file paths. Each file should be resolved to a [strategic merge patch](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md). The names inside the patches must match Resource names that are already loaded. Small patches that do one thing are recommended. For example, create one patch for increasing the deployment replica number and another patch for setting the memory limit. 
+```shell +# Create a deployment.yaml file +cat < deployment.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: my-nginx +spec: + selector: + matchLabels: + run: my-nginx + replicas: 2 + template: + metadata: + labels: + run: my-nginx + spec: + containers: + - name: my-nginx + image: nginx + ports: + - containerPort: 80 +EOF + +# Create a patch increase_replicas.yaml +cat < increase_replicas.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: my-nginx +spec: + replicas: 3 +EOF + +# Create another patch set_memory.yaml +cat < set_memory.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: my-nginx +spec: + template: + spec: + containers: + - name: my-nginx + resources: + limits: + memory: 512Mi +EOF + +cat <./kustomization.yaml +resources: +- deployment.yaml +patchesStrategicMerge: +- increase_replicas.yaml +- set_memory.yaml +EOF +``` +Run `kubectl kustomize ./` to view the Deployment: +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: my-nginx +spec: + replicas: 3 + selector: + matchLabels: + run: my-nginx + template: + metadata: + labels: + run: my-nginx + spec: + containers: + - image: nginx + limits: + memory: 512Mi + name: my-nginx + ports: + - containerPort: 80 +``` +Not all Resources or fields support strategic merge patches. To support modifying arbitrary fields in arbitrary Resources, +Kustomize offers applying [JSON patch](https://tools.ietf.org/html/rfc6902) through `patchesJson6902`. +To find the correct Resource for a Json patch, the group, version, kind and name of that Resource need to be +specified in `kustomization.yaml`. For example, increasing the replica number of a Deployment object can also be done +through `patchesJson6902`. +```shell +# Create a deployment.yaml file +cat < deployment.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: my-nginx +spec: + selector: + matchLabels: + run: my-nginx + replicas: 2 + template: + metadata: + labels: + run: my-nginx + spec: + containers: + - name: my-nginx + image: nginx + ports: + - containerPort: 80 +EOF + +# Create a json patch +cat < patch.yaml +- op: replace + path: /spec/replicas + value: 3 +EOF + +# Create a kustomization.yaml +cat <./kustomization.yaml +resources: +- deployment.yaml + +patchesJson6902: +- target: + group: apps + version: v1 + kind: Deployment + name: my-nginx + path: patch.yaml +EOF +``` +Run `kubectl kustomize ./` to see the `replicas` field is updated: +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: my-nginx +spec: + replicas: 3 + selector: + matchLabels: + run: my-nginx + template: + metadata: + labels: + run: my-nginx + spec: + containers: + - image: nginx + name: my-nginx + ports: + - containerPort: 80 +``` +In addition to patches, Kustomize also offers customizing container images or injecting field values from other objects into containers +without creating patches. For example, you can change the image used inside containers by specifying the new image in `images` field in `kustomization.yaml`. 
+```shell +cat < deployment.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: my-nginx +spec: + selector: + matchLabels: + run: my-nginx + replicas: 2 + template: + metadata: + labels: + run: my-nginx + spec: + containers: + - name: my-nginx + image: nginx + ports: + - containerPort: 80 +EOF + +cat <./kustomization.yaml +resources: +- deployment.yaml +images: +- name: nginx + newName: my.image.registry/nginx + newTag: 1.4.0 +EOF +``` +Run `kubectl kustomize ./` to see that the image being used is updated: +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: my-nginx +spec: + replicas: 2 + selector: + matchLabels: + run: my-nginx + template: + metadata: + labels: + run: my-nginx + spec: + containers: + - image: my.image.registry/nginx:1.4.0 + name: my-nginx + ports: + - containerPort: 80 +``` +Sometimes, the application running in a Pod may need to use configuration values from other objects. For example, +a Pod from a Deployment object need to read the corresponding Service name from Env or as a command argument. +Since the Service name may change as `namePrefix` or `nameSuffix` is added in the `kustomization.yaml` file. It is +not recommended to hard code the Service name in the command argument. For this usage, Kustomize can inject the Service name into containers through `vars`. + +```shell +# Create a deployment.yaml file +cat < deployment.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: my-nginx +spec: + selector: + matchLabels: + run: my-nginx + replicas: 2 + template: + metadata: + labels: + run: my-nginx + spec: + containers: + - name: my-nginx + image: nginx + command: ["start", "--host", "\$(MY_SERVICE_NAME)"] +EOF + +# Create a service.yaml file +cat < service.yaml +apiVersion: v1 +kind: Service +metadata: + name: my-nginx + labels: + run: my-nginx +spec: + ports: + - port: 80 + protocol: TCP + selector: + run: my-nginx +EOF + +cat <./kustomization.yaml +namePrefix: dev- +nameSuffix: "-001" + +resources: +- deployment.yaml +- service.yaml + +vars: +- name: MY_SERVICE_NAME + objref: + kind: Service + name: my-nginx + apiVersion: v1 +EOF +``` +Run `kubectl kustomize ./` to see that the Service name injected into containers is `dev-my-nginx-001`: +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: dev-my-nginx-001 +spec: + replicas: 2 + selector: + matchLabels: + run: my-nginx + template: + metadata: + labels: + run: my-nginx + spec: + containers: + - command: + - start + - --host + - dev-my-nginx-001 + image: nginx + name: my-nginx +``` + +## Bases and Overlays +Kustomize has the concepts of **bases** and **overlays**. A **base** is a directory with a `kustomization.yaml`, which contains a +set of resources and associated customization. A base could be either a local directory or a directory from a remote repo, +as long as a `kustomization.yaml` is present inside. An **overlay** is a directory with a `kustomization.yaml` that refers to other +kustomization directories as its `bases`. A **base** has no knowledge of an overlay and can be used in multiple overlays. +An overlay may have multiple bases and it composes all resources +from bases and may also have customization on top of them. + +Here is an example of a base. 
+```shell
+# Create a directory to hold the base
+mkdir base
+# Create a base/deployment.yaml
+cat <<EOF > base/deployment.yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: my-nginx
+spec:
+  selector:
+    matchLabels:
+      run: my-nginx
+  replicas: 2
+  template:
+    metadata:
+      labels:
+        run: my-nginx
+    spec:
+      containers:
+      - name: my-nginx
+        image: nginx
+EOF
+
+# Create a base/service.yaml file
+cat <<EOF > base/service.yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-nginx
+  labels:
+    run: my-nginx
+spec:
+  ports:
+  - port: 80
+    protocol: TCP
+  selector:
+    run: my-nginx
+EOF
+# Create a base/kustomization.yaml
+cat <<EOF > base/kustomization.yaml
+resources:
+- deployment.yaml
+- service.yaml
+EOF
+```
+This base can be used in multiple overlays. You can add different `namePrefix` or other cross-cutting fields
+in different overlays. Here are two overlays using the same base.
+```shell
+mkdir dev
+cat <<EOF > dev/kustomization.yaml
+bases:
+- ../base
+namePrefix: dev-
+EOF
+
+mkdir prod
+cat <<EOF > prod/kustomization.yaml
+bases:
+- ../base
+namePrefix: prod-
+EOF
+```
+
+## How to apply/view/delete objects using Kustomize
+Use `--kustomize` or `-k` in `kubectl` commands to recognize Resources managed by `kustomization.yaml`.
+Note that `-k` should point to a kustomization directory, such as
+
+```shell
+kubectl apply -k <kustomization directory>/
+```
+Given the following `kustomization.yaml`,
+```shell
+# Create a deployment.yaml file
+cat <<EOF > deployment.yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: my-nginx
+spec:
+  selector:
+    matchLabels:
+      run: my-nginx
+  replicas: 2
+  template:
+    metadata:
+      labels:
+        run: my-nginx
+    spec:
+      containers:
+      - name: my-nginx
+        image: nginx
+        ports:
+        - containerPort: 80
+EOF
+
+# Create a kustomization.yaml
+cat <<EOF >./kustomization.yaml
+namePrefix: dev-
+commonLabels:
+  app: my-nginx
+resources:
+- deployment.yaml
+EOF
+```
+Running the following command will apply the Deployment object `dev-my-nginx`:
+```shell
+> kubectl apply -k ./
+deployment.apps/dev-my-nginx created
+```
+Running the following command will get the Deployment object `dev-my-nginx`:
+```shell
+kubectl get -k ./
+```
+or
+```shell
+kubectl describe -k ./
+```
+Running the following command will delete the Deployment object `dev-my-nginx`:
+```shell
+> kubectl delete -k ./
+deployment.apps "dev-my-nginx" deleted
+```
+
+## Kustomize Feature List
+Here is a list of all the features in Kustomize.
+
+| Field                 | Type                                                                                                           | Explanation                                                                          |
+|-----------------------|----------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------|
+| namespace             | string                                                                                                         | add namespace to all resources                                                       |
+| namePrefix            | string                                                                                                         | value of this field is prepended to the names of all resources                      |
+| nameSuffix            | string                                                                                                         | value of this field is appended to the names of all resources                       |
+| commonLabels          | map[string]string                                                                                              | labels to add to all resources and selectors                                        |
+| commonAnnotations     | map[string]string                                                                                              | annotations to add to all resources                                                 |
+| resources             | []string                                                                                                       | each entry in this list must resolve to an existing resource configuration file     |
+| configmapGenerator    | [][ConfigMapArgs](https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/types/kustomization.go#L195)    | Each entry in this list generates a ConfigMap                                       |
+| secretGenerator       | [][SecretArgs](https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/types/kustomization.go#L201)       | Each entry in this list generates a Secret                                          |
+| generatorOptions      | [GeneratorOptions](https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/types/kustomization.go#L239)   | Modify behaviors of all ConfigMap and Secret generators                             |
+| bases                 | []string                                                                                                       | Each entry in this list should resolve to a directory containing a kustomization.yaml file |
+| patchesStrategicMerge | []string                                                                                                       | Each entry in this list should resolve to a strategic merge patch of a Kubernetes object |
+| patchesJson6902       | [][Json6902](https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/patch/json6902.go#L23)               | Each entry in this list should resolve to a Kubernetes object and a JSON patch      |
+| vars                  | [][Var](https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/types/var.go#L31)                         | Each entry captures text from one resource's field                                  |
+| images                | [][Image](https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/image/image.go#L23)                     | Each entry modifies the name, tags and/or digest for one image without creating patches |
+| configurations        | []string                                                                                                       | Each entry in this list should resolve to a file containing [Kustomize transformer configurations](https://github.com/kubernetes-sigs/kustomize/tree/master/examples/transformerconfigs) |
+| crds                  | []string                                                                                                       | Each entry in this list should resolve to an OpenAPI definition file for Kubernetes types |
+
+
+{{% capture whatsnext %}}
+- [Kustomize](https://github.com/kubernetes-sigs/kustomize)
+- [Kubectl Book](https://kubectl.docs.kubernetes.io)
+- [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl/)
+- [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
+{{% /capture %}}
diff --git a/content/en/docs/concepts/overview/object-management-kubectl/overview.md b/content/en/docs/concepts/overview/object-management-kubectl/overview.md
index 3987da4a71f00..723df86818c01 100644
--- a/content/en/docs/concepts/overview/object-management-kubectl/overview.md
+++ b/content/en/docs/concepts/overview/object-management-kubectl/overview.md
@@ -7,7 +7,8 @@ weight: 10
 {{% capture overview %}}
 The `kubectl` command-line tool supports several different ways to create and manage
 Kubernetes objects. This document provides an overview of the different
-approaches.
+approaches. Read the [Kubectl book](https://kubectl.docs.kubernetes.io) for
+details of managing objects with Kubectl.
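+
+For example, the same nginx Deployment could be managed imperatively or declaratively.
+This is only an illustrative sketch; the file name `nginx-deployment.yaml` is a placeholder
+for a manifest you maintain yourself:
+
+```shell
+# Imperative command: create the object directly from command-line arguments
+kubectl create deployment nginx --image=nginx
+
+# Declarative object configuration: record the desired state in a file, then apply it
+kubectl apply -f nginx-deployment.yaml
+```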
{{% /capture %}} {{% capture body %}} @@ -179,6 +180,7 @@ Disadvantages compared to imperative object configuration: - [Managing Kubernetes Objects Using Object Configuration (Imperative)](/docs/concepts/overview/object-management-kubectl/imperative-config/) - [Managing Kubernetes Objects Using Object Configuration (Declarative)](/docs/concepts/overview/object-management-kubectl/declarative-config/) - [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl-commands/) +- [Kubectl Book](https://kubectl.docs.kubernetes.io) - [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) {{< comment >}} diff --git a/content/en/docs/concepts/overview/what-is-kubernetes.md b/content/en/docs/concepts/overview/what-is-kubernetes.md index 014d4945c03f4..6bfd4404f0cae 100644 --- a/content/en/docs/concepts/overview/what-is-kubernetes.md +++ b/content/en/docs/concepts/overview/what-is-kubernetes.md @@ -5,6 +5,9 @@ reviewers: title: What is Kubernetes? content_template: templates/concept weight: 10 +card: + name: concepts + weight: 10 --- {{% capture overview %}} diff --git a/content/en/docs/concepts/overview/working-with-objects/field-selectors.md b/content/en/docs/concepts/overview/working-with-objects/field-selectors.md index 243eecce24d78..637af3ad92981 100644 --- a/content/en/docs/concepts/overview/working-with-objects/field-selectors.md +++ b/content/en/docs/concepts/overview/working-with-objects/field-selectors.md @@ -12,15 +12,15 @@ _Field selectors_ let you [select Kubernetes resources](/docs/concepts/overview/ This `kubectl` command selects all Pods for which the value of the [`status.phase`](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) field is `Running`: ```shell -$ kubectl get pods --field-selector status.phase=Running +kubectl get pods --field-selector status.phase=Running ``` {{< note >}} Field selectors are essentially resource *filters*. By default, no selectors/filters are applied, meaning that all resources of the specified type are selected. This makes the following `kubectl` queries equivalent: ```shell -$ kubectl get pods -$ kubectl get pods --field-selector "" +kubectl get pods +kubectl get pods --field-selector "" ``` {{< /note >}} @@ -29,7 +29,9 @@ $ kubectl get pods --field-selector "" Supported field selectors vary by Kubernetes resource type. All resource types support the `metadata.name` and `metadata.namespace` fields. Using unsupported field selectors produces an error. For example: ```shell -$ kubectl get ingress --field-selector foo.bar=baz +kubectl get ingress --field-selector foo.bar=baz +``` +``` Error from server (BadRequest): Unable to find "ingresses" that match label selector "", field selector "foo.bar=baz": "foo.bar" is not a known field selector: only "metadata.name", "metadata.namespace" ``` @@ -38,7 +40,7 @@ Error from server (BadRequest): Unable to find "ingresses" that match label sele You can use the `=`, `==`, and `!=` operators with field selectors (`=` and `==` mean the same thing). This `kubectl` command, for example, selects all Kubernetes Services that aren't in the `default` namespace: ```shell -$ kubectl get services --field-selector metadata.namespace!=default +kubectl get services --field-selector metadata.namespace!=default ``` ## Chained selectors @@ -46,7 +48,7 @@ $ kubectl get services --field-selector metadata.namespace!=default As with [label](/docs/concepts/overview/working-with-objects/labels) and other selectors, field selectors can be chained together as a comma-separated list. 
This `kubectl` command selects all Pods for which the `status.phase` does not equal `Running` and the `spec.restartPolicy` field equals `Always`: ```shell -$ kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always +kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always ``` ## Multiple resource types @@ -54,5 +56,5 @@ $ kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Alw You use field selectors across multiple resource types. This `kubectl` command selects all Statefulsets and Services that are not in the `default` namespace: ```shell -$ kubectl get statefulsets,services --field-selector metadata.namespace!=default +kubectl get statefulsets,services --field-selector metadata.namespace!=default ``` diff --git a/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md index ae529e39bcfe7..fec01aeb8e54f 100644 --- a/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md +++ b/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md @@ -2,6 +2,9 @@ title: Understanding Kubernetes Objects content_template: templates/concept weight: 10 +card: + name: concepts + weight: 40 --- {{% capture overview %}} @@ -28,7 +31,7 @@ Every Kubernetes object includes two nested object fields that govern the object For example, a Kubernetes Deployment is an object that can represent an application running on your cluster. When you create the Deployment, you might set the Deployment spec to specify that you want three replicas of the application to be running. The Kubernetes system reads the Deployment spec and starts three instances of your desired application--updating the status to match your spec. If any of those instances should fail (a status change), the Kubernetes system responds to the difference between spec and status by making a correction--in this case, starting a replacement instance. -For more information on the object spec, status, and metadata, see the [Kubernetes API Conventions](https://git.k8s.io/community/contributors/devel/api-conventions.md). +For more information on the object spec, status, and metadata, see the [Kubernetes API Conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md). ### Describing a Kubernetes Object @@ -39,11 +42,11 @@ Here's an example `.yaml` file that shows the required fields and object spec fo {{< codenew file="application/deployment.yaml" >}} One way to create a Deployment using a `.yaml` file like the one above is to use the -[`kubectl create`](/docs/reference/generated/kubectl/kubectl-commands#create) command +[`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply) command in the `kubectl` command-line interface, passing the `.yaml` file as an argument. 
Here's an example: ```shell -$ kubectl create -f https://k8s.io/examples/application/deployment.yaml --record +kubectl apply -f https://k8s.io/examples/application/deployment.yaml --record ``` The output is similar to this: diff --git a/content/en/docs/concepts/overview/working-with-objects/labels.md b/content/en/docs/concepts/overview/working-with-objects/labels.md index d737ef6f64516..d2858af8dbf55 100644 --- a/content/en/docs/concepts/overview/working-with-objects/labels.md +++ b/content/en/docs/concepts/overview/working-with-objects/labels.md @@ -139,25 +139,25 @@ LIST and WATCH operations may specify label selectors to filter the sets of obje Both label selector styles can be used to list or watch resources via a REST client. For example, targeting `apiserver` with `kubectl` and using _equality-based_ one may write: ```shell -$ kubectl get pods -l environment=production,tier=frontend +kubectl get pods -l environment=production,tier=frontend ``` or using _set-based_ requirements: ```shell -$ kubectl get pods -l 'environment in (production),tier in (frontend)' +kubectl get pods -l 'environment in (production),tier in (frontend)' ``` As already mentioned _set-based_ requirements are more expressive.  For instance, they can implement the _OR_ operator on values: ```shell -$ kubectl get pods -l 'environment in (production, qa)' +kubectl get pods -l 'environment in (production, qa)' ``` or restricting negative matching via _exists_ operator: ```shell -$ kubectl get pods -l 'environment,environment notin (frontend)' +kubectl get pods -l 'environment,environment notin (frontend)' ``` ### Set references in API objects diff --git a/content/en/docs/concepts/overview/working-with-objects/namespaces.md b/content/en/docs/concepts/overview/working-with-objects/namespaces.md index eb10f1067b35d..862ae2adc606a 100644 --- a/content/en/docs/concepts/overview/working-with-objects/namespaces.md +++ b/content/en/docs/concepts/overview/working-with-objects/namespaces.md @@ -46,7 +46,9 @@ for namespaces](/docs/admin/namespaces). You can list the current namespaces in a cluster using: ```shell -$ kubectl get namespaces +kubectl get namespaces +``` +``` NAME STATUS AGE default Active 1d kube-system Active 1d @@ -66,8 +68,8 @@ To temporarily set the namespace for a request, use the `--namespace` flag. For example: ```shell -$ kubectl --namespace= run nginx --image=nginx -$ kubectl --namespace= get pods +kubectl --namespace= run nginx --image=nginx +kubectl --namespace= get pods ``` ### Setting the namespace preference @@ -76,9 +78,9 @@ You can permanently save the namespace for all subsequent kubectl commands in th context. 
```shell -$ kubectl config set-context $(kubectl config current-context) --namespace= +kubectl config set-context $(kubectl config current-context) --namespace= # Validate it -$ kubectl config view | grep namespace: +kubectl config view | grep namespace: ``` ## Namespaces and DNS @@ -101,10 +103,10 @@ To see which Kubernetes resources are and aren't in a namespace: ```shell # In a namespace -$ kubectl api-resources --namespaced=true +kubectl api-resources --namespaced=true # Not in a namespace -$ kubectl api-resources --namespaced=false +kubectl api-resources --namespaced=false ``` {{% /capture %}} diff --git a/content/en/docs/concepts/policy/pod-security-policy.md b/content/en/docs/concepts/policy/pod-security-policy.md index 4f1c658aa81d6..1e796586accd9 100644 --- a/content/en/docs/concepts/policy/pod-security-policy.md +++ b/content/en/docs/concepts/policy/pod-security-policy.md @@ -41,7 +41,7 @@ administrator to control the following: | Restricting escalation to root privileges | [`allowPrivilegeEscalation`, `defaultAllowPrivilegeEscalation`](#privilege-escalation) | | Linux capabilities | [`defaultAddCapabilities`, `requiredDropCapabilities`, `allowedCapabilities`](#capabilities) | | The SELinux context of the container | [`seLinux`](#selinux) | -| The Allowed Proc Mount types for the container | [`allowedProcMountTypes`](#allowedProcMountTypes) | +| The Allowed Proc Mount types for the container | [`allowedProcMountTypes`](#allowedprocmounttypes) | | The AppArmor profile used by containers | [annotations](#apparmor) | | The seccomp profile used by containers | [annotations](#seccomp) | | The sysctl profile used by containers | [annotations](#sysctl) | @@ -336,7 +336,6 @@ pause-7774d79b5-qrgcb 0/1 Pending 0 1s pause-7774d79b5-qrgcb 0/1 Pending 0 1s pause-7774d79b5-qrgcb 0/1 ContainerCreating 0 1s pause-7774d79b5-qrgcb 1/1 Running 0 2s -^C ``` ### Clean up @@ -465,7 +464,7 @@ Please make sure [`volumes`](#volumes-and-file-systems) field contains the For example: ```yaml -apiVersion: extensions/v1beta1 +apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: name: allow-flex-volumes @@ -610,6 +609,6 @@ default cannot be changed. ### Sysctl Controlled via annotations on the PodSecurityPolicy. Refer to the [Sysctl documentation]( -/docs/concepts/cluster-administration/sysctl-cluster/#podsecuritypolicy-annotations). +/docs/concepts/cluster-administration/sysctl-cluster/#podsecuritypolicy). {{% /capture %}} diff --git a/content/en/docs/concepts/services-networking/connect-applications-service.md b/content/en/docs/concepts/services-networking/connect-applications-service.md index ae0160ad9fb3b..073e83abdd185 100644 --- a/content/en/docs/concepts/services-networking/connect-applications-service.md +++ b/content/en/docs/concepts/services-networking/connect-applications-service.md @@ -17,7 +17,7 @@ Now that you have a continuously running, replicated application you can expose By default, Docker uses host-private networking, so containers can talk to other containers only if they are on the same machine. In order for Docker containers to communicate across nodes, there must be allocated ports on the machine’s own IP address, which are then forwarded or proxied to the containers. This obviously means that containers must either coordinate which ports they use very carefully or ports must be allocated dynamically. -Coordinating ports across multiple developers is very difficult to do at scale and exposes users to cluster-level issues outside of their control. 
Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. We give every pod its own cluster-private-IP address so you do not need to explicitly create links between pods or mapping container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document will elaborate on how you can run reliable services on such a networking model. +Coordinating ports across multiple developers is very difficult to do at scale and exposes users to cluster-level issues outside of their control. Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. We give every pod its own cluster-private-IP address so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document will elaborate on how you can run reliable services on such a networking model. This guide uses a simple nginx server to demonstrate proof of concept. The same principles are embodied in a more complete [Jenkins CI application](https://kubernetes.io/blog/2015/07/strong-simple-ssl-for-kubernetes). @@ -35,8 +35,10 @@ Create an nginx Pod, and note that it has a container port specification: This makes it accessible from any node in your cluster. Check the nodes the Pod is running on: ```shell -$ kubectl create -f ./run-my-nginx.yaml -$ kubectl get pods -l run=my-nginx -o wide +kubectl apply -f ./run-my-nginx.yaml +kubectl get pods -l run=my-nginx -o wide +``` +``` NAME READY STATUS RESTARTS AGE IP NODE my-nginx-3800858182-jr4a2 1/1 Running 0 13s 10.244.3.4 kubernetes-minion-905m my-nginx-3800858182-kna2y 1/1 Running 0 13s 10.244.2.5 kubernetes-minion-ljyd @@ -45,7 +47,7 @@ my-nginx-3800858182-kna2y 1/1 Running 0 13s 10.244.2.5 Check your pods' IPs: ```shell -$ kubectl get pods -l run=my-nginx -o yaml | grep podIP +kubectl get pods -l run=my-nginx -o yaml | grep podIP podIP: 10.244.3.4 podIP: 10.244.2.5 ``` @@ -63,11 +65,13 @@ A Kubernetes Service is an abstraction which defines a logical set of Pods runni You can create a Service for your 2 nginx replicas with `kubectl expose`: ```shell -$ kubectl expose deployment/my-nginx +kubectl expose deployment/my-nginx +``` +``` service/my-nginx exposed ``` -This is equivalent to `kubectl create -f` the following yaml: +This is equivalent to `kubectl apply -f` the following yaml: {{< codenew file="service/networking/nginx-svc.yaml" >}} @@ -81,7 +85,9 @@ API object to see the list of supported fields in service definition. Check your Service: ```shell -$ kubectl get svc my-nginx +kubectl get svc my-nginx +``` +``` NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-nginx ClusterIP 10.0.162.149 80/TCP 21s ``` @@ -95,7 +101,9 @@ Check the endpoints, and note that the IPs are the same as the Pods created in the first step: ```shell -$ kubectl describe svc my-nginx +kubectl describe svc my-nginx +``` +``` Name: my-nginx Namespace: default Labels: run=my-nginx @@ -107,8 +115,11 @@ Port: 80/TCP Endpoints: 10.244.2.5:80,10.244.3.4:80 Session Affinity: None Events: - -$ kubectl get ep my-nginx +``` +```shell +kubectl get ep my-nginx +``` +``` NAME ENDPOINTS AGE my-nginx 10.244.2.5:80,10.244.3.4:80 1m ``` @@ -131,7 +142,9 @@ each active Service. This introduces an ordering problem. 
To see why, inspect the environment of your running nginx Pods (your Pod name will be different): ```shell -$ kubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE +kubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE +``` +``` KUBERNETES_SERVICE_HOST=10.0.0.1 KUBERNETES_SERVICE_PORT=443 KUBERNETES_SERVICE_PORT_HTTPS=443 @@ -147,9 +160,11 @@ replicas. This will give you scheduler-level Service spreading of your Pods variables: ```shell -$ kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2; +kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2; -$ kubectl get pods -l run=my-nginx -o wide +kubectl get pods -l run=my-nginx -o wide +``` +``` NAME READY STATUS RESTARTS AGE IP NODE my-nginx-3800858182-e9ihh 1/1 Running 0 5s 10.244.2.7 kubernetes-minion-ljyd my-nginx-3800858182-j4rm4 1/1 Running 0 5s 10.244.3.8 kubernetes-minion-905m @@ -158,7 +173,9 @@ my-nginx-3800858182-j4rm4 1/1 Running 0 5s 10.244.3.8 You may notice that the pods have different names, since they are killed and recreated. ```shell -$ kubectl exec my-nginx-3800858182-e9ihh -- printenv | grep SERVICE +kubectl exec my-nginx-3800858182-e9ihh -- printenv | grep SERVICE +``` +``` KUBERNETES_SERVICE_PORT=443 MY_NGINX_SERVICE_HOST=10.0.162.149 KUBERNETES_SERVICE_HOST=10.0.0.1 @@ -171,19 +188,23 @@ KUBERNETES_SERVICE_PORT_HTTPS=443 Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. You can check if it's running on your cluster: ```shell -$ kubectl get services kube-dns --namespace=kube-system +kubectl get services kube-dns --namespace=kube-system +``` +``` NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kube-dns ClusterIP 10.0.0.10 53/UDP,53/TCP 8m ``` -If it isn't running, you can [enable it](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/README.md#how-do-i-configure-it). +If it isn't running, you can [enable it](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/kube-dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long lived IP (my-nginx), and a DNS server that has assigned a name to that IP (the CoreDNS cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let's run another curl application to test this: ```shell -$ kubectl run curl --image=radial/busyboxplus:curl -i --tty +kubectl run curl --image=radial/busyboxplus:curl -i --tty +``` +``` Waiting for pod default/curl-131556218-9fnch to be running, status is Pending, pod ready: false Hit enter for command prompt ``` @@ -210,10 +231,16 @@ Till now we have only accessed the nginx server from within the cluster. Before You can acquire all these from the [nginx https example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/https-nginx/). This requires having go and make tools installed. If you don't want to install those, then follow the manual steps later. 
In short: ```shell -$ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json -$ kubectl create -f /tmp/secret.json +make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json +kubectl apply -f /tmp/secret.json +``` +``` secret/nginxsecret created -$ kubectl get secrets +``` +```shell +kubectl get secrets +``` +``` NAME TYPE DATA AGE default-token-il9rc kubernetes.io/service-account-token 1 1d nginxsecret Opaque 2 1m @@ -242,8 +269,10 @@ data: Now create the secrets using the file: ```shell -$ kubectl create -f nginxsecrets.yaml -$ kubectl get secrets +kubectl apply -f nginxsecrets.yaml +kubectl get secrets +``` +``` NAME TYPE DATA AGE default-token-il9rc kubernetes.io/service-account-token 1 1d nginxsecret Opaque 2 1m @@ -263,13 +292,13 @@ Noteworthy points about the nginx-secure-app manifest: This is setup *before* the nginx server is started. ```shell -$ kubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml +kubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml ``` At this point you can reach the nginx server from any node. ```shell -$ kubectl get pods -o yaml | grep -i podip +kubectl get pods -o yaml | grep -i podip podIP: 10.244.3.5 node $ curl -k https://10.244.3.5 ... @@ -283,11 +312,15 @@ Let's test this from a pod (the same secret is being reused for simplicity, the {{< codenew file="service/networking/curlpod.yaml" >}} ```shell -$ kubectl create -f ./curlpod.yaml -$ kubectl get pods -l app=curlpod +kubectl apply -f ./curlpod.yaml +kubectl get pods -l app=curlpod +``` +``` NAME READY STATUS RESTARTS AGE curl-deployment-1515033274-1410r 1/1 Running 0 1m -$ kubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/nginx.crt +``` +```shell +kubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/nginx.crt ... Welcome to nginx! ... @@ -302,7 +335,7 @@ so your nginx HTTPS replica is ready to serve traffic on the internet if your node has a public IP. ```shell -$ kubectl get svc my-nginx -o yaml | grep nodePort -C 5 +kubectl get svc my-nginx -o yaml | grep nodePort -C 5 uid: 07191fb3-f61a-11e5-8ae5-42010af00002 spec: clusterIP: 10.0.162.149 @@ -319,8 +352,9 @@ spec: targetPort: 443 selector: run: my-nginx - -$ kubectl get nodes -o yaml | grep ExternalIP -C 1 +``` +```shell +kubectl get nodes -o yaml | grep ExternalIP -C 1 - address: 104.197.41.11 type: ExternalIP allocatable: @@ -338,12 +372,15 @@ $ curl https://: -k Let's now recreate the Service to use a cloud load balancer, just change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`: ```shell -$ kubectl edit svc my-nginx -$ kubectl get svc my-nginx +kubectl edit svc my-nginx +kubectl get svc my-nginx +``` +``` NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-nginx ClusterIP 10.0.162.149 162.222.184.144 80/TCP,81/TCP,82/TCP 21s - -$ curl https:// -k +``` +``` +curl https:// -k ... Welcome to nginx! ``` @@ -357,7 +394,7 @@ output, in fact, so you'll need to do `kubectl describe service my-nginx` to see it. You'll see something like this: ```shell -$ kubectl describe service my-nginx +kubectl describe service my-nginx ... LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com ... 
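+
+Once that address is published, a quick end-to-end check from outside the cluster might
+look like the following sketch (substitute the `LoadBalancer Ingress` hostname reported
+for your own Service; `-k` is used here because the example certificate is self-signed):
+
+```shell
+curl -k https://a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com
+```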
diff --git a/content/en/docs/concepts/services-networking/dns-pod-service.md b/content/en/docs/concepts/services-networking/dns-pod-service.md index 41cb2ee4baa54..cdbdc46868c0e 100644 --- a/content/en/docs/concepts/services-networking/dns-pod-service.md +++ b/content/en/docs/concepts/services-networking/dns-pod-service.md @@ -84,7 +84,7 @@ the hostname of the pod. For example, given a Pod with `hostname` set to The Pod spec also has an optional `subdomain` field which can be used to specify its subdomain. For example, a Pod with `hostname` set to "`foo`", and `subdomain` set to "`bar`", in namespace "`my-namespace`", will have the fully qualified -domain name (FQDN) "`foo.bar.my-namespace.pod.cluster.local`". +domain name (FQDN) "`foo.bar.my-namespace.svc.cluster.local`". Example: @@ -141,7 +141,7 @@ record for the Pod's fully qualified hostname. For example, given a Pod with the hostname set to "`busybox-1`" and the subdomain set to "`default-subdomain`", and a headless Service named "`default-subdomain`" in the same namespace, the pod will see its own FQDN as -"`busybox-1.default-subdomain.my-namespace.pod.cluster.local`". DNS serves an +"`busybox-1.default-subdomain.my-namespace.svc.cluster.local`". DNS serves an A record at that name, pointing to the Pod's IP. Both pods "`busybox1`" and "`busybox2`" can have their distinct A records. diff --git a/content/en/docs/concepts/services-networking/ingress-controllers.md b/content/en/docs/concepts/services-networking/ingress-controllers.md new file mode 100644 index 0000000000000..57af46a01a0c5 --- /dev/null +++ b/content/en/docs/concepts/services-networking/ingress-controllers.md @@ -0,0 +1,74 @@ +--- +title: Ingress Controllers +reviewers: +content_template: templates/concept +weight: 40 +--- + +{{% capture overview %}} + +In order for the Ingress resource to work, the cluster must have an ingress controller running. + +Unlike other types of controllers which run as part of the `kube-controller-manager` binary, Ingress controllers +are not started automatically with a cluster. Use this page to choose the ingress controller implementation +that best fits your cluster. + +Kubernetes as a project currently supports and maintains [GCE](https://git.k8s.io/ingress-gce/README.md) and + [nginx](https://git.k8s.io/ingress-nginx/README.md) controllers. + +{{% /capture %}} + +{{% capture body %}} + +## Additional controllers + +* [Ambassador](https://www.getambassador.io/) API Gateway is an [Envoy](https://www.envoyproxy.io) based ingress + controller with [community](https://www.getambassador.io/docs) or + [commercial](https://www.getambassador.io/pro/) support from [Datawire](https://www.datawire.io/). +* [AppsCode Inc.](https://appscode.com) offers support and maintenance for the most widely used [HAProxy](http://www.haproxy.org/) based ingress controller [Voyager](https://appscode.com/products/voyager). +* [Contour](https://github.com/heptio/contour) is an [Envoy](https://www.envoyproxy.io) based ingress controller + provided and supported by Heptio. +* Citrix provides an [Ingress Controller](https://github.com/citrix/citrix-k8s-ingress-controller) for its hardware (MPX), virtualized (VPX) and [free containerized (CPX) ADC](https://www.citrix.com/products/citrix-adc/cpx-express.html) for [baremetal](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment/baremetal) and [cloud](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment) deployments. 
+* F5 Networks provides [support and maintenance](https://support.f5.com/csp/article/K86859508) + for the [F5 BIG-IP Controller for Kubernetes](http://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest). +* [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io) which offers API Gateway functionality with enterprise support from [solo.io](https://www.solo.io). +* [HAProxy](http://www.haproxy.org/) based ingress controller + [jcmoraisjr/haproxy-ingress](https://github.com/jcmoraisjr/haproxy-ingress) which is mentioned on the blog post + [HAProxy Ingress Controller for Kubernetes](https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/). + [HAProxy Technologies](https://www.haproxy.com/) offers support and maintenance for HAProxy Enterprise and + the ingress controller [jcmoraisjr/haproxy-ingress](https://github.com/jcmoraisjr/haproxy-ingress). +* [Istio](https://istio.io/) based ingress controller + [Control Ingress Traffic](https://istio.io/docs/tasks/traffic-management/ingress/). +* [Kong](https://konghq.com/) offers [community](https://discuss.konghq.com/c/kubernetes) or + [commercial](https://konghq.com/kong-enterprise/) support and maintenance for the + [Kong Ingress Controller for Kubernetes](https://github.com/Kong/kubernetes-ingress-controller). +* [NGINX, Inc.](https://www.nginx.com/) offers support and maintenance for the + [NGINX Ingress Controller for Kubernetes](https://www.nginx.com/products/nginx/kubernetes-ingress-controller). +* [Traefik](https://github.com/containous/traefik) is a fully featured ingress controller + ([Let's Encrypt](https://letsencrypt.org), secrets, http2, websocket), and it also comes with commercial + support by [Containous](https://containo.us/services). + +## Using multiple Ingress controllers + +You may deploy [any number of ingress controllers](https://git.k8s.io/ingress-nginx/docs/user-guide/multiple-ingress.md#multiple-ingress-controllers) +within a cluster. When you create an ingress, you should annotate each ingress with the appropriate +[`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster) +to indicate which ingress controller should be used if more than one exists within your cluster. + +If you do not define a class, your cloud provider may use a default ingress provider. + +Ideally, all ingress controllers should fulfill this specification, but the various ingress +controllers operate slightly differently. + +{{< note >}} +Make sure you review your ingress controller's documentation to understand the caveats of choosing it. +{{< /note >}} + +{{% /capture %}} + +{{% capture whatsnext %}} + +* Learn more about [Ingress](/docs/concepts/services-networking/ingress/). +* [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube). + +{{% /capture %}} diff --git a/content/en/docs/concepts/services-networking/ingress.md b/content/en/docs/concepts/services-networking/ingress.md index a29ff81515d19..19683544fe518 100644 --- a/content/en/docs/concepts/services-networking/ingress.md +++ b/content/en/docs/concepts/services-networking/ingress.md @@ -25,7 +25,7 @@ For the sake of clarity, this guide defines the following terms: Ingress, added in Kubernetes v1.1, exposes HTTP and HTTPS routes from outside the cluster to {{< link text="services" url="/docs/concepts/services-networking/service/" >}} within the cluster. 
-Traffic routing is controlled by rules defined on the ingress resource. +Traffic routing is controlled by rules defined on the Ingress resource. ```none internet @@ -35,9 +35,9 @@ Traffic routing is controlled by rules defined on the ingress resource. [ Services ] ``` -An ingress can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, and offer name based virtual hosting. An [ingress controller](#ingress-controllers) is responsible for fulfilling the ingress, usually with a loadbalancer, though it may also configure your edge router or additional frontends to help handle the traffic. +An Ingress can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, and offer name based virtual hosting. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers) is responsible for fulfilling the Ingress, usually with a loadbalancer, though it may also configure your edge router or additional frontends to help handle the traffic. -An ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically +An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type [Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport) or [Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer). @@ -45,52 +45,19 @@ uses a service of type [Service.Type=NodePort](/docs/concepts/services-networkin {{< feature-state for_k8s_version="v1.1" state="beta" >}} -Before you start using an ingress, there are a few things you should understand. The ingress is a beta resource. You will need an ingress controller to satisfy an ingress, simply creating the resource will have no effect. +Before you start using an Ingress, there are a few things you should understand. The Ingress is a beta resource. -GCE/Google Kubernetes Engine deploys an [ingress controller](#ingress-controllers) on the master. Review the +{{< note >}} +You must have an [Ingress controller](/docs/concepts/services-networking/ingress-controllers) to satisfy an Ingress. Only creating an Ingress resource has no effect. +{{< /note >}} + +GCE/Google Kubernetes Engine deploys an Ingress controller on the master. Review the [beta limitations](https://github.com/kubernetes/ingress-gce/blob/master/BETA_LIMITATIONS.md#glbc-beta-limitations) of this controller if you are using GCE/GKE. In environments other than GCE/Google Kubernetes Engine, you may need to [deploy an ingress controller](https://kubernetes.github.io/ingress-nginx/deploy/). There are a number of -[ingress controller](#ingress-controllers) you may choose from. - -## Ingress controllers - -In order for the ingress resource to work, the cluster must have an ingress controller running. This is unlike other types of controllers, which run as part of the `kube-controller-manager` binary, and are typically started automatically with a cluster. Choose the ingress controller implementation that best fits your cluster. - -* Kubernetes as a project currently supports and maintains [GCE](https://git.k8s.io/ingress-gce/README.md) and - [nginx](https://git.k8s.io/ingress-nginx/README.md) controllers. - -Additional controllers include: - -* [Contour](https://github.com/heptio/contour) is an [Envoy](https://www.envoyproxy.io) based ingress controller - provided and supported by Heptio. 
-* Citrix provides an [Ingress Controller](https://github.com/citrix/citrix-k8s-ingress-controller) for its hardware (MPX), virtualized (VPX) and [free containerized (CPX) ADC](https://www.citrix.com/products/citrix-adc/cpx-express.html) for [baremetal](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment/baremetal) and [cloud](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment) deployments. -* F5 Networks provides [support and maintenance](https://support.f5.com/csp/article/K86859508) - for the [F5 BIG-IP Controller for Kubernetes](http://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest). -* [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io) which offers API Gateway functionality with enterprise support from [solo.io](https://www.solo.io). -* [HAProxy](http://www.haproxy.org/) based ingress controller - [jcmoraisjr/haproxy-ingress](https://github.com/jcmoraisjr/haproxy-ingress) which is mentioned on the blog post - [HAProxy Ingress Controller for Kubernetes](https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/). - [HAProxy Technologies](https://www.haproxy.com/) offers support and maintenance for HAProxy Enterprise and - the ingress controller [jcmoraisjr/haproxy-ingress](https://github.com/jcmoraisjr/haproxy-ingress). -* [Istio](https://istio.io/) based ingress controller - [Control Ingress Traffic](https://istio.io/docs/tasks/traffic-management/ingress/). -* [Kong](https://konghq.com/) offers [community](https://discuss.konghq.com/c/kubernetes) or - [commercial](https://konghq.com/kong-enterprise/) support and maintenance for the - [Kong Ingress Controller for Kubernetes](https://github.com/Kong/kubernetes-ingress-controller). -* [NGINX, Inc.](https://www.nginx.com/) offers support and maintenance for the - [NGINX Ingress Controller for Kubernetes](https://www.nginx.com/products/nginx/kubernetes-ingress-controller). -* [Traefik](https://github.com/containous/traefik) is a fully featured ingress controller - ([Let's Encrypt](https://letsencrypt.org), secrets, http2, websocket), and it also comes with commercial - support by [Containous](https://containo.us/services). - -You may deploy [any number of ingress controllers](https://git.k8s.io/ingress-nginx/docs/user-guide/multiple-ingress.md#multiple-ingress-controllers) within a cluster. -When you create an ingress, you should annotate each ingress with the appropriate -[`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster) to indicate which ingress -controller should be used if more than one exists within your cluster. -If you do not define a class, your cloud provider may use a default ingress provider. +[ingress controllers](/docs/concepts/services-networking/ingress-controllers) you may choose from. ### Before you begin @@ -122,14 +89,14 @@ spec: servicePort: 80 ``` - As with all other Kubernetes resources, an ingress needs `apiVersion`, `kind`, and `metadata` fields. + As with all other Kubernetes resources, an Ingress needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see [deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/), [configuring containers](/docs/tasks/configure-pod-container/configure-pod-configmap/), [managing resources](/docs/concepts/cluster-administration/manage-deployment/). 
- Ingress frequently uses annotations to configure some options depending on the ingress controller, an example of which + Ingress frequently uses annotations to configure some options depending on the Ingress controller, an example of which is the [rewrite-target annotation](https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md). - Different [ingress controller](#ingress-controllers) support different annotations. Review the documentation for - your choice of ingress controller to learn which annotations are supported. + Different [Ingress controller](/docs/concepts/services-networking/ingress-controllers) support different annotations. Review the documentation for + your choice of Ingress controller to learn which annotations are supported. -The ingress [spec](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status) +The Ingress [spec](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status) has all the information needed to configure a loadbalancer or proxy server. Most importantly, it contains a list of rules matched against all incoming requests. Ingress resource only supports rules for directing HTTP traffic. @@ -146,18 +113,17 @@ Each http rule contains the following information: loadbalancer will direct traffic to the referenced service. * A backend is a combination of service and port names as described in the [services doc](/docs/concepts/services-networking/service/). HTTP (and HTTPS) requests to the - ingress matching the host and path of the rule will be sent to the listed backend. + Ingress matching the host and path of the rule will be sent to the listed backend. -A default backend is often configured in an ingress controller that will service any requests that do not +A default backend is often configured in an Ingress controller that will service any requests that do not match a path in the spec. ### Default Backend -An ingress with no rules sends all traffic to a single default backend. The default -backend is typically a configuration option of the [ingress controller](#ingress-controllers) -and is not specified in your ingress resources. +An Ingress with no rules sends all traffic to a single default backend. The default +backend is typically a configuration option of the [Ingress controller](/docs/concepts/services-networking/ingress-controllers) and is not specified in your Ingress resources. -If none of the hosts or paths match the HTTP request in the ingress objects, the traffic is +If none of the hosts or paths match the HTTP request in the Ingress objects, the traffic is routed to your default backend. ## Types of Ingress @@ -165,12 +131,12 @@ routed to your default backend. ### Single Service Ingress There are existing Kubernetes concepts that allow you to expose a single Service -(see [alternatives](#alternatives)). You can also do this with an ingress by specifying a +(see [alternatives](#alternatives)). You can also do this with an Ingress by specifying a *default backend* with no rules. {{< codenew file="service/networking/ingress.yaml" >}} -If you create it using `kubectl create -f` you should see: +If you create it using `kubectl apply -f` you should see: ```shell kubectl get ingress test-ingress @@ -181,8 +147,8 @@ NAME HOSTS ADDRESS PORTS AGE test-ingress * 107.178.254.228 80 59s ``` -Where `107.178.254.228` is the IP allocated by the ingress controller to satisfy -this ingress. 
+Where `107.178.254.228` is the IP allocated by the Ingress controller to satisfy +this Ingress. {{< note >}} Ingress controllers and load balancers may take a minute or two to allocate an IP address. @@ -192,7 +158,7 @@ Until that time you will often see the address listed as ``. ### Simple fanout A fanout configuration routes traffic from a single IP address to more than one service, -based on the HTTP URI being requested. An ingress allows you to keep the number of loadbalancers +based on the HTTP URI being requested. An Ingress allows you to keep the number of loadbalancers down to a minimum. For example, a setup like: ```shell @@ -200,7 +166,7 @@ foo.bar.com -> 178.91.123.132 -> / foo service1:4200 / bar service2:8080 ``` -would require an ingress such as: +would require an Ingress such as: ```yaml apiVersion: extensions/v1beta1 @@ -224,7 +190,7 @@ spec: servicePort: 8080 ``` -When you create the ingress with `kubectl create -f`: +When you create the ingress with `kubectl apply -f`: ```shell kubectl describe ingress simple-fanout-example @@ -249,13 +215,13 @@ Events: Normal ADD 22s loadbalancer-controller default/test ``` -The ingress controller will provision an implementation specific loadbalancer -that satisfies the ingress, as long as the services (`s1`, `s2`) exist. -When it has done so, you will see the address of the loadbalancer at the +The Ingress controller provisions an implementation specific loadbalancer +that satisfies the Ingress, as long as the services (`s1`, `s2`) exist. +When it has done so, you can see the address of the loadbalancer at the Address field. {{< note >}} -Depending on the [ingress controller](#ingress-controllers) you are using, you may need to +Depending on the [Ingress controller](/docs/concepts/services-networking/ingress-controllers) you are using, you may need to create a default-http-backend [Service](/docs/concepts/services-networking/service/). {{< /note >}} @@ -269,7 +235,7 @@ foo.bar.com --| |-> foo.bar.com s1:80 bar.foo.com --| |-> bar.foo.com s2:80 ``` -The following ingress tells the backing loadbalancer to route requests based on +The following Ingress tells the backing loadbalancer to route requests based on the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4). ```yaml @@ -293,9 +259,9 @@ spec: servicePort: 80 ``` -If you create an ingress resource without any hosts defined in the rules, then any -web traffic to the IP address of your ingress controller can be matched without a name based -virtual host being required. For example, the following ingress resource will route traffic +If you create an Ingress resource without any hosts defined in the rules, then any +web traffic to the IP address of your Ingress controller can be matched without a name based +virtual host being required. For example, the following Ingress resource will route traffic requested for `first.bar.com` to `service1`, `second.foo.com` to `service2`, and any traffic to the IP address without a hostname defined in request (that is, without a request header being presented) to `service3`. @@ -328,12 +294,12 @@ spec: ### TLS -You can secure an ingress by specifying a [secret](/docs/concepts/configuration/secret) -that contains a TLS private key and certificate. Currently the ingress only +You can secure an Ingress by specifying a [secret](/docs/concepts/configuration/secret) +that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. 
If the TLS -configuration section in an ingress specifies different hosts, they will be +configuration section in an Ingress specifies different hosts, they will be multiplexed on the same port according to the hostname specified through the -SNI TLS extension (provided the ingress controller supports SNI). The TLS secret +SNI TLS extension (provided the Ingress controller supports SNI). The TLS secret must contain keys named `tls.crt` and `tls.key` that contain the certificate and private key to use for TLS, e.g.: @@ -346,10 +312,10 @@ kind: Secret metadata: name: testsecret-tls namespace: default -type: Opaque +type: kubernetes.io/tls ``` -Referencing this secret in an ingress will tell the ingress controller to +Referencing this secret in an Ingress will tell the Ingress controller to secure the channel from the client to the loadbalancer using TLS. You need to make sure the TLS secret you created came from a certificate that contains a CN for `sslexample.foo.com`. @@ -375,24 +341,24 @@ spec: ``` {{< note >}} -There is a gap between TLS features supported by various ingress +There is a gap between TLS features supported by various Ingress controllers. Please refer to documentation on [nginx](https://git.k8s.io/ingress-nginx/README.md#https), [GCE](https://git.k8s.io/ingress-gce/README.md#frontend-https), or any other -platform specific ingress controller to understand how TLS works in your environment. +platform specific Ingress controller to understand how TLS works in your environment. {{< /note >}} ### Loadbalancing -An ingress controller is bootstrapped with some load balancing policy settings -that it applies to all ingress, such as the load balancing algorithm, backend +An Ingress controller is bootstrapped with some load balancing policy settings +that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced load balancing concepts (e.g. persistent sessions, dynamic weights) are not yet exposed through the -ingress. You can still get these features through the +Ingress. You can still get these features through the [service loadbalancer](https://github.com/kubernetes/ingress-nginx). It's also worth noting that even though health checks are not exposed directly -through the ingress, there exist parallel concepts in Kubernetes such as +through the Ingress, there exist parallel concepts in Kubernetes such as [readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) which allow you to achieve the same end result. Please review the controller specific docs to see how they handle health checks ( @@ -401,7 +367,7 @@ specific docs to see how they handle health checks ( ## Updating an Ingress -To update an existing ingress to add a new Host, you can update it by editing the resource: +To update an existing Ingress to add a new Host, you can update it by editing the resource: ```shell kubectl describe ingress test @@ -452,7 +418,7 @@ spec: ``` Saving the yaml will update the resource in the API server, which should tell the -ingress controller to reconfigure the loadbalancer. +Ingress controller to reconfigure the loadbalancer. ```shell kubectl describe ingress test @@ -478,25 +444,24 @@ Events: Normal ADD 45s loadbalancer-controller default/test ``` -You can achieve the same by invoking `kubectl replace -f` on a modified ingress yaml file. +You can achieve the same by invoking `kubectl replace -f` on a modified Ingress yaml file. 
## Failing across availability zones Techniques for spreading traffic across failure domains differs between cloud providers. -Please check the documentation of the relevant [ingress controller](#ingress-controllers) for -details. You can also refer to the [federation documentation](/docs/concepts/cluster-administration/federation/) -for details on deploying ingress in a federated cluster. +Please check the documentation of the relevant [Ingress controller](/docs/concepts/services-networking/ingress-controllers) for details. You can also refer to the [federation documentation](/docs/concepts/cluster-administration/federation/) +for details on deploying Ingress in a federated cluster. ## Future Work Track [SIG Network](https://github.com/kubernetes/community/tree/master/sig-network) for more details on the evolution of the ingress and related resources. You may also track the -[ingress repository](https://github.com/kubernetes/ingress/tree/master) for more details on the -evolution of various ingress controllers. +[Ingress repository](https://github.com/kubernetes/ingress/tree/master) for more details on the +evolution of various Ingress controllers. ## Alternatives -You can expose a Service in multiple ways that don't directly involve the ingress resource: +You can expose a Service in multiple ways that don't directly involve the Ingress resource: * Use [Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer) * Use [Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport) @@ -505,6 +470,5 @@ You can expose a Service in multiple ways that don't directly involve the ingres {{% /capture %}} {{% capture whatsnext %}} - +* [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube) {{% /capture %}} - diff --git a/content/en/docs/concepts/services-networking/network-policies.md b/content/en/docs/concepts/services-networking/network-policies.md index 570ce7f65423c..cd18e6ecaad9f 100644 --- a/content/en/docs/concepts/services-networking/network-policies.md +++ b/content/en/docs/concepts/services-networking/network-policies.md @@ -92,11 +92,12 @@ __egress__: Each `NetworkPolicy` may include a list of whitelist `egress` rules. So, the example NetworkPolicy: 1. isolates "role=db" pods in the "default" namespace for both ingress and egress traffic (if they weren't already isolated) -2. allows connections to TCP port 6379 of "role=db" pods in the "default" namespace from: +2. (Ingress rules) allows connections to all pods in the “default” namespace with the label “role=db” on TCP port 6379 from: + * any pod in the "default" namespace with the label "role=frontend" * any pod in a namespace with the label "project=myproject" * IP addresses in the ranges 172.17.0.0–172.17.0.255 and 172.17.2.0–172.17.255.255 (ie, all of 172.17.0.0/16 except 172.17.1.0/24) -3. allows connections from any pod in the "default" namespace with the label "role=db" to CIDR 10.0.0.0/24 on TCP port 5978 +3. (Egress rules) allows connections from any pod in the "default" namespace with the label "role=db" to CIDR 10.0.0.0/24 on TCP port 5978 See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) walkthrough for further examples. @@ -266,4 +267,3 @@ The CNI plugin has to support SCTP as `protocol` value in `NetworkPolicy`. - See more [Recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for common scenarios enabled by the NetworkPolicy resource. 
{{% /capture %}} - diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index 4b5bbba41f087..3723ca6bbe969 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -83,12 +83,9 @@ deploying and evolving your `Services`. For example, you can change the port number that pods expose in the next version of your backend software, without breaking clients. -Kubernetes `Services` support `TCP`, `UDP` and `SCTP` for protocols. The default -is `TCP`. - -{{< note >}} -SCTP support is an alpha feature since Kubernetes 1.12 -{{< /note >}} +`TCP` is the default protocol for services, and you can also use any other +[supported protocol](#protocol-support). At the moment, you can only set a +single `port` and `protocol` for a Service. ### Services without selectors @@ -519,6 +516,16 @@ metadata: [...] ``` {{% /tab %}} +{{% tab name="Baidu Cloud" %}} +```yaml +[...] +metadata: + name: my-service + annotations: + service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true" +[...] +``` +{{% /tab %}} {{< /tabs >}} @@ -934,29 +941,74 @@ Service is a top-level resource in the Kubernetes REST API. More details about t API object can be found at: [Service API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core). -## SCTP support +## Supported protocols {#protocol-support} + +### TCP + +{{< feature-state for_k8s_version="v1.0" state="stable" >}} + +You can use TCP for any kind of service, and it's the default network protocol. + +### UDP + +{{< feature-state for_k8s_version="v1.0" state="stable" >}} + +You can use UDP for most services. For type=LoadBalancer services, UDP support +depends on the cloud provider offering this facility. + +### HTTP + +{{< feature-state for_k8s_version="v1.1" state="stable" >}} + +If your cloud provider supports it, you can use a Service in LoadBalancer mode +to set up external HTTP / HTTPS reverse proxying, forwarded to the Endpoints +of the Service. + +{{< note >}} +You can also use {{< glossary_tooltip term_id="ingress" >}} in place of Service +to expose HTTP / HTTPS services. +{{< /note >}} + +### PROXY protocol + +{{< feature-state for_k8s_version="v1.1" state="stable" >}} + +If your cloud provider supports it (eg, [AWS](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#aws)), +you can use a Service in LoadBalancer mode to configure a load balancer outside +of Kubernetes itself, that will forward connections prefixed with +[PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt). + +The load balancer will send an initial series of octets describing the +incoming connection, similar to this example + +``` +PROXY TCP4 192.0.2.202 10.0.42.7 12345 7\r\n +``` +followed by the data from the client. + +### SCTP {{< feature-state for_k8s_version="v1.12" state="alpha" >}} Kubernetes supports SCTP as a `protocol` value in `Service`, `Endpoint`, `NetworkPolicy` and `Pod` definitions as an alpha feature. To enable this feature, the cluster administrator needs to enable the `SCTPSupport` feature gate on the apiserver, for example, `“--feature-gates=SCTPSupport=true,...”`. When the feature gate is enabled, users can set the `protocol` field of a `Service`, `Endpoint`, `NetworkPolicy` and `Pod` to `SCTP`. Kubernetes sets up the network accordingly for the SCTP associations, just like it does for TCP connections. 
-### Warnings +#### Warnings {#caveat-sctp-overview} -#### The support of multihomed SCTP associations +##### Support for multihomed SCTP associations {#caveat-sctp-multihomed} The support of multihomed SCTP associations requires that the CNI plugin can support the assignment of multiple interfaces and IP addresses to a `Pod`. NAT for multihomed SCTP associations requires special logic in the corresponding kernel modules. -#### Service with type=LoadBalancer +##### Service with type=LoadBalancer {#caveat-sctp-loadbalancer-service-type} A `Service` with `type` LoadBalancer and `protocol` SCTP can be created only if the cloud provider's load balancer implementation supports SCTP as a protocol. Otherwise the `Service` creation request is rejected. The current set of cloud load balancer providers (`Azure`, `AWS`, `CloudStack`, `GCE`, `OpenStack`) do not support SCTP. -#### Windows +##### Windows {#caveat-sctp-windows-os} SCTP is not supported on Windows based nodes. -#### Userspace kube-proxy +##### Userspace kube-proxy {#caveat-sctp-kube-proxy-userspace} The kube-proxy does not support the management of SCTP associations when it is in userspace mode. diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md index ee66fdd81c0b4..6f471b8a228d4 100644 --- a/content/en/docs/concepts/storage/persistent-volumes.md +++ b/content/en/docs/concepts/storage/persistent-volumes.md @@ -179,8 +179,8 @@ However, the particular path specified in the custom recycler pod template in th ### Expanding Persistent Volumes Claims -{{< feature-state for_k8s_version="v1.8" state="alpha" >}} {{< feature-state for_k8s_version="v1.11" state="beta" >}} + Support for expanding PersistentVolumeClaims (PVCs) is now enabled by default. You can expand the following types of volumes: @@ -193,6 +193,7 @@ the following types of volumes: * Azure Disk * Portworx * FlexVolumes +* CSI You can only expand a PVC if its storage class's `allowVolumeExpansion` field is set to true. @@ -214,6 +215,13 @@ To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying `PersistentVolume`. A new `PersistentVolume` is never created to satisfy the claim. Instead, an existing volume is resized. +#### CSI Volume expansion + +{{< feature-state for_k8s_version="v1.14" state="alpha" >}} + +CSI volume expansion requires enabling `ExpandCSIVolumes` feature gate and also requires specific CSI driver to support volume expansion. Please refer to documentation of specific CSI driver for more information. + + #### Resizing a volume containing a file system You can only resize volumes containing a file system if the file system is XFS, Ext3, or Ext4. @@ -312,7 +320,7 @@ Currently, storage size is the only resource that can be set or requested. Futu {{< feature-state for_k8s_version="v1.13" state="beta" >}} Prior to Kubernetes 1.9, all volume plugins created a filesystem on the persistent volume. -Now, you can set the value of `volumeMode` to `raw` to use a raw block device, or `filesystem` +Now, you can set the value of `volumeMode` to `block` to use a raw block device, or `filesystem` to use a filesystem. `filesystem` is the default if the value is omitted. This is an optional API parameter. 
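Tying the expansion notes above together, a rough sketch of the workflow: the claim's StorageClass must set `allowVolumeExpansion: true`, and you then edit the claim to request a larger size. The names, provisioner, and sizes below are illustrative and not taken from the page:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-sc                # illustrative name
provisioner: kubernetes.io/gce-pd    # substitute a provisioner that supports expansion in your cluster
allowVolumeExpansion: true           # required before claims using this class can be expanded
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim                     # illustrative name
spec:
  storageClassName: expandable-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                  # later edit this value upward (for example to 20Gi) to trigger expansion
```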
diff --git a/content/en/docs/concepts/storage/storage-classes.md b/content/en/docs/concepts/storage/storage-classes.md index fe817d1aac5b6..b0df83ed47612 100644 --- a/content/en/docs/concepts/storage/storage-classes.md +++ b/content/en/docs/concepts/storage/storage-classes.md @@ -151,6 +151,11 @@ The following plugins support `WaitForFirstConsumer` with pre-created Persistent * All of the above * [Local](#local) +{{< feature-state state="beta" for_k8s_version="1.14" >}} +[CSI volumes](/docs/concepts/storage/volumes/#csi) are also supported with dynamic provisioning +and pre-created PVs, but you'll need to look at the documentation for a specific CSI driver +to see its supported topology keys and examples. The `CSINodeInfo` feature gate must be enabled. + ### Allowed Topologies When a cluster operator specifies the `WaitForFirstConsumer` volume binding mode, it is no longer necessary @@ -739,7 +744,7 @@ references it. ### Local -{{< feature-state for_k8s_version="v1.10" state="beta" >}} +{{< feature-state for_k8s_version="v1.14" state="stable" >}} ```yaml kind: StorageClass @@ -750,7 +755,7 @@ provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer ``` -Local volumes do not support dynamic provisioning yet, however a StorageClass +Local volumes do not currently support dynamic provisioning, however a StorageClass should still be created to delay volume binding until pod scheduling. This is specified by the `WaitForFirstConsumer` volume binding mode. diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md index fdda03b775185..bcf6abe2d9c72 100644 --- a/content/en/docs/concepts/storage/volumes.md +++ b/content/en/docs/concepts/storage/volumes.md @@ -70,6 +70,7 @@ Kubernetes supports several types of Volumes: * [azureDisk](#azuredisk) * [azureFile](#azurefile) * [cephfs](#cephfs) + * [cinder](#cinder) * [configMap](#configmap) * [csi](#csi) * [downwardAPI](#downwardapi) @@ -148,6 +149,17 @@ spec: fsType: ext4 ``` +#### CSI Migration + +{{< feature-state for_k8s_version="v1.14" state="alpha" >}} + +The CSI Migration feature for awsElasticBlockStore, when enabled, shims all plugin operations +from the existing in-tree plugin to the `ebs.csi.aws.com` Container +Storage Interface (CSI) Driver. In order to use this feature, the [AWS EBS CSI +Driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) +must be installed on the cluster and the `CSIMigration` and `CSIMigrationAWS` +Alpha features must be enabled. + ### azureDisk {#azuredisk} A `azureDisk` is used to mount a Microsoft Azure [Data Disk](https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-about-disks-vhds/) into a Pod. @@ -176,6 +188,48 @@ You must have your own Ceph server running with the share exported before you ca See the [CephFS example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/cephfs/) for more details. +### cinder {#cinder} + +{{< note >}} +Prerequisite: Kubernetes with OpenStack Cloud Provider configured. For cloudprovider +configuration please refer [cloud provider openstack](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#openstack). +{{< /note >}} + +`cinder` is used to mount OpenStack Cinder Volume into your Pod. 
+ +#### Cinder Volume Example configuration + +```yaml +apiVersion: v1 +kind: Pod +metadata: +  name: test-cinder +spec: +  containers: +  - image: k8s.gcr.io/test-webserver +    name: test-cinder-container +    volumeMounts: +    - mountPath: /test-cinder +      name: test-volume +  volumes: +  - name: test-volume +    # This OpenStack volume must already exist. +    cinder: +      volumeID: "<volume-id>" +      fsType: ext4 +``` + +#### CSI Migration + +{{< feature-state for_k8s_version="v1.14" state="alpha" >}} + +The CSI Migration feature for Cinder, when enabled, shims all plugin operations +from the existing in-tree plugin to the `cinder.csi.openstack.org` Container +Storage Interface (CSI) Driver. In order to use this feature, the [OpenStack Cinder CSI +Driver](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md) +must be installed on the cluster and the `CSIMigration` and `CSIMigrationOpenStack` +Alpha features must be enabled. + ### configMap {#configmap} The [`configMap`](/docs/tasks/configure-pod-container/configure-pod-configmap/) resource @@ -401,6 +455,17 @@ spec: fsType: ext4 ``` +#### CSI Migration + +{{< feature-state for_k8s_version="v1.14" state="alpha" >}} + +The CSI Migration feature for GCE PD, when enabled, shims all plugin operations +from the existing in-tree plugin to the `pd.csi.storage.gke.io` Container +Storage Interface (CSI) Driver. In order to use this feature, the [GCE PD CSI +Driver](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) +must be installed on the cluster and the `CSIMigration` and `CSIMigrationGCE` +Alpha features must be enabled. + ### gitRepo (deprecated) {#gitrepo} {{< warning >}} @@ -535,14 +600,7 @@ See the [iSCSI example](https://github.com/kubernetes/examples/tree/{{< param "g ### local {#local} -{{< feature-state for_k8s_version="v1.10" state="beta" >}} - -{{< note >}} -The alpha PersistentVolume NodeAffinity annotation has been deprecated -and will be removed in a future release. Existing PersistentVolumes using this -annotation must be updated by the user to use the new PersistentVolume -`NodeAffinity` field. -{{< /note >}} +{{< feature-state for_k8s_version="v1.14" state="stable" >}} A `local` volume represents a mounted local storage device such as a disk, partition or directory. @@ -608,7 +666,8 @@ selectors, Pod affinity, and Pod anti-affinity. An external static provisioner can be run separately for improved management of the local volume lifecycle. Note that this provisioner does not support dynamic provisioning yet. For an example on how to run an external local provisioner, -see the [local volume provisioner user guide](https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume). +see the [local volume provisioner user +guide](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner). {{< note >}} The local PersistentVolume requires manual cleanup and deletion by the @@ -790,9 +849,8 @@ receive updates for those volume sources. ### portworxVolume {#portworxvolume} A `portworxVolume` is an elastic block storage layer that runs hyperconverged with -Kubernetes. Portworx fingerprints storage in a server, tiers based on capabilities, -and aggregates capacity across multiple servers. Portworx runs in-guest in virtual -machines or on bare metal Linux nodes. +Kubernetes. [Portworx](https://portworx.com/use-case/kubernetes-storage/) fingerprints storage in a server, tiers based on capabilities, +and aggregates capacity across multiple servers.
Portworx runs in-guest in virtual machines or on bare metal Linux nodes. A `portworxVolume` can be dynamically created through Kubernetes or it can also be pre-provisioned and referenced inside a Kubernetes Pod. @@ -835,7 +893,9 @@ You must have your own Quobyte setup running with the volumes created before you can use it. {{< /caution >}} -See the [Quobyte example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/quobyte) for more details. +Quobyte supports the {{< glossary_tooltip text="Container Storage Interface" term_id="csi" >}}. +CSI is the recommended plugin to use Quobyte volumes inside Kubernetes. Quobyte's +GitHub project has [instructions](https://github.com/quobyte/quobyte-csi#quobyte-csi) for deploying Quobyte using CSI, along with examples. ### rbd {#rbd} @@ -1151,12 +1211,12 @@ CSI support was introduced as alpha in Kubernetes v1.9, moved to beta in Kubernetes v1.10, and is GA in Kubernetes v1.13. {{< note >}} -**Note:** Support for CSI spec versions 0.2 and 0.3 are deprecated in Kubernetes +Support for CSI spec versions 0.2 and 0.3 are deprecated in Kubernetes v1.13 and will be removed in a future release. {{< /note >}} {{< note >}} -**Note:** CSI drivers may not be compatible across all Kubernetes releases. +CSI drivers may not be compatible across all Kubernetes releases. Please check the specific CSI driver's documentation for supported deployments steps for each Kubernetes release and a compatibility matrix. {{< /note >}} @@ -1217,28 +1277,78 @@ persistent volume: #### CSI raw block volume support -{{< feature-state for_k8s_version="v1.11" state="alpha" >}} +{{< feature-state for_k8s_version="v1.14" state="beta" >}} Starting with version 1.11, CSI introduced support for raw block volumes, which relies on the raw block volume feature that was introduced in a previous version of Kubernetes. This feature will make it possible for vendors with external CSI drivers to implement raw block volumes support in Kubernetes workloads. -CSI block volume support is feature-gated and turned off by default. To run CSI with -block volume support enabled, a cluster administrator must enable the feature for each -Kubernetes component using the following feature gate flags: +CSI block volume support is feature-gated, but enabled by default. The two +feature gates which must be enabled for this feature are `BlockVolume` and +`CSIBlockVolume`. + +Learn how to +[setup your PV/PVC with raw block volume support](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support). + +#### CSI ephemeral volumes + +{{< feature-state for_k8s_version="v1.14" state="alpha" >}} + +This feature allows CSI volumes to be directly embedded in the Pod specification instead of a PersistentVolume. Volumes specified in this way are ephemeral and do not persist across Pod restarts. + +Example: + +```yaml +kind: Pod +apiVersion: v1 +metadata: + name: my-csi-app +spec: + containers: + - name: my-frontend + image: busybox + volumeMounts: + - mountPath: "/data" + name: my-csi-inline-vol + command: [ "sleep", "1000000" ] + volumes: + - name: my-csi-inline-vol + csi: + driver: inline.storage.kubernetes.io + volumeAttributes: + foo: bar +``` + +This feature requires CSIInlineVolume feature gate to be enabled: ``` ---feature-gates=BlockVolume=true,CSIBlockVolume=true +--feature-gates=CSIInlineVolume=true ``` -Learn how to -[setup your PV/PVC with raw block volume support](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support). 
+CSI ephemeral volumes are only supported by a subset of CSI drivers. Please see the list of CSI drivers [here](https://kubernetes-csi.github.io/docs/drivers.html). #### Developer resources For more information on how to develop a CSI driver, refer to the [kubernetes-csi documentation](https://kubernetes-csi.github.io/docs/) +#### Migrating to CSI drivers from in-tree plugins + +{{< feature-state for_k8s_version="v1.14" state="alpha" >}} + +The CSI Migration feature, when enabled, directs operations against existing in-tree +plugins to corresponding CSI plugins (which are expected to be installed and configured). +The feature implements the necessary translation logic and shims to re-route the +operations in a seamless fashion. As a result, operators do not have to make any +configuration changes to existing Storage Classes, PVs or PVCs (referring to +in-tree plugins) when transitioning to a CSI driver that supersedes an in-tree plugin. + +In the alpha state, the operations and features that are supported include +provisioning/delete, attach/detach and mount/unmount of volumes with `volumeMode` set to `filesystem`. + +In-tree plugins that support CSI Migration and have a corresponding CSI driver implemented +are listed in the "Types of Volumes" section above. + ### Flexvolume {#flexVolume} Flexvolume is an out-of-tree plugin interface that has existed in Kubernetes @@ -1307,8 +1417,8 @@ MountFlags=shared ``` Or, remove `MountFlags=slave` if present. Then restart the Docker daemon: ```shell -$ sudo systemctl daemon-reload -$ sudo systemctl restart docker +sudo systemctl daemon-reload +sudo systemctl restart docker ``` diff --git a/content/en/docs/concepts/workloads/controllers/cron-jobs.md b/content/en/docs/concepts/workloads/controllers/cron-jobs.md index 413547e2cb151..339cb81f78855 100644 --- a/content/en/docs/concepts/workloads/controllers/cron-jobs.md +++ b/content/en/docs/concepts/workloads/controllers/cron-jobs.md @@ -46,11 +46,13 @@ It is important to note that if the `startingDeadlineSeconds` field is set (not A CronJob is counted as missed if it has failed to be created at its scheduled time. For example, If `concurrencyPolicy` is set to `Forbid` and a CronJob was attempted to be scheduled when there was a previous schedule still running, then it would count as missed. -For example, suppose a cron job is set to start at exactly `08:30:00` and its -`startingDeadlineSeconds` is set to 10, if the CronJob controller happens to -be down from `08:29:00` to `08:42:00`, the job will not start. -Set a longer `startingDeadlineSeconds` if starting later is better than not -starting at all. +For example, suppose a CronJob is set to schedule a new Job every one minute beginning at `08:30:00`, and its +`startingDeadlineSeconds` field is not set. If the CronJob controller happens to +be down from `08:29:00` to `10:21:00`, the Job will not start, because the number of missed schedules is greater than 100 (the CronJob controller gives up once it counts more than 100 missed schedules). + +To illustrate this concept further, suppose a CronJob is set to schedule a new Job every one minute beginning at `08:30:00`, and its +`startingDeadlineSeconds` is set to 200 seconds. If the CronJob controller happens to +be down for the same period as in the previous example (`08:29:00` to `10:21:00`), the Job will still start at `10:22:00`.
This happens as the controller now checks how many missed schedules happened in the last 200 seconds (ie, 3 missed schedules), rather than from the last scheduled time until now. The Cronjob is only responsible for creating Jobs that match its schedule, and the Job in turn is responsible for the management of the Pods it represents. diff --git a/content/en/docs/concepts/workloads/controllers/daemonset.md b/content/en/docs/concepts/workloads/controllers/daemonset.md index 04050174d9488..3a1a875e2e2e3 100644 --- a/content/en/docs/concepts/workloads/controllers/daemonset.md +++ b/content/en/docs/concepts/workloads/controllers/daemonset.md @@ -21,7 +21,7 @@ Some typical uses of a DaemonSet are: - running a cluster storage daemon, such as `glusterd`, `ceph`, on each node. - running a logs collection daemon on every node, such as `fluentd` or `logstash`. - running a node monitoring daemon on every node, such as [Prometheus Node Exporter]( - https://github.com/prometheus/node_exporter), `collectd`, [Dynatrace OneAgent](https://www.dynatrace.com/technologies/kubernetes-monitoring/), [AppDynamics Agent](https://docs.appdynamics.com/display/CLOUD/Container+Visibility+with+Kubernetes), Datadog agent, New Relic agent, Ganglia `gmond` or Instana agent. + https://github.com/prometheus/node_exporter), `collectd`, [Dynatrace OneAgent](https://www.dynatrace.com/technologies/kubernetes-monitoring/), [AppDynamics Agent](https://docs.appdynamics.com/display/CLOUD/Container+Visibility+with+Kubernetes), [Datadog agent](https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/), [New Relic agent](https://docs.newrelic.com/docs/integrations/kubernetes-integration/installation/kubernetes-installation-configuration), Ganglia `gmond` or Instana agent. In a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon. A more complex setup might use multiple DaemonSets for a single type of daemon, but with @@ -42,7 +42,7 @@ You can describe a DaemonSet in a YAML file. For example, the `daemonset.yaml` f * Create a DaemonSet based on the YAML file: ``` -kubectl create -f https://k8s.io/examples/controllers/daemonset.yaml +kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml ``` ### Required Fields diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md index e3ed2574ccd93..5d91c8b878b9a 100644 --- a/content/en/docs/concepts/workloads/controllers/deployment.md +++ b/content/en/docs/concepts/workloads/controllers/deployment.md @@ -40,7 +40,6 @@ The following are typical use cases for Deployments: * [Use the status of the Deployment](#deployment-status) as an indicator that a rollout has stuck. * [Clean up older ReplicaSets](#clean-up-policy) that you don't need anymore. - ## Creating a Deployment The following is an example of a Deployment. It creates a ReplicaSet to bring up three `nginx` Pods: @@ -55,9 +54,9 @@ In this example: In this case, you simply select a label that is defined in the Pod template (`app: nginx`). However, more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule. - + {{< note >}} - `matchLabels` is a map of {key,value} pairs. A single {key,value} in the `matchLabels` map + `matchLabels` is a map of {key,value} pairs. A single {key,value} in the `matchLabels` map is equivalent to an element of `matchExpressions`, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 
{{< /note >}} @@ -74,7 +73,7 @@ In this example: To create this Deployment, run the following command: ```shell -kubectl create -f https://k8s.io/examples/controllers/nginx-deployment.yaml +kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml ``` {{< note >}} @@ -128,18 +127,19 @@ To see the ReplicaSet (`rs`) created by the deployment, run `kubectl get rs`: ```shell NAME DESIRED CURRENT READY AGE -nginx-deployment-2035384211 3 3 3 18s +nginx-deployment-75675f5897 3 3 3 18s ``` -Notice that the name of the ReplicaSet is always formatted as `[DEPLOYMENT-NAME]-[POD-TEMPLATE-HASH-VALUE]`. The hash value is automatically generated when the Deployment is created. +Notice that the name of the ReplicaSet is always formatted as `[DEPLOYMENT-NAME]-[RANDOM-STRING]`. The random string is +randomly generated and uses the pod-template-hash as a seed. To see the labels automatically generated for each pod, run `kubectl get pods --show-labels`. The following output is returned: ```shell NAME READY STATUS RESTARTS AGE LABELS -nginx-deployment-2035384211-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=2035384211 -nginx-deployment-2035384211-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=2035384211 -nginx-deployment-2035384211-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=2035384211 +nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453 +nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453 +nginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453 ``` The created ReplicaSet ensures that there are three `nginx` Pods running at all times. @@ -171,21 +171,27 @@ Suppose that you now want to update the nginx Pods to use the `nginx:1.9.1` imag instead of the `nginx:1.7.9` image. ```shell -$ kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment -nginx=nginx:1.9.1 image updated +kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 +``` +``` +image updated ``` Alternatively, you can `edit` the Deployment and change `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`: ```shell -$ kubectl edit deployment.v1.apps/nginx-deployment +kubectl edit deployment.v1.apps/nginx-deployment +``` +``` deployment.apps/nginx-deployment edited ``` To see the rollout status, run: ```shell -$ kubectl rollout status deployment.v1.apps/nginx-deployment +kubectl rollout status deployment.v1.apps/nginx-deployment +``` +``` Waiting for rollout to finish: 2 out of 3 new replicas have been updated... deployment.apps/nginx-deployment successfully rolled out ``` @@ -193,7 +199,9 @@ deployment.apps/nginx-deployment successfully rolled out After the rollout succeeds, you may want to `get` the Deployment: ```shell -$ kubectl get deployments +kubectl get deployments +``` +``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx-deployment 3 3 3 3 36s ``` @@ -206,7 +214,9 @@ You can run `kubectl get rs` to see that the Deployment updated the Pods by crea up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas. 
```shell -$ kubectl get rs +kubectl get rs +``` +``` NAME DESIRED CURRENT READY AGE nginx-deployment-1564180365 3 3 3 6s nginx-deployment-2035384211 0 0 0 36s @@ -215,7 +225,9 @@ nginx-deployment-2035384211 0 0 0 36s Running `get pods` should now show only the new Pods: ```shell -$ kubectl get pods +kubectl get pods +``` +``` NAME READY STATUS RESTARTS AGE nginx-deployment-1564180365-khku8 1/1 Running 0 14s nginx-deployment-1564180365-nacti 1/1 Running 0 14s @@ -236,7 +248,9 @@ new Pods have come up, and does not create new Pods until a sufficient number of It makes sure that number of available Pods is at least 2 and the number of total Pods is at most 4. ```shell -$ kubectl describe deployments +kubectl describe deployments +``` +``` Name: nginx-deployment Namespace: default CreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000 @@ -337,14 +351,18 @@ rolled back. Suppose that you made a typo while updating the Deployment, by putting the image name as `nginx:1.91` instead of `nginx:1.9.1`: ```shell -$ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true +kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true +``` +``` deployment.apps/nginx-deployment image updated ``` The rollout will be stuck. ```shell -$ kubectl rollout status deployment.v1.apps/nginx-deployment +kubectl rollout status deployment.v1.apps/nginx-deployment +``` +``` Waiting for rollout to finish: 1 out of 3 new replicas have been updated... ``` @@ -354,7 +372,9 @@ Press Ctrl-C to stop the above rollout status watch. For more information on stu You will see that the number of old replicas (nginx-deployment-1564180365 and nginx-deployment-2035384211) is 2, and new replicas (nginx-deployment-3066724191) is 1. ```shell -$ kubectl get rs +kubectl get rs +``` +``` NAME DESIRED CURRENT READY AGE nginx-deployment-1564180365 3 3 3 25s nginx-deployment-2035384211 0 0 0 36s @@ -364,7 +384,9 @@ nginx-deployment-3066724191 1 1 0 6s Looking at the Pods created, you will see that 1 Pod created by new ReplicaSet is stuck in an image pull loop. ```shell -$ kubectl get pods +kubectl get pods +``` +``` NAME READY STATUS RESTARTS AGE nginx-deployment-1564180365-70iae 1/1 Running 0 25s nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s @@ -379,7 +401,9 @@ Kubernetes by default sets the value to 25%. 
{{< /note >}} ```shell -$ kubectl describe deployment +kubectl describe deployment +``` +``` Name: nginx-deployment Namespace: default CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700 @@ -426,10 +450,12 @@ To fix this, you need to rollback to a previous revision of Deployment that is s First, check the revisions of this deployment: ```shell -$ kubectl rollout history deployment.v1.apps/nginx-deployment +kubectl rollout history deployment.v1.apps/nginx-deployment +``` +``` deployments "nginx-deployment" REVISION CHANGE-CAUSE -1 kubectl create --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true +1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true 2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true 3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true ``` @@ -442,7 +468,9 @@ REVISION CHANGE-CAUSE To further see the details of each revision, run: ```shell -$ kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2 +kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2 +``` +``` deployments "nginx-deployment" revision 2 Labels: app=nginx pod-template-hash=1159050644 @@ -463,14 +491,18 @@ deployments "nginx-deployment" revision 2 Now you've decided to undo the current rollout and rollback to the previous revision: ```shell -$ kubectl rollout undo deployment.v1.apps/nginx-deployment +kubectl rollout undo deployment.v1.apps/nginx-deployment +``` +``` deployment.apps/nginx-deployment ``` -Alternatively, you can rollback to a specific revision by specify that in `--to-revision`: +Alternatively, you can rollback to a specific revision by specifying it with `--to-revision`: ```shell -$ kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2 +kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2 +``` +``` deployment.apps/nginx-deployment ``` @@ -480,11 +512,17 @@ The Deployment is now rolled back to a previous stable revision. As you can see, for rolling back to revision 2 is generated from Deployment controller. ```shell -$ kubectl get deployment nginx-deployment +kubectl get deployment nginx-deployment +``` +``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx-deployment 3 3 3 3 30m +``` -$ kubectl describe deployment nginx-deployment +```shell +kubectl describe deployment nginx-deployment +``` +``` Name: nginx-deployment Namespace: default CreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500 @@ -533,7 +571,9 @@ Events: You can scale a Deployment by using the following command: ```shell -$ kubectl scale deployment.v1.apps/nginx-deployment --replicas=10 +kubectl scale deployment.v1.apps/nginx-deployment --replicas=10 +``` +``` deployment.apps/nginx-deployment scaled ``` @@ -542,7 +582,9 @@ in your cluster, you can setup an autoscaler for your Deployment and choose the Pods you want to run based on the CPU utilization of your existing Pods. ```shell -$ kubectl autoscale deployment.v1.apps/nginx-deployment --min=10 --max=15 --cpu-percent=80 +kubectl autoscale deployment.v1.apps/nginx-deployment --min=10 --max=15 --cpu-percent=80 +``` +``` deployment.apps/nginx-deployment scaled ``` @@ -556,7 +598,9 @@ ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called *p For example, you are running a Deployment with 10 replicas, [maxSurge](#max-surge)=3, and [maxUnavailable](#max-unavailable)=2. 
```shell -$ kubectl get deploy +kubectl get deploy +``` +``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx-deployment 10 10 10 10 50s ``` @@ -564,7 +608,9 @@ nginx-deployment 10 10 10 10 50s You update to a new image which happens to be unresolvable from inside the cluster. ```shell -$ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:sometag +kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:sometag +``` +``` deployment.apps/nginx-deployment image updated ``` @@ -572,7 +618,9 @@ The image update starts a new rollout with ReplicaSet nginx-deployment-198919819 `maxUnavailable` requirement that you mentioned above. ```shell -$ kubectl get rs +kubectl get rs +``` +``` NAME DESIRED CURRENT READY AGE nginx-deployment-1989198191 5 5 0 9s nginx-deployment-618515232 8 8 8 1m @@ -590,10 +638,17 @@ new ReplicaSet. The rollout process should eventually move all replicas to the n the new replicas become healthy. ```shell -$ kubectl get deploy +kubectl get deploy +``` +``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx-deployment 15 18 7 8 7m -$ kubectl get rs +``` + +```shell +kubectl get rs +``` +``` NAME DESIRED CURRENT READY AGE nginx-deployment-1989198191 7 7 0 7m nginx-deployment-618515232 11 11 11 7m @@ -607,10 +662,16 @@ apply multiple fixes in between pausing and resuming without triggering unnecess For example, with a Deployment that was just created: ```shell -$ kubectl get deploy +kubectl get deploy +``` +``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx 3 3 3 3 1m -$ kubectl get rs +``` +```shell +kubectl get rs +``` +``` NAME DESIRED CURRENT READY AGE nginx-2142116321 3 3 3 1m ``` @@ -618,26 +679,36 @@ nginx-2142116321 3 3 3 1m Pause by running the following command: ```shell -$ kubectl rollout pause deployment.v1.apps/nginx-deployment +kubectl rollout pause deployment.v1.apps/nginx-deployment +``` +``` deployment.apps/nginx-deployment paused ``` Then update the image of the Deployment: ```shell -$ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 +kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 +``` +``` deployment.apps/nginx-deployment image updated ``` Notice that no new rollout started: ```shell -$ kubectl rollout history deployment.v1.apps/nginx-deployment +kubectl rollout history deployment.v1.apps/nginx-deployment +``` +``` deployments "nginx" REVISION CHANGE-CAUSE 1 +``` -$ kubectl get rs +```shell +kubectl get rs +``` +``` NAME DESIRED CURRENT READY AGE nginx-2142116321 3 3 3 2m ``` @@ -645,7 +716,9 @@ nginx-2142116321 3 3 3 2m You can make as many updates as you wish, for example, update the resources that will be used: ```shell -$ kubectl set resources deployment.v1.apps/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi +kubectl set resources deployment.v1.apps/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi +``` +``` deployment.apps/nginx-deployment resource requirements updated ``` @@ -655,9 +728,18 @@ the Deployment will not have any effect as long as the Deployment is paused. 
Eventually, resume the Deployment and observe a new ReplicaSet coming up with all the new updates: ```shell -$ kubectl rollout resume deployment.v1.apps/nginx-deployment +kubectl rollout resume deployment.v1.apps/nginx-deployment +``` + +``` deployment.apps/nginx-deployment resumed -$ kubectl get rs -w +``` + +```shell +kubectl get rs -w +``` + +``` NAME DESIRED CURRENT READY AGE nginx-2142116321 2 2 2 2m nginx-3926361531 2 2 0 6s @@ -673,8 +755,12 @@ nginx-2142116321 0 1 1 2m nginx-2142116321 0 1 1 2m nginx-2142116321 0 0 0 2m nginx-3926361531 3 3 3 20s -^C -$ kubectl get rs + +``` +```shell +kubectl get rs +``` +``` NAME DESIRED CURRENT READY AGE nginx-2142116321 0 0 0 2m nginx-3926361531 3 3 3 28s @@ -713,7 +799,9 @@ You can check if a Deployment has completed by using `kubectl rollout status`. I successfully, `kubectl rollout status` returns a zero exit code. ```shell -$ kubectl rollout status deployment.v1.apps/nginx-deployment +kubectl rollout status deployment.v1.apps/nginx-deployment +``` +``` Waiting for rollout to finish: 2 of 3 updated replicas are available... deployment.apps/nginx-deployment successfully rolled out $ echo $? @@ -741,7 +829,9 @@ The following `kubectl` command sets the spec with `progressDeadlineSeconds` to lack of progress for a Deployment after 10 minutes: ```shell -$ kubectl patch deployment.v1.apps/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}' +kubectl patch deployment.v1.apps/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}' +``` +``` deployment.apps/nginx-deployment patched ``` Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following @@ -770,7 +860,9 @@ due to any other kind of error that can be treated as transient. For example, le insufficient quota. If you describe the Deployment you will notice the following section: ```shell -$ kubectl describe deployment nginx-deployment +kubectl describe deployment nginx-deployment +``` +``` <...> Conditions: Type Status Reason @@ -846,7 +938,9 @@ You can check if a Deployment has failed to progress by using `kubectl rollout s returns a non-zero exit code if the Deployment has exceeded the progression deadline. ```shell -$ kubectl rollout status deployment.v1.apps/nginx-deployment +kubectl rollout status deployment.v1.apps/nginx-deployment +``` +``` Waiting for rollout to finish: 2 out of 3 new replicas have been updated... error: deployment "nginx" exceeded its progress deadline $ echo $? @@ -988,15 +1082,12 @@ Field `.spec.rollbackTo` has been deprecated in API versions `extensions/v1beta1 ### Revision History Limit -A Deployment's revision history is stored in the replica sets it controls. +A Deployment's revision history is stored in the ReplicaSets it controls. `.spec.revisionHistoryLimit` is an optional field that specifies the number of old ReplicaSets to retain -to allow rollback. Its ideal value depends on the frequency and stability of new Deployments. All old -ReplicaSets will be kept by default, consuming resources in `etcd` and crowding the output of `kubectl get rs`, -if this field is not set. The configuration of each Deployment revision is stored in its ReplicaSets; -therefore, once an old ReplicaSet is deleted, you lose the ability to rollback to that revision of Deployment. +to allow rollback. These old ReplicaSets consume resources in `etcd` and crowd the output of `kubectl get rs`. 
The configuration of each Deployment revision is stored in its ReplicaSets; therefore, once an old ReplicaSet is deleted, you lose the ability to roll back to that revision of the Deployment. By default, 10 old ReplicaSets will be kept; however, its ideal value depends on the frequency and stability of new Deployments. -More specifically, setting this field to zero means that all old ReplicaSets with 0 replica will be cleaned up. +More specifically, setting this field to zero means that all old ReplicaSets with 0 replicas will be cleaned up. In this case, a new Deployment rollout cannot be undone, since its revision history is cleaned up. ### Paused @@ -1015,5 +1106,3 @@ in a similar fashion. But Deployments are recommended, since they are declarativ additional features, such as rolling back to any previous revision even after the rolling update is done. {{% /capture %}} - - diff --git a/content/en/docs/concepts/workloads/controllers/garbage-collection.md b/content/en/docs/concepts/workloads/controllers/garbage-collection.md index a2b8517afaa62..ecd89ba87693b 100644 --- a/content/en/docs/concepts/workloads/controllers/garbage-collection.md +++ b/content/en/docs/concepts/workloads/controllers/garbage-collection.md @@ -39,7 +39,7 @@ If you create the ReplicaSet and then view the Pod metadata, you can see OwnerReferences field: ```shell -kubectl create -f https://k8s.io/examples/controllers/replicaset.yaml +kubectl apply -f https://k8s.io/examples/controllers/replicaset.yaml kubectl get pods --output=yaml ``` @@ -60,6 +60,14 @@ metadata: ... ``` +{{< note >}} +Cross-namespace owner references are disallowed by design. This means: +1) Namespace-scoped dependents can only specify owners in the same namespace, +and owners that are cluster-scoped. +2) Cluster-scoped dependents can only specify cluster-scoped owners, but not +namespace-scoped owners. +{{< /note >}} + ## Controlling how the garbage collector deletes dependents When you delete an object, you can specify whether the object's dependents are diff --git a/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md index 214fe74b41a2b..51bc45a826659 100644 --- a/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md +++ b/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md @@ -13,16 +13,16 @@ weight: 70 {{% capture overview %}} -A _job_ creates one or more pods and ensures that a specified number of them successfully terminate. -As pods successfully complete, the _job_ tracks the successful completions. When a specified number -of successful completions is reached, the job itself is complete. Deleting a Job will cleanup the -pods it created. +A Job creates one or more Pods and ensures that a specified number of them successfully terminate. +As Pods successfully complete, the Job tracks the successful completions. When a specified number +of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up +the Pods it created. A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot). -A Job can also be used to run multiple pods in parallel. +You can also use a Job to run multiple Pods in parallel.
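As a sketch of that parallel usage (the name, image, command, and counts below are illustrative and not taken from the page's own example):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-work            # illustrative name
spec:
  completions: 8                 # the Job is complete after 8 Pods terminate successfully
  parallelism: 2                 # run at most 2 Pods at any one time
  template:
    spec:
      containers:
      - name: worker
        image: busybox           # illustrative image
        command: ["sh", "-c", "echo processing one work item && sleep 5"]
      restartPolicy: Never       # a Job's Pods may only use Never or OnFailure
```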
{{% /capture %}} @@ -36,17 +36,21 @@ It takes around 10s to complete. {{< codenew file="controllers/job.yaml" >}} -Run the example job by downloading the example file and then running this command: +You can run the example with this command: ```shell -$ kubectl create -f https://k8s.io/examples/controllers/job.yaml +kubectl apply -f https://k8s.io/examples/controllers/job.yaml +``` +``` job "pi" created ``` -Check on the status of the job using this command: +Check on the status of the Job with `kubectl`: ```shell -$ kubectl describe jobs/pi +kubectl describe jobs/pi +``` +``` Name: pi Namespace: default Selector: controller-uid=b1db589a-2c8d-11e6-b324-0209dc45a495 @@ -78,18 +82,20 @@ Events: 1m 1m 1 {job-controller } Normal SuccessfulCreate Created pod: pi-dtn4q ``` -To view completed pods of a job, use `kubectl get pods`. +To view completed Pods of a Job, use `kubectl get pods`. -To list all the pods that belong to a job in a machine readable form, you can use a command like this: +To list all the Pods that belong to a Job in a machine readable form, you can use a command like this: ```shell -$ pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath={.items..metadata.name}) -$ echo $pods +pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}') +echo $pods +``` +``` pi-aiw0a ``` -Here, the selector is the same as the selector for the job. The `--output=jsonpath` option specifies an expression -that just gets the name from each pod in the returned list. +Here, the selector is the same as the selector for the Job. The `--output=jsonpath` option specifies an expression +that just gets the name from each Pod in the returned list. View the standard output of one of the pods: @@ -102,7 +108,7 @@ $ kubectl logs $pods As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields. -A Job also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status). +A Job also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). ### Pod Template @@ -110,7 +116,7 @@ The `.spec.template` is the only required field of the `.spec`. The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [pod](/docs/user-guide/pods), except it is nested and does not have an `apiVersion` or `kind`. -In addition to required fields for a Pod, a pod template in a job must specify appropriate +In addition to required fields for a Pod, a pod template in a Job must specify appropriate labels (see [pod selector](#pod-selector)) and an appropriate restart policy. Only a [`RestartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Never` or `OnFailure` is allowed. @@ -123,31 +129,30 @@ See section [specifying your own pod selector](#specifying-your-own-pod-selector ### Parallel Jobs -There are three main types of jobs: +There are three main types of task suitable to run as a Job: 1. Non-parallel Jobs - - normally only one pod is started, unless the pod fails. - - job is complete as soon as Pod terminates successfully. + - normally, only one Pod is started, unless the Pod fails. + - the Job is complete as soon as its Pod terminates successfully. 1. Parallel Jobs with a *fixed completion count*: - specify a non-zero positive value for `.spec.completions`. 
- - the job is complete when there is one successful pod for each value in the range 1 to `.spec.completions`. - - **not implemented yet:** Each pod passed a different index in the range 1 to `.spec.completions`. + - the Job represents the overall task, and is complete when there is one successful Pod for each value in the range 1 to `.spec.completions`. + - **not implemented yet:** Each Pod is passed a different index in the range 1 to `.spec.completions`. 1. Parallel Jobs with a *work queue*: - do not specify `.spec.completions`, default to `.spec.parallelism`. - - the pods must coordinate with themselves or an external service to determine what each should work on. - - each pod is independently capable of determining whether or not all its peers are done, thus the entire Job is done. - - when _any_ pod terminates with success, no new pods are created. - - once at least one pod has terminated with success and all pods are terminated, then the job is completed with success. - - once any pod has exited with success, no other pod should still be doing any work or writing any output. They should all be - in the process of exiting. - -For a Non-parallel job, you can leave both `.spec.completions` and `.spec.parallelism` unset. When both are + - the Pods must coordinate amongst themselves or an external service to determine what each should work on. For example, a Pod might fetch a batch of up to N items from the work queue. + - each Pod is independently capable of determining whether or not all its peers are done, and thus that the entire Job is done. + - when _any_ Pod from the Job terminates with success, no new Pods are created. + - once at least one Pod has terminated with success and all Pods are terminated, then the Job is completed with success. + - once any Pod has exited with success, no other Pod should still be doing any work for this task or writing any output. They should all be in the process of exiting. + +For a _non-parallel_ Job, you can leave both `.spec.completions` and `.spec.parallelism` unset. When both are unset, both are defaulted to 1. -For a Fixed Completion Count job, you should set `.spec.completions` to the number of completions needed. +For a _fixed completion count_ Job, you should set `.spec.completions` to the number of completions needed. You can set `.spec.parallelism`, or leave it unset and it will default to 1. -For a Work Queue Job, you must leave `.spec.completions` unset, and set `.spec.parallelism` to +For a _work queue_ Job, you must leave `.spec.completions` unset, and set `.spec.parallelism` to a non-negative integer. For more information about how to make use of the different types of job, see the [job patterns](#job-patterns) section. @@ -162,28 +167,28 @@ If it is specified as 0, then the Job is effectively paused until it is increase Actual parallelism (number of pods running at any instant) may be more or less than requested parallelism, for a variety of reasons: -- For Fixed Completion Count jobs, the actual number of pods running in parallel will not exceed the number of +- For _fixed completion count_ Jobs, the actual number of pods running in parallel will not exceed the number of remaining completions. Higher values of `.spec.parallelism` are effectively ignored. -- For work queue jobs, no new pods are started after any pod has succeeded -- remaining pods are allowed to complete, however. +- For _work queue_ Jobs, no new Pods are started after any Pod has succeeded -- remaining Pods are allowed to complete, however. 
- If the controller has not had time to react. -- If the controller failed to create pods for any reason (lack of ResourceQuota, lack of permission, etc.), +- If the controller failed to create Pods for any reason (lack of `ResourceQuota`, lack of permission, etc.), then there may be fewer pods than requested. -- The controller may throttle new pod creation due to excessive previous pod failures in the same Job. -- When a pod is gracefully shutdown, it takes time to stop. +- The controller may throttle new Pod creation due to excessive previous pod failures in the same Job. +- When a Pod is gracefully shut down, it takes time to stop. ## Handling Pod and Container Failures -A Container in a Pod may fail for a number of reasons, such as because the process in it exited with -a non-zero exit code, or the Container was killed for exceeding a memory limit, etc. If this +A container in a Pod may fail for a number of reasons, such as because the process in it exited with +a non-zero exit code, or the container was killed for exceeding a memory limit, etc. If this happens, and the `.spec.template.spec.restartPolicy = "OnFailure"`, then the Pod stays -on the node, but the Container is re-run. Therefore, your program needs to handle the case when it is +on the node, but the container is re-run. Therefore, your program needs to handle the case when it is restarted locally, or else specify `.spec.template.spec.restartPolicy = "Never"`. -See [pods-states](/docs/concepts/workloads/pods/pod-lifecycle/#example-states) for more information on `restartPolicy`. +See [pod lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/#example-states) for more information on `restartPolicy`. An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node (node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the `.spec.template.spec.restartPolicy = "Never"`. When a Pod fails, then the Job controller -starts a new Pod. Therefore, your program needs to handle the case when it is restarted in a new +starts a new Pod. This means that your application needs to handle the case when it is restarted in a new pod. In particular, it needs to handle temporary files, locks, incomplete output and the like caused by previous runs. @@ -194,7 +199,7 @@ sometimes be started twice. If you do specify `.spec.parallelism` and `.spec.completions` both greater than 1, then there may be multiple pods running at once. Therefore, your pods must also be tolerant of concurrency. -### Pod Backoff failure policy +### Pod backoff failure policy There are situations where you want to fail a Job after some amount of retries due to a logical error in configuration etc. @@ -244,7 +249,7 @@ spec: restartPolicy: Never ``` -Note that both the Job Spec and the [Pod Template Spec](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#detailed-behavior) within the Job have an `activeDeadlineSeconds` field. Ensure that you set this field at the proper level. +Note that both the Job spec and the [Pod template spec](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#detailed-behavior) within the Job have an `activeDeadlineSeconds` field. Ensure that you set this field at the proper level. ## Clean Up Finished Jobs Automatically @@ -316,7 +321,7 @@ The tradeoffs are: - One Job object for each work item, vs. a single Job object for all work items. The latter is better for large numbers of work items. 
The former creates some overhead for the user and for the system to manage large numbers of Job objects. -- Number of pods created equals number of work items, vs. each pod can process multiple work items. +- Number of pods created equals number of work items, vs. each Pod can process multiple work items. The former typically requires less modification to existing code and containers. The latter is better for large numbers of work items, for similar reasons to the previous bullet. - Several approaches use a work queue. This requires running a queue service, @@ -336,7 +341,7 @@ The pattern names are also links to examples and more detailed description. When you specify completions with `.spec.completions`, each Pod created by the Job controller has an identical [`spec`](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status). This means that -all pods will have the same command line and the same +all pods for a task will have the same command line and the same image, the same volumes, and (almost) the same environment variables. These patterns are different ways to arrange for pods to work on different things. @@ -355,29 +360,29 @@ Here, `W` is the number of work items. ### Specifying your own pod selector -Normally, when you create a job object, you do not specify `.spec.selector`. -The system defaulting logic adds this field when the job is created. +Normally, when you create a Job object, you do not specify `.spec.selector`. +The system defaulting logic adds this field when the Job is created. It picks a selector value that will not overlap with any other jobs. However, in some cases, you might need to override this automatically set selector. -To do this, you can specify the `.spec.selector` of the job. +To do this, you can specify the `.spec.selector` of the Job. Be very careful when doing this. If you specify a label selector which is not -unique to the pods of that job, and which matches unrelated pods, then pods of the unrelated -job may be deleted, or this job may count other pods as completing it, or one or both -of the jobs may refuse to create pods or run to completion. If a non-unique selector is -chosen, then other controllers (e.g. ReplicationController) and their pods may behave +unique to the pods of that Job, and which matches unrelated Pods, then pods of the unrelated +job may be deleted, or this Job may count other Pods as completing it, or one or both +Jobs may refuse to create Pods or run to completion. If a non-unique selector is +chosen, then other controllers (e.g. ReplicationController) and their Pods may behave in unpredictable ways too. Kubernetes will not stop you from making a mistake when specifying `.spec.selector`. Here is an example of a case when you might want to use this feature. -Say job `old` is already running. You want existing pods -to keep running, but you want the rest of the pods it creates -to use a different pod template and for the job to have a new name. -You cannot update the job because these fields are not updatable. -Therefore, you delete job `old` but leave its pods -running, using `kubectl delete jobs/old --cascade=false`. +Say Job `old` is already running. You want existing Pods +to keep running, but you want the rest of the Pods it creates +to use a different pod template and for the Job to have a new name. +You cannot update the Job because these fields are not updatable. +Therefore, you delete Job `old` but _leave its pods +running_, using `kubectl delete jobs/old --cascade=false`. 
Before deleting it, you make a note of what selector it uses: ``` @@ -392,11 +397,11 @@ spec: ... ``` -Then you create a new job with name `new` and you explicitly specify the same selector. -Since the existing pods have label `job-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`, -they are controlled by job `new` as well. +Then you create a new Job with name `new` and you explicitly specify the same selector. +Since the existing Pods have label `job-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`, +they are controlled by Job `new` as well. -You need to specify `manualSelector: true` in the new job since you are not using +You need to specify `manualSelector: true` in the new Job since you are not using the selector that the system normally generates for you automatically. ``` @@ -420,25 +425,25 @@ mismatch. ### Bare Pods -When the node that a pod is running on reboots or fails, the pod is terminated -and will not be restarted. However, a Job will create new pods to replace terminated ones. -For this reason, we recommend that you use a job rather than a bare pod, even if your application -requires only a single pod. +When the node that a Pod is running on reboots or fails, the pod is terminated +and will not be restarted. However, a Job will create new Pods to replace terminated ones. +For this reason, we recommend that you use a Job rather than a bare Pod, even if your application +requires only a single Pod. ### Replication Controller Jobs are complementary to [Replication Controllers](/docs/user-guide/replication-controller). -A Replication Controller manages pods which are not expected to terminate (e.g. web servers), and a Job -manages pods that are expected to terminate (e.g. batch jobs). +A Replication Controller manages Pods which are not expected to terminate (e.g. web servers), and a Job +manages Pods that are expected to terminate (e.g. batch tasks). -As discussed in [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/), `Job` is *only* appropriate for pods with -`RestartPolicy` equal to `OnFailure` or `Never`. (Note: If `RestartPolicy` is not set, the default -value is `Always`.) +As discussed in [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/), `Job` is *only* appropriate +for pods with `RestartPolicy` equal to `OnFailure` or `Never`. +(Note: If `RestartPolicy` is not set, the default value is `Always`.) ### Single Job starts Controller Pod -Another pattern is for a single Job to create a pod which then creates other pods, acting as a sort -of custom controller for those pods. This allows the most flexibility, but may be somewhat +Another pattern is for a single Job to create a Pod which then creates other Pods, acting as a sort +of custom controller for those Pods. This allows the most flexibility, but may be somewhat complicated to get started with and offers less integration with Kubernetes. One example of this pattern would be a Job which starts a Pod which runs a script that in turn @@ -446,10 +451,10 @@ starts a Spark master controller (see [spark example](https://github.com/kuberne driver, and then cleans up. An advantage of this approach is that the overall process gets the completion guarantee of a Job -object, but complete control over what pods are created and how work is assigned to them. +object, but complete control over what Pods are created and how work is assigned to them. -## Cron Jobs +## Cron Jobs {#cron-jobs} -Support for creating Jobs at specified times/dates (i.e. 
cron) is available in Kubernetes [1.4](https://github.com/kubernetes/kubernetes/pull/11980). More information is available in the [cron job documents](/docs/concepts/workloads/controllers/cron-jobs/) +You can use a [`CronJob`](/docs/concepts/workloads/controllers/cron-jobs/) to create a Job that will run at specified times/dates, similar to the Unix tool `cron`. {{% /capture %}} diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index adeaadc469674..e5db639861cd1 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -54,7 +54,7 @@ Saving this manifest into `frontend.yaml` and submitting it to a Kubernetes clus create the defined ReplicaSet and the Pods that it manages. ```shell -kubectl create -f http://k8s.io/examples/controllers/frontend.yaml +kubectl apply -f http://k8s.io/examples/controllers/frontend.yaml ``` You can then get the current ReplicaSets deployed: @@ -162,7 +162,7 @@ Suppose you create the Pods after the frontend ReplicaSet has been deployed and fulfill its replica count requirement: ```shell -kubectl create -f http://k8s.io/examples/pods/pod-rs.yaml +kubectl apply -f http://k8s.io/examples/pods/pod-rs.yaml ``` The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over @@ -184,12 +184,12 @@ pod2 0/1 Terminating 0 4s If you create the Pods first: ```shell -kubectl create -f http://k8s.io/examples/pods/pod-rs.yaml +kubectl apply -f http://k8s.io/examples/pods/pod-rs.yaml ``` And then create the ReplicaSet however: ```shell -kubectl create -f http://k8s.io/examples/controllers/frontend.yaml +kubectl apply -f http://k8s.io/examples/controllers/frontend.yaml ``` You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the @@ -239,6 +239,10 @@ matchLabels: In the ReplicaSet, `.spec.template.metadata.labels` must match `spec.selector`, or it will be rejected by the API. +{{< note >}} +For 2 ReplicaSets specifying the same `.spec.selector` but different `.spec.template.metadata.labels` and `.spec.template.spec` fields, each ReplicaSet ignores the Pods created by the other ReplicaSet. +{{< /note >}} + ### Replicas You can specify how many Pods should run concurrently by setting `.spec.replicas`. The ReplicaSet will create/delete @@ -304,7 +308,7 @@ create the defined HPA that autoscales the target ReplicaSet depending on the CP of the replicated Pods. 
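The referenced `hpa-rs.yaml` is not reproduced in this change; a minimal sketch of such a HorizontalPodAutoscaler, assuming it targets the `frontend` ReplicaSet from the earlier example, might look like:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-scaler            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: ReplicaSet
    name: frontend                 # the ReplicaSet to autoscale
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
```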
```shell -kubectl create -f https://k8s.io/examples/controllers/hpa-rs.yaml +kubectl apply -f https://k8s.io/examples/controllers/hpa-rs.yaml ``` Alternatively, you can use the `kubectl autoscale` command to accomplish the same diff --git a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md index daf0dfd59a784..499be034cb994 100644 --- a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md @@ -55,14 +55,18 @@ This example ReplicationController config runs three copies of the nginx web ser Run the example job by downloading the example file and then running this command: ```shell -$ kubectl create -f https://k8s.io/examples/controllers/replication.yaml +kubectl apply -f https://k8s.io/examples/controllers/replication.yaml +``` +``` replicationcontroller/nginx created ``` Check on the status of the ReplicationController using this command: ```shell -$ kubectl describe replicationcontrollers/nginx +kubectl describe replicationcontrollers/nginx +``` +``` Name: nginx Namespace: default Selector: app=nginx @@ -97,8 +101,10 @@ Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this: ```shell -$ pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name}) +pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name}) echo $pods +``` +``` nginx-3ntk0 nginx-4ok8v nginx-qrm3m ``` diff --git a/content/en/docs/concepts/workloads/controllers/statefulset.md b/content/en/docs/concepts/workloads/controllers/statefulset.md index 0e3a4e85698e9..e83a4f3c8bea6 100644 --- a/content/en/docs/concepts/workloads/controllers/statefulset.md +++ b/content/en/docs/concepts/workloads/controllers/statefulset.md @@ -134,6 +134,10 @@ As each Pod is created, it gets a matching DNS subdomain, taking the form: `$(podname).$(governing service domain)`, where the governing service is defined by the `serviceName` field on the StatefulSet. +As mentioned in the [limitations](#limitations) section, you are responsible for +creating the [Headless Service](/docs/concepts/services-networking/service/#headless-services) +responsible for the network identity of the pods. + Here are some examples of choices for Cluster Domain, Service name, StatefulSet name, and how that affects the DNS names for the StatefulSet's Pods. diff --git a/content/en/docs/concepts/workloads/pods/disruptions.md b/content/en/docs/concepts/workloads/pods/disruptions.md index 3a3ecb36cfc0d..6725c887f49cb 100644 --- a/content/en/docs/concepts/workloads/pods/disruptions.md +++ b/content/en/docs/concepts/workloads/pods/disruptions.md @@ -63,6 +63,11 @@ Ask your cluster administrator or consult your cloud provider or distribution do to determine if any sources of voluntary disruptions are enabled for your cluster. If none are enabled, you can skip creating Pod Disruption Budgets. +{{< caution >}} +Not all voluntary disruptions are constrained by Pod Disruption Budgets. For example, +deleting deployments or pods bypasses Pod Disruption Budgets. +{{< /caution >}} + ## Dealing with Disruptions Here are some ways to mitigate involuntary disruptions: @@ -102,7 +107,7 @@ percentage of the total. 
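For reference, such a budget is expressed as a `PodDisruptionBudget` object; a minimal sketch, assuming the application's Pods carry an `app: zookeeper` label, might be:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb                     # hypothetical name
spec:
  minAvailable: 2                  # an absolute number; a percentage such as "90%" also works
  selector:
    matchLabels:
      app: zookeeper
```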
Cluster managers and hosting providers should use tools which respect Pod Disruption Budgets by calling the [Eviction API](/docs/tasks/administer-cluster/safely-drain-node/#the-eviction-api) -instead of directly deleting pods. Examples are the `kubectl drain` command +instead of directly deleting pods or deployments. Examples are the `kubectl drain` command and the Kubernetes-on-GCE cluster upgrade script (`cluster/gce/upgrade.sh`). When a cluster administrator wants to drain a node diff --git a/content/en/docs/concepts/workloads/pods/init-containers.md b/content/en/docs/concepts/workloads/pods/init-containers.md index 6ee6dd45ce7aa..1cf214c7ec40c 100644 --- a/content/en/docs/concepts/workloads/pods/init-containers.md +++ b/content/en/docs/concepts/workloads/pods/init-containers.md @@ -83,7 +83,7 @@ Here are some ideas for how to use Init Containers: * Register this Pod with a remote server from the downward API with a command like: - curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$()&ip=$()' + `curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$()&ip=$()'` * Wait for some time before starting the app Container with a command like `sleep 60`. * Clone a git repository into a volume. @@ -180,12 +180,24 @@ spec: This Pod can be started and debugged with the following commands: ```shell -$ kubectl create -f myapp.yaml +kubectl apply -f myapp.yaml +``` +``` pod/myapp-pod created -$ kubectl get -f myapp.yaml +``` + +```shell +kubectl get -f myapp.yaml +``` +``` NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:0/2 0 6m -$ kubectl describe -f myapp.yaml +``` + +```shell +kubectl describe -f myapp.yaml +``` +``` Name: myapp-pod Namespace: default [...] @@ -218,18 +230,25 @@ Events: 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulled Successfully pulled image "busybox" 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Created Created container with docker id 5ced34a04634; Security:[seccomp=unconfined] 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container with docker id 5ced34a04634 -$ kubectl logs myapp-pod -c init-myservice # Inspect the first init container -$ kubectl logs myapp-pod -c init-mydb # Inspect the second init container +``` +```shell +kubectl logs myapp-pod -c init-myservice # Inspect the first init container +kubectl logs myapp-pod -c init-mydb # Inspect the second init container ``` Once we start the `mydb` and `myservice` services, we can see the Init Containers complete and the `myapp-pod` is created: ```shell -$ kubectl create -f services.yaml +kubectl apply -f services.yaml +``` +``` service/myservice created service/mydb created -$ kubectl get -f myapp.yaml +``` + +```shell +kubectl get -f myapp.yaml NAME READY STATUS RESTARTS AGE myapp-pod 1/1 Running 0 9m ``` diff --git a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md index 707a3d1658a90..d1e1f7e9f279b 100644 --- a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md @@ -193,7 +193,7 @@ Once Pod is assigned to a node by scheduler, kubelet starts creating containers ## Pod readiness gate -{{< feature-state for_k8s_version="v1.12" state="beta" >}} +{{< feature-state for_k8s_version="v1.14" state="stable" >}} In order to add extensibility to Pod readiness by enabling the injection of extra feedbacks or 
signals into `PodStatus`, Kubernetes 1.11 introduced a diff --git a/content/en/docs/concepts/workloads/pods/pod-overview.md b/content/en/docs/concepts/workloads/pods/pod-overview.md index 8bb3f68504655..4e5d8109a7ff1 100644 --- a/content/en/docs/concepts/workloads/pods/pod-overview.md +++ b/content/en/docs/concepts/workloads/pods/pod-overview.md @@ -4,6 +4,9 @@ reviewers: title: Pod Overview content_template: templates/concept weight: 10 +card: + name: concepts + weight: 60 --- {{% capture overview %}} @@ -101,5 +104,5 @@ Rather than specifying the current desired state of all replicas, pod templates {{% capture whatsnext %}} * Learn more about Pod behavior: * [Pod Termination](/docs/concepts/workloads/pods/pod/#termination-of-pods) - * Other Pod Topics + * [Pod Lifecycle](../pod-lifecycle) {{% /capture %}} diff --git a/content/en/docs/contribute/intermediate.md b/content/en/docs/contribute/intermediate.md index a786fbb955cfe..2ef7278621798 100644 --- a/content/en/docs/contribute/intermediate.md +++ b/content/en/docs/contribute/intermediate.md @@ -3,6 +3,9 @@ title: Intermediate contributing slug: intermediate content_template: templates/concept weight: 20 +card: + name: contribute + weight: 50 --- {{% capture overview %}} diff --git a/content/en/docs/contribute/localization.md b/content/en/docs/contribute/localization.md index 6e73492a14cc7..db3ef4e9801d9 100644 --- a/content/en/docs/contribute/localization.md +++ b/content/en/docs/contribute/localization.md @@ -5,29 +5,25 @@ approvers: - chenopis - zacharysarah - zparnold +card: + name: contribute + weight: 30 + title: Translating the docs --- {{% capture overview %}} -Documentation for Kubernetes is available in multiple languages: - -- English -- Chinese -- Japanese -- Korean - -We encourage you to add new [localizations](https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/)! +Documentation for Kubernetes is available in multiple languages. We encourage you to add new [localizations](https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/)! {{% /capture %}} - {{% capture body %}} ## Getting started -Localizations must meet some requirements for workflow (*how* to localize) and output (*what* to localize). +Localizations must meet some requirements for workflow (*how* to localize) and output (*what* to localize) before publishing. -To add a new localization of the Kubernetes documentation, you'll need to update the website by modifying the [site configuration](#modify-the-site-configuration) and [directory structure](#add-a-new-localization-directory). Then you can start [translating documents](#translating-documents)! +To add a new localization of the Kubernetes documentation, you'll need to update the website by modifying the [site configuration](#modify-the-site-configuration) and [directory structure](#add-a-new-localization-directory). Then you can start [translating documents](#translating-documents)! {{< note >}} For an example localization-related [pull request](../create-pull-request), see [this pull request](https://github.com/kubernetes/website/pull/8636) to the [Kubernetes website repo](https://github.com/kubernetes/website) adding Korean localization to the Kubernetes docs. 
@@ -209,7 +205,7 @@ SIG Docs welcomes [upstream contributions and corrections](/docs/contribute/inte {{% capture whatsnext %}} -Once a l10n meets requirements for workflow and minimum output, SIG docs will: +Once a localization meets requirements for workflow and minimum output, SIG docs will: - Enable language selection on the website - Publicize the localization's availability through [Cloud Native Computing Foundation](https://www.cncf.io/) (CNCF) channels, including the [Kubernetes blog](https://kubernetes.io/blog/). diff --git a/content/en/docs/contribute/participating.md b/content/en/docs/contribute/participating.md index 029f3bf7463ae..ca3be91d7d725 100644 --- a/content/en/docs/contribute/participating.md +++ b/content/en/docs/contribute/participating.md @@ -1,6 +1,9 @@ --- title: Participating in SIG Docs content_template: templates/concept +card: + name: contribute + weight: 40 --- {{% capture overview %}} diff --git a/content/en/docs/contribute/start.md b/content/en/docs/contribute/start.md index f4ce637c65634..b106b1666a29c 100644 --- a/content/en/docs/contribute/start.md +++ b/content/en/docs/contribute/start.md @@ -3,6 +3,9 @@ title: Start contributing slug: start content_template: templates/concept weight: 10 +card: + name: contribute + weight: 10 --- {{% capture overview %}} @@ -133,10 +136,11 @@ The SIG Docs team communicates using the following mechanisms: introduce yourself! - [Join the `kubernetes-sig-docs` mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs), where broader discussions take place and official decisions are recorded. -- Participate in the weekly SIG Docs video meeting, which is announced on the - Slack channel and the mailing list. Currently, these meetings take place on - Zoom, so you'll need to download the [Zoom client](https://zoom.us/download) - or dial in using a phone. +- Participate in the [weekly SIG Docs](https://github.com/kubernetes/community/tree/master/sig-docs) video meeting, which is announced on the Slack channel and the mailing list. Currently, these meetings take place on Zoom, so you'll need to download the [Zoom client](https://zoom.us/download) or dial in using a phone. + +{{< note >}} +You can also check the SIG Docs weekly meeting on the [Kubernetes community meetings calendar](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles). +{{< /note >}} ## Improve existing content diff --git a/content/en/docs/contribute/style/content-organization.md b/content/en/docs/contribute/style/content-organization.md index a89e686ac988e..e954c98118fe9 100644 --- a/content/en/docs/contribute/style/content-organization.md +++ b/content/en/docs/contribute/style/content-organization.md @@ -108,7 +108,6 @@ Another widely used example is the `includes` bundle. 
It sets `headless: true` i en/includes ├── default-storage-class-prereqs.md ├── federated-task-tutorial-prereqs.md -├── federation-content-moved.md ├── index.md ├── partner-script.js ├── partner-style.css diff --git a/content/en/docs/contribute/style/page-templates.md b/content/en/docs/contribute/style/page-templates.md index 1684ed3c2699c..033ddf6223eef 100644 --- a/content/en/docs/contribute/style/page-templates.md +++ b/content/en/docs/contribute/style/page-templates.md @@ -2,6 +2,9 @@ title: Using Page Templates content_template: templates/concept weight: 30 +card: + name: contribute + weight: 30 --- {{% capture overview %}} diff --git a/content/en/docs/contribute/style/style-guide.md b/content/en/docs/contribute/style/style-guide.md index 8cd5e1433703e..e0cc3f17d4285 100644 --- a/content/en/docs/contribute/style/style-guide.md +++ b/content/en/docs/contribute/style/style-guide.md @@ -3,6 +3,10 @@ title: Documentation Style Guide linktitle: Style guide content_template: templates/concept weight: 10 +card: + name: contribute + weight: 20 + title: Documentation Style Guide --- {{% capture overview %}} @@ -106,8 +110,13 @@ document, use the backtick (`). DoDon't The kubectl run command creates a Deployment.The "kubectl run" command creates a Deployment. For declarative management, use kubectl apply.For declarative management, use "kubectl apply". + Enclose code samples with triple backticks. (```)Enclose code samples with any other syntax. +{{< note >}} +The website supports syntax highlighting for code samples, but specifying a language is optional. +{{< /note >}} + ### Use code style for object field names @@ -318,7 +327,7 @@ Shortcodes inside include statements will break the build. You must insert them ``` {{}} -{{}} +{{}} {{}} ``` diff --git a/content/en/docs/doc-contributor-tools/snippets/atom-snippets.cson b/content/en/docs/doc-contributor-tools/snippets/atom-snippets.cson index 5d4573938be10..878ccc4ed73fc 100644 --- a/content/en/docs/doc-contributor-tools/snippets/atom-snippets.cson +++ b/content/en/docs/doc-contributor-tools/snippets/atom-snippets.cson @@ -116,7 +116,7 @@ 'body': '{{< toc >}}' 'Insert code from file': 'prefix': 'codefile' - 'body': '{{< code file="$1" >}}' + 'body': '{{< codenew file="$1" >}}' 'Insert feature state': 'prefix': 'fstate' 'body': '{{< feature-state for_k8s_version="$1" state="$2" >}}' @@ -223,4 +223,4 @@ ${7:"next-steps-or-delete"} {{% /capture %}} """ - \ No newline at end of file + diff --git a/content/en/docs/getting-started-guides/ubuntu/_index.md b/content/en/docs/getting-started-guides/ubuntu/_index.md index 1b1eced18964a..ed60790b67388 100644 --- a/content/en/docs/getting-started-guides/ubuntu/_index.md +++ b/content/en/docs/getting-started-guides/ubuntu/_index.md @@ -51,7 +51,7 @@ These are more in-depth guides for users choosing to run Kubernetes in productio - [Decommissioning](/docs/getting-started-guides/ubuntu/decommissioning/) - [Operational Considerations](/docs/getting-started-guides/ubuntu/operational-considerations/) - [Glossary](/docs/getting-started-guides/ubuntu/glossary/) - + - [Authenticating with LDAP](https://www.ubuntu.com/kubernetes/docs/ldap) ## Third-party Product Integrations @@ -73,5 +73,3 @@ We're normally following the following Slack channels: and we monitor the Kubernetes mailing lists. 
{{% /capture %}} - - diff --git a/content/en/docs/getting-started-guides/ubuntu/networking.md b/content/en/docs/getting-started-guides/ubuntu/networking.md index e3afca9e871f5..41c43cc7b1215 100644 --- a/content/en/docs/getting-started-guides/ubuntu/networking.md +++ b/content/en/docs/getting-started-guides/ubuntu/networking.md @@ -48,7 +48,7 @@ empty string or undefined the code will attempt to find the default network adapter similar to the following command: ```bash -$ route | grep default | head -n 1 | awk {'print $8'} +route | grep default | head -n 1 | awk {'print $8'} ``` **cidr** The network range to configure the flannel or canal SDN to declare when diff --git a/content/en/docs/getting-started-guides/ubuntu/upgrades.md b/content/en/docs/getting-started-guides/ubuntu/upgrades.md index b34108c2c0018..5f87c0b3c8987 100644 --- a/content/en/docs/getting-started-guides/ubuntu/upgrades.md +++ b/content/en/docs/getting-started-guides/ubuntu/upgrades.md @@ -107,15 +107,15 @@ but is a safer upgrade route. #### Blue/green worker upgrade -Given a deployment where the workers are named kubernetes-alpha. +Given a deployment where the workers are named kubernetes-blue. Deploy new workers: - juju deploy kubernetes-alpha + juju deploy kubernetes-green Pause the old workers so your workload migrates: - juju run-action kubernetes-alpha/# pause + juju run-action kubernetes-blue/# pause Verify old workloads have migrated with: @@ -123,7 +123,7 @@ Verify old workloads have migrated with: Tear down old workers with: - juju remove-application kubernetes-alpha + juju remove-application kubernetes-blue #### In place worker upgrade diff --git a/content/en/docs/home/_index.md b/content/en/docs/home/_index.md index f6f5f6150a22b..31f37880ff631 100644 --- a/content/en/docs/home/_index.md +++ b/content/en/docs/home/_index.md @@ -16,4 +16,43 @@ menu: weight: 20 post: >

Learn how to use Kubernetes with conceptual, tutorial, and reference documentation. You can even help contribute to the docs!

+overview: > + Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerized applications. The open source project is hosted by the Cloud Native Computing Foundation (CNCF). +cards: +- name: concepts + title: "Understand the basics" + description: "Learn about Kubernetes and its fundamental concepts." + button: "Learn Concepts" + button_path: "/docs/concepts" +- name: tutorials + title: "Try Kubernetes" + description: "Follow tutorials to learn how to deploy applications in Kubernetes." + button: "View Tutorials" + button_path: "/docs/tutorials" +- name: setup + title: "Set up a cluster" + description: "Get Kubernetes running based on your resources and needs." + button: "Set up Kubernetes" + button_path: "/docs/setup" +- name: tasks + title: "Learn how to use Kubernetes" + description: "Look up common tasks and how to perform them using a short sequence of steps." + button: "View Tasks" + button_path: "/docs/tasks" +- name: reference + title: Look up reference information + description: Browse terminology, command line syntax, API resource types, and setup tool documentation. + button: View Reference + button_path: /docs/reference +- name: contribute + title: Contribute to the docs + description: Anyone can contribute, whether you’re new to the project or you’ve been around a long time. + button: Contribute to the docs + button_path: /docs/contribute +- name: download + title: Download Kubernetes + description: If you are installing Kubernetes or upgrading to the newest version, refer to the current release notes. +- name: about + title: About the documentation + description: This website contains documentation for the current and previous 4 versions of Kubernetes. --- diff --git a/content/en/docs/home/supported-doc-versions.md b/content/en/docs/home/supported-doc-versions.md index 7747ea2b76435..45a6012eaa148 100644 --- a/content/en/docs/home/supported-doc-versions.md +++ b/content/en/docs/home/supported-doc-versions.md @@ -1,6 +1,10 @@ --- title: Supported Versions of the Kubernetes Documentation content_template: templates/concept +card: + name: about + weight: 10 + title: Supported Versions of the Documentation --- {{% capture overview %}} diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md index fb66b07cc58b3..e067c0dbbe958 100644 --- a/content/en/docs/reference/access-authn-authz/admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md @@ -97,9 +97,9 @@ NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeClaimResize,Defaul ## What does each admission controller do? -### AlwaysAdmit (DEPRECATED) {#alwaysadmit} +### AlwaysAdmit {#alwaysadmit} {{< feature-state for_k8s_version="v1.13" state="deprecated" >}} -Use this admission controller by itself to pass-through all requests. AlwaysAdmit is DEPRECATED as no real meaning. +This admission controller allows all pods into the cluster. It is deprecated because its behavior is the same as if there were no admission controller at all. ### AlwaysPullImages {#alwayspullimages} @@ -111,7 +111,7 @@ scheduled onto the right node), without any authorization check against the imag is enabled, images are always pulled prior to starting containers, which means valid credentials are required. 
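Admission controllers like this one are enabled through the API server's plugin flag; a sketch of turning on AlwaysPullImages in addition to an existing plugin set (the exact list depends on your cluster and release) might be:

```shell
# Append AlwaysPullImages to whatever admission plugins the cluster already enables.
kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,AlwaysPullImages
```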
-### AlwaysDeny (DEPRECATED) {#alwaysdeny} +### AlwaysDeny {#alwaysdeny} {{< feature-state for_k8s_version="v1.13" state="deprecated" >}} Rejects all requests. AlwaysDeny is DEPRECATED as no real meaning. @@ -138,26 +138,30 @@ if the pods don't already have toleration for taints `node.kubernetes.io/not-ready:NoExecute` or `node.alpha.kubernetes.io/unreachable:NoExecute`. -### DenyExecOnPrivileged (deprecated) {#denyexeconprivileged} +### DenyExecOnPrivileged {#denyexeconprivileged} {{< feature-state for_k8s_version="v1.13" state="deprecated" >}} This admission controller will intercept all requests to exec a command in a pod if that pod has a privileged container. -If your cluster supports privileged containers, and you want to restrict the ability of end-users to exec -commands in those containers, we strongly encourage enabling this admission controller. - This functionality has been merged into [DenyEscalatingExec](#denyescalatingexec). +The DenyExecOnPrivileged admission plugin is deprecated and will be removed in v1.18. + +Use of a policy-based admission plugin (like [PodSecurityPolicy](#podsecuritypolicy) or a custom admission plugin) +which can be targeted at specific users or Namespaces and also protects against creation of overly privileged Pods +is recommended instead. -### DenyEscalatingExec {#denyescalatingexec} +### DenyEscalatingExec {#denyescalatingexec} {{< feature-state for_k8s_version="v1.13" state="deprecated" >}} This admission controller will deny exec and attach commands to pods that run with escalated privileges that allow host access. This includes pods that run as privileged, have access to the host IPC namespace, and have access to the host PID namespace. -If your cluster supports containers that run with escalated privileges, and you want to -restrict the ability of end-users to exec commands in those containers, we strongly encourage -enabling this admission controller. +The DenyEscalatingExec admission plugin is deprecated and will be removed in v1.18. + +Use of a policy-based admission plugin (like [PodSecurityPolicy](#podsecuritypolicy) or a custom admission plugin) +which can be targeted at specific users or Namespaces and also protects against creation of overly privileged Pods +is recommended instead. -### EventRateLimit (alpha) {#eventratelimit} +### EventRateLimit {#eventratelimit} {{< feature-state for_k8s_version="v1.13" state="alpha" >}} This admission controller mitigates the problem where the API server gets flooded by event requests. The cluster admin can specify event rate limits by: @@ -335,6 +339,13 @@ Examples of information you might put here are: In any case, the annotations are provided by the user and are not validated by Kubernetes in any way. In the future, if an annotation is determined to be widely useful, it may be promoted to a named field of ImageReviewSpec. +### Initializers {#initializers} {{< feature-state for_k8s_version="v1.13" state="alpha" >}} + +The admission controller determines the initializers of a resource based on the existing +`InitializerConfiguration`s. It sets the pending initializers by modifying the +metadata of the resource to be created. +For more information, please check [Dynamic Admission Control](/docs/reference/access-authn-authz/extensible-admission-controllers/). 
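For orientation, the object this controller consumes uses the alpha `admissionregistration.k8s.io/v1alpha1` API; a minimal sketch of an `InitializerConfiguration` (the initializer name here is hypothetical) might look like:

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: InitializerConfiguration
metadata:
  name: example-config
initializers:
  # Initializer names must be fully qualified, for example "podimage.example.com".
  - name: podimage.example.com
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
```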
+ ### LimitPodHardAntiAffinityTopology {#limitpodhardantiaffinitytopology} This admission controller denies any pod that defines `AntiAffinity` topology key other than @@ -350,7 +361,7 @@ applies a 0.1 CPU requirement to all Pods in the `default` namespace. See the [limitRange design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md) and the [example of Limit Range](/docs/tasks/configure-pod-container/limit-range/) for more details. -### MutatingAdmissionWebhook (beta in 1.9) {#mutatingadmissionwebhook} +### MutatingAdmissionWebhook {#mutatingadmissionwebhook} {{< feature-state for_k8s_version="v1.13" state="beta" >}} This admission controller calls any mutating webhooks which match the request. Matching webhooks are called in serial; each one may modify the object if it desires. @@ -418,9 +429,9 @@ This label prefix is reserved for administrators to label their `Node` objects f and kubelets will not be allowed to modify labels with that prefix. * **Allows** kubelets to add/remove/update these labels and label prefixes: * `kubernetes.io/hostname` - * `beta.kubernetes.io/arch` + * `kubernetes.io/arch` + * `kubernetes.io/os` * `beta.kubernetes.io/instance-type` - * `beta.kubernetes.io/os` * `failure-domain.beta.kubernetes.io/region` * `failure-domain.beta.kubernetes.io/zone` * `kubelet.kubernetes.io/`-prefixed labels @@ -438,7 +449,7 @@ This admission controller also protects the access to `metadata.ownerReferences[ of an object, so that only users with "update" permission to the `finalizers` subresource of the referenced *owner* can change it. -### PersistentVolumeLabel (DEPRECATED) {#persistentvolumelabel} +### PersistentVolumeLabel {#persistentvolumelabel} {{< feature-state for_k8s_version="v1.13" state="deprecated" >}} This admission controller automatically attaches region or zone labels to PersistentVolumes as defined by the cloud provider (for example, GCE or AWS). @@ -599,7 +610,7 @@ We strongly recommend using this admission controller if you intend to make use The `StorageObjectInUseProtection` plugin adds the `kubernetes.io/pvc-protection` or `kubernetes.io/pv-protection` finalizers to newly created Persistent Volume Claims (PVCs) or Persistent Volumes (PV). In case a user deletes a PVC or PV the PVC or PV is not removed until the finalizer is removed from the PVC or PV by PVC or PV Protection Controller. Refer to the [Storage Object in Use Protection](/docs/concepts/storage/persistent-volumes/#storage-object-in-use-protection) for more detailed information. -### ValidatingAdmissionWebhook (alpha in 1.8; beta in 1.9) {#validatingadmissionwebhook} +### ValidatingAdmissionWebhook {#validatingadmissionwebhook} {{< feature-state for_k8s_version="v1.13" state="beta" >}} This admission controller calls any validating webhooks which match the request. Matching webhooks are called in parallel; if any of them rejects the request, the request @@ -647,27 +658,4 @@ in the mutating phase. For earlier versions, there was no concept of validating vs mutating and the admission controllers ran in the exact order specified. 
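Rather than relying on the per-release flag lists that this change removes below, one quick way to see which plugins a given binary enables by default (assuming you can run `kube-apiserver` locally) is:

```shell
kube-apiserver -h | grep enable-admission-plugins
```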
-* v1.6 - v1.8 - - ```shell - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds - ``` - -* v1.4 - v1.5 - - ```shell - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota - ``` - -* v1.2 - v1.3 - - ```shell - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota - ``` - -* v1.0 - v1.1 - - ```shell - --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,PersistentVolumeLabel,ResourceQuota - ``` {{% /capture %}} diff --git a/content/en/docs/reference/access-authn-authz/authentication.md b/content/en/docs/reference/access-authn-authz/authentication.md index d7a96526929a0..54330b0a79e74 100644 --- a/content/en/docs/reference/access-authn-authz/authentication.md +++ b/content/en/docs/reference/access-authn-authz/authentication.md @@ -322,6 +322,7 @@ To enable the plugin, configure the following flags on the API server: | `--oidc-username-prefix` | Prefix prepended to username claims to prevent clashes with existing names (such as `system:` users). For example, the value `oidc:` will create usernames like `oidc:jane.doe`. If this flag isn't provided and `--oidc-user-claim` is a value other than `email` the prefix defaults to `( Issuer URL )#` where `( Issuer URL )` is the value of `--oidc-issuer-url`. The value `-` can be used to disable all prefixing. | `oidc:` | No | | `--oidc-groups-claim` | JWT claim to use as the user's group. If the claim is present it must be an array of strings. | groups | No | | `--oidc-groups-prefix` | Prefix prepended to group claims to prevent clashes with existing names (such as `system:` groups). For example, the value `oidc:` will create group names like `oidc:engineering` and `oidc:infra`. | `oidc:` | No | +| `--oidc-required-claim` | A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims. | `claim=value` | No | | `--oidc-ca-file` | The path to the certificate for the CA that signed your identity provider's web certificate. Defaults to the host's root CAs. | `/etc/kubernetes/ssl/kc-ca.pem` | No | Importantly, the API server is not an OAuth2 client, rather it can only be @@ -348,7 +349,7 @@ Setup instructions for specific systems: - [UAA](http://apigee.com/about/blog/engineering/kubernetes-authentication-enterprise) - [Dex](https://speakerdeck.com/ericchiang/kubernetes-access-control-with-dex) -- [OpenUnison](https://github.com/TremoloSecurity/openunison-qs-kubernetes) +- [OpenUnison](https://www.tremolosecurity.com/orchestra-k8s/) #### Using kubectl diff --git a/content/en/docs/reference/access-authn-authz/authorization.md b/content/en/docs/reference/access-authn-authz/authorization.md index 366fbefe21ff8..6e9baec83b790 100644 --- a/content/en/docs/reference/access-authn-authz/authorization.md +++ b/content/en/docs/reference/access-authn-authz/authorization.md @@ -47,7 +47,7 @@ Kubernetes reviews only the following API request attributes: * **extra** - A map of arbitrary string keys to string values, provided by the authentication layer. * **API** - Indicates whether the request is for an API resource. * **Request path** - Path to miscellaneous non-resource endpoints like `/api` or `/healthz`. 
- * **API request verb** - API verbs `get`, `list`, `create`, `update`, `patch`, `watch`, `proxy`, `redirect`, `delete`, and `deletecollection` are used for resource requests. To determine the request verb for a resource API endpoint, see [Determine the request verb](/docs/reference/access-authn-authz/authorization/#determine-whether-a-request-is-allowed-or-denied) below. + * **API request verb** - API verbs `get`, `list`, `create`, `update`, `patch`, `watch`, `proxy`, `redirect`, `delete`, and `deletecollection` are used for resource requests. To determine the request verb for a resource API endpoint, see [Determine the request verb](/docs/reference/access-authn-authz/authorization/#determine-the-request-verb). * **HTTP request verb** - HTTP verbs `get`, `post`, `put`, and `delete` are used for non-resource requests. * **Resource** - The ID or name of the resource that is being accessed (for resource requests only) -- For resource requests using `get`, `update`, `patch`, and `delete` verbs, you must provide the resource name. * **Subresource** - The subresource that is being accessed (for resource requests only). @@ -90,9 +90,16 @@ a given action, and works regardless of the authorization mode used. ```bash -$ kubectl auth can-i create deployments --namespace dev +kubectl auth can-i create deployments --namespace dev +``` +``` yes -$ kubectl auth can-i create deployments --namespace prod +``` + +```shell +kubectl auth can-i create deployments --namespace prod +``` +``` no ``` @@ -100,7 +107,9 @@ Administrators can combine this with [user impersonation](/docs/reference/access to determine what action other users can perform. ```bash -$ kubectl auth can-i list secrets --namespace dev --as dave +kubectl auth can-i list secrets --namespace dev --as dave +``` +``` no ``` @@ -116,7 +125,9 @@ These APIs can be queried by creating normal Kubernetes resources, where the res field of the returned object is the result of the query. ```bash -$ kubectl create -f - -o yaml << EOF +kubectl create -f - -o yaml << EOF +``` +``` apiVersion: authorization.k8s.io/v1 kind: SelfSubjectAccessReview spec: diff --git a/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md b/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md index b4ca5a0696466..d2fb12a3e84b3 100644 --- a/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md @@ -6,6 +6,7 @@ reviewers: - caesarxuchao - deads2k - liggitt +- mbohlool title: Dynamic Admission Control content_template: templates/concept weight: 40 @@ -66,6 +67,13 @@ that is validated in a Kubernetes e2e test. The webhook handles the `admissionReview` requests sent by the apiservers, and sends back its decision wrapped in `admissionResponse`. +the `admissionReview` request can have different versions (e.g. v1beta1 or `v1` in a future version). +The webhook can define what version they accept using `admissionReviewVersions` field. API server +will try to use first version in the list which it supports. If none of the versions specified +in this list supported by API server, validation will fail for this object. If the webhook +configuration has already been persisted, calls to the webhook will fail and be +subject to the failure policy. 
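For orientation, the decision the webhook sends back is a small object that echoes the request's UID; a minimal sketch of an allowing response in the `v1beta1` version discussed here might be:

```yaml
apiVersion: admission.k8s.io/v1beta1
kind: AdmissionReview
response:
  uid: "<uid copied from the incoming AdmissionReview request>"
  allowed: true
```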
+ The example admission webhook server leaves the `ClientAuth` field [empty](https://github.com/kubernetes/kubernetes/blob/v1.13.0/test/images/webhook/config.go#L47-L48), which defaults to `NoClientCert`. This means that the webhook server does not @@ -111,18 +119,32 @@ webhooks: - CREATE resources: - pods + scope: "Namespaced" clientConfig: service: namespace: name: caBundle: + admissionReviewVersions: + - v1beta1 + timeoutSeconds: 1 ``` +The scope field specifies if only cluster-scoped resources ("Cluster") or namespace-scoped +resources ("Namespaced") will match this rule. "*" means that there are no scope restrictions. + {{< note >}} When using `clientConfig.service`, the server cert must be valid for `..svc`. {{< /note >}} +{{< note >}} +Default timeout for a webhook call is 30 seconds but starting in kubernetes 1.14 you +can set the timeout and it is encouraged to use a very small timeout for webhooks. +If the webhook call times out, the request is handled according to the webhook's +failure policy. +{{< /note >}} + When an apiserver receives a request that matches one of the `rules`, the apiserver sends an `admissionReview` request to webhook as specified in the `clientConfig`. @@ -130,13 +152,6 @@ apiserver sends an `admissionReview` request to webhook as specified in the After you create the webhook configuration, the system will take a few seconds to honor the new configuration. -{{< note >}} -When the webhook plugin is deployed into the Kubernetes cluster as a -service, it has to expose its service on the 443 port. The communication -between the API server and the webhook service may fail if a different port -is used. -{{< /note >}} - ### Authenticate apiservers If your admission webhooks require authentication, you can configure the diff --git a/content/en/docs/reference/access-authn-authz/rbac.md b/content/en/docs/reference/access-authn-authz/rbac.md index b90509b924341..8f6ad335f6e66 100644 --- a/content/en/docs/reference/access-authn-authz/rbac.md +++ b/content/en/docs/reference/access-authn-authz/rbac.md @@ -26,7 +26,7 @@ To enable RBAC, start the apiserver with `--authorization-mode=RBAC`. The RBAC API declares four top-level types which will be covered in this section. Users can interact with these resources as they would with any other API resource (via `kubectl`, API calls, etc.). For instance, -`kubectl create -f (resource).yml` can be used with any of these examples, +`kubectl apply -f (resource).yml` can be used with any of these examples, though readers who wish to follow along should review the section on bootstrapping first. @@ -148,6 +148,24 @@ roleRef: apiGroup: rbac.authorization.k8s.io ``` +You cannot modify which `Role` or `ClusterRole` a binding object refers to. +Attempts to change the `roleRef` field of a binding object will result in a validation error. +To change the `roleRef` field on an existing binding object, the binding object must be deleted and recreated. +There are two primary reasons for this restriction: + +1. A binding to a different role is a fundamentally different binding. +Requiring a binding to be deleted/recreated in order to change the `roleRef` +ensures the full list of subjects in the binding is intended to be granted +the new role (as opposed to enabling accidentally modifying just the roleRef +without verifying all of the existing subjects should be given the new role's permissions). +2. 
Making `roleRef` immutable allows giving `update` permission on an existing binding object +to a user, which lets them manage the list of subjects, without being able to change the +role that is granted to those subjects. + +The `kubectl auth reconcile` command-line utility creates or updates a manifest file containing RBAC objects, +and handles deleting and recreating binding objects if required to change the role they refer to. +See [command usage and examples](#kubectl-auth-reconcile) for more information. + ### Referring to Resources Most resources are represented by a string representation of their name, such as "pods", just as it @@ -471,13 +489,18 @@ NOTE: editing the role is not recommended as changes will be overwritten on API - - + + + + + + + - +
<tr>
<td><b>system:basic-user</b></td>
-<td><b>system:authenticated</b> and <b>system:unauthenticated</b> groups</td>
-<td>Allows a user read-only access to basic information about themselves.</td>
+<td><b>system:authenticated</b> group</td>
+<td>Allows a user read-only access to basic information about themselves. Prior to 1.14, this role was also bound to `system:unauthenticated` by default.</td>
</tr>
<tr>
<td><b>system:discovery</b></td>
+<td><b>system:authenticated</b> group</td>
+<td>Allows read-only access to API discovery endpoints needed to discover and negotiate an API level. Prior to 1.14, this role was also bound to `system:unauthenticated` by default.</td>
+</tr>
+<tr>
+<td><b>system:public-info-viewer</b></td>
<td><b>system:authenticated</b> and <b>system:unauthenticated</b> groups</td>
-<td>Allows read-only access to API discovery endpoints needed to discover and negotiate an API level.</td>
+<td>Allows read-only access to non-sensitive information about the cluster. Introduced in 1.14.</td>
</tr>
@@ -677,9 +700,9 @@ Because this is enforced at the API level, it applies even when the RBAC authori A user can only create/update a role if at least one of the following things is true: -1. they already have all the permissions contained in the role, at the same scope as the object being modified +1. They already have all the permissions contained in the role, at the same scope as the object being modified (cluster-wide for a `ClusterRole`, within the same namespace or cluster-wide for a `Role`) -2. they are given explicit permission to perform the `escalate` verb on the `roles` or `clusterroles` resource in the `rbac.authorization.k8s.io` API group (Kubernetes 1.12 and newer) +2. They are given explicit permission to perform the `escalate` verb on the `roles` or `clusterroles` resource in the `rbac.authorization.k8s.io` API group (Kubernetes 1.12 and newer) For example, if "user-1" does not have the ability to list secrets cluster-wide, they cannot create a `ClusterRole` containing that permission. To allow a user to create/update roles: @@ -738,46 +761,156 @@ To bootstrap initial roles and role bindings: ## Command-line Utilities -Two `kubectl` commands exist to grant roles within a namespace or across the entire cluster. +### `kubectl create role` + +Creates a `Role` object defining permissions within a single namespace. Examples: + +* Create a `Role` named "pod-reader" that allows user to perform "get", "watch" and "list" on pods: + + ``` + kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods + ``` + +* Create a `Role` named "pod-reader" with resourceNames specified: + + ``` + kubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod + ``` + +* Create a `Role` named "foo" with apiGroups specified: + + ``` + kubectl create role foo --verb=get,list,watch --resource=replicasets.apps + ``` + +* Create a `Role` named "foo" with subresource permissions: + + ``` + kubectl create role foo --verb=get,list,watch --resource=pods,pods/status + ``` + +* Create a `Role` named "my-component-lease-holder" with permissions to get/update a resource with a specific name: + + ``` + kubectl create role my-component-lease-holder --verb=get,list,watch,update --resource=lease --resource-name=my-component + ``` + +### `kubectl create clusterrole` + +Creates a `ClusterRole` object. 
Examples: + +* Create a `ClusterRole` named "pod-reader" that allows user to perform "get", "watch" and "list" on pods: + + ``` + kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods + ``` + +* Create a `ClusterRole` named "pod-reader" with resourceNames specified: + + ``` + kubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod + ``` + +* Create a `ClusterRole` named "foo" with apiGroups specified: + + ``` + kubectl create clusterrole foo --verb=get,list,watch --resource=replicasets.apps + ``` + +* Create a `ClusterRole` named "foo" with subresource permissions: + + ``` + kubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status + ``` + +* Create a `ClusterRole` name "foo" with nonResourceURL specified: + + ``` + kubectl create clusterrole "foo" --verb=get --non-resource-url=/logs/* + ``` + +* Create a `ClusterRole` name "monitoring" with aggregationRule specified: + + ``` + kubectl create clusterrole monitoring --aggregation-rule="rbac.example.com/aggregate-to-monitoring=true" + ``` ### `kubectl create rolebinding` Grants a `Role` or `ClusterRole` within a specific namespace. Examples: -* Grant the `admin` `ClusterRole` to a user named "bob" in the namespace "acme": +* Within the namespace "acme", grant the permissions in the `admin` `ClusterRole` to a user named "bob": ``` kubectl create rolebinding bob-admin-binding --clusterrole=admin --user=bob --namespace=acme ``` -* Grant the `view` `ClusterRole` to a service account named "myapp" in the namespace "acme": +* Within the namespace "acme", grant the permissions in the `view` `ClusterRole` to the service account in the namespace "acme" named "myapp" : ``` kubectl create rolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp --namespace=acme ``` +* Within the namespace "acme", grant the permissions in the `view` `ClusterRole` to a service account in the namespace "myappnamespace" named "myapp": + + ``` + kubectl create rolebinding myappnamespace-myapp-view-binding --clusterrole=view --serviceaccount=myappnamespace:myapp --namespace=acme + ``` + ### `kubectl create clusterrolebinding` Grants a `ClusterRole` across the entire cluster, including all namespaces. 
Examples: -* Grant the `cluster-admin` `ClusterRole` to a user named "root" across the entire cluster: +* Across the entire cluster, grant the permissions in the `cluster-admin` `ClusterRole` to a user named "root": ``` kubectl create clusterrolebinding root-cluster-admin-binding --clusterrole=cluster-admin --user=root ``` -* Grant the `system:node` `ClusterRole` to a user named "kubelet" across the entire cluster: +* Across the entire cluster, grant the permissions in the `system:node-proxier ` `ClusterRole` to a user named "system:kube-proxy": ``` - kubectl create clusterrolebinding kubelet-node-binding --clusterrole=system:node --user=kubelet + kubectl create clusterrolebinding kube-proxy-binding --clusterrole=system:node-proxier --user=system:kube-proxy ``` -* Grant the `view` `ClusterRole` to a service account named "myapp" in the namespace "acme" across the entire cluster: +* Across the entire cluster, grant the permissions in the `view` `ClusterRole` to a service account named "myapp" in the namespace "acme": ``` kubectl create clusterrolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp ``` +### `kubectl auth reconcile` {#kubectl-auth-reconcile} + +Creates or updates `rbac.authorization.k8s.io/v1` API objects from a manifest file. + +Missing objects are created, and the containing namespace is created for namespaced objects, if required. + +Existing roles are updated to include the permissions in the input objects, +and remove extra permissions if `--remove-extra-permissions` is specified. + +Existing bindings are updated to include the subjects in the input objects, +and remove extra subjects if `--remove-extra-subjects` is specified. + +Examples: + +* Test applying a manifest file of RBAC objects, displaying changes that would be made: + + ``` + kubectl auth reconcile -f my-rbac-rules.yaml --dry-run + ``` + +* Apply a manifest file of RBAC objects, preserving any extra permissions (in roles) and any extra subjects (in bindings): + + ``` + kubectl auth reconcile -f my-rbac-rules.yaml + ``` + +* Apply a manifest file of RBAC objects, removing any extra permissions (in roles) and any extra subjects (in bindings): + + ``` + kubectl auth reconcile -f my-rbac-rules.yaml --remove-extra-subjects --remove-extra-permissions + ``` + See the CLI help for detailed usage. ## Service Account Permissions diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 43dc0c9be627c..8807d19b2d916 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -55,9 +55,17 @@ different Kubernetes components. 
| `CPUManager` | `true` | Beta | 1.10 | | | `CRIContainerLogRotation` | `false` | Alpha | 1.10 | 1.10 | | `CRIContainerLogRotation` | `true` | Beta| 1.11 | | -| `CSIBlockVolume` | `false` | Alpha | 1.11 | | -| `CSIDriverRegistry` | `false` | Alpha | 1.12 | | -| `CSINodeInfo` | `false` | Alpha | 1.12 | | +| `CSIBlockVolume` | `false` | Alpha | 1.11 | 1.13 | +| `CSIBlockVolume` | `true` | Beta | 1.14 | | +| `CSIDriverRegistry` | `false` | Alpha | 1.12 | 1.13 | +| `CSIDriverRegistry` | `true` | Beta | 1.14 | | +| `CSIInlineVolume` | `false` | Alpha | 1.14 | - | +| `CSIMigration` | `false` | Alpha | 1.14 | | +| `CSIMigrationAWS` | `false` | Alpha | 1.14 | | +| `CSIMigrationGCE` | `false` | Alpha | 1.14 | | +| `CSIMigrationOpenStack` | `false` | Alpha | 1.14 | | +| `CSINodeInfo` | `false` | Alpha | 1.12 | 1.13 | +| `CSINodeInfo` | `true` | Beta | 1.14 | | | `CSIPersistentVolume` | `false` | Alpha | 1.9 | 1.9 | | `CSIPersistentVolume` | `true` | Beta | 1.10 | 1.12 | | `CSIPersistentVolume` | `true` | GA | 1.13 | - | @@ -79,8 +87,8 @@ different Kubernetes components. | `DynamicVolumeProvisioning` | `true` | Alpha | 1.3 | 1.7 | | `DynamicVolumeProvisioning` | `true` | GA | 1.8 | | | `EnableEquivalenceClassCache` | `false` | Alpha | 1.8 | | +| `ExpandCSIVolumes` | `false` | Alpha | 1.14 | | | | `ExpandInUsePersistentVolumes` | `false` | Alpha | 1.11 | 1.13 | | -| `ExpandInUsePersistentVolumes` | `true` | Beta | 1.14 | | | `ExpandPersistentVolumes` | `false` | Alpha | 1.8 | 1.10 | | `ExpandPersistentVolumes` | `true` | Beta | 1.11 | | | `ExperimentalCriticalPodAnnotation` | `false` | Alpha | 1.5 | | @@ -104,8 +112,10 @@ different Kubernetes components. | `MountPropagation` | `true` | GA | 1.12 | | | `NodeLease` | `false` | Alpha | 1.12 | | | `PersistentLocalVolumes` | `false` | Alpha | 1.7 | 1.9 | -| `PersistentLocalVolumes` | `true` | Beta | 1.10 | | -| `PodPriority` | `false` | Alpha | 1.8 | | +| `PersistentLocalVolumes` | `true` | Beta | 1.10 | 1.13 | +| `PersistentLocalVolumes` | `true` | GA | 1.14 | | +| `PodPriority` | `false` | Alpha | 1.8 | 1.10 | +| `PodPriority` | `true` | Beta | 1.11 | | | `PodReadinessGates` | `false` | Alpha | 1.11 | | | `PodReadinessGates` | `true` | Beta | 1.12 | | | `PodShareProcessNamespace` | `false` | Alpha | 1.10 | | @@ -118,9 +128,10 @@ different Kubernetes components. | `RotateKubeletClientCertificate` | `true` | Beta | 1.7 | | | `RotateKubeletServerCertificate` | `false` | Alpha | 1.7 | 1.11 | | `RotateKubeletServerCertificate` | `true` | Beta | 1.12 | | -| `RunAsGroup` | `false` | Alpha | 1.10 | | -| `RuntimeClass` | `false` | Alpha | 1.12 | | +| `RunAsGroup` | `true` | Beta | 1.14 | | +| `RuntimeClass` | `true` | Beta | 1.14 | | | `SCTPSupport` | `false` | Alpha | 1.12 | | +| `ServerSideApply` | `false` | Alpha | 1.14 | | | `ServiceNodeExclusion` | `false` | Alpha | 1.8 | | | `StorageObjectInUseProtection` | `true` | Beta | 1.10 | 1.10 | | `StorageObjectInUseProtection` | `true` | GA | 1.11 | | @@ -147,6 +158,7 @@ different Kubernetes components. | `VolumeSnapshotDataSource` | `false` | Alpha | 1.12 | - | | `ScheduleDaemonSetPods` | `false` | Alpha | 1.11 | 1.11 | | `ScheduleDaemonSetPods` | `true` | Beta | 1.12 | | +| `WindowsGMSA` | `false` | Alpha | 1.14 | | ## Using a Feature @@ -213,11 +225,16 @@ Each feature gate is designed for enabling/disabling a specific feature: - `CRIContainerLogRotation`: Enable container log rotation for cri container runtime. - `CSIBlockVolume`: Enable external CSI volume drivers to support block storage. 
See the [`csi` raw block volume support](/docs/concepts/storage/volumes/#csi-raw-block-volume-support) documentation for more details. - `CSIDriverRegistry`: Enable all logic related to the CSIDriver API object in csi.storage.k8s.io. +- `CSIMigration`: Enables shims and translation logic to route volume operations from in-tree plugins to corresponding pre-installed CSI plugins +- `CSIMigrationAWS`: Enables shims and translation logic to route volume operations from the AWS-EBS in-tree plugin to EBS CSI plugin +- `CSIMigrationGCE`: Enables shims and translation logic to route volume operations from the GCE-PD in-tree plugin to PD CSI plugin +- `CSIMigrationOpenStack`: Enables shims and translation logic to route volume operations from the Cinder in-tree plugin to Cinder CSI plugin - `CSINodeInfo`: Enable all logic related to the CSINodeInfo API object in csi.storage.k8s.io. - `CSIPersistentVolume`: Enable discovering and mounting volumes provisioned through a [CSI (Container Storage Interface)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md) compatible volume plugin. Check the [`csi` volume type](/docs/concepts/storage/volumes/#csi) documentation for more details. +- `CSIInlineVolume`: Enable CSI volumes to be directly embedded in Pod specifications instead of a PersistentVolume. - `CustomPodDNS`: Enable customizing the DNS settings for a Pod using its `dnsConfig` property. Check [Pod's DNS Config](/docs/concepts/services-networking/dns-pod-service/#pods-dns-config) for more details. @@ -286,6 +303,7 @@ Each feature gate is designed for enabling/disabling a specific feature: - `RuntimeClass`: Enable the [RuntimeClass](/docs/concepts/containers/runtime-class/) feature for selecting container runtime configurations. - `ScheduleDaemonSetPods`: Enable DaemonSet Pods to be scheduled by the default scheduler instead of the DaemonSet controller. - `SCTPSupport`: Enables the usage of SCTP as `protocol` value in `Service`, `Endpoint`, `NetworkPolicy` and `Pod` definitions +- `ServerSideApply`: Enables the [Sever Side Apply (SSA)](/docs/reference/using-api/api-concepts/#server-side-apply) path at the API Server. - `ServiceNodeExclusion`: Enable the exclusion of nodes from load balancers created by a cloud provider. A node is eligible for exclusion if annotated with "`alpha.service-controller.kubernetes.io/exclude-balancer`" key. - `StorageObjectInUseProtection`: Postpone the deletion of PersistentVolume or @@ -311,5 +329,6 @@ Each feature gate is designed for enabling/disabling a specific feature: type when used together with the `PersistentLocalVolumes` feature gate. - `VolumeSnapshotDataSource`: Enable volume snapshot data source support. - `VolumeSubpathEnvExpansion`: Enable `subPathExpr` field for expanding environment variables into a `subPath`. +- `WindowsGMSA`: Enables passing of GMSA credential specs from pods to container runtimes. {{% /capture %}} diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet.md b/content/en/docs/reference/command-line-tools-reference/kubelet.md index 2c2795eb9d4a0..280cfde25507b 100644 --- a/content/en/docs/reference/command-line-tools-reference/kubelet.md +++ b/content/en/docs/reference/command-line-tools-reference/kubelet.md @@ -707,7 +707,7 @@ kubelet [flags] --kube-reserved mapStringString - A set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi) pairs that describe resources reserved for kubernetes system components. 
Currently cpu, memory and local ephemeral storage for root file system are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. [default=none] + A set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi,pid=1000) pairs that describe resources reserved for kubernetes system components. Currently cpu, memory, pid, and local ephemeral storage for root file system are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. [default=none] @@ -1092,7 +1092,7 @@ kubelet [flags] --system-reserved mapStringString - A set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi) pairs that describe resources reserved for non-kubernetes components. Currently only cpu and memory are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. [default=none] + A set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi,pid=1000) pairs that describe resources reserved for non-kubernetes components. Currently only cpu, memory, and pid are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. [default=none] diff --git a/content/en/docs/reference/glossary/app-container.md b/content/en/docs/reference/glossary/app-container.md new file mode 100644 index 0000000000000..c5c4697808798 --- /dev/null +++ b/content/en/docs/reference/glossary/app-container.md @@ -0,0 +1,20 @@ +--- +title: App Container +id: app-container +date: 2019-02-12 +full_link: +short_description: > + A container used to run part of a workload. Compare with init container. + +aka: +tags: +- workload +--- + Application containers (or app containers) are the {{< glossary_tooltip text="containers" term_id="container" >}} in a {{< glossary_tooltip text="pod" term_id="pod" >}} that are started after any {{< glossary_tooltip text="init containers" term_id="init-container" >}} have completed. + + + +An init container lets you separate initialization details that are important for the overall +{{< glossary_tooltip text="workload" term_id="workload" >}}, and that don't need to keep running +once the application container has started. +If a pod doesn't have any init containers configured, all the containers in that pod are app containers. diff --git a/content/en/docs/reference/glossary/cni.md b/content/en/docs/reference/glossary/cni.md index f654f2586c0ec..c989cc8f69991 100644 --- a/content/en/docs/reference/glossary/cni.md +++ b/content/en/docs/reference/glossary/cni.md @@ -1,5 +1,5 @@ --- -title: CNI (Container network interface) +title: Container network interface (CNI) id: cni date: 2018-05-25 full_link: https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni diff --git a/content/en/docs/reference/glossary/container-lifecycle-hooks.md b/content/en/docs/reference/glossary/container-lifecycle-hooks.md index 527e2f3e6efc8..5f19d0606f6f9 100644 --- a/content/en/docs/reference/glossary/container-lifecycle-hooks.md +++ b/content/en/docs/reference/glossary/container-lifecycle-hooks.md @@ -6,13 +6,12 @@ full_link: /docs/concepts/containers/container-lifecycle-hooks/ short_description: > The lifecycle hooks expose events in the container management lifecycle and let the user run code when the events occur. 
-aka: +aka: tags: - extension --- - The lifecycle hooks expose events in the {{< glossary_tooltip text="Container" term_id="container" >}}container management lifecycle and let the user run code when the events occur. + The lifecycle hooks expose events in the {{< glossary_tooltip text="Container" term_id="container" >}} management lifecycle and let the user run code when the events occur. - + Two hooks are exposed to Containers: PostStart which executes immediately after a container is created and PreStop which is blocking and is called immediately before a container is terminated. -
diff --git a/content/en/docs/reference/glossary/cronjob.md b/content/en/docs/reference/glossary/cronjob.md index 3173740b5b6d8..d09dc8e0d4263 100755 --- a/content/en/docs/reference/glossary/cronjob.md +++ b/content/en/docs/reference/glossary/cronjob.md @@ -13,7 +13,7 @@ tags: --- Manages a [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/) that runs on a periodic schedule. - + -Similar to a line in a *crontab* file, a Cronjob object specifies a schedule using the [Cron](https://en.wikipedia.org/wiki/Cron) format. +Similar to a line in a *crontab* file, a CronJob object specifies a schedule using the [cron](https://en.wikipedia.org/wiki/Cron) format.
diff --git a/content/en/docs/reference/glossary/csi.md b/content/en/docs/reference/glossary/csi.md index 29e5550ccfb86..8b04559082a8f 100644 --- a/content/en/docs/reference/glossary/csi.md +++ b/content/en/docs/reference/glossary/csi.md @@ -15,7 +15,7 @@ tags: -CSI allows vendors to create custom storage plugins for Kubernetes without adding them to the Kubernetes repository (out-of-tree plugins). To use a CSI driver from a storage provider, you must first [deploy it to your cluster](https://kubernetes-csi.github.io/docs/Setup.html). You will then be able to create a {{< glossary_tooltip text="Storage Class" term_id="storage-class" >}} that uses that CSI driver. +CSI allows vendors to create custom storage plugins for Kubernetes without adding them to the Kubernetes repository (out-of-tree plugins). To use a CSI driver from a storage provider, you must first [deploy it to your cluster](https://kubernetes-csi.github.io/docs/deploying.html). You will then be able to create a {{< glossary_tooltip text="Storage Class" term_id="storage-class" >}} that uses that CSI driver. * [CSI in the Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/volumes/#csi) -* [List of available CSI drivers](https://kubernetes-csi.github.io/docs/Drivers.html) +* [List of available CSI drivers](https://kubernetes-csi.github.io/docs/drivers.html)
diff --git a/content/en/docs/reference/glossary/device-plugin.md b/content/en/docs/reference/glossary/device-plugin.md new file mode 100644 index 0000000000000..02e5500677d45 --- /dev/null +++ b/content/en/docs/reference/glossary/device-plugin.md @@ -0,0 +1,17 @@ +--- +title: Device Plugin +id: device-plugin +date: 2019-02-02 +full_link: https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/ +short_description: > + Device Plugins are containers running in Kubernetes that provide access to a vendor-specific resource. +aka: +tags: +- fundamental +- extension +--- + Device Plugins are containers running in Kubernetes that provide access to a vendor-specific resource. + + + +[Device Plugins](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) are containers running in Kubernetes that provide access to a vendor-specific resource.
Device Plugins advertise these resources to the kubelet and can be deployed manually or as a DaemonSet, rather than requiring custom Kubernetes code.
diff --git a/content/en/docs/reference/glossary/index.md b/content/en/docs/reference/glossary/index.md index a8c229569a05a..1fb8799a16b51 100755 --- a/content/en/docs/reference/glossary/index.md +++ b/content/en/docs/reference/glossary/index.md @@ -7,5 +7,9 @@ layout: glossary noedit: true default_active_tag: fundamental weight: 5 +card: + name: reference + weight: 10 + title: Glossary ---
diff --git a/content/en/docs/reference/glossary/pod-disruption-budget.md b/content/en/docs/reference/glossary/pod-disruption-budget.md new file mode 100644 index 0000000000000..ea5e30e08fcd9 --- /dev/null +++ b/content/en/docs/reference/glossary/pod-disruption-budget.md @@ -0,0 +1,19 @@ +--- +id: pod-disruption-budget +title: Pod Disruption Budget +full-link: /docs/concepts/workloads/pods/disruptions/ +date: 2019-02-12 +short_description: > + An object that limits the number of {{< glossary_tooltip text="Pods" term_id="pod" >}} of a replicated application that are down simultaneously from voluntary disruptions. + +aka: + - PDB +related: + - pod + - container +tags: + - operation +--- + + A [Pod Disruption Budget](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/) allows an application owner to create an object for a replicated application that ensures a certain number or percentage of Pods with an assigned label will not be voluntarily evicted at any point in time. PDBs cannot prevent an involuntary disruption, but such disruptions do count against the budget. +
diff --git a/content/en/docs/reference/glossary/pod-lifecycle.md b/content/en/docs/reference/glossary/pod-lifecycle.md new file mode 100644 index 0000000000000..caa588bb8c6bb --- /dev/null +++ b/content/en/docs/reference/glossary/pod-lifecycle.md @@ -0,0 +1,16 @@ +--- +title: Pod Lifecycle +id: pod-lifecycle +date: 2019-02-17 +full-link: /docs/concepts/workloads/pods/pod-lifecycle/ +related: + - pod + - container +tags: + - fundamental +short_description: > + A high-level summary of what phase the Pod is in within its lifecycle. + +--- + +The [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/) is a high-level summary of where a Pod is in its lifecycle. A Pod's `status` field is a [PodStatus](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#podstatus-v1-core) object, whose `phase` field is one of the following: Pending, Running, Succeeded, Failed, or Unknown. (Values such as Completed or CrashLoopBackOff that `kubectl` sometimes prints are container states or reasons, not Pod phases.)
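To make the `phase` field above concrete, here is a minimal sketch of reading it with `kubectl`; the pod name `nginx-app` is only a placeholder, and both output options (`jsonpath` and `custom-columns`) are described later in this reference.

```shell
# Print just the lifecycle phase of a single pod (pod name is hypothetical).
kubectl get pod nginx-app -o jsonpath='{.status.phase}'

# List every pod in the current namespace together with its phase.
kubectl get pods -o custom-columns=NAME:.metadata.name,PHASE:.status.phase
```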
diff --git a/content/en/docs/reference/glossary/Preemption.md b/content/en/docs/reference/glossary/preemption.md similarity index 97% rename from content/en/docs/reference/glossary/Preemption.md rename to content/en/docs/reference/glossary/preemption.md index c94d9a3a3e378..ac1334c9793a6 100644 --- a/content/en/docs/reference/glossary/Preemption.md +++ b/content/en/docs/reference/glossary/preemption.md @@ -1,6 +1,6 @@ --- title: Preemption -id: Preemption +id: preemption date: 2019-01-31 full_link: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#preemption short_description: >
diff --git a/content/en/docs/reference/glossary/rkt.md b/content/en/docs/reference/glossary/rkt.md new file mode 100644 index 0000000000000..455484c8ca2de --- /dev/null +++ b/content/en/docs/reference/glossary/rkt.md @@ -0,0 +1,18 @@ +--- +title: rkt +id: rkt +date: 2019-01-24 +full_link: https://coreos.com/rkt/ +short_description: > + A security-minded, standards-based container engine. + +aka: +tags: +- security +- tool +--- + A security-minded, standards-based container engine. + + + +rkt is an application {{< glossary_tooltip text="container" term_id="container" >}} engine featuring a {{< glossary_tooltip text="pod" term_id="pod" >}}-native approach, a pluggable execution environment, and a well-defined surface area. rkt allows users to apply different configurations at both the pod and application level. Each pod executes directly in the classic Unix process model, in a self-contained, isolated environment.
diff --git a/content/en/docs/reference/glossary/security-context.md b/content/en/docs/reference/glossary/security-context.md index 7bdf99534ae0d..9812304e4dd2a 100755 --- a/content/en/docs/reference/glossary/security-context.md +++ b/content/en/docs/reference/glossary/security-context.md @@ -14,5 +14,4 @@ tags: -The securityContext field in a {{< glossary_tooltip term_id="pod" >}} (applying to all containers) or container is used to set the user (runAsUser) and group (fsGroup), capabilities, privilege settings, and security policies (SELinux/AppArmor/Seccomp) that container processes use. - +The securityContext field in a {{< glossary_tooltip term_id="pod" >}} (applying to all containers) or container is used to set the user, groups, capabilities, privilege settings, security policies (SELinux/AppArmor/Seccomp), and other settings that container processes use.
diff --git a/content/en/docs/reference/glossary/sysctl.md b/content/en/docs/reference/glossary/sysctl.md new file mode 100755 index 0000000000000..7b73af4c56776 --- /dev/null +++ b/content/en/docs/reference/glossary/sysctl.md @@ -0,0 +1,23 @@ +--- +title: sysctl +id: sysctl +date: 2019-02-12 +full_link: /docs/tasks/administer-cluster/sysctl-cluster/ +short_description: > + An interface for getting and setting Unix kernel parameters. + +aka: +tags: +- tool +--- + `sysctl` is a semi-standardized interface for reading or changing the + attributes of the running Unix kernel. + + + +On Unix-like systems, `sysctl` is both the name of the tool that administrators +use to view and modify these settings, and also the system call that the tool
uses. + +{{< glossary_tooltip text="Container" term_id="container" >}} runtimes and +network plugins may rely on `sysctl` values being set a certain way.
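As a brief illustration of the interface described in the `sysctl` entry above (assuming a Linux node; the parameter name is just a common example), reading and writing a kernel parameter from a node's shell looks like this:

```shell
# Read a single kernel parameter and print its current value.
sysctl net.ipv4.ip_forward

# Change the parameter on the running kernel; requires root and does not persist across reboots.
sudo sysctl -w net.ipv4.ip_forward=1
```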
diff --git a/content/en/docs/reference/glossary/taint.md b/content/en/docs/reference/glossary/taint.md index 8fedadae64dd5..88a6890c6168d 100644 --- a/content/en/docs/reference/glossary/taint.md +++ b/content/en/docs/reference/glossary/taint.md @@ -2,7 +2,7 @@ title: Taint id: taint date: 2019-01-11 -full_link: docs/concepts/configuration/taint-and-toleration/ +full_link: /docs/concepts/configuration/taint-and-toleration/ short_description: > A key-value pair and an effect to prevent the scheduling of pods on nodes or node groups.
diff --git a/content/en/docs/reference/glossary/toleration.md b/content/en/docs/reference/glossary/toleration.md index 8ad7c745260c5..6a2f763d18bc1 100644 --- a/content/en/docs/reference/glossary/toleration.md +++ b/content/en/docs/reference/glossary/toleration.md @@ -2,7 +2,7 @@ title: Toleration id: toleration date: 2019-01-11 -full_link: docs/concepts/configuration/taint-and-toleration/ +full_link: /docs/concepts/configuration/taint-and-toleration/ short_description: > A key-value pair and an effect to enable the scheduling of pods on nodes or node groups that have a matching {% glossary_tooltip term_id="taint" %}.
diff --git a/content/en/docs/reference/glossary/workload.md b/content/en/docs/reference/glossary/workload.md new file mode 100644 index 0000000000000..1730e7b93f3ce --- /dev/null +++ b/content/en/docs/reference/glossary/workload.md @@ -0,0 +1,28 @@ +--- +title: Workload +id: workload +date: 2019-02-12 +full_link: /docs/concepts/workloads/ +short_description: > + A set of services and applications that run to fulfill a task or carry out a business process. + +aka: +tags: +- workload +--- +A workload consists of a system of services or applications that can run to fulfill a +task or carry out a business process. + + + +Alongside the computer code that runs to carry out the task, a workload also entails +the infrastructure resources that actually run that code. + +For example, a workload that has a web element and a database element might run the +database in one {{< glossary_tooltip term_id="StatefulSet" >}} of +{{< glossary_tooltip text="pods" term_id="pod" >}} and the webserver via +a {{< glossary_tooltip term_id="Deployment" >}} that consists of many web app +{{< glossary_tooltip text="pods" term_id="pod" >}}, all alike. + +The organization running this workload may well have other workloads that together +provide a valuable outcome to its users.
diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index d46e6336bd056..143fc90b4fffd 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -6,6 +6,9 @@ reviewers: - krousey - clove content_template: templates/concept +card: + name: reference + weight: 30 --- {{% capture overview %}} @@ -29,6 +32,13 @@ source <(kubectl completion bash) # setup autocomplete in bash into the current echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell. ``` +You can also use a shorthand alias for `kubectl` that works with completion: + +```bash +alias k=kubectl +complete -F __start_kubectl k +``` + ### ZSH ```bash @@ -62,21 +72,24 @@ kubectl config set-context gce --user=cluster-admin --namespace=foo \ && kubectl config use-context gce ``` +## Apply +`apply` manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster through running `kubectl apply`.
This is the recommended way of managing Kubernetes applications in production. See [Kubectl Book](https://kubectl.docs.kubernetes.io). + ## Creating Objects Kubernetes manifests can be defined in json or yaml. The file extension `.yaml`, `.yml`, and `.json` can be used. ```bash -kubectl create -f ./my-manifest.yaml # create resource(s) -kubectl create -f ./my1.yaml -f ./my2.yaml # create from multiple files -kubectl create -f ./dir # create resource(s) in all manifest files in dir -kubectl create -f https://git.io/vPieo # create resource(s) from url +kubectl apply -f ./my-manifest.yaml # create resource(s) +kubectl apply -f ./my1.yaml -f ./my2.yaml # create from multiple files +kubectl apply -f ./dir # create resource(s) in all manifest files in dir +kubectl apply -f https://git.io/vPieo # create resource(s) from url kubectl create deployment nginx --image=nginx # start a single instance of nginx kubectl explain pods,svc # get the documentation for pod and svc manifests # Create multiple YAML objects from stdin cat </dev/null; printf "\n"; done + # Check which nodes are ready JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \ && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"
diff --git a/content/en/docs/reference/kubectl/conventions.md b/content/en/docs/reference/kubectl/conventions.md index 3e6f2d4c99f48..cb084fda8e96e 100644 --- a/content/en/docs/reference/kubectl/conventions.md +++ b/content/en/docs/reference/kubectl/conventions.md @@ -77,6 +77,6 @@ flag, which provides the object to be submitted to the cluster. ### `kubectl apply` -* You can use `kubectl apply` to create or update resources. However, to update a resource you should have created the resource by using `kubectl apply` or `kubectl create --save-config`. For more information about using kubectl apply to update resources, see [Managing Resources](/docs/concepts/cluster-administration/manage-deployment/#kubectl-apply). +* You can use `kubectl apply` to create or update resources. For more information about using kubectl apply to update resources, see [Kubectl Book](https://kubectl.docs.kubernetes.io).
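As a sketch of the declarative flow this convention describes (the manifest name `example-deployment.yaml` is only an example), the same `kubectl apply` command both creates a resource and later updates it:

```shell
# First run: the objects in the manifest do not exist yet, so they are created.
kubectl apply -f ./example-deployment.yaml

# Edit the manifest (for example, bump the image tag), then run the same command again;
# kubectl apply patches the live objects so that they match the updated file.
kubectl apply -f ./example-deployment.yaml
```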
{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md b/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md index 0925992ef8364..99d951a90f7c3 100644 --- a/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md +++ b/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md @@ -19,10 +19,16 @@ To run an nginx Deployment and expose the Deployment, see [kubectl run](/docs/re docker: ```shell -$ docker run -d --restart=always -e DOMAIN=cluster --name nginx-app -p 80:80 nginx +docker run -d --restart=always -e DOMAIN=cluster --name nginx-app -p 80:80 nginx +``` +``` 55c103fa129692154a7652490236fee9be47d70a8dd562281ae7d2f9a339a6db +``` -$ docker ps +```shell +docker ps +``` +``` CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 55c103fa1296 nginx "nginx -g 'daemon of…" 9 seconds ago Up 9 seconds 0.0.0.0:80->80/tcp nginx-app ``` @@ -31,7 +37,9 @@ kubectl: ```shell # start the pod running nginx -$ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster" +kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster" +``` +``` deployment "nginx-app" created ``` @@ -41,7 +49,9 @@ deployment "nginx-app" created ```shell # expose a port through with a service -$ kubectl expose deployment nginx-app --port=80 --name=nginx-http +kubectl expose deployment nginx-app --port=80 --name=nginx-http +``` +``` service "nginx-http" exposed ``` @@ -66,7 +76,9 @@ To list what is currently running, see [kubectl get](/docs/reference/generated/k docker: ```shell -$ docker ps -a +docker ps -a +``` +``` CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 14636241935f ubuntu:16.04 "echo test" 5 seconds ago Exited (0) 5 seconds ago cocky_fermi 55c103fa1296 nginx "nginx -g 'daemon of…" About a minute ago Up About a minute 0.0.0.0:80->80/tcp nginx-app @@ -75,7 +87,9 @@ CONTAINER ID IMAGE COMMAND CREATED kubectl: ```shell -$ kubectl get po +kubectl get po +``` +``` NAME READY STATUS RESTARTS AGE nginx-app-8df569cb7-4gd89 1/1 Running 0 3m ubuntu 0/1 Completed 0 20s @@ -88,22 +102,30 @@ To attach a process that is already running in a container, see [kubectl attach] docker: ```shell -$ docker ps +docker ps +``` +``` CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 55c103fa1296 nginx "nginx -g 'daemon of…" 5 minutes ago Up 5 minutes 0.0.0.0:80->80/tcp nginx-app +``` -$ docker attach 55c103fa1296 +```shell +docker attach 55c103fa1296 ... ``` kubectl: ```shell -$ kubectl get pods +kubectl get pods +``` +``` NAME READY STATUS RESTARTS AGE nginx-app-5jyvm 1/1 Running 0 10m +``` -$ kubectl attach -it nginx-app-5jyvm +```shell +kubectl attach -it nginx-app-5jyvm ... ``` @@ -116,22 +138,33 @@ To execute a command in a container, see [kubectl exec](/docs/reference/generate docker: ```shell -$ docker ps +docker ps +``` +``` CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 55c103fa1296 nginx "nginx -g 'daemon of…" 6 minutes ago Up 6 minutes 0.0.0.0:80->80/tcp nginx-app - -$ docker exec 55c103fa1296 cat /etc/hostname +``` +```shell +docker exec 55c103fa1296 cat /etc/hostname +``` +``` 55c103fa1296 ``` kubectl: ```shell -$ kubectl get po +kubectl get po +``` +``` NAME READY STATUS RESTARTS AGE nginx-app-5jyvm 1/1 Running 0 10m +``` -$ kubectl exec nginx-app-5jyvm -- cat /etc/hostname +```shell +kubectl exec nginx-app-5jyvm -- cat /etc/hostname +``` +``` nginx-app-5jyvm ``` @@ -141,14 +174,14 @@ To use interactive commands. 
docker: ```shell -$ docker exec -ti 55c103fa1296 /bin/sh +docker exec -ti 55c103fa1296 /bin/sh # exit ``` kubectl: ```shell -$ kubectl exec -ti nginx-app-5jyvm -- /bin/sh +kubectl exec -ti nginx-app-5jyvm -- /bin/sh # exit ``` @@ -162,7 +195,9 @@ To follow stdout/stderr of a process that is running, see [kubectl logs](/docs/r docker: ```shell -$ docker logs -f a9e +docker logs -f a9e +``` +``` 192.168.9.1 - - [14/Jul/2015:01:04:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-" 192.168.9.1 - - [14/Jul/2015:01:04:03 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-" ``` @@ -170,7 +205,9 @@ $ docker logs -f a9e kubectl: ```shell -$ kubectl logs -f nginx-app-zibvs +kubectl logs -f nginx-app-zibvs +``` +``` 10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-" 10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-" ``` @@ -178,7 +215,9 @@ $ kubectl logs -f nginx-app-zibvs There is a slight difference between pods and containers; by default pods do not terminate if their processes exit. Instead the pods restart the process. This is similar to the docker run option `--restart=always` with one major difference. In docker, the output for each invocation of the process is concatenated, but for Kubernetes, each invocation is separate. To see the output from a previous run in Kubernetes, do this: ```shell -$ kubectl logs --previous nginx-app-zibvs +kubectl logs --previous nginx-app-zibvs +``` +``` 10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-" 10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-" ``` @@ -192,32 +231,53 @@ To stop and delete a running process, see [kubectl delete](/docs/reference/gener docker: ```shell -$ docker ps +docker ps +``` +``` CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a9ec34d98787 nginx "nginx -g 'daemon of" 22 hours ago Up 22 hours 0.0.0.0:80->80/tcp, 443/tcp nginx-app +``` -$ docker stop a9ec34d98787 +```shell +docker stop a9ec34d98787 +``` +``` a9ec34d98787 +``` -$ docker rm a9ec34d98787 +```shell +docker rm a9ec34d98787 +``` +``` a9ec34d98787 ``` kubectl: ```shell -$ kubectl get deployment nginx-app +kubectl get deployment nginx-app +``` +``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx-app 1 1 1 1 2m +``` -$ kubectl get po -l run=nginx-app +```shell +kubectl get po -l run=nginx-app +``` +``` NAME READY STATUS RESTARTS AGE nginx-app-2883164633-aklf7 1/1 Running 0 2m - -$ kubectl delete deployment nginx-app +``` +```shell +kubectl delete deployment nginx-app +``` +``` deployment "nginx-app" deleted +``` -$ kubectl get po -l run=nginx-app +```shell +kubectl get po -l run=nginx-app # Return nothing ``` @@ -236,7 +296,9 @@ To get the version of client and server, see [kubectl version](/docs/reference/g docker: ```shell -$ docker version +docker version +``` +``` Client version: 1.7.0 Client API version: 1.19 Go version (client): go1.4.2 @@ -252,7 +314,9 @@ OS/Arch (server): linux/amd64 kubectl: ```shell -$ kubectl version +kubectl version +``` +``` Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.9+a3d1dfa6f4335", GitCommit:"9b77fed11a9843ce3780f70dd251e92901c43072", GitTreeState:"dirty", BuildDate:"2017-08-29T20:32:58Z", OpenPaasKubernetesVersion:"v1.03.02", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.9+a3d1dfa6f4335", GitCommit:"9b77fed11a9843ce3780f70dd251e92901c43072", 
GitTreeState:"dirty", BuildDate:"2017-08-29T20:32:58Z", OpenPaasKubernetesVersion:"v1.03.02", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"} ``` @@ -264,7 +328,9 @@ To get miscellaneous information about the environment and configuration, see [k docker: ```shell -$ docker info +docker info +``` +``` Containers: 40 Images: 168 Storage Driver: aufs @@ -286,7 +352,9 @@ WARNING: No swap limit support kubectl: ```shell -$ kubectl cluster-info +kubectl cluster-info +``` +``` Kubernetes master is running at https://108.59.85.141 KubeDNS is running at https://108.59.85.141/api/v1/namespaces/kube-system/services/kube-dns/proxy kubernetes-dashboard is running at https://108.59.85.141/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy diff --git a/content/en/docs/reference/kubectl/jsonpath.md b/content/en/docs/reference/kubectl/jsonpath.md index 74fcea92fbb09..1eed8c22beadd 100644 --- a/content/en/docs/reference/kubectl/jsonpath.md +++ b/content/en/docs/reference/kubectl/jsonpath.md @@ -81,11 +81,11 @@ range, end | iterate list | {range .items[*]}[{.metadata.nam Examples using `kubectl` and JSONPath expressions: ```shell -$ kubectl get pods -o json -$ kubectl get pods -o=jsonpath='{@}' -$ kubectl get pods -o=jsonpath='{.items[0]}' -$ kubectl get pods -o=jsonpath='{.items[0].metadata.name}' -$ kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}' +kubectl get pods -o json +kubectl get pods -o=jsonpath='{@}' +kubectl get pods -o=jsonpath='{.items[0]}' +kubectl get pods -o=jsonpath='{.items[0].metadata.name}' +kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}' ``` On Windows, you must _double_ quote any JSONPath template that contains spaces (not single quote as shown above for bash). This in turn means that you must use a single quote or escaped double quote around any literals in the template. For example: diff --git a/content/en/docs/reference/kubectl/overview.md b/content/en/docs/reference/kubectl/overview.md index 3fa3836460726..cf73c6845733b 100644 --- a/content/en/docs/reference/kubectl/overview.md +++ b/content/en/docs/reference/kubectl/overview.md @@ -5,6 +5,9 @@ reviewers: title: Overview of kubectl content_template: templates/concept weight: 20 +card: + name: reference + weight: 20 --- {{% capture overview %}} @@ -30,27 +33,27 @@ where `command`, `TYPE`, `NAME`, and `flags` are: * `TYPE`: Specifies the [resource type](#resource-types). Resource types are case-insensitive and you can specify the singular, plural, or abbreviated forms. For example, the following commands produce the same output: ```shell - $ kubectl get pod pod1 - $ kubectl get pods pod1 - $ kubectl get po pod1 + kubectl get pod pod1 + kubectl get pods pod1 + kubectl get po pod1 ``` -* `NAME`: Specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, for example `$ kubectl get pods`. +* `NAME`: Specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, for example `kubectl get pods`. When performing an operation on multiple resources, you can specify each resource by type and name or specify one or more files: * To specify resources by type and name: * To group resources if they are all the same type: `TYPE1 name1 name2 name<#>`.
- Example: `$ kubectl get pod example-pod1 example-pod2` + Example: `kubectl get pod example-pod1 example-pod2` * To specify multiple resource types individually: `TYPE1/name1 TYPE1/name2 TYPE2/name3 TYPE<#>/name<#>`.
- Example: `$ kubectl get pod/example-pod1 replicationcontroller/example-rc1` + Example: `kubectl get pod/example-pod1 replicationcontroller/example-rc1` * To specify resources with one or more files: `-f file1 -f file2 -f file<#>` * [Use YAML rather than JSON](/docs/concepts/configuration/overview/#general-configuration-tips) since YAML tends to be more user-friendly, especially for configuration files.
- Example: `$ kubectl get pod -f ./pod.yaml` + Example: `kubectl get pod -f ./pod.yaml` * `flags`: Specifies optional flags. For example, you can use the `-s` or `--server` flags to specify the address and port of the Kubernetes API server.
@@ -173,7 +176,7 @@ Output format | Description In this example, the following command outputs the details for a single pod as a YAML formatted object: ```shell -$ kubectl get pod web-pod-13je7 -o=yaml +kubectl get pod web-pod-13je7 -o=yaml ``` Remember: See the [kubectl](/docs/user-guide/kubectl/) reference documentation for details about which output format is supported by each command. @@ -187,13 +190,13 @@ To define custom columns and output only the details that you want into a table, Inline: ```shell -$ kubectl get pods -o=custom-columns=NAME:.metadata.name,RSRC:.metadata.resourceVersion +kubectl get pods -o=custom-columns=NAME:.metadata.name,RSRC:.metadata.resourceVersion ``` Template file: ```shell -$ kubectl get pods -o=custom-columns-file=template.txt +kubectl get pods -o=custom-columns-file=template.txt ``` where the `template.txt` file contains: @@ -248,63 +251,63 @@ kubectl [command] [TYPE] [NAME] --sort-by= To print a list of pods sorted by name, you run: ```shell -$ kubectl get pods --sort-by=.metadata.name +kubectl get pods --sort-by=.metadata.name ``` ## Examples: Common operations Use the following set of examples to help you familiarize yourself with running the commonly used `kubectl` operations: -`kubectl create` - Create a resource from a file or stdin. +`kubectl apply` - Apply or Update a resource from a file or stdin. ```shell -// Create a service using the definition in example-service.yaml. -$ kubectl create -f example-service.yaml +# Create a service using the definition in example-service.yaml. +kubectl apply -f example-service.yaml -// Create a replication controller using the definition in example-controller.yaml. -$ kubectl create -f example-controller.yaml +# Create a replication controller using the definition in example-controller.yaml. +kubectl apply -f example-controller.yaml -// Create the objects that are defined in any .yaml, .yml, or .json file within the directory. -$ kubectl create -f +# Create the objects that are defined in any .yaml, .yml, or .json file within the directory. +kubectl apply -f ``` `kubectl get` - List one or more resources. ```shell -// List all pods in plain-text output format. -$ kubectl get pods +# List all pods in plain-text output format. +kubectl get pods -// List all pods in plain-text output format and include additional information (such as node name). -$ kubectl get pods -o wide +# List all pods in plain-text output format and include additional information (such as node name). +kubectl get pods -o wide -// List the replication controller with the specified name in plain-text output format. Tip: You can shorten and replace the 'replicationcontroller' resource type with the alias 'rc'. -$ kubectl get replicationcontroller +# List the replication controller with the specified name in plain-text output format. Tip: You can shorten and replace the 'replicationcontroller' resource type with the alias 'rc'. +kubectl get replicationcontroller -// List all replication controllers and services together in plain-text output format. -$ kubectl get rc,services +# List all replication controllers and services together in plain-text output format. +kubectl get rc,services -// List all daemon sets, including uninitialized ones, in plain-text output format. -$ kubectl get ds --include-uninitialized +# List all daemon sets, including uninitialized ones, in plain-text output format. 
+kubectl get ds --include-uninitialized -// List all pods running on node server01 -$ kubectl get pods --field-selector=spec.nodeName=server01 +# List all pods running on node server01 +kubectl get pods --field-selector=spec.nodeName=server01 ``` `kubectl describe` - Display detailed state of one or more resources, including the uninitialized ones by default. ```shell -// Display the details of the node with name . -$ kubectl describe nodes +# Display the details of the node with name . +kubectl describe nodes -// Display the details of the pod with name . -$ kubectl describe pods/ +# Display the details of the pod with name . +kubectl describe pods/ -// Display the details of all the pods that are managed by the replication controller named . -// Remember: Any pods that are created by the replication controller get prefixed with the name of the replication controller. -$ kubectl describe pods +# Display the details of all the pods that are managed by the replication controller named . +# Remember: Any pods that are created by the replication controller get prefixed with the name of the replication controller. +kubectl describe pods -// Describe all pods, not including uninitialized ones -$ kubectl describe pods --include-uninitialized=false +# Describe all pods, not including uninitialized ones +kubectl describe pods --include-uninitialized=false ``` {{< note >}} @@ -322,40 +325,40 @@ the pods running on it, the events generated for the node etc. `kubectl delete` - Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources. ```shell -// Delete a pod using the type and name specified in the pod.yaml file. -$ kubectl delete -f pod.yaml +# Delete a pod using the type and name specified in the pod.yaml file. +kubectl delete -f pod.yaml -// Delete all the pods and services that have the label name=. -$ kubectl delete pods,services -l name= +# Delete all the pods and services that have the label name=. +kubectl delete pods,services -l name= -// Delete all the pods and services that have the label name=, including uninitialized ones. -$ kubectl delete pods,services -l name= --include-uninitialized +# Delete all the pods and services that have the label name=, including uninitialized ones. +kubectl delete pods,services -l name= --include-uninitialized -// Delete all pods, including uninitialized ones. -$ kubectl delete pods --all +# Delete all pods, including uninitialized ones. +kubectl delete pods --all ``` `kubectl exec` - Execute a command against a container in a pod. ```shell -// Get output from running 'date' from pod . By default, output is from the first container. -$ kubectl exec date +# Get output from running 'date' from pod . By default, output is from the first container. +kubectl exec date -// Get output from running 'date' in container of pod . -$ kubectl exec -c date +# Get output from running 'date' in container of pod . +kubectl exec -c date -// Get an interactive TTY and run /bin/bash from pod . By default, output is from the first container. -$ kubectl exec -ti /bin/bash +# Get an interactive TTY and run /bin/bash from pod . By default, output is from the first container. +kubectl exec -ti /bin/bash ``` `kubectl logs` - Print the logs for a container in a pod. ```shell -// Return a snapshot of the logs from pod . -$ kubectl logs +# Return a snapshot of the logs from pod . +kubectl logs -// Start streaming the logs from pod . This is similar to the 'tail -f' Linux command. 
-$ kubectl logs -f +# Start streaming the logs from pod . This is similar to the 'tail -f' Linux command. +kubectl logs -f ``` ## Examples: Creating and using plugins @@ -363,45 +366,54 @@ $ kubectl logs -f Use the following set of examples to help you familiarize yourself with writing and using `kubectl` plugins: ```shell -// create a simple plugin in any language and name the resulting executable file -// so that it begins with the prefix "kubectl-" -$ cat ./kubectl-hello +# create a simple plugin in any language and name the resulting executable file +# so that it begins with the prefix "kubectl-" +cat ./kubectl-hello #!/bin/bash # this plugin prints the words "hello world" echo "hello world" -// with our plugin written, let's make it executable -$ sudo chmod +x ./kubectl-hello +# with our plugin written, let's make it executable +sudo chmod +x ./kubectl-hello -// and move it to a location in our PATH -$ sudo mv ./kubectl-hello /usr/local/bin +# and move it to a location in our PATH +sudo mv ./kubectl-hello /usr/local/bin -// we have now created and "installed" a kubectl plugin. -// we can begin using our plugin by invoking it from kubectl as if it were a regular command -$ kubectl hello +# we have now created and "installed" a kubectl plugin. +# we can begin using our plugin by invoking it from kubectl as if it were a regular command +kubectl hello +``` +``` hello world +``` -// we can "uninstall" a plugin, by simply removing it from our PATH -$ sudo rm /usr/local/bin/kubectl-hello +``` +# we can "uninstall" a plugin, by simply removing it from our PATH +sudo rm /usr/local/bin/kubectl-hello ``` In order to view all of the plugins that are available to `kubectl`, we can use the `kubectl plugin list` subcommand: ```shell -$ kubectl plugin list +kubectl plugin list +``` +``` The following kubectl-compatible plugins are available: /usr/local/bin/kubectl-hello /usr/local/bin/kubectl-foo /usr/local/bin/kubectl-bar - -// this command can also warn us about plugins that are -// not executable, or that are overshadowed by other -// plugins, for example -$ sudo chmod -x /usr/local/bin/kubectl-foo -$ kubectl plugin list +``` +``` +# this command can also warn us about plugins that are +# not executable, or that are overshadowed by other +# plugins, for example +sudo chmod -x /usr/local/bin/kubectl-foo +kubectl plugin list +``` +``` The following kubectl-compatible plugins are available: /usr/local/bin/kubectl-hello @@ -416,7 +428,7 @@ We can think of plugins as a means to build more complex functionality on top of the existing kubectl commands: ```shell -$ cat ./kubectl-whoami +cat ./kubectl-whoami #!/bin/bash # this plugin makes use of the `kubectl config` command in order to output @@ -428,13 +440,13 @@ Running the above plugin gives us an output containing the user for the currentl context in our KUBECONFIG file: ```shell -// make the file executable -$ sudo chmod +x ./kubectl-whoami +# make the file executable +sudo chmod +x ./kubectl-whoami -// and move it into our PATH -$ sudo mv ./kubectl-whoami /usr/local/bin +# and move it into our PATH +sudo mv ./kubectl-whoami /usr/local/bin -$ kubectl whoami +kubectl whoami Current user: plugins-user ``` diff --git a/content/en/docs/reference/kubernetes-api/labels-annotations-taints.md b/content/en/docs/reference/kubernetes-api/labels-annotations-taints.md index 78219c3085823..6f958c4060d0a 100644 --- a/content/en/docs/reference/kubernetes-api/labels-annotations-taints.md +++ b/content/en/docs/reference/kubernetes-api/labels-annotations-taints.md @@ 
-11,23 +11,32 @@ This document serves both as a reference to the values, and as a coordination po {{% /capture %}} {{% capture body %}} -## beta.kubernetes.io/arch +## kubernetes.io/arch -Example: `beta.kubernetes.io/arch=amd64` +Example: `kubernetes.io/arch=amd64` Used on: Node Kubelet populates this with `runtime.GOARCH` as defined by Go. This can be handy if you are mixing arm and x86 nodes, for example. -## beta.kubernetes.io/os +## kubernetes.io/os -Example: `beta.kubernetes.io/os=linux` +Example: `kubernetes.io/os=linux` Used on: Node Kubelet populates this with `runtime.GOOS` as defined by Go. This can be handy if you are mixing operating systems -in your cluster (although currently Linux is the only OS supported by Kubernetes). +in your cluster (e.g., mixing Linux and Windows nodes). + +## beta.kubernetes.io/arch (deprecated) + +This label has been deprecated. Please use `kubernetes.io/arch` instead. + +## beta.kubernetes.io/os (deprecated) + +This label has been deprecated. Please use `kubernetes.io/os` instead. + ## kubernetes.io/hostname diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_preflight.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_preflight.md deleted file mode 100644 index d88f71b0d9204..0000000000000 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_preflight.md +++ /dev/null @@ -1,50 +0,0 @@ - -Commands related to pre-flight checks - -### Synopsis - - -This command is not meant to be run on its own. See list of available subcommands. - -### Options - - - - - - - - - - - - - - - - -
-   -h, --help   help for preflight
-
-### Options inherited from parent commands
-
-   --rootfs string   [EXPERIMENTAL] The path to the 'real' host root filesystem.
-
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_preflight_node.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_preflight_node.md deleted file mode 100644 index 47be57c832538..0000000000000 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_preflight_node.md +++ /dev/null @@ -1,77 +0,0 @@ - -Run node pre-flight checks - -### Synopsis - - -Run node pre-flight checks, functionally equivalent to what implemented by kubeadm join. - -Alpha Disclaimer: this command is currently alpha. - -``` -kubeadm alpha preflight node [flags] -``` - -### Examples - -``` -  # Run node pre-flight checks. -  kubeadm alpha preflight node -``` - -### Options
-   --config string   Path to a kubeadm configuration file.
-   -h, --help   help for node
-   --ignore-preflight-errors stringSlice   A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
-
-### Options inherited from parent commands
-
-   --rootfs string   [EXPERIMENTAL] The path to the 'real' host root filesystem.
- - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md new file mode 100644 index 0000000000000..aed553b8c3d1e --- /dev/null +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md @@ -0,0 +1,27 @@ + +Upload certificates to kubeadm-certs + +### Synopsis + +This command is not meant to be run on its own. See list of available subcommands. + +``` +kubeadm init phase upload-certs [flags] +``` + +### Options + +``` + --certificate-key string Key used to encrypt the control-plane certificates in the kubeadm-certs Secret. + --config string Path to a kubeadm configuration file. + --experimental-upload-certs Upload control-plane certificates to the kubeadm-certs Secret. + -h, --help help for upload-certs + --skip-certificate-key-print Don't print the key used to encrypt the control-plane certificates. +``` + +### Options inherited from parent commands + +``` + --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem. +``` + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase.md new file mode 100644 index 0000000000000..a5562c5dc4e6c --- /dev/null +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase.md @@ -0,0 +1,19 @@ + +use this command to invoke single phase of the join workflow + +### Synopsis + +use this command to invoke single phase of the join workflow + +### Options + +``` + -h, --help help for phase +``` + +### Options inherited from parent commands + +``` + --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem. +``` + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join.md new file mode 100644 index 0000000000000..e65c5248f44d4 --- /dev/null +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join.md @@ -0,0 +1,30 @@ + +Joins a machine as a control plane instance + +### Synopsis + +Joins a machine as a control plane instance + +``` +kubeadm join phase control-plane-join [flags] +``` + +### Examples + +``` + # Joins a machine as a control plane instance + kubeadm join phase control-plane-join all +``` + +### Options + +``` + -h, --help help for control-plane-join +``` + +### Options inherited from parent commands + +``` + --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem. +``` + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_all.md new file mode 100644 index 0000000000000..d2f288fd98c59 --- /dev/null +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_all.md @@ -0,0 +1,27 @@ + +Joins a machine as a control plane instance + +### Synopsis + +Joins a machine as a control plane instance + +``` +kubeadm join phase control-plane-join all [flags] +``` + +### Options + +``` + --apiserver-advertise-address string If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used. 
+ --config string Path to kubeadm config file. + --experimental-control-plane Create a new control plane instance on this node + -h, --help help for all + --node-name string Specify the node name. +``` + +### Options inherited from parent commands + +``` + --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem. +``` + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_etcd.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_etcd.md new file mode 100644 index 0000000000000..05ebd37d41c17 --- /dev/null +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_etcd.md @@ -0,0 +1,27 @@ + +Add a new local etcd member + +### Synopsis + +Add a new local etcd member + +``` +kubeadm join phase control-plane-join etcd [flags] +``` + +### Options + +``` + --apiserver-advertise-address string If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used. + --config string Path to kubeadm config file. + --experimental-control-plane Create a new control plane instance on this node + -h, --help help for etcd + --node-name string Specify the node name. +``` + +### Options inherited from parent commands + +``` + --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem. +``` + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_mark-control-plane.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_mark-control-plane.md new file mode 100644 index 0000000000000..9a06263e3876b --- /dev/null +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_mark-control-plane.md @@ -0,0 +1,26 @@ + +Mark a node as a control-plane + +### Synopsis + +Mark a node as a control-plane + +``` +kubeadm join phase control-plane-join mark-control-plane [flags] +``` + +### Options + +``` + --config string Path to kubeadm config file. + --experimental-control-plane Create a new control plane instance on this node + -h, --help help for mark-control-plane + --node-name string Specify the node name. +``` + +### Options inherited from parent commands + +``` + --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem. +``` + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_update-status.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_update-status.md new file mode 100644 index 0000000000000..00a10bb606939 --- /dev/null +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_update-status.md @@ -0,0 +1,27 @@ + +Register the new control-plane node into the ClusterStatus maintained in the kubeadm-config ConfigMap + +### Synopsis + +Register the new control-plane node into the ClusterStatus maintained in the kubeadm-config ConfigMap + +``` +kubeadm join phase control-plane-join update-status [flags] +``` + +### Options + +``` + --apiserver-advertise-address string If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used. + --config string Path to kubeadm config file. 
+ --experimental-control-plane Create a new control plane instance on this node + -h, --help help for update-status + --node-name string Specify the node name. +``` + +### Options inherited from parent commands + +``` + --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem. +``` + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare.md new file mode 100644 index 0000000000000..1ed4d231ba2e9 --- /dev/null +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare.md @@ -0,0 +1,30 @@ + +Prepares the machine for serving a control plane. + +### Synopsis + +Prepares the machine for serving a control plane. + +``` +kubeadm join phase control-plane-prepare [flags] +``` + +### Examples + +``` + # Prepares the machine for serving a control plane + kubeadm join phase control-plane-prepare all +``` + +### Options + +``` + -h, --help help for control-plane-prepare +``` + +### Options inherited from parent commands + +``` + --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem. +``` + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md new file mode 100644 index 0000000000000..30e3351584f55 --- /dev/null +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md @@ -0,0 +1,35 @@ + +Prepares the machine for serving a control plane. + +### Synopsis + +Prepares the machine for serving a control plane. + +``` +kubeadm join phase control-plane-prepare all [api-server-endpoint] [flags] +``` + +### Options + +``` + --apiserver-advertise-address string If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used. + --apiserver-bind-port int32 If the node should host a new control plane instance, the port for the API Server to bind to. (default 6443) + --certificate-key string Use this key to decrypt the certificate secrets uploaded by init. + --config string Path to kubeadm config file. + --discovery-file string For file-based discovery, a file or URL from which to load cluster information. + --discovery-token string For token-based discovery, the token used to validate cluster information fetched from the API server. + --discovery-token-ca-cert-hash strings For token-based discovery, validate that the root CA public key matches this hash (format: ":"). + --discovery-token-unsafe-skip-ca-verification For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning. + --experimental-control-plane Create a new control plane instance on this node + -h, --help help for all + --node-name string Specify the node name. + --tls-bootstrap-token string Specify the token used to temporarily authenticate with the Kubernetes Control Plane while joining the node. + --token string Use this token for both discovery-token and tls-bootstrap-token when those values are not provided. +``` + +### Options inherited from parent commands + +``` + --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem. 
+``` + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md new file mode 100644 index 0000000000000..f429b7536cf6e --- /dev/null +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md @@ -0,0 +1,33 @@ + +Generates the certificates for the new control plane components + +### Synopsis + +Generates the certificates for the new control plane components + +``` +kubeadm join phase control-plane-prepare certs [api-server-endpoint] [flags] +``` + +### Options + +``` + --apiserver-advertise-address string If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used. + --config string Path to kubeadm config file. + --discovery-file string For file-based discovery, a file or URL from which to load cluster information. + --discovery-token string For token-based discovery, the token used to validate cluster information fetched from the API server. + --discovery-token-ca-cert-hash strings For token-based discovery, validate that the root CA public key matches this hash (format: ":"). + --discovery-token-unsafe-skip-ca-verification For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning. + --experimental-control-plane Create a new control plane instance on this node + -h, --help help for certs + --node-name string Specify the node name. + --tls-bootstrap-token string Specify the token used to temporarily authenticate with the Kubernetes Control Plane while joining the node. + --token string Use this token for both discovery-token and tls-bootstrap-token when those values are not provided. +``` + +### Options inherited from parent commands + +``` + --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem. +``` + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_control-plane.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_control-plane.md new file mode 100644 index 0000000000000..cecc4b2a80ae8 --- /dev/null +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_control-plane.md @@ -0,0 +1,27 @@ + +Generates the manifests for the new control plane components + +### Synopsis + +Generates the manifests for the new control plane components + +``` +kubeadm join phase control-plane-prepare control-plane [flags] +``` + +### Options + +``` + --apiserver-advertise-address string If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used. + --apiserver-bind-port int32 If the node should host a new control plane instance, the port for the API Server to bind to. (default 6443) + --config string Path to kubeadm config file. + --experimental-control-plane Create a new control plane instance on this node + -h, --help help for control-plane +``` + +### Options inherited from parent commands + +``` + --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem. 
+``` + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_download-certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_download-certs.md new file mode 100644 index 0000000000000..cb87677c20600 --- /dev/null +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_download-certs.md @@ -0,0 +1,32 @@ + +[EXPERIMENTAL] Downloads certificates shared among control-plane nodes from the kubeadm-certs Secret + +### Synopsis + +[EXPERIMENTAL] Downloads certificates shared among control-plane nodes from the kubeadm-certs Secret + +``` +kubeadm join phase control-plane-prepare download-certs [api-server-endpoint] [flags] +``` + +### Options + +``` + --certificate-key string Use this key to decrypt the certificate secrets uploaded by init. + --config string Path to kubeadm config file. + --discovery-file string For file-based discovery, a file or URL from which to load cluster information. + --discovery-token string For token-based discovery, the token used to validate cluster information fetched from the API server. + --discovery-token-ca-cert-hash strings For token-based discovery, validate that the root CA public key matches this hash (format: ":"). + --discovery-token-unsafe-skip-ca-verification For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning. + --experimental-control-plane Create a new control plane instance on this node + -h, --help help for download-certs + --tls-bootstrap-token string Specify the token used to temporarily authenticate with the Kubernetes Control Plane while joining the node. + --token string Use this token for both discovery-token and tls-bootstrap-token when those values are not provided. +``` + +### Options inherited from parent commands + +``` + --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem. +``` + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_kubeconfig.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_kubeconfig.md new file mode 100644 index 0000000000000..558ed7fd33ccb --- /dev/null +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_kubeconfig.md @@ -0,0 +1,32 @@ + +Generates the kubeconfig for the new control plane components + +### Synopsis + +Generates the kubeconfig for the new control plane components + +``` +kubeadm join phase control-plane-prepare kubeconfig [api-server-endpoint] [flags] +``` + +### Options + +``` + --certificate-key string Use this key to decrypt the certificate secrets uploaded by init. + --config string Path to kubeadm config file. + --discovery-file string For file-based discovery, a file or URL from which to load cluster information. + --discovery-token string For token-based discovery, the token used to validate cluster information fetched from the API server. + --discovery-token-ca-cert-hash strings For token-based discovery, validate that the root CA public key matches this hash (format: ":"). + --discovery-token-unsafe-skip-ca-verification For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning. 
+ --experimental-control-plane Create a new control plane instance on this node + -h, --help help for kubeconfig + --tls-bootstrap-token string Specify the token used to temporarily authenticate with the Kubernetes Control Plane while joining the node. + --token string Use this token for both discovery-token and tls-bootstrap-token when those values are not provided. +``` + +### Options inherited from parent commands + +``` + --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem. +``` + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md new file mode 100644 index 0000000000000..6120e664bb255 --- /dev/null +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md @@ -0,0 +1,32 @@ + +Writes kubelet settings, certificates and (re)starts the kubelet + +### Synopsis + +Writes a file with KubeletConfiguration and an environment file with node specific kubelet settings, and then (re)starts kubelet. + +``` +kubeadm join phase kubelet-start [api-server-endpoint] [flags] +``` + +### Options + +``` + --config string Path to kubeadm config file. + --cri-socket string Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket. + --discovery-file string For file-based discovery, a file or URL from which to load cluster information. + --discovery-token string For token-based discovery, the token used to validate cluster information fetched from the API server. + --discovery-token-ca-cert-hash strings For token-based discovery, validate that the root CA public key matches this hash (format: ":"). + --discovery-token-unsafe-skip-ca-verification For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning. + -h, --help help for kubelet-start + --node-name string Specify the node name. + --tls-bootstrap-token string Specify the token used to temporarily authenticate with the Kubernetes Control Plane while joining the node. + --token string Use this token for both discovery-token and tls-bootstrap-token when those values are not provided. +``` + +### Options inherited from parent commands + +``` + --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem. +``` + diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_preflight.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_preflight.md new file mode 100644 index 0000000000000..70643a0da341a --- /dev/null +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_preflight.md @@ -0,0 +1,44 @@ + +Run join pre-flight checks + +### Synopsis + +Run pre-flight checks for kubeadm join. + +``` +kubeadm join phase preflight [api-server-endpoint] [flags] +``` + +### Examples + +``` + # Run join pre-flight checks using a config file. + kubeadm join phase preflight --config kubeadm-config.yml +``` + +### Options + +``` + --apiserver-advertise-address string If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used. + --apiserver-bind-port int32 If the node should host a new control plane instance, the port for the API Server to bind to. 
(default 6443) + --certificate-key string Use this key to decrypt the certificate secrets uploaded by init. + --config string Path to kubeadm config file. + --cri-socket string Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket. + --discovery-file string For file-based discovery, a file or URL from which to load cluster information. + --discovery-token string For token-based discovery, the token used to validate cluster information fetched from the API server. + --discovery-token-ca-cert-hash strings For token-based discovery, validate that the root CA public key matches this hash (format: ":"). + --discovery-token-unsafe-skip-ca-verification For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning. + --experimental-control-plane Create a new control plane instance on this node + -h, --help help for preflight + --ignore-preflight-errors strings A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks. + --node-name string Specify the node name. + --tls-bootstrap-token string Specify the token used to temporarily authenticate with the Kubernetes Control Plane while joining the node. + --token string Use this token for both discovery-token and tls-bootstrap-token when those values are not provided. +``` + +### Options inherited from parent commands + +``` + --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem. +``` + diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md index da92919353a51..ad59320140614 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md @@ -48,15 +48,6 @@ to enable the DynamicKubeletConfiguration feature. {{< tab name="enable-dynamic" include="generated/kubeadm_alpha_kubelet_config_download.md" />}} {{< /tabs >}} -## kubeadm alpha preflight node {#cmd-phase-preflight} - -You can use the `node` sub command to run preflight checks on a worker node. - -{{< tabs name="tab-preflight" >}} -{{< tab name="preflight" include="generated/kubeadm_alpha_preflight.md" />}} -{{< tab name="node" include="generated/kubeadm_alpha_preflight_node.md" />}} -{{< /tabs >}} - ## kubeadm alpha selfhosting pivot {#cmd-selfhosting} diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md index 360ac57aac704..b5644d854c50d 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md @@ -79,7 +79,17 @@ Use the following phase to create a local etcd instance based on a static Pod fi {{< /tabs >}} -## kubeadm init phase mark-control-plane {#cmd-phase-control-plane} +## kubeadm init phase upload-certs {#cmd-phase-upload-certs} + +Use the following phase to upload control-plane certificates to the cluster. +By default the certs and encryption key expire after two hours. 
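+
+If the certificates or the key have expired, you can re-run the phase to upload them again and print a fresh key. For example:
+
+```shell
+# Re-upload the control-plane certificates and print a fresh certificate key.
+sudo kubeadm init phase upload-certs --experimental-upload-certs
+```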
+ +{{< tabs name="tab-upload-certs" >}} +{{< tab name="upload-certs" include="generated/kubeadm_init_phase_upload-certs.md" />}} +{{< /tabs >}} + + +## kubeadm init phase mark-control-plane {#cmd-phase-mark-control-plane} Use the following phase to label and taint the node with the `node-role.kubernetes.io/master=""` key-value pair. diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md index c2aab8f261c63..af58cfa87c2a2 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md @@ -49,7 +49,7 @@ following steps: run there. 1. Generates the token that additional nodes can use to register - themselves with the master in the future. Optionally, the user can provide a + themselves with a control-plane in the future. Optionally, the user can provide a token via `--token`, as described in the [kubeadm token](/docs/reference/setup-tools/kubeadm/kubeadm-token/) docs. @@ -82,13 +82,13 @@ Note that by calling `kubeadm init` all of the phases and sub-phases will be exe Some phases have unique flags, so if you want to have a look at the list of available options add `--help`, for example: -```bash +```shell sudo kubeadm init phase control-plane controller-manager --help ``` You can also use `--help` to see the list of sub-phases for a certain parent phase: -```bash +```shell sudo kubeadm init phase control-plane --help ``` @@ -96,7 +96,7 @@ sudo kubeadm init phase control-plane --help An example: -```bash +```shell sudo kubeadm init phase control-plane all --config=configfile.yaml sudo kubeadm init phase etcd local --config=configfile.yaml # you can now modify the control plane and etcd manifest files @@ -117,12 +117,13 @@ configuration file options. This file is passed in the `--config` option. In Kubernetes 1.11 and later, the default configuration can be printed out using the [kubeadm config print](/docs/reference/setup-tools/kubeadm/kubeadm-config/) command. + It is **recommended** that you migrate your old `v1alpha3` configuration to `v1beta1` using the [kubeadm config migrate](/docs/reference/setup-tools/kubeadm/kubeadm-config/) command, -because `v1alpha3` will be removed in Kubernetes 1.14. +because `v1alpha3` will be removed in Kubernetes 1.15. For more details on each field in the `v1beta1` configuration you can navigate to our -[API reference pages.] (https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1) +[API reference pages](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1). ### Adding kube-proxy parameters {#kube-proxy} @@ -148,8 +149,8 @@ Allowed customization are: * To provide an alternative `imageRepository` to be used instead of `k8s.gcr.io`. -* To provide a `unifiedControlPlaneImage` to be used instead of different images for control plane components. -* To provide a specific `etcd.image` to be used instead of the image available at`k8s.gcr.io`. +* To set `useHyperKubeImage` to `true` to use the HyperKube image. +* To provide a specific `imageRepository` and `imageTag` for etcd or DNS add-on. Please note that the configuration field `kubernetesVersion` or the command line flag `--kubernetes-version` affect the version of the images. @@ -266,20 +267,6 @@ with the `kubeadm init` and `kubeadm join` workflow to deploy Kubernetes cluster You may also want to set `--cri-socket` to `kubeadm init` and `kubeadm reset` when using an external CRI implementation. 
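+
+For instance, a minimal sketch (the containerd socket path is only illustrative; use whatever socket your runtime actually exposes):
+
+```shell
+# Point kubeadm at an explicit CRI socket instead of letting it auto-detect one.
+sudo kubeadm init --cri-socket /run/containerd/containerd.sock
+# Pass the same socket when tearing the node back down.
+sudo kubeadm reset --cri-socket /run/containerd/containerd.sock
+```
+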
-### Using internal IPs in your cluster - -In order to set up a cluster where the master and worker nodes communicate with internal IP addresses (instead of public ones), execute following steps. - -1. When running init, you must make sure you specify an internal IP for the API server's bind address, like so: - - `kubeadm init --apiserver-advertise-address=` - -2. When a master or worker node has been provisioned, add a flag to `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` that specifies the private IP of the worker node: - - `--node-ip=` - -3. Finally, when you run `kubeadm join`, make sure you provide the private IP of the API server addressed as defined in step 1. - ### Setting the node name By default, `kubeadm` assigns a node name based on a machine's host address. You can override this setting with the `--node-name`flag. @@ -296,27 +283,23 @@ manager, and scheduler run as [DaemonSet pods](/docs/concepts/workloads/controll configured via the Kubernetes API instead of [static pods](/docs/tasks/administer-cluster/static-pod/) configured in the kubelet via static files. -To create a self-hosted cluster, pass the flag `--feature-gates=SelfHosting=true` to `kubeadm init`. - -{{< caution >}} -`SelfHosting` is an alpha feature. It is deprecated in 1.12 -and will be removed in 1.13. -{{< /caution >}} +To create a self-hosted cluster see the `kubeadm alpha selfhosting` command. #### Caveats -Self-hosting in 1.8 and later has some important limitations. In particular, a -self-hosted cluster _cannot recover from a reboot of the control-plane node_ -without manual intervention. This and other limitations are expected to be -resolved before self-hosting graduates from alpha. +1. Self-hosting in 1.8 and later has some important limitations. In particular, a + self-hosted cluster _cannot recover from a reboot of the control-plane node_ + without manual intervention. -By default, self-hosted control plane Pods rely on credentials loaded from -[`hostPath`](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath) -volumes. Except for initial creation, these credentials are not managed by -kubeadm. +1. A self-hosted cluster is not upgradeable using `kubeadm upgrade`. -In kubeadm 1.8, the self-hosted portion of the control plane does not include etcd, -which still runs as a static Pod. +1. By default, self-hosted control plane Pods rely on credentials loaded from + [`hostPath`](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath) + volumes. Except for initial creation, these credentials are not managed by + kubeadm. + +1. The self-hosted portion of the control plane does not include etcd, + which still runs as a static Pod. 
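+
+With those caveats in mind, the conversion itself is a single alpha command. A minimal sketch, assuming a control plane that kubeadm has already initialized on this node:
+
+```shell
+# Pivot the static-Pod-hosted control plane into self-hosted DaemonSets.
+sudo kubeadm alpha selfhosting pivot
+```
+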
#### Process @@ -345,35 +328,16 @@ In summary, `kubeadm alpha selfhosting` works as follows: ### Running kubeadm without an internet connection -For running kubeadm without an internet connection you have to pre-pull the required master images for the version of choice: - -| Image Name | v1.10 release branch version | -|--------------------------------------------|------------------------------| -| k8s.gcr.io/kube-apiserver-${ARCH} | v1.10.x | -| k8s.gcr.io/kube-controller-manager-${ARCH} | v1.10.x | -| k8s.gcr.io/kube-scheduler-${ARCH} | v1.10.x | -| k8s.gcr.io/kube-proxy-${ARCH} | v1.10.x | -| k8s.gcr.io/etcd-${ARCH} | 3.1.12 | -| k8s.gcr.io/pause-${ARCH} | 3.1 | -| k8s.gcr.io/k8s-dns-sidecar-${ARCH} | 1.14.8 | -| k8s.gcr.io/k8s-dns-kube-dns-${ARCH} | 1.14.8 | -| k8s.gcr.io/k8s-dns-dnsmasq-nanny-${ARCH} | 1.14.8 | -| coredns/coredns | 1.0.6 | - -Here `v1.10.x` means the "latest patch release of the v1.10 branch". - -`${ARCH}` can be one of: `amd64`, `arm`, `arm64`, `ppc64le` or `s390x`. - -If you run Kubernetes version 1.10 or earlier, and if you set `--feature-gates=CoreDNS=true`, -you must also use the `coredns/coredns` image, instead of the three `k8s-dns-*` images. +For running kubeadm without an internet connection you have to pre-pull the required control-plane images. In Kubernetes 1.11 and later, you can list and pull the images using the `kubeadm config images` sub-command: -``` + +```shell kubeadm config images list kubeadm config images pull ``` -Starting with Kubernetes 1.12, the `k8s.gcr.io/kube-*`, `k8s.gcr.io/etcd` and `k8s.gcr.io/pause` images +In Kubernetes 1.12 and later, the `k8s.gcr.io/kube-*`, `k8s.gcr.io/etcd` and `k8s.gcr.io/pause` images don't require an `-${ARCH}` suffix. ### Automating kubeadm @@ -381,7 +345,7 @@ don't require an `-${ARCH}` suffix. Rather than copying the token you obtained from `kubeadm init` to each node, as in the [basic kubeadm tutorial](/docs/setup/independent/create-cluster-kubeadm/), you can parallelize the token distribution for easier automation. To implement this automation, you must -know the IP address that the master will have after it is started. +know the IP address that the control-plane node will have after it is started. 1. Generate a token. This token must have the form `<6 character string>.<16 character string>`. More formally, it must match the regex: @@ -389,7 +353,7 @@ know the IP address that the master will have after it is started. kubeadm can generate a token for you: - ```bash + ```shell kubeadm token generate ``` diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join-phase.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join-phase.md new file mode 100644 index 0000000000000..bb993fa113cc9 --- /dev/null +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join-phase.md @@ -0,0 +1,62 @@ +--- +title: kubeadm join phase +weight: 90 +--- +In v1.14.0, kubeadm introduces the `kubeadm join phase` command with the aim of making kubeadm more modular. This modularity enables you to invoke atomic sub-steps of the join process. +Hence, you can let kubeadm do some parts and fill in yourself where you need customizations. + +`kubeadm join phase` is consistent with the [kubeadm join workflow](/docs/reference/setup-tools/kubeadm/kubeadm-join/#join-workflow), +and behind the scene both use the same code. 
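+
+For example, a single sub-step can be invoked on its own. A minimal sketch, driven by a kubeadm config file:
+
+```shell
+# Run only the join preflight checks; the remaining phases are left untouched.
+kubeadm join phase preflight --config kubeadm-config.yml
+```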
+ +## kubeadm join phase {#cmd-join-phase} + +{{< tabs name="tab-phase" >}} +{{< tab name="phase" include="generated/kubeadm_join_phase.md" />}} +{{< /tabs >}} + +## kubeadm join phase preflight {#cmd-join-phase-preflight} + +Using this phase you can execute preflight checks on a joining node. + +{{< tabs name="tab-preflight" >}} +{{< tab name="preflight" include="generated/kubeadm_join_phase_preflight.md" />}} +{{< /tabs >}} + +## kubeadm join phase control-plane-prepare {#cmd-join-phase-control-plane-prepare} + +Using this phase you can prepare a node for serving a control-plane. + +{{< tabs name="tab-control-plane-prepare" >}} +{{< tab name="control-plane-prepare" include="generated/kubeadm_join_phase_control-plane-prepare.md" />}} +{{< tab name="all" include="generated/kubeadm_join_phase_control-plane-prepare_all.md" />}} +{{< tab name="download-certs" include="generated/kubeadm_join_phase_control-plane-prepare_download-certs.md" />}} +{{< tab name="certs" include="generated/kubeadm_join_phase_control-plane-prepare_certs.md" />}} +{{< tab name="kubeconfig" include="generated/kubeadm_join_phase_control-plane-prepare_kubeconfig.md" />}} +{{< tab name="control-plane" include="generated/kubeadm_join_phase_control-plane-prepare_control-plane.md" />}} +{{< /tabs >}} + +## kubeadm join phase kubelet-start {#cmd-join-phase-kubelet-start} + +Using this phase you can write the kubelet settings, certificates and (re)start the kubelet. + +{{< tabs name="tab-kubelet-start" >}} +{{< tab name="kubelet-start" include="generated/kubeadm_join_phase_kubelet-start.md" />}} +{{< /tabs >}} + +## kubeadm join phase control-plane-join {#cmd-join-phase-control-plane-join} + +Using this phase you can join a node as a control-plane instance. + +{{< tabs name="tab-control-plane-join" >}} +{{< tab name="control-plane-join" include="generated/kubeadm_join_phase_control-plane-join.md" />}} +{{< tab name="all" include="generated/kubeadm_join_phase_control-plane-join_all.md" />}} +{{< tab name="etcd" include="generated/kubeadm_join_phase_control-plane-join_etcd.md" />}} +{{< tab name="update-status" include="generated/kubeadm_join_phase_control-plane-join_update-status.md" />}} +{{< tab name="mark-control-plane" include="generated/kubeadm_join_phase_control-plane-join_mark-control-plane.md" />}} +{{< /tabs >}} + +## What's next +* [kubeadm init](/docs/reference/setup-tools/kubeadm/kubeadm-init/) to bootstrap a Kubernetes control-plane node +* [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/) to connect a node to the cluster +* [kubeadm reset](/docs/reference/setup-tools/kubeadm/kubeadm-reset/) to revert any changes made to this host by `kubeadm init` or `kubeadm join` +* [kubeadm alpha](/docs/reference/setup-tools/kubeadm/kubeadm-alpha/) to try experimental functionality diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md index 6c6de5a281606..7852e16af1e0e 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md @@ -14,23 +14,16 @@ This command initializes a Kubernetes worker node and joins it to the cluster. {{% capture body %}} {{< include "generated/kubeadm_join.md" >}} -### The joining workflow +### The join workflow {#join-workflow} -`kubeadm join` bootstraps a Kubernetes worker node and joins it to the cluster. 
-This action consists of the following steps: +`kubeadm join` bootstraps a Kubernetes worker node or a control-plane node and adds it to the cluster. +This action consists of the following steps for worker nodes: 1. kubeadm downloads necessary cluster information from the API server. By default, it uses the bootstrap token and the CA key hash to verify the authenticity of that data. The root CA can also be discovered directly via a file or URL. -1. If kubeadm is invoked with `--feature-gates=DynamicKubeletConfig` enabled, - it first retrieves the kubelet init configuration from the master and writes it to - the disk. When kubelet starts up, kubeadm updates the node `Node.spec.configSource` property of the node. - See [Set Kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file/) - and [Reconfigure a Node's Kubelet in a Live Cluster](/docs/tasks/administer-cluster/reconfigure-kubelet/) - for more information about Dynamic Kubelet Configuration. - 1. Once the cluster information is known, kubelet can start the TLS bootstrapping process. @@ -41,6 +34,40 @@ This action consists of the following steps: 1. Finally, kubeadm configures the local kubelet to connect to the API server with the definitive identity assigned to the node. +For control-plane nodes additional steps are performed: + +1. Downloading certificates shared among control-plane nodes from the cluster + (if explicitly requested by the user). + +1. Generating control-plane component manifests, certificates and kubeconfig. + +1. Adding new local etcd member. + +1. Adding this node to the ClusterStatus of the kubeadm cluster. + +### Using join phases with kubeadm {#join-phases} + +Kubeadm allows you join a node to the cluster in phases. The `kubeadm join phase` command was added in v1.14.0. + +To view the ordered list of phases and sub-phases you can call `kubeadm join --help`. The list will be located +at the top of the help screen and each phase will have a description next to it. +Note that by calling `kubeadm join` all of the phases and sub-phases will be executed in this exact order. + +Some phases have unique flags, so if you want to have a look at the list of available options add `--help`, for example: + +```shell +kubeadm join phase kubelet-start --help +``` + +Similar to the [kubeadm init phase](/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-phases) +command, `kubadm join phase` allows you to skip a list of phases using the `--skip-phases` flag. + +For example: + +```shell +sudo kubeadm join --skip-phases=preflight --config=config.yaml +``` + ### Discovering what cluster CA to trust The kubeadm discovery has several options, each with security tradeoffs. @@ -56,27 +83,35 @@ that the API server certificate is valid under the root CA. The CA key hash has the format `sha256:`. By default, the hash value is returned in the `kubeadm join` command printed at the end of `kubeadm init` or in the output of `kubeadm token create --print-join-command`. It is in a standard format (see [RFC7469](https://tools.ietf.org/html/rfc7469#section-2.4)) and can also be calculated by 3rd party tools or provisioning systems. 
For example, using the OpenSSL CLI: -```bash +```shell openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //' ``` -**Example `kubeadm join` command:** +**Example `kubeadm join` commands:** -```bash +For worker nodes: + +```shell kubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef 1.2.3.4:6443 ``` +For control-plane nodes: + +```shell +kubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef --experimental-control-plane 1.2.3.4:6443 +``` + **Advantages:** - Allows bootstrapping nodes to securely discover a root of trust for the - master even if other worker nodes or the network are compromised. + control-plane node even if other worker nodes or the network are compromised. - Convenient to execute manually since all of the information required fits into a single `kubeadm join` command that is easy to copy and paste. **Disadvantages:** - - The CA hash is not normally known until the master has been provisioned, + - The CA hash is not normally known until the control-plane node has been provisioned, which can make it more difficult to build automated provisioning tools that use kubeadm. By generating your CA in beforehand, you may workaround this limitation though. @@ -86,13 +121,13 @@ kubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert _This was the default in Kubernetes 1.7 and earlier_, but comes with some important caveats. This mode relies only on the symmetric token to sign (HMAC-SHA256) the discovery information that establishes the root of trust for -the master. It's still possible in Kubernetes 1.8 and above using the +the control-plane. It's still possible in Kubernetes 1.8 and above using the `--discovery-token-unsafe-skip-ca-verification` flag, but you should consider using one of the other modes if possible. **Example `kubeadm join` command:** -``` +```shell kubeadm join --token abcdef.1234567890abcdef --discovery-token-unsafe-skip-ca-verification 1.2.3.4:6443` ``` @@ -100,7 +135,7 @@ kubeadm join --token abcdef.1234567890abcdef --discovery-token-unsafe-skip-ca-ve - Still protects against many network-level attacks. - - The token can be generated ahead of time and shared with the master and + - The token can be generated ahead of time and shared with the control-plane node and worker nodes, which can then bootstrap in parallel without coordination. This allows it to be used in many provisioning scenarios. @@ -108,11 +143,11 @@ kubeadm join --token abcdef.1234567890abcdef --discovery-token-unsafe-skip-ca-ve - If an attacker is able to steal a bootstrap token via some vulnerability, they can use that token (along with network-level access) to impersonate the - master to other bootstrapping nodes. This may or may not be an appropriate + control-plane node to other bootstrapping nodes. This may or may not be an appropriate tradeoff in your environment. #### File or HTTPS-based discovery -This provides an out-of-band way to establish a root of trust between the master +This provides an out-of-band way to establish a root of trust between the control-plane node and bootstrapping nodes. Consider using this mode if you are building automated provisioning using kubeadm. @@ -125,12 +160,12 @@ using kubeadm. **Advantages:** - Allows bootstrapping nodes to securely discover a root of trust for the - master even if the network or other worker nodes are compromised. 
+ control-plane node even if the network or other worker nodes are compromised. **Disadvantages:** - Requires that you have some way to carry the discovery information from - the master to the bootstrapping nodes. This might be possible, for example, + the control-plane node to the bootstrapping nodes. This might be possible, for example, via your cloud provider or provisioning tool. The information in this file is not secret, but HTTPS or equivalent is required to ensure its integrity. @@ -145,21 +180,21 @@ By default, there is a CSR auto-approver enabled that basically approves any cli for a kubelet when a Bootstrap Token was used when authenticating. If you don't want the cluster to automatically approve kubelet client certs, you can turn it off by executing this command: -```console +```shell $ kubectl delete clusterrolebinding kubeadm:node-autoapprove-bootstrap ``` After that, `kubeadm join` will block until the admin has manually approved the CSR in flight: -```console -$ kubectl get csr +```shell +kubectl get csr NAME AGE REQUESTOR CONDITION node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ 18s system:bootstrap:878f07 Pending -$ kubectl certificate approve node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ +kubectl certificate approve node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ certificatesigningrequest "node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ" approved -$ kubectl get csr +kubectl get csr NAME AGE REQUESTOR CONDITION node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ 1m system:bootstrap:878f07 Approved,Issued ``` @@ -169,15 +204,15 @@ Only after `kubectl certificate approve` has been run, `kubeadm join` can procee #### Turning off public access to the cluster-info ConfigMap In order to achieve the joining flow using the token as the only piece of validation information, a - ConfigMap with some data needed for validation of the master's identity is exposed publicly by + ConfigMap with some data needed for validation of the control-plane node's identity is exposed publicly by default. While there is no private data in this ConfigMap, some users might wish to turn it off regardless. Doing so will disable the ability to use the `--discovery-token` flag of the `kubeadm join` flow. Here are the steps to do so: * Fetch the `cluster-info` file from the API Server: -```console -$ kubectl -n kube-public get cm cluster-info -o yaml | grep "kubeconfig:" -A11 | grep "apiVersion" -A10 | sed "s/ //" | tee cluster-info.yaml +```shell +kubectl -n kube-public get cm cluster-info -o yaml | grep "kubeconfig:" -A11 | grep "apiVersion" -A10 | sed "s/ //" | tee cluster-info.yaml apiVersion: v1 clusters: - cluster: @@ -195,8 +230,8 @@ users: [] * Turn off public access to the `cluster-info` ConfigMap: -```console -$ kubectl -n kube-public delete rolebinding kubeadm:bootstrap-signer-clusterinfo +```shell +kubectl -n kube-public delete rolebinding kubeadm:bootstrap-signer-clusterinfo ``` These commands should be run after `kubeadm init` but before `kubeadm join`. @@ -214,7 +249,7 @@ contain a `JoinConfiguration` structure. 
To print the default values of `JoinConfiguration` run the following command: -```bash +```shell kubeadm config print-default --api-objects=JoinConfiguration ``` diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm.md index eccf635588e72..80d4dff5b3e61 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm.md @@ -5,6 +5,9 @@ reviewers: - jbeda title: Overview of kubeadm weight: 10 +card: + name: reference + weight: 40 --- Kubeadm is a tool built to provide `kubeadm init` and `kubeadm join` as best-practice “fast paths” for creating Kubernetes clusters. diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md index 5438cd2a7dce1..1d45abedef66c 100644 --- a/content/en/docs/reference/using-api/api-concepts.md +++ b/content/en/docs/reference/using-api/api-concepts.md @@ -317,6 +317,110 @@ Some values of an object are typically generated before the object is persisted. * Any field set by a mutating admission controller * For the `Service` resource: Ports or IPs that kube-apiserver assigns to v1.Service objects -{{% /capture %}} +## Server Side Apply + +{{< feature-state for_k8s_version="v1.14" state="alpha" >}} Server Side Apply allows clients other than kubectl to perform the Apply operation, and will eventually fully replace the complicated Client Side Apply logic that only exists in kubectl. If the Server Side Apply feature is enabled, The `PATCH` endpoint accepts the additional `application/apply-patch+yaml` content type. Users of Server Side Apply can send partially specified objects to this endpoint. An applied config should always include every field that the applier has an opinion about. + +### Enable the Server Side Apply alpha feature + +Server Side Apply is an alpha feature, so it is disabled by default. To turn this [feature gate](/docs/reference/command-line-tools-reference/feature-gates) on, +you need to include in the `--feature-gates ServerSideApply=true` flag when starting `kube-apiserver`. +If you have multiple `kube-apiserver` replicas, all should have the same flag setting. + +### Field Management + +Compared to the `last-applied` annotation managed by `kubectl`, Server Side Apply uses a more declarative approach, which tracks a user's field management, rather than a user's last applied state. This means that as a side effect of using Server Side Apply, information about which field manager manages each field in an object also becomes available. + +For a user to manage a field, in the Server Side Apply sense, means that the user relies on and expects the value of the field not to change. The user who last made an assertion about the value of a field will be recorded as the current field manager. This can be done either by changing the value with `POST`, `PUT`, or non-apply `PATCH`, or by including the field in a config sent to the Server Side Apply endpoint. Any applier that tries to change the field which is managed by someone else will get its request rejected (if not forced, see the Conflicts section below). + +Field management is stored in a newly introduced `managedFields` field that is part of an object's [`metadata`](/docs/reference/generated/kubernetes-api/v1.14/#objectmeta-v1-meta). 
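+
+As a rough sketch of what an apply request looks like on the wire (the API server address, bearer token, and the `test-cm.yaml` payload file below are placeholders), a client could submit its partially specified ConfigMap like this:
+
+```shell
+# PATCH with the apply content type; the fieldManager query parameter identifies the applier.
+# test-cm.yaml contains only the fields this applier has an opinion about.
+curl -k -X PATCH \
+  -H "Authorization: Bearer $TOKEN" \
+  -H "Content-Type: application/apply-patch+yaml" \
+  --data-binary @test-cm.yaml \
+  "https://<apiserver>/api/v1/namespaces/default/configmaps/test-cm?fieldManager=kubectl"
+```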
+ +A simple example of an object created by Server Side Apply could look like this: + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: test-cm + namespace: default + labels: + test-label: test + managedFields: + - manager: kubectl + operation: Apply + apiVersion: v1 + fields: + f:metadata: + f:labels: + f:test-label: {} + f:data: + f:key: {} +data: + key: some value +``` + +The above object contains a single manager in `metadata.managedFields`. The manager consists of basic information about the managing entity itself, like operation type, api version, and the fields managed by it. + +{{< note >}} This field is managed by the apiserver and should not be changed by the user. {{< /note >}} + +### Operations + +The two operation types considered by this feature are `Apply` (`PATCH` with content type `application/apply-patch+yaml`) and `Update` (all other operations which modify the object). Both operations update the `managedFields`, but behave a little differently. + +For instance, only the apply operation fails on conflicts while update does not. Also, apply operations are required to identify themselves by providing a `fieldManager` query parameter, while the query parameter is optional for update operations. + +An example object with multiple managers could look like this: + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: test-cm + namespace: default + labels: + test-label: test + managedFields: + - manager: kubectl + operation: Apply + apiVersion: v1 + fields: + f:metadata: + f:labels: + f:test-label: {} + - manager: kube-controller-manager + operation: Update + apiVersion: v1 + time: '2019-03-30T16:00:00.000Z' + fields: + f:data: + f:key: {} +data: + key: new value +``` + +In this example, a second operation was run as an `Update` by the manager called `kube-controller-manager`. The update changed a value in the data field which caused the field's management to change to the `kube-controller-manager`. +{{< note >}}If this update would have been an `Apply` operation, the operation would have failed due to conflicting ownership.{{< /note >}} + +### Merge Rules + +When a user sends a partially specified object to the Server Side Apply endpoint, the server merges it with the live object favoring the value in the applied config if it is specified twice. If the set of items present in the applied config is not a superset of the items applied by the same user last time, each missing item not managed by any other field manager is removed. For more information about how an object's schema is used to make decisions when merging, see [sigs.k8s.io/structured-merge-diff](https://sigs.k8s.io/structured-merge-diff). + +### Conflicts + +A conflict is a special status error that occurs when an `Apply` operation tries to change a field, which another user also claims to manage. This prevents an applier from unintentionally overwriting the value set by another user. When this occurs, the applier has 3 options to resolve the conflicts: + +* **Overwrite value, become sole manager:** If overwriting the value was intentional (or if the applier is an automated process like a controller) the applier should set the `force` query parameter to true and make the request again. This forces the operation to succeed, changes the value of the field, and removes the field from all other managers' entries in managedFields. 
+* **Don't overwrite value, give up management claim:** If the applier doesn't care about the value of the field anymore, they can remove it from their config and make the request again. This leaves the value unchanged, and causes the field to be removed from the applier's entry in managedFields. +* **Don't overwrite value, become shared manager:** If the applier still cares about the value of the field, but doesn't want to overwrite it, they can change the value of the field in their config to match the value of the object on the server, and make the request again. This leaves the value unchanged, and causes the field's management to be shared by the applier and all other field managers that already claimed to manage it. + +### Comparison with Client Side Apply + +A consequence of the conflict detection and resolution implemented by Server Side Apply is that an applier always has up to date field values in their local state. If they don't, they get a conflict the next time they apply. Any of the three options to resolve conflicts results in the applied config being an up to date subset of the object on the server's fields. + +This is different from Client Side Apply, where outdated values which have been overwritten by other users are left in an applier's local config. These values only become accurate when the user updates that specific field, if ever, and an applier has no way of knowing whether their next apply will overwrite other users' changes. + +Another difference is that an applier using Client Side Apply is unable to change the API version they are using, but Server Side Apply supports this use case. +### Custom Resources +Server Side Apply currently treats all custom resources as unstructured data. All keys are treated the same as struct fields, and all lists are considered atomic. In the future, it will use the validation field in Custom Resource Definitions to allow Custom Resource authors to define how to how to merge their own objects. diff --git a/content/en/docs/reference/using-api/api-overview.md b/content/en/docs/reference/using-api/api-overview.md index 38baa5aa92ac8..f74a204f38116 100644 --- a/content/en/docs/reference/using-api/api-overview.md +++ b/content/en/docs/reference/using-api/api-overview.md @@ -7,6 +7,10 @@ reviewers: - jbeda content_template: templates/concept weight: 10 +card: + name: reference + weight: 50 + title: Overview of API --- {{% capture overview %}} diff --git a/content/en/docs/setup/cri.md b/content/en/docs/setup/cri.md index 6f16f6a8f6644..dfe484aa2b220 100644 --- a/content/en/docs/setup/cri.md +++ b/content/en/docs/setup/cri.md @@ -7,20 +7,56 @@ content_template: templates/concept weight: 100 --- {{% capture overview %}} -Since v1.6.0, Kubernetes has enabled the use of CRI, Container Runtime Interface, by default. -This page contains installation instruction for various runtimes. +{{< feature-state for_k8s_version="v1.6" state="stable" >}} +To run containers in Pods, Kubernetes uses a container runtime. Here are +the installation instruction for various runtimes. {{% /capture %}} {{% capture body %}} -Please proceed with executing the following commands based on your OS as root. -You may become the root user by executing `sudo -i` after SSH-ing to each host. + +{{< caution >}} +A flaw was found in the way runc handled system file descriptors when running containers. +A malicious container could use this flaw to overwrite contents of the runc binary and +consequently run arbitrary commands on the container host system. 
+ +Please refer to this link for more information about this issue +[cve-2019-5736 : runc vulnerability ] (https://access.redhat.com/security/cve/cve-2019-5736) +{{< /caution >}} + +### Applicability + +{{< note >}} +This document is written for users installing CRI onto Linux. For other operating +systems, look for documentation specific to your platform. +{{< /note >}} + +You should execute all the commands in this guide as `root`. For example, prefix commands +with `sudo `, or become `root` and run the commands as that user. + +### Cgroup drivers + +When systemd is chosen as the init system for a Linux distribution, the init process generates +and consumes a root control group (`cgroup`) and acts as a cgroup manager. Systemd has a tight +integration with cgroups and will allocate cgroups per process. It's possible to configure your +container runtime and the kubelet to use `cgroupfs`. Using `cgroupfs` alongside systemd means +that there will then be two different cgroup managers. + +Control groups are used to constrain resources that are allocated to processes. +A single cgroup manager will simplify the view of what resources are being allocated +and will by default have a more consistent view of the available and in-use resources. When we have +two managers we end up with two views of those resources. We have seen cases in the field +where nodes that are configured to use `cgroupfs` for the kubelet and Docker, and `systemd` +for the rest of the processes running on the node becomes unstable under resource pressure. + +Changing the settings such that your container runtime and kubelet use `systemd` as the cgroup driver +stabilized the system. Please note the `native.cgroupdriver=systemd` option in the Docker setup below. ## Docker On each of your machines, install Docker. -Version 18.06 is recommended, but 1.11, 1.12, 1.13, 17.03 and 18.09 are known to work as well. +Version 18.06.2 is recommended, but 1.11, 1.12, 1.13, 17.03 and 18.09 are known to work as well. Keep track of the latest verified Docker version in the Kubernetes release notes. Use the following commands to install Docker on your system: @@ -45,7 +81,7 @@ Use the following commands to install Docker on your system: stable" ## Install docker ce. -apt-get update && apt-get install docker-ce=18.06.0~ce~3-0~ubuntu +apt-get update && apt-get install docker-ce=18.06.2~ce~3-0~ubuntu # Setup daemon. cat > /etc/docker/daemon.json < node-role.kubernetes.io/master:NoSchedule- +``` -# Generate and deploy etcd certificates -export CLUSTER_DOMAIN=$(kubectl get ConfigMap --namespace kube-system coredns -o yaml | awk '/kubernetes/ {print $2}') -tls/certs/gen-cert.sh $CLUSTER_DOMAIN -tls/deploy-certs.sh +To deploy Cilium you just need to run: -# Label kube-dns with fixed identity label -kubectl label -n kube-system pod $(kubectl -n kube-system get pods -l k8s-app=kube-dns -o jsonpath='{range .items[]}{.metadata.name}{" "}{end}') io.cilium.fixed-identity=kube-dns +```shell +kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.4/examples/kubernetes/1.13/cilium.yaml +``` -kubectl create -f ./ +Once all Cilium pods are marked as `READY`, you start using your cluster. 
-# Wait several minutes for Cilium, coredns and etcd pods to converge to a working state +```shell +$ kubectl get pods -n kube-system --selector=k8s-app=cilium +NAME READY STATUS RESTARTS AGE +cilium-drxkl 1/1 Running 0 18m ``` - {{% /tab %}} {{% tab name="Flannel" %}} @@ -341,10 +340,11 @@ Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements). -Note that `flannel` works on `amd64`, `arm`, `arm64` and `ppc64le`. +Note that `flannel` works on `amd64`, `arm`, `arm64`, `ppc64le` and `s390x` under Linux. +Windows (`amd64`) is claimed as supported in v0.11.0 but the usage is undocumented. ```shell -kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml +kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml ``` For more information about `flannel`, see [the CoreOS flannel repository on GitHub @@ -402,6 +402,16 @@ There are multiple, flexible ways to install JuniperContrail/TungstenFabric CNI. Kindly refer to this quickstart: [TungstenFabric](https://tungstenfabric.github.io/website/) {{% /tab %}} + +{{% tab name="Contiv-VPP" %}} +[Contiv-VPP](https://contivpp.io/) employs a programmable CNF vSwitch based on [FD.io VPP](https://fd.io/), +offering feature-rich & high-performance cloud-native networking and services. + +It implements k8s services and network policies in the user space (on VPP). + +Please refer to this installation guide: [Contiv-VPP Manual Installation](https://github.com/contiv/vpp/blob/master/docs/setup/MANUAL_INSTALL.md) +{{% /tab %}} + {{< /tabs >}} diff --git a/content/en/docs/setup/independent/high-availability.md b/content/en/docs/setup/independent/high-availability.md index 10e0e2b32ce37..38f0425dfc530 100644 --- a/content/en/docs/setup/independent/high-availability.md +++ b/content/en/docs/setup/independent/high-availability.md @@ -19,15 +19,12 @@ control plane nodes and etcd members are separated. Before proceeding, you should carefully consider which approach best meets the needs of your applications and environment. [This comparison topic](/docs/setup/independent/ha-topology/) outlines the advantages and disadvantages of each. -Your clusters must run Kubernetes version 1.12 or later. You should also be aware that -setting up HA clusters with kubeadm is still experimental and will be further simplified -in future versions. You might encounter issues with upgrading your clusters, for example. +You should also be aware that setting up HA clusters with kubeadm is still experimental and will be further +simplified in future versions. You might encounter issues with upgrading your clusters, for example. We encourage you to try either approach, and provide us with feedback in the kubeadm [issue tracker](https://github.com/kubernetes/kubeadm/issues/new). -Note that the alpha feature gate `HighAvailability` is deprecated in v1.12 and removed in v1.13. - -See also [The HA upgrade documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-ha-1-13). +See also [The upgrade documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14). {{< caution >}} This page does not address running your cluster on a cloud provider. 
In a cloud @@ -57,28 +54,12 @@ For the external etcd cluster only, you also need: - Three additional machines for etcd members -{{< note >}} -The following examples run Calico as the Pod networking provider. If you run another -networking provider, make sure to replace any default values as needed. -{{< /note >}} - {{% /capture %}} {{% capture steps %}} ## First steps for both methods -{{< note >}} -**Note**: All commands on any control plane or etcd node should be -run as root. -{{< /note >}} - -- Some CNI network plugins like Calico require a CIDR such as `192.168.0.0/16` and - some like Weave do not. See the [CNI network - documentation](/docs/setup/independent/create-cluster-kubeadm/#pod-network). - To add a pod CIDR set the `podSubnet: 192.168.0.0/16` field under - the `networking` object of `ClusterConfiguration`. - ### Create load balancer for kube-apiserver {{< note >}} @@ -119,38 +100,6 @@ option. Your cluster requirements may need a different configuration. 1. Add the remaining control plane nodes to the load balancer target group. -### Configure SSH - -SSH is required if you want to control all nodes from a single machine. - -1. Enable ssh-agent on your main device that has access to all other nodes in - the system: - - ``` - eval $(ssh-agent) - ``` - -1. Add your SSH identity to the session: - - ``` - ssh-add ~/.ssh/path_to_private_key - ``` - -1. SSH between nodes to check that the connection is working correctly. - - - When you SSH to any node, make sure to add the `-A` flag: - - ``` - ssh -A 10.0.0.7 - ``` - - - When using sudo on any node, make sure to preserve the environment so SSH - forwarding works: - - ``` - sudo -E -s - ``` - ## Stacked control plane and etcd nodes ### Steps for the first control plane node @@ -160,9 +109,6 @@ SSH is required if you want to control all nodes from a single machine. apiVersion: kubeadm.k8s.io/v1beta1 kind: ClusterConfiguration kubernetesVersion: stable - apiServer: - certSANs: - - "LOAD_BALANCER_DNS" controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" - `kubernetesVersion` should be set to the Kubernetes version to use. This @@ -170,131 +116,124 @@ SSH is required if you want to control all nodes from a single machine. - `controlPlaneEndpoint` should match the address or DNS and port of the load balancer. - It's recommended that the versions of kubeadm, kubelet, kubectl and Kubernetes match. -1. Make sure that the node is in a clean state: +{{< note >}} +Some CNI network plugins like Calico require a CIDR such as `192.168.0.0/16` and +some like Weave do not. See the [CNI network +documentation](/docs/setup/independent/create-cluster-kubeadm/#pod-network). +To add a pod CIDR set the `podSubnet: 192.168.0.0/16` field under +the `networking` object of `ClusterConfiguration`. +{{< /note >}} + +1. Initialize the control plane: ```sh - sudo kubeadm init --config=kubeadm-config.yaml + sudo kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs ``` - - You should see something like: + - The `--experimental-upload-certs` flags is used to upload the certificates that should be shared + across all the control-plane instances to the cluster. If instead, you prefer to copy certs across + control-plane nodes manually or using automation tools, please remove this flag and refer to [Manual + certificate distribution](#manual-certs) section bellow. + + After the command completes you should see something like so: ```sh ... 
- You can now join any number of machines by running the following on each node - as root: + You can now join any number of control-plane node by running the following command on each as a root: + kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --experimental-control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07 - kubeadm join 192.168.0.200:6443 --token j04n3m.octy8zely83cy2ts --discovery-token-ca-cert-hash sha256:84938d2a22203a8e56a787ec0c6ddad7bc7dbd52ebabc62fd5f4dbea72b14d1f - ``` - -1. Copy this output to a text file. You will need it later to join other control plane nodes to the - cluster. - -1. Apply the Weave CNI plugin: + Please note that the certificate-key gives access to cluster sensitive data, keep it secret! + As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward. - ```sh - kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" + Then you can join any number of worker nodes by running the following on each as root: + kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 ``` -1. Type the following and watch the pods of the components get started: + - Copy this output to a text file. You will need it later to join control plane and worker nodes to the cluster. + - When `--experimental-upload-certs` is used with `kubeadm init`, the certificates of the primary control plane + are encrypted and uploaded in the `kubeadm-certs` Secret. + - To re-upload the certificates and generate a new decryption key, use the following command on a control plane + node that is already joined to the cluster: - ```sh - kubectl get pod -n kube-system -w - ``` - - - It's recommended that you join new control plane nodes only after the first node has finished initializing. + ```sh + sudo kubeadm init phase upload-certs --experimental-upload-certs + ``` -1. Copy the certificate files from the first control plane node to the rest: - - In the following example, replace `CONTROL_PLANE_IPS` with the IP addresses of the - other control plane nodes. - ```sh - USER=ubuntu # customizable - CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8" - for host in ${CONTROL_PLANE_IPS}; do - scp /etc/kubernetes/pki/ca.crt "${USER}"@$host: - scp /etc/kubernetes/pki/ca.key "${USER}"@$host: - scp /etc/kubernetes/pki/sa.key "${USER}"@$host: - scp /etc/kubernetes/pki/sa.pub "${USER}"@$host: - scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host: - scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host: - scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt - scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key - scp /etc/kubernetes/admin.conf "${USER}"@$host: - done - ``` +{{< note >}} +The `kubeadm-certs` Secret and decryption key expire after two hours. +{{< /note >}} {{< caution >}} -Copy only the certificates in the above list. kubeadm will take care of generating the rest of the certificates -with the required SANs for the joining control-plane instances. If you copy all the certificates by mistake, -the creation of additional nodes could fail due to a lack of required SANs. +As stated in the command output, the certificate-key gives access to cluster sensitive data, keep it secret! 
{{< /caution >}} -### Steps for the rest of the control plane nodes +1. Apply the CNI plugin of your choice: + + [Follow these instructions](/docs/setup/independent/create-cluster-kubeadm/#pod-network) to install + the CNI provider. Make sure the configuration corresponds to the Pod CIDR specified in the kubeadm + configuration file if applicable. -1. Move the files created by the previous step where `scp` was used: + In this example we are using Weave Net: ```sh - USER=ubuntu # customizable - mkdir -p /etc/kubernetes/pki/etcd - mv /home/${USER}/ca.crt /etc/kubernetes/pki/ - mv /home/${USER}/ca.key /etc/kubernetes/pki/ - mv /home/${USER}/sa.pub /etc/kubernetes/pki/ - mv /home/${USER}/sa.key /etc/kubernetes/pki/ - mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/ - mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/ - mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt - mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key - mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf + kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" ``` - This process writes all the requested files in the `/etc/kubernetes` folder. - -1. Start `kubeadm join` on this node using the join command that was previously given to you by `kubeadm init` on - the first node. It should look something like this: +1. Type the following and watch the pods of the control plane components get started: ```sh - sudo kubeadm join 192.168.0.200:6443 --token j04n3m.octy8zely83cy2ts --discovery-token-ca-cert-hash sha256:84938d2a22203a8e56a787ec0c6ddad7bc7dbd52ebabc62fd5f4dbea72b14d1f --experimental-control-plane + kubectl get pod -n kube-system -w ``` - - Notice the addition of the `--experimental-control-plane` flag. This flag automates joining this - control plane node to the cluster. +### Steps for the rest of the control plane nodes + +{{< caution >}} +You must join new control plane nodes sequentially, only after the first node has finished initializing. +{{< /caution >}} -1. Type the following and watch the pods of the components get started: +For each additional control plane node you should: + +1. Execute the join command that was previously given to you by the `kubeadm init` output on the first node. + It should look something like this: ```sh - kubectl get pod -n kube-system -w + sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --experimental-control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07 ``` -1. Repeat these steps for the rest of the control plane nodes. + - The `--experimental-control-plane` flag tells `kubeadm join` to create a new control plane. + - The `--certificate-key ...` will cause the control plane certificates to be downloaded + from the `kubeadm-certs` Secret in the cluster and be decrypted using the given key. ## External etcd nodes +Setting up a cluster with external etcd nodes is similar to the procedure used for stacked etcd +with the exception that you should setup etcd first, and you should pass the etcd information +in the kubeadm config file. + ### Set up the etcd cluster -- Follow [these instructions](/docs/setup/independent/setup-ha-etcd-with-kubeadm/) - to set up the etcd cluster. +1. Follow [these instructions](/docs/setup/independent/setup-ha-etcd-with-kubeadm/) + to set up the etcd cluster. -### Set up the first control plane node +1. 
Setup SSH as described [here](#manual-certs). -1. Copy the following files from any node from the etcd cluster to this node: +1. Copy the following files from any etcd node in the cluster to the first control plane node: ```sh export CONTROL_PLANE="ubuntu@10.0.0.7" - +scp /etc/kubernetes/pki/etcd/ca.crt "${CONTROL_PLANE}": - +scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${CONTROL_PLANE}": - +scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}": + scp /etc/kubernetes/pki/etcd/ca.crt "${CONTROL_PLANE}": + scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${CONTROL_PLANE}": + scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}": ``` - - Replace the value of `CONTROL_PLANE` with the `user@host` of this machine. + - Replace the value of `CONTROL_PLANE` with the `user@host` of the first control plane machine. + +### Set up the first control plane node 1. Create a file called `kubeadm-config.yaml` with the following contents: apiVersion: kubeadm.k8s.io/v1beta1 kind: ClusterConfiguration kubernetesVersion: stable - apiServer: - certSANs: - - "LOAD_BALANCER_DNS" controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" etcd: external: @@ -306,9 +245,13 @@ the creation of additional nodes could fail due to a lack of required SANs. certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key - - The difference between stacked etcd and external etcd here is that we are using the `external` field for `etcd` in the kubeadm config. In the case of the stacked etcd topology this is managed automatically. +{{< note >}} +The difference between stacked etcd and external etcd here is that we are using +the `external` field for `etcd` in the kubeadm config. In the case of the stacked +etcd topology this is managed automatically. +{{< /note >}} - - Replace the following variables in the template with the appropriate values for your cluster: + - Replace the following variables in the config template with the appropriate values for your cluster: - `LOAD_BALANCER_DNS` - `LOAD_BALANCER_PORT` @@ -316,11 +259,13 @@ the creation of additional nodes could fail due to a lack of required SANs. - `ETCD_1_IP` - `ETCD_2_IP` -1. Run `kubeadm init --config kubeadm-config.yaml` on this node. +The following steps are exactly the same as described for stacked etcd setup: -1. Write the join command that is returned to a text file for later use. +1. Run `sudo kubeadm init --config kubeadm-config.yaml --experimental-upload-certs` on this node. -1. Apply the Weave CNI plugin: +1. Write the output join commands that are returned to a text file for later use. + +1. Apply the CNI plugin of your choice. The given example is for Weave Net: ```sh kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" @@ -328,27 +273,103 @@ the creation of additional nodes could fail due to a lack of required SANs. ### Steps for the rest of the control plane nodes -To add the rest of the control plane nodes, follow [these instructions](#steps-for-the-rest-of-the-control-plane-nodes). -The steps are the same as for the stacked etcd setup, with the exception that a local -etcd member is not created. - -To summarize: +The steps are the same as for the stacked etcd setup: - Make sure the first control plane node is fully initialized. -- Copy certificates between the first control plane node and the other control plane nodes. 
-- Join each control plane node with the join command you saved to a text file, plus add the `--experimental-control-plane` flag. +- Join each control plane node with the join command you saved to a text file. It's recommended +to join the control plane nodes one at a time. +- Don't forget that the decryption key from `--certificate-key` expires after two hours, by default. ## Common tasks after bootstrapping control plane -### Install a pod network +### Install workers -[Follow these instructions](/docs/setup/independent/create-cluster-kubeadm/#pod-network) to install -the pod network. Make sure this corresponds to whichever pod CIDR you provided -in the master configuration file. +Worker nodes can be joined to the cluster with the command you stored previously +as the output from the `kubeadm init` command: -### Install workers +```sh +sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 +``` + +## Manual certificate distribution {#manual-certs} + +If you choose to not use `kubeadm init` with the `--experimental-upload-certs` flag this means that +you are going to have to manually copy the certificates from the primary control plane node to the +joining control plane nodes. + +There are many ways to do this. In the following example we are using `ssh` and `scp`: + +SSH is required if you want to control all nodes from a single machine. + +1. Enable ssh-agent on your main device that has access to all other nodes in + the system: + + ``` + eval $(ssh-agent) + ``` + +1. Add your SSH identity to the session: + + ``` + ssh-add ~/.ssh/path_to_private_key + ``` + +1. SSH between nodes to check that the connection is working correctly. + + - When you SSH to any node, make sure to add the `-A` flag: + + ``` + ssh -A 10.0.0.7 + ``` -Each worker node can now be joined to the cluster with the command returned from any of the -`kubeadm init` commands. The flag `--experimental-control-plane` should not be added to worker nodes. + - When using sudo on any node, make sure to preserve the environment so SSH + forwarding works: + + ``` + sudo -E -s + ``` + +1. After configuring SSH on all the nodes you should run the following script on the first control plane node after + running `kubeadm init`. This script will copy the certificates from the first control plane node to the other + control plane nodes: + + In the following example, replace `CONTROL_PLANE_IPS` with the IP addresses of the + other control plane nodes. + ```sh + USER=ubuntu # customizable + CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8" + for host in ${CONTROL_PLANE_IPS}; do + scp /etc/kubernetes/pki/ca.crt "${USER}"@$host: + scp /etc/kubernetes/pki/ca.key "${USER}"@$host: + scp /etc/kubernetes/pki/sa.key "${USER}"@$host: + scp /etc/kubernetes/pki/sa.pub "${USER}"@$host: + scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host: + scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host: + scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt + scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key + done + ``` + +{{< caution >}} +Copy only the certificates in the above list. kubeadm will take care of generating the rest of the certificates +with the required SANs for the joining control-plane instances. If you copy all the certificates by mistake, +the creation of additional nodes could fail due to a lack of required SANs. +{{< /caution >}} + +1. 
Then on each joining control plane node you have to run the following script before running `kubeadm join`. + This script will move the previously copied certificates from the home directory to `/etc/kubernetes/pki`: + + ```sh + USER=ubuntu # customizable + mkdir -p /etc/kubernetes/pki/etcd + mv /home/${USER}/ca.crt /etc/kubernetes/pki/ + mv /home/${USER}/ca.key /etc/kubernetes/pki/ + mv /home/${USER}/sa.pub /etc/kubernetes/pki/ + mv /home/${USER}/sa.key /etc/kubernetes/pki/ + mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/ + mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/ + mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt + mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key + ``` {{% /capture %}} diff --git a/content/en/docs/setup/independent/install-kubeadm.md b/content/en/docs/setup/independent/install-kubeadm.md index e1b821e374b59..db25539b94eee 100644 --- a/content/en/docs/setup/independent/install-kubeadm.md +++ b/content/en/docs/setup/independent/install-kubeadm.md @@ -2,6 +2,10 @@ title: Installing kubeadm content_template: templates/task weight: 20 +card: + name: setup + weight: 20 + title: Install the kubeadm setup tool --- {{% capture overview %}} @@ -123,7 +127,7 @@ You will install these packages on all of your machines: * `kubectl`: the command line util to talk to your cluster. kubeadm **will not** install or manage `kubelet` or `kubectl` for you, so you will -need to ensure they match the version of the Kubernetes control panel you want +need to ensure they match the version of the Kubernetes control plane you want kubeadm to install for you. If you do not, there is a risk of a version skew occurring that can lead to unexpected, buggy behaviour. However, _one_ minor version skew between the kubelet and the control plane is supported, but the kubelet version may never exceed the API diff --git a/content/en/docs/setup/independent/kubelet-integration.md b/content/en/docs/setup/independent/kubelet-integration.md index 03feb7cf4dcf2..d5cc7d31326a8 100644 --- a/content/en/docs/setup/independent/kubelet-integration.md +++ b/content/en/docs/setup/independent/kubelet-integration.md @@ -193,10 +193,10 @@ The DEB and RPM packages shipped with the Kubernetes releases are: | Package name | Description | |--------------|-------------| -| `kubeadm` | Installs the `/usr/bin/kubeadm` CLI tool and [The kubelet drop-in file(#the-kubelet-drop-in-file-for-systemd) for the kubelet. | +| `kubeadm` | Installs the `/usr/bin/kubeadm` CLI tool and the [kubelet drop-in file](#the-kubelet-drop-in-file-for-systemd) for the kubelet. | | `kubelet` | Installs the `/usr/bin/kubelet` binary. | | `kubectl` | Installs the `/usr/bin/kubectl` binary. | | `kubernetes-cni` | Installs the official CNI binaries into the `/opt/cni/bin` directory. | -| `cri-tools` | Installs the `/usr/bin/crictl` binary from [https://github.com/kubernetes-incubator/cri-tools](https://github.com/kubernetes-incubator/cri-tools). | +| `cri-tools` | Installs the `/usr/bin/crictl` binary from the [cri-tools git repository](https://github.com/kubernetes-incubator/cri-tools). | {{% /capture %}} diff --git a/content/en/docs/setup/independent/troubleshooting-kubeadm.md b/content/en/docs/setup/independent/troubleshooting-kubeadm.md index cf59d2c191480..359ce55b747b5 100644 --- a/content/en/docs/setup/independent/troubleshooting-kubeadm.md +++ b/content/en/docs/setup/independent/troubleshooting-kubeadm.md @@ -56,7 +56,7 @@ This may be caused by a number of problems.
The most common are: ``` There are two common ways to fix the cgroup driver problem: - + 1. Install Docker again following instructions [here](/docs/setup/independent/install-kubeadm/#installing-docker). 1. Change the kubelet config to match the Docker cgroup driver manually, you can refer to @@ -100,9 +100,8 @@ Right after `kubeadm init` there should not be any pods in these states. until you have deployed the network solution. - If you see Pods in the `RunContainerError`, `CrashLoopBackOff` or `Error` state after deploying the network solution and nothing happens to `coredns` (or `kube-dns`), - it's very likely that the Pod Network solution and nothing happens to the DNS server, it's very - likely that the Pod Network solution that you installed is somehow broken. You - might have to grant it more RBAC privileges or use a newer version. Please file + it's very likely that the Pod Network solution that you installed is somehow broken. + You might have to grant it more RBAC privileges or use a newer version. Please file an issue in the Pod Network providers' issue tracker and get the issue triaged there. - If you install a version of Docker older than 1.12.1, remove the `MountFlags=slave` option when booting `dockerd` with `systemd` and restart `docker`. You can see the MountFlags in `/usr/lib/systemd/system/docker.service`. @@ -155,6 +154,18 @@ Unable to connect to the server: x509: certificate signed by unknown authority ( regenerate a certificate if necessary. The certificates in a kubeconfig file are base64 encoded. The `base64 -d` command can be used to decode the certificate and `openssl x509 -text -noout` can be used for viewing the certificate information. +- Unset the `KUBECONFIG` environment variable using: + + ```sh + unset KUBECONFIG + ``` + + Or set it to the default `KUBECONFIG` location: + + ```sh + export KUBECONFIG=/etc/kubernetes/admin.conf + ``` + - Another workaround is to overwrite the existing `kubeconfig` for the "admin" user: ```sh @@ -250,4 +261,23 @@ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/dock yum install docker-ce-18.06.1.ce-3.el7.x86_64 ``` +## Not possible to pass a comma separated list of values to arguments inside a `--component-extra-args` flag + +`kubeadm init` flags such as `--component-extra-args` allow you to pass custom arguments to a control-plane +component like the kube-apiserver. However, this mechanism is limited due to the underlying type used for parsing +the values (`mapStringString`). + +If you decide to pass an argument that supports multiple, comma-separated values such as +`--apiserver-extra-args "enable-admission-plugins=LimitRanger,NamespaceExists"` this flag will fail with +`flag: malformed pair, expect string=string`. This happens because the list of arguments for +`--apiserver-extra-args` expects `key=value` pairs and in this case `NamespaceExists` is considered +as a key that is missing a value. + +Alternatively, you can try separating the `key=value` pairs like so: +`--apiserver-extra-args "enable-admission-plugins=LimitRanger,enable-admission-plugins=NamespaceExists"` +but this will result in the key `enable-admission-plugins` only having the value of `NamespaceExists`. + +A known workaround is to use the kubeadm +[configuration file](https://kubernetes.io/docs/setup/independent/control-plane-flags/#apiserver-flags).
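As a rough sketch of that configuration-file workaround (assuming the `kubeadm.k8s.io/v1beta1` `ClusterConfiguration` API and its `apiServer.extraArgs` field described on the linked page; the file name and the admission-plugin list below are placeholders, not values taken from this page), the comma-separated value can be supplied through the config file instead of a flag:

```sh
# Sketch only: write a kubeadm configuration file that carries the
# comma-separated list as a single YAML string (placeholder values).
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  extraArgs:
    enable-admission-plugins: "LimitRanger,NamespaceExists"
EOF

# The YAML value is handed to the kube-apiserver as one argument,
# so the comma-separated list is preserved intact.
sudo kubeadm init --config kubeadm-config.yaml
```

Because the plugin list is an ordinary YAML string rather than a `mapStringString` flag value, it never goes through the flag parsing that rejects the comma.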
+ {{% /capture %}} diff --git a/content/en/docs/setup/minikube.md b/content/en/docs/setup/minikube.md index 122ce8b598841..0a8d06fe3dd40 100644 --- a/content/en/docs/setup/minikube.md +++ b/content/en/docs/setup/minikube.md @@ -47,30 +47,50 @@ Note that the IP below is dynamic and can change. It can be retrieved with `mini * none (Runs the Kubernetes components on the host and not in a VM. Using this driver requires Docker ([docker install](https://docs.docker.com/install/linux/docker-ce/ubuntu/)) and a Linux environment) ```shell -$ minikube start +minikube start +``` +``` Starting local Kubernetes cluster... Running pre-create checks... Creating machine... Starting local Kubernetes cluster... - -$ kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080 +``` +```shell +kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080 +``` +``` deployment.apps/hello-minikube created -$ kubectl expose deployment hello-minikube --type=NodePort -service/hello-minikube exposed +``` +```shell +kubectl expose deployment hello-minikube --type=NodePort +``` +``` +service/hello-minikube exposed +``` +``` # We have now launched an echoserver pod but we have to wait until the pod is up before curling/accessing it # via the exposed service. # To check whether the pod is up and running we can use the following: -$ kubectl get pod +kubectl get pod +``` +``` NAME READY STATUS RESTARTS AGE hello-minikube-3383150820-vctvh 0/1 ContainerCreating 0 3s +``` +``` # We can see that the pod is still being created from the ContainerCreating status -$ kubectl get pod +kubectl get pod +``` +``` NAME READY STATUS RESTARTS AGE hello-minikube-3383150820-vctvh 1/1 Running 0 13s +``` +``` # We can see that the pod is now Running and we will now be able to curl it: -$ curl $(minikube service hello-minikube --url) - +curl $(minikube service hello-minikube --url) +``` +``` Hostname: hello-minikube-7c77b68cff-8wdzq @@ -96,13 +116,26 @@ Request Headers: Request Body: -no body in request- +``` - -$ kubectl delete services hello-minikube +```shell +kubectl delete services hello-minikube +``` +``` service "hello-minikube" deleted -$ kubectl delete deployment hello-minikube +``` + +```shell +kubectl delete deployment hello-minikube +``` +``` deployment.extensions "hello-minikube" deleted -$ minikube stop +``` + +```shell +minikube stop +``` +``` Stopping local Kubernetes cluster... Stopping "minikube"... ``` @@ -114,7 +147,7 @@ Stopping "minikube"... 
To use [containerd](https://github.com/containerd/containerd) as the container runtime, run: ```bash -$ minikube start \ +minikube start \ --network-plugin=cni \ --enable-default-cni \ --container-runtime=containerd \ @@ -124,7 +157,7 @@ $ minikube start \ Or you can use the extended version: ```bash -$ minikube start \ +minikube start \ --network-plugin=cni \ --enable-default-cni \ --extra-config=kubelet.container-runtime=remote \ @@ -138,7 +171,7 @@ $ minikube start \ To use [CRI-O](https://github.com/kubernetes-incubator/cri-o) as the container runtime, run: ```bash -$ minikube start \ +minikube start \ --network-plugin=cni \ --enable-default-cni \ --container-runtime=cri-o \ @@ -148,7 +181,7 @@ $ minikube start \ Or you can use the extended version: ```bash -$ minikube start \ +minikube start \ --network-plugin=cni \ --enable-default-cni \ --extra-config=kubelet.container-runtime=remote \ @@ -162,7 +195,7 @@ $ minikube start \ To use [rkt](https://github.com/rkt/rkt) as the container runtime run: ```shell -$ minikube start \ +minikube start \ --network-plugin=cni \ --enable-default-cni \ --container-runtime=rkt @@ -269,7 +302,7 @@ To set the `AuthorizationMode` on the `apiserver` to `RBAC`, you can use: `--ext ### Stopping a Cluster The `minikube stop` command can be used to stop your cluster. This command shuts down the Minikube Virtual Machine, but preserves all cluster state and data. -Starting the cluster again will restore it to it's previous state. +Starting the cluster again will restore it to its previous state. ### Deleting a Cluster The `minikube delete` command can be used to delete your cluster. @@ -379,7 +412,7 @@ To do this, pass the required environment variables as flags during `minikube st For example: ```shell -$ minikube start --docker-env http_proxy=http://$YOURPROXY:PORT \ +minikube start --docker-env http_proxy=http://$YOURPROXY:PORT \ --docker-env https_proxy=https://$YOURPROXY:PORT ``` @@ -387,7 +420,7 @@ If your Virtual Machine address is 192.168.99.100, then chances are your proxy s To by-pass proxy configuration for this IP address, you should modify your no_proxy settings. You can do so with: ```shell -$ export no_proxy=$no_proxy,$(minikube ip) +export no_proxy=$no_proxy,$(minikube ip) ``` ## Known Issues @@ -407,8 +440,9 @@ For more information about Minikube, see the [proposal](https://git.k8s.io/commu * **Goals and Non-Goals**: For the goals and non-goals of the Minikube project, please see our [roadmap](https://git.k8s.io/minikube/docs/contributors/roadmap.md). * **Development Guide**: See [CONTRIBUTING.md](https://git.k8s.io/minikube/CONTRIBUTING.md) for an overview of how to send pull requests. * **Building Minikube**: For instructions on how to build/test Minikube from source, see the [build guide](https://git.k8s.io/minikube/docs/contributors/build_guide.md). -* **Adding a New Dependency**: For instructions on how to add a new dependency to Minikube see the [adding dependencies guide](https://git.k8s.io/minikube/docs/contributors/adding_a_dependency.md). -* **Adding a New Addon**: For instruction on how to add a new addon for Minikube see the [adding an addon guide](https://git.k8s.io/minikube/docs/contributors/adding_an_addon.md). +* **Adding a New Dependency**: For instructions on how to add a new dependency to Minikube, see the [adding dependencies guide](https://git.k8s.io/minikube/docs/contributors/adding_a_dependency.md). 
+* **Adding a New Addon**: For instructions on how to add a new addon for Minikube, see the [adding an addon guide](https://git.k8s.io/minikube/docs/contributors/adding_an_addon.md). +* **MicroK8s**: Linux users wishing to avoid running a virtual machine may consider [MicroK8s](https://microk8s.io/) as an alternative. ## Community diff --git a/content/en/docs/setup/multiple-zones.md b/content/en/docs/setup/multiple-zones.md index da31844a01b9a..4beab2b3b9169 100644 --- a/content/en/docs/setup/multiple-zones.md +++ b/content/en/docs/setup/multiple-zones.md @@ -5,8 +5,17 @@ reviewers: - quinton-hoole title: Running in Multiple Zones weight: 90 +content_template: templates/concept --- +{{% capture overview %}} + +This page describes how to run a cluster in multiple zones. + +{{% /capture %}} + +{{% capture body %}} + ## Introduction Kubernetes 1.2 adds support for running a single cluster in multiple failure zones @@ -27,8 +36,6 @@ add similar support for other clouds or even bare metal, by simply arranging for the appropriate labels to be added to nodes and volumes). -{{< toc >}} - ## Functionality When nodes are started, the kubelet automatically adds labels to them with @@ -122,9 +129,12 @@ labels are `failure-domain.beta.kubernetes.io/region` for the region, and `failure-domain.beta.kubernetes.io/zone` for the zone: ```shell -> kubectl get nodes --show-labels +kubectl get nodes --show-labels +``` +The output is similar to this: +```shell NAME STATUS ROLES AGE VERSION LABELS kubernetes-master Ready,SchedulingDisabled 6m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master kubernetes-minion-87j9 Ready 6m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9 @@ -158,8 +168,12 @@ View the nodes again; 3 more nodes should have launched and be tagged in us-central1-b: ```shell -> kubectl get nodes --show-labels +kubectl get nodes --show-labels +``` + +The output is similar to this: +```shell NAME STATUS ROLES AGE VERSION LABELS kubernetes-master Ready,SchedulingDisabled 16m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master kubernetes-minion-281d Ready 2m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d @@ -175,7 +189,7 @@ kubernetes-minion-wf8i Ready 2m v1.13.0 Create a volume using the dynamic volume creation (only PersistentVolumes are supported for zone affinity): ```json -kubectl create -f - < kubectl get pv --show-labels +kubectl get pv --show-labels +``` + +The output is similar to this: + +```shell NAME CAPACITY ACCESSMODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE LABELS pv-gce-mj4gm 5Gi RWO Retain Bound default/claim1 manual 46s failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a ``` @@ -221,7 +240,7 @@ Because GCE PDs / AWS EBS volumes cannot be attached across zones, this means that this pod can only be created in the same zone as the volume: ```yaml -kubectl create -f - < kubectl describe pod mypod | grep Node +kubectl 
describe pod mypod | grep Node +``` + +```shell Node: kubernetes-minion-9vlv/10.240.0.5 -> kubectl get node kubernetes-minion-9vlv --show-labels +``` + +And check node labels: + +```shell +kubectl get node kubernetes-minion-9vlv --show-labels +``` + +```shell NAME STATUS AGE VERSION LABELS kubernetes-minion-9vlv Ready 22m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv ``` @@ -277,18 +307,26 @@ kubectl get nodes --show-labels Create the guestbook-go example, which includes an RC of size 3, running a simple web app: ```shell -find kubernetes/examples/guestbook-go/ -name '*.json' | xargs -I {} kubectl create -f {} +find kubernetes/examples/guestbook-go/ -name '*.json' | xargs -I {} kubectl apply -f {} ``` The pods should be spread across all 3 zones: ```shell -> kubectl describe pod -l app=guestbook | grep Node +kubectl describe pod -l app=guestbook | grep Node +``` + +```shell Node: kubernetes-minion-9vlv/10.240.0.5 Node: kubernetes-minion-281d/10.240.0.8 Node: kubernetes-minion-olsh/10.240.0.11 +``` - > kubectl get node kubernetes-minion-9vlv kubernetes-minion-281d kubernetes-minion-olsh --show-labels +```shell +kubectl get node kubernetes-minion-9vlv kubernetes-minion-281d kubernetes-minion-olsh --show-labels +``` + +```shell NAME STATUS ROLES AGE VERSION LABELS kubernetes-minion-9vlv Ready 34m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv kubernetes-minion-281d Ready 20m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d @@ -300,15 +338,42 @@ Load-balancers span all zones in a cluster; the guestbook-go example includes an example load-balanced service: ```shell -> kubectl describe service guestbook | grep LoadBalancer.Ingress +kubectl describe service guestbook | grep LoadBalancer.Ingress +``` + +The output is similar to this: + +```shell LoadBalancer Ingress: 130.211.126.21 +``` + +Set the above IP: + +```shell +export IP=130.211.126.21 +``` + +Explore with curl via IP: -> ip=130.211.126.21 +```shell +curl -s http://${IP}:3000/env | grep HOSTNAME +``` -> curl -s http://${ip}:3000/env | grep HOSTNAME +The output is similar to this: + +```shell "HOSTNAME": "guestbook-44sep", +``` -> (for i in `seq 20`; do curl -s http://${ip}:3000/env | grep HOSTNAME; done) | sort | uniq +Again, explore multiple times: + +```shell +(for i in `seq 20`; do curl -s http://${IP}:3000/env | grep HOSTNAME; done) | sort | uniq +``` + +The output is similar to this: + +```shell "HOSTNAME": "guestbook-44sep", "HOSTNAME": "guestbook-hum5n", "HOSTNAME": "guestbook-ppm40", @@ -335,3 +400,5 @@ KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2c k KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2b kubernetes/cluster/kube-down.sh KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a kubernetes/cluster/kube-down.sh ``` + +{{% /capture %}} diff --git a/content/en/docs/setup/on-premises-metal/krib.md b/content/en/docs/setup/on-premises-metal/krib.md index 3762068ccddd1..4ee90777e29b7 100644 --- a/content/en/docs/setup/on-premises-metal/krib.md +++ b/content/en/docs/setup/on-premises-metal/krib.md @@ 
-8,7 +8,7 @@ author: Rob Hirschfeld (zehicle) This guide helps to install a Kubernetes cluster hosted on bare metal with [Digital Rebar Provision](https://github.com/digitalrebar/provision) using only its Content packages and *kubeadm*. -Digital Rebar Provision (DRP) is an integrated Golang DHCP, bare metal provisioning (PXE/iPXE) and workflow automation platform. While [DRP can be used to invoke](https://provision.readthedocs.io/en/tip/doc/integrations/ansible.html) [kubespray](../kubespray), it also offers a self-contained Kubernetes installation known as [KRIB (Kubernetes Rebar Integrated Bootstrap)](https://github.com/digitalrebar/provision-content/tree/master/krib). +Digital Rebar Provision (DRP) is an integrated Golang DHCP, bare metal provisioning (PXE/iPXE) and workflow automation platform. While [DRP can be used to invoke](https://provision.readthedocs.io/en/tip/doc/integrations/ansible.html) [kubespray](/docs/setup/custom-cloud/kubespray), it also offers a self-contained Kubernetes installation known as [KRIB (Kubernetes Rebar Integrated Bootstrap)](https://github.com/digitalrebar/provision-content/tree/master/krib). {{< note >}} KRIB is not a _stand-alone_ installer: Digital Rebar templates drive a standard *[kubeadm](/docs/admin/kubeadm/)* configuration that manages the Kubernetes installation with the [Digital Rebar cluster pattern](https://provision.readthedocs.io/en/tip/doc/arch/cluster.html#rs-cluster-pattern) to elect leaders _without external supervision_. diff --git a/content/en/docs/setup/pick-right-solution.md b/content/en/docs/setup/pick-right-solution.md index e32cbc0d4fd91..05df4722204b2 100644 --- a/content/en/docs/setup/pick-right-solution.md +++ b/content/en/docs/setup/pick-right-solution.md @@ -6,6 +6,20 @@ reviewers: title: Picking the Right Solution weight: 10 content_template: templates/concept +card: + name: setup + weight: 20 + anchors: + - anchor: "#hosted-solutions" + title: Hosted Solutions + - anchor: "#turnkey-cloud-solutions" + title: Turnkey Cloud Solutions + - anchor: "#on-premises-turnkey-cloud-solutions" + title: On-Premises Solutions + - anchor: "#custom-solutions" + title: Custom Solutions + - anchor: "#local-machine-solutions" + title: Local Machine --- {{% capture overview %}} @@ -32,14 +46,20 @@ a Kubernetes cluster from scratch. ## Local-machine Solutions +### Community Supported Tools + * [Minikube](/docs/setup/minikube/) is a method for creating a local, single-node Kubernetes cluster for development and testing. Setup is completely automated and doesn't require a cloud provider account. -* [Docker Desktop](https://www.docker.com/products/docker-desktop) is an -easy-to-install application for your Mac or Windows environment that enables you to +* [Kubeadm-dind](https://github.com/kubernetes-sigs/kubeadm-dind-cluster) is a multi-node (while minikube is single-node) Kubernetes cluster which only requires a docker daemon. It uses docker-in-docker technique to spawn the Kubernetes cluster. + +### Ecosystem Tools + +* [Docker Desktop](https://www.docker.com/products/docker-desktop) is an +easy-to-install application for your Mac or Windows environment that enables you to start coding and deploying in containers in minutes on a single-node Kubernetes cluster. -* [Minishift](https://docs.okd.io/latest/minishift/) installs the community version of the Kubernetes enterprise platform OpenShift for local development & testing. 
It offers an All-In-One VM (`minishift start`) for Windows, macOS and Linux and the containeriz based `oc cluster up` (Linux only) and [comes with some easy to install Add Ons](https://github.com/minishift/minishift-addons/tree/master/add-ons). +* [Minishift](https://docs.okd.io/latest/minishift/) installs the community version of the Kubernetes enterprise platform OpenShift for local development & testing. It offers an all-in-one VM (`minishift start`) for Windows, macOS, and Linux. The container start is based on `oc cluster up` (Linux only). You can also install [the included add-ons](https://github.com/minishift/minishift-addons/tree/master/add-ons). * [MicroK8s](https://microk8s.io/) provides a single command installation of the latest Kubernetes release on a local machine for development and testing. Setup is quick, fast (~30 sec) and supports many plugins including Istio with a single command. @@ -47,7 +67,7 @@ cluster. * [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers) is a Terraform/Packer/BASH based Infrastructure as Code (IaC) scripts to create a seven node (1 Boot, 1 Master, 1 Management, 1 Proxy and 3 Workers) LXD cluster on Linux Host. -* [Kubeadm-dind](https://github.com/kubernetes-sigs/kubeadm-dind-cluster) is a multi-node (while minikube is single-node) Kubernetes cluster which only requires a docker daemon. It uses docker-in-docker technique to spawn the Kubernetes cluster. +* [Kind](https://kind.sigs.k8s.io/), Kubernetes IN Docker is a tool for running local Kubernetes clusters using Docker containers as "nodes". It is primarily designed for testing Kubernetes, initially targeting the conformance tests. * [Ubuntu on LXD](/docs/getting-started-guides/ubuntu/local/) supports a nine-instance deployment on localhost. @@ -69,12 +89,14 @@ cluster. * [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) offers managed Kubernetes clusters. -* [IBM Cloud Kubernetes Service](https://console.bluemix.net/docs/containers/container_index.html) offers managed Kubernetes clusters with isolation choice, operational tools, integrated security insight into images and containers, and integration with Watson, IoT, and data. +* [IBM Cloud Kubernetes Service](https://cloud.ibm.com/docs/containers?topic=containers-container_index#container_index) offers managed Kubernetes clusters with isolation choice, operational tools, integrated security insight into images and containers, and integration with Watson, IoT, and data. * [Kubermatic](https://www.loodse.com) provides managed Kubernetes clusters for various public clouds, including AWS and Digital Ocean, as well as on-premises with OpenStack integration. * [Kublr](https://kublr.com) offers enterprise-grade secure, scalable, highly reliable Kubernetes clusters on AWS, Azure, GCP, and on-premise. It includes out-of-the-box backup and disaster recovery, multi-cluster centralized logging and monitoring, and built-in alerting. +* [KubeSail](https://kubesail.com) is an easy, free way to try Kubernetes. + * [Madcore.Ai](https://madcore.ai) is devops-focused CLI tool for deploying Kubernetes infrastructure in AWS. Master, auto-scaling group nodes with spot-instances, ingress-ssl-lego, Heapster, and Grafana. * [Nutanix Karbon](https://www.nutanix.com/products/karbon/) is a multi-cluster, highly available Kubernetes management and operational platform that simplifies the provisioning, operations, and lifecycle management of Kubernetes. @@ -83,7 +105,7 @@ cluster. 
* [OpenShift Online](https://www.openshift.com/features/) provides free hosted access for Kubernetes applications. -* [Oracle Container Engine for Kubernetes](https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengoverview.htm) is a fully-managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud. +* [Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)](https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengoverview.htm) is a fully-managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud. * [Platform9](https://platform9.com/products/kubernetes/) offers managed Kubernetes on-premises or on any public cloud, and provides 24/7 health monitoring and alerting. (Kube2go, a web-UI driven Kubernetes cluster deployment service Platform9 released, has been integrated to Platform9 Sandbox.) @@ -117,12 +139,13 @@ few commands. These solutions are actively developed and have active community s * [Madcore.Ai](https://madcore.ai/) * [Nirmata](https://nirmata.com/) * [Nutanix Karbon](https://www.nutanix.com/products/karbon/) -* [Oracle Container Engine for K8s](https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengprerequisites.htm) +* [Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)](https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengprerequisites.htm) * [Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service) * [Rancher 2.0](https://rancher.com/docs/rancher/v2.x/en/) * [Stackpoint.io](/docs/setup/turnkey/stackpoint/) -* [Tectonic by CoreOS](https://coreos.com/tectonic) +* [Supergiant.io](https://supergiant.io/) * [VMware Cloud PKS](https://cloud.vmware.com/vmware-cloud-pks) +* [VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) ## On-Premises turnkey cloud solutions These solutions allow you to create Kubernetes clusters on your internal, secure, cloud network with only a @@ -136,7 +159,7 @@ few commands. * [IBM Cloud Private](https://www.ibm.com/cloud-computing/products/ibm-cloud-private/) * [Kontena Pharos](https://kontena.io/pharos/) * [Kubermatic](https://www.loodse.com) -* [Kublr](https://kublr.com/) +* [Kublr](www.kublr.com/kubernetes.io/setup-hosted-solution) * [Mirantis Cloud Platform](https://www.mirantis.com/software/kubernetes/) * [Nirmata](https://nirmata.com/) * [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) (OCP) by [Red Hat](https://www.redhat.com) @@ -144,24 +167,20 @@ few commands. * [Rancher 2.0](https://rancher.com/docs/rancher/v2.x/en/) * [SUSE CaaS Platform](https://www.suse.com/products/caas-platform) * [SUSE Cloud Application Platform](https://www.suse.com/products/cloud-application-platform/) +* [VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) + ## Custom Solutions Kubernetes can run on a wide range of Cloud providers and bare-metal environments, and with many base operating systems. -If you can find a guide below that matches your needs, use it. It may be a little out of date, but -it will be easier than starting from scratch. If you do want to start from scratch, either because you -have special requirements, or just because you want to understand what is underneath a Kubernetes -cluster, try the [Getting Started from Scratch](/docs/setup/scratch/) guide. 
- -If you are interested in supporting Kubernetes on a new platform, see -[Writing a Getting Started Guide](https://git.k8s.io/community/contributors/devel/writing-a-getting-started-guide.md). +If you can find a guide below that matches your needs, use it. ### Universal If you already have a way to configure hosting resources, use -[kubeadm](/docs/setup/independent/create-cluster-kubeadm/) to easily bring up a cluster +[kubeadm](/docs/setup/independent/create-cluster-kubeadm/) to bring up a cluster with a single command per machine. ### Cloud @@ -169,35 +188,34 @@ with a single command per machine. These solutions are combinations of cloud providers and operating systems not covered by the above solutions. * [Cloud Foundry Container Runtime (CFCR)](https://docs-cfcr.cfapps.io/) -* [CoreOS on AWS or GCE](/docs/setup/custom-cloud/coreos/) * [Gardener](https://gardener.cloud/) -* [Kublr](https://kublr.com/) +* [Kublr](www.kublr.com/kubernetes.io/setup-hosted-solution) * [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/) * [Kubespray](/docs/setup/custom-cloud/kubespray/) * [Rancher Kubernetes Engine (RKE)](https://github.com/rancher/rke) +* [VMware Essential PKS](https://cloud.vmware.com/vmware-essential-PKS) ### On-Premises VMs * [Cloud Foundry Container Runtime (CFCR)](https://docs-cfcr.cfapps.io/) -* [CloudStack](/docs/setup/on-premises-vm/cloudstack/) (uses Ansible, CoreOS and flannel) +* [CloudStack](/docs/setup/on-premises-vm/cloudstack/) (uses Ansible) * [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) (uses Fedora and flannel) * [Nutanix AHV](https://www.nutanix.com/products/acropolis/virtualization/) * [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) (OCP) Kubernetes platform by [Red Hat](https://www.redhat.com) * [oVirt](/docs/setup/on-premises-vm/ovirt/) -* [Vagrant](/docs/setup/custom-cloud/coreos/) (uses CoreOS and flannel) -* [VMware](/docs/setup/custom-cloud/coreos/) (uses CoreOS and flannel) -* [VMware vSphere](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/) +* [VMware Essential PKS](https://cloud.vmware.com/vmware-essential-PKS) +* [VMware vSphere](https://github.com/kubernetes/cloud-provider-vsphere) * [VMware vSphere, OpenStack, or Bare Metal](/docs/getting-started-guides/ubuntu/) (uses Juju, Ubuntu and flannel) ### Bare Metal -* [CoreOS](/docs/setup/custom-cloud/coreos/) * [Digital Rebar](/docs/setup/on-premises-metal/krib/) * [Docker Enterprise](https://www.docker.com/products/docker-enterprise) * [Fedora (Single Node)](/docs/getting-started-guides/fedora/fedora_manual_config/) * [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) * [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/) * [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) (OCP) Kubernetes platform by [Red Hat](https://www.redhat.com) +* [VMware Essential PKS](https://cloud.vmware.com/vmware-essential-PKS) ### Integrations @@ -216,6 +234,7 @@ IaaS Provider | Config. Mgmt. 
| OS | Networking | Docs any | any | multi-support | any CNI | [docs](/docs/setup/independent/create-cluster-kubeadm/) | Project ([SIG-cluster-lifecycle](https://git.k8s.io/community/sig-cluster-lifecycle)) Google Kubernetes Engine | | | GCE | [docs](https://cloud.google.com/kubernetes-engine/docs/) | Commercial Docker Enterprise | custom | [multi-support](https://success.docker.com/article/compatibility-matrix) | [multi-support](https://docs.docker.com/ee/ucp/kubernetes/install-cni-plugin/) | [docs](https://docs.docker.com/ee/) | Commercial +IBM Cloud Private | Ansible | multi-support | multi-support | [docs](https://www.ibm.com/support/knowledgecenter/SSBS6K/product_welcome_cloud_private.html) | [Commercial](https://www.ibm.com/mysupport/s/topic/0TO500000001o0fGAA/ibm-cloud-private?language=en_US&productId=01t50000004X1PWAA0) and [Community](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.2/troubleshoot/support_types.html) | Red Hat OpenShift | Ansible & CoreOS | RHEL & CoreOS | [multi-support](https://docs.openshift.com/container-platform/3.11/architecture/networking/network_plugins.html) | [docs](https://docs.openshift.com/container-platform/3.11/welcome/index.html) | Commercial Stackpoint.io | | multi-support | multi-support | [docs](https://stackpoint.io/) | Commercial AppsCode.com | Saltstack | Debian | multi-support | [docs](https://appscode.com/products/cloud-deployment/) | Commercial @@ -223,7 +242,7 @@ Madcore.Ai | Jenkins DSL | Ubuntu | flannel | [docs](https://madc Platform9 | | multi-support | multi-support | [docs](https://platform9.com/managed-kubernetes/) | Commercial Kublr | custom | multi-support | multi-support | [docs](http://docs.kublr.com/) | Commercial Kubermatic | | multi-support | multi-support | [docs](http://docs.kubermatic.io/) | Commercial -IBM Cloud Kubernetes Service | | Ubuntu | IBM Cloud Networking + Calico | [docs](https://console.bluemix.net/docs/containers/) | Commercial +IBM Cloud Kubernetes Service | | Ubuntu | IBM Cloud Networking + Calico | [docs](https://cloud.ibm.com/docs/containers?topic=containers-container_index#container_index) | Commercial Giant Swarm | | CoreOS | flannel and/or Calico | [docs](https://docs.giantswarm.io/) | Commercial GCE | Saltstack | Debian | GCE | [docs](/docs/setup/turnkey/gce/) | Project Azure Kubernetes Service | | Ubuntu | Azure | [docs](https://docs.microsoft.com/en-us/azure/aks/) | Commercial @@ -233,11 +252,8 @@ Bare-metal | custom | Fedora | flannel | [docs](/docs/gettin libvirt | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) KVM | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) DCOS | Marathon | CoreOS/Alpine | custom | [docs](/docs/getting-started-guides/dcos/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) -AWS | CoreOS | CoreOS | flannel | [docs](/docs/setup/turnkey/aws/) | Community -GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires)) -Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles)) CloudStack | Ansible | CoreOS | flannel | [docs](/docs/getting-started-guides/cloudstack/) | Community 
([@sebgoa](https://github.com/sebgoa)) -VMware vSphere | any | multi-support | multi-support | [docs](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/) | [Community](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/contactus.html) +VMware vSphere | any | multi-support | multi-support | [docs](https://github.com/kubernetes/cloud-provider-vsphere/tree/master/docs) | [Community](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/contactus.html) Bare-metal | custom | CentOS | flannel | [docs](/docs/getting-started-guides/centos/centos_manual_config/) | Community ([@coolsvap](https://github.com/coolsvap)) lxd | Juju | Ubuntu | flannel/canal | [docs](/docs/getting-started-guides/ubuntu/local/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) AWS | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) @@ -251,16 +267,17 @@ AWS | Saltstack | Debian | AWS | [docs](/docs/setup/ AWS | kops | Debian | AWS | [docs](https://github.com/kubernetes/kops/) | Community ([@justinsb](https://github.com/justinsb)) Bare-metal | custom | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY)) oVirt | | | | [docs](/docs/setup/on-premises-vm/ovirt/) | Community ([@simon3z](https://github.com/simon3z)) -any | any | any | any | [docs](/docs/setup/scratch/) | Community ([@erictune](https://github.com/erictune)) any | any | any | any | [docs](http://docs.projectcalico.org/v2.2/getting-started/kubernetes/installation/) | Commercial and Community any | RKE | multi-support | flannel or canal | [docs](https://rancher.com/docs/rancher/v2.x/en/quick-start-guide/) | [Commercial](https://rancher.com/what-is-rancher/overview/) and [Community](https://github.com/rancher/rancher) any | [Gardener Cluster-Operator](https://kubernetes.io/blog/2018/05/17/gardener/) | multi-support | multi-support | [docs](https://gardener.cloud) | [Project/Community](https://github.com/gardener) and [Commercial]( https://cloudplatform.sap.com/) Alibaba Cloud Container Service For Kubernetes | ROS | CentOS | flannel/Terway | [docs](https://www.aliyun.com/product/containerservice) | Commercial Agile Stacks | Terraform | CoreOS | multi-support | [docs](https://www.agilestacks.com/products/kubernetes) | Commercial -IBM Cloud Kubernetes Service | | Ubuntu | calico | [docs](https://console.bluemix.net/docs/containers/container_index.html) | Commercial +IBM Cloud Kubernetes Service | | Ubuntu | calico | [docs](https://cloud.ibm.com/docs/containers?topic=containers-container_index#container_index) | Commercial Digital Rebar | kubeadm | any | metal | [docs](/docs/setup/on-premises-metal/krib/) | Community ([@digitalrebar](https://github.com/digitalrebar)) VMware Cloud PKS | | Photon OS | Canal | [docs](https://docs.vmware.com/en/VMware-Kubernetes-Engine/index.html) | Commercial +VMware Enterprise PKS | BOSH | Ubuntu | VMware NSX-T/flannel | [docs](https://docs.vmware.com/en/VMware-Enterprise-PKS/) | Commercial Mirantis Cloud Platform | Salt | Ubuntu | multi-support | [docs](https://docs.mirantis.com/mcp/) | Commercial +IAAS Provider- Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) | | | multi-support | [docs](https://docs.cloud.oracle.com/iaas/Content/ContEng/Concepts/contengoverview.htm) | 
Commercial {{< note >}} The above table is ordered by version test/used in nodes, followed by support level. diff --git a/content/en/docs/setup/release/building-from-source.md b/content/en/docs/setup/release/building-from-source.md index 866d3d7b23a90..ada3b689704cb 100644 --- a/content/en/docs/setup/release/building-from-source.md +++ b/content/en/docs/setup/release/building-from-source.md @@ -2,13 +2,20 @@ reviewers: - david-mcmahon - jbeda -title: Building from Source +title: Building a release +content_template: templates/concept +card: + name: download + weight: 20 + title: Building a release --- - +{{% capture overview %}} You can either build a release from source or download a pre-built release. If you do not plan on developing Kubernetes itself, we suggest using a pre-built version of the current release, which can be found in the [Release Notes](/docs/setup/release/notes/). The Kubernetes source code can be downloaded from the [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repo. +{{% /capture %}} +{{% capture body %}} ## Building from source If you are simply building a release from source there is no need to set up a full golang environment as all building happens in a Docker container. @@ -22,3 +29,5 @@ make release ``` For more details on the release process see the kubernetes/kubernetes [`build`](http://releases.k8s.io/{{< param "githubbranch" >}}/build/) directory. + +{{% /capture %}} diff --git a/content/en/docs/setup/release/notes.md b/content/en/docs/setup/release/notes.md index 336579e5a8ca8..c024e441356c2 100644 --- a/content/en/docs/setup/release/notes.md +++ b/content/en/docs/setup/release/notes.md @@ -1,5 +1,13 @@ --- title: v1.13 Release Notes +card: + name: download + weight: 10 + anchors: + - anchor: "#" + title: Current Release Notes + - anchor: "#urgent-upgrade-notes" + title: Urgent Upgrade Notes --- diff --git a/content/en/docs/setup/turnkey/azure.md b/content/en/docs/setup/turnkey/azure.md index 028bab47b7342..eccbbca75bcb3 100644 --- a/content/en/docs/setup/turnkey/azure.md +++ b/content/en/docs/setup/turnkey/azure.md @@ -14,20 +14,18 @@ For an example of deploying a Kubernetes cluster onto Azure via the Azure Kubern **[Microsoft Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes)** -## Custom Deployments: ACS-Engine +## Custom Deployments: AKS-Engine The core of the Azure Kubernetes Service is **open source** and available on GitHub for the community -to use and contribute to: **[ACS-Engine](https://github.com/Azure/acs-engine)**. +to use and contribute to: **[AKS-Engine](https://github.com/Azure/aks-engine)**. The legacy [ACS-Engine](https://github.com/Azure/acs-engine) codebase has been deprecated in favor of AKS-engine. -ACS-Engine is a good choice if you need to make customizations to the deployment beyond what the Azure Kubernetes +AKS-Engine is a good choice if you need to make customizations to the deployment beyond what the Azure Kubernetes Service officially supports. These customizations include deploying into existing virtual networks, utilizing multiple -agent pools, and more. Some community contributions to ACS-Engine may even become features of the Azure Kubernetes Service. +agent pools, and more. Some community contributions to AKS-Engine may even become features of the Azure Kubernetes Service. -The input to ACS-Engine is similar to the ARM template syntax used to deploy a cluster directly with the Azure Kubernetes Service. 
-The resulting output is an Azure Resource Manager Template that can then be checked into source control and can then be used -to deploy Kubernetes clusters into Azure. +The input to AKS-Engine is an apimodel JSON file describing the Kubernetes cluster. It is similar to the Azure Resource Manager (ARM) template syntax used to deploy a cluster directly with the Azure Kubernetes Service. The resulting output is an ARM template that can be checked into source control and used to deploy Kubernetes clusters to Azure. -You can get started quickly by following the **[ACS-Engine Kubernetes Walkthrough](https://github.com/Azure/acs-engine/blob/master/docs/kubernetes.md)**. +You can get started by following the **[AKS-Engine Kubernetes Tutorial](https://github.com/Azure/aks-engine/blob/master/docs/tutorials/README.md)**. ## CoreOS Tectonic for Azure diff --git a/content/en/docs/setup/turnkey/icp.md b/content/en/docs/setup/turnkey/icp.md index 7fdf2e7ecf884..df2c835b2a3a0 100644 --- a/content/en/docs/setup/turnkey/icp.md +++ b/content/en/docs/setup/turnkey/icp.md @@ -6,7 +6,7 @@ title: Running Kubernetes on Multiple Clouds with IBM Cloud Private IBM® Cloud Private is a turnkey cloud solution and an on-premises turnkey cloud solution. IBM Cloud Private delivers pure upstream Kubernetes with the typical management components that are required to run real enterprise workloads. These workloads include health management, log management, audit trails, and metering for tracking usage of workloads on the platform. -IBM Cloud Private is available in a community edition and a fully supported enterprise edition. The community edition is available at no charge from Docker Hub. The enterprise edition supports high availability topologies and includes commercial support from IBM for Kubernetes and the IBM Cloud Private management platform. +IBM Cloud Private is available in a community edition and a fully supported enterprise edition. The community edition is available at no charge from [Docker Hub](https://hub.docker.com/r/ibmcom/icp-inception/). The enterprise edition supports high availability topologies and includes commercial support from IBM for Kubernetes and the IBM Cloud Private management platform. If you want to try IBM Cloud Private, you can use either the hosted trial, the tutorial, or the self-guided demo. You can also try the free community edition. For details, see [Get started with IBM Cloud Private](https://www.ibm.com/cloud/private/get-started). 
For more information, explore the following resources: @@ -18,37 +18,26 @@ For more information, explore the following resources: The following modules are available where you can deploy IBM Cloud Private by using Terraform: -* Terraform module: [Deploy IBM Cloud Private on any supported infrastructure vendor](https://github.com/ibm-cloud-architecture/terraform-module-icp-deploy) -* IBM Cloud: [Deploy IBM Cloud Private cluster to IBM Cloud](https://github.com/ibm-cloud-architecture/terraform-icp-ibmcloud) -* VMware: [Deploy IBM Cloud Private to VMware](https://github.com/ibm-cloud-architecture/terraform-icp-vmware) * AWS: [Deploy IBM Cloud Private to AWS](https://github.com/ibm-cloud-architecture/terraform-icp-aws) -* OpenStack: [Deploy IBM Cloud Private to OpenStack](https://github.com/ibm-cloud-architecture/terraform-icp-openstack) * Azure: [Deploy IBM Cloud Private to Azure](https://github.com/ibm-cloud-architecture/terraform-icp-azure) +* IBM Cloud: [Deploy IBM Cloud Private cluster to IBM Cloud](https://github.com/ibm-cloud-architecture/terraform-icp-ibmcloud) +* OpenStack: [Deploy IBM Cloud Private to OpenStack](https://github.com/ibm-cloud-architecture/terraform-icp-openstack) +* Terraform module: [Deploy IBM Cloud Private on any supported infrastructure vendor](https://github.com/ibm-cloud-architecture/terraform-module-icp-deploy) +* VMware: [Deploy IBM Cloud Private to VMware](https://github.com/ibm-cloud-architecture/terraform-icp-vmware) -## IBM Cloud Private on Azure - -You can enable Microsoft Azure as a cloud provider for IBM Cloud Private deployment and take advantage of all the IBM Cloud Private features on the Azure public cloud. For more information, see [Configuring settings to enable Azure Cloud Provider](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.1/manage_cluster/azure_conf_settings.html). - -## IBM Cloud Private on VMware - -You can install IBM Cloud Private on VMware with either Ubuntu or RHEL images. For details, see the following projects: - -* [Installing IBM Cloud Private with Ubuntu](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_prem_ubuntu.md) -* [Installing IBM Cloud Private with Red Hat Enterprise](https://github.com/ibm-cloud-architecture/refarch-privatecloud/tree/master/icp-on-rhel) - -The IBM Cloud Private Hosted service automatically deploys IBM Cloud Private Hosted on your VMware vCenter Server instances. This service brings the power of microservices and containers to your VMware environment on IBM Cloud. With this service, you can extend the same familiar VMware and IBM Cloud Private operational model and tools from on-premises into the IBM Cloud. +## IBM Cloud Private on AWS -For more information, see [IBM Cloud Private Hosted service](https://console.bluemix.net/docs/services/vmwaresolutions/services/icp_overview.html#ibm-cloud-private-hosted-overview). +You can deploy an IBM Cloud Private cluster on Amazon Web Services (AWS) by using either AWS CloudFormation or Terraform. -## IBM Cloud Private on AWS +IBM Cloud Private has a Quick Start that automatically deploys IBM Cloud Private into a new virtual private cloud (VPC) on the AWS Cloud. A regular deployment takes about 60 minutes, and a high availability (HA) deployment takes about 75 minutes to complete. The Quick Start includes AWS CloudFormation templates and a deployment guide. -IBM Cloud Private can run on the AWS cloud platform. 
To deploy IBM Cloud Private in an AWS EC2 environment, see [Installing IBM Cloud Private on AWS](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_aws.md). +This Quick Start is for users who want to explore application modernization and want to accelerate meeting their digital transformation goals, by using IBM Cloud Private and IBM tooling. The Quick Start helps users rapidly deploy a high availability (HA), production-grade, IBM Cloud Private reference architecture on AWS. For all of the details and the deployment guide, see the [IBM Cloud Private on AWS Quick Start](https://aws.amazon.com/quickstart/architecture/ibm-cloud-private/). -Stay tuned for the IBM Cloud Private on AWS Quick Start Guide. +IBM Cloud Private can also run on the AWS cloud platform by using Terraform. To deploy IBM Cloud Private in an AWS EC2 environment, see [Installing IBM Cloud Private on AWS](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_aws.md). -## IBM Cloud Private on VirtualBox +## IBM Cloud Private on Azure -To install IBM Cloud Private to a VirtualBox environment, see [Installing IBM Cloud Private on VirtualBox](https://github.com/ibm-cloud-architecture/refarch-privatecloud-virtualbox). +You can enable Microsoft Azure as a cloud provider for IBM Cloud Private deployment and take advantage of all the IBM Cloud Private features on the Azure public cloud. For more information, see [IBM Cloud Private on Azure](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.2/supported_environments/azure_overview.html). ## IBM Cloud Private on Red Hat OpenShift @@ -62,4 +51,19 @@ Integration capabilities: * Integrated core platform services, such as monitoring, metering, and logging * IBM Cloud Private uses the OpenShift image registry -For more information see, [IBM Cloud Private on OpenShift](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.1/supported_environments/openshift/overview.html). +For more information see, [IBM Cloud Private on OpenShift](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.2/supported_environments/openshift/overview.html). + +## IBM Cloud Private on VirtualBox + +To install IBM Cloud Private to a VirtualBox environment, see [Installing IBM Cloud Private on VirtualBox](https://github.com/ibm-cloud-architecture/refarch-privatecloud-virtualbox). + +## IBM Cloud Private on VMware + +You can install IBM Cloud Private on VMware with either Ubuntu or RHEL images. For details, see the following projects: + +* [Installing IBM Cloud Private with Ubuntu](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_prem_ubuntu.md) +* [Installing IBM Cloud Private with Red Hat Enterprise](https://github.com/ibm-cloud-architecture/refarch-privatecloud/tree/master/icp-on-rhel) + +The IBM Cloud Private Hosted service automatically deploys IBM Cloud Private Hosted on your VMware vCenter Server instances. This service brings the power of microservices and containers to your VMware environment on IBM Cloud. With this service, you can extend the same familiar VMware and IBM Cloud Private operational model and tools from on-premises into the IBM Cloud. + +For more information, see [IBM Cloud Private Hosted service](https://cloud.ibm.com/docs/services/vmwaresolutions/vmonic?topic=vmware-solutions-prod_overview#ibm-cloud-private-hosted). 
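As a rough sketch of how the Terraform-based deployments listed above are usually driven, the workflow for the AWS module looks like the following. The input variables are module-specific and omitted here, so treat this as an outline rather than a complete recipe:

```shell
# Clone the module and run the standard Terraform workflow.
# A terraform.tfvars file with the module's required inputs is assumed to exist.
git clone https://github.com/ibm-cloud-architecture/terraform-icp-aws.git
cd terraform-icp-aws
terraform init
terraform plan
terraform apply
```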
diff --git a/content/en/docs/tasks/access-application-cluster/access-cluster.md b/content/en/docs/tasks/access-application-cluster/access-cluster.md index 2dfc3767d7e90..c7598381b27cc 100644 --- a/content/en/docs/tasks/access-application-cluster/access-cluster.md +++ b/content/en/docs/tasks/access-application-cluster/access-cluster.md @@ -26,7 +26,7 @@ or someone else setup the cluster and provided you with credentials and a locati Check the location and credentials that kubectl knows about with this command: ```shell -$ kubectl config view +kubectl config view ``` Many of the [examples](/docs/user-guide/kubectl-cheatsheet) provide an introduction to using @@ -56,7 +56,7 @@ locating the apiserver and authenticating. Run it like this: ```shell -$ kubectl proxy --port=8080 +kubectl proxy --port=8080 ``` See [kubectl proxy](/docs/reference/generated/kubectl/kubectl-commands/#proxy) for more details. @@ -65,7 +65,12 @@ Then you can explore the API with curl, wget, or a browser, replacing localhost with [::1] for IPv6, like so: ```shell -$ curl http://localhost:8080/api/ +curl http://localhost:8080/api/ +``` + +The output is similar to this: + +```json { "versions": [ "v1" @@ -76,16 +81,19 @@ $ curl http://localhost:8080/api/ ### Without kubectl proxy -Use `kubectl describe secret...` to get the token for the default service account: - -Use `kubectl describe secret` with grep/cut: +Use `kubectl describe secret...` to get the token for the default service account with grep/cut: ```shell -$ APISERVER=$(kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " ") -$ SECRET_NAME=$(kubectl get secrets | grep ^default | cut -f1 -d ' ') -$ TOKEN=$(kubectl describe secret $SECRET_NAME | grep -E '^token' | cut -f2 -d':' | tr -d " ") +APISERVER=$(kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " ") +SECRET_NAME=$(kubectl get secrets | grep ^default | cut -f1 -d ' ') +TOKEN=$(kubectl describe secret $SECRET_NAME | grep -E '^token' | cut -f2 -d':' | tr -d " ") -$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure +curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure +``` + +The output is similar to this: + +```json { "kind": "APIVersions", "versions": [ @@ -103,11 +111,16 @@ $ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure Using `jsonpath`: ```shell -$ APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}') -$ SECRET_NAME=$(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}') -$ TOKEN=$(kubectl get secret $SECRET_NAME -o jsonpath='{.data.token}' | base64 --decode) +APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}') +SECRET_NAME=$(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}') +TOKEN=$(kubectl get secret $SECRET_NAME -o jsonpath='{.data.token}' | base64 --decode) -$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure +curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure +``` + +The output is similar to this: + +```json { "kind": "APIVersions", "versions": [ @@ -239,14 +252,18 @@ Typically, there are several services which are started on a cluster by kube-sys with the `kubectl cluster-info` command: ```shell -$ kubectl cluster-info - - Kubernetes master is running at https://104.197.5.247 - elasticsearch-logging is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy - kibana-logging is running at 
https://104.197.5.247/api/v1/namespaces/kube-system/services/kibana-logging/proxy - kube-dns is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/kube-dns/proxy - grafana is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy - heapster is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/monitoring-heapster/proxy +kubectl cluster-info +``` + +The output is similar to this: + +``` +Kubernetes master is running at https://104.197.5.247 +elasticsearch-logging is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy +kibana-logging is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/kibana-logging/proxy +kube-dns is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/kube-dns/proxy +grafana is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy +heapster is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/monitoring-heapster/proxy ``` This shows the proxy-verb URL for accessing each service. @@ -278,18 +295,18 @@ The supported formats for the name segment of the URL are: * To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use: `https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cluster/health?pretty=true` ```json - { - "cluster_name" : "kubernetes_logging", - "status" : "yellow", - "timed_out" : false, - "number_of_nodes" : 1, - "number_of_data_nodes" : 1, - "active_primary_shards" : 5, - "active_shards" : 5, - "relocating_shards" : 0, - "initializing_shards" : 0, - "unassigned_shards" : 5 - } +{ + "cluster_name" : "kubernetes_logging", + "status" : "yellow", + "timed_out" : false, + "number_of_nodes" : 1, + "number_of_data_nodes" : 1, + "active_primary_shards" : 5, + "active_shards" : 5, + "relocating_shards" : 0, + "initializing_shards" : 0, + "unassigned_shards" : 5 +} ``` ### Using web browsers to access services running on the cluster diff --git a/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md b/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md index edf999f3caeb0..27a7d5fabdc7b 100644 --- a/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md +++ b/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md @@ -44,7 +44,7 @@ directory of the nginx server. Create the Pod and the two Containers: - kubectl create -f https://k8s.io/examples/pods/two-container-pod.yaml + kubectl apply -f https://k8s.io/examples/pods/two-container-pod.yaml View information about the Pod and the Containers: diff --git a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md index cb8b67e557cd0..55d5c6873d31b 100644 --- a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md +++ b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md @@ -2,6 +2,9 @@ title: Configure Access to Multiple Clusters content_template: templates/task weight: 30 +card: + name: tasks + weight: 40 --- @@ -251,22 +254,31 @@ The preceding configuration file defines a new context named `dev-ramp-up`. 
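Once a context such as `dev-ramp-up` exists in your configuration, you can switch to it with a single command. This is a minimal sketch; substitute whatever context name your configuration defines:

```shell
# Make kubectl use the dev-ramp-up context defined in the configuration file.
kubectl config use-context dev-ramp-up
```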
See whether you have an environment variable named `KUBECONFIG`. If so, save the current value of your `KUBECONFIG` environment variable, so you can restore it later. -For example, on Linux: +For example: +### Linux ```shell export KUBECONFIG_SAVED=$KUBECONFIG ``` - +### Windows PowerShell +```shell +$Env:KUBECONFIG_SAVED=$Env:KUBECONFIG +``` The `KUBECONFIG` environment variable is a list of paths to configuration files. The list is colon-delimited for Linux and Mac, and semicolon-delimited for Windows. If you have a `KUBECONFIG` environment variable, familiarize yourself with the configuration files in the list. -Temporarily append two paths to your `KUBECONFIG` environment variable. For example, on Linux: +Temporarily append two paths to your `KUBECONFIG` environment variable. For example:
+### Linux ```shell export KUBECONFIG=$KUBECONFIG:config-demo:config-demo-2 ``` +### Windows PowerShell +```shell +$Env:KUBECONFIG=("config-demo;config-demo-2") +``` In your `config-exercise` directory, enter this command: @@ -320,11 +332,16 @@ familiarize yourself with the contents of these files. If you have a `$HOME/.kube/config` file, and it's not already listed in your `KUBECONFIG` environment variable, append it to your `KUBECONFIG` environment variable now. -For example, on Linux: +For example: +### Linux ```shell export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config ``` +### Windows PowerShell +```shell +$Env:KUBECONFIG="$Env:KUBECONFIG;$HOME/.kube/config" +``` View configuration information merged from all the files that are now listed in your `KUBECONFIG` environment variable. In your config-exercise directory, enter: @@ -335,11 +352,15 @@ kubectl config view ## Clean up -Return your `KUBECONFIG` environment variable to its original value. For example, on Linux: - +Return your `KUBECONFIG` environment variable to its original value. For example:
+Linux: ```shell export KUBECONFIG=$KUBECONFIG_SAVED ``` +Windows PowerShell +```shell + $Env:KUBECONFIG=$ENV:KUBECONFIG_SAVED +``` {{% /capture %}} diff --git a/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md b/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md index a447e85d16d10..3cb90383c7abb 100644 --- a/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md +++ b/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md @@ -8,14 +8,15 @@ weight: 70 This task shows how to create a frontend and a backend microservice. The backend microservice is a hello greeter. The -frontend and backend are connected using a Kubernetes Service object. +frontend and backend are connected using a Kubernetes +{{< glossary_tooltip term_id="service" >}} object. {{% /capture %}} {{% capture objectives %}} -* Create and run a microservice using a Deployment object. +* Create and run a microservice using a {{< glossary_tooltip term_id="deployment" >}} object. * Route traffic to the backend using a frontend. * Use a Service object to connect the frontend application to the backend application. @@ -47,13 +48,13 @@ file for the backend Deployment: Create the backend Deployment: -``` -kubectl create -f https://k8s.io/examples/service/access/hello.yaml +```shell +kubectl apply -f https://k8s.io/examples/service/access/hello.yaml ``` View information about the backend Deployment: -``` +```shell kubectl describe deployment hello ``` @@ -99,7 +100,8 @@ Events: The key to connecting a frontend to a backend is the backend Service. A Service creates a persistent IP address and DNS name entry so that the backend microservice can always be reached. A Service uses -selector labels to find the Pods that it routes traffic to. +{{< glossary_tooltip text="selectors" term_id="selector" >}} to find +the Pods that it routes traffic to. First, explore the Service configuration file: @@ -110,8 +112,8 @@ that have the labels `app: hello` and `tier: backend`. Create the `hello` Service: -``` -kubectl create -f https://k8s.io/examples/service/access/hello-service.yaml +```shell +kubectl apply -f https://k8s.io/examples/service/access/hello-service.yaml ``` At this point, you have a backend Deployment running, and you have a @@ -137,8 +139,8 @@ the Service uses the default load balancer of your cloud provider. Create the frontend Deployment and Service: -``` -kubectl create -f https://k8s.io/examples/service/access/frontend.yaml +```shell +kubectl apply -f https://k8s.io/examples/service/access/frontend.yaml ``` The output verifies that both resources were created: @@ -161,7 +163,7 @@ so that you can change the configuration more easily. Once you’ve created a Service of type LoadBalancer, you can use this command to find the external IP: -``` +```shell kubectl get service frontend --watch ``` @@ -169,16 +171,16 @@ This displays the configuration for the `frontend` Service and watches for changes. 
Initially, the external IP is listed as ``: ``` -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -frontend ClusterIP 10.51.252.116 80/TCP 10s +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +frontend LoadBalancer 10.51.252.116 80/TCP 10s ``` As soon as an external IP is provisioned, however, the configuration updates to include the new IP under the `EXTERNAL-IP` heading: ``` -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -frontend ClusterIP 10.51.252.116 XXX.XXX.XXX.XXX 80/TCP 1m +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +frontend LoadBalancer 10.51.252.116 XXX.XXX.XXX.XXX 80/TCP 1m ``` That IP can now be used to interact with the `frontend` service from outside the @@ -189,8 +191,8 @@ cluster. The frontend and backends are now connected. You can hit the endpoint by using the curl command on the external IP of your frontend Service. -``` -curl http:// +```shell +curl http://${EXTERNAL_IP} # replace this with the EXTERNAL-IP you saw earlier ``` The output shows the message generated by the backend: diff --git a/content/en/docs/tasks/access-application-cluster/create-external-load-balancer.md b/content/en/docs/tasks/access-application-cluster/create-external-load-balancer.md index 22c601d3d1aa3..b8a5b1c35280d 100644 --- a/content/en/docs/tasks/access-application-cluster/create-external-load-balancer.md +++ b/content/en/docs/tasks/access-application-cluster/create-external-load-balancer.md @@ -162,7 +162,7 @@ Service Configuration file. ### Feature availability -| k8s version | Feature support | +| K8s version | Feature support | | :---------: |:-----------:| | 1.7+ | Supports the full API fields | | 1.5 - 1.6 | Supports Beta Annotations | diff --git a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md new file mode 100644 index 0000000000000..a810603f19aa2 --- /dev/null +++ b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md @@ -0,0 +1,292 @@ +--- +title: Set up Ingress on Minikube with the NGINX Ingress Controller +content_template: templates/task +weight: 100 +--- + +{{% capture overview %}} + +An [Ingress](/docs/concepts/services-networking/ingress/) is an API object that defines rules which allow external access +to services in a cluster. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers/) fulfills the rules set in the Ingress. + +{{< caution >}} +For the Ingress resource to work, the cluster **must** also have an Ingress controller running. +{{< /caution >}} + +This page shows you how to set up a simple Ingress which routes requests to Service web or web2 depending on the HTTP URI. + +{{% /capture %}} + +{{% capture prerequisites %}} + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +{{% /capture %}} + +{{% capture steps %}} + +## Create a Minikube cluster + +1. Click **Launch Terminal** + + {{< kat-button >}} + +1. (Optional) If you installed Minikube locally, run the following command: + + ```shell + minikube start + ``` + +## Enable the Ingress controller + +1. To enable the NGINX Ingress controller, run the following command: + + ```shell + minikube addons enable ingress + ``` + +1. 
Verify that the NGINX Ingress controller is running + + ```shell + kubectl get pods -n kube-system + ``` + + {{< note >}}This can take up to a minute.{{< /note >}} + + Output: + + ```shell + NAME READY STATUS RESTARTS AGE + default-http-backend-59868b7dd6-xb8tq 1/1 Running 0 1m + kube-addon-manager-minikube 1/1 Running 0 3m + kube-dns-6dcb57bcc8-n4xd4 3/3 Running 0 2m + kubernetes-dashboard-5498ccf677-b8p5h 1/1 Running 0 2m + nginx-ingress-controller-5984b97644-rnkrg 1/1 Running 0 1m + storage-provisioner 1/1 Running 0 2m + ``` + +## Deploy a hello, world app + +1. Create a Deployment using the following command: + + ```shell + kubectl run web --image=gcr.io/google-samples/hello-app:1.0 --port=8080 + ``` + + Output: + + ```shell + deployment.apps/web created + ``` + +1. Expose the Deployment: + + ```shell + kubectl expose deployment web --target-port=8080 --type=NodePort + ``` + + Output: + + ```shell + service/web exposed + ``` + +1. Verify the Service is created and is available on a node port: + + ```shell + kubectl get service web + ``` + + Output: + + ```shell + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + web NodePort 10.104.133.249 8080:31637/TCP 12m + ``` + +1. Visit the service via NodePort: + + ```shell + minikube service web --url + ``` + + Output: + + ```shell + http://172.17.0.15:31637 + ``` + + {{< note >}}Katacoda environment only: at the top of the terminal panel, click the plus sign, and then click **Select port to view on Host 1**. Enter the NodePort, in this case `31637`, and then click **Display Port**.{{< /note >}} + + Output: + + ```shell + Hello, world! + Version: 1.0.0 + Hostname: web-55b8c6998d-8k564 + ``` + + You can now access the sample app via the Minikube IP address and NodePort. The next step lets you access + the app using the Ingress resource. + +## Create an Ingress resource + +The following file is an Ingress resource that sends traffic to your Service via hello-world.info. + +1. Create `example-ingress.yaml` from the following file: + + ```yaml + --- + apiVersion: extensions/v1beta1 + kind: Ingress + metadata: + name: example-ingress + annotations: + nginx.ingress.kubernetes.io/rewrite-target: / + spec: + rules: + - host: hello-world.info + http: + paths: + - path: /* + backend: + serviceName: web + servicePort: 8080 + ``` + +1. Create the Ingress resource by running the following command: + + ```shell + kubectl apply -f example-ingress.yaml + ``` + + Output: + + ```shell + ingress.extensions/example-ingress created + ``` + +1. Verify the IP address is set: + + ```shell + kubectl get ingress + ``` + + {{< note >}}This can take a couple of minutes.{{< /note >}} + + ```shell + NAME HOSTS ADDRESS PORTS AGE + example-ingress hello-world.info 172.17.0.15 80 38s + ``` + +1. Add the following line to the bottom of the `/etc/hosts` file. + + ``` + 172.17.0.15 hello-world.info + ``` + + This sends requests from hello-world.info to Minikube. + +1. Verify that the Ingress controller is directing traffic: + + ```shell + curl hello-world.info + ``` + + Output: + + ```shell + Hello, world! + Version: 1.0.0 + Hostname: web-55b8c6998d-8k564 + ``` + + {{< note >}}If you are running Minikube locally, you can visit hello-world.info from your browser.{{< /note >}} + +## Create Second Deployment + +1. Create a v2 Deployment using the following command: + + ```shell + kubectl run web2 --image=gcr.io/google-samples/hello-app:2.0 --port=8080 + ``` + Output: + + ```shell + deployment.apps/web2 created + ``` + +1. 
Expose the Deployment: + + ```shell + kubectl expose deployment web2 --target-port=8080 --type=NodePort + ``` + + Output: + + ```shell + service/web2 exposed + ``` + +## Edit Ingress + +1. Edit the existing `example-ingress.yaml` and add the following lines: + + ```yaml + - path: /v2/* + backend: + serviceName: web2 + servicePort: 8080 + ``` + +1. Apply the changes: + + ```shell + kubectl apply -f example-ingress.yaml + ``` + + Output: + ```shell + ingress.extensions/example-ingress configured + ``` + +## Test Your Ingress + +1. Access the 1st version of the Hello World app. + + ```shell + curl hello-world.info + ``` + + Output: + ```shell + Hello, world! + Version: 1.0.0 + Hostname: web-55b8c6998d-8k564 + ``` + +1. Access the 2nd version of the Hello World app. + + ```shell + curl hello-world.info/v2 + ``` + + Output: + ```shell + Hello, world! + Version: 2.0.0 + Hostname: web2-75cd47646f-t8cjk + ``` + + {{< note >}}If you are running Minikube locally, you can visit hello-world.info and hello-world.info/v2 from your browser.{{< /note >}} + +{{% /capture %}} + + +{{% capture whatsnext %}} +* Read more about [Ingress](/docs/concepts/services-networking/ingress/) +* Read more about [Ingress Controllers](/docs/concepts/services-networking/ingress-controllers/) +* Read more about [Services](/docs/concepts/services-networking/service/) + +{{% /capture %}} + diff --git a/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md b/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md index a303b9a8780b6..f05ee8297ba83 100644 --- a/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md +++ b/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md @@ -28,7 +28,7 @@ for database debugging. 1. Create a Redis deployment: - kubectl create -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml + kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml The output of a successful command verifies that the deployment was created: @@ -64,7 +64,7 @@ for database debugging. 2. Create a Redis service: - kubectl create -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml + kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml The output of a successful command verifies that the service was created: diff --git a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md index ceea70165fec8..f51eac0f71244 100644 --- a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md +++ b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md @@ -6,6 +6,10 @@ reviewers: title: Web UI (Dashboard) content_template: templates/concept weight: 10 +card: + name: tasks + weight: 30 + title: Use the Web UI Dashboard --- {{% capture overview %}} @@ -26,7 +30,7 @@ Dashboard also provides information on the state of Kubernetes resources in your The Dashboard UI is not deployed by default. 
To deploy it, run the following command: ``` -kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml +kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml ``` ## Accessing the Dashboard UI diff --git a/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md b/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md index 225951dc64145..a66be5a1add79 100644 --- a/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md +++ b/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md @@ -22,13 +22,173 @@ Configuring the [aggregation layer](/docs/concepts/extend-kubernetes/api-extensi There are a few setup requirements for getting the aggregation layer working in your environment to support mutual TLS auth between the proxy and extension apiservers. Kubernetes and the kube-apiserver have multiple CAs, so make sure that the proxy is signed by the aggregation layer CA and not by something else, like the master CA. {{< /note >}} +{{% /capture %}} + +{{% capture authflow %}} + +## Authentication Flow + +Unlike Custom Resource Definitions (CRDs), the Aggregation API involves another server - your Extension apiserver - in addition to the standard Kubernetes apiserver. The Kubernetes apiserver will need to communicate with your extension apiserver, and your extension apiserver will need to communicate with the Kubernetes apiserver. In order for this communication to be secured, the Kubernetes apiserver uses x509 certificates to authenticate itself to the extension apiserver. + +This section describes how the authentication and authorization flows work, and how to configure them. + +The high-level flow is as follows: + +1. Kubernetes apiserver: authenticate the requesting user and authorize their rights to the requested API path. +2. Kubernetes apiserver: proxy the request to the extension apiserver +3. Extension apiserver: authenticate the request from the Kubernetes apiserver +4. Extension apiserver: authorize the request from the original user +5. Extension apiserver: execute +The rest of this section describes these steps in detail. + +The flow can be seen in the following diagram. + +![aggregation auth flows](/images/docs/aggregation-api-auth-flow.png) + +The source for the above swimlanes can be found in the source of this document. + + + +### Kubernetes Apiserver Authentication and Authorization + +A request to an API path that is served by an extension apiserver begins the same way as all API requests: communication to the Kubernetes apiserver. This path already has been registered with the Kubernetes apiserver by the extension apiserver. + +The user communicates with the Kubernetes apiserver, requesting access to the path. The Kubernetes apiserver uses standard authentication and authorization configured with the Kubernetes apiserver to authenticate the user and authorize access to the specific path. + +For an overview of authenticating to a Kubernetes cluster, see ["Authenticating to a Cluster"](/docs/reference/access-authn-authz/authentication/). For an overview of authorization of access to Kubernetes cluster resources, see ["Authorization Overview"](/docs/reference/access-authn-authz/authorization/). + +Everything to this point has been standard Kubernetes API requests, authentication and authorization. 
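If you want to see which API paths are currently registered by extension apiservers in your cluster, you can list the APIService objects. This is a sketch; the exact entries depend on what is installed in your cluster:

```shell
# List registered APIService objects. Entries that reference a Service
# (rather than showing "Local") are served by extension apiservers.
kubectl get apiservices
```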
+ +The Kubernetes apiserver now is prepared to send the request to the extension apiserver. + +### Kubernetes Apiserver Proxies the Request + +The Kubernetes apiserver now will send, or proxy, the request to the extension apiserver that registered to handle the request. In order to do so, it needs to know several things: + +1. How should the Kubernetes apiserver authenticate to the extension apiserver, informing the extension apiserver that the request, which comes over the network, is coming from a valid Kubernetes apiserver? +2. How should the Kubernetes apiserver inform the extension apiserver of the username and group for which the original request was authenticated? + +In order to provide for these two, you must configure the Kubernetes apiserver using several flags. + +#### Kubernetes Apiserver Client Authentication + +The Kubernetes apiserver connects to the extension apiserver over TLS, authenticating itself using a client certificate. You must provide the following to the Kubernetes apiserver upon startup, using the provided flags: + +* private key file via `--proxy-client-key-file` +* signed client certificate file via `--proxy-client-cert-file` +* certificate of the CA that signed the client certificate file via `--requestheader-client-ca-file` +* valid Common Names (CN) in the signed client certificate via `--requestheader-allowed-names` + +The Kubernetes apiserver will use the files indicated by `--proxy-client-*-file` to authenticate to the extension apiserver. In order for the request to be considered valid by a compliant extension apiserver, the following conditions must be met: + +1. The connection must be made using a client certificate that is signed by the CA whose certificate is in `--requestheader-client-ca-file`. +2. The connection must be made using a client certificate whose CN is one of those listed in `--requestheader-allowed-names`. **Note:** You can set this option to blank as `--requestheader-allowed-names=""`. This will indicate to an extension apiserver that _any_ CN is acceptable. + +When started with these options, the Kubernetes apiserver will: + +1. Use them to authenticate to the extension apiserver. +2. Create a configmap in the `kube-system` namespace called `extension-apiserver-authentication`, in which it will place the CA certificate and the allowed CNs. These in turn can be retrieved by extension apiservers to validate requests. + +Note that the same client certificate is used by the Kubernetes apiserver to authenticate against _all_ extension apiservers. It does not create a client certificate per extension apiserver, but rather a single one to authenticate as the Kubernetes apiserver. This same one is reused for all extension apiserver requests. + +#### Original Request Username and Group + +When the Kubernetes apiserver proxies the request to the extension apiserver, it informs the extension apiserver of the username and group with which the original request successfully authenticated. It provides these in http headers of its proxied request. You must inform the Kubernetes apiserver of the names of the headers to be used. + +* the header in which to store the username via `--requestheader-username-headers` +* the header in which to store the group via `--requestheader-group-headers` +* the prefix to append to all extra headers via `--requestheader-extra-headers-prefix` + +These header names are also placed in the `extension-apiserver-authentication` configmap, so they can be retrieved and used by extension apiservers. 
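To see what the Kubernetes apiserver has published for extension apiservers to consume, you can read that configmap directly. This assumes you have permission to read configmaps in the `kube-system` namespace:

```shell
# Show the client CA certificate, allowed names, and request header names
# published by the Kubernetes apiserver for extension apiservers.
kubectl get configmap extension-apiserver-authentication -n kube-system -o yaml
```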
+ +### Extension Apiserver Authenticates the Request + +The extension apiserver, upon receiving a proxied request from the Kubernetes apiserver, must validate that the request actually did come from a valid authenticating proxy, which role the Kubernetes apiserver is fulfilling. The extension apiserver validates it via: + +1. Retrieve the following from the configmap in `kube-system`, as described above: + * Client CA certificate + * List of allowed names (CNs) + * Header names for username, group and extra info +2. Check that the TLS connection was authenticated using a client certificate which: + * Was signed by the CA whose certificate matches the retrieved CA certificate. + * Has a CN in the list of allowed CNs, unless the list is blank, in which case all CNs are allowed. + * Extract the username and group from the appropriate headers + +If the above passes, then the request is a valid proxied request from a legitimate authenticating proxy, in this case the Kubernetes apiserver. + +Note that it is the responsibility of the extension apiserver implementation to provide the above. Many do it by default, leveraging the `k8s.io/apiserver/` package. Others may provide options to override it using command-line options. + +In order to have permission to retrieve the configmap, an extension apiserver requires the appropriate role. There is a default role named `extension-apiserver-authentication-reader` in the `kube-system` namespace which can be assigned. + +### Extension Apiserver Authorizes the Request + +The extension apiserver now can validate that the user/group retrieved from the headers are authorized to execute the given request. It does so by sending a standard [SubjectAccessReview](/docs/reference/access-authn-authz/authorization/) request to the Kubernetes apiserver. + +In order for the extension apiserver to be authorized itself to submit the `SubjectAccessReview` request to the Kubernetes apiserver, it needs the correct permissions. Kubernetes includes a default `ClusterRole` named `system:auth-delegator` that has the appropriate permissions. It can be granted to the extension apiserver's service account. + +### Extension Apiserver Executes + +If the `SubjectAccessReview` passes, the extension apiserver executes the request. + + {{% /capture %}} {{% capture steps %}} -## Enable apiserver flags +## Enable Kubernetes Apiserver flags -Enable the aggregation layer via the following kube-apiserver flags. They may have already been taken care of by your provider. +Enable the aggregation layer via the following `kube-apiserver` flags. They may have already been taken care of by your provider. --requestheader-client-ca-file= --requestheader-allowed-names=front-proxy-client @@ -42,7 +202,7 @@ Enable the aggregation layer via the following kube-apiserver flags. They may ha Do **not** reuse a CA that is used in a different context unless you understand the risks and the mechanisms to protect the CA's usage. 
{{< /warning >}} -If you are not running kube-proxy on a host running the API server, then you must make sure that the system is enabled with the following apiserver flag: +If you are not running kube-proxy on a host running the API server, then you must make sure that the system is enabled with the following `kube-apiserver` flag: --enable-aggregator-routing=true @@ -56,5 +216,3 @@ If you are not running kube-proxy on a host running the API server, then you mus {{% /capture %}} - - diff --git a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md index 034200a1e46e9..8506ecbe053df 100644 --- a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md +++ b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md @@ -87,10 +87,10 @@ spec: ``` You can save the CustomResourceDefinition in a YAML file, then use -`kubectl create` to create it. +`kubectl apply` to create it. ```shell -kubectl create -f my-versioned-crontab.yaml +kubectl apply -f my-versioned-crontab.yaml ``` After creation, the API server starts to serve each enabled version at an HTTP @@ -110,7 +110,7 @@ same way that the Kubernetes project sorts Kubernetes versions. Versions start w `v` followed by a number, an optional `beta` or `alpha` designation, and optional additional numeric versioning information. Broadly, a version string might look like `v2` or `v2beta1`. Versions are sorted using the following algorithm: - + - Entries that follow Kubernetes version patterns are sorted before those that do not. - For entries that follow Kubernetes version patterns, the numeric portions of @@ -185,7 +185,7 @@ how to [authenticate API servers](/docs/reference/access-authn-authz/extensible- ### Deploy the conversion webhook service Documentation for deploying the conversion webhook is the same as for the [admission webhook example service](/docs/reference/access-authn-authz/extensible-admission-controllers/#deploy_the_admission_webhook_service). -The assumption for next sections is that the conversion webhook server is deployed to a service named `example-conversion-webhook-server` in `default` namespace. +The assumption for next sections is that the conversion webhook server is deployed to a service named `example-conversion-webhook-server` in `default` namespace and serving traffic on path `/crdconvert`. {{< note >}} When the webhook server is deployed into the Kubernetes cluster as a @@ -242,6 +242,8 @@ spec: service: namespace: default name: example-conversion-webhook-server + # path is the url the API server will call. It should match what the webhook is serving at. The default is '/'. 
+ path: /crdconvert caBundle: # either Namespaced or Cluster scope: Namespaced diff --git a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md index fddff0fcbe88d..daccc86fc89cb 100644 --- a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md +++ b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md @@ -70,7 +70,7 @@ spec: And create it: ```shell -kubectl create -f resourcedefinition.yaml +kubectl apply -f resourcedefinition.yaml ``` Then a new namespaced RESTful API endpoint is created at: @@ -112,7 +112,7 @@ spec: and create it: ```shell -kubectl create -f my-crontab.yaml +kubectl apply -f my-crontab.yaml ``` You can then manage your CronTab objects using kubectl. For example: @@ -288,7 +288,7 @@ spec: And create it: ```shell -kubectl create -f resourcedefinition.yaml +kubectl apply -f resourcedefinition.yaml ``` A request to create a custom object of kind `CronTab` will be rejected if there are invalid values in its fields. @@ -313,7 +313,7 @@ spec: and create it: ```shell -kubectl create -f my-crontab.yaml +kubectl apply -f my-crontab.yaml ``` you will get an error: @@ -343,10 +343,62 @@ spec: And create it: ```shell -kubectl create -f my-crontab.yaml +kubectl apply -f my-crontab.yaml crontab "my-new-cron-object" created ``` +### Publish Validation Schema in OpenAPI v2 + +{{< feature-state state="alpha" for_kubernetes_version="1.14" >}} + +Starting with Kubernetes 1.14, [custom resource validation schema](#validation) can be published as part +of [OpenAPI v2 spec](/docs/concepts/overview/kubernetes-api/#openapi-and-swagger-definitions) from +Kubernetes API server. + +[kubectl](/docs/reference/kubectl/overview) consumes the published schema to perform client-side validation +(`kubectl create` and `kubectl apply`), schema explanation (`kubectl explain`) on custom resources. +The published schema can be consumed for other purposes. The feature is Alpha in 1.14 and disabled by default. +You can enable the feature using the `CustomResourcePublishOpenAPI` feature gate on the +[kube-apiserver](/docs/admin/kube-apiserver): + +``` +--feature-gates=CustomResourcePublishOpenAPI=true +``` + +Custom resource validation schema will be converted to OpenAPI v2 schema, and +show up in `definitions` and `paths` fields in the [OpenAPI v2 spec](/docs/concepts/overview/kubernetes-api/#openapi-and-swagger-definitions). +The following modifications are applied during the conversion to keep backwards compatiblity with +kubectl in previous 1.13 version. These modifications prevent kubectl from being over-strict and rejecting +valid OpenAPI schemas that it doesn't understand. The conversion won't modify the validation schema defined in CRD, +and therefore won't affect [validation](#validation) in the API server. + +1. The following fields are removed as they aren't supported by OpenAPI v2 (in future versions OpenAPI v3 will be used without these restrictions) + - The fields `oneOf`, `anyOf` and `not` are removed +2. 
The following fields are removed as they aren't allowed by kubectl in + previous 1.13 version + - For a schema with a `$ref` + - the fields `properties` and `type` are removed + - if the `$ref` is outside of the `definitions`, the field `$ref` is removed + - For a schema of a primitive data type (which means the field `type` has two elements: one type and one format) + - if any one of the two elements is `null`, the field `type` is removed + - otherwise, the fields `type` and `properties` are removed + - For a schema of more than two types + - the fields `type` and `properties` are removed + - For a schema of `null` type + - the field `type` is removed + - For a schema of `array` type + - if the schema doesn't have exactly one item, the fields `type` and `items` are + removed + - For a schema with no type specified + - the field `properties` is removed +3. The following fields are removed as they aren't supported by the OpenAPI protobuf implementation + - The fields `id`, `schema`, `definitions`, `additionalItems`, `dependencies`, + and `patternProperties` are removed + - For a schema with a `externalDocs` + - if the `externalDocs` has `url` defined, the field `externalDocs` is removed + - For a schema with `items` defined + - if the field `items` has multiple schemas, the field `items` is removed + ### Additional printer columns Starting with Kubernetes 1.11, kubectl uses server-side printing. The server decides which @@ -387,7 +439,7 @@ columns. 2. Create the CustomResourceDefinition: ```shell - kubectl create -f resourcedefinition.yaml + kubectl apply -f resourcedefinition.yaml ``` 3. Create an instance using the `my-crontab.yaml` from the previous section. @@ -560,7 +612,7 @@ spec: And create it: ```shell -kubectl create -f resourcedefinition.yaml +kubectl apply -f resourcedefinition.yaml ``` After the CustomResourceDefinition object has been created, you can create custom objects. @@ -581,7 +633,7 @@ spec: and create it: ```shell -kubectl create -f my-crontab.yaml +kubectl apply -f my-crontab.yaml ``` Then new namespaced RESTful API endpoints are created at: @@ -645,7 +697,7 @@ spec: And create it: ```shell -kubectl create -f resourcedefinition.yaml +kubectl apply -f resourcedefinition.yaml ``` After the CustomResourceDefinition object has been created, you can create custom objects. 
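Before creating custom objects, you can confirm that the definition was accepted by the API server. This is a minimal sketch; the name in your output corresponds to the `metadata.name` from your resourcedefinition.yaml:

```shell
# List CustomResourceDefinitions; the definition created above should appear here.
kubectl get crd
```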
@@ -665,7 +717,7 @@ spec: and create it: ```shell -kubectl create -f my-crontab.yaml +kubectl apply -f my-crontab.yaml ``` You can specify the category using `kubectl get`: diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-api.md b/content/en/docs/tasks/administer-cluster/access-cluster-api.md index 55b84d5daab64..16434ca0a4563 100644 --- a/content/en/docs/tasks/administer-cluster/access-cluster-api.md +++ b/content/en/docs/tasks/administer-cluster/access-cluster-api.md @@ -87,12 +87,15 @@ directly to the API server, like this: Using `grep/cut` approach: -``` shell -# Check all possible clusters, as you .KUBECONFIG may have multiple contexts -kubectl config view -o jsonpath='{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}' +```shell +# Check all possible clusters, as you .KUBECONFIG may have multiple contexts: +kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}' + +# Select name of cluster you want to interact with from above output: +export CLUSTER_NAME="some_server_name" # Point to the API server refering the cluster name -APISERVER=$(kubectl config view -o jsonpath='{.clusters[?(@.name=="$CLUSTER_NAME")].cluster.server}') +APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}") # Gets the token value TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}"|base64 -d) diff --git a/content/en/docs/tasks/administer-cluster/cluster-management.md b/content/en/docs/tasks/administer-cluster/cluster-management.md index 2908666dcbb3b..3533a0a22313f 100644 --- a/content/en/docs/tasks/administer-cluster/cluster-management.md +++ b/content/en/docs/tasks/administer-cluster/cluster-management.md @@ -65,6 +65,10 @@ Google Kubernetes Engine automatically updates master components (e.g. `kube-api The node upgrade process is user-initiated and is described in the [Google Kubernetes Engine documentation](https://cloud.google.com/kubernetes-engine/docs/clusters/upgrade). +### Upgrading an Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) cluster + +Oracle creates and manages a set of master nodes in the Oracle control plane on your behalf (and associated Kubernetes infrastructure such as etcd nodes) to ensure you have a highly available managed Kubernetes control plane. You can also seamlessly upgrade these master nodes to new versions of Kubernetes with zero downtime. These actions are described in the [OKE documentation](https://docs.cloud.oracle.com/iaas/Content/ContEng/Tasks/contengupgradingk8smasternode.htm). + ### Upgrading clusters on other platforms Different providers, and tools, will manage upgrades differently. It is recommended that you consult their main documentation regarding upgrades. diff --git a/content/en/docs/tasks/administer-cluster/configure-multiple-schedulers.md b/content/en/docs/tasks/administer-cluster/configure-multiple-schedulers.md index b3a2c5d311acd..fe835c684d4a5 100644 --- a/content/en/docs/tasks/administer-cluster/configure-multiple-schedulers.md +++ b/content/en/docs/tasks/administer-cluster/configure-multiple-schedulers.md @@ -96,7 +96,9 @@ kubectl create -f my-scheduler.yaml Verify that the scheduler pod is running: ```shell -$ kubectl get pods --namespace=kube-system +kubectl get pods --namespace=kube-system +``` +``` NAME READY STATUS RESTARTS AGE .... 
my-scheduler-lnf4s-4744f 1/1 Running 0 2m @@ -116,7 +118,9 @@ First, update the following fields in your YAML file: If RBAC is enabled on your cluster, you must update the `system:kube-scheduler` cluster role. Add your scheduler name to the resourceNames of the rule applied for endpoints resources, as in the following example: ``` -$ kubectl edit clusterrole system:kube-scheduler +kubectl edit clusterrole system:kube-scheduler +``` +```yaml - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: diff --git a/content/en/docs/tasks/administer-cluster/declare-network-policy.md b/content/en/docs/tasks/administer-cluster/declare-network-policy.md index 164bb358c20d1..9579dac9980a0 100644 --- a/content/en/docs/tasks/administer-cluster/declare-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/declare-network-policy.md @@ -110,7 +110,7 @@ spec: Use kubectl to create a NetworkPolicy from the above nginx-policy.yaml file: ```console -kubectl create -f nginx-policy.yaml +kubectl apply -f nginx-policy.yaml ``` ```none diff --git a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md index c521b0291d43b..8dc097b2d2525 100644 --- a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md +++ b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md @@ -27,7 +27,7 @@ Create a file named busybox.yaml with the following contents: Then create a pod using this file and verify its status: ```shell -kubectl create -f https://k8s.io/examples/admin/dns/busybox.yaml +kubectl apply -f https://k8s.io/examples/admin/dns/busybox.yaml pod/busybox created kubectl get pods busybox diff --git a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md index eaded418130ea..487a2a0fe345e 100644 --- a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md +++ b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md @@ -73,14 +73,14 @@ If you have a DNS Deployment, your scale target is: Deployment/ -where is the name of your DNS Deployment. For example, if +where `` is the name of your DNS Deployment. For example, if your DNS Deployment name is coredns, your scale target is Deployment/coredns. If you have a DNS ReplicationController, your scale target is: ReplicationController/ -where is the name of your DNS ReplicationController. For example, +where `` is the name of your DNS ReplicationController. For example, if your DNS ReplicationController name is kube-dns-v20, your scale target is ReplicationController/kube-dns-v20. @@ -98,7 +98,7 @@ In the file, replace `` with your scale target. Go to the directory that contains your configuration file, and enter this command to create the Deployment: - kubectl create -f dns-horizontal-autoscaler.yaml + kubectl apply -f dns-horizontal-autoscaler.yaml The output of a successful command is: @@ -145,7 +145,7 @@ There are other supported scaling patterns. For details, see ## Disable DNS horizontal autoscaling -There are a few options for turning DNS horizontal autoscaling. Which option to +There are a few options for tuning DNS horizontal autoscaling. Which option to use depends on different conditions. 
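For instance, the first option described below comes down to a single scale command. This is a sketch that assumes the autoscaler runs as the `dns-autoscaler` Deployment in the `kube-system` namespace; adjust the names for your cluster:

```shell
# Stop DNS horizontal autoscaling by scaling the autoscaler itself to zero.
# The kube-system namespace is an assumption; change it if yours differs.
kubectl scale deployment --replicas=0 dns-autoscaler --namespace=kube-system
```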
### Option 1: Scale down the dns-autoscaler deployment to 0 replicas @@ -183,8 +183,8 @@ The output is: ### Option 3: Delete the dns-autoscaler manifest file from the master node This option works if dns-autoscaler is under control of the -[Addon Manager](https://git.k8s.io/kubernetes/cluster/addons/README.md)'s -control, and you have write access to the master node. +[Addon Manager](https://git.k8s.io/kubernetes/cluster/addons/README.md), +and you have write access to the master node. Sign in to the master node and delete the corresponding manifest file. The common path for this dns-autoscaler is: @@ -238,6 +238,3 @@ is under consideration as a future development. Learn more about the [implementation of cluster-proportional-autoscaler](https://github.com/kubernetes-incubator/cluster-proportional-autoscaler). {{% /capture %}} - - - diff --git a/content/en/docs/tasks/administer-cluster/highly-available-master.md b/content/en/docs/tasks/administer-cluster/highly-available-master.md index 598339a3b15e2..192eca3c93ff8 100644 --- a/content/en/docs/tasks/administer-cluster/highly-available-master.md +++ b/content/en/docs/tasks/administer-cluster/highly-available-master.md @@ -42,7 +42,7 @@ Set the following flag: The following sample command sets up a HA-compatible cluster in the GCE zone europe-west1-b: ```shell -$ MULTIZONE=true KUBE_GCE_ZONE=europe-west1-b ENABLE_ETCD_QUORUM_READS=true ./cluster/kube-up.sh +MULTIZONE=true KUBE_GCE_ZONE=europe-west1-b ENABLE_ETCD_QUORUM_READS=true ./cluster/kube-up.sh ``` Note that the commands above create a cluster with one master; @@ -65,7 +65,7 @@ as those are inherited from when you started your HA-compatible cluster. The following sample command replicates the master on an existing HA-compatible cluster: ```shell -$ KUBE_GCE_ZONE=europe-west1-c KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up.sh +KUBE_GCE_ZONE=europe-west1-c KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up.sh ``` ## Removing a master replica @@ -82,7 +82,7 @@ If empty: any replica from the given zone will be removed. The following sample command removes a master replica from an existing HA cluster: ```shell -$ KUBE_DELETE_NODES=false KUBE_GCE_ZONE=europe-west1-c ./cluster/kube-down.sh +KUBE_DELETE_NODES=false KUBE_GCE_ZONE=europe-west1-c ./cluster/kube-down.sh ``` ## Handling master replica failures @@ -94,13 +94,13 @@ The following sample commands demonstrate this process: 1. Remove the broken replica: ```shell -$ KUBE_DELETE_NODES=false KUBE_GCE_ZONE=replica_zone KUBE_REPLICA_NAME=replica_name ./cluster/kube-down.sh +KUBE_DELETE_NODES=false KUBE_GCE_ZONE=replica_zone KUBE_REPLICA_NAME=replica_name ./cluster/kube-down.sh ```
  1. Add a new replica in place of the old one:
```shell -$ KUBE_GCE_ZONE=replica-zone KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up.sh +KUBE_GCE_ZONE=replica-zone KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up.sh ``` ## Best practices for replicating masters for HA clusters diff --git a/content/en/docs/tasks/administer-cluster/ip-masq-agent.md b/content/en/docs/tasks/administer-cluster/ip-masq-agent.md index cd7c90ac75751..60d53a53cdad8 100644 --- a/content/en/docs/tasks/administer-cluster/ip-masq-agent.md +++ b/content/en/docs/tasks/administer-cluster/ip-masq-agent.md @@ -61,7 +61,7 @@ By default, in GCE/Google Kubernetes Engine starting with Kubernetes version 1.7 To create an ip-masq-agent, run the following kubectl command: ` -kubectl create -f https://raw.githubusercontent.com/kubernetes-incubator/ip-masq-agent/master/ip-masq-agent.yaml +kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/ip-masq-agent/master/ip-masq-agent.yaml ` You must also apply the appropriate node label to any nodes in your cluster that you want the agent to run on. diff --git a/content/en/docs/tasks/administer-cluster/kms-provider.md b/content/en/docs/tasks/administer-cluster/kms-provider.md index 0b1d54be09436..b07f13f7ccafd 100644 --- a/content/en/docs/tasks/administer-cluster/kms-provider.md +++ b/content/en/docs/tasks/administer-cluster/kms-provider.md @@ -91,7 +91,7 @@ resources: endpoint: unix:///tmp/socketfile.sock cachesize: 100 timeout: 3s - - identity: {} + - identity: {} ``` 2. Set the `--encryption-provider-config` flag on the kube-apiserver to point to the location of the configuration file. diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md index 436b6ccfbd302..913e672ac01d0 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md @@ -23,7 +23,9 @@ You should be familiar with [PKI certificates and requirements in Kubernetes](/d ## Renew certificates with the certificates API -Kubeadm can renew certificates with the `kubeadm alpha certs renew` commands. +The Kubernetes certificates normally reach their expiration date after one year. + +Kubeadm can renew certificates with the `kubeadm alpha certs renew` commands; you should run these commands on control-plane nodes only. Typically this is done by loading on-disk CA certificates and keys and using them to issue new certificates. This approach works well if your certificate tree is self-contained. However, if your certificates are externally @@ -65,14 +67,18 @@ You pass these arguments in any of the following ways: ### Approve requests If you set up an external signer such as [cert-manager][cert-manager], certificate signing requests (CSRs) are automatically approved. -Otherwise, you must manually approve certificates with the [`kubectl certificates`][certs] command. +Otherwise, you must manually approve certificates with the [`kubectl certificate`][certs] command. 
The following kubeadm command outputs the name of the certificate to approve, then blocks and waits for approval to occur: ```shell -$ sudo kubeadm alpha certs renew apiserver --use-api & +sudo kubeadm alpha certs renew apiserver --use-api & +``` +``` [1] 2890 [certs] certificate request "kubeadm-cert-kube-apiserver-ld526" created -$ kubectl certificate approve kubeadm-cert-kube-apiserver-ld526 +``` +```shell +kubectl certificate approve kubeadm-cert-kube-apiserver-ld526 certificatesigningrequest.certificates.k8s.io/kubeadm-cert-kube-apiserver-ld526 approved [1]+ Done sudo kubeadm alpha certs renew apiserver --use-api ``` @@ -89,16 +95,16 @@ To better integrate with external CAs, kubeadm can also produce certificate sign A CSR represents a request to a CA for a signed certificate for a client. In kubeadm terms, any certificate that would normally be signed by an on-disk CA can be produced as a CSR instead. A CA, however, cannot be produced as a CSR. -You can create an individual CSR with `kubeadm init phase certs apiserver --use-csr`. -The `--use-csr` flag can be applied only to individual phases. After [all certificates are in place][certs], you can run `kubeadm init --external-ca`. +You can create an individual CSR with `kubeadm init phase certs apiserver --csr-only`. +The `--csr-only` flag can be applied only to individual phases. After [all certificates are in place][certs], you can run `kubeadm init --external-ca`. You can pass in a directory with `--csr-dir` to output the CSRs to the specified location. -If `--csr-dire` is not specified, the default certificate directory (`/etc/kubernetes/pki`) is used. +If `--csr-dir` is not specified, the default certificate directory (`/etc/kubernetes/pki`) is used. Both the CSR and the accompanying private key are given in the output. After a certificate is signed, the certificate and the private key must be copied to the PKI directory (by default `/etc/kubernetes/pki`). ### Renew certificates -Certificates can be renewed with `kubeadm alpha certs renew --use-csr`. +Certificates can be renewed with `kubeadm alpha certs renew --csr-only`. As with `kubeadm init`, an output directory can be specified with the `--csr-dir` flag. To use the new certificates, copy the signed certificate and private key into the PKI directory (by default `/etc/kubernetes/pki`) diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md deleted file mode 100644 index 6589c9cbfe170..0000000000000 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md +++ /dev/null @@ -1,279 +0,0 @@ ---- -reviewers: -- sig-cluster-lifecycle -title: Upgrading kubeadm clusters from v1.10 to v1.11 -content_template: templates/task ---- - -{{% capture overview %}} - -This page explains how to upgrade a Kubernetes cluster created with `kubeadm` from version 1.10.x to version 1.11.x, and from version 1.11.x to 1.11.y, where `y > x`. - -{{% /capture %}} - -{{% capture prerequisites %}} - -- You need to have a `kubeadm` Kubernetes cluster running version 1.10.0 or later. Swap must be disabled. The cluster should use a static control plane and etcd pods. -- Make sure you read the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md) carefully. -- Make sure to back up any important components, such as app-level state stored in a database. 
`kubeadm upgrade` does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice. - -### Additional information - -- All containers are restarted after upgrade, because the container spec hash value is changed. -- You can upgrade only from one minor version to the next minor version. That is, you cannot skip versions when you upgrade. For example, you can upgrade only from 1.10 to 1.11, not from 1.9 to 1.11. -- The default DNS provider in version 1.11 is [CoreDNS](https://coredns.io/) rather than [kube-dns](https://github.com/kubernetes/dns). -To keep `kube-dns`, pass `--feature-gates=CoreDNS=false` to `kubeadm upgrade apply`. - -{{% /capture %}} - -{{% capture steps %}} - -## Upgrade the control plane - -1. On your master node, run the following (as root): - - export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) # or manually specify a released Kubernetes version - export ARCH=amd64 # or: arm, arm64, ppc64le, s390x - curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > /usr/bin/kubeadm - chmod a+rx /usr/bin/kubeadm - - Note that upgrading the `kubeadm` package on your system prior to upgrading the control plane causes a failed upgrade. Even though `kubeadm` ships in the Kubernetes repositories, it's important to install it manually. The kubeadm team is working on fixing this limitation. - -1. Verify that the download works and has the expected version: - - ```shell - kubeadm version - ``` - -1. On the master node, run: - - ```shell - kubeadm upgrade plan - ``` - - You should see output similar to this: - - - - ```shell - [preflight] Running pre-flight checks. - [upgrade] Making sure the cluster is healthy: - [upgrade/config] Making sure the configuration is correct: - [upgrade/config] Reading configuration from the cluster... - [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' - I0618 20:32:32.950358 15307 feature_gate.go:230] feature gates: &{map[]} - [upgrade] Fetching available versions to upgrade to - [upgrade/versions] Cluster version: v1.10.4 - [upgrade/versions] kubeadm version: v1.11.0-beta.2.78+e0b33dbc2bde88 - - Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply': - COMPONENT CURRENT AVAILABLE - Kubelet 1 x v1.10.4 v1.11.0 - - Upgrade to the latest version in the v1.10 series: - - COMPONENT CURRENT AVAILABLE - API Server v1.10.4 v1.11.0 - Controller Manager v1.10.4 v1.11.0 - Scheduler v1.10.4 v1.11.0 - Kube Proxy v1.10.4 v1.11.0 - CoreDNS 1.1.3 - Kube DNS 1.14.8 - Etcd 3.1.12 3.2.18 - - You can now apply the upgrade by executing the following command: - - kubeadm upgrade apply v1.11.0 - - Note: Before you can perform this upgrade, you have to update kubeadm to v1.11.0. - - _____________________________________________________________________ - ``` - - This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to. - -1. Choose a version to upgrade to, and run the appropriate command. For example: - - ```shell - kubeadm upgrade apply v1.11.0 - ``` - - If you currently use `kube-dns` and wish to continue doing so, add `--feature-gates=CoreDNS=false`. - - You should see output similar to this: - - - - ```shell - [preflight] Running pre-flight checks. - [upgrade] Making sure the cluster is healthy: - [upgrade/config] Making sure the configuration is correct: - [upgrade/config] Reading configuration from the cluster... 
- [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' - I0614 20:56:08.320369 30918 feature_gate.go:230] feature gates: &{map[]} - [upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file. - [upgrade/version] You have chosen to change the cluster version to "v1.11.0-beta.2.78+e0b33dbc2bde88" - [upgrade/versions] Cluster version: v1.10.4 - [upgrade/versions] kubeadm version: v1.11.0-beta.2.78+e0b33dbc2bde88 - [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y - [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd] - [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.11.0-beta.2.78+e0b33dbc2bde88"... - Static pod: kube-apiserver-ip-172-31-85-18 hash: 7a329408b21bc0c44d7b3b78ff8187bf - Static pod: kube-controller-manager-ip-172-31-85-18 hash: 24fd3157627c7567b687968967c6a5e8 - Static pod: kube-scheduler-ip-172-31-85-18 hash: 5179266fb24d4c1834814c4f69486371 - Static pod: etcd-ip-172-31-85-18 hash: 9dfc197f444be11fcc70ab1467b030b8 - [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939/etcd.yaml" - [certificates] Using the existing etcd/ca certificate and key. - [certificates] Using the existing etcd/server certificate and key. - [certificates] Using the existing etcd/peer certificate and key. - [certificates] Using the existing etcd/healthcheck-client certificate and key. - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-06-14-20-56-11/etcd.yaml" - [upgrade/staticpods] Waiting for the kubelet to restart the component - Static pod: etcd-ip-172-31-85-18 hash: 9dfc197f444be11fcc70ab1467b030b8 - < snip > - [apiclient] Found 1 Pods for label selector component=etcd - [upgrade/staticpods] Component "etcd" upgraded successfully! - [upgrade/etcd] Waiting for etcd to become available - [util/etcd] Waiting 0s for initial delay - [util/etcd] Attempting to see if all cluster endpoints are available 1/10 - [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939" - [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939/kube-apiserver.yaml" - [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939/kube-controller-manager.yaml" - [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939/kube-scheduler.yaml" - [certificates] Using the existing etcd/ca certificate and key. - [certificates] Using the existing apiserver-etcd-client certificate and key. - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-06-14-20-56-11/kube-apiserver.yaml" - [upgrade/staticpods] Waiting for the kubelet to restart the component - Static pod: kube-apiserver-ip-172-31-85-18 hash: 7a329408b21bc0c44d7b3b78ff8187bf - < snip > - [apiclient] Found 1 Pods for label selector component=kube-apiserver - [upgrade/staticpods] Component "kube-apiserver" upgraded successfully! 
- [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-06-14-20-56-11/kube-controller-manager.yaml" - [upgrade/staticpods] Waiting for the kubelet to restart the component - Static pod: kube-controller-manager-ip-172-31-85-18 hash: 24fd3157627c7567b687968967c6a5e8 - Static pod: kube-controller-manager-ip-172-31-85-18 hash: 63992ff14733dcb9dcfa6ac0a3b8031a - [apiclient] Found 1 Pods for label selector component=kube-controller-manager - [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully! - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-06-14-20-56-11/kube-scheduler.yaml" - [upgrade/staticpods] Waiting for the kubelet to restart the component - Static pod: kube-scheduler-ip-172-31-85-18 hash: 5179266fb24d4c1834814c4f69486371 - Static pod: kube-scheduler-ip-172-31-85-18 hash: 831e4b9425f758e572392976311e56d9 - [apiclient] Found 1 Pods for label selector component=kube-scheduler - [upgrade/staticpods] Component "kube-scheduler" upgraded successfully! - [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace - [kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster - [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace - [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" - [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "ip-172-31-85-18" as an annotation - [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials - [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token - [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster - [addons] Applied essential addon: CoreDNS - [addons] Applied essential addon: kube-proxy - - [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.11.0-beta.2.78+e0b33dbc2bde88". Enjoy! - - [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so. - ``` - -1. Manually upgrade your Software Defined Network (SDN). - - Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow. - Check the [addons](/docs/concepts/cluster-administration/addons/) page to - find your CNI provider and see whether additional upgrade steps are required. - -## Upgrade master and node packages - -1. Prepare each host for maintenance, marking it unschedulable and evicting the workload: - - ```shell - kubectl drain $HOST --ignore-daemonsets - ``` - - On the master host, you must add `--ignore-daemonsets`: - - ```shell - kubectl drain ip-172-31-85-18 - node "ip-172-31-85-18" cordoned - error: unable to drain node "ip-172-31-85-18", aborting command... 
- - There are pending nodes to be drained: - ip-172-31-85-18 - error: DaemonSet-managed pods (use --ignore-daemonsets to ignore): calico-node-5798d, kube-proxy-thjp9 - ``` - - ``` - kubectl drain ip-172-31-85-18 --ignore-daemonsets - node "ip-172-31-85-18" already cordoned - WARNING: Ignoring DaemonSet-managed pods: calico-node-5798d, kube-proxy-thjp9 - node "ip-172-31-85-18" drained - ``` - -1. Upgrade the Kubernetes package version on each `$HOST` node by running the Linux package manager for your distribution: - - {{< tabs name="k8s_install" >}} - {{% tab name="Ubuntu, Debian or HypriotOS" %}} - apt-get update - apt-get upgrade -y kubelet kubeadm - {{% /tab %}} - {{% tab name="CentOS, RHEL or Fedora" %}} - yum upgrade -y kubelet kubeadm --disableexcludes=kubernetes - {{% /tab %}} - {{< /tabs >}} - -## Upgrade kubelet on each node - -1. On each node except the master node, upgrade the kubelet config: - - ```shell - sudo kubeadm upgrade node config --kubelet-version $(kubelet --version | cut -d ' ' -f 2) - ``` - -1. Restart the kubectl process: - - ```shell - sudo systemctl restart kubelet - ``` - -1. Verify that the new version of the `kubelet` is running on the host: - - ```shell - systemctl status kubelet - ``` - -1. Bring the host back online by marking it schedulable: - - ```shell - kubectl uncordon $HOST - ``` - -1. After the kubelet is upgraded on all hosts, verify that all nodes are available again by running the following command from anywhere -- for example, from outside the cluster: - - ```shell - kubectl get nodes - ``` - - The `STATUS` column should show `Ready` for all your hosts, and the version number should be updated. - -{{% /capture %}} - -## Recovering from a failure state - -If `kubeadm upgrade` fails and does not roll back, for example because of an unexpected shutdown during execution, -you can run `kubeadm upgrade` again. This command is idempotent and eventually makes sure that the actual state is the desired state you declare. - -To recover from a bad state, you can also run `kubeadm upgrade --force` without changing the version that your cluster is running. - -## How it works - -`kubeadm upgrade apply` does the following: - -- Checks that your cluster is in an upgradeable state: - - The API server is reachable, - - All nodes are in the `Ready` state - - The control plane is healthy -- Enforces the version skew policies. -- Makes sure the control plane images are available or available to pull to the machine. -- Upgrades the control plane components or rollbacks if any of them fails to come up. -- Applies the new `kube-dns` and `kube-proxy` manifests and enforces that all necessary RBAC rules are created. -- Creates new certificate and key files of the API server and backs up old files if they're about to expire in 180 days. 
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-12.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-12.md
index 314b5cde7065b..fa8831a92ca05 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-12.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-12.md
@@ -39,12 +39,14 @@ This page explains how to upgrade a Kubernetes cluster created with `kubeadm` fr
 
     {{< tabs name="k8s_install" >}}
     {{% tab name="Ubuntu, Debian or HypriotOS" %}}
+    # replace "x" with the latest patch version
     apt-mark unhold kubeadm && \
-    apt-get update && apt-get upgrade -y kubeadm && \
+    apt-get update && apt-get upgrade -y kubeadm=1.12.x-00 && \
     apt-mark hold kubeadm
     {{% /tab %}}
     {{% tab name="CentOS, RHEL or Fedora" %}}
-    yum upgrade -y kubeadm --disableexcludes=kubernetes
+    # replace "x" with the latest patch version
+    yum upgrade -y kubeadm-1.12.x --disableexcludes=kubernetes
     {{% /tab %}}
     {{< /tabs >}}
 
@@ -230,11 +232,13 @@ This page explains how to upgrade a Kubernetes cluster created with `kubeadm` fr
 
     {{< tabs name="k8s_upgrade" >}}
     {{% tab name="Ubuntu, Debian or HypriotOS" %}}
+    # replace "x" with the latest patch version
     apt-get update
-    apt-get upgrade -y kubelet kubeadm
+    apt-get upgrade -y kubelet=1.12.x-00 kubeadm=1.12.x-00
     {{% /tab %}}
     {{% tab name="CentOS, RHEL or Fedora" %}}
-    yum upgrade -y kubelet kubeadm --disableexcludes=kubernetes
+    # replace "x" with the latest patch version
+    yum upgrade -y kubelet-1.12.x kubeadm-1.12.x --disableexcludes=kubernetes
     {{% /tab %}}
     {{< /tabs >}}
 
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14.md
new file mode 100644
index 0000000000000..7b4f54ee3c1c8
--- /dev/null
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14.md
@@ -0,0 +1,382 @@
+---
+reviewers:
+- sig-cluster-lifecycle
+title: Upgrading kubeadm clusters from v1.13 to v1.14
+content_template: templates/task
+---
+
+{{% capture overview %}}
+
+This page explains how to upgrade a Kubernetes cluster created with kubeadm from version 1.13.x to version 1.14.x,
+and from version 1.14.x to 1.14.y (where `y > x`).
+
+The upgrade workflow at a high level is the following:
+
+1. Upgrade the primary control plane node.
+1. Upgrade additional control plane nodes.
+1. Upgrade worker nodes.
+
+{{< note >}}
+With the release of Kubernetes v1.14, the kubeadm instructions for upgrading both HA and single control plane clusters
+are merged into a single document.
+{{< /note >}}
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+- You need to have a kubeadm Kubernetes cluster running version 1.13.0 or later.
+- [Swap must be disabled](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux).
+- The cluster should use a static control plane and etcd pods.
+- Make sure you read the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.14.md) carefully.
+- Make sure to back up any important components, such as app-level state stored in a database.
+  `kubeadm upgrade` does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice.
+
+### Additional information
+
+- All containers are restarted after upgrade, because the container spec hash value is changed.
+- You can only upgrade from one MINOR version to the next MINOR version,
+  or between PATCH versions of the same MINOR.
+  That is, you cannot skip MINOR versions when you upgrade.
+  For example, you can upgrade from 1.y to 1.y+1, but not from 1.y to 1.y+2.
+
+{{% /capture %}}
+
+{{% capture steps %}}
+
+## Determine which version to upgrade to
+
+1. Find the latest stable 1.14 version:
+
+    {{< tabs name="k8s_install_versions" >}}
+    {{% tab name="Ubuntu, Debian or HypriotOS" %}}
+    apt update
+    apt-cache policy kubeadm
+    # find the latest 1.14 version in the list
+    # it should look like 1.14.x-00, where x is the latest patch
+    {{% /tab %}}
+    {{% tab name="CentOS, RHEL or Fedora" %}}
+    yum list --showduplicates kubeadm --disableexcludes=kubernetes
+    # find the latest 1.14 version in the list
+    # it should look like 1.14.x-0, where x is the latest patch
+    {{% /tab %}}
+    {{< /tabs >}}
+
+## Upgrade the first control plane node
+
+1. On your first control plane node, upgrade kubeadm:
+
+    {{< tabs name="k8s_install_kubeadm_first_cp" >}}
+    {{% tab name="Ubuntu, Debian or HypriotOS" %}}
+    # replace x in 1.14.x-00 with the latest patch version
+    apt-mark unhold kubeadm && \
+    apt-get update && apt-get install -y kubeadm=1.14.x-00 && \
+    apt-mark hold kubeadm
+    {{% /tab %}}
+    {{% tab name="CentOS, RHEL or Fedora" %}}
+    # replace x in 1.14.x-0 with the latest patch version
+    yum install -y kubeadm-1.14.x-0 --disableexcludes=kubernetes
+    {{% /tab %}}
+    {{< /tabs >}}
+
+1. Verify that the download works and has the expected version:
+
+    ```shell
+    kubeadm version
+    ```
+
+1. On the control plane node, run:
+
+    ```shell
+    sudo kubeadm upgrade plan
+    ```
+
+    You should see output similar to this:
+
+    ```shell
+    [preflight] Running pre-flight checks.
+    [upgrade] Making sure the cluster is healthy:
+    [upgrade/config] Making sure the configuration is correct:
+    [upgrade/config] Reading configuration from the cluster...
+    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
+    [upgrade] Fetching available versions to upgrade to
+    [upgrade/versions] Cluster version: v1.13.3
+    [upgrade/versions] kubeadm version: v1.14.0
+
+    Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
+    COMPONENT   CURRENT       AVAILABLE
+    Kubelet     2 x v1.13.3   v1.14.0
+
+    Upgrade to the latest version in the v1.13 series:
+
+    COMPONENT            CURRENT   AVAILABLE
+    API Server           v1.13.3   v1.14.0
+    Controller Manager   v1.13.3   v1.14.0
+    Scheduler            v1.13.3   v1.14.0
+    Kube Proxy           v1.13.3   v1.14.0
+    CoreDNS              1.2.6     1.3.1
+    Etcd                 3.2.24    3.3.10
+
+    You can now apply the upgrade by executing the following command:
+
+        kubeadm upgrade apply v1.14.0
+
+    _____________________________________________________________________
+    ```
+
+    This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to.
+
+1. Choose a version to upgrade to, and run the appropriate command. For example:
+
+    ```shell
+    sudo kubeadm upgrade apply v1.14.x
+    ```
+
+    - Replace `x` with the patch version you picked for this upgrade.
+
+    You should see output similar to this:
+
+    ```shell
+    [preflight] Running pre-flight checks.
+    [upgrade] Making sure the cluster is healthy:
+    [upgrade/config] Making sure the configuration is correct:
+    [upgrade/config] Reading configuration from the cluster...
+ [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' + [upgrade/version] You have chosen to change the cluster version to "v1.14.0" + [upgrade/versions] Cluster version: v1.13.3 + [upgrade/versions] kubeadm version: v1.14.0 + [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y + [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd] + [upgrade/prepull] Prepulling image for component etcd. + [upgrade/prepull] Prepulling image for component kube-scheduler. + [upgrade/prepull] Prepulling image for component kube-apiserver. + [upgrade/prepull] Prepulling image for component kube-controller-manager. + [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd + [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler + [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager + [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver + [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd + [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager + [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler + [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver + [upgrade/prepull] Prepulled image for component etcd. + [upgrade/prepull] Prepulled image for component kube-apiserver. + [upgrade/prepull] Prepulled image for component kube-scheduler. + [upgrade/prepull] Prepulled image for component kube-controller-manager. + [upgrade/prepull] Successfully prepulled the images for all the control plane components + [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.14.0"... + Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400 + Static pod: kube-controller-manager-myhost hash: 8ee730c1a5607a87f35abb2183bf03f2 + Static pod: kube-scheduler-myhost hash: 4b52d75cab61380f07c0c5a69fb371d4 + [upgrade/etcd] Upgrading to TLS for etcd + Static pod: etcd-myhost hash: 877025e7dd7adae8a04ee20ca4ecb239 + [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/etcd.yaml" + [upgrade/staticpods] Waiting for the kubelet to restart the component + [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) + Static pod: etcd-myhost hash: 877025e7dd7adae8a04ee20ca4ecb239 + Static pod: etcd-myhost hash: 877025e7dd7adae8a04ee20ca4ecb239 + Static pod: etcd-myhost hash: 64a28f011070816f4beb07a9c96d73b6 + [apiclient] Found 1 Pods for label selector component=etcd + [upgrade/staticpods] Component "etcd" upgraded successfully! 
+ [upgrade/etcd] Waiting for etcd to become available + [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests043818770" + [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/kube-apiserver.yaml" + [upgrade/staticpods] Waiting for the kubelet to restart the component + [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) + Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400 + Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400 + Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400 + Static pod: kube-apiserver-myhost hash: b8a6533e241a8c6dab84d32bb708b8a1 + [apiclient] Found 1 Pods for label selector component=kube-apiserver + [upgrade/staticpods] Component "kube-apiserver" upgraded successfully! + [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/kube-controller-manager.yaml" + [upgrade/staticpods] Waiting for the kubelet to restart the component + [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) + Static pod: kube-controller-manager-myhost hash: 8ee730c1a5607a87f35abb2183bf03f2 + Static pod: kube-controller-manager-myhost hash: 6f77d441d2488efd9fc2d9a9987ad30b + [apiclient] Found 1 Pods for label selector component=kube-controller-manager + [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully! + [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/kube-scheduler.yaml" + [upgrade/staticpods] Waiting for the kubelet to restart the component + [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) + Static pod: kube-scheduler-myhost hash: 4b52d75cab61380f07c0c5a69fb371d4 + Static pod: kube-scheduler-myhost hash: a24773c92bb69c3748fcce5e540b7574 + [apiclient] Found 1 Pods for label selector component=kube-scheduler + [upgrade/staticpods] Component "kube-scheduler" upgraded successfully! + [upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace + [kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster + [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace + [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" + [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials + [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token + [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster + [addons] Applied essential addon: CoreDNS + [addons] Applied essential addon: kube-proxy + + [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.14.0". Enjoy! 
+
+    [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
+    ```
+
+1. Manually upgrade your CNI provider plugin.
+
+    Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow.
+    Check the [addons](/docs/concepts/cluster-administration/addons/) page to
+    find your CNI provider and see whether additional upgrade steps are required.
+
+1. Upgrade the kubelet and kubectl on the control plane node:
+
+    {{< tabs name="k8s_install_kubelet" >}}
+    {{% tab name="Ubuntu, Debian or HypriotOS" %}}
+    # replace x in 1.14.x-00 with the latest patch version
+    apt-mark unhold kubelet && \
+    apt-get update && apt-get install -y kubelet=1.14.x-00 kubectl=1.14.x-00 && \
+    apt-mark hold kubelet
+    {{% /tab %}}
+    {{% tab name="CentOS, RHEL or Fedora" %}}
+    # replace x in 1.14.x-0 with the latest patch version
+    yum install -y kubelet-1.14.x-0 kubectl-1.14.x-0 --disableexcludes=kubernetes
+    {{% /tab %}}
+    {{< /tabs >}}
+
+1. Restart the kubelet
+
+    ```shell
+    sudo systemctl restart kubelet
+    ```
+
+## Upgrade additional control plane nodes
+
+1. Do the same as on the first control plane node, but use:
+
+```
+sudo kubeadm upgrade node experimental-control-plane
+```
+
+instead of:
+
+```
+sudo kubeadm upgrade apply
+```
+
+Also, `sudo kubeadm upgrade plan` is not needed.
+
+## Upgrade worker nodes
+
+The upgrade procedure on worker nodes should be executed one node at a time or a few nodes at a time,
+without compromising the minimum required capacity for running your workloads.
+
+### Upgrade kubeadm
+
+1. Upgrade kubeadm on all worker nodes:
+
+    {{< tabs name="k8s_install_kubeadm_worker_nodes" >}}
+    {{% tab name="Ubuntu, Debian or HypriotOS" %}}
+    # replace x in 1.14.x-00 with the latest patch version
+    apt-mark unhold kubeadm && \
+    apt-get update && apt-get install -y kubeadm=1.14.x-00 && \
+    apt-mark hold kubeadm
+    {{% /tab %}}
+    {{% tab name="CentOS, RHEL or Fedora" %}}
+    # replace x in 1.14.x-0 with the latest patch version
+    yum install -y kubeadm-1.14.x-0 --disableexcludes=kubernetes
+    {{% /tab %}}
+    {{< /tabs >}}
+
+### Cordon the node
+
+1. Prepare the node for maintenance by marking it unschedulable and evicting the workloads. Run:
+
+    ```shell
+    kubectl drain $NODE --ignore-daemonsets
+    ```
+
+    You should see output similar to this:
+
+    ```shell
+    kubectl drain ip-172-31-85-18
+    node "ip-172-31-85-18" cordoned
+    error: unable to drain node "ip-172-31-85-18", aborting command...
+
+    There are pending nodes to be drained:
+     ip-172-31-85-18
+    error: DaemonSet-managed pods (use --ignore-daemonsets to ignore): calico-node-5798d, kube-proxy-thjp9
+    ```
+
+### Upgrade the kubelet config
+
+1. Upgrade the kubelet config:
+
+    ```shell
+    sudo kubeadm upgrade node config --kubelet-version v1.14.x
+    ```
+
+    Replace `x` with the patch version you picked for this upgrade.
+
+
+### Upgrade kubelet and kubectl
+
+1. Upgrade the Kubernetes package version by running the Linux package manager for your distribution:
+
+    {{< tabs name="k8s_kubelet_and_kubectl" >}}
+    {{% tab name="Ubuntu, Debian or HypriotOS" %}}
+    # replace x in 1.14.x-00 with the latest patch version
+    apt-get update
+    apt-get install -y kubelet=1.14.x-00 kubectl=1.14.x-00
+    {{% /tab %}}
+    {{% tab name="CentOS, RHEL or Fedora" %}}
+    # replace x in 1.14.x-0 with the latest patch version
+    yum install -y kubelet-1.14.x-0 kubectl-1.14.x-0 --disableexcludes=kubernetes
+    {{% /tab %}}
+    {{< /tabs >}}
+
+1. Restart the kubelet
+
+    ```shell
+    sudo systemctl restart kubelet
+    ```
+
+### Uncordon the node
+
+1. Bring the node back online by marking it schedulable:
+
+    ```shell
+    kubectl uncordon $NODE
+    ```
+
+## Verify the status of the cluster
+
+After the kubelet is upgraded on all nodes, verify that all nodes are available again by running the following command from anywhere kubectl can access the cluster:
+
+```shell
+kubectl get nodes
+```
+
+The `STATUS` column should show `Ready` for all your nodes, and the version number should be updated.
+
+{{% /capture %}}
+
+## Recovering from a failure state
+
+If `kubeadm upgrade` fails and does not roll back, for example because of an unexpected shutdown during execution, you can run `kubeadm upgrade` again.
+This command is idempotent and eventually makes sure that the actual state is the desired state you declare.
+
+To recover from a bad state, you can also run `kubeadm upgrade --force` without changing the version that your cluster is running.
+
+## How it works
+
+`kubeadm upgrade apply` does the following:
+
+- Checks that your cluster is in an upgradeable state:
+  - The API server is reachable
+  - All nodes are in the `Ready` state
+  - The control plane is healthy
+- Enforces the version skew policies.
+- Makes sure the control plane images are available or available to pull to the machine.
+- Upgrades the control plane components or rolls back if any of them fails to come up.
+- Applies the new `kube-dns` and `kube-proxy` manifests and makes sure that all necessary RBAC rules are created.
+- Creates new certificate and key files of the API server and backs up old files if they're about to expire in 180 days.
+
+`kubeadm upgrade node experimental-control-plane` does the following on additional control plane nodes:
+- Fetches the kubeadm `ClusterConfiguration` from the cluster.
+- Optionally backs up the kube-apiserver certificate.
+- Upgrades the static Pod manifests for the control plane components.
diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md
index 249265300f13c..78693d4405410 100644
--- a/content/en/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md
+++ b/content/en/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md
@@ -45,7 +45,7 @@ Here's the configuration file for a LimitRange:
 Create the LimitRange:
 
 ```shell
-kubectl create -f https://k8s.io/examples/admin/resource/cpu-constraints.yaml --namespace=constraints-cpu-example
+kubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints.yaml --namespace=constraints-cpu-example
 ```
 
 View detailed information about the LimitRange:
@@ -96,7 +96,7 @@ minimum and maximum CPU constraints imposed by the LimitRange.
 Create the Pod:
 
 ```shell
-kubectl create -f https://k8s.io/examples/admin/resource/cpu-constraints-pod.yaml --namespace=constraints-cpu-example
+kubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod.yaml --namespace=constraints-cpu-example
 ```
 
 Verify that the Pod's Container is running:
@@ -138,7 +138,7 @@ CPU request of 500 millicpu and a cpu limit of 1.5 cpu.
Attempt to create the Pod: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-2.yaml --namespace=constraints-cpu-example +kubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-2.yaml --namespace=constraints-cpu-example ``` The output shows that the Pod does not get created, because the Container specifies a CPU limit that is @@ -159,7 +159,7 @@ CPU request of 100 millicpu and a CPU limit of 800 millicpu. Attempt to create the Pod: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-3.yaml --namespace=constraints-cpu-example +kubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-3.yaml --namespace=constraints-cpu-example ``` The output shows that the Pod does not get created, because the Container specifies a CPU @@ -180,7 +180,7 @@ specify a CPU request, and it does not specify a CPU limit. Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-4.yaml --namespace=constraints-cpu-example +kubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-4.yaml --namespace=constraints-cpu-example ``` View detailed information about the Pod: diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace.md index 53edcd7f25c36..df5ec46de5298 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace.md @@ -40,7 +40,7 @@ a default CPU request and a default CPU limit. Create the LimitRange in the default-cpu-example namespace: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/cpu-defaults.yaml --namespace=default-cpu-example +kubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults.yaml --namespace=default-cpu-example ``` Now if a Container is created in the default-cpu-example namespace, and the @@ -56,7 +56,7 @@ does not specify a CPU request and limit. Create the Pod. 
```shell -kubectl create -f https://k8s.io/examples/admin/resource/cpu-defaults-pod.yaml --namespace=default-cpu-example +kubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults-pod.yaml --namespace=default-cpu-example ``` View the Pod's specification: @@ -91,7 +91,7 @@ Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/cpu-defaults-pod-2.yaml --namespace=default-cpu-example +kubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults-pod-2.yaml --namespace=default-cpu-example ``` View the Pod specification: @@ -121,7 +121,7 @@ specifies a CPU request, but not a limit: Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/cpu-defaults-pod-3.yaml --namespace=default-cpu-example +kubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults-pod-3.yaml --namespace=default-cpu-example ``` View the Pod specification: diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md index aca790e87ac5f..e6a6e1c2b0d39 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md @@ -45,7 +45,7 @@ Here's the configuration file for a LimitRange: Create the LimitRange: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/memory-constraints.yaml --namespace=constraints-mem-example +kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints.yaml --namespace=constraints-mem-example ``` View detailed information about the LimitRange: @@ -90,7 +90,7 @@ minimum and maximum memory constraints imposed by the LimitRange. Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/memory-constraints-pod.yaml --namespace=constraints-mem-example +kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod.yaml --namespace=constraints-mem-example ``` Verify that the Pod's Container is running: @@ -132,7 +132,7 @@ memory request of 800 MiB and a memory limit of 1.5 GiB. Attempt to create the Pod: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/memory-constraints-pod-2.yaml --namespace=constraints-mem-example +kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-2.yaml --namespace=constraints-mem-example ``` The output shows that the Pod does not get created, because the Container specifies a memory limit that is @@ -153,7 +153,7 @@ memory request of 100 MiB and a memory limit of 800 MiB. Attempt to create the Pod: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/memory-constraints-pod-3.yaml --namespace=constraints-mem-example +kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-3.yaml --namespace=constraints-mem-example ``` The output shows that the Pod does not get created, because the Container specifies a memory @@ -176,7 +176,7 @@ specify a memory request, and it does not specify a memory limit. 
Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/memory-constraints-pod-4.yaml --namespace=constraints-mem-example +kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-4.yaml --namespace=constraints-mem-example ``` View detailed information about the Pod: diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md index 94f07c040c6eb..197d61171734e 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md @@ -42,7 +42,7 @@ a default memory request and a default memory limit. Create the LimitRange in the default-mem-example namespace: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/memory-defaults.yaml --namespace=default-mem-example +kubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults.yaml --namespace=default-mem-example ``` Now if a Container is created in the default-mem-example namespace, and the @@ -58,7 +58,7 @@ does not specify a memory request and limit. Create the Pod. ```shell -kubectl create -f https://k8s.io/examples/admin/resource/memory-defaults-pod.yaml --namespace=default-mem-example +kubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults-pod.yaml --namespace=default-mem-example ``` View detailed information about the Pod: @@ -99,7 +99,7 @@ Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/memory-defaults-pod-2.yaml --namespace=default-mem-example +kubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults-pod-2.yaml --namespace=default-mem-example ``` View detailed information about the Pod: @@ -129,7 +129,7 @@ specifies a memory request, but not a limit: Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/memory-defaults-pod-3.yaml --namespace=default-mem-example +kubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults-pod-3.yaml --namespace=default-mem-example ``` View the Pod's specification: diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md index 4bdc646742f54..9558766410663 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md @@ -44,7 +44,7 @@ Here is the configuration file for a ResourceQuota object: Create the ResourceQuota: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/quota-mem-cpu.yaml --namespace=quota-mem-cpu-example +kubectl apply -f https://k8s.io/examples/admin/resource/quota-mem-cpu.yaml --namespace=quota-mem-cpu-example ``` View detailed information about the ResourceQuota: @@ -71,7 +71,7 @@ Here is the configuration file for a Pod: Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/quota-mem-cpu-pod.yaml --namespace=quota-mem-cpu-example +kubectl apply -f https://k8s.io/examples/admin/resource/quota-mem-cpu-pod.yaml --namespace=quota-mem-cpu-example ``` Verify that the Pod's Container is running: @@ -117,7 +117,7 @@ request exceeds the memory request quota. 600 MiB + 700 MiB > 1 GiB. 
Attempt to create the Pod: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/quota-mem-cpu-pod-2.yaml --namespace=quota-mem-cpu-example +kubectl apply -f https://k8s.io/examples/admin/resource/quota-mem-cpu-pod-2.yaml --namespace=quota-mem-cpu-example ``` The second Pod does not get created. The output shows that creating the second Pod diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace.md index 1cad0ee7bd5dc..31cac82cf1016 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace.md @@ -42,7 +42,7 @@ Here is the configuration file for a ResourceQuota object: Create the ResourceQuota: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/quota-pod.yaml --namespace=quota-pod-example +kubectl apply -f https://k8s.io/examples/admin/resource/quota-pod.yaml --namespace=quota-pod-example ``` View detailed information about the ResourceQuota: @@ -74,7 +74,7 @@ In the configuration file, `replicas: 3` tells Kubernetes to attempt to create t Create the Deployment: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/quota-pod-deployment.yaml --namespace=quota-pod-example +kubectl apply -f https://k8s.io/examples/admin/resource/quota-pod-deployment.yaml --namespace=quota-pod-example ``` View detailed information about the Deployment: diff --git a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md index 4cd6eba7461c2..7779b25d738db 100644 --- a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md +++ b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md @@ -7,7 +7,8 @@ content_template: templates/task --- {{% capture overview %}} -Kubernetes _namespaces_ help different projects, teams, or customers to share a Kubernetes cluster. +Kubernetes {{< glossary_tooltip text="namespaces" term_id="namespace" >}} +help different projects, teams, or customers to share a Kubernetes cluster. It does this by providing the following: @@ -44,7 +45,9 @@ Services, and Deployments used by the cluster. Assuming you have a fresh cluster, you can inspect the available namespaces by doing the following: ```shell -$ kubectl get namespaces +kubectl get namespaces +``` +``` NAME STATUS AGE default Active 13m ``` @@ -62,34 +65,36 @@ are relaxed to enable agile development. The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of Pods, Services, and Deployments that run the production site. -One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production. +One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: `development` and `production`. Let's create two new namespaces to hold our work. -Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which describes a development namespace: +Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which describes a `development` namespace: {{< codenew language="json" file="admin/namespace-dev.json" >}} -Create the development namespace using kubectl. +Create the `development` namespace using kubectl. 
```shell -$ kubectl create -f https://k8s.io/examples/admin/namespace-dev.json +kubectl create -f https://k8s.io/examples/admin/namespace-dev.json ``` -Save the following contents into file [`namespace-prod.json`](/examples/admin/namespace-prod.json) which describes a production namespace: +Save the following contents into file [`namespace-prod.json`](/examples/admin/namespace-prod.json) which describes a `production` namespace: {{< codenew language="json" file="admin/namespace-prod.json" >}} -And then let's create the production namespace using kubectl. +And then let's create the `production` namespace using kubectl. ```shell -$ kubectl create -f https://k8s.io/examples/admin/namespace-prod.json +kubectl create -f https://k8s.io/examples/admin/namespace-prod.json ``` To be sure things are right, let's list all of the namespaces in our cluster. ```shell -$ kubectl get namespaces --show-labels +kubectl get namespaces --show-labels +``` +``` NAME STATUS AGE LABELS default Active 32m development Active 29s name=development @@ -102,12 +107,14 @@ A Kubernetes namespace provides the scope for Pods, Services, and Deployments in Users interacting with one namespace do not see the content in another namespace. -To demonstrate this, let's spin up a simple Deployment and Pods in the development namespace. +To demonstrate this, let's spin up a simple Deployment and Pods in the `development` namespace. We first check what is the current context: ```shell -$ kubectl config view +kubectl config view +``` +```yaml apiVersion: v1 clusters: - cluster: @@ -132,18 +139,22 @@ users: user: password: h5M0FtUUIflBSdI7 username: admin - -$ kubectl config current-context +``` +```shell +kubectl config current-context +``` +``` lithe-cocoa-92103_kubernetes ``` The next step is to define a context for the kubectl client to work in each namespace. The value of "cluster" and "user" fields are copied from the current context. ```shell -$ kubectl config set-context dev --namespace=development \ +kubectl config set-context dev --namespace=development \ --cluster=lithe-cocoa-92103_kubernetes \ --user=lithe-cocoa-92103_kubernetes -$ kubectl config set-context prod --namespace=production \ + +kubectl config set-context prod --namespace=production \ --cluster=lithe-cocoa-92103_kubernetes \ --user=lithe-cocoa-92103_kubernetes ``` @@ -155,7 +166,9 @@ new request contexts depending on which namespace you wish to work against. To view the new contexts: ```shell -$ kubectl config view +kubectl config view +``` +```yaml apiVersion: v1 clusters: - cluster: @@ -192,62 +205,72 @@ users: username: admin ``` -Let's switch to operate in the development namespace. +Let's switch to operate in the `development` namespace. ```shell -$ kubectl config use-context dev +kubectl config use-context dev ``` You can verify your current context by doing the following: ```shell -$ kubectl config current-context +kubectl config current-context +``` +``` dev ``` -At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace. +At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the `development` namespace. Let's create some contents. ```shell -$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2 +kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2 ``` -We have just created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that just serves the hostname. 
+We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname. Note that `kubectl run` creates deployments only on Kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead. If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) for more details. ```shell -$ kubectl get deployment +kubectl get deployment +``` +``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE snowflake 2 2 2 2 2m +``` -$ kubectl get pods -l run=snowflake +```shell +kubectl get pods -l run=snowflake +``` +``` NAME READY STATUS RESTARTS AGE snowflake-3968820950-9dgr8 1/1 Running 0 2m snowflake-3968820950-vgc4n 1/1 Running 0 2m ``` -And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace. +And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the `production` namespace. -Let's switch to the production namespace and show how resources in one namespace are hidden from the other. +Let's switch to the `production` namespace and show how resources in one namespace are hidden from the other. ```shell -$ kubectl config use-context prod +kubectl config use-context prod ``` -The production namespace should be empty, and the following commands should return nothing. +The `production` namespace should be empty, and the following commands should return nothing. ```shell -$ kubectl get deployment -$ kubectl get pods +kubectl get deployment +kubectl get pods ``` Production likes to run cattle, so let's create some cattle pods. ```shell -$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5 +kubectl run cattle --image=kubernetes/serve_hostname --replicas=5 -$ kubectl get deployment +kubectl get deployment +``` +``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE cattle 5 5 5 5 10s diff --git a/content/en/docs/tasks/administer-cluster/namespaces.md b/content/en/docs/tasks/administer-cluster/namespaces.md index ce1ebbce3df85..6900d6d558069 100644 --- a/content/en/docs/tasks/administer-cluster/namespaces.md +++ b/content/en/docs/tasks/administer-cluster/namespaces.md @@ -7,7 +7,7 @@ content_template: templates/task --- {{% capture overview %}} -This page shows how to view, work in, and delete namespaces. The page also shows how to use Kubernetes namespaces to subdivide your cluster. +This page shows how to view, work in, and delete {{< glossary_tooltip text="namespaces" term_id="namespace" >}}. The page also shows how to use Kubernetes namespaces to subdivide your cluster. {{% /capture %}} {{% capture prerequisites %}} @@ -22,7 +22,9 @@ This page shows how to view, work in, and delete namespaces. The page also shows 1. 
List the current namespaces in a cluster using: ```shell -$ kubectl get namespaces +kubectl get namespaces +``` +``` NAME STATUS AGE default Active 11d kube-system Active 11d @@ -38,13 +40,15 @@ Kubernetes starts with three initial namespaces: You can also get the summary of a specific namespace using: ```shell -$ kubectl get namespaces +kubectl get namespaces ``` Or you can get detailed information with: ```shell -$ kubectl describe namespaces +kubectl describe namespaces +``` +``` Name: default Labels: Annotations: @@ -89,7 +93,7 @@ metadata: Then run: ```shell -$ kubectl create -f ./my-namespace.yaml +kubectl create -f ./my-namespace.yaml ``` Note that the name of your namespace must be a DNS compatible label. @@ -103,7 +107,7 @@ More information on `finalizers` can be found in the namespace [design doc](http 1. Delete a namespace with ```shell -$ kubectl delete namespaces +kubectl delete namespaces ``` {{< warning >}} @@ -122,7 +126,9 @@ Services, and Deployments used by the cluster. Assuming you have a fresh cluster, you can introspect the available namespace's by doing the following: ```shell -$ kubectl get namespaces +kubectl get namespaces +``` +``` NAME STATUS AGE default Active 13m ``` @@ -140,30 +146,32 @@ are relaxed to enable agile development. The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of Pods, Services, and Deployments that run the production site. -One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production. +One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: `development` and `production`. Let's create two new namespaces to hold our work. -Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which describes a development namespace: +Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which describes a `development` namespace: {{< codenew language="json" file="admin/namespace-dev.json" >}} -Create the development namespace using kubectl. +Create the `development` namespace using kubectl. ```shell -$ kubectl create -f https://k8s.io/examples/admin/namespace-dev.json +kubectl create -f https://k8s.io/examples/admin/namespace-dev.json ``` -And then let's create the production namespace using kubectl. +And then let's create the `production` namespace using kubectl. ```shell -$ kubectl create -f https://k8s.io/examples/admin/namespace-prod.json +kubectl create -f https://k8s.io/examples/admin/namespace-prod.json ``` To be sure things are right, list all of the namespaces in our cluster. ```shell -$ kubectl get namespaces --show-labels +kubectl get namespaces --show-labels +``` +``` NAME STATUS AGE LABELS default Active 32m development Active 29s name=development @@ -176,12 +184,14 @@ A Kubernetes namespace provides the scope for Pods, Services, and Deployments in Users interacting with one namespace do not see the content in another namespace. -To demonstrate this, let's spin up a simple Deployment and Pods in the development namespace. +To demonstrate this, let's spin up a simple Deployment and Pods in the `development` namespace. 
We first check what is the current context: ```shell -$ kubectl config view +kubectl config view +``` +```yaml apiVersion: v1 clusters: - cluster: @@ -206,81 +216,96 @@ users: user: password: h5M0FtUUIflBSdI7 username: admin +``` -$ kubectl config current-context +```shell +kubectl config current-context +``` +``` lithe-cocoa-92103_kubernetes ``` The next step is to define a context for the kubectl client to work in each namespace. The values of "cluster" and "user" fields are copied from the current context. ```shell -$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes -$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes +kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes +kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes ``` The above commands provided two request contexts you can alternate against depending on what namespace you wish to work against. -Let's switch to operate in the development namespace. +Let's switch to operate in the `development` namespace. ```shell -$ kubectl config use-context dev +kubectl config use-context dev ``` You can verify your current context by doing the following: ```shell -$ kubectl config current-context +kubectl config current-context dev ``` -At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace. +At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the `development` namespace. Let's create some contents. ```shell -$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2 +kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2 ``` -We have just created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that just serves the hostname. +We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname. Note that `kubectl run` creates deployments only on Kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead. If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) for more details. ```shell -$ kubectl get deployment +kubectl get deployment +``` +``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE snowflake 2 2 2 2 2m - -$ kubectl get pods -l run=snowflake +``` +```shell +kubectl get pods -l run=snowflake +``` +``` NAME READY STATUS RESTARTS AGE snowflake-3968820950-9dgr8 1/1 Running 0 2m snowflake-3968820950-vgc4n 1/1 Running 0 2m ``` -And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace. +And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the `production` namespace. -Let's switch to the production namespace and show how resources in one namespace are hidden from the other. +Let's switch to the `production` namespace and show how resources in one namespace are hidden from the other. 
```shell -$ kubectl config use-context prod +kubectl config use-context prod ``` -The production namespace should be empty, and the following commands should return nothing. +The `production` namespace should be empty, and the following commands should return nothing. ```shell -$ kubectl get deployment -$ kubectl get pods +kubectl get deployment +kubectl get pods ``` Production likes to run cattle, so let's create some cattle pods. ```shell -$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5 +kubectl run cattle --image=kubernetes/serve_hostname --replicas=5 -$ kubectl get deployment +kubectl get deployment +``` +``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE cattle 5 5 5 5 10s +``` +```shell kubectl get pods -l run=cattle +``` +``` NAME READY STATUS RESTARTS AGE cattle-2263376956-41xy6 1/1 Running 0 34s cattle-2263376956-kw466 1/1 Running 0 34s diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md index ed3fd0f7157c8..0e26feba6ee96 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md @@ -26,20 +26,28 @@ To get familiar with Cilium easily you can follow the [Cilium Kubernetes Getting Started Guide](https://cilium.readthedocs.io/en/stable/gettingstarted/minikube/) to perform a basic DaemonSet installation of Cilium in minikube. -As Cilium requires a standalone etcd instance, for minikube you can deploy it -by running: +To start minikube, minimal version required is >= v0.33.1, run the with the +following arguments: ```shell -kubectl create -n kube-system -f https://raw.githubusercontent.com/cilium/cilium/v1.3/examples/kubernetes/addons/etcd/standalone-etcd.yaml +minikube version +``` +``` +minikube version: v0.33.1 ``` -After etcd is up and running you can deploy Cilium Kubernetes descriptor which -is a simple ''all-in-one'' YAML file that includes DaemonSet configurations for -Cilium, to connect to the etcd instance previously deployed as well as -appropriate RBAC settings: +```shell +minikube start --network-plugin=cni --memory=4096 +``` + +For minikube you can deploy this simple ''all-in-one'' YAML file that includes +DaemonSet configurations for Cilium, and the necessary configurations to connect +to the etcd instance deployed in minikube as well as appropriate RBAC settings: ```shell -$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.3/examples/kubernetes/1.12/cilium.yaml +kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.4/examples/kubernetes/1.13/cilium-minikube.yaml +``` +``` configmap/cilium-config created daemonset.apps/cilium created clusterrolebinding.rbac.authorization.k8s.io/cilium created @@ -54,7 +62,7 @@ policies using an example application. ## Deploying Cilium for Production Use For detailed instructions around deploying Cilium for production, see: -[Cilium Kubernetes Installation Guide](https://cilium.readthedocs.io/en/latest/kubernetes/install/) +[Cilium Kubernetes Installation Guide](https://cilium.readthedocs.io/en/stable/kubernetes/intro/) This documentation includes detailed requirements, instructions and example production DaemonSet files. 
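Whichever installation path you follow, it is worth confirming that the Cilium agent pods are up before applying any policies. A quick check, assuming the upstream manifests keep their usual `k8s-app=cilium` label:

```shell
# The Cilium agent runs as a DaemonSet in the kube-system namespace
kubectl -n kube-system get daemonset cilium

# All agent pods should eventually report Running
kubectl -n kube-system get pods -l k8s-app=cilium
```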
@@ -83,7 +91,7 @@ There are two main components to be aware of: - One `cilium` Pod runs on each node in your cluster and enforces network policy on the traffic to/from Pods on that node using Linux BPF. - For production deployments, Cilium should leverage a key-value store -(e.g., etcd). The [Cilium Kubernetes Installation Guide](https://cilium.readthedocs.io/en/latest/kubernetes/install/) +(e.g., etcd). The [Cilium Kubernetes Installation Guide](https://cilium.readthedocs.io/en/stable/kubernetes/intro/) will provide the necessary steps on how to install this required key-value store as well how to configure it in Cilium. diff --git a/content/en/docs/tasks/administer-cluster/out-of-resource.md b/content/en/docs/tasks/administer-cluster/out-of-resource.md index 1f0e4db4ad82d..9051f57d15b80 100644 --- a/content/en/docs/tasks/administer-cluster/out-of-resource.md +++ b/content/en/docs/tasks/administer-cluster/out-of-resource.md @@ -225,7 +225,7 @@ If necessary, `kubelet` evicts Pods one at a time to reclaim disk when `DiskPres is encountered. If the `kubelet` is responding to `inode` starvation, it reclaims `inodes` by evicting Pods with the lowest quality of service first. If the `kubelet` is responding to lack of available disk, it ranks Pods within a quality of service -that consumes the largest amount of disk and kill those first. +that consumes the largest amount of disk and kills those first. #### With `imagefs` diff --git a/content/en/docs/tasks/administer-cluster/quota-api-object.md b/content/en/docs/tasks/administer-cluster/quota-api-object.md index 971a4901eb0e9..c6cff9076901e 100644 --- a/content/en/docs/tasks/administer-cluster/quota-api-object.md +++ b/content/en/docs/tasks/administer-cluster/quota-api-object.md @@ -43,7 +43,7 @@ Here is the configuration file for a ResourceQuota object: Create the ResourceQuota: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/quota-objects.yaml --namespace=quota-object-example +kubectl apply -f https://k8s.io/examples/admin/resource/quota-objects.yaml --namespace=quota-object-example ``` View detailed information about the ResourceQuota: @@ -77,7 +77,7 @@ Here is the configuration file for a PersistentVolumeClaim object: Create the PersistentVolumeClaim: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/quota-objects-pvc.yaml --namespace=quota-object-example +kubectl apply -f https://k8s.io/examples/admin/resource/quota-objects-pvc.yaml --namespace=quota-object-example ``` Verify that the PersistentVolumeClaim was created: @@ -102,7 +102,7 @@ Here is the configuration file for a second PersistentVolumeClaim: Attempt to create the second PersistentVolumeClaim: ```shell -kubectl create -f https://k8s.io/examples/admin/resource/quota-objects-pvc-2.yaml --namespace=quota-object-example +kubectl apply -f https://k8s.io/examples/admin/resource/quota-objects-pvc-2.yaml --namespace=quota-object-example ``` The output shows that the second PersistentVolumeClaim was not created, diff --git a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md index 923db9a03c606..c259369bf0309 100644 --- a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md +++ b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md @@ -88,7 +88,7 @@ be configured to use the `systemd` cgroup driver. 
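Before setting the flags below, it can be useful to double-check which cgroup driver is actually in use. A small sketch, assuming a Docker-based node and the default kubelet config location (adjust both for your setup):

```shell
# Cgroup driver reported by the container runtime (Docker assumed here)
docker info | grep -i 'cgroup driver'

# Cgroup driver configured for the kubelet, if it is set in the config file
grep -i cgroupDriver /var/lib/kubelet/config.yaml
```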
### Kube Reserved -- **Kubelet Flag**: `--kube-reserved=[cpu=100m][,][memory=100Mi][,][ephemeral-storage=1Gi]` +- **Kubelet Flag**: `--kube-reserved=[cpu=100m][,][memory=100Mi][,][ephemeral-storage=1Gi][,][pid=1000]` - **Kubelet Flag**: `--kube-reserved-cgroup=` `kube-reserved` is meant to capture resource reservation for kubernetes system @@ -102,6 +102,10 @@ post](https://kubernetes.io/blog/2016/11/visualize-kubelet-performance-with-node explains how the dashboard can be interpreted to come up with a suitable `kube-reserved` reservation. +In addition to `cpu`, `memory`, and `ephemeral-storage`, `pid` may be +specified to reserve the specified number of process IDs for +kubernetes system daemons. + To optionally enforce `kube-reserved` on system daemons, specify the parent control group for kube daemons as the value for `--kube-reserved-cgroup` kubelet flag. @@ -118,7 +122,7 @@ exist. Kubelet will fail if an invalid cgroup is specified. ### System Reserved -- **Kubelet Flag**: `--system-reserved=[cpu=100m][,][memory=100Mi][,][ephemeral-storage=1Gi]` +- **Kubelet Flag**: `--system-reserved=[cpu=100m][,][memory=100Mi][,][ephemeral-storage=1Gi][,][pid=1000]` - **Kubelet Flag**: `--system-reserved-cgroup=` @@ -128,6 +132,10 @@ like `sshd`, `udev`, etc. `system-reserved` should reserve `memory` for the Reserving resources for user login sessions is also recommended (`user.slice` in systemd world). +In addition to `cpu`, `memory`, and `ephemeral-storage`, `pid` may be +specified to reserve the specified number of process IDs for OS system +daemons. + To optionally enforce `system-reserved` on system daemons, specify the parent control group for OS system daemons as the value for `--system-reserved-cgroup` kubelet flag. @@ -182,7 +190,8 @@ container runtime. However, Kubelet cannot burst and use up all available Node resources if `kube-reserved` is enforced. Be extra careful while enforcing `system-reserved` reservation since it can lead -to critical system services being CPU starved or OOM killed on the node. The +to critical system services being CPU starved, OOM killed, or unable +to fork on the node. The recommendation is to enforce `system-reserved` only if a user has profiled their nodes exhaustively to come up with precise estimates and is confident in their ability to recover if any process in that group is oom_killed. @@ -211,7 +220,7 @@ Here is an example to illustrate Node Allocatable computation: * `--eviction-hard` is set to `memory.available<500Mi,nodefs.available<10%` Under this scenario, `Allocatable` will be `14.5 CPUs`, `28.5Gi` of memory and -`98Gi` of local storage. +`88Gi` of local storage. Scheduler ensures that the total memory `requests` across all pods on this node does not exceed `28.5Gi` and storage doesn't exceed `88Gi`. Kubelet evicts pods whenever the overall memory usage across pods exceeds `28.5Gi`, diff --git a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md index 2110c98385ede..83c24f639efee 100644 --- a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md +++ b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md @@ -36,11 +36,6 @@ Successfully running cloud-controller-manager requires some changes to your clus * `kube-apiserver` and `kube-controller-manager` MUST NOT specify the `--cloud-provider` flag. This ensures that it does not run any cloud specific loops that would be run by cloud controller manager. 
In the future, this flag will be deprecated and removed. * `kubelet` must run with `--cloud-provider=external`. This is to ensure that the kubelet is aware that it must be initialized by the cloud controller manager before it is scheduled any work. -* `kube-apiserver` SHOULD NOT run the `PersistentVolumeLabel` admission controller - since the cloud controller manager takes over labeling persistent volumes. -* For the `cloud-controller-manager` to label persistent volumes, initializers will need to be enabled and an InitializerConfiguration needs to be added to the system. Follow [these instructions](/docs/reference/access-authn-authz/extensible-admission-controllers/#enable-initializers-alpha-feature) to enable initializers. Use the following YAML to create the InitializerConfiguration: - -{{< codenew file="admin/cloud/pvl-initializer-config.yaml" >}} Keep in mind that setting up your cluster to use cloud controller manager will change your cluster behaviour in a few ways: @@ -53,7 +48,6 @@ As of v1.8, cloud controller manager can implement: * node controller - responsible for updating kubernetes nodes using cloud APIs and deleting kubernetes nodes that were deleted on your cloud. * service controller - responsible for loadbalancers on your cloud against services of type LoadBalancer. * route controller - responsible for setting up network routes on your cloud -* persistent volume labels controller - responsible for setting the zone and region labels on PersistentVolumes created in GCP and AWS clouds. * any other features you would like to implement if you are running an out-of-tree provider. diff --git a/content/en/docs/tasks/administer-cluster/safely-drain-node.md b/content/en/docs/tasks/administer-cluster/safely-drain-node.md index 2cb77e3149871..4762d9902b7ae 100644 --- a/content/en/docs/tasks/administer-cluster/safely-drain-node.md +++ b/content/en/docs/tasks/administer-cluster/safely-drain-node.md @@ -117,7 +117,7 @@ itself. To attempt an eviction (perhaps more REST-precisely, to attempt to You can attempt an eviction using `curl`: ```bash -$ curl -v -H 'Content-type: application/json' http://127.0.0.1:8080/api/v1/namespaces/default/pods/quux/eviction -d @eviction.json +curl -v -H 'Content-type: application/json' http://127.0.0.1:8080/api/v1/namespaces/default/pods/quux/eviction -d @eviction.json ``` The API can respond in one of three ways: diff --git a/content/en/docs/tasks/administer-cluster/sysctl-cluster.md b/content/en/docs/tasks/administer-cluster/sysctl-cluster.md index 3fadc0454099d..4262d943dece7 100644 --- a/content/en/docs/tasks/administer-cluster/sysctl-cluster.md +++ b/content/en/docs/tasks/administer-cluster/sysctl-cluster.md @@ -36,7 +36,7 @@ process file system. The parameters cover various subsystems such as: To get a list of all parameters, you can run ```shell -$ sudo sysctl -a +sudo sysctl -a ``` ## Enabling Unsafe Sysctls @@ -76,14 +76,14 @@ application tuning. _Unsafe_ sysctls are enabled on a node-by-node basis with a flag of the kubelet, e.g.: ```shell -$ kubelet --allowed-unsafe-sysctls \ +kubelet --allowed-unsafe-sysctls \ 'kernel.msg*,net.ipv4.route.min_pmtu' ... ``` For minikube, this can be done via the `extra-config` flag: ```shell -$ minikube start --extra-config="kubelet.AllowedUnsafeSysctls=kernel.msg*,net.ipv4.route.min_pmtu"... +minikube start --extra-config="kubelet.allowed-unsafe-sysctls=kernel.msg*,net.ipv4.route.min_pmtu"... ``` Only _namespaced_ sysctls can be enabled this way. 
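Once an allowed-list entry is in place on the node, a pod can request the sysctl through its pod-level security context. A minimal sketch, assuming `kernel.msgmax` (a namespaced System V IPC parameter) is covered by the `kernel.msg*` entry shown above:

```shell
# Create a pod that sets a namespaced sysctl via spec.securityContext.sysctls
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  securityContext:
    sysctls:
    - name: kernel.msgmax
      value: "65536"
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sysctl kernel.msgmax && sleep 3600"]
EOF
```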
diff --git a/content/en/docs/tasks/administer-federation/_index.md b/content/en/docs/tasks/administer-federation/_index.md deleted file mode 100755 index e3cb1fe59da7d..0000000000000 --- a/content/en/docs/tasks/administer-federation/_index.md +++ /dev/null @@ -1,5 +0,0 @@ ---- -title: "Federation - Run an App on Multiple Clusters" -weight: 160 ---- - diff --git a/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md b/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md index 516e33d1ed331..3ebd53370bc02 100644 --- a/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md +++ b/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md @@ -22,7 +22,7 @@ Each node in your cluster must have at least 1 CPU. A few of the steps on this page require you to run the [metrics-server](https://github.com/kubernetes-incubator/metrics-server) -service in your cluster. If you do not have the metrics-server +service in your cluster. If you have the metrics-server running, you can skip those steps. If you are running minikube, run the following command to enable @@ -77,7 +77,7 @@ The `-cpus "2"` argument tells the Container to attempt to use 2 CPUs. Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/resource/cpu-request-limit.yaml --namespace=cpu-example +kubectl apply -f https://k8s.io/examples/pods/resource/cpu-request-limit.yaml --namespace=cpu-example ``` Verify that the Pod Container is running: @@ -168,7 +168,7 @@ capacity of any Node in your cluster. Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/resource/cpu-request-limit-2.yaml --namespace=cpu-example +kubectl apply -f https://k8s.io/examples/pods/resource/cpu-request-limit-2.yaml --namespace=cpu-example ``` View the Pod status: diff --git a/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md b/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md index fd8e610a212f9..0f418370c30c6 100644 --- a/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md +++ b/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md @@ -21,7 +21,7 @@ Each node in your cluster must have at least 300 MiB of memory. A few of the steps on this page require you to run the [metrics-server](https://github.com/kubernetes-incubator/metrics-server) -service in your cluster. If you do not have the metrics-server +service in your cluster. If you have the metrics-server running, you can skip those steps. 
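If you are not sure whether the metrics-server is already present, you can check for its aggregated API before continuing; `metrics.k8s.io` is the API group that metrics-server registers:

```shell
# The metrics API group is only served when metrics-server is installed
kubectl get apiservices | grep metrics.k8s.io

# If it is available, resource usage queries work
kubectl top node
```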
If you are running Minikube, run the following command to enable the @@ -76,7 +76,7 @@ The `"--vm-bytes", "150M"` arguments tell the Container to attempt to allocate 1 Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/resource/memory-request-limit.yaml --namespace=mem-example +kubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit.yaml --namespace=mem-example ``` Verify that the Pod Container is running: @@ -146,7 +146,7 @@ will attempt to allocate 250 MiB of memory, which is well above the 100 MiB limi Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/resource/memory-request-limit-2.yaml --namespace=mem-example +kubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit-2.yaml --namespace=mem-example ``` View detailed information about the Pod: @@ -223,7 +223,7 @@ kubectl describe nodes The output includes a record of the Container being killed because of an out-of-memory condition: ``` -Warning OOMKilling Memory cgroup out of memory: Kill process 4481 (stress) score 1994 or sacrifice child +Warning OOMKilling Memory cgroup out of memory: Kill process 4481 (stress) score 1994 or sacrifice child ``` Delete your Pod: @@ -252,7 +252,7 @@ of any Node in your cluster. Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/resource/memory-request-limit-3.yaml --namespace=mem-example +kubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit-3.yaml --namespace=mem-example ``` View the Pod status: diff --git a/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md b/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md index 7b7c912123b78..55c5a0754bbf2 100644 --- a/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md +++ b/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md @@ -71,7 +71,7 @@ a `disktype=ssd` label. chosen node: ```shell - kubectl create -f https://k8s.io/examples/pods/pod-nginx.yaml + kubectl apply -f https://k8s.io/examples/pods/pod-nginx.yaml ``` 1. Verify that the pod is running on your chosen node: @@ -86,6 +86,13 @@ a `disktype=ssd` label. NAME READY STATUS RESTARTS AGE IP NODE nginx 1/1 Running 0 13s 10.200.0.4 worker0 ``` +## Create a pod that gets scheduled to specific node + +You can also schedule a pod to one specific node via setting `nodeName`. + +{{< codenew file="pods/pod-nginx-specific-node.yaml" >}} + +Use the configuration file to create a pod that will get scheduled on `foo-node` only. {{% /capture %}} diff --git a/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md b/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md index 3c461a2ecbd86..67aedfafca051 100644 --- a/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md +++ b/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md @@ -38,7 +38,7 @@ nginx gracefully. 
This is helpful if the Container is being terminated because o Create the Pod: - kubectl create -f https://k8s.io/examples/pods/lifecycle-events.yaml + kubectl apply -f https://k8s.io/examples/pods/lifecycle-events.yaml Verify that the Container in the Pod is running: diff --git a/content/en/docs/tasks/configure-pod-container/configure-gmsa.md b/content/en/docs/tasks/configure-pod-container/configure-gmsa.md new file mode 100644 index 0000000000000..c6f43f96a8f49 --- /dev/null +++ b/content/en/docs/tasks/configure-pod-container/configure-gmsa.md @@ -0,0 +1,204 @@ +--- +title: Configure GMSA for Windows pods and containers +content_template: templates/task +weight: 20 +--- + +{{% capture overview %}} + +{{< feature-state for_k8s_version="v1.14" state="alpha" >}} + +This page shows how to configure [Group Managed Service Accounts](https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview) (GMSA) for pods and containers that will run on Windows nodes. Group Managed Service Accounts are a specific type of Active Directory account that provides automatic password management, simplified service principal name (SPN) management, and the ability to delegate the management to other administrators across multiple servers. + +In Kubernetes, GMSA credential specs are configured at a Kubernetes cluster-wide scope as custom resources. Windows pods, as well as individual containers within a pod, can be configured to use a GMSA for domain based functions (e.g. Kerberos authentication) when interacting with other Windows services. As of v1.14, the only container runtime interface that supports GMSA for Windows workloads is Dockershim. Implementation of GMSA through CRI and other runtimes is planned for the future. + +{{< note >}} +Currently this feature is in alpha state. While the overall goals and functionality will not change, the way in which the GMSA credspec references are specified in pod specs may change from annotations to API fields. Please take this into consideration when testing or adopting this feature. +{{< /note >}} + +{{% /capture %}} + +{{% capture prerequisites %}} + +You need to have a Kubernetes cluster and the kubectl command-line tool must be configured to communicate with your cluster. The cluster is expected to have Windows worker nodes where pods with containers running Windows workloads requiring GMSA credentials will get scheduled. This section covers a set of initial steps required once for each cluster: + +### Enable the WindowsGMSA feature gate +In the alpha state, the `WindowsGMSA` feature gate needs to be enabled on kubelet on Windows nodes. This is required to pass down the GMSA credential specs from the cluster scoped configurations to the container runtime. See [Feature Gates](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/) for an explanation of enabling feature gates. Please make sure `--feature-gates=WindowsGMSA=true` parameter exists in the kubelet.exe command line. + +### Install the GMSACredentialSpec CRD +A [CustomResourceDefinition](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/) (CRD) for GMSA credential spec resources needs to be configured on the cluster to define the custom resource type `GMSACredentialSpec`. Download the GMSA CRD [YAML](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/gmsa-crd.yml) and save it as gmsa-crd.yaml. 
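As a sketch, the download step might look like the following; the raw-content URL here is inferred from the repository path linked above, so verify it against the linked file before relying on it:

```shell
# Fetch the CRD manifest and save it under the name used in the next step
curl -L -o gmsa-crd.yaml \
  https://raw.githubusercontent.com/kubernetes-sigs/windows-gmsa/master/admission-webhook/deploy/gmsa-crd.yml
```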
+Next, install the CRD with `kubectl apply -f gmsa-crd.yaml` + +### Install webhooks to validate GMSA users +Two webhooks need to be configured on the Kubernetes cluster to populate and validate GMSA credential spec references at the pod or container level: + +1. A mutating webhook that expands references to GMSAs (by name from a pod specification) into the full credential spec in JSON form within the pod spec. + +1. A validating webhook ensures all references to GMSAs are authorized to be used by the pod service account. + +Installing the above webhooks and associated objects require the steps below: + +1. Create a certificate key pair (that will be used to allow the webhook container to communicate to the cluster) + +1. Install a secret with the certificate from above. + +1. Create a deployment for the core webhook logic. + +1. Create the validating and mutating webhook configurations referring to the deployment. + +A [script](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/deploy-gmsa-webhook.sh) can be used to deploy and configure the GMSA webhooks and associated objects mentioned above. The script can be run with a ```--dry-run``` option to allow you to review the changes that would be made to your cluster. + +The [YAML template](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/gmsa-webhook.yml.tpl) used by the script may also be used to deploy the webhooks and associated objects manually (with appropriate substitutions for the parameters) + +{{% /capture %}} + +{{% capture steps %}} + +## Configure GMSAs and Windows nodes in Active Directory +Before pods in Kubernetes can be configured to use GMSAs, the desired GMSAs need to be provisioned in Active Directory as described [here](https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts#BKMK_Step1). Windows worker nodes (that are part of the Kubernetes cluster) need to be configured in Active Directory to access the secret credentials associated with the desired GMSA as described [here](https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts#to-add-member-hosts-using-the-set-adserviceaccount-cmdlet) + +## Create GMSA credential spec resources +With the GMSACredentialSpec CRD installed (as described earlier), custom resources containing GMSA credential specs can be configured. The GMSA credential spec does not contain secret or sensitive data. It is information that a container runtime can use to describe the desired GMSA of a container to Windows. GMSA credential specs can be generated in YAML format with a utility [PowerShell script](https://github.com/kubernetes-sigs/windows-gmsa/tree/master/scripts/GenerateCredentialSpecResource.ps1). + +Following are the steps for generating a GMSA credential spec YAML manually in JSON format and then converting it: + +1. Import the CredentialSpec [module](https://github.com/MicrosoftDocs/Virtualization-Documentation/blob/live/windows-server-container-tools/ServiceAccounts/CredentialSpec.psm1): `ipmo CredentialSpec.psm1` + +1. Create a credential spec in JSON format using `New-CredentialSpec`. To create a GMSA credential spec named WebApp1, invoke `New-CredentialSpec -Name WebApp1 -AccountName WebApp1 -Domain $(Get-ADDomain -Current LocalComputer)` + +1. Use `Get-CredentialSpec` to show the path of the JSON file. + +1. 
Convert the credspec file from JSON to YAML format and apply the necessary header fields `apiVersion`, `kind`, `metadata` and `credspec` to make it a GMSACredentialSpec custom resource that can be configured in Kubernetes. + +The following YAML configuration describes a GMSA credential spec named `gmsa-WebApp1`: + +``` +apiVersion: windows.k8s.io/v1alpha1 +kind: GMSACredentialSpec +metadata: + name: gmsa-WebApp1 #This is an arbitrary name but it will be used as a reference +credspec: + ActiveDirectoryConfig: + GroupManagedServiceAccounts: + - Name: WebApp1 #Username of the GMSA account + Scope: CONTOSO #NETBIOS Domain Name + - Name: WebApp1 #Username of the GMSA account + Scope: contoso.com #DNS Domain Name + CmsPlugins: + - ActiveDirectory + DomainJoinConfig: + DnsName: contoso.com #DNS Domain Name + DnsTreeName: contoso.com #DNS Domain Name Root + Guid: 244818ae-87ac-4fcd-92ec-e79e5252348a #GUID + MachineAccountName: WebApp1 #Username of the GMSA account + NetBiosName: CONTOSO #NETBIOS Domain Name + Sid: S-1-5-21-2126449477-2524075714-3094792973 #SID of GMSA +``` + +The above credential spec resource may be saved as `gmsa-Webapp1-credspec.yaml` and applied to the cluster using: `kubectl apply -f gmsa-Webapp1-credspec.yml` + +## Configure cluster role to enable RBAC on specific GMSA credential specs +A cluster role needs to be defined for each GMSA credential spec resource. This authorizes the `use` verb on a specific GMSA resource by a subject which is typically a service account. The following example shows a cluster role that authorizes usage of the `gmsa-WebApp1` credential spec from above. Save the file as gmsa-webapp1-role.yaml and apply using `kubectl apply -f gmsa-webapp1-role.yaml` + +``` +#Create the Role to read the credspec +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: webapp1-role +rules: +- apiGroups: ["windows.k8s.io"] + resources: ["gmsacredentialspecs"] + verbs: ["use"] + resourceNames: ["gmsa-WebApp1"] +``` + +## Assign role to service accounts to use specific GMSA credspecs +A service account (that pods will be configured with) needs to be bound to the cluster role create above. This authorizes the service account to "use" the desired GMSA credential spec resource. The following shows the default service account being bound to a cluster role `webapp1-role` to use `gmsa-WebApp1` credential spec resource created above. + +``` +kind: RoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: allow-default-svc-account-read-on-gmsa-WebApp1 + namespace: default +subjects: +- kind: ServiceAccount + name: default + namespace: default +roleRef: + kind: ClusterRole + name: webapp1-role + apiGroup: rbac.authorization.k8s.io +``` + +## Configure GMSA credential spec reference in pod spec +In the alpha stage of the feature, the annotation `pod.alpha.windows.kubernetes.io/gmsa-credential-spec-name` is used to specify references to desired GMSA credential spec custom resources in pod specs. This configures all containers in the pod spec to use the specified GMSA. 
A sample pod spec with the annotation populated to refer to `gmsa-WebApp1`: + +``` +apiVersion: apps/v1beta1 +kind: Deployment +metadata: + labels: + run: with-creds + name: with-creds + namespace: default +spec: + replicas: 1 + selector: + matchLabels: + run: with-creds + template: + metadata: + labels: + run: with-creds + annotations: + pod.alpha.windows.kubernetes.io/gmsa-credential-spec-name: gmsa-WebApp1 # This must be the name of the cred spec you created + spec: + containers: + - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019 + imagePullPolicy: Always + name: iis + nodeSelector: + beta.kubernetes.io/os: windows +``` + +Individual containers in a pod spec can also specify the desired GMSA credspec using annotation `.container.alpha.windows.kubernetes.io/gmsa-credential-spec`. For example: + +``` +apiVersion: apps/v1beta1 +kind: Deployment +metadata: + labels: + run: with-creds + name: with-creds + namespace: default +spec: + replicas: 1 + selector: + matchLabels: + run: with-creds + template: + metadata: + labels: + run: with-creds + annotations: + iis.container.alpha.windows.kubernetes.io/gmsa-credential-spec-name: gmsa-WebApp1 # This must be the name of the cred spec you created + spec: + containers: + - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019 + imagePullPolicy: Always + name: iis + nodeSelector: + beta.kubernetes.io/os: windows +``` + +As pod specs with GMSA annotations (as described above) are applied in a cluster configured for GMSA, the following sequence of events take place: + +1. The mutating webhook resolves and expands all references to GMSA credential spec resources to the contents of the GMSA credential spec. + +1. The validating webhook ensures the service account associated with the pod is authorized for the "use" verb on the specified GMSA credential spec. + +1. The container runtime configures each Windows container with the specified GMSA credential spec so that the container can assume the identity of the GMSA in Active Directory and access services in the domain using that identity. + +{{% /capture %}} diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-probes.md index 36c4f758ac070..f2ba952f7b02d 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-probes.md +++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-probes.md @@ -62,7 +62,7 @@ code. After 30 seconds, `cat /tmp/healthy` returns a failure code. Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/probe/exec-liveness.yaml +kubectl apply -f https://k8s.io/examples/pods/probe/exec-liveness.yaml ``` Within 30 seconds, view the Pod events: @@ -163,7 +163,7 @@ checks will fail, and the kubelet will kill and restart the Container. To try the HTTP liveness check, create a Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/probe/http-liveness.yaml +kubectl apply -f https://k8s.io/examples/pods/probe/http-liveness.yaml ``` After 10 seconds, view Pod events to verify that liveness probes have failed and @@ -204,7 +204,7 @@ will be restarted. 
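For any of the probe types on this page, a failing liveness probe also shows up as a growing restart count. For the HTTP example above (assuming the pod in that manifest is named `liveness-http`):

```shell
# RESTARTS increments each time the kubelet kills and restarts the container
kubectl get pod liveness-http --watch
```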
To try the TCP liveness check, create a Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/probe/tcp-liveness-readiness.yaml +kubectl apply -f https://k8s.io/examples/pods/probe/tcp-liveness-readiness.yaml ``` After 15 seconds, view Pod events to verify that liveness probes: diff --git a/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md b/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md index cc3f7a3fde61c..d51327390a47b 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md +++ b/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md @@ -73,7 +73,7 @@ PersistentVolumeClaim requests to this PersistentVolume. Create the PersistentVolume: - kubectl create -f https://k8s.io/examples/pods/storage/pv-volume.yaml + kubectl apply -f https://k8s.io/examples/pods/storage/pv-volume.yaml View information about the PersistentVolume: @@ -98,7 +98,7 @@ Here is the configuration file for the PersistentVolumeClaim: Create the PersistentVolumeClaim: - kubectl create -f https://k8s.io/examples/pods/storage/pv-claim.yaml + kubectl apply -f https://k8s.io/examples/pods/storage/pv-claim.yaml After you create the PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements. If the control @@ -138,7 +138,7 @@ is a volume. Create the Pod: - kubectl create -f https://k8s.io/examples/pods/storage/pv-pod.yaml + kubectl apply -f https://k8s.io/examples/pods/storage/pv-pod.yaml Verify that the Container in the Pod is running; diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index a34de6eb45bcb..90b9e07d52c9a 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -2,6 +2,9 @@ title: Configure a Pod to Use a ConfigMap content_template: templates/task weight: 150 +card: + name: tasks + weight: 50 --- {{% capture overview %}} @@ -18,7 +21,10 @@ ConfigMaps allow you to decouple configuration artifacts from image content to k {{% capture steps %}} -## Create a ConfigMap +## Create a ConfigMap +You can use either `kubectl create configmap` or a ConfigMap generator in `kustomization.yaml` to create a ConfigMap. Note that `kubectl` starts to support `kustomization.yaml` since 1.14. + +### Create a ConfigMap Using kubectl create configmap Use the `kubectl create configmap` command to create configmaps from [directories](#create-configmaps-from-directories), [files](#create-configmaps-from-files), or [literal values](#create-configmaps-from-literal-values): @@ -37,23 +43,27 @@ You can use [`kubectl describe`](/docs/reference/generated/kubectl/kubectl-comma [`kubectl get`](/docs/reference/generated/kubectl/kubectl-commands/#get) to retrieve information about a ConfigMap. -### Create ConfigMaps from directories +#### Create ConfigMaps from directories You can use `kubectl create configmap` to create a ConfigMap from multiple files in the same directory. 
For example: ```shell -mkdir -p configure-pod-container/configmap/kubectl/ -wget https://k8s.io/docs/tasks/configure-pod-container/configmap/kubectl/game.properties -O configure-pod-container/configmap/kubectl/game.properties -wget https://k8s.io/docs/tasks/configure-pod-container/configmap/kubectl/ui.properties -O configure-pod-container/configmap/kubectl/ui.properties -kubectl create configmap game-config --from-file=configure-pod-container/configmap/kubectl/ +# Create the local directory +mkdir -p configure-pod-container/configmap/ + +# Download the sample files into `configure-pod-container/configmap/` directory +wget https://k8s.io/examples/configmap/game.properties -O configure-pod-container/configmap/game.properties +wget https://k8s.io/examples/configmap/ui.properties -O configure-pod-container/configmap/ui.properties + +# Create the configmap +kubectl create configmap game-config --from-file=configure-pod-container/configmap/ ``` -combines the contents of the `configure-pod-container/configmap/kubectl/` directory +combines the contents of the `configure-pod-container/configmap/` directory ```shell -ls configure-pod-container/configmap/kubectl/ game.properties ui.properties ``` @@ -62,6 +72,10 @@ into the following ConfigMap: ```shell kubectl describe configmaps game-config +``` + +where the output is similar to this: +``` Name: game-config Namespace: default Labels: @@ -73,11 +87,12 @@ game.properties: 158 bytes ui.properties: 83 bytes ``` -The `game.properties` and `ui.properties` files in the `configure-pod-container/configmap/kubectl/` directory are represented in the `data` section of the ConfigMap. +The `game.properties` and `ui.properties` files in the `configure-pod-container/configmap/` directory are represented in the `data` section of the ConfigMap. ```shell kubectl get configmaps game-config -o yaml ``` +The output is similar to this: ```yaml apiVersion: v1 @@ -105,20 +120,25 @@ metadata: uid: b4952dc3-d670-11e5-8cd0-68f728db1985 ``` -### Create ConfigMaps from files +#### Create ConfigMaps from files You can use `kubectl create configmap` to create a ConfigMap from an individual file, or from multiple files. For example, ```shell -kubectl create configmap game-config-2 --from-file=configure-pod-container/configmap/kubectl/game.properties +kubectl create configmap game-config-2 --from-file=configure-pod-container/configmap/game.properties ``` would produce the following ConfigMap: ```shell kubectl describe configmaps game-config-2 +``` + +where the output is similar to this: + +``` Name: game-config-2 Namespace: default Labels: @@ -129,14 +149,21 @@ Data game.properties: 158 bytes ``` -You can pass in the `--from-file` argument multiple times to create a ConfigMap from multiple data sources. +You can pass in the `--from-file` argument multiple times to create a ConfigMap from multiple data sources. 
```shell -kubectl create configmap game-config-2 --from-file=configure-pod-container/configmap/kubectl/game.properties --from-file=configure-pod-container/configmap/kubectl/ui.properties +kubectl create configmap game-config-2 --from-file=configure-pod-container/configmap/game.properties --from-file=configure-pod-container/configmap/ui.properties ``` +Describe the above `game-config-2` configmap created + ```shell kubectl describe configmaps game-config-2 +``` + +The output is similar to this: + +``` Name: game-config-2 Namespace: default Labels: @@ -149,6 +176,7 @@ ui.properties: 83 bytes ``` Use the option `--from-env-file` to create a ConfigMap from an env-file, for example: + ```shell # Env-files contain a list of environment variables. # These syntax rules apply: @@ -157,8 +185,11 @@ Use the option `--from-env-file` to create a ConfigMap from an env-file, for exa # Blank lines are ignored. # There is no special handling of quotation marks (i.e. they will be part of the ConfigMap value)). -wget https://k8s.io/docs/tasks/configure-pod-container/configmap/kubectl/game-env-file.properties -O configure-pod-container/configmap/kubectl/game-env-file.properties -cat configure-pod-container/configmap/kubectl/game-env-file.properties +# Download the sample files into `configure-pod-container/configmap/` directory +wget https://k8s.io/examples/configmap/game-env-file.properties -O configure-pod-container/configmap/game-env-file.properties + +# The env-file `game-env-file.properties` looks like below +cat configure-pod-container/configmap/game-env-file.properties enemies=aliens lives=3 allowed="true" @@ -168,7 +199,7 @@ allowed="true" ```shell kubectl create configmap game-config-env-file \ - --from-env-file=configure-pod-container/configmap/kubectl/game-env-file.properties + --from-env-file=configure-pod-container/configmap/game-env-file.properties ``` would produce the following ConfigMap: @@ -177,6 +208,7 @@ would produce the following ConfigMap: kubectl get configmap game-config-env-file -o yaml ``` +where the output is similar to this: ```yaml apiVersion: v1 data: @@ -196,10 +228,13 @@ metadata: When passing `--from-env-file` multiple times to create a ConfigMap from multiple data sources, only the last env-file is used: ```shell -wget https://k8s.io/docs/tasks/configure-pod-container/configmap/kubectl/ui-env-file.properties -O configure-pod-container/configmap/kubectl/ui-env-file.properties +# Download the sample files into `configure-pod-container/configmap/` directory +wget https://k8s.io/examples/configmap/ui-env-file.properties -O configure-pod-container/configmap/ui-env-file.properties + +# Create the configmap kubectl create configmap config-multi-env-files \ - --from-env-file=configure-pod-container/configmap/kubectl/game-env-file.properties \ - --from-env-file=configure-pod-container/configmap/kubectl/ui-env-file.properties + --from-env-file=configure-pod-container/configmap/game-env-file.properties \ + --from-env-file=configure-pod-container/configmap/ui-env-file.properties ``` would produce the following ConfigMap: @@ -208,6 +243,7 @@ would produce the following ConfigMap: kubectl get configmap config-multi-env-files -o yaml ``` +where the output is similar to this: ```yaml apiVersion: v1 data: @@ -237,11 +273,15 @@ where `` is the key you want to use in the ConfigMap and `./kustomization.yaml +configMapGenerator: +- name: game-config-4 + files: + - configure-pod-container/configmap/kubectl/game.properties +EOF +``` + +Apply the kustomization directory to create the ConfigMap 
object. +```shell +kubectl apply -k . +configmap/game-config-4-m9dm2f92bt created +``` + +You can check that the ConfigMap was created like this: + +```shell +kubectl get configmap +NAME DATA AGE +game-config-4-m9dm2f92bt 1 37s + + +kubectl describe configmaps/game-config-4-m9dm2f92bt +Name: game-config-4-m9dm2f92bt +Namespace: default +Labels: +Annotations: kubectl.kubernetes.io/last-applied-configuration: + {"apiVersion":"v1","data":{"game.properties":"enemies=aliens\nlives=3\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.p... + +Data +==== +game.properties: +---- +enemies=aliens +lives=3 +enemies.cheat=true +enemies.cheat.level=noGoodRotten +secret.code.passphrase=UUDDLRLRBABAS +secret.code.allowed=true +secret.code.lives=30 +Events: +``` + +Note that the generated ConfigMap name has a suffix appended by hashing the contents. This ensures that a +new ConfigMap is generated each time the content is modified. + +#### Define the key to use when generating a ConfigMap from a file +You can define a key other than the file name to use in the ConfigMap generator. +For example, to generate a ConfigMap from files `configure-pod-container/configmap/kubectl/game.properties` +with the key `game-special-key` + +```shell +# Create a kustomization.yaml file with ConfigMapGenerator +cat <./kustomization.yaml +configMapGenerator: +- name: game-config-5 + files: + - game-special-key=configure-pod-container/configmap/kubectl/game.properties +EOF +``` + +Apply the kustomization directory to create the ConfigMap object. +```shell +kubectl apply -k . +configmap/game-config-5-m67dt67794 created +``` + +#### Generate ConfigMaps from Literals +To generate a ConfigMap from literals `special.type=charm` and `special.how=very`, +you can specify the ConfigMap generator in `kusotmization.yaml` as +```shell +# Create a kustomization.yaml file with ConfigMapGenerator +cat <./kustomization.yaml +configMapGenerator: +- name: special-config-2 + literals: + - special.how=very + - special.type=charm +EOF +``` +Apply the kustomization directory to create the ConfigMap object. +```shell +kubectl apply -k . +configmap/special-config-2-c92b5mmcf2 created +``` ## Define container environment variables using ConfigMap data @@ -303,158 +439,83 @@ metadata: kubectl create configmap special-config --from-literal=special.how=very ``` -1. Assign the `special.how` value defined in the ConfigMap to the `SPECIAL_LEVEL_KEY` environment variable in the Pod specification. +2. Assign the `special.how` value defined in the ConfigMap to the `SPECIAL_LEVEL_KEY` environment variable in the Pod specification. - ```shell - kubectl edit pod dapi-test-pod - ``` + {{< codenew file="pods/pod-single-configmap-env-variable.yaml" >}} - ```yaml - apiVersion: v1 - kind: Pod - metadata: - name: dapi-test-pod - spec: - containers: - - name: test-container - image: k8s.gcr.io/busybox - command: [ "/bin/sh", "-c", "env" ] - env: - # Define the environment variable - - name: SPECIAL_LEVEL_KEY - valueFrom: - configMapKeyRef: - # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY - name: special-config - # Specify the key associated with the value - key: special.how - restartPolicy: Never - ``` - -1. Save the changes to the Pod specification. Now, the Pod's output includes `SPECIAL_LEVEL_KEY=very`. + Create the Pod: + + ```shell + kubectl create -f https://k8s.io/examples/pods/pod-single-configmap-env-variable.yaml + ``` + + Now, the Pod's output includes `SPECIAL_LEVEL_KEY=very`. 
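To confirm this yourself, you can read the pod's log once it has completed, since the example container simply runs `env`; this assumes the pod keeps the `dapi-test-pod` name used elsewhere on this page:

```shell
# The container prints its environment and exits, so the value shows up in its log
kubectl logs dapi-test-pod | grep SPECIAL_LEVEL_KEY
```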
### Define container environment variables with data from multiple ConfigMaps -1. As with the previous example, create the ConfigMaps first. - - ```yaml - apiVersion: v1 - kind: ConfigMap - metadata: - name: special-config - namespace: default - data: - special.how: very - ``` + * As with the previous example, create the ConfigMaps first. - ```yaml - apiVersion: v1 - kind: ConfigMap - metadata: - name: env-config - namespace: default - data: - log_level: INFO - ``` + {{< codenew file="configmap/configmaps.yaml" >}} -1. Define the environment variables in the Pod specification. - - ```yaml - apiVersion: v1 - kind: Pod - metadata: - name: dapi-test-pod - spec: - containers: - - name: test-container - image: k8s.gcr.io/busybox - command: [ "/bin/sh", "-c", "env" ] - env: - - name: SPECIAL_LEVEL_KEY - valueFrom: - configMapKeyRef: - name: special-config - key: special.how - - name: LOG_LEVEL - valueFrom: - configMapKeyRef: - name: env-config - key: log_level - restartPolicy: Never - ``` + Create the ConfigMap: + + ```shell + kubectl create -f https://k8s.io/examples/configmap/configmaps.yaml + ``` + +* Define the environment variables in the Pod specification. + + {{< codenew file="pods/pod-multiple-configmap-env-variable.yaml" >}} + + Create the Pod: -1. Save the changes to the Pod specification. Now, the Pod's output includes `SPECIAL_LEVEL_KEY=very` and `LOG_LEVEL=INFO`. + ```shell + kubectl create -f https://k8s.io/examples/pods/pod-multiple-configmap-env-variable.yaml + ``` + + Now, the Pod's output includes `SPECIAL_LEVEL_KEY=very` and `LOG_LEVEL=INFO`. ## Configure all key-value pairs in a ConfigMap as container environment variables - {{< note >}} - This functionality is available in Kubernetes v1.6 and later. - {{< /note >}} +{{< note >}} +This functionality is available in Kubernetes v1.6 and later. +{{< /note >}} -1. Create a ConfigMap containing multiple key-value pairs. - - ```yaml - apiVersion: v1 - kind: ConfigMap - metadata: - name: special-config - namespace: default - data: - SPECIAL_LEVEL: very - SPECIAL_TYPE: charm - ``` +* Create a ConfigMap containing multiple key-value pairs. -1. Use `envFrom` to define all of the ConfigMap's data as container environment variables. The key from the ConfigMap becomes the environment variable name in the Pod. + {{< codenew file="configmap/configmap-multikeys.yaml" >}} + + Create the ConfigMap: + + ```shell + kubectl create -f https://k8s.io/examples/configmap/configmap-multikeys.yaml + ``` + +* Use `envFrom` to define all of the ConfigMap's data as container environment variables. The key from the ConfigMap becomes the environment variable name in the Pod. - ```yaml - apiVersion: v1 - kind: Pod - metadata: - name: dapi-test-pod - spec: - containers: - - name: test-container - image: k8s.gcr.io/busybox - command: [ "/bin/sh", "-c", "env" ] - envFrom: - - configMapRef: - name: special-config - restartPolicy: Never - ``` + {{< codenew file="pods/pod-configmap-envFrom.yaml" >}} -1. Save the changes to the Pod specification. Now, the Pod's output includes `SPECIAL_LEVEL=very` and `SPECIAL_TYPE=charm`. + Create the Pod: + + ```shell + kubectl create -f https://k8s.io/examples/pods/pod-configmap-envFrom.yaml + ``` + + Now, the Pod's output includes `SPECIAL_LEVEL=very` and `SPECIAL_TYPE=charm`. ## Use ConfigMap-defined environment variables in Pod commands You can use ConfigMap-defined environment variables in the `command` section of the Pod specification using the `$(VAR_NAME)` Kubernetes substitution syntax. 
-For example: +For example, the following Pod specification -The following Pod specification +{{< codenew file="pods/pod-configmap-env-var-valueFrom.yaml" >}} -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: dapi-test-pod -spec: - containers: - - name: test-container - image: k8s.gcr.io/busybox - command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ] - env: - - name: SPECIAL_LEVEL_KEY - valueFrom: - configMapKeyRef: - name: special-config - key: SPECIAL_LEVEL - - name: SPECIAL_TYPE_KEY - valueFrom: - configMapKeyRef: - name: special-config - key: SPECIAL_TYPE - restartPolicy: Never +created by running + +```shell +kubectl create -f https://k8s.io/examples/pods/pod-configmap-env-var-valueFrom.yaml ``` produces the following output in the `test-container` container: @@ -469,15 +530,12 @@ As explained in [Create ConfigMaps from files](#create-configmaps-from-files), w The examples in this section refer to a ConfigMap named special-config, shown below. -```yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: special-config - namespace: default -data: - special.level: very - special.type: charm +{{< codenew file="configmap/configmap-multikeys.yaml" >}} + +Create the ConfigMap: + +```shell +kubectl create -f https://k8s.io/examples/configmap/configmap-multikeys.yaml ``` ### Populate a Volume with data stored in a ConfigMap @@ -486,29 +544,15 @@ Add the ConfigMap name under the `volumes` section of the Pod specification. This adds the ConfigMap data to the directory specified as `volumeMounts.mountPath` (in this case, `/etc/config`). The `command` section references the `special.level` item stored in the ConfigMap. -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: dapi-test-pod -spec: - containers: - - name: test-container - image: k8s.gcr.io/busybox - command: [ "/bin/sh", "-c", "ls /etc/config/" ] - volumeMounts: - - name: config-volume - mountPath: /etc/config - volumes: - - name: config-volume - configMap: - # Provide the name of the ConfigMap containing the files you want - # to add to the container - name: special-config - restartPolicy: Never -``` - -When the pod runs, the command (`"ls /etc/config/"`) produces the output below: +{{< codenew file="pods/pod-configmap-volume.yaml" >}} + +Create the Pod: + +```shell +kubectl create -f https://k8s.io/examples/pods/pod-configmap-volume.yaml +``` + +When the pod runs, the command `ls /etc/config/` produces the output below: ```shell special.level @@ -524,30 +568,15 @@ If there are some files in the `/etc/config/` directory, they will be deleted. Use the `path` field to specify the desired file path for specific ConfigMap items. In this case, the `special.level` item will be mounted in the `config-volume` volume at `/etc/config/keys`. 
-```yaml -apiVersion: v1 -kind: Pod -metadata: - name: dapi-test-pod -spec: - containers: - - name: test-container - image: k8s.gcr.io/busybox - command: [ "/bin/sh","-c","cat /etc/config/keys" ] - volumeMounts: - - name: config-volume - mountPath: /etc/config - volumes: - - name: config-volume - configMap: - name: special-config - items: - - key: special.level - path: keys - restartPolicy: Never -``` - -When the pod runs, the command (`"cat /etc/config/keys"`) produces the output below: +{{< codenew file="pods/pod-configmap-volume-specific-key.yaml" >}} + +Create the Pod: + +```shell +kubectl create -f https://k8s.io/examples/pods/pod-configmap-volume-specific-key.yaml +``` + +When the pod runs, the command `cat /etc/config/keys` produces the output below: ```shell very @@ -563,9 +592,7 @@ basis. The [Secrets](/docs/concepts/configuration/secret/#using-secrets-as-files When a ConfigMap already being consumed in a volume is updated, projected keys are eventually updated as well. Kubelet is checking whether the mounted ConfigMap is fresh on every periodic sync. However, it is using its local ttl-based cache for getting the current value of the ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as kubelet sync period + ttl of ConfigMaps cache in kubelet. {{< note >}} -A container using a ConfigMap as a -[subPath](/docs/concepts/storage/volumes/#using-subpath) volume will not receive -ConfigMap updates. +A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes/#using-subpath) volume will not receive ConfigMap updates. {{< /note >}} {{% /capture %}} @@ -608,14 +635,17 @@ data: ```shell kubectl get events + ``` + + The output is similar to this: + ``` LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE 0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames {kubelet, 127.0.0.1} Keys [1badkey, 2alsobad] from the EnvFrom configMap default/myconfig were skipped since they are considered invalid environment variable names. ``` - ConfigMaps reside in a specific [namespace](/docs/concepts/overview/working-with-objects/namespaces/). A ConfigMap can only be referenced by pods residing in the same namespace. -- Kubelet doesn't support the use of ConfigMaps for pods not found on the API server. - This includes pods created via the Kubelet's --manifest-url flag, --config flag, or the Kubelet REST API. +- Kubelet doesn't support the use of ConfigMaps for pods not found on the API server. This includes pods created via the Kubelet's `--manifest-url` flag, `--config` flag, or the Kubelet REST API. {{< note >}} These are not commonly-used ways to create pods. @@ -629,3 +659,4 @@ data: {{% /capture %}} +` diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-initialization.md b/content/en/docs/tasks/configure-pod-container/configure-pod-initialization.md index 6ab76afd09f8e..a418a8d7c0bc6 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-initialization.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-initialization.md @@ -43,7 +43,7 @@ of the nginx server. 
Create the Pod: - kubectl create -f https://k8s.io/examples/pods/init-containers.yaml + kubectl apply -f https://k8s.io/examples/pods/init-containers.yaml Verify that the nginx container is running: diff --git a/content/en/docs/tasks/configure-pod-container/configure-projected-volume-storage.md b/content/en/docs/tasks/configure-pod-container/configure-projected-volume-storage.md index 122f3e0beb49d..cb7e2957e3eac 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-projected-volume-storage.md +++ b/content/en/docs/tasks/configure-pod-container/configure-projected-volume-storage.md @@ -42,7 +42,7 @@ Here is the configuration file for the Pod: ``` 1. Create the Pod: ```shell - kubectl create -f https://k8s.io/examples/pods/storage/projected.yaml + kubectl apply -f https://k8s.io/examples/pods/storage/projected.yaml ``` 1. Verify that the Pod's Container is running, and then watch for changes to the Pod: diff --git a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md index 1777b5e41a5b9..0d2e7e4114ff2 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md +++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md @@ -11,22 +11,20 @@ weight: 90 {{% capture overview %}} A service account provides an identity for processes that run in a Pod. -*This is a user introduction to Service Accounts. See also the +*This is a user introduction to Service Accounts. See also the [Cluster Admin Guide to Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/).* {{< note >}} This document describes how service accounts behave in a cluster set up -as recommended by the Kubernetes project. Your cluster administrator may have +as recommended by the Kubernetes project. Your cluster administrator may have customized the behavior in your cluster, in which case this documentation may not apply. {{< /note >}} When you (a human) access the cluster (for example, using `kubectl`), you are authenticated by the apiserver as a particular User Account (currently this is -usually `admin`, unless your cluster administrator has customized your -cluster). Processes in containers inside pods can also contact the apiserver. -When they do, they are authenticated as a particular Service Account (for example, -`default`). +usually `admin`, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. +When they do, they are authenticated as a particular Service Account (for example, `default`). {{% /capture %}} @@ -43,16 +41,12 @@ When they do, they are authenticated as a particular Service Account (for exampl When you create a pod, if you do not specify a service account, it is automatically assigned the `default` service account in the same namespace. -If you get the raw json or yaml for a pod you have created (for example, `kubectl get pods/podname -o yaml`), -you can see the `spec.serviceAccountName` field has been -[automatically set](/docs/user-guide/working-with-resources/#resources-are-automatically-modified). +If you get the raw json or yaml for a pod you have created (for example, `kubectl get pods/ -o yaml`), you can see the `spec.serviceAccountName` field has been [automatically set](/docs/user-guide/working-with-resources/#resources-are-automatically-modified). 
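For instance, you can print just that field for any pod; a minimal sketch, with `<podname>` as a placeholder in the same style as above:

```shell
# Show which service account a pod runs as
kubectl get pod <podname> -o jsonpath='{.spec.serviceAccountName}'; echo
```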
-You can access the API from inside a pod using automatically mounted service account credentials, -as described in [Accessing the Cluster](/docs/user-guide/accessing-the-cluster/#accessing-the-api-from-a-pod). +You can access the API from inside a pod using automatically mounted service account credentials, as described in [Accessing the Cluster](/docs/user-guide/accessing-the-cluster/#accessing-the-api-from-a-pod). The API permissions of the service account depend on the [authorization plugin and policy](/docs/reference/access-authn-authz/authorization/#authorization-modules) in use. -In version 1.6+, you can opt out of automounting API credentials for a service account by setting -`automountServiceAccountToken: false` on the service account: +In version 1.6+, you can opt out of automounting API credentials for a service account by setting `automountServiceAccountToken: false` on the service account: ```yaml apiVersion: v1 @@ -85,6 +79,10 @@ You can list this and any other serviceAccount resources in the namespace with t ```shell kubectl get serviceAccounts +``` +The output is similar to this: + +``` NAME SECRETS AGE default 1 1d ``` @@ -92,19 +90,22 @@ default 1 1d You can create additional ServiceAccount objects like this: ```shell -kubectl create -f - < @@ -181,10 +185,15 @@ The content of `token` is elided here. ## Add ImagePullSecrets to a service account First, create an imagePullSecret, as described [here](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod). -Next, verify it has been created. For example: +Next, verify it has been created. For example: ```shell kubectl get secrets myregistrykey +``` + +The output is similar to this: + +``` NAME TYPE DATA AGE myregistrykey   kubernetes.io/.dockerconfigjson   1       1d ``` @@ -195,12 +204,15 @@ Next, modify the default service account for the namespace to use this secret as kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}' ``` -Interactive version requiring manual edit: +Interactive version requires manual edit: ```shell kubectl get serviceaccounts default -o yaml > ./sa.yaml +``` + +The output of the `sa.yaml` file is similar to this: -cat sa.yaml +```shell apiVersion: v1 kind: ServiceAccount metadata: @@ -212,13 +224,13 @@ metadata: uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6 secrets: - name: default-token-uudge +``` -vi sa.yaml -[editor session not shown] -[delete line with key "resourceVersion"] -[add lines with "imagePullSecrets:"] +Using your editor of choice (for example `vi`), open the `sa.yaml` file, delete line with key `resourceVersion`, add lines with `imagePullSecrets:` and save. 
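+
+If you would rather script this edit than open an editor, a rough equivalent is sketched below. It assumes GNU `sed` and the `myregistrykey` Secret name used in this example:
+
+```shell
+# Drop the resourceVersion line, then append the imagePullSecrets entry (GNU sed assumed).
+sed -i '/resourceVersion/d' ./sa.yaml
+cat <<'EOF' >> ./sa.yaml
+imagePullSecrets:
+- name: myregistrykey
+EOF
+```
+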
-cat sa.yaml +The output of the `sa.yaml` file is similar to this: + +```shell apiVersion: v1 kind: ServiceAccount metadata: @@ -231,9 +243,12 @@ secrets: - name: default-token-uudge imagePullSecrets: - name: myregistrykey +``` + +Finally replace the serviceaccount with the new updated `sa.yaml` file +```shell kubectl replace serviceaccount default -f ./sa.yaml -serviceaccounts/default ``` Now, any new pods created in the current namespace will have this added to their spec: @@ -274,32 +289,17 @@ This behavior is configured on a PodSpec using a ProjectedVolume type called pod with a token with an audience of "vault" and a validity duration of two hours, you would configure the following in your PodSpec: -```yaml -kind: Pod -apiVersion: v1 -spec: - containers: - - image: nginx - name: nginx - volumeMounts: - - mountPath: /var/run/secrets/tokens - name: vault-token - volumes: - - name: vault-token - projected: - sources: - - serviceAccountToken: - path: vault-token - expirationSeconds: 7200 - audience: vault +{{< codenew file="pods/pod-projected-svc-token.yaml" >}} + +Create the Pod: + +```shell +kubectl create -f https://k8s.io/examples/pods/pod-projected-svc-token.yaml ``` The kubelet will request and store the token on behalf of the pod, make the -token available to the pod at a configurable file path, and refresh the token as -it approaches expiration. Kubelet proactively rotates the token if it is older -than 80% of its total TTL, or if the token is older than 24 hours. +token available to the pod at a configurable file path, and refresh the token as it approaches expiration. Kubelet proactively rotates the token if it is older than 80% of its total TTL, or if the token is older than 24 hours. -The application is responsible for reloading the token when it rotates. Periodic -reloading (e.g. once every 5 minutes) is sufficient for most usecases. +The application is responsible for reloading the token when it rotates. Periodic reloading (e.g. once every 5 minutes) is sufficient for most usecases. {{% /capture %}} diff --git a/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md b/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md index c5c65fa94957f..826cd20ae596a 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md +++ b/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md @@ -37,7 +37,7 @@ restarts. Here is the configuration file for the Pod: 1. Create the Pod: ```shell - kubectl create -f https://k8s.io/examples/pods/storage/redis.yaml + kubectl apply -f https://k8s.io/examples/pods/storage/redis.yaml ``` 1. Verify that the Pod's Container is running, and then watch for changes to diff --git a/content/en/docs/tasks/configure-pod-container/extended-resource.md b/content/en/docs/tasks/configure-pod-container/extended-resource.md index 48fdcf112641d..e71f09ea76d25 100644 --- a/content/en/docs/tasks/configure-pod-container/extended-resource.md +++ b/content/en/docs/tasks/configure-pod-container/extended-resource.md @@ -43,7 +43,7 @@ In the configuration file, you can see that the Container requests 3 dongles. Create a Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/resource/extended-resource-pod.yaml +kubectl apply -f https://k8s.io/examples/pods/resource/extended-resource-pod.yaml ``` Verify that the Pod is running: @@ -80,7 +80,7 @@ used three of the four available dongles. 
Attempt to create a Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/resource/extended-resource-pod-2.yaml +kubectl apply -f https://k8s.io/examples/pods/resource/extended-resource-pod-2.yaml ``` Describe the Pod diff --git a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md index 836d89c0c4dc4..e66bd7310dea0 100644 --- a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md +++ b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md @@ -56,9 +56,46 @@ The output contains a section similar to this: If you use a Docker credentials store, you won't see that `auth` entry but a `credsStore` entry with the name of the store as value. {{< /note >}} -## Create a Secret in the cluster that holds your authorization token +## Create a Secret based on existing Docker credentials {#registry-secret-existing-credentials} -A Kubernetes cluster uses the Secret of `docker-registry` type to authenticate with a container registry to pull a private image. +A Kubernetes cluster uses the Secret of `docker-registry` type to authenticate with +a container registry to pull a private image. + +If you already ran `docker login`, you can copy that credential into Kubernetes: + +```shell +kubectl create secret generic regcred \ + --from-file=.dockerconfigjson= \ + --type=kubernetes.io/dockerconfigjson +``` + +If you need more control (for example, to set a namespace or a label on the new +secret) then you can customise the Secret before storing it. +Be sure to: + +- set the name of the data item to `.dockerconfigjson` +- base64 encode the docker file and paste that string, unbroken + as the value for field `data[".dockerconfigjson"]` +- set `type` to `kubernetes.io/dockerconfigjson` + +Example: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: myregistrykey + namespace: awesomeapps +data: + .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== +type: kubernetes.io/dockerconfigjson +``` + +If you get the error message `error: no objects passed to create`, it may mean the base64 encoded string is invalid. +If you get an error message like `Secret "myregistrykey" is invalid: data[.dockerconfigjson]: invalid value ...`, it means +the base64 encoded string in the data was successfully decoded, but could not be parsed as a `.docker/config.json` file. + +## Create a Secret by providing credentials on the command line Create this Secret, naming it `regcred`: @@ -75,6 +112,13 @@ where: You have successfully set your Docker credentials in the cluster as a Secret called `regcred`. +{{< note >}} +Typing secrets on the command line may store them in your shell history unprotected, and +those secrets might also be visible to other users on your PC during the time that +`kubectl` is running. 
+{{< /note >}} + + ## Inspecting the Secret `regcred` To understand the contents of the `regcred` Secret you just created, start by viewing the Secret in YAML format: @@ -152,7 +196,7 @@ The `imagePullSecrets` field in the configuration file specifies that Kubernetes Create a Pod that uses your Secret, and verify that the Pod is running: ```shell -kubectl create -f my-private-reg-pod.yaml +kubectl apply -f my-private-reg-pod.yaml kubectl get pod private-reg ``` diff --git a/content/en/docs/tasks/configure-pod-container/quality-service-pod.md b/content/en/docs/tasks/configure-pod-container/quality-service-pod.md index 8b344a9c6bf7c..51ebedac592d7 100644 --- a/content/en/docs/tasks/configure-pod-container/quality-service-pod.md +++ b/content/en/docs/tasks/configure-pod-container/quality-service-pod.md @@ -55,7 +55,7 @@ memory request, both equal to 200 MiB. The Container has a CPU limit and a CPU r Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/qos/qos-pod.yaml --namespace=qos-example +kubectl apply -f https://k8s.io/examples/pods/qos/qos-pod.yaml --namespace=qos-example ``` View detailed information about the Pod: @@ -111,7 +111,7 @@ and a memory request of 100 MiB. Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/qos/qos-pod-2.yaml --namespace=qos-example +kubectl apply -f https://k8s.io/examples/pods/qos/qos-pod-2.yaml --namespace=qos-example ``` View detailed information about the Pod: @@ -156,7 +156,7 @@ limits or requests: Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/qos/qos-pod-3.yaml --namespace=qos-example +kubectl apply -f https://k8s.io/examples/pods/qos/qos-pod-3.yaml --namespace=qos-example ``` View detailed information about the Pod: @@ -195,7 +195,7 @@ criteria for QoS class Guaranteed, and one of its Containers has a memory reques Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/qos/qos-pod-4.yaml --namespace=qos-example +kubectl apply -f https://k8s.io/examples/pods/qos/qos-pod-4.yaml --namespace=qos-example ``` View detailed information about the Pod: diff --git a/content/en/docs/tasks/configure-pod-container/security-context.md b/content/en/docs/tasks/configure-pod-container/security-context.md index e7a3f6dd9905c..39dbc60927b78 100644 --- a/content/en/docs/tasks/configure-pod-container/security-context.md +++ b/content/en/docs/tasks/configure-pod-container/security-context.md @@ -52,15 +52,16 @@ Here is a configuration file for a Pod that has a `securityContext` and an `empt {{< codenew file="pods/security/security-context.yaml" >}} In the configuration file, the `runAsUser` field specifies that for any Containers in -the Pod, the first process runs with user ID 1000. The `fsGroup` field specifies that -group ID 2000 is associated with all Containers in the Pod. Group ID 2000 is also -associated with the volume mounted at `/data/demo` and with any files created in that -volume. +the Pod, all processes run with user ID 1000. The `runAsGroup` field specifies the primary group ID of 3000 for +all processes within any containers of the Pod. If this field is ommitted, the primary group ID of the containers +will be root(0). Any files created will also be owned by user 1000 and group 3000 when `runAsGroup` is specified. +Since `fsGroup` field is specified, all processes of the container are also part of the supplementary group ID 2000. +The owner for volume `/data/demo` and any files created in that volume will be Group ID 2000. 
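+
+After you create the Pod in the next step, one way to confirm that these values were applied is to read the Pod-level `securityContext` back from the API server. This is only a sketch; replace `<pod-name>` with the `metadata.name` from the manifest you applied:
+
+```shell
+# Show the Pod-level securityContext (runAsUser, runAsGroup, fsGroup); <pod-name> is a placeholder.
+kubectl get pod <pod-name> -o jsonpath='{.spec.securityContext}'
+```
+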
Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/security/security-context.yaml +kubectl apply -f https://k8s.io/examples/pods/security/security-context.yaml ``` Verify that the Pod's Container is running: @@ -123,6 +124,16 @@ The output shows that `testfile` has group ID 2000, which is the value of `fsGro -rw-r--r-- 1 1000 2000 6 Jun 6 20:08 testfile ``` +Run the following command: + +```shell +$ id +uid=1000 gid=3000 groups=2000 +``` +You will see that gid is 3000 which is same as `runAsGroup` field. If the `runAsGroup` was ommitted the gid would +remain as 0(root) and the process will be able to interact with files that are owned by root(0) group and that have +the required group permissions for root(0) group. + Exit your shell: ```shell @@ -146,7 +157,7 @@ and the Container have a `securityContext` field: Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/security/security-context-2.yaml +kubectl apply -f https://k8s.io/examples/pods/security/security-context-2.yaml ``` Verify that the Pod's Container is running: @@ -199,7 +210,7 @@ Here is configuration file that does not add or remove any Container capabilitie Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/security/security-context-3.yaml +kubectl apply -f https://k8s.io/examples/pods/security/security-context-3.yaml ``` Verify that the Pod's Container is running: @@ -261,7 +272,7 @@ adds the `CAP_NET_ADMIN` and `CAP_SYS_TIME` capabilities: Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/security/security-context-4.yaml +kubectl apply -f https://k8s.io/examples/pods/security/security-context-4.yaml ``` Get a shell into the running Container: @@ -357,5 +368,3 @@ After you specify an MCS label for a Pod, all Pods with the same label can acces {{% /capture %}} - - diff --git a/content/en/docs/tasks/configure-pod-container/share-process-namespace.md b/content/en/docs/tasks/configure-pod-container/share-process-namespace.md index b2b97815f08fa..3564ba1ff0e62 100644 --- a/content/en/docs/tasks/configure-pod-container/share-process-namespace.md +++ b/content/en/docs/tasks/configure-pod-container/share-process-namespace.md @@ -43,7 +43,7 @@ Process Namespace Sharing is enabled using the `ShareProcessNamespace` field of 1. Create the pod `nginx` on your cluster: - kubectl create -f https://k8s.io/examples/pods/share-process-namespace.yaml + kubectl apply -f https://k8s.io/examples/pods/share-process-namespace.yaml 1. Attach to the `shell` container and run `ps`: diff --git a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md index 867b4bcd0d920..a6396b3dc30b9 100644 --- a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md +++ b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md @@ -124,7 +124,7 @@ you need is an existing `docker-compose.yml` file. ```bash $ kompose up We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. - If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead. + If you need different kind of resources, use the 'kompose convert' and 'kubectl apply -f' commands instead. INFO Successfully created Service: redis INFO Successfully created Service: web @@ -135,7 +135,7 @@ you need is an existing `docker-compose.yml` file. ``` 3. 
To convert the `docker-compose.yml` file to files that you can use with - `kubectl`, run `kompose convert` and then `kubectl create -f `. + `kubectl`, run `kompose convert` and then `kubectl apply -f `. ```bash $ kompose convert @@ -148,7 +148,7 @@ you need is an existing `docker-compose.yml` file. ``` ```bash - $ kubectl create -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml + $ kubectl apply -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml service/frontend created service/redis-master created service/redis-slave created @@ -309,7 +309,7 @@ Kompose supports a straightforward way to deploy your "composed" application to ```sh $ kompose --file ./examples/docker-guestbook.yml up We are going to create Kubernetes deployments and services for your Dockerized application. -If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead. +If you need different kind of resources, use the 'kompose convert' and 'kubectl apply -f' commands instead. INFO Successfully created service: redis-master INFO Successfully created service: redis-slave @@ -341,7 +341,7 @@ pod/redis-slave-2504961300-nve7b 1/1 Running 0 4m **Note**: - You must have a running Kubernetes cluster with a pre-configured kubectl context. -- Only deployments and services are generated and deployed to Kubernetes. If you need different kind of resources, use the `kompose convert` and `kubectl create -f` commands instead. +- Only deployments and services are generated and deployed to Kubernetes. If you need different kind of resources, use the `kompose convert` and `kubectl apply -f` commands instead. ### OpenShift ```sh @@ -426,7 +426,7 @@ INFO Image 'docker.io/foo/bar' from directory 'build' built successfully INFO Pushing image 'foo/bar:latest' to registry 'docker.io' INFO Attempting authentication credentials 'https://index.docker.io/v1/ INFO Successfully pushed image 'foo/bar:latest' to registry 'docker.io' -INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead. +INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl apply -f' commands instead. INFO Deploying application in "default" namespace INFO Successfully created Service: foo diff --git a/content/en/docs/tasks/debug-application-cluster/audit.md b/content/en/docs/tasks/debug-application-cluster/audit.md index ec099f595f415..6d7735cb499ee 100644 --- a/content/en/docs/tasks/debug-application-cluster/audit.md +++ b/content/en/docs/tasks/debug-application-cluster/audit.md @@ -207,13 +207,13 @@ By default truncate is disabled in both `webhook` and `log`, a cluster administr {{< feature-state for_k8s_version="v1.13" state="alpha" >}} -In Kubernetes version 1.13, you can configure dynamic audit webhook backends AuditSink API objects. +In Kubernetes version 1.13, you can configure dynamic audit webhook backends AuditSink API objects. To enable dynamic auditing you must set the following apiserver flags: -- `--audit-dynamic-configuration`: the primary switch. When the feature is at GA, the only required flag. 
-- `--feature-gates=DynamicAuditing=true`: feature gate at alpha and beta. -- `--runtime-config=auditregistration.k8s.io/v1alpha1=true`: enable API. +- `--audit-dynamic-configuration`: the primary switch. When the feature is at GA, the only required flag. +- `--feature-gates=DynamicAuditing=true`: feature gate at alpha and beta. +- `--runtime-config=auditregistration.k8s.io/v1alpha1=true`: enable API. When enabled, an AuditSink object can be provisioned: @@ -301,7 +301,11 @@ Fluent-plugin-forest and fluent-plugin-rewrite-tag-filter are plugins for fluent # route audit according to namespace element in context @type rewrite_tag_filter - rewriterule1 namespace ^(.+) ${tag}.$1 + + key namespace + pattern /^(.+)/ + tag ${tag}.$1 + @@ -420,8 +424,8 @@ plugin which supports full-text search and analytics. [gce-audit-profile]: https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh#L735 [kubeconfig]: /docs/tasks/access-application-cluster/configure-access-multiple-clusters/ [fluentd]: http://www.fluentd.org/ -[fluentd_install_doc]: http://docs.fluentd.org/v0.12/articles/quickstart#step1-installing-fluentd -[fluentd_plugin_management_doc]: https://docs.fluentd.org/v0.12/articles/plugin-management +[fluentd_install_doc]: https://docs.fluentd.org/v1.0/articles/quickstart#step-1:-installing-fluentd +[fluentd_plugin_management_doc]: https://docs.fluentd.org/v1.0/articles/plugin-management [logstash]: https://www.elastic.co/products/logstash [logstash_install_doc]: https://www.elastic.co/guide/en/logstash/current/installing-logstash.html [kube-aggregator]: /docs/concepts/api-extension/apiserver-aggregation diff --git a/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md b/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md index a04200e9a0984..2ae12d2d48f97 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md @@ -26,7 +26,7 @@ For this example we'll use a Deployment to create two pods, similar to the earli Create deployment by running following command: ```shell -kubectl create -f https://k8s.io/examples/application/nginx-with-request.yaml +kubectl apply -f https://k8s.io/examples/application/nginx-with-request.yaml ``` ```none @@ -293,8 +293,8 @@ kubectl describe node kubernetes-node-861h ```none Name: kubernetes-node-861h Role -Labels: beta.kubernetes.io/arch=amd64 - beta.kubernetes.io/os=linux +Labels: kubernetes.io/arch=amd64 + kubernetes.io/os=linux kubernetes.io/hostname=kubernetes-node-861h Annotations: node.alpha.kubernetes.io/ttl=0 volumes.kubernetes.io/controller-managed-attach-detach=true diff --git a/content/en/docs/tasks/debug-application-cluster/debug-application.md b/content/en/docs/tasks/debug-application-cluster/debug-application.md index 94adc0578c8eb..c3b8afb6a752e 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-application.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-application.md @@ -31,7 +31,7 @@ your Service? The first step in debugging a Pod is taking a look at it. Check the current state of the Pod and recent events with the following command: ```shell -$ kubectl describe pods ${POD_NAME} +kubectl describe pods ${POD_NAME} ``` Look at the state of the containers in the pod. Are they all `Running`? Have there been recent restarts? 
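+
+For a quick summary of container state, a `jsonpath` query such as the one below can help. It is only a sketch and reuses the same `${POD_NAME}` variable as above:
+
+```shell
+# Print each container's name, ready flag, and restart count for the Pod.
+kubectl get pod ${POD_NAME} -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.ready}{"\t"}{.restartCount}{"\n"}{end}'
+```
+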
@@ -68,19 +68,19 @@ First, take a look at the logs of the current container: ```shell -$ kubectl logs ${POD_NAME} ${CONTAINER_NAME} +kubectl logs ${POD_NAME} ${CONTAINER_NAME} ``` If your container has previously crashed, you can access the previous container's crash log with: ```shell -$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME} +kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME} ``` Alternately, you can run commands inside that container with `exec`: ```shell -$ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN} +kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN} ``` {{< note >}} @@ -90,7 +90,7 @@ $ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ As an example, to look at the logs from a running Cassandra pod, you might run ```shell -$ kubectl exec cassandra -- cat /var/log/cassandra/system.log +kubectl exec cassandra -- cat /var/log/cassandra/system.log ``` If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host, @@ -107,7 +107,7 @@ For example, if you misspelled `command` as `commnd` then the pod will be create will not use the command line you intended it to use. The first thing to do is to delete your pod and try creating it again with the `--validate` option. -For example, run `kubectl create --validate -f mypod.yaml`. +For example, run `kubectl apply --validate -f mypod.yaml`. If you misspelled `command` as `commnd` then will give an error like this: ```shell @@ -145,7 +145,7 @@ First, verify that there are endpoints for the service. For every Service object You can view this resource with: ```shell -$ kubectl get endpoints ${SERVICE_NAME} +kubectl get endpoints ${SERVICE_NAME} ``` Make sure that the endpoints match up with the number of containers that you expect to be a member of your service. @@ -168,7 +168,7 @@ spec: You can use: ```shell -$ kubectl get pods --selector=name=nginx,type=frontend +kubectl get pods --selector=name=nginx,type=frontend ``` to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service. diff --git a/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md b/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md index 806347eff0301..1f996b104265f 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md @@ -63,7 +63,7 @@ case you can try several things: information: ```shell - kubectl get nodes -o yaml | grep '\sname\|cpu\|memory' + kubectl get nodes -o yaml | egrep '\sname:\|cpu:\|memory:' kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, cap: .status.capacity}' ``` diff --git a/content/en/docs/tasks/debug-application-cluster/debug-service.md b/content/en/docs/tasks/debug-application-cluster/debug-service.md index 29a3cb047bb05..01b794ce005e7 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-service.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-service.md @@ -41,7 +41,7 @@ OUTPUT If the command is "kubectl ARGS": ```shell -$ kubectl ARGS +kubectl ARGS OUTPUT ``` @@ -51,16 +51,18 @@ For many steps here you will want to see what a `Pod` running in the cluster sees. 
The simplest way to do this is to run an interactive busybox `Pod`: ```none -$ kubectl run -it --rm --restart=Never busybox --image=busybox sh -If you don't see a command prompt, try pressing enter. +kubectl run -it --rm --restart=Never busybox --image=busybox sh / # ``` +{{< note >}} +If you don't see a command prompt, try pressing enter. +{{< /note >}} If you already have a running `Pod` that you prefer to use, you can run a command in it using: ```shell -$ kubectl exec -c -- +kubectl exec -c -- ``` ## Setup @@ -70,7 +72,7 @@ probably debugging your own `Service` you can substitute your own details, or yo can follow along and get a second data point. ```shell -$ kubectl run hostnames --image=k8s.gcr.io/serve_hostname \ +kubectl run hostnames --image=k8s.gcr.io/serve_hostname \ --labels=app=hostnames \ --port=9376 \ --replicas=3 @@ -108,7 +110,7 @@ spec: Confirm your `Pods` are running: ```shell -$ kubectl get pods -l app=hostnames +kubectl get pods -l app=hostnames NAME READY STATUS RESTARTS AGE hostnames-632524106-bbpiw 1/1 Running 0 2m hostnames-632524106-ly40y 1/1 Running 0 2m @@ -134,7 +136,7 @@ wget: unable to resolve host address 'hostnames' So the first thing to check is whether that `Service` actually exists: ```shell -$ kubectl get svc hostnames +kubectl get svc hostnames No resources found. Error from server (NotFound): services "hostnames" not found ``` @@ -143,14 +145,14 @@ So we have a culprit, let's create the `Service`. As before, this is for the walk-through - you can use your own `Service`'s details here. ```shell -$ kubectl expose deployment hostnames --port=80 --target-port=9376 +kubectl expose deployment hostnames --port=80 --target-port=9376 service/hostnames exposed ``` And read it back, just to be sure: ```shell -$ kubectl get svc hostnames +kubectl get svc hostnames NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE hostnames ClusterIP 10.0.1.175 80/TCP 5s ``` @@ -301,7 +303,7 @@ It might sound silly, but you should really double and triple check that your and verify it: ```shell -$ kubectl get service hostnames -o json +kubectl get service hostnames -o json ``` ```json { @@ -341,11 +343,13 @@ $ kubectl get service hostnames -o json } ``` -Is the port you are trying to access in `spec.ports[]`? Is the `targetPort` -correct for your `Pods` (many `Pods` choose to use a different port than the -`Service`)? If you meant it to be a numeric port, is it a number (9376) or a -string "9376"? If you meant it to be a named port, do your `Pods` expose a port -with the same name? Is the port's `protocol` the same as the `Pod`'s? +* Is the port you are trying to access in `spec.ports[]`? +* Is the `targetPort` correct for your `Pods` (many `Pods` choose to use a different port than the `Service`)? +* If you meant it to be a numeric port, is it a number (9376) or a +string "9376"? +* If you meant it to be a named port, do your `Pods` expose a port +with the same name? +* Is the port's `protocol` the same as the `Pod`'s? ## Does the Service have any Endpoints? @@ -356,7 +360,7 @@ actually being selected by the `Service`. Earlier we saw that the `Pods` were running. We can re-check that: ```shell -$ kubectl get pods -l app=hostnames +kubectl get pods -l app=hostnames NAME READY STATUS RESTARTS AGE hostnames-0uton 1/1 Running 0 1h hostnames-bvc05 1/1 Running 0 1h @@ -371,7 +375,7 @@ has. Inside the Kubernetes system is a control loop which evaluates the selector of every `Service` and saves the results into an `Endpoints` object. 
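+
+You can also compare the `Service`'s selector directly against the labels on your `Pods` (a sketch, reusing the `hostnames` example from above):
+
+```shell
+# Show the label selector the Service uses, then list the Pods together with their labels.
+kubectl get service hostnames -o jsonpath='{.spec.selector}'
+kubectl get pods -l app=hostnames --show-labels
+```
+
+If the selector does not match the Pod labels, the `Endpoints` object for the `Service` stays empty.
+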
```shell -$ kubectl get endpoints hostnames +kubectl get endpoints hostnames NAME ENDPOINTS hostnames 10.244.0.5:9376,10.244.0.6:9376,10.244.0.7:9376 ``` @@ -414,7 +418,7 @@ Another thing to check is that your `Pods` are not crashing or being restarted. Frequent restarts could lead to intermittent connectivity issues. ```shell -$ kubectl get pods -l app=hostnames +kubectl get pods -l app=hostnames NAME READY STATUS RESTARTS AGE hostnames-632524106-bbpiw 1/1 Running 0 2m hostnames-632524106-ly40y 1/1 Running 0 2m @@ -489,7 +493,7 @@ u@node$ iptables-save | grep hostnames There should be 2 rules for each port on your `Service` (just one in this example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST". If you do -not see these, try restarting `kube-proxy` with the `-V` flag set to 4, and +not see these, try restarting `kube-proxy` with the `-v` flag set to 4, and then look at the logs again. Almost nobody should be using the "userspace" mode any more, so we won't spend @@ -559,7 +563,7 @@ If this still fails, look at the `kube-proxy` logs for specific lines like: Setting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376] ``` -If you don't see those, try restarting `kube-proxy` with the `-V` flag set to 4, and +If you don't see those, try restarting `kube-proxy` with the `-v` flag set to 4, and then look at the logs again. ### A Pod cannot reach itself via Service IP diff --git a/content/en/docs/tasks/debug-application-cluster/determine-reason-pod-failure.md b/content/en/docs/tasks/debug-application-cluster/determine-reason-pod-failure.md index b684ef2ddfff9..1353420d63d7d 100644 --- a/content/en/docs/tasks/debug-application-cluster/determine-reason-pod-failure.md +++ b/content/en/docs/tasks/debug-application-cluster/determine-reason-pod-failure.md @@ -38,7 +38,7 @@ the container starts. 1. Create a Pod based on the YAML configuration file: - kubectl create -f https://k8s.io/examples/debug/termination.yaml + kubectl apply -f https://k8s.io/examples/debug/termination.yaml In the YAML file, in the `cmd` and `args` fields, you can see that the container sleeps for 10 seconds and then writes "Sleep expired" to diff --git a/content/en/docs/tasks/debug-application-cluster/events-stackdriver.md b/content/en/docs/tasks/debug-application-cluster/events-stackdriver.md index 9f81b3ee900a6..e3e9150f330f4 100644 --- a/content/en/docs/tasks/debug-application-cluster/events-stackdriver.md +++ b/content/en/docs/tasks/debug-application-cluster/events-stackdriver.md @@ -59,7 +59,7 @@ average, approximately 100Mb RAM and 100m CPU is needed. Deploy event exporter to your cluster using the following command: ```shell -kubectl create -f https://k8s.io/examples/debug/event-exporter.yaml +kubectl apply -f https://k8s.io/examples/debug/event-exporter.yaml ``` Since event exporter accesses the Kubernetes API, it requires permissions to diff --git a/content/en/docs/tasks/debug-application-cluster/get-shell-running-container.md b/content/en/docs/tasks/debug-application-cluster/get-shell-running-container.md index 8b97c679f12cc..f3ff92c1964b2 100644 --- a/content/en/docs/tasks/debug-application-cluster/get-shell-running-container.md +++ b/content/en/docs/tasks/debug-application-cluster/get-shell-running-container.md @@ -33,7 +33,7 @@ runs the nginx image. 
Here is the configuration file for the Pod: Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/application/shell-demo.yaml +kubectl apply -f https://k8s.io/examples/application/shell-demo.yaml ``` Verify that the Container is running: diff --git a/content/en/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md b/content/en/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md index 93ae3c38ddb32..327bfdf9253e2 100644 --- a/content/en/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md +++ b/content/en/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md @@ -39,7 +39,9 @@ Now, when you create a cluster, a message will indicate that the Fluentd log collection daemons that run on each node will target Elasticsearch: ```shell -$ cluster/kube-up.sh +cluster/kube-up.sh +``` +``` ... Project: kubernetes-satnam Zone: us-central1-b @@ -63,7 +65,9 @@ all be running in the kube-system namespace soon after the cluster comes to life. ```shell -$ kubectl get pods --namespace=kube-system +kubectl get pods --namespace=kube-system +``` +``` NAME READY STATUS RESTARTS AGE elasticsearch-logging-v1-78nog 1/1 Running 0 2h elasticsearch-logging-v1-nj2nb 1/1 Running 0 2h diff --git a/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md b/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md index 1a5bc4e8e6011..d075944516953 100644 --- a/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md +++ b/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md @@ -97,7 +97,7 @@ than Google Kubernetes Engine. Proceed at your own risk. 1. Deploy a `ConfigMap` with the logging agent configuration by running the following command: ``` - kubectl create -f https://k8s.io/examples/debug/fluentd-gcp-configmap.yaml + kubectl apply -f https://k8s.io/examples/debug/fluentd-gcp-configmap.yaml ``` The command creates the `ConfigMap` in the `default` namespace. You can download the file @@ -106,7 +106,7 @@ than Google Kubernetes Engine. Proceed at your own risk. 1. Deploy the logging agent `DaemonSet` by running the following command: ``` - kubectl create -f https://k8s.io/examples/debug/fluentd-gcp-ds.yaml + kubectl apply -f https://k8s.io/examples/debug/fluentd-gcp-ds.yaml ``` You can download and edit this file before using it as well. @@ -135,17 +135,19 @@ synthetic log generator pod specification [counter-pod.yaml](/examples/debug/cou {{< codenew file="debug/counter-pod.yaml" >}} This pod specification has one container that runs a bash script -that writes out the value of a counter and the date once per +that writes out the value of a counter and the datetime once per second, and runs indefinitely. Let's create this pod in the default namespace. ```shell -kubectl create -f https://k8s.io/examples/debug/counter-pod.yaml +kubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml ``` You can observe the running pod: ```shell -$ kubectl get pods +kubectl get pods +``` +``` NAME READY STATUS RESTARTS AGE counter 1/1 Running 0 5m ``` @@ -155,7 +157,9 @@ has to download the container image first. When the pod status changes to `Runni you can use the `kubectl logs` command to view the output of this counter pod. ```shell -$ kubectl logs counter +kubectl logs counter +``` +``` 0: Mon Jan 1 00:00:00 UTC 2001 1: Mon Jan 1 00:00:01 UTC 2001 2: Mon Jan 1 00:00:02 UTC 2001 @@ -169,21 +173,27 @@ if the pod is evicted from the node, log files are lost. 
Let's demonstrate this by deleting the currently running counter container: ```shell -$ kubectl delete pod counter +kubectl delete pod counter +``` +``` pod "counter" deleted ``` and then recreating it: ```shell -$ kubectl create -f https://k8s.io/examples/debug/counter-pod.yaml +kubectl create -f https://k8s.io/examples/debug/counter-pod.yaml +``` +``` pod/counter created ``` After some time, you can access logs from the counter pod again: ```shell -$ kubectl logs counter +kubectl logs counter +``` +``` 0: Mon Jan 1 00:01:00 UTC 2001 1: Mon Jan 1 00:01:01 UTC 2001 2: Mon Jan 1 00:01:02 UTC 2001 @@ -226,7 +236,9 @@ It uses Stackdriver Logging [filtering syntax](https://cloud.google.com/logging/ to query specific logs. For example, you can run the following command: ```none -$ gcloud beta logging read 'logName="projects/$YOUR_PROJECT_ID/logs/count"' --format json | jq '.[].textPayload' +gcloud beta logging read 'logName="projects/$YOUR_PROJECT_ID/logs/count"' --format json | jq '.[].textPayload' +``` +``` ... "2: Mon Jan 1 00:01:02 UTC 2001\n" "1: Mon Jan 1 00:01:01 UTC 2001\n" @@ -329,7 +341,7 @@ by running the following command: kubectl get cm fluentd-gcp-config --namespace kube-system -o yaml > fluentd-gcp-configmap.yaml ``` -Then in the value for the key `containers.input.conf` insert a new filter right after +Then in the value of the key `containers.input.conf` insert a new filter right after the `source` section. {{< note >}} diff --git a/content/en/docs/tasks/debug-application-cluster/monitor-node-health.md b/content/en/docs/tasks/debug-application-cluster/monitor-node-health.md index 43b3c8fc860d3..6db5585f92dfe 100644 --- a/content/en/docs/tasks/debug-application-cluster/monitor-node-health.md +++ b/content/en/docs/tasks/debug-application-cluster/monitor-node-health.md @@ -73,7 +73,7 @@ OS distro.*** * **Step 2:** Start node problem detector with `kubectl`: ```shell - kubectl create -f https://k8s.io/examples/debug/node-problem-detector.yaml + kubectl apply -f https://k8s.io/examples/debug/node-problem-detector.yaml ``` ### Addon Pod @@ -105,7 +105,7 @@ node-problem-detector-config --from-file=config/`. ```shell kubectl delete -f https://k8s.io/examples/debug/node-problem-detector.yaml # If you have a node-problem-detector running - kubectl create -f https://k8s.io/examples/debug/node-problem-detector-configmap.yaml + kubectl apply -f https://k8s.io/examples/debug/node-problem-detector-configmap.yaml ``` ***Notice that this approach only applies to node problem detector started with `kubectl`.*** diff --git a/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md b/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md index 387b14c8022f4..a78bab5360050 100644 --- a/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md +++ b/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md @@ -9,7 +9,7 @@ content_template: templates/task {{% capture overview %}} -{{< feature-state state="beta" >}} +{{< feature-state state="stable" >}} This guide demonstrates how to install and write extensions for [kubectl](/docs/reference/kubectl/kubectl/). By thinking of core `kubectl` commands as essential building blocks for interacting with a Kubernetes cluster, a cluster administrator can think of plugins as a means of utilizing these building blocks to create more complex behavior. Plugins extend `kubectl` with new sub-commands, allowing for new and custom features not included in the main distribution of `kubectl`. @@ -24,8 +24,6 @@ You need to have a working `kubectl` binary installed. 
Plugins were officially introduced as an alpha feature in the v1.8.0 release. They have been re-worked in the v1.12.0 release to support a wider range of use-cases. So, while some parts of the plugins feature were already available in previous versions, a `kubectl` version of 1.12.0 or later is recommended if you are following these docs. {{< /note >}} -Until a GA version is released, plugins should be considered unstable, and their underlying mechanism is prone to change. - {{% /capture %}} {{% capture steps %}} @@ -96,25 +94,35 @@ sudo mv ./kubectl-foo /usr/local/bin You may now invoke your plugin as a `kubectl` command: ``` -$ kubectl foo +kubectl foo +``` +``` I am a plugin named kubectl-foo ``` All args and flags are passed as-is to the executable: ``` -$ kubectl foo version +kubectl foo version +``` +``` 1.0.0 ``` All environment variables are also passed as-is to the executable: ```bash -$ export KUBECONFIG=~/.kube/config -$ kubectl foo config +export KUBECONFIG=~/.kube/config +kubectl foo config +``` +``` /home//.kube/config +``` -$ KUBECONFIG=/etc/kube/config kubectl foo config +```shell +KUBECONFIG=/etc/kube/config kubectl foo config +``` +``` /etc/kube/config ``` @@ -142,22 +150,27 @@ Example: ```bash # create a plugin -$ echo '#!/bin/bash\n\necho "My first command-line argument was $1"' > kubectl-foo-bar-baz -$ sudo chmod +x ./kubectl-foo-bar-baz +echo '#!/bin/bash\n\necho "My first command-line argument was $1"' > kubectl-foo-bar-baz +sudo chmod +x ./kubectl-foo-bar-baz # "install" our plugin by placing it on our PATH -$ sudo mv ./kubectl-foo-bar-baz /usr/local/bin +sudo mv ./kubectl-foo-bar-baz /usr/local/bin # ensure our plugin is recognized by kubectl -$ kubectl plugin list +kubectl plugin list +``` +``` The following kubectl-compatible plugins are available: /usr/local/bin/kubectl-foo-bar-baz - +``` +``` # test that calling our plugin via a "kubectl" command works # even when additional arguments and flags are passed to our # plugin executable by the user. 
-$ kubectl foo bar baz arg1 --meaningless-flag=true +kubectl foo bar baz arg1 --meaningless-flag=true +``` +``` My first command-line argument was arg1 ``` @@ -172,14 +185,16 @@ Example: ```bash # create a plugin containing an underscore in its filename -$ echo '#!/bin/bash\n\necho "I am a plugin with a dash in my name"' > ./kubectl-foo_bar -$ sudo chmod +x ./kubectl-foo_bar +echo '#!/bin/bash\n\necho "I am a plugin with a dash in my name"' > ./kubectl-foo_bar +sudo chmod +x ./kubectl-foo_bar # move the plugin into your PATH -$ sudo mv ./kubectl-foo_bar /usr/local/bin +sudo mv ./kubectl-foo_bar /usr/local/bin # our plugin can now be invoked from `kubectl` like so: -$ kubectl foo-bar +kubectl foo-bar +``` +``` I am a plugin with a dash in my name ``` @@ -188,11 +203,17 @@ The command from the above example, can be invoked using either a dash (`-`) or ```bash # our plugin can be invoked with a dash -$ kubectl foo-bar +kubectl foo-bar +``` +``` I am a plugin with a dash in my name +``` +```bash # it can also be invoked using an underscore -$ kubectl foo_bar +kubectl foo_bar +``` +``` I am a plugin with a dash in my name ``` @@ -203,7 +224,9 @@ For example, given a PATH with the following value: `PATH=/usr/local/bin/plugins such that the output of the `kubectl plugin list` command is: ```bash -$ PATH=/usr/local/bin/plugins:/usr/local/bin/moreplugins kubectl plugin list +PATH=/usr/local/bin/plugins:/usr/local/bin/moreplugins kubectl plugin list +``` +```bash The following kubectl-compatible plugins are available: /usr/local/bin/plugins/kubectl-foo @@ -223,23 +246,39 @@ There is another kind of overshadowing that can occur with plugin filenames. Giv ```bash # for a given kubectl command, the plugin with the longest possible filename will always be preferred -$ kubectl foo bar baz +kubectl foo bar baz +``` +``` Plugin kubectl-foo-bar-baz is executed +``` -$ kubectl foo bar +```bash +kubectl foo bar +``` +``` Plugin kubectl-foo-bar is executed +``` -$ kubectl foo bar baz buz +```bash +kubectl foo bar baz buz +``` +``` Plugin kubectl-foo-bar-baz is executed, with "buz" as its first argument +``` -$ kubectl foo bar buz +```bash +kubectl foo bar buz +``` +``` Plugin kubectl-foo-bar is executed, with "buz" as its first argument ``` This design choice ensures that plugin sub-commands can be implemented across multiple files, if needed, and that these sub-commands can be nested under a "parent" plugin command: ```bash -$ ls ./plugin_command_tree +ls ./plugin_command_tree +``` +``` kubectl-parent kubectl-parent-subcommand kubectl-parent-subcommand-subsubcommand @@ -250,7 +289,9 @@ kubectl-parent-subcommand-subsubcommand You can use the aforementioned `kubectl plugin list` command to ensure that your plugin is visible by `kubectl`, and verify that there are no warnings preventing it from being called as a `kubectl` command. 
```bash -$ kubectl plugin list +kubectl plugin list +``` +``` The following kubectl-compatible plugins are available: test/fixtures/pkg/kubectl/plugins/kubectl-foo diff --git a/content/en/docs/tasks/federation/_index.md b/content/en/docs/tasks/federation/_index.md index fc7458f1d92c0..869c63fc6a443 100755 --- a/content/en/docs/tasks/federation/_index.md +++ b/content/en/docs/tasks/federation/_index.md @@ -1,5 +1,5 @@ --- -title: "Federation - Run an App on Multiple Clusters" +title: "Federation" weight: 120 --- diff --git a/content/en/docs/tasks/federation/administer-federation/_index.md b/content/en/docs/tasks/federation/administer-federation/_index.md new file mode 100755 index 0000000000000..555416fb9bc1d --- /dev/null +++ b/content/en/docs/tasks/federation/administer-federation/_index.md @@ -0,0 +1,5 @@ +--- +title: "Administer Federation Control Plane" +weight: 160 +--- + diff --git a/content/en/docs/tasks/administer-federation/cluster.md b/content/en/docs/tasks/federation/administer-federation/cluster.md similarity index 97% rename from content/en/docs/tasks/administer-federation/cluster.md rename to content/en/docs/tasks/federation/administer-federation/cluster.md index 6e350f4b25341..11afbbe159d58 100644 --- a/content/en/docs/tasks/administer-federation/cluster.md +++ b/content/en/docs/tasks/federation/administer-federation/cluster.md @@ -5,9 +5,9 @@ content_template: templates/task {{% capture overview %}} -{{< note >}} -{{< include "federation-current-state.md" >}} -{{< /note >}} +{{< deprecationfilewarning >}} +{{< include "federation-deprecation-warning-note.md" >}} +{{< /deprecationfilewarning >}} This guide explains how to use Clusters API resource in a Federation control plane. diff --git a/content/en/docs/tasks/administer-federation/configmap.md b/content/en/docs/tasks/federation/administer-federation/configmap.md similarity index 95% rename from content/en/docs/tasks/administer-federation/configmap.md rename to content/en/docs/tasks/federation/administer-federation/configmap.md index 4123b4ab22cc5..cf36e2e6ea7c1 100644 --- a/content/en/docs/tasks/administer-federation/configmap.md +++ b/content/en/docs/tasks/federation/administer-federation/configmap.md @@ -5,9 +5,9 @@ content_template: templates/task {{% capture overview %}} -{{< note >}} -{{< include "federation-current-state.md" >}} -{{< /note >}} +{{< deprecationfilewarning >}} +{{< include "federation-deprecation-warning-note.md" >}} +{{< /deprecationfilewarning >}} This guide explains how to use ConfigMaps in a Federation control plane. diff --git a/content/en/docs/tasks/administer-federation/daemonset.md b/content/en/docs/tasks/federation/administer-federation/daemonset.md similarity index 95% rename from content/en/docs/tasks/administer-federation/daemonset.md rename to content/en/docs/tasks/federation/administer-federation/daemonset.md index 54a04493f6945..dd9ed4f93aed7 100644 --- a/content/en/docs/tasks/administer-federation/daemonset.md +++ b/content/en/docs/tasks/federation/administer-federation/daemonset.md @@ -5,9 +5,9 @@ content_template: templates/task {{% capture overview %}} -{{< note >}} -{{< include "federation-current-state.md" >}} -{{< /note >}} +{{< deprecationfilewarning >}} +{{< include "federation-deprecation-warning-note.md" >}} +{{< /deprecationfilewarning >}} This guide explains how to use DaemonSets in a federation control plane. 
diff --git a/content/en/docs/tasks/administer-federation/deployment.md b/content/en/docs/tasks/federation/administer-federation/deployment.md similarity index 97% rename from content/en/docs/tasks/administer-federation/deployment.md rename to content/en/docs/tasks/federation/administer-federation/deployment.md index 624a527cfc953..cf80b9610a96d 100644 --- a/content/en/docs/tasks/administer-federation/deployment.md +++ b/content/en/docs/tasks/federation/administer-federation/deployment.md @@ -5,9 +5,9 @@ content_template: templates/task {{% capture overview %}} -{{< note >}} -{{< include "federation-current-state.md" >}} -{{< /note >}} +{{< deprecationfilewarning >}} +{{< include "federation-deprecation-warning-note.md" >}} +{{< /deprecationfilewarning >}} This guide explains how to use Deployments in the Federation control plane. diff --git a/content/en/docs/tasks/administer-federation/events.md b/content/en/docs/tasks/federation/administer-federation/events.md similarity index 92% rename from content/en/docs/tasks/administer-federation/events.md rename to content/en/docs/tasks/federation/administer-federation/events.md index e855afb3d1ab8..2c8cfee4ffc1d 100644 --- a/content/en/docs/tasks/administer-federation/events.md +++ b/content/en/docs/tasks/federation/administer-federation/events.md @@ -5,9 +5,9 @@ content_template: templates/concept {{% capture overview %}} -{{< note >}} -{{< include "federation-current-state.md" >}} -{{< /note >}} +{{< deprecationfilewarning >}} +{{< include "federation-deprecation-warning-note.md" >}} +{{< /deprecationfilewarning >}} This guide explains how to use events in federation control plane to help in debugging. diff --git a/content/en/docs/tasks/administer-federation/hpa.md b/content/en/docs/tasks/federation/administer-federation/hpa.md similarity index 98% rename from content/en/docs/tasks/administer-federation/hpa.md rename to content/en/docs/tasks/federation/administer-federation/hpa.md index 496a7032a6694..ee7c85482bfe2 100644 --- a/content/en/docs/tasks/administer-federation/hpa.md +++ b/content/en/docs/tasks/federation/administer-federation/hpa.md @@ -7,9 +7,9 @@ content_template: templates/task {{< feature-state state="alpha" >}} -{{< note >}} -{{< include "federation-current-state.md" >}} -{{< /note >}} +{{< deprecationfilewarning >}} +{{< include "federation-deprecation-warning-note.md" >}} +{{< /deprecationfilewarning >}} This guide explains how to use federated horizontal pod autoscalers (HPAs) in the federation control plane. 
diff --git a/content/en/docs/tasks/administer-federation/ingress.md b/content/en/docs/tasks/federation/administer-federation/ingress.md similarity index 99% rename from content/en/docs/tasks/administer-federation/ingress.md rename to content/en/docs/tasks/federation/administer-federation/ingress.md index 51bfce65d5a44..60b0d61845fc1 100644 --- a/content/en/docs/tasks/administer-federation/ingress.md +++ b/content/en/docs/tasks/federation/administer-federation/ingress.md @@ -5,9 +5,9 @@ content_template: templates/task {{% capture overview %}} -{{< note >}} -{{< include "federation-current-state.md" >}} -{{< /note >}} +{{< deprecationfilewarning >}} +{{< include "federation-deprecation-warning-note.md" >}} +{{< /deprecationfilewarning >}} This page explains how to use Kubernetes Federated Ingress to deploy a common HTTP(S) virtual IP load balancer across a federated service running in diff --git a/content/en/docs/tasks/administer-federation/job.md b/content/en/docs/tasks/federation/administer-federation/job.md similarity index 97% rename from content/en/docs/tasks/administer-federation/job.md rename to content/en/docs/tasks/federation/administer-federation/job.md index d495d1e42ed3f..77f98836dd184 100644 --- a/content/en/docs/tasks/administer-federation/job.md +++ b/content/en/docs/tasks/federation/administer-federation/job.md @@ -5,9 +5,9 @@ content_template: templates/task {{% capture overview %}} -{{< note >}} -{{< include "federation-current-state.md" >}} -{{< /note >}} +{{< deprecationfilewarning >}} +{{< include "federation-deprecation-warning-note.md" >}} +{{< /deprecationfilewarning >}} This guide explains how to use jobs in the federation control plane. diff --git a/content/en/docs/tasks/administer-federation/namespaces.md b/content/en/docs/tasks/federation/administer-federation/namespaces.md similarity index 96% rename from content/en/docs/tasks/administer-federation/namespaces.md rename to content/en/docs/tasks/federation/administer-federation/namespaces.md index e2e58af00505b..71019d81f05f7 100644 --- a/content/en/docs/tasks/administer-federation/namespaces.md +++ b/content/en/docs/tasks/federation/administer-federation/namespaces.md @@ -5,9 +5,9 @@ content_template: templates/task {{% capture overview %}} -{{< note >}} -{{< include "federation-current-state.md" >}} -{{< /note >}} +{{< deprecationfilewarning >}} +{{< include "federation-deprecation-warning-note.md" >}} +{{< /deprecationfilewarning >}} This guide explains how to use Namespaces in Federation control plane. diff --git a/content/en/docs/tasks/administer-federation/replicaset.md b/content/en/docs/tasks/federation/administer-federation/replicaset.md similarity index 97% rename from content/en/docs/tasks/administer-federation/replicaset.md rename to content/en/docs/tasks/federation/administer-federation/replicaset.md index 932abd7095110..0ffef6a69299c 100644 --- a/content/en/docs/tasks/administer-federation/replicaset.md +++ b/content/en/docs/tasks/federation/administer-federation/replicaset.md @@ -5,9 +5,9 @@ content_template: templates/task {{% capture overview %}} -{{< note >}} -{{< include "federation-current-state.md" >}} -{{< /note >}} +{{< deprecationfilewarning >}} +{{< include "federation-deprecation-warning-note.md" >}} +{{< /deprecationfilewarning >}} This guide explains how to use ReplicaSets in the Federation control plane. 
diff --git a/content/en/docs/tasks/administer-federation/secret.md b/content/en/docs/tasks/federation/administer-federation/secret.md similarity index 96% rename from content/en/docs/tasks/administer-federation/secret.md rename to content/en/docs/tasks/federation/administer-federation/secret.md index 2de0d059a8056..e50fd13005570 100644 --- a/content/en/docs/tasks/administer-federation/secret.md +++ b/content/en/docs/tasks/federation/administer-federation/secret.md @@ -5,9 +5,9 @@ content_template: templates/concept {{% capture overview %}} -{{< note >}} -{{< include "federation-current-state.md" >}} -{{< /note >}} +{{< deprecationfilewarning >}} +{{< include "federation-deprecation-warning-note.md" >}} +{{< /deprecationfilewarning >}} This guide explains how to use secrets in Federation control plane. diff --git a/content/en/docs/tasks/federation/federation-service-discovery.md b/content/en/docs/tasks/federation/federation-service-discovery.md index b80a6d8d22ea6..ea06eaa17fec5 100644 --- a/content/en/docs/tasks/federation/federation-service-discovery.md +++ b/content/en/docs/tasks/federation/federation-service-discovery.md @@ -1,16 +1,17 @@ --- +title: Cross-cluster Service Discovery using Federated Services reviewers: - bprashanth - quinton-hoole content_template: templates/task -title: Cross-cluster Service Discovery using Federated Services +weight: 140 --- {{% capture overview %}} -{{< note >}} -{{< include "federation-current-state.md" >}} -{{< /note >}} +{{< deprecationfilewarning >}} +{{< include "federation-deprecation-warning-note.md" >}} +{{< /deprecationfilewarning >}} This guide explains how to use Kubernetes Federated Services to deploy a common Service across multiple Kubernetes clusters. This makes it @@ -118,8 +119,9 @@ The status of your Federated Service will automatically reflect the real-time status of the underlying Kubernetes services, for example: ``` shell -$kubectl --context=federation-cluster describe services nginx - +kubectl --context=federation-cluster describe services nginx +``` +``` Name: nginx Namespace: default Labels: run=nginx @@ -187,7 +189,9 @@ this. For example, if your Federation is configured to use Google Cloud DNS, and a managed DNS domain 'example.com': ``` shell -$ gcloud dns managed-zones describe example-dot-com +gcloud dns managed-zones describe example-dot-com +``` +``` creationTime: '2016-06-26T18:18:39.229Z' description: Example domain for Kubernetes Cluster Federation dnsName: example.com. @@ -202,7 +206,9 @@ nameServers: ``` ```shell -$ gcloud dns record-sets list --zone example-dot-com +gcloud dns record-sets list --zone example-dot-com +``` +``` NAME TYPE TTL DATA example.com. NS 21600 ns-cloud-e1.googledomains.com., ns-cloud-e2.googledomains.com. example.com. OA 21600 ns-cloud-e1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 1209600 300 @@ -225,12 +231,12 @@ nginx.mynamespace.myfederation.svc.europe-west1-d.example.com. 
CNAME 180 If your Federation is configured to use AWS Route53, you can use one of the equivalent AWS tools, for example: ``` shell -$ aws route53 list-hosted-zones +aws route53 list-hosted-zones ``` and ``` shell -$ aws route53 list-resource-record-sets --hosted-zone-id Z3ECL0L9QLOVBX +aws route53 list-resource-record-sets --hosted-zone-id Z3ECL0L9QLOVBX ``` {{< /note >}} diff --git a/content/en/docs/tasks/federation/set-up-cluster-federation-kubefed.md b/content/en/docs/tasks/federation/set-up-cluster-federation-kubefed.md index 739d143931e6c..9a751661e8fa4 100644 --- a/content/en/docs/tasks/federation/set-up-cluster-federation-kubefed.md +++ b/content/en/docs/tasks/federation/set-up-cluster-federation-kubefed.md @@ -1,14 +1,16 @@ --- +title: Set up Cluster Federation with Kubefed reviewers: - madhusudancs content_template: templates/task -title: Set up Cluster Federation with Kubefed +weight: 125 --- {{% capture overview %}} -{{< note >}} -{{< include "federation-current-state.md" >}} -{{< /note >}} + +{{< deprecationfilewarning >}} +{{< include "federation-deprecation-warning-note.md" >}} +{{< /deprecationfilewarning >}} Kubernetes version 1.5 and above includes a new command line tool called [`kubefed`](/docs/admin/kubefed/) to help you administrate your federated @@ -52,7 +54,7 @@ now maintained. Consequently, the federation release information is available on [release page](https://github.com/kubernetes/federation/releases). {{< /note >}} -### For k8s versions 1.8.x and earlier: +### For Kubernetes versions 1.8.x and earlier: ```shell curl -LO https://storage.googleapis.com/kubernetes-release/release/${RELEASE-VERSION}/kubernetes-client-linux-amd64.tar.gz @@ -70,7 +72,7 @@ sudo cp kubernetes/client/bin/kubefed /usr/local/bin sudo chmod +x /usr/local/bin/kubefed ``` -### For k8s versions 1.9.x and above: +### For Kubernetes versions 1.9.x and above: ```shell curl -LO https://storage.cloud.google.com/kubernetes-federation-release/release/${RELEASE-VERSION}/federation-client-linux-amd64.tar.gz diff --git a/content/en/docs/tasks/federation/set-up-coredns-provider-federation.md b/content/en/docs/tasks/federation/set-up-coredns-provider-federation.md index b2379f79b981c..572a348a822df 100644 --- a/content/en/docs/tasks/federation/set-up-coredns-provider-federation.md +++ b/content/en/docs/tasks/federation/set-up-coredns-provider-federation.md @@ -1,13 +1,14 @@ --- title: Set up CoreDNS as DNS provider for Cluster Federation content_template: templates/tutorial +weight: 130 --- {{% capture overview %}} -{{< note >}} -{{< include "federation-current-state.md" >}} -{{< /note >}} +{{< deprecationfilewarning >}} +{{< include "federation-deprecation-warning-note.md" >}} +{{< /deprecationfilewarning >}} This page shows how to configure and deploy CoreDNS to be used as the DNS provider for Cluster Federation. 
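The service-discovery and DNS-provider hunks above revolve around the DNS records that Federated Services publish. As a hedged spot-check from any machine with `dig` installed, you can query those records directly; the `example.com` zone and `nginx` service names below reuse the values from the excerpts above, and any other names would be assumptions.

```shell
# Sketch: resolve the federation-wide and region-scoped records created for
# the federated "nginx" service in the example.com managed zone shown above.
dig +short nginx.mynamespace.myfederation.svc.example.com
dig +short nginx.mynamespace.myfederation.svc.europe-west1.example.com
```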
diff --git a/content/en/docs/tasks/federation/set-up-placement-policies-federation.md b/content/en/docs/tasks/federation/set-up-placement-policies-federation.md index cb1e02d8cf8ff..4329245d95c54 100644 --- a/content/en/docs/tasks/federation/set-up-placement-policies-federation.md +++ b/content/en/docs/tasks/federation/set-up-placement-policies-federation.md @@ -1,13 +1,14 @@ --- title: Set up placement policies in Federation content_template: templates/task +weight: 135 --- {{% capture overview %}} -{{< note >}} -{{< include "federation-current-state.md" >}} -{{< /note >}} +{{< deprecationfilewarning >}} +{{< include "federation-deprecation-warning-note.md" >}} +{{< /deprecationfilewarning >}} This page shows how to enforce policy-based placement decisions over Federated resources using an external policy engine. @@ -32,7 +33,7 @@ After deploying the Federation control plane, you must configure an Admission Controller in the Federation API server that enforces placement decisions received from the external policy engine. - kubectl create -f scheduling-policy-admission.yaml + kubectl apply -f scheduling-policy-admission.yaml Shown below is an example ConfigMap for the Admission Controller: @@ -82,7 +83,7 @@ decisions in the Federation control plane. Create a Service in the host cluster to contact the external policy engine: - kubectl create -f policy-engine-service.yaml + kubectl apply -f policy-engine-service.yaml Shown below is an example Service for OPA. @@ -90,7 +91,7 @@ Shown below is an example Service for OPA. Create a Deployment in the host cluster with the Federation control plane: - kubectl create -f policy-engine-deployment.yaml + kubectl apply -f policy-engine-deployment.yaml Shown below is an example Deployment for OPA. diff --git a/content/en/docs/tasks/inject-data-application/define-command-argument-container.md b/content/en/docs/tasks/inject-data-application/define-command-argument-container.md index f7f2e2035fb40..66ebd69c134ab 100644 --- a/content/en/docs/tasks/inject-data-application/define-command-argument-container.md +++ b/content/en/docs/tasks/inject-data-application/define-command-argument-container.md @@ -47,7 +47,7 @@ file for the Pod defines a command and two arguments: 1. Create a Pod based on the YAML configuration file: ```shell - kubectl create -f https://k8s.io/examples/pods/commands.yaml + kubectl apply -f https://k8s.io/examples/pods/commands.yaml ``` 1. List the running Pods: diff --git a/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md b/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md index ec70cae998771..d10bbd323f67c 100644 --- a/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md +++ b/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md @@ -37,7 +37,7 @@ Pod: 1. Create a Pod based on the YAML configuration file: ```shell - kubectl create -f https://k8s.io/examples/pods/inject/envars.yaml + kubectl apply -f https://k8s.io/examples/pods/inject/envars.yaml ``` 1. List the running Pods: diff --git a/content/en/docs/tasks/inject-data-application/distribute-credentials-secure.md b/content/en/docs/tasks/inject-data-application/distribute-credentials-secure.md index a2533ac9c0d77..ff77fab7d77e8 100644 --- a/content/en/docs/tasks/inject-data-application/distribute-credentials-secure.md +++ b/content/en/docs/tasks/inject-data-application/distribute-credentials-secure.md @@ -42,7 +42,7 @@ username and password: 1. 
Create the Secret ```shell - kubectl create -f https://k8s.io/examples/pods/inject/secret.yaml + kubectl apply -f https://k8s.io/examples/pods/inject/secret.yaml ``` 1. View information about the Secret: @@ -98,7 +98,7 @@ Here is a configuration file you can use to create a Pod: 1. Create the Pod: ```shell - kubectl create -f https://k8s.io/examples/pods/inject/secret-pod.yaml + kubectl apply -f https://k8s.io/examples/pods/inject/secret-pod.yaml ``` 1. Verify that your Pod is running: @@ -153,7 +153,7 @@ Here is a configuration file you can use to create a Pod: 1. Create the Pod: ```shell - kubectl create -f https://k8s.io/examples/pods/inject/secret-envars-pod.yaml + kubectl apply -f https://k8s.io/examples/pods/inject/secret-envars-pod.yaml ``` 1. Verify that your Pod is running: diff --git a/content/en/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md b/content/en/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md index d9bcccdd9e04c..2fe432a7a4d4a 100644 --- a/content/en/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md +++ b/content/en/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md @@ -56,7 +56,7 @@ fields of the Container in the Pod. Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/inject/dapi-volume.yaml +kubectl apply -f https://k8s.io/examples/pods/inject/dapi-volume.yaml ``` Verify that Container in the Pod is running: @@ -172,7 +172,7 @@ default value of `1` which means cores for cpu and bytes for memory. Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/inject/dapi-volume-resources.yaml +kubectl apply -f https://k8s.io/examples/pods/inject/dapi-volume-resources.yaml ``` Get a shell into the Container that is running in your Pod: diff --git a/content/en/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md b/content/en/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md index ddb32c380a34e..b808dd9e2e219 100644 --- a/content/en/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md +++ b/content/en/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md @@ -55,7 +55,7 @@ Container in the Pod. Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/inject/dapi-envars-pod.yaml +kubectl apply -f https://k8s.io/examples/pods/inject/dapi-envars-pod.yaml ``` Verify that the Container in the Pod is running: @@ -130,7 +130,7 @@ from Container fields. Create the Pod: ```shell -kubectl create -f https://k8s.io/examples/pods/inject/dapi-envars-container.yaml +kubectl apply -f https://k8s.io/examples/pods/inject/dapi-envars-container.yaml ``` Verify that the Container in the Pod is running: diff --git a/content/en/docs/tasks/inject-data-application/podpreset.md b/content/en/docs/tasks/inject-data-application/podpreset.md index 10fff8cec432f..beb57754c3661 100644 --- a/content/en/docs/tasks/inject-data-application/podpreset.md +++ b/content/en/docs/tasks/inject-data-application/podpreset.md @@ -36,13 +36,15 @@ Preset. 
Create the PodPreset: ```shell -kubectl create -f https://k8s.io/examples/podpreset/preset.yaml +kubectl apply -f https://k8s.io/examples/podpreset/preset.yaml ``` Examine the created PodPreset: ```shell -$ kubectl get podpreset +kubectl get podpreset +``` +``` NAME AGE allow-database 1m ``` @@ -54,13 +56,15 @@ The new PodPreset will act upon any pod that has label `role: frontend`. Create a pod: ```shell -$ kubectl create -f https://k8s.io/examples/podpreset/pod.yaml +kubectl create -f https://k8s.io/examples/podpreset/pod.yaml ``` List the running Pods: ```shell -$ kubectl get pods +kubectl get pods +``` +``` NAME READY STATUS RESTARTS AGE website 1/1 Running 0 4m ``` @@ -72,7 +76,7 @@ website 1/1 Running 0 4m To see above output, run the following command: ```shell -$ kubectl get pod website -o yaml +kubectl get pod website -o yaml ``` ## Pod Spec with ConfigMap Example @@ -157,7 +161,9 @@ when there is a conflict. **If we run `kubectl describe...` we can see the event:** ```shell -$ kubectl describe ... +kubectl describe ... +``` +``` .... Events: FirstSeen LastSeen Count From SubobjectPath Reason Message @@ -169,7 +175,9 @@ Events: Once you don't need a pod preset anymore, you can delete it with `kubectl`: ```shell -$ kubectl delete podpreset allow-database +kubectl delete podpreset allow-database +``` +``` podpreset "allow-database" deleted ``` diff --git a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md index d74635e42e72d..168e9a495974e 100644 --- a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md +++ b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md @@ -47,71 +47,93 @@ This example cron job config `.spec` file prints the current time and a hello me {{< codenew file="application/job/cronjob.yaml" >}} -Run the example cron job by downloading the example file and then running this command: +Run the example CronJob by using this command: ```shell -$ kubectl create -f ./cronjob.yaml -cronjob "hello" created +kubectl create -f https://k8s.io/examples/application/job/cronjob.yaml +``` +The output is similar to this: + +``` +cronjob.batch/hello created ``` Alternatively, you can use `kubectl run` to create a cron job without writing a full config: ```shell -$ kubectl run hello --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster" -cronjob "hello" created +kubectl run hello --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster" ``` After creating the cron job, get its status using this command: ```shell -$ kubectl get cronjob hello -NAME SCHEDULE SUSPEND ACTIVE LAST-SCHEDULE -hello */1 * * * * False 0 +kubectl get cronjob hello +``` +The output is similar to this: + +``` +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +hello */1 * * * * False 0 10s ``` As you can see from the results of the command, the cron job has not scheduled or run any jobs yet. Watch for the job to be created in around one minute: ```shell -$ kubectl get jobs --watch -NAME DESIRED SUCCESSFUL AGE -hello-4111706356 1 1 2s +kubectl get jobs --watch +``` +The output is similar to this: + +``` +NAME COMPLETIONS DURATION AGE +hello-4111706356 0/1 0s +hello-4111706356 0/1 0s 0s +hello-4111706356 1/1 5s 5s ``` Now you've seen one running job scheduled by the "hello" cron job. 
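The CronJob walkthrough above uses a one-minute schedule throughout. As a hedged variant of the same `kubectl run --schedule` form shown in the hunk (the `hello-5min` name is illustrative and not part of the original example), any standard cron expression works:

```shell
# Sketch: the same example CronJob, but scheduled to run every five minutes.
kubectl run hello-5min --schedule="*/5 * * * *" --restart=OnFailure \
  --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster"
```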
You can stop watching the job and view the cron job again to see that it scheduled the job: ```shell -$ kubectl get cronjob hello -NAME SCHEDULE SUSPEND ACTIVE LAST-SCHEDULE -hello */1 * * * * False 0 Mon, 29 Aug 2016 14:34:00 -0700 +kubectl get cronjob hello +``` +The output is similar to this: + +``` +NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE +hello */1 * * * * False 0 50s 75s ``` -You should see that the cron job "hello" successfully scheduled a job at the time specified in `LAST-SCHEDULE`. -There are currently 0 active jobs, meaning that the job has completed or failed. +You should see that the cron job `hello` successfully scheduled a job at the time specified in `LAST SCHEDULE`. There are currently 0 active jobs, meaning that the job has completed or failed. Now, find the pods that the last scheduled job created and view the standard output of one of the pods. -Note that the job name and pod name are different. + +{{< note >}} +The job name and pod name are different. +{{< /note >}} ```shell # Replace "hello-4111706356" with the job name in your system -$ pods=$(kubectl get pods --selector=job-name=hello-4111706356 --output=jsonpath={.items..metadata.name}) +pods=$(kubectl get pods --selector=job-name=hello-4111706356 --output=jsonpath={.items..metadata.name}) +``` +Show pod log: -$ echo $pods -hello-4111706356-o9qcm +```shell +kubectl logs $pods +``` +The output is similar to this: -$ kubectl logs $pods -Mon Aug 29 21:34:09 UTC 2016 +``` +Fri Feb 22 11:02:09 UTC 2019 Hello from the Kubernetes cluster ``` ## Deleting a Cron Job -When you don't need a cron job any more, delete it with `kubectl delete cronjob`: +When you don't need a cron job any more, delete it with `kubectl delete cronjob <cronjob name>`: ```shell -$ kubectl delete cronjob hello -cronjob "hello" deleted +kubectl delete cronjob hello ``` Deleting the cron job removes all the jobs and pods it created and stops it from creating additional jobs. @@ -161,21 +183,19 @@ After the deadline, the cron job does not start the job. Jobs that do not meet their deadline in this way count as failed jobs. If this field is not specified, the jobs have no deadline. -The CronJob controller counts how many missed schedules happen for a cron job. If there are more than 100 missed -schedules, the cron job is no longer scheduled. When `.spec.startingDeadlineSeconds` is not set, the CronJob -controller counts missed schedules from `status.lastScheduleTime` until now. For example, one cron job is -supposed to run every minute, the `status.lastScheduleTime` of the cronjob is 5:00am, but now it's 7:00am. -That means 120 schedules were missed, so the cron job is no longer scheduled. If the `.spec.startingDeadlineSeconds` -field is set (not null), the CronJob controller counts how many missed jobs occurred from the value of -`.spec.startingDeadlineSeconds` until now. For example, if it is set to `200`, it counts how many missed -schedules occurred in the last 200 seconds. In that case, if there were more than 100 missed schedules in the -last 200 seconds, the cron job is no longer scheduled. +The CronJob controller counts how many missed schedules happen for a cron job. If there are more than 100 missed schedules, the cron job is no longer scheduled. When `.spec.startingDeadlineSeconds` is not set, the CronJob controller counts missed schedules from `status.lastScheduleTime` until now. + +For example, one cron job is supposed to run every minute, the `status.lastScheduleTime` of the cronjob is 5:00am, but now it's 7:00am.
That means 120 schedules were missed, so the cron job is no longer scheduled. + +If the `.spec.startingDeadlineSeconds` field is set (not null), the CronJob controller counts how many missed jobs occurred from the value of `.spec.startingDeadlineSeconds` until now. + +For example, if it is set to `200`, it counts how many missed schedules occurred in the last 200 seconds. In that case, if there were more than 100 missed schedules in the last 200 seconds, the cron job is no longer scheduled. ### Concurrency Policy The `.spec.concurrencyPolicy` field is also optional. It specifies how to treat concurrent executions of a job that is created by this cron job. -the spec may specify only one of the following concurrency policies: +The spec may specify only one of the following concurrency policies: * `Allow` (default): The cron job allows concurrently running jobs * `Forbid`: The cron job does not allow concurrent runs; if it is time for a new job run and the previous job run hasn't finished yet, the cron job skips the new job run diff --git a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md index ae9577a5a4add..0399e24c13785 100644 --- a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md +++ b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md @@ -46,9 +46,16 @@ cluster and reuse it for many jobs, as well as for long-running services. Start RabbitMQ as follows: ```shell -$ kubectl create -f examples/celery-rabbitmq/rabbitmq-service.yaml +kubectl create -f examples/celery-rabbitmq/rabbitmq-service.yaml +``` +``` service "rabbitmq-service" created -$ kubectl create -f examples/celery-rabbitmq/rabbitmq-controller.yaml +``` + +```shell +kubectl create -f examples/celery-rabbitmq/rabbitmq-controller.yaml +``` +``` replicationcontroller "rabbitmq-controller" created ``` @@ -64,7 +71,9 @@ First create a temporary interactive Pod. ```shell # Create a temporary interactive container -$ kubectl run -i --tty temp --image ubuntu:18.04 +kubectl run -i --tty temp --image ubuntu:18.04 +``` +``` Waiting for pod default/temp-loe07 to be running, status is Pending, pod ready: false ... [ previous line repeats several times .. hit return when it stops ] ... ``` @@ -161,9 +170,11 @@ For our example, we will create the queue and fill it using the amqp command lin In practice, you might write a program to fill the queue using an amqp client library. ```shell -$ /usr/bin/amqp-declare-queue --url=$BROKER_URL -q job1 -d +/usr/bin/amqp-declare-queue --url=$BROKER_URL -q job1 -d job1 -$ for f in apple banana cherry date fig grape lemon melon +``` +```shell +for f in apple banana cherry date fig grape lemon melon do /usr/bin/amqp-publish --url=$BROKER_URL -r job1 -p -b $f done @@ -184,7 +195,7 @@ example program: Give the script execution permission: ```shell -$ chmod +x worker.py +chmod +x worker.py ``` Now, build an image. If you are working in the source @@ -195,7 +206,7 @@ and [worker.py](/examples/application/job/rabbitmq/worker.py). In either case, build the image with this command: ```shell -$ docker build -t job-wq-1 . +docker build -t job-wq-1 . ``` For the [Docker Hub](https://hub.docker.com/), tag your app image with @@ -234,13 +245,15 @@ done. So we set, `.spec.completions: 8` for the example, since we put 8 items i So, now run the Job: ```shell -kubectl create -f ./job.yaml +kubectl apply -f ./job.yaml ``` Now wait a bit, then check on the job. 
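Instead of waiting an arbitrary amount of time, you can optionally block until the Job reports completion before describing it. This is a hedged sketch using `kubectl wait`, which is not part of the original walkthrough; the job name matches the example above.

```shell
# Sketch: block for up to five minutes until the work-queue Job completes.
kubectl wait --for=condition=complete job/job-wq-1 --timeout=300s
```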
```shell -$ kubectl describe jobs/job-wq-1 +kubectl describe jobs/job-wq-1 +``` +``` Name: job-wq-1 Namespace: default Selector: controller-uid=41d75705-92df-11e7-b85e-fa163ee3c11f @@ -295,7 +308,7 @@ This approach creates a pod for every work item. If your work items only take a though, creating a Pod for every work item may add a lot of overhead. Consider another [example](/docs/tasks/job/fine-parallel-processing-work-queue/), that executes multiple work items per Pod. -In this example, we used use the `amqp-consume` utility to read the message +In this example, we use the `amqp-consume` utility to read the message from the queue and run our actual program. This has the advantage that you do not need to modify your program to be aware of the queue. A [different example](/docs/tasks/job/fine-parallel-processing-work-queue/), shows how to diff --git a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md index 3f0de4e88499c..16b9327c76790 100644 --- a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md +++ b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md @@ -48,19 +48,7 @@ For this example, for simplicity, we will start a single instance of Redis. See the [Redis Example](https://github.com/kubernetes/examples/tree/master/guestbook) for an example of deploying Redis scalably and redundantly. -If you are working from the website source tree, you can go to the following -directory and start a temporary Pod running Redis and a service so we can find it. - -```shell -$ cd content/en/examples/application/job/redis -$ kubectl create -f ./redis-pod.yaml -pod/redis-master created -$ kubectl create -f ./redis-service.yaml -service/redis created -``` - -If you're not working from the source tree, you could also download the following -files directly: +You could also download the following files directly: - [`redis-pod.yaml`](/examples/application/job/redis/redis-pod.yaml) - [`redis-service.yaml`](/examples/application/job/redis/redis-service.yaml) @@ -78,7 +66,7 @@ printed. Start a temporary interactive pod for running the Redis CLI. ```shell -$ kubectl run -i --tty temp --image redis --command "/bin/sh" +kubectl run -i --tty temp --image redis --command "/bin/sh" Waiting for pod default/redis2-c7h78 to be running, status is Pending, pod ready: false Hit enter for command prompt ``` @@ -138,9 +126,7 @@ client library to get work. Here it is: {{< codenew language="python" file="application/job/redis/worker.py" >}} -If you are working from the source tree, change directory to the -`content/en/examples/application/job/redis/` directory. -Otherwise, download [`worker.py`](/examples/application/job/redis/worker.py), +You could also download [`worker.py`](/examples/application/job/redis/worker.py), [`rediswq.py`](/examples/application/job/redis/rediswq.py), and [`Dockerfile`](/examples/application/job/redis/Dockerfile) files, then build the image: @@ -196,13 +182,13 @@ too. So, now run the Job: ```shell -kubectl create -f ./job.yaml +kubectl apply -f ./job.yaml ``` Now wait a bit, then check on the job. 
```shell -$ kubectl describe jobs/job-wq-2 +kubectl describe jobs/job-wq-2 Name: job-wq-2 Namespace: default Selector: controller-uid=b1c7e4e3-92e1-11e7-b85e-fa163ee3c11f @@ -229,7 +215,7 @@ Events: 33s 33s 1 {job-controller } Normal SuccessfulCreate Created pod: job-wq-2-lglf8 -$ kubectl logs pods/job-wq-2-7r7b2 +kubectl logs pods/job-wq-2-7r7b2 Worker with sessionID: bbd72d0a-9e5c-4dd6-abf6-416cc267991f Initial queue state: empty=False Working on banana diff --git a/content/en/docs/tasks/job/parallel-processing-expansion.md b/content/en/docs/tasks/job/parallel-processing-expansion.md index b71f1c7c2e6f8..1567ac6f9de2d 100644 --- a/content/en/docs/tasks/job/parallel-processing-expansion.md +++ b/content/en/docs/tasks/job/parallel-processing-expansion.md @@ -43,8 +43,8 @@ Next, expand the template into multiple files, one for each item to be processed ```shell # Expand files into a temporary directory -$ mkdir ./jobs -$ for i in apple banana cherry +mkdir ./jobs +for i in apple banana cherry do cat job-tmpl.yaml | sed "s/\$ITEM/$i/" > ./jobs/job-$i.yaml done @@ -53,7 +53,7 @@ done Check if it worked: ```shell -$ ls jobs/ +ls jobs/ job-apple.yaml job-banana.yaml job-cherry.yaml @@ -66,7 +66,7 @@ to generate the Job objects. Next, create all the jobs with one kubectl command: ```shell -$ kubectl create -f ./jobs +kubectl create -f ./jobs job "process-item-apple" created job "process-item-banana" created job "process-item-cherry" created @@ -75,7 +75,7 @@ job "process-item-cherry" created Now, check on the jobs: ```shell -$ kubectl get jobs -l jobgroup=jobexample +kubectl get jobs -l jobgroup=jobexample NAME DESIRED SUCCESSFUL AGE process-item-apple 1 1 31s process-item-banana 1 1 31s @@ -89,7 +89,7 @@ do not care to see.) We can check on the pods as well using the same label selector: ```shell -$ kubectl get pods -l jobgroup=jobexample +kubectl get pods -l jobgroup=jobexample NAME READY STATUS RESTARTS AGE process-item-apple-kixwv 0/1 Completed 0 4m process-item-banana-wrsf7 0/1 Completed 0 4m @@ -100,7 +100,7 @@ There is not a single command to check on the output of all jobs at once, but looping over all the pods is pretty easy: ```shell -$ for p in $(kubectl get pods -l jobgroup=jobexample -o name) +for p in $(kubectl get pods -l jobgroup=jobexample -o name) do kubectl logs $p done @@ -178,7 +178,7 @@ cat job.yaml.jinja2 | render_template > jobs.yaml Or sent directly to kubectl, like this: ```shell -cat job.yaml.jinja2 | render_template | kubectl create -f - +cat job.yaml.jinja2 | render_template | kubectl apply -f - ``` ## Alternatives diff --git a/content/en/docs/tasks/manage-daemon/update-daemon-set.md b/content/en/docs/tasks/manage-daemon/update-daemon-set.md index 4b380f772d4d3..38baee076e8c3 100644 --- a/content/en/docs/tasks/manage-daemon/update-daemon-set.md +++ b/content/en/docs/tasks/manage-daemon/update-daemon-set.md @@ -56,7 +56,7 @@ If you haven't created the DaemonSet in the system, check your DaemonSet manifest with the following command instead: ```shell -kubectl create -f ds.yaml --dry-run -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' +kubectl apply -f ds.yaml --dry-run -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' ``` The output from both commands should be: @@ -76,7 +76,7 @@ step 3. 
After verifying the update strategy of the DaemonSet manifest, create the DaemonSet: ```shell -kubectl create -f ds.yaml +kubectl apply -f ds.yaml ``` Alternatively, use `kubectl apply` to create the same DaemonSet if you plan to diff --git a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md index 91ae88bb8e17e..c751d00261cae 100644 --- a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md +++ b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md @@ -142,9 +142,9 @@ Report issues with this device plugin and installation method to [GoogleCloudPla Instructions for using NVIDIA GPUs on GKE are [here](https://cloud.google.com/kubernetes-engine/docs/how-to/gpus) -## Clusters containing different types of NVIDIA GPUs +## Clusters containing different types of GPUs -If different nodes in your cluster have different types of NVIDIA GPUs, then you +If different nodes in your cluster have different types of GPUs, then you can use [Node Labels and Node Selectors](/docs/tasks/configure-pod-container/assign-pods-nodes/) to schedule pods to appropriate nodes. @@ -156,6 +156,39 @@ kubectl label nodes <node-with-k80> accelerator=nvidia-tesla-k80 kubectl label nodes <node-with-p100> accelerator=nvidia-tesla-p100 ``` +For AMD GPUs, you can deploy [Node Labeller](https://github.com/RadeonOpenCompute/k8s-device-plugin/tree/master/cmd/k8s-node-labeller), which automatically labels your nodes with GPU properties. Currently supported properties: + +* Device ID (-device-id) +* VRAM Size (-vram) +* Number of SIMD (-simd-count) +* Number of Compute Unit (-cu-count) +* Firmware and Feature Versions (-firmware) +* GPU Family, in two-letter acronym (-family) + * SI - Southern Islands + * CI - Sea Islands + * KV - Kaveri + * VI - Volcanic Islands + * CZ - Carrizo + * AI - Arctic Islands + * RV - Raven + +Example result: + + $ kubectl describe node cluster-node-23 + Name: cluster-node-23 + Roles: <none> + Labels: beta.amd.com/gpu.cu-count.64=1 + beta.amd.com/gpu.device-id.6860=1 + beta.amd.com/gpu.family.AI=1 + beta.amd.com/gpu.simd-count.256=1 + beta.amd.com/gpu.vram.16G=1 + beta.kubernetes.io/arch=amd64 + beta.kubernetes.io/os=linux + kubernetes.io/hostname=cluster-node-23 + Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock + node.alpha.kubernetes.io/ttl: 0 + ...... + Specify the GPU type in the pod spec: ```yaml diff --git a/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md b/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md index 82b3c34d279c2..50044d907f081 100644 --- a/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md +++ b/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md @@ -6,10 +6,10 @@ content_template: templates/task --- {{% capture overview %}} -{{< feature-state state="beta" >}} +{{< feature-state state="stable" >}} Kubernetes supports the allocation and consumption of pre-allocated huge pages -by applications in a Pod as a **beta** feature. This page describes how users +by applications in a Pod as a **GA** feature. This page describes how users can consume huge pages and the current limitations.
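Before relying on the huge pages feature described above, it can help to confirm that the nodes actually advertise pre-allocated huge pages. This is a hedged sketch and not part of the original page:

```shell
# Sketch: check what each node reports for pre-allocated huge pages
# (capacity and allocatable, e.g. hugepages-2Mi / hugepages-1Gi).
kubectl describe nodes | grep -i hugepages
```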
{{% /capture %}} diff --git a/content/en/docs/tasks/run-application/configure-pdb.md b/content/en/docs/tasks/run-application/configure-pdb.md index 36a06deddae90..c11ee1eb5e7f9 100644 --- a/content/en/docs/tasks/run-application/configure-pdb.md +++ b/content/en/docs/tasks/run-application/configure-pdb.md @@ -167,7 +167,7 @@ automatically responds to changes in the number of replicas of the corresponding ## Create the PDB object -You can create the PDB object with a command like `kubectl create -f mypdb.yaml`. +You can create the PDB object with a command like `kubectl apply -f mypdb.yaml`. You cannot update PDB objects. They must be deleted and re-created. @@ -179,7 +179,9 @@ Assuming you don't actually have pods matching `app: zookeeper` in your namespac then you'll see something like this: ```shell -$ kubectl get poddisruptionbudgets +kubectl get poddisruptionbudgets +``` +``` NAME MIN-AVAILABLE ALLOWED-DISRUPTIONS AGE zk-pdb 2 0 7s ``` @@ -187,7 +189,9 @@ zk-pdb 2 0 7s If there are matching pods (say, 3), then you would see something like this: ```shell -$ kubectl get poddisruptionbudgets +kubectl get poddisruptionbudgets +``` +``` NAME MIN-AVAILABLE ALLOWED-DISRUPTIONS AGE zk-pdb 2 1 7s ``` @@ -198,7 +202,9 @@ counted the matching pods, and updated the status of the PDB. You can get more information about the status of a PDB with this command: ```shell -$ kubectl get poddisruptionbudgets zk-pdb -o yaml +kubectl get poddisruptionbudgets zk-pdb -o yaml +``` +```yaml apiVersion: policy/v1beta1 kind: PodDisruptionBudget metadata: diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index 8cdf150782167..d6f95f0651332 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -65,7 +65,9 @@ It defines an index.php page which performs some CPU intensive computations: First, we will start a deployment running the image and expose it as a service: ```shell -$ kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80 +kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80 +``` +``` service/php-apache created deployment.apps/php-apache created ``` @@ -82,14 +84,18 @@ Roughly speaking, HPA will increase and decrease the number of replicas See [here](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#autoscaling-algorithm) for more details on the algorithm. ```shell -$ kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10 +kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10 +``` +``` horizontalpodautoscaler.autoscaling/php-apache autoscaled ``` We may check the current status of autoscaler by running: ```shell -$ kubectl get hpa +kubectl get hpa +``` +``` NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 18s @@ -104,17 +110,19 @@ Now, we will see how the autoscaler reacts to increased load. 
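While the load generator in the next step runs, it is convenient to keep the autoscaler under observation from a separate terminal. This is a hedged sketch that is not part of the original walkthrough; the `php-apache` name matches the Deployment created above.

```shell
# Sketch: watch the HorizontalPodAutoscaler update its target metric and
# replica count in real time while the load test runs.
kubectl get hpa php-apache --watch
```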
We will start a container, and send an infinite loop of queries to the php-apache service (please run it in a different terminal): ```shell -$ kubectl run -i --tty load-generator --image=busybox /bin/sh +kubectl run -i --tty load-generator --image=busybox /bin/sh Hit enter for command prompt -$ while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done +while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done ``` Within a minute or so, we should see the higher CPU load by executing: ```shell -$ kubectl get hpa +kubectl get hpa +``` +``` NAME REFERENCE TARGET CURRENT MINPODS MAXPODS REPLICAS AGE php-apache Deployment/php-apache/scale 305% / 50% 305% 1 10 1 3m @@ -124,7 +132,9 @@ Here, CPU consumption has increased to 305% of the request. As a result, the deployment was resized to 7 replicas: ```shell -$ kubectl get deployment php-apache +kubectl get deployment php-apache +``` +``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE php-apache 7 7 7 7 19m ``` @@ -145,11 +155,17 @@ the load generation by typing ` + C`. Then we will verify the result state (after a minute or so): ```shell -$ kubectl get hpa +kubectl get hpa +``` +``` NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 11m +``` -$ kubectl get deployment php-apache +```shell +kubectl get deployment php-apache +``` +``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE php-apache 1 1 1 1 27m ``` @@ -172,7 +188,7 @@ by making use of the `autoscaling/v2beta2` API version. First, get the YAML of your HorizontalPodAutoscaler in the `autoscaling/v2beta2` form: ```shell -$ kubectl get hpa.v2beta2.autoscaling -o yaml > /tmp/hpa-v2.yaml +kubectl get hpa.v2beta2.autoscaling -o yaml > /tmp/hpa-v2.yaml ``` Open the `/tmp/hpa-v2.yaml` file in an editor, and you should see YAML which looks like this: @@ -288,7 +304,7 @@ spec: resource: name: cpu target: - kind: AverageUtilization + type: AverageUtilization averageUtilization: 50 - type: Pods pods: @@ -401,7 +417,9 @@ The conditions appear in the `status.conditions` field. To see the conditions a we can use `kubectl describe hpa`: ```shell -$ kubectl describe hpa cm-test +kubectl describe hpa cm-test +``` +```shell Name: cm-test Namespace: prom Labels: @@ -454,7 +472,9 @@ can use the following file to create it declaratively: We will create the autoscaler by executing the following command: ```shell -$ kubectl create -f https://k8s.io/examples/application/hpa/php-apache.yaml +kubectl create -f https://k8s.io/examples/application/hpa/php-apache.yaml +``` +``` horizontalpodautoscaler.autoscaling/php-apache created ``` diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md index 2e3df51fd9eab..38d615293d7a9 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -249,7 +249,7 @@ Kubernetes 1.6 adds support for making use of custom metrics in the Horizontal P You can add custom metrics for the Horizontal Pod Autoscaler to use in the `autoscaling/v2beta2` API. Kubernetes then queries the new custom metrics API to fetch the values of the appropriate custom metrics. -See [Support for metrics APIs](#support-for-metrics-APIs) for the requirements. +See [Support for metrics APIs](#support-for-metrics-apis) for the requirements. 
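A hedged way to confirm that the metrics APIs referenced above are actually being served in your cluster (this assumes metrics-server or an equivalent adapter is installed; neither command is part of the original page):

```shell
# Sketch: list the registered aggregated APIs and query the resource
# metrics API directly.
kubectl get apiservices | grep metrics
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
```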
## Support for metrics APIs diff --git a/content/en/docs/tasks/run-application/rolling-update-replication-controller.md b/content/en/docs/tasks/run-application/rolling-update-replication-controller.md index e0ace5c4c9794..1802465d11323 100644 --- a/content/en/docs/tasks/run-application/rolling-update-replication-controller.md +++ b/content/en/docs/tasks/run-application/rolling-update-replication-controller.md @@ -37,7 +37,7 @@ A rolling update works by: Rolling updates are initiated with the `kubectl rolling-update` command: - $ kubectl rolling-update NAME \ + kubectl rolling-update NAME \ ([NEW_NAME] --image=IMAGE | -f FILE) {{% /capture %}} @@ -50,7 +50,7 @@ Rolling updates are initiated with the `kubectl rolling-update` command: To initiate a rolling update using a configuration file, pass the new file to `kubectl rolling-update`: - $ kubectl rolling-update NAME -f FILE + kubectl rolling-update NAME -f FILE The configuration file must: @@ -66,17 +66,17 @@ Replication controller configuration files are described in ### Examples // Update pods of frontend-v1 using new replication controller data in frontend-v2.json. - $ kubectl rolling-update frontend-v1 -f frontend-v2.json + kubectl rolling-update frontend-v1 -f frontend-v2.json // Update pods of frontend-v1 using JSON data passed into stdin. - $ cat frontend-v2.json | kubectl rolling-update frontend-v1 -f - + cat frontend-v2.json | kubectl rolling-update frontend-v1 -f - ## Updating the container image To update only the container image, pass a new image name and tag with the `--image` flag and (optionally) a new controller name: - $ kubectl rolling-update NAME [NEW_NAME] --image=IMAGE:TAG + kubectl rolling-update NAME [NEW_NAME] --image=IMAGE:TAG The `--image` flag is only supported for single-container pods. Specifying `--image` with multi-container pods returns an error. 
@@ -95,10 +95,10 @@ Moreover, the use of `:latest` is not recommended, see ### Examples // Update the pods of frontend-v1 to frontend-v2 - $ kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2 + kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2 // Update the pods of frontend, keeping the replication controller name - $ kubectl rolling-update frontend --image=image:v2 + kubectl rolling-update frontend --image=image:v2 ## Required and optional fields @@ -165,14 +165,18 @@ spec: To update to version 1.9.1, you can use [`kubectl rolling-update --image`](https://git.k8s.io/community/contributors/design-proposals/cli/simple-rolling-update.md) to specify the new image: ```shell -$ kubectl rolling-update my-nginx --image=nginx:1.9.1 +kubectl rolling-update my-nginx --image=nginx:1.9.1 +``` +``` Created my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 ``` In another window, you can see that `kubectl` added a `deployment` label to the pods, whose value is a hash of the configuration, to distinguish the new pods from the old: ```shell -$ kubectl get pods -l app=nginx -L deployment +kubectl get pods -l app=nginx -L deployment +``` +``` NAME READY STATUS RESTARTS AGE DEPLOYMENT my-nginx-ccba8fbd8cc8160970f63f9a2696fc46-k156z 1/1 Running 0 1m ccba8fbd8cc8160970f63f9a2696fc46 my-nginx-ccba8fbd8cc8160970f63f9a2696fc46-v95yh 1/1 Running 0 35s ccba8fbd8cc8160970f63f9a2696fc46 @@ -199,7 +203,9 @@ replicationcontroller "my-nginx" rolling updated If you encounter a problem, you can stop the rolling update midway and revert to the previous version using `--rollback`: ```shell -$ kubectl rolling-update my-nginx --rollback +kubectl rolling-update my-nginx --rollback +``` +``` Setting "my-nginx" replicas to 1 Continuing update with existing controller my-nginx. Scaling up nginx from 1 to 1, scaling down my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 from 1 to 0 (keep 1 pods available, don't exceed 2 pods) @@ -239,7 +245,9 @@ spec: and roll it out: ```shell -$ kubectl rolling-update my-nginx -f ./nginx-rc.yaml +kubectl rolling-update my-nginx -f ./nginx-rc.yaml +``` +``` Created my-nginx-v4 Scaling up my-nginx-v4 from 0 to 5, scaling down my-nginx from 4 to 0 (keep 4 pods available, don't exceed 5 pods) Scaling my-nginx-v4 up to 1 diff --git a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md index 8ba5248b09dee..1c74858f247a4 100644 --- a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md +++ b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md @@ -60,7 +60,7 @@ and a StatefulSet. Create the ConfigMap from the following YAML configuration file: ```shell -kubectl create -f https://k8s.io/examples/application/mysql/mysql-configmap.yaml +kubectl apply -f https://k8s.io/examples/application/mysql/mysql-configmap.yaml ``` {{< codenew file="application/mysql/mysql-configmap.yaml" >}} @@ -80,7 +80,7 @@ based on information provided by the StatefulSet controller. Create the Services from the following YAML configuration file: ```shell -kubectl create -f https://k8s.io/examples/application/mysql/mysql-services.yaml +kubectl apply -f https://k8s.io/examples/application/mysql/mysql-services.yaml ``` {{< codenew file="application/mysql/mysql-services.yaml" >}} @@ -106,7 +106,7 @@ writes. 
Finally, create the StatefulSet from the following YAML configuration file: ```shell -kubectl create -f https://k8s.io/examples/application/mysql/mysql-statefulset.yaml +kubectl apply -f https://k8s.io/examples/application/mysql/mysql-statefulset.yaml ``` {{< codenew file="application/mysql/mysql-statefulset.yaml" >}} diff --git a/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md b/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md index 275dcfec0735a..87f0b01ad0b32 100644 --- a/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md +++ b/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md @@ -53,11 +53,11 @@ for a secure solution. 1. Deploy the PV and PVC of the YAML file: - kubectl create -f https://k8s.io/examples/application/mysql/mysql-pv.yaml + kubectl apply -f https://k8s.io/examples/application/mysql/mysql-pv.yaml 1. Deploy the contents of the YAML file: - kubectl create -f https://k8s.io/examples/application/mysql/mysql-deployment.yaml + kubectl apply -f https://k8s.io/examples/application/mysql/mysql-deployment.yaml 1. Display information about the Deployment: diff --git a/content/en/docs/tasks/run-application/scale-stateful-set.md b/content/en/docs/tasks/run-application/scale-stateful-set.md index c47fd8f47d13b..462025836dafe 100644 --- a/content/en/docs/tasks/run-application/scale-stateful-set.md +++ b/content/en/docs/tasks/run-application/scale-stateful-set.md @@ -50,7 +50,7 @@ kubectl scale statefulsets --replicas= Alternatively, you can do [in-place updates](/docs/concepts/cluster-administration/manage-deployment/#in-place-updates-of-resources) on your StatefulSets. -If your StatefulSet was initially created with `kubectl apply` or `kubectl create --save-config`, +If your StatefulSet was initially created with `kubectl apply`, update `.spec.replicas` of the StatefulSet manifests, and then do a `kubectl apply`: ```shell diff --git a/content/en/docs/tasks/run-application/update-api-object-kubectl-patch.md b/content/en/docs/tasks/run-application/update-api-object-kubectl-patch.md index d5c3f68213973..d77108977b767 100644 --- a/content/en/docs/tasks/run-application/update-api-object-kubectl-patch.md +++ b/content/en/docs/tasks/run-application/update-api-object-kubectl-patch.md @@ -31,7 +31,7 @@ is a Pod that has one container: Create the Deployment: ```shell -kubectl create -f https://k8s.io/examples/application/deployment-patch.yaml +kubectl apply -f https://k8s.io/examples/application/deployment-patch.yaml ``` View the Pods associated with your Deployment: diff --git a/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md b/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md index 728d7a8950854..a265c91974538 100644 --- a/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md +++ b/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md @@ -53,12 +53,20 @@ svc-cat/catalog 0.0.1 service-catalog API server and controller-manag... Your Kubernetes cluster must have RBAC enabled, which requires your Tiller Pod(s) to have `cluster-admin` access. 
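One common way to give Tiller the `cluster-admin` access mentioned above is a ClusterRoleBinding for its service account. This is a hedged sketch; the `kube-system:tiller` service account is an assumption about how Tiller was installed and is not taken from the hunks above.

```shell
# Sketch: bind cluster-admin to the service account Tiller runs under.
# Adjust the namespace and service account name to match your installation.
kubectl create clusterrolebinding tiller-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
```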
-If you are using Minikube, run the `minikube start` command with the following flag: +When using Minikube v0.25 or older, you must run Minikube with RBAC explicitly enabled: ```shell minikube start --extra-config=apiserver.Authorization.Mode=RBAC ``` +When using Minikube v0.26+, run: + +```shell +minikube start +``` + +With Minikube v0.26+, do not specify `--extra-config`. The flag has since been changed to --extra-config=apiserver.authorization-mode and Minikube now uses RBAC by default. Specifying the older flag may cause the start command to hang. + If you are using `hack/local-up-cluster.sh`, set the `AUTHORIZATION_MODE` environment variable with the following values: ``` diff --git a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md index 2180f05d369d6..2edf0486f6f4b 100644 --- a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md +++ b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md @@ -104,7 +104,7 @@ Generate a CSR yaml blob and send it to the apiserver by running the following command: ```shell -cat <] ``` - {{< note >}} - If you do not specify a `DownloadLocation`, `kubectl` will be installed in the user's temp Directory. - {{< /note >}} + {{< note >}}If you do not specify a `DownloadLocation`, `kubectl` will be installed in the user's temp Directory.{{< /note >}} The installer creates `$HOME/.kube` and instructs it to create a config file @@ -118,36 +120,41 @@ If you are on Windows and using [Powershell Gallery](https://www.powershellgalle kubectl version ``` - {{< note >}} - Updating the installation is performed by rerunning the two commands listed in step 1. - {{< /note >}} + {{< note >}}Updating the installation is performed by rerunning the two commands listed in step 1.{{< /note >}} -## Install with Chocolatey on Windows +## Install on Windows using Chocolatey or scoop -If you are on Windows and using [Chocolatey](https://chocolatey.org) package manager, you can install kubectl with Chocolatey. +To install kubectl on Windows you can use either [Chocolatey](https://chocolatey.org) package manager or [scoop](https://scoop.sh) command-line installer. +{{< tabs name="kubectl_win_install" >}} +{{% tab name="choco" %}} -1. Run the installation command: - - ``` choco install kubernetes-cli - ``` - + +{{% /tab %}} +{{% tab name="scoop" %}} + + scoop install kubectl + +{{% /tab %}} +{{< /tabs >}} 2. Test to ensure the version you installed is sufficiently up-to-date: ``` kubectl version ``` -3. Change to your %HOME% directory: - For example: `cd C:\users\yourusername` +3. Navigate to your home directory: -4. Create the .kube directory: + ``` + cd %USERPROFILE% + ``` +4. Create the `.kube` directory: ``` mkdir .kube ``` -5. Change to the .kube directory you just created: +5. Change to the `.kube` directory you just created: ``` cd .kube @@ -159,9 +166,7 @@ If you are on Windows and using [Chocolatey](https://chocolatey.org) package man New-Item config -type file ``` - {{< note >}} - Edit the config file with a text editor of your choice, such as Notepad. - {{< /note >}} + {{< note >}}Edit the config file with a text editor of your choice, such as Notepad.{{< /note >}} ## Download as part of the Google Cloud SDK @@ -284,63 +289,139 @@ kubectl cluster-info dump ## Enabling shell autocompletion -kubectl includes autocompletion support, which can save a lot of typing! +kubectl provides autocompletion support for Bash and Zsh, which can save you a lot of typing! 
-The completion script itself is generated by kubectl, so you typically just need to invoke it from your profile. +Below are the procedures to set up autocompletion for Bash (including the difference between Linux and macOS) and Zsh. -Common examples are provided here. For more details, consult `kubectl completion -h`. +{{< tabs name="kubectl_autocompletion" >}} -### On Linux, using bash -On CentOS Linux, you may need to install the bash-completion package which is not installed by default. +{{% tab name="Bash on Linux" %}} -```shell -yum install bash-completion -y -``` +### Introduction -To add kubectl autocompletion to your current shell, run `source <(kubectl completion bash)`. +The kubectl completion script for Bash can be generated with the command `kubectl completion bash`. Sourcing the completion script in your shell enables kubectl autocompletion. -To add kubectl autocompletion to your profile, so it is automatically loaded in future shells run: +However, the completion script depends on [**bash-completion**](https://github.com/scop/bash-completion), which means that you have to install this software first (you can test if you have bash-completion already installed by running `type _init_completion`). + +### Install bash-completion + +bash-completion is provided by many package managers (see [here](https://github.com/scop/bash-completion#installation)). You can install it with `apt-get install bash-completion` or `yum install bash-completion`, etc. + +The above commands create `/usr/share/bash-completion/bash_completion`, which is the main script of bash-completion. Depending on your package manager, you have to manually source this file in your `~/.bashrc` file. + +To find out, reload your shell and run `type _init_completion`. If the command succeeds, you're already set, otherwise add the following to your `~/.bashrc` file: ```shell -echo "source <(kubectl completion bash)" >> ~/.bashrc +source /usr/share/bash-completion/bash_completion ``` -### On macOS, using bash -On macOS, you will need to install bash-completion support via [Homebrew](https://brew.sh/) first: +Reload your shell and verify that bash-completion is correctly installed by typing `type _init_completion`. + +### Enable kubectl autocompletion + +You now need to ensure that the kubectl completion script gets sourced in all your shell sessions. There are two ways in which you can do this: + +- Source the completion script in your `~/.bashrc` file: + + ```shell + echo 'source <(kubectl completion bash)' >>~/.bashrc + ``` + +- Add the completion script to the `/etc/bash_completion.d` directory: + + ```shell + kubectl completion bash >/etc/bash_completion.d/kubectl + ``` + +{{< note >}} +bash-completion sources all completion scripts in `/etc/bash_completion.d`. +{{< /note >}} + +Both approaches are equivalent. After reloading your shell, kubectl autocompletion should be working. + +{{% /tab %}} + + +{{% tab name="Bash on macOS" %}} + +{{< warning>}} +macOS includes Bash 3.2 by default. The kubectl completion script requires Bash 4.1+ and doesn't work with Bash 3.2. A possible way around this is to install a newer version of Bash on macOS (see instructions [here](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)). The below instructions only work if you are using Bash 4.1+. +{{< /warning >}} + +### Introduction + +The kubectl completion script for Bash can be generated with the command `kubectl completion bash`. Sourcing the completion script in your shell enables kubectl autocompletion. 
+ +However, the completion script depends on [**bash-completion**](https://github.com/scop/bash-completion), which means that you have to install this software first (you can test if you have bash-completion already installed by running `type _init_completion`). + +### Install bash-completion + +You can install bash-completion with Homebrew: ```shell -## If running Bash 3.2 included with macOS -brew install bash-completion -## or, if running Bash 4.1+ brew install bash-completion@2 ``` -Follow the "caveats" section of brew's output to add the appropriate bash completion path to your local .bashrc. - -If you installed kubectl using the [Homebrew instructions](#install-with-homebrew-on-macos) then kubectl completion should start working immediately. +{{< note >}} +The `@2` stands for bash-completion 2, which is required by the kubectl completion script (it doesn't work with bash-completion 1). In turn, bash-completion 2 requires Bash 4.1+, that's why you needed to upgrade Bash. +{{< /note >}} -If you have installed kubectl manually, you need to add kubectl autocompletion to the bash-completion: +As stated in the output of `brew install` ("Caveats" section), add the following lines to your `~/.bashrc` or `~/.bash_profile` file: ```shell -kubectl completion bash > $(brew --prefix)/etc/bash_completion.d/kubectl +export BASH_COMPLETION_COMPAT_DIR=/usr/local/etc/bash_completion.d +[[ -r /usr/local/etc/profile.d/bash_completion.sh ]] && . /usr/local/etc/profile.d/bash_completion.sh ``` -The Homebrew project is independent from Kubernetes, so the bash-completion packages are not guaranteed to work. +Reload your shell and verify that bash-completion is correctly installed by typing `type _init_completion`. + +### Enable kubectl autocompletion + +You now need to ensure that the kubectl completion script gets sourced in all your shell sessions. There are multiple ways in which you can do this: -### Using Zsh -If you are using zsh edit the ~/.zshrc file and add the following code to enable kubectl autocompletion: +- Source the completion script in your `~/.bashrc` file: + + ```shell + echo 'source <(kubectl completion bash)' >>~/.bashrc + + ``` + +- Add the completion script to `/usr/local/etc/bash_completion.d`: + + ```shell + kubectl completion bash >/usr/local/etc/bash_completion.d/kubectl + ``` + +- If you installed kubectl with Homebrew (as explained [here](#install-with-homebrew-on-macos)), then the completion script was automatically installed to `/usr/local/etc/bash_completion.d/kubectl`. In that case, you don't need to do anything. + +{{< note >}} +bash-completion (if installed with Homebrew) sources all the completion scripts in the directory that is set in the `BASH_COMPLETION_COMPAT_DIR` environment variable. +{{< /note >}} + +All approaches are equivalent. After reloading your shell, kubectl autocompletion should be working. +{{% /tab %}} + +{{% tab name="Zsh" %}} + +The kubectl completion script for Zsh can be generated with the command `kubectl completion zsh`. Sourcing the completion script in your shell enables kubectl autocompletion. + +To do so in all your shell sessions, add the following to your `~/.zshrc` file: ```shell -if [ $commands[kubectl] ]; then - source <(kubectl completion zsh) -fi +source <(kubectl completion zsh) ``` -Or when using [Oh-My-Zsh](http://ohmyz.sh/), edit the ~/.zshrc file and update the `plugins=` line to include the kubectl plugin. +After reloading your shell, kubectl autocompletion should be working. 
+ +If you get an error like `complete:13: command not found: compdef`, then add the following to the beginning of your `~/.zshrc` file: ```shell -plugins=(kubectl) +autoload -Uz compinit +compinit ``` +{{% /tab %}} +{{< /tabs >}} + {{% /capture %}} {{% capture whatsnext %}} diff --git a/content/en/docs/tasks/tools/install-minikube.md b/content/en/docs/tasks/tools/install-minikube.md index 32edccfd2c00a..e36e453dd0f5a 100644 --- a/content/en/docs/tasks/tools/install-minikube.md +++ b/content/en/docs/tasks/tools/install-minikube.md @@ -2,6 +2,9 @@ title: Install Minikube content_template: templates/task weight: 20 +card: + name: tasks + weight: 10 --- {{% capture overview %}} @@ -59,7 +62,7 @@ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/miniku Here's an easy way to add the Minikube executable to your path: ```shell -sudo cp minikube /usr/local/bin && rm minikube +sudo mv minikube /usr/local/bin ``` ### Linux @@ -111,4 +114,19 @@ To install Minikube manually on windows using [Windows Installer](https://docs.m {{% /capture %}} +## Cleanup everything to start fresh +If you have previously installed minikube, and run: +```shell +minikube start +``` + +And this command returns an error: +```shell +machine does not exist +``` + +You need to wipe the configuration files: +```shell +rm -rf ~/.minikube +``` diff --git a/content/en/docs/test.md b/content/en/docs/test.md index e7faf57d12ec9..1e682f538dd39 100644 --- a/content/en/docs/test.md +++ b/content/en/docs/test.md @@ -344,7 +344,7 @@ Warnings point out something that could cause harm if ignored. To add shortcodes to includes. {{< note >}} -{{< include "federation-current-state.md" >}} +{{< include "task-tutorial-prereqs.md" >}} {{< /note >}} ## Katacoda Embedded Live Environment diff --git a/content/en/docs/tutorials/clusters/apparmor.md b/content/en/docs/tutorials/clusters/apparmor.md index 34d85179fbbf1..4e1fff809e902 100644 --- a/content/en/docs/tutorials/clusters/apparmor.md +++ b/content/en/docs/tutorials/clusters/apparmor.md @@ -46,7 +46,9 @@ Make sure: receiving the expected protections, it is important to verify the Kubelet version of your nodes: ```shell - $ kubectl get nodes -o=jsonpath=$'{range .items[*]}{@.metadata.name}: {@.status.nodeInfo.kubeletVersion}\n{end}' + kubectl get nodes -o=jsonpath=$'{range .items[*]}{@.metadata.name}: {@.status.nodeInfo.kubeletVersion}\n{end}' + ``` + ``` gke-test-default-pool-239f5d02-gyn2: v1.4.0 gke-test-default-pool-239f5d02-x1kf: v1.4.0 gke-test-default-pool-239f5d02-xwux: v1.4.0 @@ -58,7 +60,7 @@ Make sure: module is enabled, check the `/sys/module/apparmor/parameters/enabled` file: ```shell - $ cat /sys/module/apparmor/parameters/enabled + cat /sys/module/apparmor/parameters/enabled Y ``` @@ -76,7 +78,9 @@ Make sure: expanded. You can verify that your nodes are running docker with: ```shell - $ kubectl get nodes -o=jsonpath=$'{range .items[*]}{@.metadata.name}: {@.status.nodeInfo.containerRuntimeVersion}\n{end}' + kubectl get nodes -o=jsonpath=$'{range .items[*]}{@.metadata.name}: {@.status.nodeInfo.containerRuntimeVersion}\n{end}' + ``` + ``` gke-test-default-pool-239f5d02-gyn2: docker://1.11.2 gke-test-default-pool-239f5d02-x1kf: docker://1.11.2 gke-test-default-pool-239f5d02-xwux: docker://1.11.2 @@ -91,7 +95,9 @@ Make sure: node by checking the `/sys/kernel/security/apparmor/profiles` file. 
For example: ```shell - $ ssh gke-test-default-pool-239f5d02-gyn2 "sudo cat /sys/kernel/security/apparmor/profiles | sort" + ssh gke-test-default-pool-239f5d02-gyn2 "sudo cat /sys/kernel/security/apparmor/profiles | sort" + ``` + ``` apparmor-test-deny-write (enforce) apparmor-test-audit-write (enforce) docker-default (enforce) @@ -107,7 +113,9 @@ on nodes by checking the node ready condition message (though this is likely to later release): ```shell -$ kubectl get nodes -o=jsonpath=$'{range .items[*]}{@.metadata.name}: {.status.conditions[?(@.reason=="KubeletReady")].message}\n{end}' +kubectl get nodes -o=jsonpath=$'{range .items[*]}{@.metadata.name}: {.status.conditions[?(@.reason=="KubeletReady")].message}\n{end}' +``` +``` gke-test-default-pool-239f5d02-gyn2: kubelet is posting ready status. AppArmor enabled gke-test-default-pool-239f5d02-x1kf: kubelet is posting ready status. AppArmor enabled gke-test-default-pool-239f5d02-xwux: kubelet is posting ready status. AppArmor enabled @@ -148,14 +156,18 @@ prerequisites have not been met, the Pod will be rejected, and will not run. To verify that the profile was applied, you can look for the AppArmor security option listed in the container created event: ```shell -$ kubectl get events | grep Created +kubectl get events | grep Created +``` +``` 22s 22s 1 hello-apparmor Pod spec.containers{hello} Normal Created {kubelet e2e-test-stclair-minion-group-31nt} Created container with docker id 269a53b202d3; Security:[seccomp=unconfined apparmor=k8s-apparmor-example-deny-write] ``` You can also verify directly that the container's root process is running with the correct profile by checking its proc attr: ```shell -$ kubectl exec cat /proc/1/attr/current +kubectl exec cat /proc/1/attr/current +``` +``` k8s-apparmor-example-deny-write (enforce) ``` @@ -173,12 +185,12 @@ nodes. For this example we'll just use SSH to install the profiles, but other ap discussed in [Setting up nodes with profiles](#setting-up-nodes-with-profiles). 
```shell -$ NODES=( +NODES=( # The SSH-accessible domain names of your nodes gke-test-default-pool-239f5d02-gyn2.us-central1-a.my-k8s gke-test-default-pool-239f5d02-x1kf.us-central1-a.my-k8s gke-test-default-pool-239f5d02-xwux.us-central1-a.my-k8s) -$ for NODE in ${NODES[*]}; do ssh $NODE 'sudo apparmor_parser -q < profile k8s-apparmor-example-deny-write flags=(attach_disconnected) { @@ -198,14 +210,16 @@ Next, we'll run a simple "Hello AppArmor" pod with the deny-write profile: {{< codenew file="pods/security/hello-apparmor.yaml" >}} ```shell -$ kubectl create -f ./hello-apparmor.yaml +kubectl create -f ./hello-apparmor.yaml ``` If we look at the pod events, we can see that the Pod container was created with the AppArmor profile "k8s-apparmor-example-deny-write": ```shell -$ kubectl get events | grep hello-apparmor +kubectl get events | grep hello-apparmor +``` +``` 14s 14s 1 hello-apparmor Pod Normal Scheduled {default-scheduler } Successfully assigned hello-apparmor to gke-test-default-pool-239f5d02-gyn2 14s 14s 1 hello-apparmor Pod spec.containers{hello} Normal Pulling {kubelet gke-test-default-pool-239f5d02-gyn2} pulling image "busybox" 13s 13s 1 hello-apparmor Pod spec.containers{hello} Normal Pulled {kubelet gke-test-default-pool-239f5d02-gyn2} Successfully pulled image "busybox" @@ -216,14 +230,18 @@ $ kubectl get events | grep hello-apparmor We can verify that the container is actually running with that profile by checking its proc attr: ```shell -$ kubectl exec hello-apparmor cat /proc/1/attr/current +kubectl exec hello-apparmor cat /proc/1/attr/current +``` +``` k8s-apparmor-example-deny-write (enforce) ``` Finally, we can see what happens if we try to violate the profile by writing to a file: ```shell -$ kubectl exec hello-apparmor touch /tmp/test +kubectl exec hello-apparmor touch /tmp/test +``` +``` touch: /tmp/test: Permission denied error: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1 ``` @@ -231,7 +249,9 @@ error: error executing remote command: command terminated with non-zero exit cod To wrap up, let's look at what happens if we try to specify a profile that hasn't been loaded: ```shell -$ kubectl create -f /dev/stdin <`: Refers to a profile loaded on the node (localhost) by name. - The possible profile names are detailed in the - [core policy reference](http://wiki.apparmor.net/index.php/AppArmor_Core_Policy_Reference#Profile_names_and_attachment_specifications). + [core policy reference](https://gitlab.com/apparmor/apparmor/wikis/AppArmor_Core_Policy_Reference#profile-names-and-attachment-specifications). - `unconfined`: This effectively disables AppArmor on the container. Any other profile reference format is invalid. diff --git a/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md b/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md index 47c426813ddb3..df28e4669280f 100644 --- a/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md +++ b/content/en/docs/tutorials/configuration/configure-redis-using-configmap.md @@ -14,9 +14,10 @@ This page provides a real world example of how to configure Redis using a Config {{% capture objectives %}} -* Create a ConfigMap. -* Create a pod specification using the ConfigMap. -* Create the pod. 
+* Create a `kustomization.yaml` file containing: + * a ConfigMap generator + * a Pod resource config using the ConfigMap +* Apply the directory by running `kubectl apply -k ./` * Verify that the configuration was correctly applied. {{% /capture %}} @@ -24,6 +25,7 @@ This page provides a real world example of how to configure Redis using a Config {{% capture prerequisites %}} * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} +* The example shown on this page works with `kubectl` 1.14 and above. * Understand [Configure Containers Using a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/). {{% /capture %}} @@ -35,49 +37,48 @@ This page provides a real world example of how to configure Redis using a Config You can follow the steps below to configure a Redis cache using data stored in a ConfigMap. -First create a ConfigMap from the `redis-config` file: +First create a `kustomization.yaml` containing a ConfigMap from the `redis-config` file: {{< codenew file="pods/config/redis-config" >}} ```shell curl -OL https://k8s.io/examples/pods/config/redis-config -kubectl create configmap example-redis-config --from-file=redis-config -``` -```shell -configmap/example-redis-config created +cat <./kustomization.yaml +configMapGenerator: +- name: example-redis-config + files: + - redis-config +EOF ``` -Examine the created ConfigMap: +Add the pod resource config to the `kustomization.yaml`: + +{{< codenew file="pods/config/redis-pod.yaml" >}} ```shell -kubectl get configmap example-redis-config -o yaml -``` +curl -OL https://k8s.io/examples/pods/config/redis-pod.yaml -```yaml -apiVersion: v1 -data: - redis-config: | - maxmemory 2mb - maxmemory-policy allkeys-lru -kind: ConfigMap -metadata: - creationTimestamp: 2016-03-30T18:14:41Z - name: example-redis-config - namespace: default - resourceVersion: "24686" - selfLink: /api/v1/namespaces/default/configmaps/example-redis-config - uid: 460a2b6e-f6a3-11e5-8ae5-42010af00002 +cat <>./kustomization.yaml +resources: +- redis-pod.yaml +EOF ``` -Now create a pod specification that uses the config data stored in the ConfigMap: +Apply the kustomization directory to create both the ConfigMap and Pod objects: -{{< codenew file="pods/config/redis-pod.yaml" >}} - -Create the pod: +```shell +kubectl apply -k . +``` +Examine the created objects by ```shell -kubectl create -f https://k8s.io/examples/pods/config/redis-pod.yaml +> kubectl get -k . +NAME DATA AGE +configmap/example-redis-config-dgh9dg555m 1 52s + +NAME READY STATUS RESTARTS AGE +pod/redis 1/1 Running 0 52s ``` In the example, the config volume is mounted at `/redis-master`. diff --git a/content/en/docs/tutorials/hello-minikube.md b/content/en/docs/tutorials/hello-minikube.md index 8a5de37cbba8e..5099beadf1725 100644 --- a/content/en/docs/tutorials/hello-minikube.md +++ b/content/en/docs/tutorials/hello-minikube.md @@ -8,6 +8,9 @@ menu: weight: 10 post: >

Ready to get your hands dirty? Build a simple Kubernetes cluster that runs "Hello World" for Node.js.

+card: + name: tutorials + weight: 10 --- {{% capture overview %}} @@ -161,7 +164,6 @@ Kubernetes [*Service*](/docs/concepts/services-networking/service/). 4. Katacoda environment only: Click the plus sign, and then click **Select port to view on Host 1**. 5. Katacoda environment only: Type `30369` (see port opposite to `8080` in services output), and then click -**Display Port**. This opens up a browser window that serves your app and shows the "Hello World" message. diff --git a/content/en/docs/tutorials/kubernetes-basics/_index.html b/content/en/docs/tutorials/kubernetes-basics/_index.html index 6830dca167a58..342cf2cdd7c16 100644 --- a/content/en/docs/tutorials/kubernetes-basics/_index.html +++ b/content/en/docs/tutorials/kubernetes-basics/_index.html @@ -2,6 +2,10 @@ title: Learn Kubernetes Basics linkTitle: Learn Kubernetes Basics weight: 10 +card: + name: tutorials + weight: 20 + title: Walkthrough the basics --- diff --git a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html index e093c4ba2f11f..37b1e52b7d8a3 100644 --- a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html +++ b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html @@ -34,7 +34,7 @@

Kubernetes Deployments

master schedules mentioned application instances onto individual Nodes in the cluster.

-Once the application instances are created, a Kubernetes Deployment Controller continuously monitors those instances. If the Node hosting an instance goes down or is deleted, the Deployment controller replaces it. This provides a self-healing mechanism to address machine failure or maintenance.
+Once the application instances are created, a Kubernetes Deployment Controller continuously monitors those instances. If the Node hosting an instance goes down or is deleted, the Deployment controller replaces the instance with an instance on another Node in the cluster. This provides a self-healing mechanism to address machine failure or maintenance.

In a pre-orchestration world, installation scripts would often be used to start applications, but they did not allow recovery from machine failure. By both creating your application instances and keeping them running across Nodes, Kubernetes Deployments provide a fundamentally different approach to application management.
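As a hedged illustration of that self-healing behaviour, the sketch below assumes a Deployment named `hello-node` already exists in your cluster; the label selector and the Pod name are placeholders to adapt:

```shell
# Watch the Deployment controller replace a deleted instance (hello-node is a placeholder name)
kubectl get pods -l app=hello-node -o wide   # note each Pod's name and the Node it runs on
kubectl delete pod <one-of-the-pod-names>    # simulate an instance going away
kubectl get pods -l app=hello-node -o wide   # a replacement Pod appears, possibly on another Node
```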

diff --git a/content/en/docs/tutorials/online-training/overview.md b/content/en/docs/tutorials/online-training/overview.md index e52c100556774..7b3348d9da685 100644 --- a/content/en/docs/tutorials/online-training/overview.md +++ b/content/en/docs/tutorials/online-training/overview.md @@ -11,26 +11,37 @@ Here are some of the sites that offer online training for Kubernetes: {{% capture body %}} -* [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615) +* [Certified Kubernetes Administrator Preparation Course (Linux Academy)](https://linuxacademy.com/linux/training/course/name/certified-kubernetes-administrator-preparation-course) -* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x) +* [Certified Kubernetes Application Developer Preparation Course with Practice Tests (KodeKloud.com)](https://kodekloud.com/p/kubernetes-certification-course) + +* [Getting Started with Google Kubernetes Engine (Coursera)](https://www.coursera.org/learn/google-kubernetes-engine) * [Getting Started with Kubernetes (Pluralsight)](https://www.pluralsight.com/courses/getting-started-kubernetes) +* [Getting Started with Kubernetes Clusters on OCI Oracle Kubernetes Engine (OKE) (Learning Library)](https://apexapps.oracle.com/pls/apex/f?p=44785:50:0:::50:P50_EVENT_ID,P50_COURSE_ID:5935,256) + +* [Google Kubernetes Engine Deep Dive (Linux Academy)] (https://linuxacademy.com/google-cloud-platform/training/course/name/google-kubernetes-engine-deep-dive) + * [Hands-on Introduction to Kubernetes (Instruqt)](https://play.instruqt.com/public/topics/getting-started-with-kubernetes) -* [Learn Kubernetes using Interactive Hands-on Scenarios (Katacoda)](https://www.katacoda.com/courses/kubernetes/) +* [IBM Cloud: Deploying Microservices with Kubernetes (Coursera)](https://www.coursera.org/learn/deploy-micro-kube-ibm-cloud) -* [Certified Kubernetes Administrator Preparation Course (LinuxAcademy.com)](https://linuxacademy.com/linux/training/course/name/certified-kubernetes-administrator-preparation-course) +* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x) -* [Kubernetes the Hard Way (LinuxAcademy.com)](https://linuxacademy.com/linux/training/course/name/kubernetes-the-hard-way) +* [Kubernetes Essentials (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/kubernetes-essentials) * [Kubernetes for the Absolute Beginners with Hands-on Labs (KodeKloud.com)](https://kodekloud.com/p/kubernetes-for-the-absolute-beginners-hands-on) -* [Certified Kubernetes Application Developer Preparation Course with Practice Tests (KodeKloud.com)](https://kodekloud.com/p/kubernetes-certification-course) +* [Kubernetes Quick Start (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/kubernetes-quick-start) -{{% /capture %}} +* [Kubernetes the Hard Way (Linux Academy)](https://linuxacademy.com/linux/training/course/name/kubernetes-the-hard-way) +* [Learn Kubernetes using Interactive Hands-on Scenarios (Katacoda)](https://www.katacoda.com/courses/kubernetes/) +* [Monitoring Kubernetes With Prometheus (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/kubernetes-and-prometheus) +* [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615) +* [Self-paced Kubernetes online course (Learnk8s Academy)](https://learnk8s.io/academy) +{{% /capture 
%}} diff --git a/content/en/docs/tutorials/services/source-ip.md b/content/en/docs/tutorials/services/source-ip.md index 5d2accce8391a..b61b1941144f6 100644 --- a/content/en/docs/tutorials/services/source-ip.md +++ b/content/en/docs/tutorials/services/source-ip.md @@ -107,7 +107,7 @@ client_address=10.244.3.8 command=GET ... ``` -If the client pod and server pod are in the same node, the client_address is the client pod's IP address. However, if the client pod and server pod are in different nodes, the client_address is the client pod's node flannel IP address. +The client_address is always the client pod's IP address, whether the client pod and server pod are in the same node or in different nodes. ## Source IP for Services with Type=NodePort diff --git a/content/en/docs/tutorials/stateful-application/basic-stateful-set.md b/content/en/docs/tutorials/stateful-application/basic-stateful-set.md index 93fee4e2100c3..d595942a0a7c3 100644 --- a/content/en/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/content/en/docs/tutorials/stateful-application/basic-stateful-set.md @@ -73,11 +73,11 @@ kubectl get pods -w -l app=nginx ``` In the second terminal, use -[`kubectl create`](/docs/reference/generated/kubectl/kubectl-commands/#create) to create the +[`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) to create the Headless Service and StatefulSet defined in `web.yaml`. ```shell -kubectl create -f web.yaml +kubectl apply -f web.yaml service/nginx created statefulset.apps/web created ``` @@ -160,7 +160,7 @@ Using `nslookup` on the Pods' hostnames, you can examine their in-cluster DNS addresses. ```shell -kubectl run -i --tty --image busybox dns-test --restart=Never --rm /bin/sh +kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm nslookup web-0.nginx Server: 10.0.0.10 Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local @@ -783,7 +783,7 @@ you deleted the `nginx` Service ( which you should not have ), you will see an error indicating that the Service already exists. ```shell -kubectl create -f web.yaml +kubectl apply -f web.yaml statefulset.apps/web created Error from server (AlreadyExists): error when creating "web.yaml": services "nginx" already exists ``` @@ -883,7 +883,7 @@ service "nginx" deleted Recreate the StatefulSet and Headless Service one more time. ```shell -kubectl create -f web.yaml +kubectl apply -f web.yaml service/nginx created statefulset.apps/web created ``` @@ -947,7 +947,7 @@ kubectl get po -l app=nginx -w In another terminal, create the StatefulSet and Service in the manifest. ```shell -kubectl create -f web-parallel.yaml +kubectl apply -f web-parallel.yaml service/nginx created statefulset.apps/web created ``` diff --git a/content/en/docs/tutorials/stateful-application/cassandra.md b/content/en/docs/tutorials/stateful-application/cassandra.md index 7313f8c0e0879..f2169a5d8caf3 100644 --- a/content/en/docs/tutorials/stateful-application/cassandra.md +++ b/content/en/docs/tutorials/stateful-application/cassandra.md @@ -76,7 +76,7 @@ The following `Service` is used for DNS lookups between Cassandra Pods and clien 1. Create a Service to track all Cassandra StatefulSet nodes from the `cassandra-service.yaml` file: ```shell - kubectl create -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml + kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml ``` ### Validating (optional) @@ -110,7 +110,7 @@ This example uses the default provisioner for Minikube. 
Please update the follow 1. Create the Cassandra StatefulSet from the `cassandra-statefulset.yaml` file: ```shell - kubectl create -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml + kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml ``` ## Validating The Cassandra StatefulSet diff --git a/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md b/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md index 63b6b032db012..f471de069dac6 100644 --- a/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md +++ b/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md @@ -4,6 +4,10 @@ reviewers: - ahmetb content_template: templates/tutorial weight: 20 +card: + name: tutorials + weight: 40 + title: "Stateful Example: Wordpress with Persistent Volumes" --- {{% capture overview %}} @@ -23,16 +27,19 @@ The files provided in this tutorial are using GA Deployment APIs and are specifi {{% capture objectives %}} * Create PersistentVolumeClaims and PersistentVolumes -* Create a Secret -* Deploy MySQL -* Deploy WordPress +* Create a `kustomization.yaml` with + * a Secret generator + * MySQL resource configs + * WordPress resource configs +* Apply the kustomization directory by `kubectl apply -k ./` * Clean up {{% /capture %}} {{% capture prerequisites %}} -{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} +The example shown on this page works with `kubectl` 1.14 and above. Download the following configuration files: @@ -64,107 +71,108 @@ If you are bringing up a cluster that needs to use the `hostPath` provisioner, t If you have a Kubernetes cluster running on Google Kubernetes Engine, please follow [this guide](https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk). {{< /note >}} -## Create a Secret for MySQL Password +## Create a kustomization.yaml -A [Secret](/docs/concepts/configuration/secret/) is an object that stores a piece of sensitive data like a password or key. The manifest files are already configured to use a Secret, but you have to create your own Secret. +### Add a Secret generator +A [Secret](/docs/concepts/configuration/secret/) is an object that stores a piece of sensitive data like a password or key. Since 1.14, `kubectl` supports the management of Kubernetes objects using a kustomization file. You can create a Secret by generators in `kustomization.yaml`. -1. Create the Secret object from the following command. You will need to replace - `YOUR_PASSWORD` with the password you want to use. +Add a Secret generator in `kustomization.yaml` from the following command. You will need to replace `YOUR_PASSWORD` with the password you want to use. - ```shell - kubectl create secret generic mysql-pass --from-literal=password=YOUR_PASSWORD - ``` - -2. Verify that the Secret exists by running the following command: - - ```shell - kubectl get secrets - ``` - - The response should be like this: - - ``` - NAME TYPE DATA AGE - mysql-pass Opaque 1 42s - ``` - -{{< note >}} -To protect the Secret from exposure, neither `get` nor `describe` show its contents. -{{< /note >}} +```shell +cat <./kustomization.yaml +secretGenerator: +- name: mysql-pass + literals: + - password=YOUR_PASSWORD +EOF +``` -## Deploy MySQL +## Add resource configs for MySQL and WordPress The following manifest describes a single-instance MySQL Deployment. 
The MySQL container mounts the PersistentVolume at /var/lib/mysql. The `MYSQL_ROOT_PASSWORD` environment variable sets the database password from the Secret. {{< codenew file="application/wordpress/mysql-deployment.yaml" >}} -1. Deploy MySQL from the `mysql-deployment.yaml` file: +1. Download the MySQL deployment configuration file. ```shell - kubectl create -f https://k8s.io/examples/application/wordpress/mysql-deployment.yaml + curl -LO https://k8s.io/examples/application/wordpress/mysql-deployment.yaml ``` + +2. Download the WordPress configuration file. -2. Verify that a PersistentVolume got dynamically provisioned. Note that it can - It can take up to a few minutes for the PVs to be provisioned and bound. + ```shell + curl -LO https://k8s.io/examples/application/wordpress/wordpress-deployment.yaml + ``` + +3. Add them to `kustomization.yaml` file. ```shell - kubectl get pvc + cat <>./kustomization.yaml + resources: + - mysql-deployment.yaml + - wordpress-deployment.yaml + EOF ``` - The response should be like this: +## Apply and Verify +The `kustomization.yaml` contains all the resources for deploying a WordPress site and a +MySQL database. You can apply the directory by +```shell +kubectl apply -k ./ +``` - ``` - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - mysql-pv-claim Bound pvc-91e44fbf-d477-11e7-ac6a-42010a800002 20Gi RWO standard 29s - ``` +Now you can verify that all objects exist. -3. Verify that the Pod is running by running the following command: +1. Verify that the Secret exists by running the following command: ```shell - kubectl get pods + kubectl get secrets ``` - {{< note >}} - It can take up to a few minutes for the Pod's Status to be `RUNNING`. - {{< /note >}} - The response should be like this: + ```shell + NAME TYPE DATA AGE + mysql-pass-c57bb4t7mf Opaque 1 9s ``` - NAME READY STATUS RESTARTS AGE - wordpress-mysql-1894417608-x5dzt 1/1 Running 0 40s - ``` - -## Deploy WordPress -The following manifest describes a single-instance WordPress Deployment and Service. It uses many of the same features like a PVC for persistent storage and a Secret for the password. But it also uses a different setting: `type: LoadBalancer`. This setting exposes WordPress to traffic from outside of the cluster. - -{{< codenew file="application/wordpress/wordpress-deployment.yaml" >}} +2. Verify that a PersistentVolume got dynamically provisioned. + + ```shell + kubectl get pvc + ``` + + {{< note >}} + It can take up to a few minutes for the PVs to be provisioned and bound. + {{< /note >}} -1. Create a WordPress Service and Deployment from the `wordpress-deployment.yaml` file: + The response should be like this: ```shell - kubectl create -f https://k8s.io/examples/application/wordpress/wordpress-deployment.yaml + NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE + mysql-pv-claim Bound pvc-8cbd7b2e-4044-11e9-b2bb-42010a800002 20Gi RWO standard 77s + wp-pv-claim Bound pvc-8cd0df54-4044-11e9-b2bb-42010a800002 20Gi RWO standard 77s ``` -2. Verify that a PersistentVolume got dynamically provisioned: +3. Verify that the Pod is running by running the following command: ```shell - kubectl get pvc + kubectl get pods ``` {{< note >}} - It can take up to a few minutes for the PVs to be provisioned and bound. + It can take up to a few minutes for the Pod's Status to be `RUNNING`. 
{{< /note >}} The response should be like this: ``` - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - wp-pv-claim Bound pvc-e69d834d-d477-11e7-ac6a-42010a800002 20Gi RWO standard 7s + NAME READY STATUS RESTARTS AGE + wordpress-mysql-1894417608-x5dzt 1/1 Running 0 40s ``` -3. Verify that the Service is running by running the following command: +4. Verify that the Service is running by running the following command: ```shell kubectl get services wordpress @@ -181,7 +189,7 @@ The following manifest describes a single-instance WordPress Deployment and Serv Minikube can only expose Services through `NodePort`. The EXTERNAL-IP is always pending. {{< /note >}} -4. Run the following command to get the IP Address for the WordPress Service: +5. Run the following command to get the IP Address for the WordPress Service: ```shell minikube service wordpress --url @@ -193,7 +201,7 @@ The following manifest describes a single-instance WordPress Deployment and Serv http://1.2.3.4:32406 ``` -5. Copy the IP address, and load the page in your browser to view your site. +6. Copy the IP address, and load the page in your browser to view your site. You should see the WordPress set up page similar to the following screenshot. @@ -207,23 +215,10 @@ Do not leave your WordPress installation on this page. If another user finds it, {{% capture cleanup %}} -1. Run the following command to delete your Secret: - - ```shell - kubectl delete secret mysql-pass - ``` - -2. Run the following commands to delete all Deployments and Services: - - ```shell - kubectl delete deployment -l app=wordpress - kubectl delete service -l app=wordpress - ``` - -3. Run the following commands to delete the PersistentVolumeClaims. The dynamically provisioned PersistentVolumes will be automatically deleted. +1. Run the following command to delete your Secret, Deployments, Services and PersistentVolumeClaims: ```shell - kubectl delete pvc -l app=wordpress + kubectl delete -k ./ ``` {{% /capture %}} diff --git a/content/en/docs/tutorials/stateless-application/guestbook.md b/content/en/docs/tutorials/stateless-application/guestbook.md index 2d82a7a045d1c..b8d7045e325ef 100644 --- a/content/en/docs/tutorials/stateless-application/guestbook.md +++ b/content/en/docs/tutorials/stateless-application/guestbook.md @@ -4,6 +4,10 @@ reviewers: - ahmetb content_template: templates/tutorial weight: 20 +card: + name: tutorials + weight: 30 + title: "Stateless Example: PHP Guestbook with Redis" --- {{% capture overview %}} diff --git a/content/en/docs/user-journeys/users/cluster-operator/intermediate.md b/content/en/docs/user-journeys/users/cluster-operator/intermediate.md index b3c02c8824872..e4b44abe3fe5f 100644 --- a/content/en/docs/user-journeys/users/cluster-operator/intermediate.md +++ b/content/en/docs/user-journeys/users/cluster-operator/intermediate.md @@ -91,7 +91,7 @@ A common configuration on [Minikube](https://github.com/kubernetes/minikube) and There is a [walkthrough of how to install this configuration in your cluster](https://blog.kublr.com/how-to-utilize-the-heapster-influxdb-grafana-stack-in-kubernetes-for-monitoring-pods-4a553f4d36c9). As of Kubernetes 1.11, Heapster is deprecated, as per [sig-instrumentation](https://github.com/kubernetes/community/tree/master/sig-instrumentation). See [Prometheus vs. Heapster vs. Kubernetes Metrics APIs](https://brancz.com/2018/01/05/prometheus-vs-heapster-vs-kubernetes-metrics-apis/) for more information alternatives. 
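If you want to check whether your cluster already serves the resource metrics API (for example through metrics-server), a quick sketch follows; these commands only return data when such a metrics pipeline is installed:

```shell
# Only works if a metrics pipeline (e.g. metrics-server) is running in the cluster
kubectl top nodes
kubectl top pods --all-namespaces
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes   # query the Metrics API directly
```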
-Hosted data analytics services such as [Datadog](https://docs.datadoghq.com/integrations/kubernetes/) also offer Kubernetes integration. +Hosted monitoring, APM, or data analytics services such as [Datadog](https://docs.datadoghq.com/integrations/kubernetes/) or [Instana](https://www.instana.com/supported-integrations/kubernetes-monitoring/) also offer Kubernetes integration. ## Additional resources diff --git a/content/en/examples/admin/dns/dns-horizontal-autoscaler.yaml b/content/en/examples/admin/dns/dns-horizontal-autoscaler.yaml index 5e6d55a6b280a..b868c053322fd 100644 --- a/content/en/examples/admin/dns/dns-horizontal-autoscaler.yaml +++ b/content/en/examples/admin/dns/dns-horizontal-autoscaler.yaml @@ -8,7 +8,7 @@ metadata: spec: selector: matchLabels: - k8s-app: dns-autoscaler + k8s-app: dns-autoscaler template: metadata: labels: @@ -18,16 +18,16 @@ spec: - name: autoscaler image: k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.1.1 resources: - requests: - cpu: "20m" - memory: "10Mi" + requests: + cpu: 20m + memory: 10Mi command: - - /cluster-proportional-autoscaler - - --namespace=kube-system - - --configmap=dns-autoscaler - - --target= - # When cluster is using large nodes(with more cores), "coresPerReplica" should dominate. - # If using small nodes, "nodesPerReplica" should dominate. - - --default-params={"linear":{"coresPerReplica":256,"nodesPerReplica":16,"min":1}} - - --logtostderr=true - - --v=2 + - /cluster-proportional-autoscaler + - --namespace=kube-system + - --configmap=dns-autoscaler + - --target= + # When cluster is using large nodes(with more cores), "coresPerReplica" should dominate. + # If using small nodes, "nodesPerReplica" should dominate. + - --default-params={"linear":{"coresPerReplica":256,"nodesPerReplica":16,"min":1}} + - --logtostderr=true + - --v=2 diff --git a/content/en/examples/configmap/configmap-multikeys.yaml b/content/en/examples/configmap/configmap-multikeys.yaml new file mode 100644 index 0000000000000..289702d123caf --- /dev/null +++ b/content/en/examples/configmap/configmap-multikeys.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: special-config + namespace: default +data: + SPECIAL_LEVEL: very + SPECIAL_TYPE: charm diff --git a/content/en/examples/configmap/configmaps.yaml b/content/en/examples/configmap/configmaps.yaml new file mode 100644 index 0000000000000..91b9f29755c2e --- /dev/null +++ b/content/en/examples/configmap/configmaps.yaml @@ -0,0 +1,15 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: special-config + namespace: default +data: + special.how: very +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: env-config + namespace: default +data: + log_level: INFO diff --git a/content/en/docs/tasks/configure-pod-container/configmap/kubectl/game-env-file.properties b/content/en/examples/configmap/game-env-file.properties similarity index 100% rename from content/en/docs/tasks/configure-pod-container/configmap/kubectl/game-env-file.properties rename to content/en/examples/configmap/game-env-file.properties diff --git a/content/en/docs/tasks/configure-pod-container/configmap/kubectl/game.properties b/content/en/examples/configmap/game.properties similarity index 100% rename from content/en/docs/tasks/configure-pod-container/configmap/kubectl/game.properties rename to content/en/examples/configmap/game.properties diff --git a/content/en/docs/tasks/configure-pod-container/configmap/kubectl/ui-env-file.properties b/content/en/examples/configmap/ui-env-file.properties similarity index 100% rename from 
content/en/docs/tasks/configure-pod-container/configmap/kubectl/ui-env-file.properties rename to content/en/examples/configmap/ui-env-file.properties diff --git a/content/en/docs/tasks/configure-pod-container/configmap/kubectl/ui.properties b/content/en/examples/configmap/ui.properties similarity index 100% rename from content/en/docs/tasks/configure-pod-container/configmap/kubectl/ui.properties rename to content/en/examples/configmap/ui.properties diff --git a/content/en/examples/examples_test.go b/content/en/examples/examples_test.go index 3d0fefdc25580..e01cd543fb064 100644 --- a/content/en/examples/examples_test.go +++ b/content/en/examples/examples_test.go @@ -184,16 +184,16 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { t.ObjectMeta.Name = "skip-for-good" } errors = job.Strategy.Validate(nil, t) - case *extensions.DaemonSet: + case *apps.DaemonSet: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - errors = ext_validation.ValidateDaemonSet(t) - case *extensions.Deployment: + errors = apps_validation.ValidateDaemonSet(t) + case *apps.Deployment: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - errors = ext_validation.ValidateDeployment(t) + errors = apps_validation.ValidateDeployment(t) case *extensions.Ingress: if t.Namespace == "" { t.Namespace = api.NamespaceDefault @@ -201,11 +201,11 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { errors = ext_validation.ValidateIngress(t) case *policy.PodSecurityPolicy: errors = policy_validation.ValidatePodSecurityPolicy(t) - case *extensions.ReplicaSet: + case *apps.ReplicaSet: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - errors = ext_validation.ValidateReplicaSet(t) + errors = apps_validation.ValidateReplicaSet(t) case *batch.CronJob: if t.Namespace == "" { t.Namespace = api.NamespaceDefault @@ -298,12 +298,11 @@ func TestExampleObjectSchemas(t *testing.T) { "namespace-prod": {&api.Namespace{}}, }, "admin/cloud": { - "ccm-example": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &extensions.DaemonSet{}}, - "pvl-initializer-config": {&admissionregistration.InitializerConfiguration{}}, + "ccm-example": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &apps.DaemonSet{}}, }, "admin/dns": { "busybox": {&api.Pod{}}, - "dns-horizontal-autoscaler": {&extensions.Deployment{}}, + "dns-horizontal-autoscaler": {&apps.Deployment{}}, }, "admin/logging": { "fluentd-sidecar-config": {&api.ConfigMap{}}, @@ -337,42 +336,42 @@ func TestExampleObjectSchemas(t *testing.T) { "quota-objects-pvc": {&api.PersistentVolumeClaim{}}, "quota-objects-pvc-2": {&api.PersistentVolumeClaim{}}, "quota-pod": {&api.ResourceQuota{}}, - "quota-pod-deployment": {&extensions.Deployment{}}, + "quota-pod-deployment": {&apps.Deployment{}}, }, "admin/sched": { - "my-scheduler": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &extensions.Deployment{}}, + "my-scheduler": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &apps.Deployment{}}, "pod1": {&api.Pod{}}, "pod2": {&api.Pod{}}, "pod3": {&api.Pod{}}, }, "application": { - "deployment": {&extensions.Deployment{}}, - "deployment-patch": {&extensions.Deployment{}}, - "deployment-scale": {&extensions.Deployment{}}, - "deployment-update": {&extensions.Deployment{}}, - "nginx-app": {&api.Service{}, &extensions.Deployment{}}, - "nginx-with-request": {&extensions.Deployment{}}, + "deployment": {&apps.Deployment{}}, + "deployment-patch": {&apps.Deployment{}}, + "deployment-scale": {&apps.Deployment{}}, + "deployment-update": {&apps.Deployment{}}, 
+ "nginx-app": {&api.Service{}, &apps.Deployment{}}, + "nginx-with-request": {&apps.Deployment{}}, "shell-demo": {&api.Pod{}}, - "simple_deployment": {&extensions.Deployment{}}, - "update_deployment": {&extensions.Deployment{}}, + "simple_deployment": {&apps.Deployment{}}, + "update_deployment": {&apps.Deployment{}}, }, "application/cassandra": { "cassandra-service": {&api.Service{}}, "cassandra-statefulset": {&apps.StatefulSet{}, &storage.StorageClass{}}, }, "application/guestbook": { - "frontend-deployment": {&extensions.Deployment{}}, + "frontend-deployment": {&apps.Deployment{}}, "frontend-service": {&api.Service{}}, - "redis-master-deployment": {&extensions.Deployment{}}, + "redis-master-deployment": {&apps.Deployment{}}, "redis-master-service": {&api.Service{}}, - "redis-slave-deployment": {&extensions.Deployment{}}, + "redis-slave-deployment": {&apps.Deployment{}}, "redis-slave-service": {&api.Service{}}, }, "application/hpa": { "php-apache": {&autoscaling.HorizontalPodAutoscaler{}}, }, "application/nginx": { - "nginx-deployment": {&extensions.Deployment{}}, + "nginx-deployment": {&apps.Deployment{}}, "nginx-svc": {&api.Service{}}, }, "application/job": { @@ -389,7 +388,7 @@ func TestExampleObjectSchemas(t *testing.T) { }, "application/mysql": { "mysql-configmap": {&api.ConfigMap{}}, - "mysql-deployment": {&api.Service{}, &extensions.Deployment{}}, + "mysql-deployment": {&api.Service{}, &apps.Deployment{}}, "mysql-pv": {&api.PersistentVolume{}, &api.PersistentVolumeClaim{}}, "mysql-services": {&api.Service{}, &api.Service{}}, "mysql-statefulset": {&apps.StatefulSet{}}, @@ -399,34 +398,38 @@ func TestExampleObjectSchemas(t *testing.T) { "web-parallel": {&api.Service{}, &apps.StatefulSet{}}, }, "application/wordpress": { - "mysql-deployment": {&api.Service{}, &api.PersistentVolumeClaim{}, &extensions.Deployment{}}, - "wordpress-deployment": {&api.Service{}, &api.PersistentVolumeClaim{}, &extensions.Deployment{}}, + "mysql-deployment": {&api.Service{}, &api.PersistentVolumeClaim{}, &apps.Deployment{}}, + "wordpress-deployment": {&api.Service{}, &api.PersistentVolumeClaim{}, &apps.Deployment{}}, }, "application/zookeeper": { "zookeeper": {&api.Service{}, &api.Service{}, &policy.PodDisruptionBudget{}, &apps.StatefulSet{}}, }, + "configmap": { + "configmaps": {&api.ConfigMap{}, &api.ConfigMap{}}, + "configmap-multikeys": {&api.ConfigMap{}}, + }, "controllers": { - "daemonset": {&extensions.DaemonSet{}}, - "frontend": {&extensions.ReplicaSet{}}, + "daemonset": {&apps.DaemonSet{}}, + "frontend": {&apps.ReplicaSet{}}, "hpa-rs": {&autoscaling.HorizontalPodAutoscaler{}}, "job": {&batch.Job{}}, - "replicaset": {&extensions.ReplicaSet{}}, + "replicaset": {&apps.ReplicaSet{}}, "replication": {&api.ReplicationController{}}, - "nginx-deployment": {&extensions.Deployment{}}, + "nginx-deployment": {&apps.Deployment{}}, }, "debug": { "counter-pod": {&api.Pod{}}, - "event-exporter": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &extensions.Deployment{}}, + "event-exporter": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &apps.Deployment{}}, "fluentd-gcp-configmap": {&api.ConfigMap{}}, - "fluentd-gcp-ds": {&extensions.DaemonSet{}}, - "node-problem-detector": {&extensions.DaemonSet{}}, - "node-problem-detector-configmap": {&extensions.DaemonSet{}}, + "fluentd-gcp-ds": {&apps.DaemonSet{}}, + "node-problem-detector": {&apps.DaemonSet{}}, + "node-problem-detector-configmap": {&apps.DaemonSet{}}, "termination": {&api.Pod{}}, }, "federation": { - "policy-engine-deployment": 
{&extensions.Deployment{}}, + "policy-engine-deployment": {&apps.Deployment{}}, "policy-engine-service": {&api.Service{}}, - "replicaset-example-policy": {&extensions.ReplicaSet{}}, + "replicaset-example-policy": {&apps.ReplicaSet{}}, "scheduling-policy-admission": {&api.ConfigMap{}}, }, "podpreset": { @@ -441,20 +444,28 @@ func TestExampleObjectSchemas(t *testing.T) { "preset": {&settings.PodPreset{}}, "proxy": {&settings.PodPreset{}}, "replicaset-merged": {&api.Pod{}}, - "replicaset": {&extensions.ReplicaSet{}}, + "replicaset": {&apps.ReplicaSet{}}, }, "pods": { - "commands": {&api.Pod{}}, - "init-containers": {&api.Pod{}}, - "lifecycle-events": {&api.Pod{}}, - "pod-nginx": {&api.Pod{}}, - "pod-with-node-affinity": {&api.Pod{}}, - "pod-with-pod-affinity": {&api.Pod{}}, - "private-reg-pod": {&api.Pod{}}, - "share-process-namespace": {&api.Pod{}}, - "simple-pod": {&api.Pod{}}, - "pod-rs": {&api.Pod{}, &api.Pod{}}, - "two-container-pod": {&api.Pod{}}, + "commands": {&api.Pod{}}, + "init-containers": {&api.Pod{}}, + "lifecycle-events": {&api.Pod{}}, + "pod-configmap-env-var-valueFrom": {&api.Pod{}}, + "pod-configmap-envFrom": {&api.Pod{}}, + "pod-configmap-volume": {&api.Pod{}}, + "pod-configmap-volume-specific-key": {&api.Pod{}}, + "pod-multiple-configmap-env-variable": {&api.Pod{}}, + "pod-nginx-specific-node": {&api.Pod{}}, + "pod-nginx": {&api.Pod{}}, + "pod-projected-svc-token": {&api.Pod{}}, + "pod-rs": {&api.Pod{}, &api.Pod{}}, + "pod-single-configmap-env-variable": {&api.Pod{}}, + "pod-with-node-affinity": {&api.Pod{}}, + "pod-with-pod-affinity": {&api.Pod{}}, + "private-reg-pod": {&api.Pod{}}, + "share-process-namespace": {&api.Pod{}}, + "simple-pod": {&api.Pod{}}, + "two-container-pod": {&api.Pod{}}, }, "pods/config": { "redis-pod": {&api.Pod{}}, @@ -514,24 +525,24 @@ func TestExampleObjectSchemas(t *testing.T) { "nginx-service": {&api.Service{}}, }, "service/access": { - "frontend": {&api.Service{}, &extensions.Deployment{}}, + "frontend": {&api.Service{}, &apps.Deployment{}}, "hello-service": {&api.Service{}}, - "hello": {&extensions.Deployment{}}, + "hello": {&apps.Deployment{}}, }, "service/networking": { - "curlpod": {&extensions.Deployment{}}, + "curlpod": {&apps.Deployment{}}, "custom-dns": {&api.Pod{}}, "hostaliases-pod": {&api.Pod{}}, "ingress": {&extensions.Ingress{}}, - "nginx-secure-app": {&api.Service{}, &extensions.Deployment{}}, + "nginx-secure-app": {&api.Service{}, &apps.Deployment{}}, "nginx-svc": {&api.Service{}}, - "run-my-nginx": {&extensions.Deployment{}}, + "run-my-nginx": {&apps.Deployment{}}, }, "windows": { "configmap-pod": {&api.ConfigMap{}, &api.Pod{}}, - "daemonset": {&extensions.DaemonSet{}}, - "deploy-hyperv": {&extensions.Deployment{}}, - "deploy-resource": {&extensions.Deployment{}}, + "daemonset": {&apps.DaemonSet{}}, + "deploy-hyperv": {&apps.Deployment{}}, + "deploy-resource": {&apps.Deployment{}}, "emptydir-pod": {&api.Pod{}}, "hostpath-volume-pod": {&api.Pod{}}, "secret-pod": {&api.Secret{}, &api.Pod{}}, diff --git a/content/en/examples/podpreset/allow-db-merged.yaml b/content/en/examples/podpreset/allow-db-merged.yaml index 4f5af10abdc46..8a0ad101d7d64 100644 --- a/content/en/examples/podpreset/allow-db-merged.yaml +++ b/content/en/examples/podpreset/allow-db-merged.yaml @@ -34,4 +34,4 @@ spec: emptyDir: {} - name: secret-volume secret: - secretName: config-details + secretName: config-details diff --git a/content/en/examples/podpreset/allow-db.yaml b/content/en/examples/podpreset/allow-db.yaml index a5504789fea04..0cca13bab2c3d 100644 --- 
a/content/en/examples/podpreset/allow-db.yaml +++ b/content/en/examples/podpreset/allow-db.yaml @@ -27,4 +27,4 @@ spec: emptyDir: {} - name: secret-volume secret: - secretName: config-details + secretName: config-details diff --git a/content/en/examples/pods/lifecycle-events.yaml b/content/en/examples/pods/lifecycle-events.yaml index e5fcffcc9e755..4b79d7289c568 100644 --- a/content/en/examples/pods/lifecycle-events.yaml +++ b/content/en/examples/pods/lifecycle-events.yaml @@ -12,5 +12,5 @@ spec: command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"] preStop: exec: - command: ["/usr/sbin/nginx","-s","quit"] + command: ["/bin/sh","-c","nginx -s quit; while killall -0 nginx; do sleep 1; done"] diff --git a/content/en/examples/pods/pod-configmap-env-var-valueFrom.yaml b/content/en/examples/pods/pod-configmap-env-var-valueFrom.yaml new file mode 100644 index 0000000000000..a72b4335ce3b0 --- /dev/null +++ b/content/en/examples/pods/pod-configmap-env-var-valueFrom.yaml @@ -0,0 +1,21 @@ +apiVersion: v1 +kind: Pod +metadata: + name: dapi-test-pod +spec: + containers: + - name: test-container + image: k8s.gcr.io/busybox + command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ] + env: + - name: SPECIAL_LEVEL_KEY + valueFrom: + configMapKeyRef: + name: special-config + key: SPECIAL_LEVEL + - name: SPECIAL_TYPE_KEY + valueFrom: + configMapKeyRef: + name: special-config + key: SPECIAL_TYPE + restartPolicy: Never diff --git a/content/en/examples/pods/pod-configmap-envFrom.yaml b/content/en/examples/pods/pod-configmap-envFrom.yaml new file mode 100644 index 0000000000000..70ae7e5bcfaf9 --- /dev/null +++ b/content/en/examples/pods/pod-configmap-envFrom.yaml @@ -0,0 +1,13 @@ +apiVersion: v1 +kind: Pod +metadata: + name: dapi-test-pod +spec: + containers: + - name: test-container + image: k8s.gcr.io/busybox + command: [ "/bin/sh", "-c", "env" ] + envFrom: + - configMapRef: + name: special-config + restartPolicy: Never diff --git a/content/en/examples/pods/pod-configmap-volume-specific-key.yaml b/content/en/examples/pods/pod-configmap-volume-specific-key.yaml new file mode 100644 index 0000000000000..7a7c7bf605a5a --- /dev/null +++ b/content/en/examples/pods/pod-configmap-volume-specific-key.yaml @@ -0,0 +1,20 @@ +apiVersion: v1 +kind: Pod +metadata: + name: dapi-test-pod +spec: + containers: + - name: test-container + image: k8s.gcr.io/busybox + command: [ "/bin/sh","-c","cat /etc/config/keys" ] + volumeMounts: + - name: config-volume + mountPath: /etc/config + volumes: + - name: config-volume + configMap: + name: special-config + items: + - key: special.level + path: keys + restartPolicy: Never diff --git a/content/en/examples/pods/pod-configmap-volume.yaml b/content/en/examples/pods/pod-configmap-volume.yaml new file mode 100644 index 0000000000000..23b0f7718e157 --- /dev/null +++ b/content/en/examples/pods/pod-configmap-volume.yaml @@ -0,0 +1,19 @@ +apiVersion: v1 +kind: Pod +metadata: + name: dapi-test-pod +spec: + containers: + - name: test-container + image: k8s.gcr.io/busybox + command: [ "/bin/sh", "-c", "ls /etc/config/" ] + volumeMounts: + - name: config-volume + mountPath: /etc/config + volumes: + - name: config-volume + configMap: + # Provide the name of the ConfigMap containing the files you want + # to add to the container + name: special-config + restartPolicy: Never diff --git a/content/en/examples/pods/pod-multiple-configmap-env-variable.yaml b/content/en/examples/pods/pod-multiple-configmap-env-variable.yaml new file mode 100644 index 
0000000000000..4790a9c661c84 --- /dev/null +++ b/content/en/examples/pods/pod-multiple-configmap-env-variable.yaml @@ -0,0 +1,21 @@ +apiVersion: v1 +kind: Pod +metadata: + name: dapi-test-pod +spec: + containers: + - name: test-container + image: k8s.gcr.io/busybox + command: [ "/bin/sh", "-c", "env" ] + env: + - name: SPECIAL_LEVEL_KEY + valueFrom: + configMapKeyRef: + name: special-config + key: special.how + - name: LOG_LEVEL + valueFrom: + configMapKeyRef: + name: env-config + key: log_level + restartPolicy: Never diff --git a/content/en/examples/pods/pod-nginx-specific-node.yaml b/content/en/examples/pods/pod-nginx-specific-node.yaml new file mode 100644 index 0000000000000..5923400d64534 --- /dev/null +++ b/content/en/examples/pods/pod-nginx-specific-node.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: Pod +metadata: + name: nginx +spec: + nodeName: foo-node # schedule pod to specific node + containers: + - name: nginx + image: nginx + imagePullPolicy: IfNotPresent diff --git a/content/en/examples/pods/pod-projected-svc-token.yaml b/content/en/examples/pods/pod-projected-svc-token.yaml new file mode 100644 index 0000000000000..1c6ba249806cb --- /dev/null +++ b/content/en/examples/pods/pod-projected-svc-token.yaml @@ -0,0 +1,20 @@ +kind: Pod +apiVersion: v1 +metadata: + name: nginx +spec: + containers: + - image: nginx + name: nginx + volumeMounts: + - mountPath: /var/run/secrets/tokens + name: vault-token + serviceAccountName: acct + volumes: + - name: vault-token + projected: + sources: + - serviceAccountToken: + path: vault-token + expirationSeconds: 7200 + audience: vault diff --git a/content/en/examples/pods/pod-single-configmap-env-variable.yaml b/content/en/examples/pods/pod-single-configmap-env-variable.yaml new file mode 100644 index 0000000000000..c86123afd76ce --- /dev/null +++ b/content/en/examples/pods/pod-single-configmap-env-variable.yaml @@ -0,0 +1,19 @@ +apiVersion: v1 +kind: Pod +metadata: + name: dapi-test-pod +spec: + containers: + - name: test-container + image: k8s.gcr.io/busybox + command: [ "/bin/sh", "-c", "env" ] + env: + # Define the environment variable + - name: SPECIAL_LEVEL_KEY + valueFrom: + configMapKeyRef: + # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY + name: special-config + # Specify the key associated with the value + key: special.how + restartPolicy: Never diff --git a/content/en/examples/pods/probe/http-liveness.yaml b/content/en/examples/pods/probe/http-liveness.yaml index 23d37b480a06e..670af18399e20 100644 --- a/content/en/examples/pods/probe/http-liveness.yaml +++ b/content/en/examples/pods/probe/http-liveness.yaml @@ -15,7 +15,7 @@ spec: path: /healthz port: 8080 httpHeaders: - - name: X-Custom-Header + - name: Custom-Header value: Awesome initialDelaySeconds: 3 periodSeconds: 3 diff --git a/content/en/includes/federated-task-tutorial-prereqs.md b/content/en/includes/federated-task-tutorial-prereqs.md index c5ec939c07894..b254407a676a3 100644 --- a/content/en/includes/federated-task-tutorial-prereqs.md +++ b/content/en/includes/federated-task-tutorial-prereqs.md @@ -1,8 +1,5 @@ -This guide assumes that you have a running Kubernetes Cluster -Federation installation. If not, then head over to the -[federation admin guide](/docs/tutorials/federation/set-up-cluster-federation-kubefed/) to learn how to -bring up a cluster federation (or have your cluster administrator do -this for you). 
-Other tutorials, such as Kelsey Hightower's -[Federated Kubernetes Tutorial](https://github.com/kelseyhightower/kubernetes-cluster-federation), -might also help you create a Federated Kubernetes cluster. \ No newline at end of file +This guide assumes that you have a running Kubernetes Cluster Federation installation. +If not, then head over to the [federation admin guide](/docs/tutorials/federation/set-up-cluster-federation-kubefed/) to learn how to +bring up a cluster federation (or have your cluster administrator do this for you). +Other tutorials, such as Kelsey Hightower's [Federated Kubernetes Tutorial](https://github.com/kelseyhightower/kubernetes-cluster-federation), +might also help you create a Federated Kubernetes cluster. diff --git a/content/en/includes/federation-content-moved.md b/content/en/includes/federation-content-moved.md deleted file mode 100644 index 87a10e7199193..0000000000000 --- a/content/en/includes/federation-content-moved.md +++ /dev/null @@ -1,2 +0,0 @@ -The topics in the [Federation API](/docs/federation/api-reference/) section of the Kubernetes docs -are being moved to the [Reference](/docs/reference/) section. The content in this topic has moved to: diff --git a/content/en/includes/federation-current-state.md b/content/en/includes/federation-current-state.md deleted file mode 100644 index d04fda15e051e..0000000000000 --- a/content/en/includes/federation-current-state.md +++ /dev/null @@ -1 +0,0 @@ -`Federation V1`, the current Kubernetes federation API which reuses the Kubernetes API resources 'as is', is currently considered alpha for many of its features. There is no clear path to evolve the API to GA; however, there is a `Federation V2` effort in progress to implement a dedicated federation API apart from the Kubernetes API. The details are available at [sig-multicluster community page](https://github.com/kubernetes/community/tree/master/sig-multicluster). diff --git a/content/en/includes/federation-deprecation-warning-note.md b/content/en/includes/federation-deprecation-warning-note.md new file mode 100644 index 0000000000000..b7a05b1077095 --- /dev/null +++ b/content/en/includes/federation-deprecation-warning-note.md @@ -0,0 +1,3 @@ +Use of `Federation v1` is strongly discouraged. `Federation V1` never achieved GA status and is no longer under active development. Documentation is for historical purposes only. + +For more information, see the intended replacement, [Kubernetes Federation v2](https://github.com/kubernetes-sigs/federation-v2). diff --git a/content/fr/OWNERS b/content/fr/OWNERS new file mode 100644 index 0000000000000..c91ec02821e6f --- /dev/null +++ b/content/fr/OWNERS @@ -0,0 +1,13 @@ +# See the OWNERS docs at https://go.k8s.io/owners + +# This is the localization project for French. +# Teams and members are visible at https://github.com/orgs/kubernetes/teams. 
+ +reviewers: +- sig-docs-fr-reviews + +approvers: +- sig-docs-fr-owners + +labels: +- language/fr diff --git a/content/fr/_common-resources/index.md b/content/fr/_common-resources/index.md new file mode 100644 index 0000000000000..3d65eaa0ff97e --- /dev/null +++ b/content/fr/_common-resources/index.md @@ -0,0 +1,3 @@ +--- +headless: true +--- \ No newline at end of file diff --git a/content/fr/_index.html b/content/fr/_index.html new file mode 100644 index 0000000000000..493bf1ccd2869 --- /dev/null +++ b/content/fr/_index.html @@ -0,0 +1,65 @@ +--- +title: "La meilleure solution d'orchestration de conteneurs en production" +abstract: "Déploiement, mise à l'échelle et gestion automatisés des conteneurs" +cid: home +--- + +{{< deprecationwarning >}} + +{{< blocks/section id="oceanNodes" >}} +{{% blocks/feature image="flower" %}} + +### [Kubernetes (k8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) est un système open-source permettant d'automatiser le déploiement, la mise à l'échelle et la gestion des applications conteneurisées. + +Les conteneurs qui composent une application sont regroupés dans des unités logiques pour en faciliter la gestion et la découverte. Kubernetes s’appuie sur [15 années d’expérience dans la gestion de charges de travail de production (workloads) chez Google](http://queue.acm.org/detail.cfm?id=2898444), associé aux meilleures idées et pratiques de la communauté. +{{% /blocks/feature %}} + +{{% blocks/feature image="scalable" %}} +#### Quelque soit le nombre + +Conçu selon les mêmes principes qui permettent à Google de gérer des milliards de conteneurs par semaine, Kubernetes peut évoluer sans augmenter votre équipe d'opérations. +{{% /blocks/feature %}} + +{{% blocks/feature image="blocks" %}} +#### Quelque soit la complexité + +Qu'il s'agisse de tester localement ou d'une implémentation globale, Kubernetes est suffisamment flexible pour fournir vos applications de manière cohérente et simple, quelle que soit la complexité de vos besoins. + +{{% /blocks/feature %}} + +{{% blocks/feature image="suitcase" %}} + +#### Quelque soit l'endroit + +Kubernetes est une solution open-source qui vous permet de tirer parti de vos infrastructure qu'elles soient sur site (on-premises), hybride ou en Cloud publique. +Vous pourrez ainsi répartir sans effort vos workloads là où vous le souhaitez. + +{{% /blocks/feature %}} + +{{< /blocks/section >}} + +{{< blocks/section id="video" background-image="kub_video_banner_homepage" >}} + +
+Les défis de la migration de plus de 150 microservices vers Kubernetes
+Par Sarah Wells, directrice technique des opérations et de la fiabilité, Financial Times
+Venez au KubeCon Barcelone du 20 au 23 mai 2019
+Venez au KubeCon Shanghai du 24 au 26 juin 2019
+{{< /blocks/section >}} + +{{< blocks/kubernetes-features >}} + +{{< blocks/case-studies >}} diff --git a/content/fr/case-studies/_index.html b/content/fr/case-studies/_index.html new file mode 100644 index 0000000000000..b783e0330d128 --- /dev/null +++ b/content/fr/case-studies/_index.html @@ -0,0 +1,10 @@ +--- +title: Études de cas +linkTitle: Études de cas +bigheader: Études de cas d'utilisation de Kubernetes +abstract: Une collection de cas d'utilisation de Kubernetes en production. +layout: basic +class: gridPage +cid: caseStudies +--- + diff --git a/content/fr/docs/_index.md b/content/fr/docs/_index.md new file mode 100644 index 0000000000000..05e96e2901631 --- /dev/null +++ b/content/fr/docs/_index.md @@ -0,0 +1,3 @@ +--- +title: Documentation +--- diff --git a/content/fr/docs/concepts/_index.md b/content/fr/docs/concepts/_index.md new file mode 100644 index 0000000000000..cd1ea84780bee --- /dev/null +++ b/content/fr/docs/concepts/_index.md @@ -0,0 +1,91 @@ +--- +title: Concepts +main_menu: true +content_template: templates/concept +weight: 40 +--- + +{{% capture overview %}} + +La section Concepts vous aide à mieux comprendre les composants du système Kubernetes et les abstractions que Kubernetes utilise pour représenter votre cluster. +Elle vous aide également à mieux comprendre le fonctionnement de Kubernetes en général. + +{{% /capture %}} + +{{% capture body %}} + +## Vue d'ensemble + +Pour utiliser Kubernetes, vous utilisez *les objets de l'API Kubernetes* pour décrire *l'état souhaité* de votre cluster: quelles applications ou autres processus que vous souhaitez exécuter, quelles images de conteneur elles utilisent, le nombre de réplicas, les ressources réseau et disque que vous mettez à disposition, et plus encore. +Vous définissez l'état souhaité en créant des objets à l'aide de l'API Kubernetes, généralement via l'interface en ligne de commande, `kubectl`. +Vous pouvez également utiliser l'API Kubernetes directement pour interagir avec le cluster et définir ou modifier l'état souhaité. + +Une fois que vous avez défini l'état souhaité, le *plan de contrôle Kubernetes* (control plane en anglais) permet de faire en sorte que l'état actuel du cluster corresponde à l'état souhaité. +Pour ce faire, Kubernetes effectue automatiquement diverses tâches, telles que le démarrage ou le redémarrage de conteneurs, la mise à jour du nombre de réplicas d'une application donnée, etc. +Le control plane Kubernetes comprend un ensemble de processus en cours d'exécution sur votre cluster: + +* Le **maître Kubernetes** (Kubernetes master en anglais) qui est un ensemble de trois processus qui s'exécutent sur un seul nœud de votre cluster, désigné comme nœud maître (master node en anglais). Ces processus sont: [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) et [kube-scheduler](/docs/admin/kube-scheduler/). +* Chaque nœud non maître de votre cluster exécute deux processus: + * **[kubelet](/docs/admin/kubelet/)**, qui communique avec le Kubernetes master. + * **[kube-proxy](/docs/admin/kube-proxy/)**, un proxy réseau reflétant les services réseau Kubernetes sur chaque nœud. + +## Objets Kubernetes + +Kubernetes contient un certain nombre d'abstractions représentant l'état de votre système: applications et processus conteneurisés déployés, leurs ressources réseau et disque associées, ainsi que d'autres informations sur les activités de votre cluster. 
+Ces abstractions sont représentées par des objets de l'API Kubernetes; consultez [Vue d'ensemble des objets Kubernetes](/docs/concepts/abstractions/overview/) pour plus d'informations. + +Les objets de base de Kubernetes incluent: + +* [Pod](/docs/concepts/workloads/pods/pod-overview/) +* [Service](/docs/concepts/services-networking/service/) +* [Volume](/docs/concepts/storage/volumes/) +* [Namespace](/docs/concepts/overview/working-with-objects/namespaces/) + +En outre, Kubernetes contient un certain nombre d'abstractions de niveau supérieur appelées Contrôleurs. +Les contrôleurs s'appuient sur les objets de base et fournissent des fonctionnalités supplémentaires. + +Voici quelques exemples: + +* [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) +* [Deployment](/docs/concepts/workloads/controllers/deployment/) +* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) +* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) +* [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/) + +## Kubernetes control plane + +Les différentes parties du control plane Kubernetes, telles que les processus Kubernetes master et kubelet, déterminent la manière dont Kubernetes communique avec votre cluster. +Le control plane conserve un enregistrement de tous les objets Kubernetes du système et exécute des boucles de contrôle continues pour gérer l'état de ces objets. +À tout moment, les boucles de contrôle du control plane répondent aux modifications du cluster et permettent de faire en sorte que l'état réel de tous les objets du système corresponde à l'état souhaité que vous avez fourni. + +Par exemple, lorsque vous utilisez l'API Kubernetes pour créer un objet Deployment, vous fournissez un nouvel état souhaité pour le système. +Le control plane Kubernetes enregistre la création de cet objet et exécute vos instructions en lançant les applications requises et en les planifiant vers des nœuds de cluster, afin que l'état actuel du cluster corresponde à l'état souhaité. + +### Kubernetes master + +Le Kubernetes master est responsable du maintien de l'état souhaité pour votre cluster. +Lorsque vous interagissez avec Kubernetes, par exemple en utilisant l'interface en ligne de commande `kubectl`, vous communiquez avec le master Kubernetes de votre cluster. + +> Le "master" fait référence à un ensemble de processus gérant l'état du cluster. +En règle générale, tous les processus sont exécutés sur un seul nœud du cluster. +Ce nœud est également appelé master. +Le master peut également être répliqué pour la disponibilité et la redondance. + +### Noeuds Kubernetes + +Les nœuds d’un cluster sont les machines (serveurs physiques, machines virtuelles, etc.) qui exécutent vos applications et vos workflows. +Le master node Kubernetes contrôle chaque noeud; vous interagirez rarement directement avec les nœuds. + +#### Metadonnées des objets Kubernetes + +* [Annotations](/docs/concepts/overview/working-with-objects/annotations/) + +{{% /capture %}} + +{{% capture whatsnext %}} + +Si vous souhaitez écrire une page de concept, consultez +[Utilisation de modèles de page](/docs/home/contribute/page-templates/) +pour plus d'informations sur le type de page pour la documentation d'un concept. 
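+
+En complément de la vue d'ensemble ci-dessus, voici une esquisse (purement illustrative : les noms, l'image et le nombre de réplicas sont choisis pour l'exemple) d'un objet Deployment décrivant un état souhaité, ici trois réplicas de nginx, que l'on peut soumettre avec `kubectl apply -f` :
+
+```yaml
+# Esquisse d'un Deployment décrivant un état souhaité.
+# Le control plane se charge ensuite de faire converger l'état réel vers cet état.
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx-deployment        # nom choisi pour l'exemple
+spec:
+  replicas: 3                   # état souhaité : trois réplicas
+  selector:
+    matchLabels:
+      app: nginx
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.15       # image choisie pour l'exemple
+        ports:
+        - containerPort: 80
+```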
+
+{{% /capture %}}
diff --git a/content/fr/docs/concepts/architecture/_index.md b/content/fr/docs/concepts/architecture/_index.md
new file mode 100755
index 0000000000000..ef6bd42ed8d06
--- /dev/null
+++ b/content/fr/docs/concepts/architecture/_index.md
@@ -0,0 +1,4 @@
+---
+title: Architecture de Kubernetes
+weight: 30
+---
diff --git a/content/fr/docs/concepts/architecture/master-node-communication.md b/content/fr/docs/concepts/architecture/master-node-communication.md
new file mode 100644
index 0000000000000..075cabaa7f31b
--- /dev/null
+++ b/content/fr/docs/concepts/architecture/master-node-communication.md
@@ -0,0 +1,76 @@
+---
+reviewers:
+- sieben
+title: Communication Master-Node
+content_template: templates/concept
+weight: 20
+---
+
+{{% capture overview %}}
+
+Ce document répertorie les canaux de communication entre l'API du noeud maître (apiserver of master node en anglais) et le reste du cluster Kubernetes.
+L'objectif est de permettre aux utilisateurs de personnaliser leur installation afin de sécuriser la configuration réseau, de sorte que le cluster puisse être exécuté sur un réseau non approuvé (ou sur des adresses IP entièrement publiques d'un fournisseur de cloud).
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Communication du Cluster vers le Master
+
+Tous les canaux de communication du cluster au master se terminent à l'apiserver (aucun des autres composants principaux n'est conçu pour exposer des services distants).
+Dans un déploiement typique, l'apiserver est configuré pour écouter les connexions distantes sur un port HTTPS sécurisé (443) avec un ou plusieurs types d'[authentification](/docs/reference/access-authn-authz/authentication/) client.
+Une ou plusieurs formes d'[autorisation](/docs/reference/access-authn-authz/authorization/) devraient être activées, notamment si les [requêtes anonymes](/docs/reference/access-authn-authz/authentication/#anonymous-requests) ou les [jetons de compte de service](/docs/reference/access-authn-authz/authentication/#service-account-tokens) sont autorisés.
+
+Le certificat racine public du cluster doit être configuré pour que les nœuds puissent se connecter en toute sécurité à l'apiserver avec des informations d'identification client valides.
+Par exemple, dans un déploiement GKE par défaut, les informations d'identification client fournies au kubelet sont sous la forme d'un certificat client.
+Consultez [amorçage TLS de kubelet](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) pour le provisioning automatisé des certificats client de kubelet.
+
+Les pods qui souhaitent se connecter à l'apiserver peuvent le faire de manière sécurisée en utilisant un compte de service afin que Kubernetes injecte automatiquement le certificat racine public et un jeton de support valide dans le pod lorsqu'il est instancié.
+Le service `kubernetes` (dans tous les namespaces) est configuré avec une adresse IP virtuelle redirigée (via kube-proxy) vers le point de terminaison HTTPS de l'apiserver.
+
+Les composants du master communiquent également avec l'apiserver du cluster via le port sécurisé.
+
+Par conséquent, le mode de fonctionnement par défaut pour les connexions du cluster (nœuds et pods s'exécutant sur les nœuds) au master est sécurisé par défaut et peut s'exécuter sur des réseaux non sécurisés et/ou publics.
+
+## Communication du Master vers le Cluster
+
+Il existe deux voies de communication principales du master (apiserver) au cluster.
+La première est du processus apiserver au processus kubelet qui s'exécute sur chaque nœud du cluster. +La seconde part de l'apiserver vers n'importe quel nœud, pod ou service via la fonctionnalité proxy de l'apiserver. + +### Communication de l'apiserver vers le kubelet + +Les connexions de l'apiserver au kubelet sont utilisées pour: + + * Récupérer les logs des pods. + * S'attacher (via kubectl) à des pods en cours d'exécution. + * Fournir la fonctionnalité de transfert de port du kubelet. + +Ces connexions se terminent au point de terminaison HTTPS du kubelet. +Par défaut, l'apiserver ne vérifie pas le certificat du kubelet, ce qui rend la connexion sujette aux attaques de type "man-in-the-middle", et **non sûre** sur des réseaux non approuvés et/ou publics. + +Pour vérifier cette connexion, utilisez l'argument `--kubelet-certificate-authority` pour fournir à apiserver un ensemble de certificats racine à utiliser pour vérifier le certificat du kubelet. + +Si ce n'est pas possible, utilisez [SSH tunneling](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) entre l'apiserver et le kubelet si nécessaire pour éviter la connexion sur un réseau non sécurisé ou public. + +Finalement, l'[authentification et/ou autorisation du Kubelet](/docs/admin/kubelet-authentication-authorization/) devrait être activé pour sécuriser l'API kubelet. + +### apiserver vers nodes, pods et services + +Les connexions de l'apiserver à un nœud, à un pod ou à un service sont définies par défaut en connexions HTTP. +Elles ne sont donc ni authentifiées ni chiffrées. +Elles peuvent être exécutées sur une connexion HTTPS sécurisée en préfixant `https:` au nom du nœud, du pod ou du service dans l'URL de l'API. +Cependant ils ne valideront pas le certificat fourni par le point de terminaison HTTPS ni ne fourniront les informations d'identification du client. +De plus, aucune garantie d'intégrité n'est fournie. +Ces connexions **ne sont actuellement pas sûres** pour fonctionner sur des réseaux non sécurisés et/ou publics. + +### SSH Tunnels + +Kubernetes prend en charge les tunnels SSH pour protéger les communications master -> cluster. +Dans cette configuration, l'apiserver initie un tunnel SSH vers chaque nœud du cluster (en se connectant au serveur ssh sur le port 22) et transmet tout le trafic destiné à un kubelet, un nœud, un pod ou un service via un tunnel. +Ce tunnel garantit que le trafic n'est pas exposé en dehors du réseau dans lequel les nœuds sont en cours d'exécution. + +Les tunnels SSH étant actuellement obsolètes, vous ne devriez pas choisir de les utiliser à moins de savoir ce que vous faites. +Un remplacement pour ce canal de communication est en cours de conception. + +{{% /capture %}} diff --git a/content/fr/docs/concepts/architecture/nodes.md b/content/fr/docs/concepts/architecture/nodes.md new file mode 100644 index 0000000000000..e1fcf69ac516a --- /dev/null +++ b/content/fr/docs/concepts/architecture/nodes.md @@ -0,0 +1,231 @@ +--- +reviewers: +- sieben +title: Noeuds +content_template: templates/concept +weight: 10 +--- + +{{% capture overview %}} + +Un nœud est une machine de travail dans Kubernetes, connue auparavant sous le nom de `minion`. +Un nœud peut être une machine virtuelle ou une machine physique, selon le cluster. +Chaque nœud contient les services nécessaires à l'exécution de [pods](/docs/concepts/workloads/pods/pod/) et est géré par les composants du master. 
+Les services sur un nœud incluent le [container runtime](/docs/concepts/overview/components/#node-components), kubelet and kube-proxy. +Consultez la section [Le Nœud Kubernetes](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) dans le document de conception de l'architecture pour plus de détails. + +{{% /capture %}} + +{{% capture body %}} + +## Statut du nœud + +Le statut d'un nœud contient les informations suivantes: + +* [Addresses](#addresses) +* [Condition](#condition) +* [Capacity](#capacity) +* [Info](#info) + +Chaque section est décrite en détail ci-dessous. + +### Adresses + +L'utilisation de ces champs varie en fonction de votre fournisseur de cloud ou de votre configuration physique. + +* HostName: Le nom d'hôte tel que rapporté par le noyau du nœud. Peut être remplacé via le paramètre kubelet `--hostname-override`. +* ExternalIP: En règle générale, l'adresse IP du nœud pouvant être routé en externe (disponible de l'extérieur du cluster). +* InternalIP: En règle générale, l'adresse IP du nœud pouvant être routé uniquement dans le cluster. + +### Condition + +Le champ `conditions` décrit le statut de tous les nœuds `Running`. + +| Node Condition | Description | +|----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `OutOfDisk` | `True` si l'espace disponible sur le nœud est insuffisant pour l'ajout de nouveaux pods, sinon `False` | +| `Ready` | `True` si le noeud est sain et prêt à accepter des pods, `False` si le noeud n'est pas sain et n'accepte pas de pods, et `Unknown` si le contrôleur de noeud n'a pas reçu d'information du noeud depuis `node-monitor-grace-period` (la valeur par défaut est de 40 secondes) | +| `MemoryPressure` | `True` s'il existe une pression sur la mémoire du noeud, c'est-à-dire si la mémoire du noeud est faible; autrement `False` | +| `PIDPressure` | `True` s'il existe une pression sur le nombre de processus, c'est-à-dire s'il y a trop de processus sur le nœud; autrement `False` | +| `DiskPressure` | `True` s'il existe une pression sur la taille du disque, c'est-à-dire si la capacité du disque est faible; autrement `False` | +| `NetworkUnavailable` | `True` si le réseau pour le noeud n'est pas correctement configuré, sinon `False` | + +La condition de noeud est représentée sous la forme d'un objet JSON. +Par exemple, la réponse suivante décrit un nœud sain. + +```json +"conditions": [ + { + "type": "Ready", + "status": "True" + } +] +``` + +Si le statut de l'état Ready reste `Unknown` ou `False` plus longtemps que `pod-eviction-timeout`, un argument est passé au [kube-controller-manager](/docs/admin/kube-controller-manager/) et les pods sur le nœud sont programmés pour être supprimés par le contrôleur du nœud. +Le délai d’expulsion par défaut est de **cinq minutes**.. +Dans certains cas, lorsque le nœud est inaccessible, l'apiserver est incapable de communiquer avec le kubelet sur le nœud. +La décision de supprimer les pods ne peut pas être communiquée au kublet tant que la communication avec l'apiserver n'est pas rétablie. +Entre-temps, les pods dont la suppression est planifiée peuvent continuer à s'exécuter sur le nœud inaccessible. 
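+
+À titre d'illustration (extrait hypothétique, les valeurs varient d'un cluster à l'autre), voici à quoi peuvent ressembler les conditions d'un nœud dans la sortie de `kubectl get node <nom-du-noeud> -o yaml`, avec les horodatages sur lesquels s'appuie le contrôleur de nœud :
+
+```yaml
+# Extrait (hypothétique) du statut d'un nœud ; seuls les champs liés aux conditions sont montrés.
+status:
+  conditions:
+  - type: Ready
+    status: "True"                              # "True", "False" ou "Unknown"
+    lastHeartbeatTime: "2019-02-15T09:00:00Z"   # dernier heartbeat reçu du nœud
+    lastTransitionTime: "2019-02-15T08:00:00Z"  # dernier changement de statut
+    reason: KubeletReady
+    message: kubelet is posting ready status
+  - type: MemoryPressure
+    status: "False"
+    reason: KubeletHasSufficientMemory
+    message: kubelet has sufficient memory available
+```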
+ +Dans les versions de Kubernetes antérieures à 1.5, le contrôleur de noeud [forcait la suppression](/docs/concepts/workloads/pods/pod/#force-deletion-of-pods) de ces pods inaccessibles de l'apiserver. +Toutefois, dans la version 1.5 et ultérieure, le contrôleur de noeud ne force pas la suppression des pods tant qu'il n'est pas confirmé qu'ils ont cessé de fonctionner dans le cluster. +Vous pouvez voir que les pods en cours d'exécution sur un nœud inaccessible sont dans l'état `Terminating` ou` Unknown`. +Dans les cas où Kubernetes ne peut pas déduire de l'infrastructure sous-jacente si un nœud a définitivement quitté un cluster, l'administrateur du cluster peut avoir besoin de supprimer l'objet nœud à la main. +La suppression de l'objet nœud de Kubernetes entraîne la suppression de tous les objets Pod exécutés sur le nœud de l'apiserver et libère leurs noms. + +Dans la version 1.12, la fonctionnalité `TaintNodesByCondition` est promue en version bêta, ce qui permet au contrôleur de cycle de vie du nœud de créer automatiquement des [marquages](/docs/concepts/configuration/taint-and-toleration/) (taints en anglais) qui représentent des conditions. +De même, l'ordonnanceur ignore les conditions lors de la prise en compte d'un nœud; au lieu de cela, il regarde les taints du nœud et les tolérances d'un pod. + +Les utilisateurs peuvent désormais choisir entre l'ancien modèle de planification et un nouveau modèle de planification plus flexible. +Un pod qui n’a aucune tolérance est programmé selon l’ancien modèle. +Mais un pod qui tolère les taints d'un nœud particulier peut être programmé sur ce nœud. + +{{< caution >}} +L'activation de cette fonctionnalité crée un léger délai entre le moment où une condition est observée et le moment où une taint est créée. +Ce délai est généralement inférieur à une seconde, mais il peut augmenter le nombre de pods programmés avec succès mais rejetés par le kubelet. +{{< /caution >}} + +### Capacité + +Décrit les ressources disponibles sur le nœud: CPU, mémoire et nombre maximal de pods pouvant être planifiés sur le nœud. + +### Info + +Informations générales sur le noeud, telles que la version du noyau, la version de Kubernetes (versions de kubelet et kube-proxy), la version de Docker (si utilisée), le nom du système d'exploitation. +Les informations sont collectées par Kubelet à partir du noeud. + +## Gestion + +Contrairement aux [pods](/docs/concepts/workloads/pods/) et aux [services] (/docs/concepts/services-networking/service/), un nœud n'est pas créé de manière inhérente par Kubernetes: il est créé de manière externe par un cloud tel que Google Compute Engine, ou bien il existe dans votre pool de machines physiques ou virtuelles. +Ainsi, lorsque Kubernetes crée un nœud, il crée un objet qui représente le nœud. +Après la création, Kubernetes vérifie si le nœud est valide ou non. +Par exemple, si vous essayez de créer un nœud à partir du contenu suivant: + +```json +{ + "kind": "Node", + "apiVersion": "v1", + "metadata": { + "name": "10.240.79.157", + "labels": { + "name": "my-first-k8s-node" + } + } +} +``` + +Kubernetes crée un objet noeud en interne (la représentation) et valide le noeud en vérifiant son intégrité en fonction du champ `metadata.name`. +Si le nœud est valide, c'est-à-dire si tous les services nécessaires sont en cours d'exécution, il est éligible pour exécuter un pod. +Sinon, il est ignoré pour toute activité de cluster jusqu'à ce qu'il devienne valide. 
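+
+Pour référence, le même objet Node peut s'écrire en YAML (équivalent strict de l'exemple JSON ci-dessus) et être soumis par exemple avec `kubectl create -f node.yaml`, le nom de fichier étant libre :
+
+```yaml
+# Équivalent YAML de l'exemple JSON ci-dessus.
+kind: Node
+apiVersion: v1
+metadata:
+  name: "10.240.79.157"         # doit correspondre à une machine réellement joignable
+  labels:
+    name: my-first-k8s-node
+```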
+ +{{< note >}} +Kubernetes conserve l'objet pour le nœud non valide et vérifie s'il devient valide. +Vous devez explicitement supprimer l'objet Node pour arrêter ce processus. +{{< /note >}} + +Actuellement, trois composants interagissent avec l'interface de noeud Kubernetes: le contrôleur de noeud, kubelet et kubectl. + +### Contrôleur de nœud + +Le contrôleur de noeud (node controller en anglais) est un composant du master Kubernetes qui gère divers aspects des noeuds. + +Le contrôleur de nœud a plusieurs rôles dans la vie d'un nœud. +La première consiste à affecter un bloc CIDR au nœud lorsqu’il est enregistré (si l’affectation CIDR est activée). + +La seconde consiste à tenir à jour la liste interne des nœuds du contrôleur de nœud avec la liste des machines disponibles du fournisseur de cloud. +Lorsqu'il s'exécute dans un environnement de cloud, chaque fois qu'un nœud est en mauvaise santé, le contrôleur de nœud demande au fournisseur de cloud si la machine virtuelle de ce nœud est toujours disponible. +Sinon, le contrôleur de nœud supprime le nœud de sa liste de nœuds. + +La troisième est la surveillance de la santé des nœuds. +Le contrôleur de noeud est responsable de la mise à jour de la condition NodeReady de NodeStatus vers ConditionUnknown lorsqu'un noeud devient inaccessible (le contrôleur de noeud cesse de recevoir des heartbeats pour une raison quelconque, par exemple en raison d'une panne du noeud), puis de l'éviction ultérieure de tous les pods du noeud. (en utilisant une terminaison propre) si le nœud continue d’être inaccessible. +(Les délais d'attente par défaut sont de 40 secondes pour commencer à signaler ConditionUnknown et de 5 minutes après cela pour commencer à expulser les pods.) +Le contrôleur de nœud vérifie l'état de chaque nœud toutes les `--node-monitor-period` secondes. + +Dans les versions de Kubernetes antérieures à 1.13, NodeStatus correspond au heartbeat du nœud. +À partir de Kubernetes 1.13, la fonctionnalité de bail de nœud (node lease en anglais) est introduite en tant que fonctionnalité alpha (feature gate `NodeLease`, [KEP-0009](https://github.com/kubernetes/community/blob/master/keps/sig-node/0009-node-heartbeat.md)). +Lorsque la fonction de node lease est activée, chaque noeud a un objet `Lease` associé dans le namespace `kube-node-lease` qui est renouvelé périodiquement par le noeud, et NodeStatus et le node lease sont traités comme des heartbeat du noeud. +Les node leases sont renouvelés fréquemment lorsque NodeStatus est signalé de nœud à master uniquement lorsque des modifications ont été apportées ou que suffisamment de temps s'est écoulé (la valeur par défaut est 1 minute, ce qui est plus long que le délai par défaut de 40 secondes pour les nœuds inaccessibles). +Étant donné qu'un node lease est beaucoup plus léger qu'un NodeStatus, cette fonctionnalité rends le heartbeat d'un nœud nettement moins coûteux, tant du point de vue de l'évolutivité que des performances. + +Dans Kubernetes 1.4, nous avons mis à jour la logique du contrôleur de noeud afin de mieux gérer les cas où un grand nombre de noeuds rencontrent des difficultés pour atteindre le master (par exemple parce que le master a un problème de réseau). +À partir de la version 1.4, le contrôleur de noeud examine l’état de tous les noeuds du cluster lorsqu’il prend une décision concernant l’éviction des pods. 
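+
+À titre d'illustration uniquement, voici une esquisse de ce à quoi peut ressembler un node lease ; les noms de champs proviennent du groupe d'API `coordination.k8s.io` (en version beta à cette époque, la version exacte peut donc varier selon votre cluster) et les valeurs sont choisies pour l'exemple :
+
+```yaml
+# Esquisse d'un objet Lease associé à un nœud (fonctionnalité NodeLease).
+apiVersion: coordination.k8s.io/v1beta1   # version d'API susceptible de varier
+kind: Lease
+metadata:
+  name: mon-noeud                 # porte le même nom que le nœud
+  namespace: kube-node-lease
+spec:
+  holderIdentity: mon-noeud       # identité du kubelet qui renouvelle le bail
+  leaseDurationSeconds: 40
+  renewTime: "2019-02-15T09:00:00.000000Z"   # renouvelé périodiquement par le kubelet
+```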
+ +Dans la plupart des cas, le contrôleur de noeud limite le taux d’expulsion à `--node-eviction-rate` (0,1 par défaut) par seconde, ce qui signifie qu’il n’expulsera pas les pods de plus d’un nœud toutes les 10 secondes. + +Le comportement d'éviction de noeud change lorsqu'un noeud d'une zone de disponibilité donnée devient défaillant. +Le contrôleur de nœud vérifie quel pourcentage de nœuds de la zone est défaillant (la condition NodeReady est ConditionUnknown ou ConditionFalse) en même temps. +Si la fraction de nœuds défaillant est au moins `--unhealthy-zone-threshold` (valeur par défaut de 0,55), le taux d'expulsion est réduit: si le cluster est petit (c'est-à-dire inférieur ou égal à ` --large-cluster-size-threshold` noeuds - valeur par défaut 50) puis les expulsions sont arrêtées, sinon le taux d'expulsion est réduit à `--secondary-node-eviction-rate` (valeur par défaut de 0,01) par seconde. +Ces stratégies sont implémentées par zone de disponibilité car une zone de disponibilité peut être partitionnée à partir du master, tandis que les autres restent connectées. +Si votre cluster ne s'étend pas sur plusieurs zones de disponibilité de fournisseur de cloud, il n'existe qu'une seule zone de disponibilité (la totalité du cluster). + +L'une des principales raisons de la répartition de vos nœuds entre les zones de disponibilité est de pouvoir déplacer la charge de travail vers des zones saines lorsqu'une zone entière tombe en panne. +Par conséquent, si tous les nœuds d’une zone sont défaillants, le contrôleur de nœud expulse à la vitesse normale `--node-eviction-rate`. +Le cas pathologique se produit lorsque toutes les zones sont complètement défaillantes (c'est-à-dire qu'il n'y a pas de nœuds sains dans le cluster). +Dans ce cas, le contrôleur de noeud suppose qu'il existe un problème de connectivité au master et arrête toutes les expulsions jusqu'à ce que la connectivité soit restaurée. + +À partir de Kubernetes 1.6, NodeController est également responsable de l'expulsion des pods s'exécutant sur des noeuds avec des marques `NoExecute`, lorsque les pods ne tolèrent pas ces marques. +De plus, en tant que fonctionnalité alpha désactivée par défaut, NodeController est responsable de l'ajout de marques correspondant aux problèmes de noeud tels que les noeuds inaccessibles ou non prêts. +Voir [cette documentation](/docs/concepts/configuration/taint-and-toleration/) pour plus de détails sur les marques `NoExecute` et cette fonctionnalité alpha. + +À partir de la version 1.8, le contrôleur de noeud peut être chargé de créer des tâches représentant les conditions de noeud. +Ceci est une fonctionnalité alpha de la version 1.8. + +### Auto-enregistrement des nœuds + +Lorsque l'indicateur de kubelet `--register-node` est à true (valeur par défaut), le kubelet tente de s'enregistrer auprès du serveur d'API. +C'est le modèle préféré, utilisé par la plupart des distributions Linux. + +Pour l'auto-enregistrement (self-registration en anglais), le kubelet est lancé avec les options suivantes: + + - `--kubeconfig` - Chemin d'accès aux informations d'identification pour s'authentifier auprès de l'apiserver. + - `--cloud-provider` - Comment lire les métadonnées d'un fournisseur de cloud sur lui-même. + - `--register-node` - Enregistrement automatique avec le serveur API. + - `--register-with-taints` - Enregistrez le noeud avec la liste donnée de marques (comma separated `=:`). Sans effet si `register-node` est à false. + - `--node-ip` - Adresse IP du noeud. 
+ - `--node-labels` - Labels à ajouter lors de l’enregistrement du noeud dans le cluster (voir Restrictions des labels appliquées par le [plugin NodeRestriction admission](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) dans les versions 1.13+). + - `--node-status-update-frequency` - Spécifie la fréquence à laquelle kubelet publie le statut de nœud sur master. + +Quand le mode [autorisation de nœud](/docs/reference/access-authn-authz/node/) et [plugin NodeRestriction admission](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) sont activés, les kubelets sont uniquement autorisés à créer / modifier leur propre ressource de noeud. + +#### Administration manuelle de noeuds + +Un administrateur de cluster peut créer et modifier des objets de nœud. + +Si l'administrateur souhaite créer des objets de noeud manuellement, définissez l'argument de kubelet: `--register-node=false`. + +L'administrateur peut modifier les ressources du nœud (quel que soit le réglage de `--register-node`). +Les modifications comprennent la définition de labels sur le nœud et son marquage comme non programmable. + +Les étiquettes sur les nœuds peuvent être utilisées avec les sélecteurs de nœuds sur les pods pour contrôler la planification. Par exemple, pour contraindre un pod à ne pouvoir s'exécuter que sur un sous-ensemble de nœuds. + +Marquer un nœud comme non planifiable empêche la planification de nouveaux pods sur ce nœud, mais n'affecte pas les pods existants sur le nœud. +Ceci est utile comme étape préparatoire avant le redémarrage d'un nœud, etc. Par exemple, pour marquer un nœud comme non programmable, exécutez la commande suivante: + +```shell +kubectl cordon $NODENAME +``` + +{{< note >}} +Les pods créés par un contrôleur DaemonSet contournent le planificateur Kubernetes et ne respectent pas l'attribut unschedulable sur un nœud. +Cela suppose que les démons appartiennent à la machine même si celle-ci est en cours de vidage des applications pendant qu'elle se prépare au redémarrage. +{{< /note >}} + +### Capacité de nœud + +La capacité du nœud (nombre de CPU et quantité de mémoire) fait partie de l’objet Node. +Normalement, les nœuds s'enregistrent et indiquent leur capacité lors de la création de l'objet Node. +Si vous faites une [administration manuelle de nœud](#manual-node-administration), alors vous devez définir la capacité du nœud lors de l'ajout d'un nœud. + +Le scheduler Kubernetes veille à ce qu'il y ait suffisamment de ressources pour tous les pods d'un noeud. +Il vérifie que la somme des demandes des conteneurs sur le nœud n'est pas supérieure à la capacité du nœud. +Cela inclut tous les conteneurs lancés par le kubelet, mais pas les conteneurs lancés directement par le [conteneur runtime](/docs/concepts/overview/components/#noeud-composants), ni aucun processus exécuté en dehors des conteneurs. + +Si vous souhaitez réserver explicitement des ressources pour des processus autres que Pod, suivez ce tutoriel pour: [réserver des ressources pour les démons système](/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved). + +## API Object + +L'objet Node est une ressource de niveau supérieur dans l'API REST de Kubernetes. +Plus de détails sur l'objet API peuvent être trouvés à l'adresse suivante: [Node API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core). 
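+
+En complément de la section « Capacité de nœud » ci-dessus, voici un extrait hypothétique (valeurs d'exemple uniquement) du statut d'un nœud tel que retourné par `kubectl get node <nom-du-noeud> -o yaml`, montrant la différence entre capacité totale et ressources allouables :
+
+```yaml
+# Extrait (hypothétique) du statut d'un nœud.
+# "capacity" décrit les ressources totales de la machine ;
+# "allocatable" décrit ce qui reste disponible pour les pods après les réservations système.
+status:
+  capacity:
+    cpu: "4"
+    memory: 8010856Ki
+    pods: "110"
+  allocatable:
+    cpu: "3800m"
+    memory: 7357800Ki
+    pods: "110"
+```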
+ +{{% /capture %}} diff --git a/content/fr/docs/concepts/cluster-administration/_index.md b/content/fr/docs/concepts/cluster-administration/_index.md new file mode 100755 index 0000000000000..b83ee60d5e624 --- /dev/null +++ b/content/fr/docs/concepts/cluster-administration/_index.md @@ -0,0 +1,5 @@ +--- +title: "Administration d'un cluster" +weight: 100 +--- + diff --git a/content/fr/docs/concepts/cluster-administration/certificates.md b/content/fr/docs/concepts/cluster-administration/certificates.md new file mode 100644 index 0000000000000..c0a80b5bfd5f1 --- /dev/null +++ b/content/fr/docs/concepts/cluster-administration/certificates.md @@ -0,0 +1,248 @@ +--- +title: Certificats +content_template: templates/concept +weight: 20 +--- + + +{{% capture overview %}} + +Lorsque vous utilisez l'authentification par certificats client, vous pouvez générer des certificats +manuellement grâce à `easyrsa`, `openssl` ou `cfssl`. + +{{% /capture %}} + + +{{% capture body %}} + +### easyrsa + +**easyrsa** peut générer manuellement des certificats pour votre cluster. + +1. Téléchargez, décompressez et initialisez la version corrigée de easyrsa3. + + curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz + tar xzf easy-rsa.tar.gz + cd easy-rsa-master/easyrsa3 + ./easyrsa init-pki +1. Générez une CA. (`--batch` pour le mode automatique. `--req-cn` CN par défaut à utiliser) + + ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass +1. Générer un certificat de serveur et une clé. + L' argument `--subject-alt-name` définit les adresses IP et noms DNS possibles par lesquels l'API + serveur peut être atteind. La `MASTER_CLUSTER_IP` est généralement la première adresse IP du CIDR des services + qui est spécifié en tant qu'argument `--service-cluster-ip-range` pour l'API Server et + le composant controller manager. L'argument `--days` est utilisé pour définir le nombre de jours + après lesquels le certificat expire. + L’exemple ci-dessous suppose également que vous utilisez `cluster.local` par défaut comme + nom de domaine DNS. + + ./easyrsa --subject-alt-name="IP:${MASTER_IP},"\ + "IP:${MASTER_CLUSTER_IP},"\ + "DNS:kubernetes,"\ + "DNS:kubernetes.default,"\ + "DNS:kubernetes.default.svc,"\ + "DNS:kubernetes.default.svc.cluster,"\ + "DNS:kubernetes.default.svc.cluster.local" \ + --days=10000 \ + build-server-full server nopass +1. Copiez `pki/ca.crt`, `pki/issued/server.crt`, et `pki/private/server.key` dans votre répertoire. +1. Personnalisez et ajoutez les lignes suivantes aux paramètres de démarrage de l'API Server: + + --client-ca-file=/yourdirectory/ca.crt + --tls-cert-file=/yourdirectory/server.crt + --tls-private-key-file=/yourdirectory/server.key + +### openssl + +**openssl** peut générer manuellement des certificats pour votre cluster. + +1. Générez ca.key en 2048bit: + + openssl genrsa -out ca.key 2048 +1. A partir de la clé ca.key générez ca.crt (utilisez -days pour définir la durée du certificat): + + openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt +1. Générez server.key en 2048bit: + + openssl genrsa -out server.key 2048 +1. Créez un fichier de configuration pour générer une demande de signature de certificat (CSR). + Assurez-vous de remplacer les valeurs marquées par des "< >" (par exemple, ``) + avec des valeurs réelles avant de l'enregistrer dans un fichier (par exemple, `csr.conf`). 
+ Notez que la valeur de `MASTER_CLUSTER_IP` est celle du service Cluster IP pour l' + API Server comme décrit dans la sous-section précédente. + L’exemple ci-dessous suppose également que vous utilisez `cluster.local` par défaut comme + nom de domaine DNS. + + [ req ] + default_bits = 2048 + prompt = no + default_md = sha256 + req_extensions = req_ext + distinguished_name = dn + + [ dn ] + C = + ST = + L = + O = + OU = + CN = + + [ req_ext ] + subjectAltName = @alt_names + + [ alt_names ] + DNS.1 = kubernetes + DNS.2 = kubernetes.default + DNS.3 = kubernetes.default.svc + DNS.4 = kubernetes.default.svc.cluster + DNS.5 = kubernetes.default.svc.cluster.local + IP.1 = + IP.2 = + + [ v3_ext ] + authorityKeyIdentifier=keyid,issuer:always + basicConstraints=CA:FALSE + keyUsage=keyEncipherment,dataEncipherment + extendedKeyUsage=serverAuth,clientAuth + subjectAltName=@alt_names +1. Générez la demande de signature de certificat basée sur le fichier de configuration: + + openssl req -new -key server.key -out server.csr -config csr.conf +1. Générez le certificat de serveur en utilisant ca.key, ca.crt et server.csr: + + openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \ + -CAcreateserial -out server.crt -days 10000 \ + -extensions v3_ext -extfile csr.conf +1. Vérifiez le certificat: + + openssl x509 -noout -text -in ./server.crt + +Enfin, ajoutez les mêmes paramètres aux paramètres de démarrage de l'API Server. + +### cfssl + +**cfssl** est un autre outil pour la génération de certificat. + +1. Téléchargez, décompressez et préparez les outils de ligne de commande comme indiqué ci-dessous. + Notez que vous devrez peut-être adapter les exemples de commandes en fonction du matériel, + de l'architecture et de la version de cfssl que vous utilisez. + + curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o cfssl + chmod +x cfssl + curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o cfssljson + chmod +x cfssljson + curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o cfssl-certinfo + chmod +x cfssl-certinfo +1. Créez un répertoire pour contenir les artefacts et initialiser cfssl: + + mkdir cert + cd cert + ../cfssl print-defaults config > config.json + ../cfssl print-defaults csr > csr.json +1. Créez un fichier JSON pour générer le fichier d'autorité de certification, par exemple, `ca-config.json`: + + { + "signing": { + "default": { + "expiry": "8760h" + }, + "profiles": { + "kubernetes": { + "usages": [ + "signing", + "key encipherment", + "server auth", + "client auth" + ], + "expiry": "8760h" + } + } + } + } +1. Créez un fichier JSON pour la demande de signature de certificat de l'autorité de certification, par exemple, + `ca-csr.json`. Assurez-vous de remplacer les valeurs marquées par des "< >" par + les vraies valeurs que vous voulez utiliser. + + { + "CN": "kubernetes", + "key": { + "algo": "rsa", + "size": 2048 + }, + "names":[{ + "C": "", + "ST": "", + "L": "", + "O": "", + "OU": "" + }] + } +1. Générez la clé de CA (`ca-key.pem`) et le certificat (`ca.pem`): + + ../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca +1. Créer un fichier JSON pour générer des clés et des certificats pour l'API Server, + par exemple, `server-csr.json`. Assurez-vous de remplacer les valeurs entre "< >" par + les vraies valeurs que vous voulez utiliser. `MASTER_CLUSTER_IP` est le service Cluster IP + de l'API Server, comme décrit dans la sous-section précédente. + L’exemple ci-dessous suppose également que vous utilisez `cluster.local` par défaut comme + nom de domaine DNS. 
+ + { + "CN": "kubernetes", + "hosts": [ + "127.0.0.1", + "", + "", + "kubernetes", + "kubernetes.default", + "kubernetes.default.svc", + "kubernetes.default.svc.cluster", + "kubernetes.default.svc.cluster.local" + ], + "key": { + "algo": "rsa", + "size": 2048 + }, + "names": [{ + "C": "", + "ST": "", + "L": "", + "O": "", + "OU": "" + }] + } +1. Générez la clé et le certificat pour l'API Server, qui sont par défaut + sauvegardés respectivement dans les fichiers `server-key.pem` et` server.pem`: + + ../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \ + --config=ca-config.json -profile=kubernetes \ + server-csr.json | ../cfssljson -bare server + + +## Distribuer un certificat auto-signé + +Un client peut refuser de reconnaître un certificat auto-signé comme valide. +Pour un déploiement hors production ou pour un déploiement exécuté derrière un +pare-feu d'entreprise, vous pouvez distribuer un certificat auto-signé à tous les clients et +actualiser la liste locale pour les certificats valides. + +Sur chaque client, effectuez les opérations suivantes: + +```bash +$ sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt +$ sudo update-ca-certificates +Updating certificates in /etc/ssl/certs... +1 added, 0 removed; done. +Running hooks in /etc/ca-certificates/update.d.... +done. +``` + +## API pour les certificats + +Vous pouvez utiliser l’API `certificates.k8s.io` pour faire créer des +Certificats x509 à utiliser pour l'authentification, comme documenté +[ici](/docs/tasks/tls/managing-tls-in-a-cluster). + +{{% /capture %}} diff --git a/content/fr/docs/concepts/cluster-administration/cluster-administration-overview.md b/content/fr/docs/concepts/cluster-administration/cluster-administration-overview.md new file mode 100644 index 0000000000000..f7ce34b29bd0b --- /dev/null +++ b/content/fr/docs/concepts/cluster-administration/cluster-administration-overview.md @@ -0,0 +1,68 @@ +--- +title: Vue d'ensemble de l'administration d'un cluster +content_template: templates/concept +weight: 10 +--- + +{{% capture overview %}} +La vue d'ensemble de l'administration d'un cluster est destinée à toute personne créant ou administrant un cluster Kubernetes. +Il suppose une certaine familiarité avec les [concepts](/docs/concepts/) de Kubernetes. +{{% /capture %}} + +{{% capture body %}} +## Planifier le déploiement d'un cluster + +Voir le guide: [choisir la bonne solution](/docs/setup/pick-right-solution/) pour des exemples de planification, de mise en place et de configuration de clusters Kubernetes. Les solutions répertoriées dans cet article s'appellent des *distributions*. + +Avant de choisir un guide, voici quelques considérations: + + - Voulez-vous simplement essayer Kubernetes sur votre machine ou voulez-vous créer un cluster haute disponibilité à plusieurs nœuds? Choisissez les distributions les mieux adaptées à vos besoins. + - **Si vous recherchez la haute disponibilité**, apprenez à configurer des [clusters multi zones](/docs/concepts/cluster-administration/federation/). + - Utiliserez-vous **un cluster Kubernetes hébergé**, comme [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), ou **hébergerez-vous votre propre cluster**? + - Votre cluster sera-t-il **on-premises**, ou **sur un cloud (IaaS)**? Kubernetes ne prend pas directement en charge les clusters hybrides. Cependant, vous pouvez configurer plusieurs clusters. + - **Si vous configurez Kubernetes on-premises**, choisissez le [modèle réseau](/docs/concepts/cluster-administration/networking/) qui vous convient le mieux. 
+ - Voulez-vous faire tourner Kubernetes sur du **bare metal** ou sur des **machines virtuelles (VMs)**? + - Voulez-vous **simplement faire tourner un cluster**, ou vous attendez-vous à faire du **développement actif sur le code du projet Kubernetes**? Dans ce dernier cas, choisissez une distribution activement développée. Certaines distributions n’utilisent que des versions binaires, mais offrent une plus grande variété de choix. + - Familiarisez-vous avec les [composants](/docs/admin/cluster-components/) nécessaires pour faire tourner un cluster. + +A noter: Toutes les distributions ne sont pas activement maintenues. Choisissez des distributions qui ont été testées avec une version récente de Kubernetes. + +## Gérer un cluster + +* [Gérer un cluster](/docs/tasks/administer-cluster/cluster-management/) décrit plusieurs rubriques relatives au cycle de vie d’un cluster: création d’un nouveau cluster, mise à niveau des nœuds maître et des workers de votre cluster, maintenance des nœuds (mises à niveau du noyau, par exemple) et mise à niveau de la version de l’API Kubernetes d’un cluster en cours d’exécution. + +* Apprenez comment [gérer les nœuds](/docs/concepts/nodes/node/). + +* Apprenez à configurer et gérer les [quotas de ressources](/docs/concepts/policy/resource-quotas/) pour les clusters partagés. + +## Sécuriser un cluster + +* La rubrique [Certificats](/docs/concepts/cluster-administration/certificates/) décrit les étapes à suivre pour générer des certificats à l’aide de différentes suites d'outils. + +* L' [Environnement de conteneur dans Kubernetes](/docs/concepts/containers/container-environment-variables/) décrit l'environnement des conteneurs gérés par la Kubelet sur un nœud Kubernetes. + +* Le [Contrôle de l'accès à l'API Kubernetes](/docs/reference/access-authn-authz/controlling-access/) explique comment configurer les autorisations pour les utilisateurs et les comptes de service. + +* La rubrique [Authentification](/docs/reference/access-authn-authz/authentication/) explique l'authentification dans Kubernetes, y compris les différentes options d'authentification. + +* [Autorisations](/docs/reference/access-authn-authz/authorization/) est distinct de l'authentification et contrôle le traitement des appels HTTP. + +* [Utiliser les Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/) explique les plug-ins qui interceptent les requêtes adressées au serveur d'API Kubernetes après authentification et autorisation. + +* [Utiliser Sysctls dans un cluster Kubernetes](/docs/concepts/cluster-administration/sysctl-cluster/) explique aux administrateurs comment utiliser l'outil de ligne de commande `sysctl` pour définir les paramètres du noyau. + +* [Auditer](/docs/tasks/debug-application-cluster/audit/) explique comment interagir avec les journaux d'audit de Kubernetes. + +### Sécuriser la Kubelet + * [Communication Master-Node](/docs/concepts/architecture/master-node-communication/) + * [TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) + * [Kubelet authentification/autorisations](/docs/admin/kubelet-authentication-authorization/) + +## Services de cluster optionnels + +* [Integration DNS](/docs/concepts/services-networking/dns-pod-service/) décrit comment résoudre un nom DNS directement vers un service Kubernetes. + +* [Journalisation et surveillance de l'activité du cluster](/docs/concepts/cluster-administration/logging/) explique le fonctionnement de la connexion à Kubernetes et son implémentation. 
+{{% /capture %}} + + diff --git a/content/fr/docs/concepts/containers/_index.md b/content/fr/docs/concepts/containers/_index.md new file mode 100644 index 0000000000000..9a86e2af74afe --- /dev/null +++ b/content/fr/docs/concepts/containers/_index.md @@ -0,0 +1,4 @@ +--- +title: "Les conteneurs" +weight: 40 +--- \ No newline at end of file diff --git a/content/fr/docs/concepts/containers/container-environment-variables.md b/content/fr/docs/concepts/containers/container-environment-variables.md new file mode 100644 index 0000000000000..efe686422bf0e --- /dev/null +++ b/content/fr/docs/concepts/containers/container-environment-variables.md @@ -0,0 +1,69 @@ +--- +reviewers: +- sieben +- perriea +- lledru +- awkif +- yastij +- rbenzair +- oussemos +title: Les variables d’environnement du conteneur +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} + +Cette page décrit les ressources disponibles pour les conteneurs dans l'environnement de conteneur. + +{{% /capture %}} + + +{{% capture body %}} + +## L'environnement du conteneur + +L’environnement Kubernetes conteneur fournit plusieurs ressources importantes aux conteneurs: + +* Un système de fichier, qui est une combinaison d'une [image](/docs/concepts/containers/images/) et un ou plusieurs [volumes](/docs/concepts/storage/volumes/). +* Informations sur le conteneur lui-même. +* Informations sur les autres objets du cluster. + +### Informations sur le conteneur + +Le nom d'*hôte* d'un conteneur est le nom du pod dans lequel le conteneur est en cours d'exécution. +Il est disponible via la commande `hostname` ou +[`gethostname`](http://man7.org/linux/man-pages/man2/gethostname.2.html) +dans libc. + +Le nom du pod et le namespace sont disponibles en tant que variables d'environnement via +[l'API downward](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/). + +Les variables d'environnement définies par l'utilisateur à partir de la définition de pod sont également disponibles pour le conteneur, +de même que toutes les variables d'environnement spécifiées de manière statique dans l'image Docker. + +### Informations sur le cluster + +Une liste de tous les services en cours d'exécution lors de la création d'un conteneur est disponible pour ce conteneur en tant que variables d'environnement. +Ces variables d'environnement correspondent à la syntaxe des liens Docker. + +Pour un service nommé *foo* qui correspond à un conteneur *bar*, +les variables suivantes sont définies: + +```shell +FOO_SERVICE_HOST= +FOO_SERVICE_PORT= +``` + +Les services ont des adresses IP dédiées et sont disponibles pour le conteneur avec le DNS, +si le [module DNS](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/) est activé.  + +{{% /capture %}} + +{{% capture whatsnext %}} + +* En savoir plus sur [les hooks du cycle de vie d'un conteneur](/docs/concepts/containers/container-lifecycle-hooks/). +* Acquérir une expérience pratique + [en attachant les handlers aux événements du cycle de vie du conteneur](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/). 
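+
+En complément de la section sur les informations du conteneur ci-dessus, voici une esquisse (noms de variables et d'objets choisis pour l'exemple) d'un pod qui expose son nom et son namespace comme variables d'environnement via l'API downward :
+
+```yaml
+# Esquisse : exposition du nom et du namespace du pod via l'API downward.
+apiVersion: v1
+kind: Pod
+metadata:
+  name: dapi-envars-demo
+spec:
+  containers:
+  - name: main
+    image: busybox
+    command: ["sh", "-c", "env | grep MY_POD_ && sleep 3600"]
+    env:
+    - name: MY_POD_NAME
+      valueFrom:
+        fieldRef:
+          fieldPath: metadata.name
+    - name: MY_POD_NAMESPACE
+      valueFrom:
+        fieldRef:
+          fieldPath: metadata.namespace
+```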
+ +{{% /capture %}} diff --git a/content/fr/docs/concepts/containers/runtime-class.md b/content/fr/docs/concepts/containers/runtime-class.md new file mode 100644 index 0000000000000..915d08e5acaef --- /dev/null +++ b/content/fr/docs/concepts/containers/runtime-class.md @@ -0,0 +1,122 @@ +--- +reviewers: +- sieben +- perriea +- lledru +- awkif +- yastij +- rbenzair +- oussemos +title: Classe d'exécution (Runtime Class) +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} + +{{< feature-state for_k8s_version="v1.12" state="alpha" >}} + +Cette page décrit la ressource RuntimeClass et le mécanisme de sélection d'exécution (runtime). + +{{% /capture %}} + + +{{% capture body %}} + +## Runtime Class + +La RuntimeClass est une fonctionnalité alpha permettant de sélectionner la configuration d'exécution du conteneur +à utiliser pour exécuter les conteneurs d'un pod. + +### Installation + +En tant que nouvelle fonctionnalité alpha, certaines étapes de configuration supplémentaires doivent +être suivies pour utiliser la RuntimeClass: + +1. Activer la fonctionnalité RuntimeClass (sur les apiservers et les kubelets, nécessite la version 1.12+) +2. Installer la RuntimeClass CRD +3. Configurer l'implémentation CRI sur les nœuds (dépend du runtime) +4. Créer les ressources RuntimeClass correspondantes + +#### 1. Activer RuntimeClass feature gate (portail de fonctionnalité) + +Voir [Feature Gates](/docs/reference/command-line-tools-reference/feature-gates/) pour une explication +sur l'activation des feature gates. La `RuntimeClass` feature gate doit être activée sur les API servers _et_ +les kubelets. + +#### 2. Installer la CRD RuntimeClass + +La RuntimeClass [CustomResourceDefinition][] (CRD) se trouve dans le répertoire addons du dépôt +Git Kubernetes: [kubernetes/cluster/addons/runtimeclass/runtimeclass_crd.yaml][runtimeclass_crd] + +Installer la CRD avec `kubectl apply -f runtimeclass_crd.yaml`. + +[CustomResourceDefinition]: /docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/ +[runtimeclass_crd]: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/runtimeclass/runtimeclass_crd.yaml + + +#### 3. Configurer l'implémentation CRI sur les nœuds + +Les configurations à sélectionner avec RuntimeClass dépendent de l'implémentation CRI. Consultez +la documentation correspondante pour votre implémentation CRI pour savoir comment le configurer. +Comme c'est une fonctionnalité alpha, tous les CRI ne prennent pas encore en charge plusieurs RuntimeClasses. + +{{< note >}} +La RuntimeClass suppose actuellement une configuration de nœud homogène sur l'ensemble du cluster +(ce qui signifie que tous les nœuds sont configurés de la même manière en ce qui concerne les environnements d'exécution de conteneur). Toute hétérogénéité (configuration variable) doit être +gérée indépendamment de RuntimeClass via des fonctions de planification (scheduling features) (voir [Affectation de pods sur les nœuds](/docs/concepts/configuration/assign-pod-node/)). +{{< /note >}} + +Les configurations ont un nom `RuntimeHandler` correspondant , référencé par la RuntimeClass. +Le RuntimeHandler doit être un sous-domaine DNS valide selon la norme RFC 1123 (alphanumériques + `-` et `.` caractères). + +#### 4. Créer les ressources RuntimeClass correspondantes + +Les configurations effectuées à l'étape 3 doivent chacune avoir un nom `RuntimeHandler` associé, qui +identifie la configuration. 
Pour chaque RuntimeHandler (et optionellement les handlers vides `""`), +créez un objet RuntimeClass correspondant. + +La ressource RuntimeClass ne contient actuellement que 2 champs significatifs: le nom RuntimeClass +(`metadata.name`) et le RuntimeHandler (`spec.runtimeHandler`). la définition de l'objet ressemble à ceci: + +```yaml +apiVersion: node.k8s.io/v1alpha1 # La RuntimeClass est définie dans le groupe d'API node.k8s.io +kind: RuntimeClass +metadata: + name: myclass # Le nom avec lequel la RuntimeClass sera référencée + # La RuntimeClass est une ressource non cantonnées à un namespace +spec: + runtimeHandler: myconfiguration # Le nom de la configuration CRI correspondante +``` + + +{{< note >}} +Il est recommandé de limiter les opérations d'écriture sur la RuntimeClass (create/update/patch/delete) à +l'administrateur du cluster. C'est la configuration par défault. Voir [Vue d'ensemble d'autorisation](https://kubernetes.io/docs/reference/access-authn-authz/authorization/) pour plus de détails. +{{< /note >}} + +### Usage + +Une fois que les RuntimeClasses sont configurées pour le cluster, leur utilisation est très simple. +Spécifiez `runtimeClassName` dans la spécficiation du pod. Par exemple: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + runtimeClassName: myclass + # ... +``` + +Cela indiquera à la kubelet d'utiliser la RuntimeClass spécifiée pour exécuter ce pod. Si la +RuntimeClass n'existe pas, ou si la CRI ne peut pas exécuter le handler correspondant, le pod passera finalement à +[l'état](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) `failed`. Recherchez +[l'événement](/docs/tasks/debug-application-cluster/debug-application-introspection/) correspondant pour un +message d'erreur. + +Si aucun `runtimeClassName` n'est spécifié, le RuntimeHandler par défault sera utilisé, qui équivaut +au comportement lorsque la fonctionnalité RuntimeClass est désactivée. + +{{% /capture %}} diff --git a/content/fr/docs/concepts/overview/_index.md b/content/fr/docs/concepts/overview/_index.md new file mode 100644 index 0000000000000..df9dc83e3d831 --- /dev/null +++ b/content/fr/docs/concepts/overview/_index.md @@ -0,0 +1,4 @@ +--- +title: "Vue d'ensemble" +weight: 20 +--- diff --git a/content/fr/docs/concepts/overview/what-is-kubernetes.md b/content/fr/docs/concepts/overview/what-is-kubernetes.md new file mode 100644 index 0000000000000..a6166f73a9e7e --- /dev/null +++ b/content/fr/docs/concepts/overview/what-is-kubernetes.md @@ -0,0 +1,136 @@ +--- +reviewers: + - jygastaud + - lledru + - sieben +title: Qu'est-ce-que Kubernetes ? +content_template: templates/concept +weight: 10 +card: + name: concepts + weight: 10 +--- + +{{% capture overview %}} +Cette page est une vue d'ensemble de Kubernetes. +{{% /capture %}} + +{{% capture body %}} +Kubernetes est une plate-forme open-source extensible et portable pour la gestion de charges de travail (workloads) et des services conteneurisés. +Elle favorise à la fois l'écriture de configuration déclarative (declarative configuration) et l'automatisation. +C'est un large écosystème en rapide expansion. +Les services, le support et les outils Kubernetes sont largement disponibles. + +Google a rendu open-source le projet Kubernetes en 2014. +Le développement de Kubernetes est basé sur une [décennie et demie d’expérience de Google avec la gestion de la charge et de la mise à l'échelle (scale) en production](https://research.google.com/pubs/pub43438.html), associé aux meilleures idées et pratiques de la communauté. 
+ +## Pourquoi ai-je besoin de Kubernetes et que peut-il faire ? + +Kubernetes a un certain nombre de fonctionnalités. Il peut être considéré comme: + +- une plate-forme de conteneur +- une plate-forme de microservices +- une plate-forme cloud portable +et beaucoup plus. + +Kubernetes fournit un environnement de gestion **focalisé sur le conteneur** (container-centric). +Il orchestre les ressources machines (computing), la mise en réseau et l’infrastructure de stockage sur les workloads des utilisateurs. +Cela permet de se rapprocher de la simplicité des Platform as a Service (PaaS) avec la flexibilité des solutions d'Infrastructure as a Service (IaaS), tout en gardant de la portabilité entre les différents fournisseurs d'infrastructures (providers). + +## Comment Kubernetes est-il une plate-forme ? + +Même si Kubernetes fournit de nombreuses fonctionnalités, il existe toujours de nouveaux scénarios qui bénéficieraient de fonctionnalités complémentaires. +Ces workflows spécifiques à une application permettent d'accélérer la vitesse de développement. +Si l'orchestration fournie de base est acceptable pour commencer, il est souvent nécessaire d'avoir une automatisation robuste lorsque l'on doit la faire évoluer. +C'est pourquoi Kubernetes a également été conçu pour servir de plate-forme et favoriser la construction d’un écosystème de composants et d’outils facilitant le déploiement, la mise à l’échelle et la gestion des applications. + +[Les Labels](/docs/concepts/overview/working-with-objects/labels/) permettent aux utilisateurs d'organiser leurs ressources comme ils/elles le souhaitent. +[Les Annotations](/docs/concepts/overview/working-with-objects/annotations/) autorisent les utilisateurs à définir des informations personnalisées sur les ressources pour faciliter leurs workflows et fournissent un moyen simple aux outils de gérer la vérification d'un état (checkpoint state). + +De plus, le [plan de contrôle Kubernetes (control +plane)](/docs/concepts/overview/components/) est construit sur les mêmes [APIs](/docs/reference/using-api/api-overview/) que celles accessibles aux développeurs et utilisateurs. +Les utilisateurs peuvent écrire leurs propres controlleurs (controllers), tels que les [ordonnanceurs (schedulers)](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/devel/scheduler.md), +avec [leurs propres APIs](/docs/concepts/api-extension/custom-resources/) qui peuvent être utilisés par un [outil en ligne de commande](/docs/user-guide/kubectl-overview/). + +Ce choix de [conception](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) a permis de construire un ensemble d'autres systèmes par dessus Kubernetes. + +## Ce que Kubernetes n'est pas + +Kubernetes n’est pas une solution PaaS (Platform as a Service). +Kubernetes opérant au niveau des conteneurs plutôt qu'au niveau du matériel, il fournit une partie des fonctionnalités des offres PaaS, telles que le déploiement, la mise à l'échelle, l'équilibrage de charge (load balancing), la journalisation (logging) et la surveillance (monitoring). +Cependant, Kubernetes n'est pas monolithique. +Ces implémentations par défaut sont optionnelles et interchangeables. Kubernetes fournit les bases permettant de construire des plates-formes orientées développeurs, en laissant la possibilité à l'utilisateur de faire ses propres choix. + +Kubernetes: + +- Ne limite pas les types d'applications supportées. 
Kubernetes prend en charge des workloads extrêmement divers, dont des applications stateless, stateful ou orientées traitement de données (data-processing).
+Si l'application peut fonctionner dans un conteneur, elle devrait bien fonctionner sur Kubernetes.
+- Ne déploie pas de code source et ne build pas d'application non plus. Les workflows d'Intégration Continue, de Livraison Continue et de Déploiement Continu (CI/CD) sont réalisés en fonction de la culture d'entreprise, des préférences ou des pré-requis techniques.
+- Ne fournit pas nativement de services au niveau applicatif tels que des middlewares (e.g., message buses), des frameworks de traitement de données (par exemple, Spark), des bases de données (e.g., mysql), des caches ou des systèmes de stockage clusterisés (e.g., Ceph).
+Ces composants peuvent être lancés dans Kubernetes et/ou être accessibles à des applications tournant dans Kubernetes via des mécaniques d'intermédiation telles qu'Open Service Broker.
+- N'impose pas de solutions de logging, de monitoring ou d'alerting.
+Kubernetes fournit quelques intégrations primaires et des mécanismes de collecte et d'export de métriques.
+- Ne fournit ni n'impose de langage/système de configuration (e.g., [jsonnet](https://github.com/google/jsonnet)).
+Il fournit une API déclarative qui peut être ciblée par n'importe quelle forme de spécifications déclaratives.
+- Ne fournit ni n'adopte aucune mécanique de configuration des machines, de maintenance, de gestion ou de contrôle de la santé des systèmes.
+
+De plus, Kubernetes n'est pas vraiment un _système d'orchestration_. En réalité, il élimine le besoin d'orchestration.
+Techniquement, l'_orchestration_ se définit par l'exécution d'un workflow défini : premièrement faire A, puis B, puis C.
+Kubernetes, quant à lui, est composé d'un ensemble de processus de contrôle qui pilotent l'état courant vers l'état désiré.
+Peu importe comment on arrive du point A au point C.
+Un contrôle centralisé n'est pas non plus requis.
+Cela aboutit à un système plus simple à utiliser et plus puissant, robuste, résilient et extensible.
+
+## Pourquoi les conteneurs ?
+
+Vous cherchez des raisons d'utiliser des conteneurs ?
+
+![Pourquoi les conteneurs ?](/images/docs/why_containers.svg)
+
+L'_ancienne façon (old way)_ de déployer des applications consistait à installer les applications sur un hôte en utilisant les systèmes de gestion de paquets natifs.
+Cela avait pour principal inconvénient de lier fortement les exécutables, la configuration, les librairies et le cycle de vie de chacun avec l'OS.
+Il est bien entendu possible de construire une image de machine virtuelle (VM) immuable pour arriver à produire des publications (rollouts) ou des retours arrière (rollbacks), mais les VMs sont lourdes et non portables.
+
+La _nouvelle façon (new way)_ consiste à déployer des conteneurs basés sur une virtualisation au niveau du système d'exploitation (operating-system-level) plutôt que sur de la virtualisation hardware.
+Ces conteneurs sont isolés les uns des autres et de l'hôte :
+ils ont leurs propres systèmes de fichiers, ne peuvent voir que leurs propres processus et leur usage des ressources peut être contraint.
+Ils sont aussi plus faciles à construire que des VMs, et vu qu'ils sont décorrélés de l'infrastructure sous-jacente et du système de fichiers de l'hôte, ils sont aussi portables entre les différents fournisseurs de Cloud et les OS.
+
+Étant donné que les conteneurs sont petits et rapides, une application peut être packagée dans chaque image de conteneur.
+
+Résumé des bénéfices des conteneurs :
+
+- **Création et déploiement agiles d'applications** :
+  Augmente la simplicité et l'efficacité de la création d'images par rapport à l'utilisation d'images de VM.
+- **Développement, intégration et déploiement continus** :
+  Fournit un processus pour construire et déployer fréquemment et de façon fiable, avec la capacité de faire des rollbacks rapides et simples (grâce à l'immuabilité de l'image).
+- **Séparation des besoins entre Dev et Ops** :
+  Création d'images applicatives au moment du build plutôt qu'au déploiement, tout en séparant l'application de l'infrastructure.
+- **Observabilité** :
+  Pas seulement des informations venant du système d'exploitation sous-jacent, mais aussi des signaux propres à l'application.
+- **Cohérence entre les environnements de développement, de test et de production** :
+  Fonctionne de la même manière sur un poste local que chez un fournisseur d'hébergement / dans le Cloud.
+- **Portabilité entre Clouds et distributions d'OS** :
+  Fonctionne sur Ubuntu, RHEL, CoreOS, on-prem, Google Kubernetes Engine, et n'importe où ailleurs.
+- **Gestion centrée sur l'application** :
+  Bascule le niveau d'abstraction d'une virtualisation hardware liée à l'OS vers une logique de ressources orientée application.
+- **[Micro-services](https://martinfowler.com/articles/microservices.html) faiblement couplés, distribués, élastiques** :
+  Les applications sont séparées en petits morceaux indépendants qui peuvent être déployés et gérés dynamiquement -- et non une stack monolithique tournant sur une seule grosse machine à tout faire.
+- **Isolation des ressources** :
+  Performances de l'application prévisibles.
+- **Utilisation des ressources** :
+  Haute efficacité et densité.
+
+## Que signifie Kubernetes ? K8s ?
+
+Le nom **Kubernetes** tire son origine du grec ancien, signifiant _capitaine_ ou _pilote_, et est la racine de _gouverneur_ et de [cybernétique](http://www.etymonline.com/index.php?term=cybernetics). _K8s_ est l'abréviation obtenue en remplaçant les 8 lettres "ubernete" par "8".
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+* Prêt à [commencer](/docs/setup/) ?
+* Pour plus de détails, voir la [documentation Kubernetes](/docs/home/).
+{{% /capture %}} diff --git a/content/fr/docs/home/_index.md b/content/fr/docs/home/_index.md new file mode 100644 index 0000000000000..46b0678291a4b --- /dev/null +++ b/content/fr/docs/home/_index.md @@ -0,0 +1,19 @@ +--- +approvers: +- chenopis +title: Documentation de Kubernetes +noedit: true +cid: docsHome +layout: docsportal_home +class: gridPage +linkTitle: "Home" +main_menu: true +weight: 10 +hide_feedback: true +menu: + main: + title: "Documentation" + weight: 20 + post: > +

Apprenez à utiliser Kubernetes à l'aide d'une documentation conceptuelle, didactique et de référence. Vous pouvez même aider en contribuant à la documentation!

+--- diff --git a/content/fr/docs/home/supported-doc-versions.md b/content/fr/docs/home/supported-doc-versions.md new file mode 100644 index 0000000000000..3be5b0d2d83be --- /dev/null +++ b/content/fr/docs/home/supported-doc-versions.md @@ -0,0 +1,22 @@ +--- +title: Versions supportées de la documentation Kubernetes +content_template: templates/concept +--- + +{{% capture overview %}} + +Ce site contient la documentation de la version actuelle de Kubernetes et les quatre versions précédentes de Kubernetes. + +{{% /capture %}} + +{{% capture body %}} + +## Version courante + +La version actuelle est [{{< param "version" >}}](/). + +## Versions précédentes + +{{< versions-other >}} + +{{% /capture %}} diff --git a/content/fr/docs/reference/kubectl/_index.md b/content/fr/docs/reference/kubectl/_index.md new file mode 100755 index 0000000000000..0c3d7882f6d13 --- /dev/null +++ b/content/fr/docs/reference/kubectl/_index.md @@ -0,0 +1,5 @@ +--- +title: "CLI kubectl" +weight: 60 +--- + diff --git a/content/fr/docs/reference/kubectl/cheatsheet.md b/content/fr/docs/reference/kubectl/cheatsheet.md new file mode 100644 index 0000000000000..d8252970e7e05 --- /dev/null +++ b/content/fr/docs/reference/kubectl/cheatsheet.md @@ -0,0 +1,342 @@ +--- +title: Aide-mémoire kubectl +content_template: templates/concept +card: + name: reference + weight: 30 +--- + +{{% capture overview %}} + +Voir aussi : [Aperçu Kubectl](/docs/reference/kubectl/overview/) et [Guide JsonPath](/docs/reference/kubectl/jsonpath). + +Cette page donne un aperçu de la commande `kubectl`. + +{{% /capture %}} + +{{% capture body %}} + +# Aide-mémoire kubectl + +## Auto-complétion avec Kubectl + +### BASH + +```bash +source <(kubectl completion bash) # active l'auto-complétion pour bash dans le shell courant, le paquet bash-completion devant être installé au préalable +echo "source <(kubectl completion bash)" >> ~/.bashrc # ajoute l'auto-complétion de manière permanente à votre shell bash +``` + +Vous pouvez de plus déclarer un alias pour `kubectl` qui fonctionne aussi avec l'auto-complétion : + +```bash +alias k=kubectl +complete -F __start_kubectl k +``` + +### ZSH + +```bash +source <(kubectl completion zsh) # active l'auto-complétion pour zsh dans le shell courant +echo "if [ $commands[kubectl] ]; then source <(kubectl completion zsh); fi" >> ~/.zshrc # ajoute l'auto-complétion de manière permanente à votre shell zsh +``` + +## Contexte et configuration de Kubectl + +Indique avec quel cluster Kubernetes `kubectl` communique et modifie les informations de configuration. Voir la documentation [Authentification multi-clusters avec kubeconfig](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) pour des informations détaillées sur le fichier de configuration. 
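+
+En complément, si vous travaillez avec plusieurs fichiers kubeconfig, vous pouvez par exemple les fusionner en un seul fichier autonome (les chemins utilisés ici sont purement illustratifs) :
+
+```bash
+# Fusionne plusieurs fichiers kubeconfig en un seul fichier aplati (certificats inclus en ligne)
+KUBECONFIG=~/.kube/config:~/.kube/config-dev kubectl config view --flatten > ~/.kube/config-fusionne
+
+# Utilise ensuite ce fichier fusionné pour les commandes suivantes
+export KUBECONFIG=~/.kube/config-fusionne
+```
+
+Les commandes ci-dessous récapitulent les opérations courantes sur les contextes et la configuration.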
+
+```bash
+kubectl config view # Affiche les paramètres fusionnés de kubeconfig
+
+# Utilise plusieurs fichiers kubeconfig en même temps et affiche la configuration fusionnée
+KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view
+
+# Affiche le mot de passe pour l'utilisateur e2e
+kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'
+
+kubectl config current-context # Affiche le contexte courant (current-context)
+kubectl config use-context my-cluster-name # Définit my-cluster-name comme contexte courant
+
+# Ajoute un nouveau cluster à votre kubeconfig, prenant en charge l'authentification de base (basic auth)
+kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword
+
+# Définit et utilise un contexte qui utilise un nom d'utilisateur et un namespace spécifiques
+kubectl config set-context gce --user=cluster-admin --namespace=foo \
+  && kubectl config use-context gce
+```
+
+## Création d'objets
+
+Les manifests Kubernetes peuvent être définis en JSON ou en YAML. Les extensions de fichier `.yaml`,
+`.yml`, et `.json` peuvent être utilisées.
+
+```bash
+kubectl create -f ./my-manifest.yaml # crée une ou plusieurs ressources
+kubectl create -f ./my1.yaml -f ./my2.yaml # crée depuis plusieurs fichiers
+kubectl create -f ./dir # crée une ou plusieurs ressources depuis tous les manifests dans dir
+kubectl create -f https://git.io/vPieo # crée une ou plusieurs ressources depuis une url
+kubectl create deployment nginx --image=nginx # démarre une instance unique de nginx
+kubectl explain pods,svc # affiche la documentation pour les manifests pod et svc
+
+# Crée plusieurs objets YAML depuis l'entrée standard (stdin)
+cat </dev/null; printf "\n"; done
+
+# Vérifie quels noeuds sont prêts
+JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
+ && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"
+
+# Liste tous les Secrets actuellement utilisés par un pod
+kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq
+
+# Liste les événements (Events) classés par timestamp
+kubectl get events --sort-by=.metadata.creationTimestamp
+```
+
+## Mise à jour de ressources
+
+Depuis la version 1.11, `rolling-update` a été déprécié (voir [CHANGELOG-1.11.md](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md)), utilisez plutôt `rollout`.
+
+```bash
+kubectl set image deployment/frontend www=image:v2 # Rolling update du conteneur "www" du déploiement "frontend", par mise à jour de son image
+kubectl rollout undo deployment/frontend # Rollback du déploiement précédent
+kubectl rollout status -w deployment/frontend # Écoute (Watch) le statut du rolling update du déploiement "frontend" jusqu'à ce qu'il se termine
+
+# déprécié depuis la version 1.11
+kubectl rolling-update frontend-v1 -f frontend-v2.json # (déprécié) Rolling update des pods de frontend-v1
+kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2 # (déprécié) Modifie le nom de la ressource et met à jour l'image
+kubectl rolling-update frontend --image=image:v2 # (déprécié) Met à jour l'image du pod du déploiement frontend
+kubectl rolling-update frontend-v1 frontend-v2 --rollback # (déprécié) Annule (rollback) le rollout en cours
+
+cat pod.json | kubectl replace -f - # Remplace un pod, en utilisant un JSON passé en entrée standard
+
+# Remplace de manière forcée (Force replace), supprime puis re-crée la ressource. Provoque une interruption de service.
+kubectl replace --force -f ./pod.json
+
+# Crée un service pour un nginx répliqué, qui sert sur le port 80 et se connecte aux conteneurs sur le port 8000
+kubectl expose rc nginx --port=80 --target-port=8000
+
+# Modifie la version (tag) de l'image du conteneur unique du pod à v4
+kubectl get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | kubectl replace -f -
+
+kubectl label pods my-pod new-label=awesome # Ajoute un Label
+kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq # Ajoute une annotation
+kubectl autoscale deployment foo --min=2 --max=10 # Mise à l'échelle automatique (Auto scale) d'un déploiement "foo"
+```
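+
+En complément des commandes ci-dessus, on peut aussi consulter l'historique des révisions d'un déploiement et revenir à une révision précise (le déploiement "frontend" est repris ici à titre d'exemple) :
+
+```bash
+kubectl rollout history deployment/frontend               # Affiche l'historique des révisions du déploiement "frontend"
+kubectl rollout undo deployment/frontend --to-revision=2  # Rollback du déploiement vers une révision spécifique
+```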
+
+## Mise à jour partielle de ressources
+
+```bash
+kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}' # Met à jour partiellement un noeud
+
+# Met à jour l'image d'un conteneur ; spec.containers[*].name est requis car c'est une clé de fusion (merge key)
+kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'
+
+# Met à jour l'image d'un conteneur en utilisant un patch JSON avec tableaux indexés
+kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'
+
+# Désactive la livenessProbe d'un déploiement en utilisant un patch JSON avec tableaux indexés
+kubectl patch deployment valid-deployment --type json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]'
+
+# Ajoute un nouvel élément à un tableau indexé
+kubectl patch sa default --type='json' -p='[{"op": "add", "path": "/secrets/1", "value": {"name": "whatever" } }]'
+```
+
+## Édition de ressources
+
+Ceci édite n'importe quelle ressource de l'API dans un éditeur.
+
+```bash
+kubectl edit svc/docker-registry # Édite le service nommé docker-registry
+KUBE_EDITOR="nano" kubectl edit svc/docker-registry # Utilise un autre éditeur
+```
+
+## Mise à l'échelle de ressources
+
+```bash
+kubectl scale --replicas=3 rs/foo # Scale un replicaset nommé 'foo' à 3
+kubectl scale --replicas=3 -f foo.yaml # Scale une ressource spécifiée dans "foo.yaml" à 3
+kubectl scale --current-replicas=2 --replicas=3 deployment/mysql # Si la taille du déploiement nommé mysql est actuellement 2, scale mysql à 3
+kubectl scale --replicas=5 rc/foo rc/bar rc/baz # Scale plusieurs contrôleurs de réplication
+```
+
+## Suppression de ressources
+
+```bash
+kubectl delete -f ./pod.json # Supprime un pod en utilisant le type et le nom spécifiés dans pod.json
+kubectl delete pod,service baz foo # Supprime les pods et services ayant les mêmes noms "baz" et "foo"
+kubectl delete pods,services -l name=myLabel # Supprime les pods et services ayant le label name=myLabel
+kubectl delete pods,services -l name=myLabel --include-uninitialized # Supprime les pods et services, dont ceux non initialisés, ayant le label name=myLabel
+kubectl -n my-ns delete po,svc --all # Supprime tous les pods et services, dont ceux non initialisés, dans le namespace my-ns
+```
+
+## Interaction avec des Pods en cours d'exécution
+
+```bash
+kubectl logs my-pod # Affiche les logs du pod (stdout)
+kubectl logs my-pod --previous # Affiche les logs du pod (stdout) pour une instance précédente du conteneur
+kubectl logs my-pod -c my-container # Affiche les logs d'un conteneur particulier du pod (stdout, cas d'un pod multi-conteneurs)
+kubectl logs my-pod -c my-container --previous # Affiche les logs d'un conteneur particulier du pod (stdout, cas d'un pod multi-conteneurs) pour une instance précédente du conteneur
+kubectl logs -f my-pod # Fait défiler (stream) les logs du pod (stdout)
+kubectl logs -f my-pod -c my-container # Fait défiler (stream) les logs d'un conteneur particulier du pod (stdout, cas d'un pod multi-conteneurs)
+kubectl run -i --tty busybox --image=busybox -- sh # Exécute un pod comme un shell interactif
+kubectl attach my-pod -i # S'attache à un conteneur en cours d'exécution
+kubectl port-forward my-pod 5000:6000 # Écoute le port 5000 de la machine locale et forwarde vers le port 6000 de my-pod
+kubectl exec my-pod -- ls / # Exécute une commande dans un pod existant (cas d'un seul conteneur)
+kubectl exec my-pod -c my-container -- ls / # Exécute une commande dans un pod existant (cas multi-conteneurs)
+kubectl top pod POD_NAME --containers # Affiche les métriques pour un pod donné et ses conteneurs
+```
+
+## Interaction avec des Noeuds et Clusters
+
+```bash
+kubectl cordon mon-noeud # Marque mon-noeud comme non assignable (unschedulable)
+kubectl drain mon-noeud # Draine mon-noeud en préparation d'une mise en maintenance
+kubectl uncordon mon-noeud # Marque mon-noeud comme assignable
+kubectl top node mon-noeud # Affiche les métriques pour un noeud donné
+kubectl cluster-info # Affiche les adresses du master et des services
+kubectl cluster-info dump # Affiche l'état courant du cluster sur stdout
+kubectl cluster-info dump --output-directory=/path/to/cluster-state # Écrit l'état courant du cluster dans /path/to/cluster-state
+
+# Si une teinte (taint) avec cette clé et cet effet existe déjà, sa valeur est remplacée comme spécifié.
+kubectl taint nodes foo dedicated=special-user:NoSchedule
+```
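+
+Pour mémoire, la même commande `kubectl taint` permet aussi de retirer une teinte en suffixant l'effet par `-` (le noeud "foo" et la clé "dedicated" sont repris ici à titre d'exemple) :
+
+```bash
+# Retire du noeud foo la teinte ayant la clé dedicated et l'effet NoSchedule, si elle existe
+kubectl taint nodes foo dedicated:NoSchedule-
+```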
+
+### Types de ressources
+
+Liste tous les types de ressources pris en charge avec leurs noms courts (shortnames), leur [groupe d'API (API group)](/docs/concepts/overview/kubernetes-api/#api-groups), s'ils sont [cantonnés à un namespace (namespaced)](/docs/concepts/overview/working-with-objects/namespaces) et leur [Genre (Kind)](/docs/concepts/overview/working-with-objects/kubernetes-objects) :
+
+```bash
+kubectl api-resources
+```
+
+Autres opérations pour explorer les ressources de l'API :
+
+```bash
+kubectl api-resources --namespaced=true # Toutes les ressources cantonnées à un namespace
+kubectl api-resources --namespaced=false # Toutes les ressources non cantonnées à un namespace
+kubectl api-resources -o name # Toutes les ressources avec un affichage simple (uniquement le nom de la ressource)
+kubectl api-resources -o wide # Toutes les ressources avec un affichage étendu (alias "wide")
+kubectl api-resources --verbs=list,get # Toutes les ressources prenant en charge les verbes de requête "list" et "get"
+kubectl api-resources --api-group=extensions # Toutes les ressources dans le groupe d'API "extensions"
+```
+
+### Formatage de l'affichage
+
+Pour afficher les détails sur votre terminal dans un format spécifique, vous pouvez utiliser une des options `-o` ou `--output` avec les commandes `kubectl` qui les prennent en charge.
+
+| Format d'affichage | Description |
+|-------------------------------------|-----------------------------------------------------------------------------------------------------------------------|
+| `-o=custom-columns=<spec>` | Affiche un tableau en spécifiant une liste de colonnes séparées par des virgules |
+| `-o=custom-columns-file=<filename>` | Affiche un tableau en utilisant les colonnes spécifiées dans le fichier `<filename>` |
+| `-o=json` | Affiche un objet de l'API formaté en JSON |
+| `-o=jsonpath=