From ea0bca239471bfa172e78df12c6b216a8d16312c Mon Sep 17 00:00:00 2001 From: Dragons Date: Sat, 24 Jun 2017 10:27:38 +0800 Subject: [PATCH 01/87] what-is-kubernetes-pr --- cn/docs/concepts/overview/what-is-kubernetes.md | 17 ++++++++--------- cn/docs/whatisk8s.md | 1 + 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/cn/docs/concepts/overview/what-is-kubernetes.md b/cn/docs/concepts/overview/what-is-kubernetes.md index 3a651964f6e78..52d141a409aba 100644 --- a/cn/docs/concepts/overview/what-is-kubernetes.md +++ b/cn/docs/concepts/overview/what-is-kubernetes.md @@ -5,14 +5,14 @@ assignees: title: 认识 Kubernetes? --- -Kubernetes 是一个 [开源的容器调度平台,它可以自动化应用容器的部署、扩展和操作](http://www.slideshare.net/BrianGrant11/wso2con-us-2015-kubernetes-a-platform-for-automating-deployment-scaling-and-operations) 可以跨主机集群, 提供以容器为中心的基础架构。 +Kubernetes 是一个跨主机集群的 [开源的容器调度平台,它可以自动化应用容器的部署、扩展和操作](http://www.slideshare.net/BrianGrant11/wso2con-us-2015-kubernetes-a-platform-for-automating-deployment-scaling-and-operations) , 提供以容器为中心的基础架构。 使用 Kubernetes, 您可以快速高效地响应客户需求: - 快速、可预测地部署您的应用程序 - 拥有即时扩展应用程序的能力 - 不影响现有业务的情况下,无缝地发布新功能。 - - 优化您的硬件资源,降低您的拥有成本 + - 优化您的硬件资源,降低您的所需成本 我们的目标是构建一个软件和工具的生态系统,以减轻您在公共云或私有云运行应用程序的负担。 @@ -36,11 +36,11 @@ Kubernetes 项目由 Google 公司在 2014 年启动。Kubernetes 建立在 [Goo *新方式* 是基于操作系统级虚拟化而不是硬件级虚拟化方法来部署容器。容器之间彼此隔离并与主机隔离:它们具有自己的文件系统,不能看到彼此的进程,并且它们所使用的计算资源是可以被限制的。它们比虚拟机更容易构建,并且因为它们与底层基础架构和主机文件系统隔离,所以它们可以跨云和操作系统快速分发。 -由于容器体积小且启动快,因此可以在每个容器镜像中打包一个应用程序。这种一对一的应用镜像关系拥有很多好处。使用容器,不需要与外部的基础架构环境绑定, 因为每一个应用程序不需要外部依赖,更不需要与外部的基础架构环境依赖。完美解决了从开发到生产环境的一致性问题。 +由于容器体积小且启动快,因此可以在每个容器镜像中打包一个应用程序。这种一对一的应用镜像关系拥有很多好处。使用容器,不需要与外部的基础架构环境绑定, 因为每一个应用程序都不需要外部依赖,更不需要与外部的基础架构环境依赖。完美解决了从开发到生产环境的一致性问题。 -类似地,容器比虚拟机更加透明,这有助于监测和管理。真实的情况是,容器进程的生命周期由基础设施管理,而容器内的进程对外是隐藏的。最后,每个应用程序用容器封装,管理容器部署就等同于管理应用程序部署。 +容器同样比虚拟机更加透明,这有助于监测和管理。尤其是容器进程的生命周期由基础设施管理,而不是由容器内的进程对外隐藏时更是如此。最后,每个应用程序用容器封装,管理容器部署就等同于管理应用程序部署。 -容器好处摘要: +容器优点摘要: * **敏捷的应用程序创建和部署**: 与虚拟机镜像相比,容器镜像更容易创建,提升了硬件的使用效率。 @@ -55,7 +55,7 @@ Kubernetes 项目由 Google 公司在 2014 年启动。Kubernetes 建立在 [Goo * **以应用为中心的管理**: 提升了操作系统的抽象级别,以便在使用逻辑资源的操作系统上运行应用程序。 * **松耦合、分布式、弹性伸缩 [微服务](http://martinfowler.com/articles/microservices.html)**: - 应用程序被分成更小,更独立的部分,可以动态部署和管理 - 而不是巨型单体应用运行在专用的大型机。 + 应用程序被分成更小,更独立的部分,可以动态部署和管理 - 而不是巨型单体应用运行在专用的大型机上。 * **资源隔离**: 通过对应用进行资源隔离,可以很容易的预测应用程序性能。 * **资源利用**: @@ -87,11 +87,11 @@ Kubernetes 满足了生产中运行应用程序的许多常见的需求,例如 #### 为什么 Kubernetes 是一个平台? 
-Kubernetes 提供了很多的功能,总会有新的场景会受益于新特性。它可以简化应用程序的工作流,加快开发速度。被大家认可的应用编排通常需要有较强的自动化能力。这就是为什么 Kubernetes 被设计作为构建组件和工具的生态系统平台,以便更轻松地部署、扩展和管理应用程序。 +Kubernetes 提供了很多的功能,总会有新的场景受益于新特性。它可以简化应用程序的工作流,加快开发速度。被大家认可的应用编排通常需要有较强的自动化能力。这就是为什么 Kubernetes 被设计作为构建组件和工具的生态系统平台,以便更轻松地部署、扩展和管理应用程序。 [Label](/docs/user-guide/labels/) 允许用户按照自己的方式组织管理对应的资源。 [注解](/docs/user-guide/annotations/) 使用户能够以自定义的描述信息来修饰资源,以适用于自己的工作流,并为管理工具提供检查点状态的简单方法。 -此外,[Kubernetes 控制面](/docs/admin/cluster-components) 是构建在相同的 [APIs](/docs/api/) 上面,开发员人和用户都可以用。用户可以编写自己的控制器, [调度器](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/scheduler.md)等等,如果这么做,根据新加的[自定义 API](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/extending-api.md) ,可以扩展当前的通用 [CLI 命令行工具](/docs/user-guide/kubectl-overview/)。 +此外,[Kubernetes 控制面](/docs/admin/cluster-components) 是构建在相同的 [APIs](/docs/api/) 上面,开发人员和用户都可以用。用户可以编写自己的控制器, [调度器](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/scheduler.md)等等,如果这么做,根据新加的[自定义 API](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/extending-api.md) ,可以扩展当前的通用 [CLI 命令行工具](/docs/user-guide/kubectl-overview/)。 这种 [设计](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/principles.md) 使得许多其他系统可以构建在 Kubernetes 之上。 @@ -116,4 +116,3 @@ Kubernetes 不是一个传统意义上,包罗万象的 PaaS (平台即服务) #### *Kubernetes* 是什么意思? K8s? 名称 **Kubernetes** 源于希腊语,意为 "舵手" 或 "飞行员", 且是英文 "governor" 和 ["cybernetic"](http://www.etymonline.com/index.php?term=cybernetics)的词根。 **K8s** 是通过将 8 个字母 "ubernete" 替换为 8 而导出的缩写。另外,在中文里,k8s 的发音与 Kubernetes 的发音比较接近。 - diff --git a/cn/docs/whatisk8s.md b/cn/docs/whatisk8s.md index 61432c6991da0..e4e662c152509 100644 --- a/cn/docs/whatisk8s.md +++ b/cn/docs/whatisk8s.md @@ -1,6 +1,7 @@ --- assignees: - k8s-merge-robot + title: 认识 Kubernetes? --- From bb02346e47161b38154e6b43c07f45a59f769eba Mon Sep 17 00:00:00 2001 From: Dragons Date: Sat, 24 Jun 2017 10:28:34 +0800 Subject: [PATCH 02/87] kubernetes-basics-pr --- cn/docs/tutorials/kubernetes-basics/index.html | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) mode change 100755 => 100644 cn/docs/tutorials/kubernetes-basics/index.html diff --git a/cn/docs/tutorials/kubernetes-basics/index.html b/cn/docs/tutorials/kubernetes-basics/index.html old mode 100755 new mode 100644 index 856a6dcc28431..eac5a29ad5219 --- a/cn/docs/tutorials/kubernetes-basics/index.html +++ b/cn/docs/tutorials/kubernetes-basics/index.html @@ -17,15 +17,15 @@

Kubernetes 基础

-

本教程介绍了 Kubernetes 集群编排系统的基础知识。每个模块包含关于 Kubernetes 主要特性和概念的一些背景信息,并包括一个交互式在线教程。这些交互式教程让您可以自己管理一个简单的集群及其容器化应用程序。

-

使用交互式教程,您可以学习:

+

本教程介绍了 Kubernetes 集群编排系统的基础知识。每个模块包含关于 Kubernetes 主要特性和概念的一些背景信息,并包括一个在线互动教程。这些互动教程让您可以自己管理一个简单的集群及其容器化应用程序。

+

使用互动教程,您可以学习:

  • 在集群上部署容器化应用程序
  • 对部署进行伸缩
  • 使用新的软件版本,更新容器化应用程序
  • 调试容器化应用程序
-

教程 Katacoda 在您的浏览器中运行一个虚拟终端,在浏览器中运行 Minikube,这是一个可在任何地方小规模本地部署的 Kubernetes 集群。没有安装任何软件或进行任何配置; 每个交互性教程都直接从您的网页浏览器上运行。

+

本教程使用 Katacoda 在您的浏览器中运行一个虚拟终端,并在浏览器中运行 Minikube,这是一个可在任何地方小规模本地部署的 Kubernetes 集群。不需要安装任何软件或进行任何配置;每个互动教程都直接在您的网页浏览器上运行。

@@ -34,7 +34,7 @@

Kubernetes 基础

Kubernetes 可以为您做些什么?

-

现代的 Web 服务,用户希望应用程序能够 24/7 全天候使用,开发人员希望每天可以多次发布部署新版本的应用程序。 容器化可以帮助软件包服务于这些目标,使应用程序能够以简单快速的方式发布和更新,而无需停机。Kubernetes 帮助您确保这些容器化的应用程序在您想要的地方和时间运行,并帮助应用程序找到它们需要的资源的工具。 Kubernetes 是一个生产可用的开源平台,具有 Google 容器集群方面的设计与经验积累,拥有来自社区的最佳实践。

+

对于现代的 Web 服务,用户希望应用程序能够 24/7 全天候可用,开发人员则希望每天可以多次发布部署新版本的应用程序。容器化帮助打包软件以达成这些目标,使应用程序能够以简单快速的方式发布和更新,而无需停机。Kubernetes 帮助您确保这些容器化的应用程序在您想要的时间和地点运行,并帮助应用程序找到它们需要的资源和工具。Kubernetes 是一个可用于生产环境的开源平台,基于 Google 在容器集群方面积累的经验,以及来自社区的最佳实践而设计。

@@ -77,7 +77,7 @@

Kubernetes 基础模块

From 88ed4d38bd9efbeb5f42365a5a988128c595cd0a Mon Sep 17 00:00:00 2001 From: Dragons Date: Sat, 24 Jun 2017 10:29:44 +0800 Subject: [PATCH 03/87] kubernetes-basics-cluster-pr --- .../kubernetes-basics/cluster-interactive.html | 2 +- .../kubernetes-basics/cluster-intro.html | 16 ++++++++-------- 2 files changed, 9 insertions(+), 9 deletions(-) mode change 100755 => 100644 cn/docs/tutorials/kubernetes-basics/cluster-interactive.html mode change 100755 => 100644 cn/docs/tutorials/kubernetes-basics/cluster-intro.html diff --git a/cn/docs/tutorials/kubernetes-basics/cluster-interactive.html b/cn/docs/tutorials/kubernetes-basics/cluster-interactive.html old mode 100755 new mode 100644 index e1b1613bdbb96..c69ce8ec9657c --- a/cn/docs/tutorials/kubernetes-basics/cluster-interactive.html +++ b/cn/docs/tutorials/kubernetes-basics/cluster-interactive.html @@ -1,5 +1,5 @@ --- -title: 交互式教程 - 创建集群 +title: 互动教程 - 创建集群 --- diff --git a/cn/docs/tutorials/kubernetes-basics/cluster-intro.html b/cn/docs/tutorials/kubernetes-basics/cluster-intro.html old mode 100755 new mode 100644 index 3c1057f4228b0..927a2fdf33546 --- a/cn/docs/tutorials/kubernetes-basics/cluster-intro.html +++ b/cn/docs/tutorials/kubernetes-basics/cluster-intro.html @@ -31,12 +31,12 @@

目标

Kubernetes 集群

- Kubernetes 用于协调高度可用的计算机集群,这些计算机群集被连接作为单个单元工作。 Kubernetes 中的抽象允许您将容器化的应用程序部署到集群,而不必专门将其绑定到单个计算机。为了利用这种新的部署模型,应用程序需要以将它们与各个主机分离的方式打包: 它们需要被容器化。容器化应用程序比过去的部署模型更灵活和可用,其中应用程序直接安装到特定机器上,作为深入集成到主机中的软件包。 Kubernetes 以更有效的方式自动化、跨集群的容器应用程序的分发和调度。 Kubernetes 是一个 开源 平台,为生产环境准备的。 + Kubernetes 用于协调高度可用的计算机集群,这些计算机群集被连接作为单个单元工作。 Kubernetes 的抽象性允许您将容器化的应用程序部署到集群,而不必专门将其绑定到单个计算机。为了利用这种新的部署模型,应用程序需要以将它们与各个主机分离的方式打包: 它们需要被容器化。容器化应用程序比过去的部署模型更灵活和可用,其中应用程序直接安装到特定机器上,作为深入集成到主机中的软件包。 Kubernetes 在一个集群上以更有效的方式自动分发和调度容器应用程序。 Kubernetes 是一个 开源 平台,并且已经准备好了帮助生产。

Kubernetes 集群由两种类型的资源组成:

    -
  • 一个 Master 调度节点
  • -
  • Nodes 应用程序实际运行的地方
  • +
  • 一个 Master 调度集群
  • +
  • 节点 是应用程序实际运行的地方

@@ -74,21 +74,21 @@

集群图

Master 负责管理集群。 master 协调集群中的所有活动,例如调度应用程序、维护应用程序的所需状态、扩展应用程序和滚动更新。

-

node 是 Kubernetes 集群中的工作机器,可以是物理机或虚拟机。 每个工作节点都有一个 Kubelet,它是管理 node 并与 Kubernetes Master 节点进行通信的代理。node 上还应具有处理容器操作的工作,例如 Dockerrkt。一个 Kubernetes 工作集群至少有三个 node 节点。

+

节点是 Kubernetes 集群中的工作机器,可以是物理机或虚拟机。每个工作节点都有一个 Kubelet,它是管理节点并与 Kubernetes Master 节点进行通信的代理。节点上还应具有处理容器操作的工具,例如 Docker 或 rkt。一个 Kubernetes 工作集群至少应有三个节点。
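作为参考,下面给出几条假设性的示例命令,演示如何查看集群中的节点(其中的节点名 node-1 仅为示意,请替换为实际名称):

```shell
# 列出集群中的所有节点,确认它们是否处于 Ready 状态
kubectl get nodes

# 查看某个节点的详细信息,包括 Kubelet 版本、容器运行时和可分配资源
# 注意:节点名 node-1 仅为示例
kubectl describe node node-1
```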

-

Master 管理集群和 Nodes 用于托管正在运行的应用程序。

+

Master 管理集群,而 节点 用于托管正在运行的应用程序。

-

当您在 Kubernetes 上部署应用程序时,您可以告诉 master 启动应用程序容器。Master 调度容器在集群的 Node 上运行。 Nodes 使用 Master 公开的 Kubernetes API 与 Master 通信。最终用户还可以直接使用 Kubernetes 的 API 与集群交互。

+

当您在 Kubernetes 上部署应用程序时,您可以告诉 master 启动应用程序容器。Master 调度容器在集群的 节点 上运行。 节点 使用 Master 公开的 Kubernetes API 与 Master 通信。最终用户还可以直接使用 Kubernetes 的 API 与集群交互。

-

Kubernetes 集群可以部署在物理机或虚拟机上。要开始使用 Kubernetes 开发,您可以使用 Minikube。Minikube 是一个轻量级的 Kubernetes 实现,在本机创建一台虚拟机,并部署一个只包含一个节点的简单集群。 Minikube 适用于 Linux, Mac OS 和 Windows 系统。Minikube CLI 提供了集群的基本引导操作,包括启动、停止、状态和删除。但是,对于此基础训练,您将使用预先安装了 Minikube 的在线终端。

+

Kubernetes 集群可以部署在物理机或虚拟机上。要开始使用 Kubernetes 开发,您可以使用 Minikube。Minikube 是一个轻量级的 Kubernetes 实现,会在本机创建一台虚拟机,并部署一个只包含一个节点的简单集群。 Minikube 适用于 Linux, Mac OS 和 Windows 系统。Minikube CLI 提供了集群的基本引导操作,包括启动、停止、状态和删除。为了完成此基础训练,您将使用预先安装了 Minikube 的在线终端。
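如果您想在自己的机器上练习(而不是使用在线终端),下面是一个基于上述描述的最小示例,假设您已经安装好了 Minikube 和 kubectl:

```shell
# 启动一个单节点的本地 Kubernetes 集群
minikube start

# 查看本地集群的运行状态
minikube status

# 练习结束后停止并删除本地集群
minikube stop
minikube delete
```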

现在您已经知道 Kubernetes 是什么,让我们使用在线教程,开始我们的第一个集群!

@@ -98,7 +98,7 @@

集群图

From b5cb8321ceb61012b87751bc52290b9a3d25b162 Mon Sep 17 00:00:00 2001 From: Dragons Date: Sat, 24 Jun 2017 10:30:10 +0800 Subject: [PATCH 04/87] kubernetes-basics-deploy-pr --- .../kubernetes-basics/deploy-interactive.html | 2 +- .../kubernetes-basics/deploy-intro.html | 18 +++++++++--------- 2 files changed, 10 insertions(+), 10 deletions(-) mode change 100755 => 100644 cn/docs/tutorials/kubernetes-basics/deploy-interactive.html mode change 100755 => 100644 cn/docs/tutorials/kubernetes-basics/deploy-intro.html diff --git a/cn/docs/tutorials/kubernetes-basics/deploy-interactive.html b/cn/docs/tutorials/kubernetes-basics/deploy-interactive.html old mode 100755 new mode 100644 index 707538a45ed89..bd419e4ef5f03 --- a/cn/docs/tutorials/kubernetes-basics/deploy-interactive.html +++ b/cn/docs/tutorials/kubernetes-basics/deploy-interactive.html @@ -1,5 +1,5 @@ --- -title: 交互式教程 - 部署应用程序 +title: 互动教程 - 部署应用程序 --- diff --git a/cn/docs/tutorials/kubernetes-basics/deploy-intro.html b/cn/docs/tutorials/kubernetes-basics/deploy-intro.html old mode 100755 new mode 100644 index ecd024ca40ac8..b06de18b947a0 --- a/cn/docs/tutorials/kubernetes-basics/deploy-intro.html +++ b/cn/docs/tutorials/kubernetes-basics/deploy-intro.html @@ -27,12 +27,12 @@

目标

Kubernetes 部署

- 一旦运行了 Kubernetes 集群,您可以在其上部署容器化应用程序。为此,您可以创建一个 Kubernetes Deployment。Deployment 负责创建和更新应用程序实例。创建 Deployment 后, Kubernetes master 会将 Deployment 创建的应用程序实例调度到集群中的各个节点。 + 一旦运行了 Kubernetes 集群,您可以在其上部署容器化应用程序。为此,您可以创建一个 Kubernetes 部署。部署负责创建和更新应用程序实例。创建部署 后, Kubernetes master 会将部署创建的应用程序实例调度到集群中的各个节点。

-

创建应用程序实例后,Kubernetes Deployment 控制器会持续监视这些实例。如果托管它的节点不可用或删除,则 Deployment 控制器将替换实例。 这提供了一种解决机器故障或维护的自愈机制。

+

创建应用程序实例后,Kubernetes 部署控制器会持续监视这些实例。如果托管实例的节点不可用或被删除,部署控制器将用新的实例替换它。这提供了一种应对机器故障或维护的自愈机制。

-

在编排前的世界中,通常会使用安装脚本启动应用程序,但是它们并不能从机器故障中恢复。通过创建应用程序实例并使其运行在跨节点的机器之间,Kubernetes Deployments 提供了截然不同的应用管理方法。

+

在编排诞生前的世界中,通常会使用安装脚本启动应用程序,但是它们并不能从机器故障中恢复。通过创建应用程序实例并使其运行在跨节点的机器之间,Kubernetes 部署提供了截然不同的应用管理方法。

@@ -40,13 +40,13 @@

Kubernetes 部署

概要:

    -
  • Deployments
  • +
  • 部署
  • Kubectl

- Deployment 负责创建和更新应用程序的实例 + 部署负责创建和更新应用程序的实例

@@ -69,9 +69,9 @@

在 Kubernetes 上部署您的第一个应用程序<
-

您可以使用 Kubernetes 命令行工具 Kubectl创建和管理 Deployment。Kubectl 使用 Kubernetes API 与集群进行交互。在本模块中,您将学习在 Kubernetes 集群上运行应用程序部署所需的最常见 Kubectl 命令。

+

您可以使用 Kubernetes 命令行工具 Kubectl 创建和管理部署。Kubectl 使用 Kubernetes API 与集群进行交互。在本模块中,您将学习在 Kubernetes 集群上运行应用程序部署所需的最常见的 Kubectl 命令。

-

创建部署时,您需要为应用程序指定容器镜像以及要运行的副本数。您可以稍后通过更新部署来更改该信息;模块 56 是一个基础训练讨论如何扩展和更新您的部署。

+

创建部署时,您需要为应用程序指定容器镜像以及要运行的副本数。您可以稍后通过更新部署来更改这些信息;基础训练的模块 5 和模块 6 将讨论如何对部署进行伸缩和更新。
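下面是一个最小的假设示例,演示如何指定容器镜像并创建 Deployment(这里的名称与 nginx 镜像仅为示意,并非教程实际使用的镜像):

```shell
# 基于指定的容器镜像创建一个 Deployment(镜像与名称仅作示例)
kubectl run my-app --image=nginx:1.12 --port=80

# 确认 Deployment 已经创建
kubectl get deployments
```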

@@ -86,7 +86,7 @@

在 Kubernetes 上部署您的第一个应用程序<
-

对于我们的第一个部署,我们将使用 Node.js 应用程序打包到 Docker 容器。源代码和 Dockerfile 可在 Kubernetes Bootcamp GitHub 存储库 中找到。

+

对于我们的第一个部署,我们将使用 Node.js 应用程序打包到 Docker 容器。源代码和 Dockerfile 可在 Kubernetes Bootcamp 的 GitHub 存储库 中找到。

现在您已经知道部署是什么,让我们再来看看在线教程,并部署我们的第一个应用程序!

@@ -96,7 +96,7 @@

在 Kubernetes 上部署您的第一个应用程序< From 9181bb31e42d068622599df3fc6e94346d07746d Mon Sep 17 00:00:00 2001 From: Dragons Date: Sat, 24 Jun 2017 10:30:48 +0800 Subject: [PATCH 05/87] kubernetes-basics/explore-pr --- .../explore-interactive.html | 2 +- .../kubernetes-basics/explore-intro.html | 30 +++++++++---------- 2 files changed, 16 insertions(+), 16 deletions(-) mode change 100755 => 100644 cn/docs/tutorials/kubernetes-basics/explore-interactive.html mode change 100755 => 100644 cn/docs/tutorials/kubernetes-basics/explore-intro.html diff --git a/cn/docs/tutorials/kubernetes-basics/explore-interactive.html b/cn/docs/tutorials/kubernetes-basics/explore-interactive.html old mode 100755 new mode 100644 index 821e6930c1906..8787d3da58754 --- a/cn/docs/tutorials/kubernetes-basics/explore-interactive.html +++ b/cn/docs/tutorials/kubernetes-basics/explore-interactive.html @@ -1,5 +1,5 @@ --- -title: 交互式教程 - 应用程序探索 +title: 互动教程 - 应用程序探索 --- diff --git a/cn/docs/tutorials/kubernetes-basics/explore-intro.html b/cn/docs/tutorials/kubernetes-basics/explore-intro.html old mode 100755 new mode 100644 index 4e2d1416f6ebd..22e8319506ff4 --- a/cn/docs/tutorials/kubernetes-basics/explore-intro.html +++ b/cn/docs/tutorials/kubernetes-basics/explore-intro.html @@ -1,5 +1,5 @@ --- -title: 查看 Pods 和 Nodes +title: 查看 Pods 和节点 --- @@ -22,7 +22,7 @@

目标

  • 了解 Kubernetes Pods
  • 了解 Kubernetes Nodes
  • -
  • 已部署应用故障排除
  • +
  • 已部署应用的故障排除

@@ -34,9 +34,9 @@

Kubernetes Pods

  • 网络,作为唯一的集群 IP 地址
  • 每个容器如何运行的信息,例如容器镜像版本或要使用的特定端口
  • -

    Pod 模型可以理解为应用程序特定的 "逻辑主机",并且可以包含相对紧密耦合的不同应用程序容器。例如,Pod 可能包含带有 Node.js 应用程序的容器以及用于提供要由 Node.js Web 服务器发布数据的不同容器。Pod 中的容器共享 IP 地址和端口空间,始终位于同一位置并且统一调度,并在相同的节点上运行,共享上下文环境。

    +

Pod 模型可以理解为应用程序特定的 "逻辑主机",并且可以包含相对紧密耦合的不同应用程序容器。例如,Pod 可能包含带有 Node.js 应用程序的容器,以及另一个为 Node.js Web 服务器提供待发布数据的容器。Pod 中的容器共享 IP 地址和端口空间,始终位于同一位置并且统一调度,并在相同的节点上运行,共享上下文环境。

    -

    Pods 是 Kubernetes 平台上的原子单元。当我们在 Kubernetes 上创建一个部署时,该部署将在其中创建包含容器的 Pod (而不是直接创建容器)。每个 Pod 绑定到它被调度的节点,并且保持在那里,直到终止 (根据重启策略) 或删除。在节点故障的情况下,在集群中的其他可用节点上调度相同的 Pod。

    +

    Pods 是 Kubernetes 平台上原子级别的单元。当我们在 Kubernetes 上创建一个部署时,该部署将在其中创建包含容器的 Pod (而不是直接创建容器)。每个 Pod 都绑定到它被调度的节点,并且始终在那里,直到终止 (根据重启策略) 或删除。在节点故障的情况下,在集群中的其他可用节点上调度相同的 Pod。

    @@ -44,13 +44,13 @@

    Kubernetes Pods

    概要:

    • Pods
    • -
    • Nodes
    • +
    • 节点
    • Kubectl 主要命令

    - Pod 是一组或多个应用程序容器 (例如 Docker 或 rkt),包含共享存储 (卷),IP 地址以及有关如何运行它们的信息。 + Pod 是一组一个或多个应用程序容器 (例如 Docker 或 rkt),包含共享存储 (卷),IP 地址以及有关如何运行它们的信息。

    @@ -72,19 +72,19 @@

    Pods 概览

    -

    Nodes

    -

    Pod 总是运行在 Node上。Node 是 Kubernetes 的工作机器,可以是一个虚拟机或物理,这取决于在集群的安装情况。每个 Node 由 Master 管理。一个 Node 上可以有多个 Pod, Kubernetes master 会自动处理调度集群各个 Node 上的 Pod。 Master 在自动调度时,会考虑每个 Node 上的可用资源。

    +

    节点

    +

Pod 总是运行在节点上。节点是 Kubernetes 的工作机器,可以是虚拟机或物理机,这取决于集群的安装情况。每个节点由 Master 管理。一个节点上可以有多个 Pod,Kubernetes master 会自动处理调度集群各个节点上的 Pod。Master 在自动调度时,会考虑每个节点上的可用资源。

    -

    每个 Kubernetes Node 节点至少运行以下组件:

    +

    每个 Kubernetes 节点至少运行以下组件:

      -
    • Kubelet 是负责 Kubernetes Master 和 所有 Node 节点之间通信的进程,它管理机器上运行的 Pod 和容器。
    • +
    • Kubelet 是负责 Kubernetes Master 和 所有节点之间通信的进程,它管理机器上运行的 Pod 和容器。
    • 容器运行时(例如 Docker, rkt) 负责从镜像仓库中拉取容器镜像,解包容器并运行应用程序。
    -

    如果一些容器强耦合并且需要共享资源(例如 磁盘),那么这些容器应该放到单个 Pod 中一起调度。

    +

如果一些容器紧密耦合并且需要共享资源(例如磁盘),那么这些容器应该放到同一个 Pod 中一起调度。

    @@ -93,7 +93,7 @@

    Nodes

    -

    Node 概述

    +

    节点概述

    @@ -107,7 +107,7 @@

    Node 概述

    使用 kubectl 进行故障排除

    -

    在模块 2中,您使用了 Kubectl 命令行接口。您将在模块 3 中继续使用它来获取有关已部署应用程序及其环境信息。最常见的操作可以通过以下 kubectl 命令完成:

    +

    在模块 2中,您使用了 Kubectl 命令行接口。您将在模块 3 中继续使用它来获取有关已部署应用程序及其环境的信息。最常见的操作可以通过以下 kubectl 命令完成:

    • kubectl get - 列出可用资源
    • kubectl describe - 显示有关资源的详细信息
    • @@ -122,7 +122,7 @@
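除了上面列出的命令之外,kubectl logs(查看容器日志)和 kubectl exec(在容器内执行命令)也是常用的排错命令。下面把这些命令串联成一个假设性的排错流程(Pod 名称通过查询获得,仅为示意):

```shell
# 列出当前命名空间中的所有 Pod
kubectl get pods

# 取得第一个 Pod 的名称,存入环境变量(仅作示例)
export POD_NAME=$(kubectl get pods -o jsonpath='{.items[0].metadata.name}')

# 查看该 Pod 的详细信息与容器日志
kubectl describe pod $POD_NAME
kubectl logs $POD_NAME

# 在 Pod 的容器内执行命令(例如查看环境变量)
kubectl exec $POD_NAME -- env
```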

      使用 kubectl 进行故障排除

    -

    Node 是 Kubernetes 中的工作机器,可能是物理机或虚拟机,具体取决于集群的安装配置。多个 Pod 可以在一个 Node 上运行。

    +

    节点是 Kubernetes 中的工作机器,可能是物理机或虚拟机,具体取决于集群的安装配置。多个 Pod 可以在一个节点上运行。

    @@ -130,7 +130,7 @@

    使用 kubectl 进行故障排除

    From 0de03a50a8986e6ee2581658e7ab800d2abf26cb Mon Sep 17 00:00:00 2001 From: Dragons Date: Sat, 24 Jun 2017 10:31:19 +0800 Subject: [PATCH 06/87] kubernetes-basics-expose-pr --- .../kubernetes-basics/expose-interactive.html | 2 +- .../kubernetes-basics/expose-intro.html | 89 +++++++------------ 2 files changed, 32 insertions(+), 59 deletions(-) mode change 100755 => 100644 cn/docs/tutorials/kubernetes-basics/expose-interactive.html mode change 100755 => 100644 cn/docs/tutorials/kubernetes-basics/expose-intro.html diff --git a/cn/docs/tutorials/kubernetes-basics/expose-interactive.html b/cn/docs/tutorials/kubernetes-basics/expose-interactive.html old mode 100755 new mode 100644 index 4cae319edf9bf..e41ebcfc37ba1 --- a/cn/docs/tutorials/kubernetes-basics/expose-interactive.html +++ b/cn/docs/tutorials/kubernetes-basics/expose-interactive.html @@ -1,5 +1,5 @@ --- -title: 交互性教程 - 应用外部可见 +title: 互动教程 - 应用外部可见 --- diff --git a/cn/docs/tutorials/kubernetes-basics/expose-intro.html b/cn/docs/tutorials/kubernetes-basics/expose-intro.html old mode 100755 new mode 100644 index b70e1d2de1d8b..1e133f95fd20b --- a/cn/docs/tutorials/kubernetes-basics/expose-intro.html +++ b/cn/docs/tutorials/kubernetes-basics/expose-intro.html @@ -1,5 +1,5 @@ --- -title: 使用服务让您的应用程序外部可见 +title: 使用服务公开您的应用程序 --- @@ -15,40 +15,41 @@
    - -
    -

    目标

    +
    +

    目标

      -
    • 了解 Kubernetes 服务
    • -
    • 了解 Kubernetes 标签
    • -
    • 应用程序在 Kubernetes 外部可见
    • +
    • 了解 Kubernetes 中的服务
    • +
    • 了解标签和标签选择器对象如何与服务相关联
    • +
    • 使用服务在 Kubernetes 群集外公开应用程序
    -

    Kubernetes Services

    +

    Kubernetes 服务概述

    -

    虽然每个 Pod 在所在的集群中拥有自己独立的 IP 地址,但这些 IP 地址不会暴露在 Kubernetes 集群外部。考虑到 Pod 可能随时会被终止、删除或被其他 Pod 替换,我们需要一种方法让其他 Pod 和应用程序自动发现彼此。Kubernetes 通过 Service 对 Pods 进行分组来解决此问题。一个 Kubernetes Service 是一个抽象层,它定义了一组逻辑的 Pods,并让这些 Pods 对外部流量可见,可以被负载均衡和服务发现。

    +

Kubernetes 的 Pod 终有一死,Pod 实际上有自己的生命周期。当一个工作节点死机时,节点上运行的 Pod 也将丢失。此时,ReplicationController 会通过创建新的 Pod 动态地将集群恢复到所需的状态,以保持您的应用程序运行。再看另一个例子:一个具有 3 个副本的图像处理后端,这些副本是可互换的;前端系统不应该关心后端副本,即使某个 Pod 丢失并被重建,前端也无须感知。也就是说,Kubernetes 集群中的每个 Pod 都有一个唯一的 IP 地址,即使是同一节点上的 Pod 也是如此,因此需要一种能自动协调这些 Pod 变化的方法,以便您的应用程序能够继续运行。这就引入了服务(Service)。Kubernetes 中的服务是一个抽象对象,它定义了一组逻辑的 Pod 和一个访问它们的策略。服务使互相依赖的 Pod 之间的耦合松动。像所有 Kubernetes 对象一样,服务由 YAML(首选)或 JSON 定义。服务所针对的一组 Pod 通常由标签选择器(Label Selector)确定(参见下文,了解为什么您可能希望规范中不包含选择器)。

    -

    此抽象允许我们将 Pods 暴露给集群外部的流量访问。Services 具有自己的唯一集群专用 IP 地址,并显示一个端口以接收流量。如果选择在集群外公开 Service,则有如下选项:

    +

虽然每个 Pod 都有一个唯一的 IP 地址,但如果没有服务,这些 IP 不会被公开到集群之外。服务允许您的应用程序接收流量。通过在 ServiceSpec 中指定 type,可以用不同的方式公开服务:

      -
    • LoadBalancer - 提供公有 IP 地址 (在 GCP 或 AWS 上运行 Kubernetes 通常使用此方式)
    • -
    • NodePort - 使用 NAT 在集群的每个 Node 节点的同一端口让服务可见。(所有 Kubernetes 集群和 Minikube 中都可用此方式)
    • +
    • ClusterIP(默认) - 在集群中的内部IP上公开服务。此类型使服务只能从集群中访问。
    • +
  • NodePort —— 使用 NAT 在集群中每个选定节点的同一端口上公开服务,使用 <NodeIP>:<NodePort> 即可从集群外部访问服务。它是 ClusterIP 的超集。
    • +
  • LoadBalancer —— 在当前云环境中创建一个外部负载均衡器(如果支持),并为服务分配一个固定的外部 IP。它是 NodePort 的超集。
    • +
  • ExternalName —— 通过返回带有指定名称的 CNAME 记录来公开服务(名称由规范中的 externalName 指定),不使用任何代理。这种类型需要 v1.7 或更高版本的 kube-dns。
    +

    有关不同类型服务的详细信息,请参见 使用源IP 教程。另请参阅 使用服务连接应用程序.
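例如,下面的假设性命令将一个名为 kubernetes-bootcamp 的 Deployment 以 NodePort 类型公开出来(名称与端口仅为示意):

```shell
# 以 NodePort 类型创建一个服务,公开 Deployment 的 8080 端口
kubectl expose deployment/kubernetes-bootcamp --type=NodePort --port=8080

# 查看服务列表以及分配到的节点端口
kubectl get services
kubectl describe services/kubernetes-bootcamp
```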

    +

另外,请注意,服务还有一些用例涉及在规范中不定义选择器。不使用选择器创建的服务也不会创建相应的 Endpoints 对象,这允许用户手动将服务映射到特定的端点。不定义选择器的另一种可能情形是您严格使用了 type: ExternalName。

    -

    摘要:

    +

    摘要

      -
    • Pod 流量外部可见
    • -
    • Pods 流量负载均衡
    • +
  • 向外部流量公开 Pod
    • +
    • 跨多个 Pods 进行流量负载均衡
    • 使用标签
    -

    - Kubernetes Service 是一个抽象层,它定义了一组逻辑的 Pods,并为这些 Pods 启用了外部流量访问、负载均衡和服务发现。 -

    +

Kubernetes 服务是一个抽象层,它定义了一组逻辑的 Pod,并为这些 Pod 启用了外部流量公开、负载均衡和服务发现。

    @@ -56,57 +57,35 @@

    摘要:

    -

    Services 概述

    -
    -
    - -
    -
    -

    +

    服务和标签

    -
    - -

    一个 Service 提供了一组 Pods 的流量负载均衡。通过创建服务以对来自特定部署的所有 Pods 进行分组时,这是有用的(当我们有多个实例运行时,我们的应用程序将在下一个模块中使用这一点)。

    - -

    Services 还负责集群内部的服务发现 (包含在 访问服务中)。 例如,这将允许前端服务 (如 web 服务器) 从后端服务 (如 数据库) 接收流量,而不必考虑 Pod。

    - -

    Services 使用标签选择器匹配一组 Pods,标签选择器支持在标签上进行原始逻辑分组的能力。

    - -
    -
    -
    -

    您可以通过添加 --expose 作为 kubectl 运行命令的参数,在创建 Deployment 的同时创建 Service。

    -
    +

    -
    -
    -

    Labels 是附加到对象的 键/值对,例如 Pods,您可以将它们视为社交媒体的标签符号。它们用于以对用户有意义的方式组织相关对象,如:

    +

服务可以跨一组 Pod 路由流量。服务是一个抽象层,它允许 Kubernetes 中的 Pod 死亡和复制,而不会影响应用程序。相关 Pod 之间的发现和路由(如应用程序中的前端和后端组件)由 Kubernetes 服务处理。

    +

服务使用标签和选择器来匹配一组 Pod,这是一种允许对 Kubernetes 中的对象进行逻辑分组操作的原语。标签是附加到对象上的键/值对,可以以多种方式使用,例如:

      -
    • 生产环境 (生产、测试、开发)
    • -
    • 应用程序版本 (beta、v1.3)
    • -
    • 服务类型 (前端、后端、数据库)
    • +
    • 指定用于开发、测试和生产的对象
    • +
    • 嵌入版本标签
    • +
    • 使用标签分类对象
    +
    -

    Labels 是附加到对象的键/值对。

    +

您可以在使用 kubectl 创建 Deployment 的同时,通过添加 --expose 参数来创建一个服务。
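下面是一个演示该参数的假设示例(镜像与端口仅为示意):

```shell
# 创建 Deployment 的同时创建一个同名的服务,公开 80 端口
kubectl run nginx --image=nginx --port=80 --expose
```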

    +
    -
    -
    -

    Labels

    -
    -
    @@ -116,22 +95,16 @@

    Labels


    - -

    Labels 可以在创建时或以后附加到对象,并可以随时修改。 - 在使用 kubectl run 命令新建 Pods/Deployment 时,会设置一些默认的 Labels/Label。标签和标签选择器之间的链接定义了 Deployment 及其创建 Pod 之间的关系。

    - -

    现在,让我们在 Service 的帮助下公开我们的应用程序,并应用一些新的标签。

    +

标签可以在创建时或之后附加到对象上,并支持随时修改。现在,让我们开始使用服务公开我们的应用程序,并应用一些标签吧。
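下面给出一组假设性的示例命令,演示如何为 Pod 添加标签并按标签查询(标签的键值仅为示意,POD_NAME 需替换为实际的 Pod 名称):

```shell
# 为指定的 Pod 添加一个新标签(app=v1 仅为示例)
kubectl label pod $POD_NAME app=v1

# 确认标签已经生效
kubectl describe pod $POD_NAME

# 使用 -l 参数按标签选择器查询对象
kubectl get pods -l app=v1
kubectl get services -l app=v1
```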


    -
    -
    From ce93d29e9dbb3be107f98a1a13de44a013b77c82 Mon Sep 17 00:00:00 2001 From: Dragons Date: Sat, 24 Jun 2017 10:31:52 +0800 Subject: [PATCH 07/87] kubernetes-basics-scale-pr --- .../kubernetes-basics/scale-interactive.html | 2 +- .../tutorials/kubernetes-basics/scale-intro.html | 13 +++++++------ 2 files changed, 8 insertions(+), 7 deletions(-) mode change 100755 => 100644 cn/docs/tutorials/kubernetes-basics/scale-interactive.html mode change 100755 => 100644 cn/docs/tutorials/kubernetes-basics/scale-intro.html diff --git a/cn/docs/tutorials/kubernetes-basics/scale-interactive.html b/cn/docs/tutorials/kubernetes-basics/scale-interactive.html old mode 100755 new mode 100644 index dd03ef220d0c8..5dbd15486c881 --- a/cn/docs/tutorials/kubernetes-basics/scale-interactive.html +++ b/cn/docs/tutorials/kubernetes-basics/scale-interactive.html @@ -1,5 +1,5 @@ --- -title: 交互式教程 - 扩展您的应用程序 +title: 互动教程 - 扩展您的应用程序 --- diff --git a/cn/docs/tutorials/kubernetes-basics/scale-intro.html b/cn/docs/tutorials/kubernetes-basics/scale-intro.html old mode 100755 new mode 100644 index 28ec8ec033707..a4041dae02104 --- a/cn/docs/tutorials/kubernetes-basics/scale-intro.html +++ b/cn/docs/tutorials/kubernetes-basics/scale-intro.html @@ -19,14 +19,15 @@

    目标

      -
    • 使用 kubectl 缩放应用程序
    • +
    • 使用 kubectl 伸缩应用程序
    -

    缩放应用程序

    +

    伸缩应用程序

    -

    在之前的模块中,我们创建了一个 Deployment,然后通过 Service让应用程序外部可见。Deployment 仅为我们的应用程序创建了一个 Pod。 当流量增加时,我们将需要扩展应用程序以跟上用户需求。

    +

在之前的模块中,我们创建了一个 Deployment,然后通过 Service 让应用程序外部可见。该 Deployment 仅为我们的应用程序创建了一个 Pod。当流量增加时,我们将需要扩展应用程序以跟上用户需求。

    Scaling 是通过更改 Deployment 中的副本数量实现的。

    @@ -35,7 +36,7 @@

    缩放应用程序

    摘要:

      -
    • Deployment 的缩放
    • +
    • Deployment 的伸缩
    @@ -101,14 +102,14 @@

    Scaling 概述

    -

    一旦您有应用程序的多个实例,您将能够滚动更新而不会停止服务。我们将在下一个模块中介绍。现在,我们去在线终端扩展我们的应用程序。

    +

一旦您有应用程序的多个实例,您将能够滚动更新,而不会停止服务——我们将在下一个模块中介绍这些。现在,我们去在线终端对我们的应用程序进行伸缩。


    From e41d03e957b39f427b7e8e54d4ffc04021ace2dd Mon Sep 17 00:00:00 2001 From: Dragons Date: Sat, 24 Jun 2017 10:32:44 +0800 Subject: [PATCH 08/87] kuberntes-basics-update-pr --- .../kubernetes-basics/update-interactive.html | 2 +- cn/docs/tutorials/kubernetes-basics/update-intro.html | 10 +++++----- 2 files changed, 6 insertions(+), 6 deletions(-) mode change 100755 => 100644 cn/docs/tutorials/kubernetes-basics/update-interactive.html mode change 100755 => 100644 cn/docs/tutorials/kubernetes-basics/update-intro.html diff --git a/cn/docs/tutorials/kubernetes-basics/update-interactive.html b/cn/docs/tutorials/kubernetes-basics/update-interactive.html old mode 100755 new mode 100644 index c38ba6c6f5985..61b43318a0e6e --- a/cn/docs/tutorials/kubernetes-basics/update-interactive.html +++ b/cn/docs/tutorials/kubernetes-basics/update-interactive.html @@ -1,5 +1,5 @@ --- -title: 交互式教程 - 更新您的应用程序 +title: 互动教程 - 更新您的应用程序 --- diff --git a/cn/docs/tutorials/kubernetes-basics/update-intro.html b/cn/docs/tutorials/kubernetes-basics/update-intro.html old mode 100755 new mode 100644 index 18289e04df8de..ffb4950d5e012 --- a/cn/docs/tutorials/kubernetes-basics/update-intro.html +++ b/cn/docs/tutorials/kubernetes-basics/update-intro.html @@ -26,9 +26,9 @@

    目标

    更新应用程序

    -

    用户期望应用程序始终可用,并且开发人员有望每天部署新版本。这就是 Kubernetes 的滚动更新。 Rolling updates 允许通过使用新的 Pods 实例逐个更新来实现零停机的更新部署。新的 Pods 会被调度到可用资源的 Node 节点上。

    +

    用户期望应用程序始终可用,并且希望开发人员每天部署新版本。在 Kubernetes 上这通过滚动更新达成。 Rolling updates 允许通过使用新的 Pods 实例逐个更新来实现零停机的部署更新。新的 Pods 会被调度到可用资源的 Node 节点上。

    -

    在上一个模块中,我们将应用程序扩展为运行多个实例。这是执行更新但不影响应用可用性的要求。默认情况下,更新期间最大数量的 Pods 可能不可用,此时创建和更新 Pod 的最大数量是一。 这两个选项可以配置为数字或百分比(Pods)。 +

在上一个模块中,我们将应用程序扩展为运行多个实例,这也是在不影响应用可用性的前提下执行更新所需的条件。默认情况下,更新期间不可用的 Pod 的最大数量与可以创建的新 Pod 的最大数量都是一,这两个选项可以配置为具体数量或百分比(相对于 Pod 总数)。在 Kubernetes 中,更新是版本化的,任何部署更新都可以恢复到以前的(稳定)版本。
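下面是一组假设性的示例命令,演示一次滚动更新以及回滚(Deployment 名称与镜像标签仅为示意):

```shell
# 把 Deployment 使用的镜像更新到新版本,触发滚动更新
kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2

# 观察滚动更新的进度,确认更新已经完成
kubectl rollout status deployments/kubernetes-bootcamp

# 如果新版本有问题,回滚到上一个(稳定)版本
kubectl rollout undo deployments/kubernetes-bootcamp
```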

    @@ -94,13 +94,13 @@

    滚动更新概述

    -

    与应用程序缩放类似,如果 Deployment 外部可见,则 Service 将在更新期间将流量负载均衡到可用的 Pod。可用的 Pod 指的是应用程序用户可用的实例。

    +

    与应用程序伸缩类似,如果 Deployment 外部可见,则服务将在更新期间将流量负载均衡到可用的 Pod。可用的 Pod 指的是应用程序用户可用的实例。

    滚动更新允许以下操作:

    • 将应用程序从一个环境升级到另一个环境 (通过容器镜像更新)
    • 回滚到以前的版本
    • -
    • 持续集成和持续交付,实现应用程序零故障
    • +
    • 持续集成和持续交付,实现应用程序零停机
    @@ -116,7 +116,7 @@

    滚动更新概述

    -

    在下面的交互式教程中,我们将把应用程序更新到一个新版本,并执行回滚。

    +

    在下面的互动教程中,我们将把应用程序更新到一个新版本,并执行回滚。


    From a5aea25a2f0abcbb2e081b9781752a41dde80c1a Mon Sep 17 00:00:00 2001 From: Dragons Date: Mon, 26 Jun 2017 16:59:52 +0800 Subject: [PATCH 09/87] tutorials-object-management-kubectl-object-management-pr --- .../object-management.md | 157 ++++++++++++++++++ 1 file changed, 157 insertions(+) create mode 100644 cn/docs/tutorials/object-management-kubectl/object-management.md diff --git a/cn/docs/tutorials/object-management-kubectl/object-management.md b/cn/docs/tutorials/object-management-kubectl/object-management.md new file mode 100644 index 0000000000000..a2013ec41860c --- /dev/null +++ b/cn/docs/tutorials/object-management-kubectl/object-management.md @@ -0,0 +1,157 @@ +--- +title: Kubernetes 对象管理 +redirect_from: +- "/docs/concepts/tools/kubectl/object-management-overview/" +- "/docs/concepts/tools/kubectl/object-management-overview.html" +- "/docs/user-guide/working-with-resources/" +- "/docs/user-guide/working-with-resources.html" +--- + +{% capture overview %} +`kubectl` 命令行工具支持 Kubernetes 对象几种不同的创建和管理方法。本文档概述了不同的方法. +{% endcapture %} + +{% capture body %} + +## 管理技巧 + +**警告:** Kubernetes 对象应该只使用一种技术进行管理。混合使用不同的技术,会导致相同对象出现未定义的行为。 + +| 管理技术 | 操作 |推荐环境 | 支持撰写 | 学习曲线 | +|----------------------------------|----------------------|------------------------|--------------------|----------------| +| 命令式的方式 | 活动对象 | 开发项目 | 1+ | 最低 | +| 命令式对象配置 | 单文件 | 生产项目 | 1 | 中等 | +| 声明式对象配置 | 文件目录 | 生产项目 | 1+ | 最高 | + +## 命令式的方式 + +当使用命令式的命令时,用户直接对集群中的活动对象进行操作。用户提供 `kubectl` 命令的参数或标记进行操作。 + +这是在集群中启动或运行一次性任务的最简单的方法。因为这种技术直接在活动对象上运行,所以它没有提供以前配置的历史记录。 + +### 例子 + +通过创建 Deployment 对象来运行 nginx 容器的实例: + +```sh +kubectl run nginx --image nginx +``` + +使用不同的语法做同样的事情: + +```sh +kubectl create deployment nginx --image nginx +``` + +### 权衡 + +与对象配置相比的优点: + + - 命令简单易学,易于记忆。 + - 命令只需要一个步骤即可对群集进行更改。 + +与对象配置相比的缺点: + + - 命令不与变更审核流程整合。 + - 命令不提供与更改相关联的审计跟踪。 + - 除了活动对象之外,命令不提供记录来源。 + - 命令不提供用于创建新对象的模板。 + +## 命令式对象配置 + +在命令式对象配置中,`kubectl` 命令指定操作(创建,替换等),可选标志和至少一个文件名称。指定的文件必须包含对象的完整定义以 YAML 或 JSON 格式。 + +请参阅[资源参考](https://kubernetes.io/docs/resources-reference/v1.6/) +查看有关对象定义的更多细节。 + +**警告:** 命令式 `replace` 命令用新提供的命令替换现有资源规格,将对配置文件中缺少的对象的所有更改都丢弃。这种方法不应更新与配置文件无关的资源类型。例如,`LoadBalancer` 类型的服务使其 `externalIPs` 字段与集群的配置无关。 + +### 例子 + +创建对象定义配置文件: + +```sh +kubectl create -f nginx.yaml +``` + +删除两个配置文件中定义的对象: + +```sh +kubectl delete -f nginx.yaml -f redis.yaml +``` + +通过覆写实时配置更新配置文件中定义的对象: + +```sh +kubectl replace -f nginx.yaml +``` + +### 权衡 + +与命令式的命令相比的优点: + + - 对象配置可以存储在源码控制系统中,如Git。 + - 对象配置可以与进程集成,例如在推送和审计跟踪之前查看更改。 + - 对象配置提供了一个用于创建新对象的模板。 + +与命令式的命令相比的缺点: + + - 对象配置需要对对象模式有基本的了解。 + - 对象配置需要编写 YAML 文件的附加步骤。 + +与声明式对象配置相比的优势: + + - 命令对象配置行为更简单易懂。 + - 至于 Kubernetes 1.5 版本,命令式对象配置更为成熟。 + +与声明式对象配置相比的缺点: + + - 命令对象配置最适合于文件,而不是目录。 + - 活动对象的更新必须反映在配置文件中,否则在下次更替时将丢失。 + +## 声明式对象配置 + +当使用声明式对象配置时,用户对本地存储的对象配置文件进行操作,但是用户没有定义要对文件执行的操作。通过 `kubectl` 自动检测每个对象进行创建、更新和删除操作。这样可以在目录层级上工作,因为不同的对象可能需要不同的操作。 + +**注意:** 声明式对象配置保留由其他对象进行的更改,即使更改未合并到对象配置文件中。这可以通过使用 `patch` API 操作来写入观察到的差异,而不是使用`replace` API 操作来替换整个对象的配置。 + +### 例子 + +处理`configs` 目录中的所有对象配置文件,创建或修补(patch)活动对象: + +```sh +kubectl apply -f configs/ +``` + +递归处理目录: + +```sh +kubectl apply -R -f configs/ +``` + +### 权衡 + +与命令式对象配置相比的优点: + + - 直接对活动对象进行的更改将被保留,即使它们未被并入到配置文件中。 + - 声明式对象配置更好地支持目录操作,并自动检测每个对象的操作类型 (创建、修补,删除)。 + +与命令式对象配置相比的缺点: + +- 声明式对象配置在意外情况下难以调试和了解结果。 +- 使用差异的部分更新会创建复杂的合并和补丁操作。 + + {% endcapture %} + + {% capture whatsnext %} + - [使用命令式的命令管理 Kubernetes 对象](/docs/tutorials/object-management-kubectl/imperative-object-management-command/) + - [使用对象配置管理 Kubernetes 
对象(必要)](/docs/tutorials/object-management-kubectl/imperative-object-management-configuration/) + - [使用对象配置(声明式)管理 Kubernetes 对象](/docs/tutorials/object-management-kubectl/declarative-object-management-configuration/) + - [Kubectl 命令参考](/docs/user-guide/kubectl/v1.6/) + - [Kubernetes 对象模式参考](/docs/resources-reference/v1.6/) + + {% comment %} + {% endcomment %} + {% endcapture %} + + {% include templates/concept.md %} From 5fa66df44b8e1ca58884fbe2e5577e181f7d15b2 Mon Sep 17 00:00:00 2001 From: Dragons Date: Mon, 26 Jun 2017 17:00:46 +0800 Subject: [PATCH 10/87] tutorials-object-management-kubectl-imperative-object-management-command-pr --- .../imperative-object-management-command.md | 145 ++++++++++++++++++ 1 file changed, 145 insertions(+) create mode 100644 cn/docs/tutorials/object-management-kubectl/imperative-object-management-command.md diff --git a/cn/docs/tutorials/object-management-kubectl/imperative-object-management-command.md b/cn/docs/tutorials/object-management-kubectl/imperative-object-management-command.md new file mode 100644 index 0000000000000..5ddde0e5dcfc7 --- /dev/null +++ b/cn/docs/tutorials/object-management-kubectl/imperative-object-management-command.md @@ -0,0 +1,145 @@ +--- +title: 使用命令式的方式管理 Kubernetes 对象 +redirect_from: +- "/docs/concepts/tools/kubectl/object-management-using-imperative-commands/" +- "/docs/concepts/tools/kubectl/object-management-using-imperative-commands.html" +--- + +{% capture overview %} +直接使用内置的 `kubectl` 命令行工具,以命令式方式可以快速创建,更新和删除 Kubernetes 对象。本文档介绍了这些命令是如何组织的,以及如何使用它们来管理活动对象。 +{% endcapture %} + +{% capture body %} + +## 权衡 + +`kubectl` 工具支持三种对象的管理: + +* 命令式的方式 +* 命令式的对象配置 +* 声明式的对象配置 + +参见[Kubernetes对象管理](/docs/concepts/tools/kubectl/object-management-overview/) +讨论各种对象管理的优缺点. + +## 如何创建对象 + +`kubectl` 工具支持用于创建一些最常用的对象类型的动词驱动命令,这些命令被命名为对于不熟悉的用户也是一目了然。 + +- `run`: 创建一个新的 Deployment 对象以在一个或多个 Pod 中运行 Containers。 +- `expose`: 创建一个新的 Service 对象用于负载均衡 Pods 上的的网络流量。 +- `autoscale`: 创建一个新的 Autoscaler 对象,即自动水平扩展控制器,提供 Deployment 自动水平伸缩支持。 + +`kubectl` 工具也支持由对象类型驱动的创建命令。 这些命令支持更多的对象类型,并且对其意图更为明确,但要求用户知道他们打算创建的对象的类型。 + + - `create [] ` + +某些对象类型具有您可以在“create"命令中指定的子类型. +例如,Service对象有几种子类型,包括ClusterIP, +LoadBalancer和NodePort. 以下是创建一个服务的示例 +子类型NodePort: + +一些对象类型允许你在 `create` 命令中指定子命令。例如,Service 对象拥有几个子命令,包括 ClusterIP、LoadBalancer 和 NodePort。以下是使用子命令 NodePort 创建服务的示例: + + +```shell +kubectl create service nodeport +``` + +在前面的例子中,调用 `create service nodeport`命令是 `create service`命令的子命令. + +您可以使用 `-h` 标志来查找子命令支持的参数和标志: + +```shell +kubectl create service nodeport -h +``` + +## 如何更新对象 + +`kubectl` 命令支持一些常见更新操作的动词驱动命令。这样命名可以让不熟悉 Kubernetes 对象的用户,在不知道必须设置的特定字段的情况下也可以执行更新操作: + + - `scale`: 通过更新控制器的副本数量,水平扩展控制器以添加或删除 Pod。 + - `annotate`: 从对象添加或删除注释。 + - `label`: 为对象添加或删除标签。 + +`kubectl`命令还支持由对象的一个​​切面驱动的更新命令.设置此切面可能会为不同的对象类型设置不同的字段: + + - `set` : 设置对象的一个​​切面. + +**注**: 在 Kubernetes 版本 1.5 中,并不是每个动词驱动的命令都有一个相关的切面驱动的命令. + +`kubectl` 工具支持直接更新活动对象的其他方法,然而,它们需要更好的了解 Kubernetes 对象模式。 + +- `edit`: 通过在编辑器中打开其配置,直接编辑活动对象的原始配置。 +- `patch`: 通过使用补丁字符串直接修改活动对象的特定字段。 + +有关补丁字符串的更多详细信息,请参阅补丁部分 +[API 公约](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#patch-operations). + +## 如何删除对象 + +您可以使用 `delete` 命令从集群中删除一个对象: + + - `delete /` + + **注意**: 您可以对命令式命令和命令式对象配置都使用 `kubectl delete` 方法。两者的差异在于传递的命令参数不同。要将 + `kubectl delete` 作为命令式命令使用,将要删除的对象作为参数传递。以下是传递名为 nginx 的 Deployment 对象的示例: + +```shell +kubectl delete deployment/nginx +``` + +## 如何查看对象 + +{% comment %} +TODO(pwittrock): 实现时取消注释. 
+ +您可以使用 `kubectl view` 打印指定对象的字段。 + +- `view`: 打印对象的特定字段的值。 + +{% endcomment %} + + + +有几个命令用于打印有关对象的信息: + +- `get`: 打印有关匹配对象的基本信息。使用 `get -h` 来查看选项列表。 +- `describe`: 打印有关匹配对象的聚合详细信息。 +- `logs`: 打印 Pod 运行容器的 stdout 和 stderr 信息。 + +## 使用 `set` 命令在创建之前修改对象 + +有一些对象字段没有可以使用的标志,在 `create` 命令中。在某些情况下,您可以使用组合 `set` 和 `create` 为对象之前的字段指定一个值创建。这是通过将 `create` 命令的输出管道连接到 `set` 命令,然后回到 `create` 命令。以下是一个例子: + +```sh +kubectl create service clusterip -o yaml --dry-run | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f - +``` + +1. 使用 `create service -o yaml --dry-run` 创建服务配置,并将其作为 YAML 打印到 stdout,而不是将其发送到 Kubernetes API 服务器。 +1. 使用 `set --local -f - -o yaml` 从 stdin 读取配置,并将更新后的配置作为 YAML 写入 stdout。 +1. 使用 `kubectl create -f -` 从 stdin 提供的配置创建对象。 + +## 使用 `--edit` 在创建之前修改对象 + +您可以使用 `kubectl create --edit` 命令在对象创建之前,对对象进行任意更改。以下是一个例子: + +```sh +kubectl create service clusterip my-svc -o yaml --dry-run > /tmp/srv.yaml +kubectl create --edit -f /tmp/srv.yaml +``` + +1. 使用`create service` 创建服务的配置并将其保存到 `/tmp/srv.yaml`。 +1. 使用`create --edit` 在创建对象之前打开配置文件进行编辑。 + + +{% endcapture %} + +{% capture whatsnext %} + - [使用对象配置管理 Kubernetes 对象(必要)](/docs/tutorials/object-management-kubectl/imperative-object-management-configuration/) + - [使用对象配置(声明式)管理 Kubernetes 对象](/docs/tutorials/object-management-kubectl/declarative-object-management-configuration/) + - [Kubectl 命令参考](/docs/user-guide/kubectl/v1.6/) + - [Kubernetes 对象模式参考](/docs/resources-reference/v1.6/) + {% endcapture %} + + {% include templates/concept.md %} From fb7650ba298d9e5e2d79be9dde5b82ee7c66d51e Mon Sep 17 00:00:00 2001 From: Dragons Date: Thu, 29 Jun 2017 17:01:22 +0800 Subject: [PATCH 11/87] xingzhou-fix-pr --- cn/docs/concepts/overview/what-is-kubernetes.md | 6 +++--- cn/docs/tutorials/kubernetes-basics/cluster-intro.html | 6 +++--- cn/docs/tutorials/kubernetes-basics/deploy-intro.html | 2 +- cn/docs/tutorials/kubernetes-basics/explore-intro.html | 2 +- cn/docs/tutorials/kubernetes-basics/expose-intro.html | 4 ++-- cn/docs/tutorials/kubernetes-basics/scale-intro.html | 2 +- cn/docs/tutorials/kubernetes-basics/update-intro.html | 2 +- .../object-management-kubectl/object-management.md | 4 ++-- 8 files changed, 14 insertions(+), 14 deletions(-) diff --git a/cn/docs/concepts/overview/what-is-kubernetes.md b/cn/docs/concepts/overview/what-is-kubernetes.md index 52d141a409aba..167a06697c451 100644 --- a/cn/docs/concepts/overview/what-is-kubernetes.md +++ b/cn/docs/concepts/overview/what-is-kubernetes.md @@ -11,8 +11,8 @@ Kubernetes 是一个跨主机集群的 [开源的容器调度平台,它可以 - 快速、可预测地部署您的应用程序 - 拥有即时扩展应用程序的能力 - - 不影响现有业务的情况下,无缝地发布新功能。 - - 优化您的硬件资源,降低您的所需成本 + - 不影响现有业务的情况下,无缝地发布新功能 + - 优化硬件资源,降低成本 我们的目标是构建一个软件和工具的生态系统,以减轻您在公共云或私有云运行应用程序的负担。 @@ -91,7 +91,7 @@ Kubernetes 提供了很多的功能,总会有新的场景受益于新特性。 [Label](/docs/user-guide/labels/) 允许用户按照自己的方式组织管理对应的资源。 [注解](/docs/user-guide/annotations/) 使用户能够以自定义的描述信息来修饰资源,以适用于自己的工作流,并为管理工具提供检查点状态的简单方法。 -此外,[Kubernetes 控制面](/docs/admin/cluster-components) 是构建在相同的 [APIs](/docs/api/) 上面,开发人员和用户都可以用。用户可以编写自己的控制器, [调度器](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/scheduler.md)等等,如果这么做,根据新加的[自定义 API](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/extending-api.md) ,可以扩展当前的通用 [CLI 命令行工具](/docs/user-guide/kubectl-overview/)。 +此外,[Kubernetes 控制面 (Controll Plane)](/docs/admin/cluster-components) 是构建在相同的 [APIs](/docs/api/) 上面,开发人员和用户都可以用。用户可以编写自己的控制器, 
[调度器](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/scheduler.md)等等,如果这么做,根据新加的[自定义 API](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/extending-api.md) ,可以扩展当前的通用 [CLI 命令行工具](/docs/user-guide/kubectl-overview/)。 这种 [设计](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/principles.md) 使得许多其他系统可以构建在 Kubernetes 之上。 diff --git a/cn/docs/tutorials/kubernetes-basics/cluster-intro.html b/cn/docs/tutorials/kubernetes-basics/cluster-intro.html index 927a2fdf33546..23eb7e3ba6903 100644 --- a/cn/docs/tutorials/kubernetes-basics/cluster-intro.html +++ b/cn/docs/tutorials/kubernetes-basics/cluster-intro.html @@ -31,12 +31,12 @@

    目标

    Kubernetes 集群

    - Kubernetes 用于协调高度可用的计算机集群,这些计算机群集被连接作为单个单元工作。 Kubernetes 的抽象性允许您将容器化的应用程序部署到集群,而不必专门将其绑定到单个计算机。为了利用这种新的部署模型,应用程序需要以将它们与各个主机分离的方式打包: 它们需要被容器化。容器化应用程序比过去的部署模型更灵活和可用,其中应用程序直接安装到特定机器上,作为深入集成到主机中的软件包。 Kubernetes 在一个集群上以更有效的方式自动分发和调度容器应用程序。 Kubernetes 是一个 开源 平台,并且已经准备好了帮助生产。 + Kubernetes 用于协调高度可用的计算机集群,这些计算机群集被连接作为单个单元工作。 Kubernetes 的抽象性允许您将容器化的应用程序部署到集群,而不必专门将其绑定到单个计算机。为了利用这种新的部署模型,应用程序需要以将它们与各个主机分离的方式打包: 它们需要被容器化。容器化应用程序比过去的部署模型更灵活和可用,其中应用程序直接安装到特定机器上,作为深入集成到主机中的软件包。 Kubernetes 在一个集群上以更有效的方式自动分发和调度容器应用程序。 Kubernetes 是一个 开源 平台,可满足生产环境的需要。

    Kubernetes 集群由两种类型的资源组成:

      -
    • 一个 Master 调度集群
    • -
    • 节点 是应用程序实际运行的地方
    • +
    • 一个 Master 是集群的调度节点
    • +
    • Nodes 是应用程序实际运行的工作节点

    diff --git a/cn/docs/tutorials/kubernetes-basics/deploy-intro.html b/cn/docs/tutorials/kubernetes-basics/deploy-intro.html index b06de18b947a0..65d382a3cb801 100644 --- a/cn/docs/tutorials/kubernetes-basics/deploy-intro.html +++ b/cn/docs/tutorials/kubernetes-basics/deploy-intro.html @@ -27,7 +27,7 @@

    目标

    Kubernetes 部署

    - 一旦运行了 Kubernetes 集群,您可以在其上部署容器化应用程序。为此,您可以创建一个 Kubernetes 部署。部署负责创建和更新应用程序实例。创建部署 后, Kubernetes master 会将部署创建的应用程序实例调度到集群中的各个节点。 + 一旦运行了 Kubernetes 集群,您可以在其上部署容器化应用程序。为此,您可以创建一个 Kubernetes Deployment。Deployment 负责创建和更新应用程序实例。创建 Deployment 后, Kubernetes master 会将 Deployment 创建的应用程序实例调度到集群中的各个节点。

    创建应用程序实例后,Kubernetes 部署控制器会持续监视这些实例。如果托管它的节点不可用或删除,则部署控制器将替换实例。 这提供了一种解决机器故障或维护的自愈机制。

    diff --git a/cn/docs/tutorials/kubernetes-basics/explore-intro.html b/cn/docs/tutorials/kubernetes-basics/explore-intro.html index 22e8319506ff4..786eede3726cc 100644 --- a/cn/docs/tutorials/kubernetes-basics/explore-intro.html +++ b/cn/docs/tutorials/kubernetes-basics/explore-intro.html @@ -50,7 +50,7 @@

    概要:

    - Pod 是一组一个或多个应用程序容器 (例如 Docker 或 rkt),包含共享存储 (卷),IP 地址以及有关如何运行它们的信息。 + Pod是由一个或者多个应用程序容器构成的(例如 Docker 或 rkt),包含共享存储 (卷),IP 地址以及有关如何运行它们的信息。

    diff --git a/cn/docs/tutorials/kubernetes-basics/expose-intro.html b/cn/docs/tutorials/kubernetes-basics/expose-intro.html index 1e133f95fd20b..301df7376dc88 100644 --- a/cn/docs/tutorials/kubernetes-basics/expose-intro.html +++ b/cn/docs/tutorials/kubernetes-basics/expose-intro.html @@ -1,5 +1,5 @@ --- -title: 使用服务公开您的应用程序 +title: 使用服务发布您的应用程序 --- @@ -20,7 +20,7 @@

    目标

    • 了解 Kubernetes 中的服务
    • 了解标签和标签选择器对象如何与服务相关联
    • -
    • 使用服务在 Kubernetes 群集外公开应用程序
    • +
    • 通过 Service 在 Kubernetes 集群外发布应用程序
    diff --git a/cn/docs/tutorials/kubernetes-basics/scale-intro.html b/cn/docs/tutorials/kubernetes-basics/scale-intro.html index a4041dae02104..d021b9586b32d 100644 --- a/cn/docs/tutorials/kubernetes-basics/scale-intro.html +++ b/cn/docs/tutorials/kubernetes-basics/scale-intro.html @@ -24,7 +24,7 @@

    目标

    -

    伸缩应用程序

    +

    应用程序伸缩

在之前的模块中,我们创建了一个 Deployment,然后通过 Service 让应用程序外部可见。该 Deployment 仅为我们的应用程序创建了一个 Pod。当流量增加时,我们将需要扩展应用程序以跟上用户需求。

    diff --git a/cn/docs/tutorials/kubernetes-basics/update-intro.html b/cn/docs/tutorials/kubernetes-basics/update-intro.html index ffb4950d5e012..2b5773d9df766 100644 --- a/cn/docs/tutorials/kubernetes-basics/update-intro.html +++ b/cn/docs/tutorials/kubernetes-basics/update-intro.html @@ -26,7 +26,7 @@

    目标

    更新应用程序

    -

    用户期望应用程序始终可用,并且希望开发人员每天部署新版本。在 Kubernetes 上这通过滚动更新达成。 Rolling updates 允许通过使用新的 Pods 实例逐个更新来实现零停机的部署更新。新的 Pods 会被调度到可用资源的 Node 节点上。

    +

    用户期望应用程序始终可用,并且希望开发人员每天部署新版本。在 Kubernetes 上这通过滚动更新(Rolling updates)达成。 Rolling updates 允许通过使用新的 Pods 实例逐个更新来实现零停机的部署更新。新的 Pods 会被调度到可用资源的 Node 节点上。

    在上一个模块中,我们将应用程序扩展为运行多个实例。这也是执行更新但不影响应用可用性所需的条件。默认情况下,更新期间最大数量的不可用 Pod 以及最大数量的新 Pod 是一。 这两个选项可以配置为数字或百分比(Pods)。 在 Kubernetes 中,更新已版本化,任何部署更新都可以恢复到以前的 (稳定) 版本。

    diff --git a/cn/docs/tutorials/object-management-kubectl/object-management.md b/cn/docs/tutorials/object-management-kubectl/object-management.md index a2013ec41860c..ab63f1665de6f 100644 --- a/cn/docs/tutorials/object-management-kubectl/object-management.md +++ b/cn/docs/tutorials/object-management-kubectl/object-management.md @@ -8,7 +8,7 @@ redirect_from: --- {% capture overview %} -`kubectl` 命令行工具支持 Kubernetes 对象几种不同的创建和管理方法。本文档概述了不同的方法. +`kubectl` 命令行工具支持 Kubernetes 对象几种不同的创建和管理方法。本文档简要介绍了这些方法. {% endcapture %} {% capture body %} @@ -61,7 +61,7 @@ kubectl create deployment nginx --image nginx 在命令式对象配置中,`kubectl` 命令指定操作(创建,替换等),可选标志和至少一个文件名称。指定的文件必须包含对象的完整定义以 YAML 或 JSON 格式。 -请参阅[资源参考](https://kubernetes.io/docs/resources-reference/v1.6/) +请参阅[参考资源](https://kubernetes.io/docs/resources-reference/v1.6/) 查看有关对象定义的更多细节。 **警告:** 命令式 `replace` 命令用新提供的命令替换现有资源规格,将对配置文件中缺少的对象的所有更改都丢弃。这种方法不应更新与配置文件无关的资源类型。例如,`LoadBalancer` 类型的服务使其 `externalIPs` 字段与集群的配置无关。 From 48ffe5ca1adcacf11e4ed2bed4b8f62381f6ac51 Mon Sep 17 00:00:00 2001 From: Dragons Date: Fri, 7 Jul 2017 10:46:20 +0800 Subject: [PATCH 12/87] kubernetes-concepts-overview-components-pr --- cn/docs/concepts/overview/components.md | 124 ++++++++++++++++++++++++ 1 file changed, 124 insertions(+) create mode 100644 cn/docs/concepts/overview/components.md diff --git a/cn/docs/concepts/overview/components.md b/cn/docs/concepts/overview/components.md new file mode 100644 index 0000000000000..a19cb92854bc2 --- /dev/null +++ b/cn/docs/concepts/overview/components.md @@ -0,0 +1,124 @@ +--- +assignees: +- lavalamp +title: Kubernetes 组件 +redirect_from: +- "/docs/admin/cluster-components/" +- "/docs/admin/cluster-components.html" +--- +{% capture overview %} +本文档概述了 Kubernetes 所需的各种二进制组件, 用于提供齐全的功能。 +{% endcapture %} + +{% capture body %} + +## Master 组件 + +Master 组件提供的集群控制。Master 组件对集群做出全局性决策(例如:调度),以及检测和响应集群事件(副本控制器的`replicas`字段不满足时,启动新的副本)。 + +Master 组件可以在集群中的任何节点上运行。然而,为了简单起见,设置脚本通常会启动同一个虚拟机上所有 Master 组件,并且不会在此虚拟机上运行用户容器。请参阅[构建高可用性群集](/docs/admin/high-availability)示例对于多主机 VM 的设置。 + +### API服务器 + +[kube-apiserver](/docs/admin/kube-apiserver)对外暴露了Kubernetes API。它是的 Kubernetes 前端控制层。它被设计为水平扩展,即通过部署更多实例来缩放。请参阅[构建高可用性群集](/docs/admin/high-availability). + +### etcd + +[etcd](/docs/admin/etcd) 用于 Kubernetes 的后端存储。所有集群数据都存储在此处,始终为您的 Kubernetes 集群的 etcd 数据提供备份计划。 + +### kube-controller-manager + +[kube-controller-manager](/docs/admin/kube-controller-manager)运行控制器,它们是处理集群中常规任务的后台线程。逻辑上,每个控制器是一个单独的进程,但为了降低复杂性,它们都被编译成单个二进制文件,并在单个进程中运行。 + +这些控制器包括: + +* 节点控制器: 当节点移除时,负责注意和响应。 +* 副本控制器: 负责维护系统中每个副本控制器对象正确数量的 Pod。 +* 端点控制器: 填充 Endpoints 对象(即连接 Services & Pods)。 +* 服务帐户和令牌控制器: 为新的命名空间创建默认帐户和 API 访问令牌. + +### 云控制器管理器 + +云控制器管理器是用于与底层云提供商交互的控制器。云控制器管理器二进制是 Kubernetes v1.6 版本中引入的 Alpha 功能。 + +云控制器管理器仅运行云提供商特定的控制器循环。您必须在 kube-controller-manager 中禁用这些控制器循环,您可以通过在启动 kube-controller-manager 时将 `--cloud-provider` 标志设置为`external`来禁用控制器循环。 + +云控制器管理器允许云供应商代码和 Kubernetes 核心彼此独立发展,在以前的版本中,Kubernetes 核心代码依赖于云提供商特定的功能代码。在未来的版本中,云供应商的特定代码应由云供应商自己维护,并与运行 Kubernetes 的云控制器管理器相关联。 + +以下控制器具有云提供商依赖关系: + +* 节点控制器: 用于检查云提供商以确定节点是否在云中停止响应后被删除 +* 路由控制器: 用于在底层云基础架构中设置路由 +* 服务控制器: 用于创建,更新和删除云提供商负载平衡器 +* 数据卷控制器: 用于创建,附加和装载卷,并与云提供商进行交互以协调卷 + +### kube-scheduler + +[kube-scheduler](/docs/admin/kube-scheduler)观看没有分配节点的新创建的 Pod,选择一个节点供他们运行。 + +### 插件 + +插件是实现集群功能的 Pod 和 Service。 Pods 可能通过 Deployments,ReplicationControllers 管理。命名空间的插件对象被创建在 `kube-system` 命名空间。 + +Addon 管理器用于创建和维护附加资源. 有关详细信息,请参阅[here](http://releases.k8s.io/HEAD/cluster/addons). 
+ +#### DNS + +虽然其他插件并不是严格要求的,但所有 Kubernetes 集群都应该具有[Cluster DNS](/docs/concepts/services-networking/dns-pod-service/),许多示例依赖于它。 + +Cluster DNS是一个 DNS 服务器,除了您的环境中的其他 DNS 服务器,它为 Kubernetes 服务提供DNS记录。 + +Kubernetes 启动的容器自动将 DNS 服务器包含在 DNS 搜索中。 + +#### 用户界面 + +kube-ui 提供了集群状态的只读概述。有关更多信息,请参阅[使用HTTP代理访问 Kubernetes API](/docs/tasks/access-kubernetes-api/http-proxy-access-api/) + + +#### 容器资源监控 + +[容器资源监控](/docs/user-guide/monitoring)记录关于中央数据库中的容器的通用时间序列指标,并提供用于浏览该数据的 UI。 + +#### 集群级日志记录 + +[Cluster-level logging](/docs/user-guide/logging/overview) 负责使用搜索/浏览界面将容器日志保存到中央日志存储。 + +## 节点组件 + +节点组件在每个节点上运行,维护运行的 Pod 并提供 Kubernetes 运行时环境。 + +### kubelet + +[kubelet](/docs/admin/kubelet)是 Master 节点代理,它监视已分配给其节点的 Pod(通过 apiserver 或通过本地配置文件)和: + +* 安装 Pod 的所需数据卷(Volume)。 +* 下载 Pod 的 secrets。 +* 通过 Docker码 运行(或通过 rkt)运行 Pod 的容器。 +* 定期对容器生命周期进行探测。 +* 如果需要,通过创建 *mirror pod* 将报告状态报告回系统的其余部分。 +* 将节点的状态报告回系统的其余部分。 + +### kube-proxy + +[kube-proxy](/docs/admin/kube-proxy)通过维护主机上的网络规则并执行连接转发,实现了Kubernetes服务抽象。 + + +### docker + +Docker 用于运行容器。 + +### rkt + +实验中支持 rkt 运行容器作为 Docker 的替代方案。 + +### supervisord + +supervisord 是一个轻量级的过程监控和控制系统,可以用来保证 kubelet 和 docker 运行。 + +### fluentd + +fluentd 是一个守护进程,它有助于提供[cluster-level logging](#cluster-level-logging) 集群层级的日志。 + +{% endcapture %} + +{% include templates/concept.md %} From b3bce95c2b9ccd6bd9dd0e5c61a39940c5f184ef Mon Sep 17 00:00:00 2001 From: Dragons Date: Mon, 10 Jul 2017 21:48:09 +0800 Subject: [PATCH 13/87] admin-authorization-abac-pr --- cn/docs/admin/authorization/abac.md | 141 ++++++++++++++++++++++++++++ 1 file changed, 141 insertions(+) create mode 100644 cn/docs/admin/authorization/abac.md diff --git a/cn/docs/admin/authorization/abac.md b/cn/docs/admin/authorization/abac.md new file mode 100644 index 0000000000000..a0759a7be1ff3 --- /dev/null +++ b/cn/docs/admin/authorization/abac.md @@ -0,0 +1,141 @@ +--- +assignees: +- erictune +- lavalamp +- deads2k +- liggitt +title: ABAC 模式 +--- + +{% capture overview %} + +基于属性的访问控制(ABAC)定义了访问控制范例,其中通过使用将属性组合在一起的策略来向用户授予访问权限。 + +{% endcapture %} + +{% capture body %} + +## 策略文件格式 + +基于 `ABAC` 模式,可以这样指定策略文件 `--authorization-policy-file=SOME_FILENAME`。 + +此文件是 JSON 格式[每行都是一个JSON对象](http://jsonlines.org/),不应存在封闭的列表或映射,每行只有一个映射。 + +每一行都是一个 "策略对象",策略对象是具有以下映射的属性: + + - 版本控制属性: + - `apiVersion`,字符串类型: 有效值为"abac.authorization.kubernetes.io/v1beta1",允许版本控制和转换策略格式。 + - `kind`,字符串类型: 有效值为 "Policy",允许版本控制和转换策略格式。 + - `spec` 配置为具有以下映射的属性: + - 匹配属性: + - `user`,字符串类型; 来自 `--token-auth-file` 的用户字符串,如果你指定`user`,它必须与验证用户的用户名匹配。 + - `group`,字符串类型; 如果指定`group`,它必须与经过身份验证的用户的一个组匹配,`system:authenticated`匹配所有经过身份验证的请求。`system:unauthenticated`匹配所有未经过身份验证的请求。 + - 资源匹配属性: + - `apiGroup`,字符串类型; 一个 API 组。 + - 例: `extensions` + - 通配符: `*`匹配所有 API 组。 + - `namespace`,字符串类型; 一个命名空间。 + - 例如: `kube-system` + - 通配符: `*` 匹配所有资源请求。 + - `resource`,字符串类型; 资源类型。 + - 例:`pods` + - 通配符: `*`匹配所有资源请求。 + - 非资源匹配属性: + - `nonResourcePath`,字符串类型; 非资源请求路径。 + - 例如:`/version`或`/apis` + - 通配符: + - `*` 匹配所有非资源请求。 + - `/foo/*` 匹配`/foo/`的所有子路径。 + - `readonly`,键入 boolean,如果为 true,则表示该策略仅适用于 get,list 和 watch 操作。 + +**注意:** 未设置的属性与类型设置为零值的属性相同(例如空字符串,0、false),然而未知的应该可读性优先。 + +在将来,策略可能以 JSON 格式表示,并通过 REST 界面进行管理。 + +## 授权算法 + +请求具有与策略对象的属性对应的属性。 + +当接收到请求时,确定属性。 未知属性设置为其类型的零值(例如: 空字符串,0,false)。 + +设置为`“*"`的属性将匹配相应属性的任何值。 + +检查属性的元组,以匹配策略文件中的每个策略。 如果至少有一行匹配请求属性,则请求被授权(但可能会在稍后验证失败)。 + +要允许任何经过身份验证的用户执行某些操作,请将策略组属性设置为 `"system:authenticated“`。 + +要允许任何未经身份验证的用户执行某些操作,请将策略组属性设置为`"system:authentication“`。 + +要允许用户执行任何操作,请使用 apiGroup,命名空间, +资源和 nonResourcePath 属性设置为 `“*"`的策略. 
+ +要允许用户执行任何操作,请使用设置为`“*”` 的 apiGroup,namespace,resource 和 nonResourcePath 属性编写策略。 + +## Kubectl + +Kubectl 使用 api-server 的 `/api` 和 `/apis` 端点进行协商客户端/服务器版本。 通过创建/更新来验证发送到API的对象操作,kubectl 查询某些 swagger 资源。 对于API版本"v1", 那就是`/swaggerapi/api/v1` & `/swaggerapi/ experimental/v1`。 + +当使用 ABAC 授权时,这些特殊资源必须明确通过策略中的 `nonResourcePath` 属性暴露出来(参见下面的[examples](#examples)): + +* `/api`,`/api/*`,`/apis`和`/apis/*` 用于 API 版本协商. +* `/version` 通过 `kubectl version` 检索服务器版本. +* `/swaggerapi/*` 用于创建/更新操作. + +要检查涉及到特定kubectl操作的HTTP调用,您可以调整详细程度: + + kubectl --v=8 version + +## 例子 + +1. Alice 可以对所有资源做任何事情: + + ```json + {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "*", "resource": "*", "apiGroup": "*"}} + ``` +2. Kubelet 可以读取任何pod: + + ```json + {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "pods", "readonly": true}} + ``` +3. Kubelet 可以读写事件: + + ```json + {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "events"}} + ``` +4. Bob 可以在命名空间“projectCaribou"中读取 pod: + + ```json + {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "namespace": "projectCaribou", "resource": "pods", "readonly": true}} + ``` +5. 任何人都可以对所有非资源路径进行只读请求: + + ```json + {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:authenticated", "readonly": true, "nonResourcePath": "*"}} + {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:unauthenticated", "readonly": true, "nonResourcePath": "*"}} + ``` + +[完整文件示例](http://releases.k8s.io/{{page.githubbranch}}/pkg/auth/authorizer/abac/example_policy_file.jsonl) + +## 服务帐户的快速说明 + +服务帐户自动生成用户。 用户名是根据命名约定生成的: + +```shell +system:serviceaccount:: +``` +创建新的命名空间也会导致创建一个新的服务帐户: + +```shell +system:serviceaccount::default +``` + +例如,如果要将 API 的 kube-system 完整权限中的默认服务帐户授予,则可以将此行添加到策略文件中: + +```json +{"apiVersion":"abac.authorization.kubernetes.io/v1beta1","kind":"Policy","spec":{"user":"system:serviceaccount:kube-system:default","namespace":"*","resource":"*","apiGroup":"*"}} +``` + +需要重新启动 apitorver 以获取新的策略行. + +{% endcapture %} +{% include templates/concept.md %} From c35043d0fed86b6f744b8a02ad40cd3f19437ab7 Mon Sep 17 00:00:00 2001 From: Dragons Date: Mon, 10 Jul 2017 21:54:48 +0800 Subject: [PATCH 14/87] templates-concept-pr --- cn/_includes/templates/concept.md | 32 +++++++++++++++++++++++++++++++ 1 file changed, 32 insertions(+) create mode 100644 cn/_includes/templates/concept.md diff --git a/cn/_includes/templates/concept.md b/cn/_includes/templates/concept.md new file mode 100644 index 0000000000000..cfd2c8eae9f2e --- /dev/null +++ b/cn/_includes/templates/concept.md @@ -0,0 +1,32 @@ +{% if overview %} + +{{ overview }} + +{% else %} + +{% include templates/_errorthrower.md missing_block='overview' purpose='provides an overview of this concept.' %} + +{% endif %} + +* TOC +{:toc} + +{% if body %} + +{{ body }} + +{% else %} + +{% include templates/_errorthrower.md missing_block='body' purpose='supplies the body of the page content.' 
%} + +{% endif %} + + +{% if whatsnext %} + +## 开始下一步 + +{{ whatsnext }} + +{% endif %} + From 77b1039aec0feadeb93ae3b3da8b6173bac8f716 Mon Sep 17 00:00:00 2001 From: Dragons Date: Mon, 10 Jul 2017 22:20:51 +0800 Subject: [PATCH 15/87] admin-authorization-index-pr --- cn/docs/admin/authorization/index.md | 155 +++++++++++++++++++++++++++ 1 file changed, 155 insertions(+) create mode 100644 cn/docs/admin/authorization/index.md diff --git a/cn/docs/admin/authorization/index.md b/cn/docs/admin/authorization/index.md new file mode 100644 index 0000000000000..fdf788fd1cd42 --- /dev/null +++ b/cn/docs/admin/authorization/index.md @@ -0,0 +1,155 @@ +--- +assignees: +- erictune +- lavalamp +- deads2k +- liggitt +title: 概述 +--- + +{% capture overview %} + +学习有关 Kubernetes 授权的更多信息,包括有关使用支持的授权模块创建策略的详细信息。 + +{% endcapture %} + +{% capture body %} + +在 Kubernetes 里,您必须经过身份验证(登录),才能授权您的请求(授予访问权限).。有关认证的信息,请参阅[访问控制概述](/docs/admin/access-the-api/)。 + +Kubernetes 提供通用的 REST API 请求。这意味着 Kubernetes 授权可以与现有的组织或云提供商的访问控制系统一起使用,该系统可以处理除 Kubernetes API 之外的其他 API。 + +## 确定请求是允许还是被拒绝 +Kubernetes 使用 API​​ 服务器授权 API 请求。它根据所有策略评估所有请求属性,并允许或拒绝请求。某些策略必须允许 API 请求的所有部分继续进行,这意味着默认情况下是拒绝权限。 + +(虽然 Kubernetes 使用 API ​​服务器,访问控制和依赖特定类型对象的特定领域策略由 Admission 控制器处理。) + +当配置多个授权模块时,按顺序检查每个模块,如果有任何模块授权请求,则可以继续执行该请求。如果所有模块拒绝请求,则拒绝该请求(HTTP状态代码403)。 + +## 查看您的请求属性 + +Kubernetes 仅查看以下API请求属性: + +* **user** - 验证期间提供的 `user` 字符串 +* **group** - 认证用户所属的组名列表 +* **“extra"** - 由认证层提供的任意字符串键到字符串值的映射 +* **API** - 指示请求是否用于API资源 +* **Request path** - 诸如`/api`或`/healthz`的其他非资源端点的路径(请参阅[kubectl](#kubectl)). +* **API request verb** - API 动词 `get`,`list`,`create`,`update`,`patch`,`watch`,`proxy`,`redirect`,`delete`和`deletecollection`用于资源请求。要确定资源 API 端点的请求动词,请参阅**确定下面的请求动词**. +* **HTTP request verb** - HTTP动词`get`,`post`,`put`和`delete`用于非资源请求 +* **Resource** - 正在访问的资源的ID或名称(仅适用于资源请求) + --* 对于使用`get`, `update`, `patch`, 和 `delete`动词的资源请求,您必须提供资源名称。 +* **Subresource** - 正在访问的子资源(仅用于资源请求) +* **Namespace** - 正在被访问的对象的命名空间(仅针对命名空间的资源请求) +* **API group** - 正在访问的API组(仅用于资源请求). 一个空字符串指定[核心 API 组](/docs/api/). + +## 确定请求动词 + +要确定资源 API 端点的请求动词,请查看所使用的HTTP动词以及请求是否对单个资源或资源集合进行操作: + +HTTP动词| 请求动词 +---------- | --------------- +POST | 创建 +GET,HEAD | 获取(个人资源),列表(集合) +PUT | 更新 +PATCH | 补丁 +DELETE| 删除(个人资源),删除(收藏) + +Kubernetes 有时会使用专门的动词检查授权以获得额外的权限。例如: + +* [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/)在`extensions` API组中的`podsecuritypolicies`资源上检查`use`动词的授权。 +* [RBAC](/docs/admin/authorization/rbac/#privilege-escalation-prevention-and-bootstrapping) 在`rbac.authorization.k8s.io` API组中的`roles`和`clusterroles`资源上检查`bind`动词的授权。 +* [认证](/docs/admin/authentication/) 在核心API组中的`users`,`groups`和`serviceaccounts`上的`impersonate`动词的授权以及`authentication.k8s.io` API组中的`userextras`进行层次检查。 + +## 授权模块 +* **ABAC模式** - 基于属性的访问控制(ABAC)定义了访问控制范例,通过使用将属性组合在一起的策略来授予用户访问权限。策略可以使用任何类型的属性(用户属性,资源属性,对象,环境属性等)。要了解有关使用ABAC模式的更多信息,请参阅[ABAC模式](/docs/admin/authorization/abac/) +* **RBAC模式** - 基于角色的访问控制(RBAC)是一种根据企业内个人用户的角色来调整对计算机或网络资源的访问的方法。在这种情况下,访问是单个用户执行特定任务(例如查看,创建或修改文件)的能力。要了解有关使用RBAC模式的更多信息,请参阅[RBAC模式](/docs/admin/authorization/rbac/) +*当指定 "RBAC"(基于角色的访问控制)使用 "rbac.authorization.k8s.io" API组来驱动授权决定时,允许管理员通过Kubernetes API动态配置权限策略. +.. *截至1.6 RBAC模式是测试版. +.. *要启用RBAC,请使用 `--authorization-mode=RBAC` 启动 apiserver. +* **Webhook模式** - WebHook 是HTTP回调:发生事件时发生的HTTP POST; 通过HTTP POST简单的事件通知. 实施 WebHooks 的 Web 应用程序将在某些事情发生时向URL发送消息. 要了解有关使用Webhook模式的更多信息,请参阅[Webhook模式](/docs/admin/authorization/webhook/) +* **自定义模块** - 您可以创建使用Kubernetes的自定义模块. 
要了解更多信息,请参阅下面的**自定义模块**。 + +### 自定义模块 +可以相当容易地开发其他实现,APIserver 调用 Authorizer 接口: + +```go +type Authorizer interface { + Authorize(a Attributes) error +} +``` + +以确定是否允许每个API操作. + +授权插件是实现此接口的模块.授权插件代码位于 `pkg/auth/authorizer/$MODULENAME` 中。 + +授权模块可以完全实现,也可以拨出远程授权服务。 授权模块可以实现自己的缓存,以减少具有相同或相似参数的重复授权调用的成本。 开发人员应该考虑缓存和撤销权限之间的交互。 + +#### 检查API访问 + +Kubernetes 将 `subjectaccessreviews.v1.authorization.k8s.io` 资源公开为允许外部访问API授权者决策的普通资源。 无论您选择使用哪个授权器,您都可以使用`SubjectAccessReview`发出一个`POST`,就像webhook授权器的`apis/authorization.k8s.io/v1/subjectaccessreviews` 端点一样,并回复一个响应。 例如: + + +```bash +kubectl create --v=8 -f - << __EOF__ +{ + "apiVersion": "authorization.k8s.io/v1", + "kind": "SubjectAccessReview", + "spec": { + "resourceAttributes": { + "namespace": "kittensandponies", + "verb": "get", + "group": "unicorn.example.org", + "resource": "pods" + }, + "user": "jane", + "group": [ + "group1", + "group2" + ], + "extra": { + "scopes": [ + "openid", + "profile" + ] + } + } +} +__EOF__ + +--- snip lots of output --- + +I0913 08:12:31.362873 27425 request.go:908] Response Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1","metadata":{"creationTimestamp":null},"spec":{"resourceAttributes":{"namespace":"kittensandponies","verb":"GET","group":"unicorn.example.org","resource":"pods"},"user":"jane","group":["group1","group2"],"extra":{"scopes":["openid","profile"]}},"status":{"allowed":true}} +subjectaccessreview "" created +``` + +这对于调试访问问题非常有用,因为您可以使用此资源来确定授权者授予哪些访问权限。 + +## 为您的授权模块使用标志 + +您的策略中必须包含一个标志,以指出您的策略包含哪个授权模块: + +可以使用以下标志: + - `--authorization-mode=ABAC` 基于属性的访问控制(ABAC)模式允许您使用本地文件配置策略。 + - `--authorization-mode=RBAC` 基于角色的访问控制(RBAC)模式允许您使用Kubernetes API创建和存储策略. + - `--authorization-mode=Webhook` WebHook是一种HTTP回调模式,允许您使用远程REST管理授权。 + - `--authorization-mode=AlwaysDeny` 此标志阻止所有请求. 仅使用此标志进行测试。 + - `--authorization-mode=AlwaysAllow` 此标志允许所有请求. 只有在您不需要API请求授权的情况下才能使用此标志。 + +您可以选择多个授权模块. 
如果其中一种模式为 `AlwaysAllow`,则覆盖其他模式,并允许所有API请求。 + +## 版本控制 + +对于版本 1.2,配置了 kube-up.sh 创建的集群,以便任何请求都不需要授权。 + +从版本 1.3 开始,配置由 kube-up.sh 创建的集群,使得 ABAC 授权模块处于启用状态。但是,其输入文件最初设置为允许所有用户执行所有操作,集群管理员需要编辑该文件,或者配置不同的授权器来限制用户可以执行的操作。 + +{% endcapture %} +{% capture whatsnext %} + +* 要学习有关身份验证的更多信息,请参阅**身份验证**[控制访问 Kubernetes API](docs/admin/access-the-api/)。 +* 要了解有关入学管理的更多信息,请参阅[使用 Admission 控制器](docs/admin/admission-controllers/)。 +* +{% endcapture %} + +{% include templates/concept.md %} From f4ad2b179f5970d3e0dd7d0267644654882ff05c Mon Sep 17 00:00:00 2001 From: Dragons Date: Tue, 15 Aug 2017 10:47:31 +0800 Subject: [PATCH 16/87] kubernetes-concepts-overview-components-pr-update-fix --- cn/docs/admin/authorization/abac.md | 4 ++-- cn/docs/concepts/overview/components.md | 16 ++++++++-------- 2 files changed, 10 insertions(+), 10 deletions(-) diff --git a/cn/docs/admin/authorization/abac.md b/cn/docs/admin/authorization/abac.md index a0759a7be1ff3..c1b71874328a1 100644 --- a/cn/docs/admin/authorization/abac.md +++ b/cn/docs/admin/authorization/abac.md @@ -9,7 +9,7 @@ title: ABAC 模式 {% capture overview %} -基于属性的访问控制(ABAC)定义了访问控制范例,其中通过使用将属性组合在一起的策略来向用户授予访问权限。 +基于属性的访问控制(Attribute-based access control - ABAC)定义了访问控制范例,其中通过使用将属性组合在一起的策略来向用户授予访问权限。 {% endcapture %} @@ -21,7 +21,7 @@ title: ABAC 模式 此文件是 JSON 格式[每行都是一个JSON对象](http://jsonlines.org/),不应存在封闭的列表或映射,每行只有一个映射。 -每一行都是一个 "策略对象",策略对象是具有以下映射的属性: +每一行都是一个 "策略对象",策略对象是具有以下映射的属性: - 版本控制属性: - `apiVersion`,字符串类型: 有效值为"abac.authorization.kubernetes.io/v1beta1",允许版本控制和转换策略格式。 diff --git a/cn/docs/concepts/overview/components.md b/cn/docs/concepts/overview/components.md index a19cb92854bc2..1db0b84346148 100644 --- a/cn/docs/concepts/overview/components.md +++ b/cn/docs/concepts/overview/components.md @@ -28,22 +28,22 @@ Master 组件可以在集群中的任何节点上运行。然而,为了简单 ### kube-controller-manager -[kube-controller-manager](/docs/admin/kube-controller-manager)运行控制器,它们是处理集群中常规任务的后台线程。逻辑上,每个控制器是一个单独的进程,但为了降低复杂性,它们都被编译成单个二进制文件,并在单个进程中运行。 +[kube-controller-manager](/docs/admin/kube-controller-manager)运行控制器,它们是处理集群中常规任务的后台线程。逻辑上,每个控制器是一个单独的进程,但为了降低复杂性,它们都被编译成独立的可执行文件,并在单个进程中运行。 这些控制器包括: * 节点控制器: 当节点移除时,负责注意和响应。 * 副本控制器: 负责维护系统中每个副本控制器对象正确数量的 Pod。 -* 端点控制器: 填充 Endpoints 对象(即连接 Services & Pods)。 +* 端点控制器: 填充 端点(Endpoints) 对象(即连接 Services & Pods)。 * 服务帐户和令牌控制器: 为新的命名空间创建默认帐户和 API 访问令牌. 
-### 云控制器管理器
+### 云控制器管理器-(cloud-controller-manager)
 
-云控制器管理器是用于与底层云提供商交互的控制器。云控制器管理器二进制是 Kubernetes v1.6 版本中引入的 Alpha 功能。
+cloud-controller-manager 是用于与底层云提供商交互的控制器。云控制器管理器二进制是 Kubernetes v1.6 版本中引入的 Alpha 功能。
 
-云控制器管理器仅运行云提供商特定的控制器循环。您必须在 kube-controller-manager 中禁用这些控制器循环,您可以通过在启动 kube-controller-manager 时将 `--cloud-provider` 标志设置为`external`来禁用控制器循环。
+cloud-controller-manager 仅运行云提供商特定的控制器循环。您必须在 kube-controller-manager 中禁用这些控制器循环,您可以通过在启动 kube-controller-manager 时将 `--cloud-provider` 标志设置为`external`来禁用控制器循环。
 
-云控制器管理器允许云供应商代码和 Kubernetes 核心彼此独立发展,在以前的版本中,Kubernetes 核心代码依赖于云提供商特定的功能代码。在未来的版本中,云供应商的特定代码应由云供应商自己维护,并与运行 Kubernetes 的云控制器管理器相关联。
+cloud-controller-manager 允许云供应商代码和 Kubernetes 核心彼此独立发展,在以前的版本中,Kubernetes 核心代码依赖于云提供商特定的功能代码。在未来的版本中,云供应商的特定代码应由云供应商自己维护,并与运行 Kubernetes 的云控制器管理器相关联。
 
 以下控制器具有云提供商依赖关系:
 
 * 节点控制器: 用于检查云提供商以确定节点是否在云中停止响应后被删除
 * 路由控制器: 用于在底层云基础架构中设置路由
 * 服务控制器: 用于创建,更新和删除云提供商负载平衡器
 * 数据卷控制器: 用于创建,附加和装载卷,并与云提供商进行交互以协调卷
 
-### kube-scheduler
+### 调度器 - (kube-scheduler)
 
-[kube-scheduler](/docs/admin/kube-scheduler)观看没有分配节点的新创建的 Pod,选择一个节点供他们运行。
+[kube-scheduler](/docs/admin/kube-scheduler)监视没有分配节点的新创建的 Pod,选择一个节点供他们运行。
 
 ### 插件

From d0aa820c76ed9ea09ac56c48404450424014635a Mon Sep 17 00:00:00 2001
From: XuJun00192603
Date: Thu, 14 Sep 2017 01:34:40 +0800
Subject: [PATCH 17/87] ZTE-SH-CN-cluster-administration-sysctl-cluster translate-2017-09-13

---
 .../cluster-administration/sysctl-cluster.md  | 101 ++++++++++++++++++
 1 file changed, 101 insertions(+)
 create mode 100644 cn/docs/concepts/cluster-administration/sysctl-cluster.md

diff --git a/cn/docs/concepts/cluster-administration/sysctl-cluster.md b/cn/docs/concepts/cluster-administration/sysctl-cluster.md
new file mode 100644
index 0000000000000..9476750d551fa
--- /dev/null
+++ b/cn/docs/concepts/cluster-administration/sysctl-cluster.md
@@ -0,0 +1,101 @@
+---
+approvers:
+- sttts
+title: Kubernetes集群中使用Sysctls
+---
+
+* TOC
+{:toc}
+
+这篇文章描述了如何在Kubernetes集群中使用Sysctls。
+
+## 什么是Sysctl?
+
+在Linux中,Sysctl接口允许管理员在内核运行时修改内核参数。这些可用参数都存在于虚拟进程文件系统中的`/proc/sys/`目录。这些内核参数作用于各种子系统中,例如:
+
+- 内核 (通用前缀:`kernel.`)
+- 网络 (通用前缀:`net.`)
+- 虚拟内存 (通用前缀:`vm.`)
+- 设备专用 (通用前缀:`dev.`)
+- 更多子系统描述见 [Kernel docs](https://www.kernel.org/doc/Documentation/sysctl/README)。
+
+获取所有参数列表,可运行
+
+```
+$ sudo sysctl -a
+```
+
+## 命名空间级vs.节点级Sysctls
+
+在今天的Linux内核中,有一些sysctl是 _命名空间级_ 的。这意味着它们可以在同一节点上的不同pod之间独立设置。sysctl必须是命名空间级的,才能在Kubernetes的pod语境中使用。
+
+以下列出了Sysctls中已知的 _命名空间级_ :
+
+- `kernel.shm*`(内核中共享内存相关参数),
+- `kernel.msg*`(内核中SystemV消息队列相关参数),
+- `kernel.sem`(内核中信号量参数),
+- `fs.mqueue.*`(内核中POSIX消息队列相关参数),
+- `net.*`(内核中网络配置项相关参数)。
+
+Sysctls中非命名空间级的被称为 _节点级_ ,其必须由集群管理员手动设置:要么通过节点底层Linux发行版的方式(例如,通过 `/etc/sysctls.conf`),要么在特权容器中使用DaemonSet设置。
+
+**注意**: 一个好的做法是,将集群中设置了特殊sysctl的节点视为 _有污点的_ ,并且只将需要这些sysctl设置的pod调度到这些节点上。建议采用Kubernetes [_污点和容忍_ 特征](/docs/user-guide/kubectl/{{page.version}}/#taint) 来实现。
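+例如,下面是一个给节点打污点的简单示意(节点名 `node1` 与键值对 `dedicated=sysctl` 均为假设的示例值),之后只有带相应容忍的 pod 才会被调度到该节点:
+
+```shell
+$ kubectl taint nodes node1 dedicated=sysctl:NoSchedule
+```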
+ +该列表在未来的Kubernetes版本里还会继续扩充,当kubelet提供更好的隔离机制时。 + +所有 _安全的_ sysctls 都是默认启用的。 + +所有 _不安全的_ sysctls 默认是关闭的,且必须通过每个节点基础上的集群管理手动开启。禁用不安全的sysctls的Pods将会被计划,但不会启动。 + +**警告**: 由于他们的本质是 _不安全的_ ,使用 _不安全的_ sysctls是自担风险的,并且会导致严重的问题,例如容器的错误行为,资源短缺或者是一个节点的完全破损。 + +## 使能不安全的Sysctls + +牢记上面的警告, 在非常特殊的情况下,例如高性能指标或是实时应用程序优化,集群管理员可以允许 _不安全的_ +sysctls。 _不安全的_ sysctls 会打上kubelet标识,在逐节点的基础上被启用,例如: + +```shell +$ kubelet --experimental-allowed-unsafe-sysctls 'kernel.msg*,net.ipv4.route.min_pmtu' ... +``` + +只有 _命名空间级_ sysctls 可以使用该方法启用。 + +## 给Pod配置Sysctls + +在Kubernetes 1.4版本中,sysctl特性是一个alpha API。因此,sysctls被设置为在pods上使用注释。它们适用于同一个pod上的所有容器。 + +这里列举了一个例子, _安全的_ 和 _不安全的_ sysctls使用不同的注释: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: sysctl-example + annotations: + security.alpha.kubernetes.io/sysctls: kernel.shm_rmid_forced=1 + security.alpha.kubernetes.io/unsafe-sysctls: net.ipv4.route.min_pmtu=1000,kernel.msgmax=1 2 3 +spec: + ... +``` + +**注意**: 包含以上规定的 _不安全的_ sysctls的一个Pod, 将无法启动任何不能使这两个 _不安全的_ sysctls明确的节点。 推荐 +_节点级_ sysctls使用 [_容点和污点_ +特征](/docs/user-guide/kubectl/v1.6/#taint) or [taints on nodes](/docs/concepts/configuration/taint-and-toleration/) +来将这些pods分配到正确的nodes上。 From 3c79f8c40aaa84e3d1ff3413e762e36fe08b528c Mon Sep 17 00:00:00 2001 From: zhangmingld Date: Thu, 14 Sep 2017 15:29:10 +0800 Subject: [PATCH 18/87] ZTE-SH-CN-images.md translation to the docs/concept/containers/images.md --- cn/docs/concepts/containers/images.md | 299 ++++++++++++++++++++++++++ 1 file changed, 299 insertions(+) create mode 100644 cn/docs/concepts/containers/images.md diff --git a/cn/docs/concepts/containers/images.md b/cn/docs/concepts/containers/images.md new file mode 100644 index 0000000000000..5bb9cebfd2868 --- /dev/null +++ b/cn/docs/concepts/containers/images.md @@ -0,0 +1,299 @@ +--- +approvers: +- erictune +- thockin +title: 镜像 +--- + +{% capture overview %} + +在Kubernetes pod中引用镜像前,请创建Docker镜像,并将之推送到镜像仓库中。 +容器的“image”属性支持和Docker命令行相同的语法,包括私有仓库和标签。 + +{% endcapture %} + +{:toc} + +{% capture body %} + +## 升级镜像 +默认的镜像拉取策略是“IfNotPresent”,在镜像已经存在的情况下,kubelet将不在去拉取镜像。 +如果总是想要拉取镜像,必须设置拉取策略为“Always”或者设置镜像标签为“:latest”。 + +如果没有指定镜像的标签,它会被假定为“:latest”,同时拉取策略为“Always”。 + +注意应避免使用“:latest”标签,参见 [Best Practices for Configuration](/docs/concepts/configuration/overview/#container-images) 获取更多信息。 + +## 使用私有仓库 + +从私有仓库读取镜像时可能需要密钥。 +凭证可以用以下方式提供: + + - 使用Google Container Registry + - 每个集群分别配置 + - 在Google Compute Engine 或者 Google Container Engine上自动配置 + - 所有的pod都能读取项目的私有仓库 + - 使用 AWS EC2 Container Registry (ECR) + - 使用IAM角色和策略来控制对ECR仓库的访问 + - 自动刷新ECR的登录凭证 + - 使用 Azure Container Registry (ACR) + - 配置节点对私有仓库认证 + - 所有的pod都可以读取已配置的私有仓库 + - 需要集群管理员提供node的配置 + - 提前拉取镜像 + - 所有的pod都可以使用node上缓存的镜像 + - 需要以root进入node操作 + - pod上指定 ImagePullSecrets + - 只有提供了密钥的pod才能接入私有仓库 +下面将详细描述每一项 + + +### 使用 Google Container Registry +Kuberetes运行在Google Compute Engine (GCE)时原生支持[Google ContainerRegistry (GCR)] +(https://cloud.google.com/tools/container-registry/)。如果kubernetes集群运行在GCE +或者Google Container Engine (GKE)上,使用镜像全名(e.g. 
+集群中的所有pod都会有读取这个仓库中镜像的权限。
+
+Kubelet将使用实例的Google service account向GCR认证。实例的service account拥有 `https://www.googleapis.com/auth/devstorage.read_only`,所以它可以从项目的GCR拉取,但不能推送。
+
+### 使用 AWS EC2 Container Registry
+
+当Node是AWS EC2实例时,Kubernetes原生支持[AWS EC2 Container Registry](https://aws.amazon.com/ecr/)。
+
+在pod定义中,使用镜像全名即可 (例如 `ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag`)
+
+集群中可以创建pod的用户都可以使用ECR中的任意镜像运行pod。
+
+Kubelet会获取并且定期刷新ECR的凭证。它需要以下权限
+
+- `ecr:GetAuthorizationToken`
+- `ecr:BatchCheckLayerAvailability`
+- `ecr:GetDownloadUrlForLayer`
+- `ecr:GetRepositoryPolicy`
+- `ecr:DescribeRepositories`
+- `ecr:ListImages`
+- `ecr:BatchGetImage`
+
+要求:
+
+- 必须使用kubelet 1.2.0及以上版本
+- 如果node在区域A,而镜像仓库在另一个区域B,需要1.3.0及以上版本
+- 区域中必须提供ECR
+
+诊断:
+
+- 验证是否满足以上要求
+- 获取 $REGION(例如 `us-west-2`)的 ECR 凭证,然后使用该凭证 SSH 到主机手动运行 docker,检查是否能正常工作
+- 验证 kubelet 是否使用参数`--cloud-provider=aws`运行
+- 检查kubelet日志(例如 `journalctl -u kubelet`),是否有类似的行
+  - `plugins.go:56] Registering credential provider: aws-ecr-key`
+  - `provider.go:91] Refreshing cache for provider: *aws_credentials.ecrProvider`
+
+### 使用 Azure Container Registry (ACR)
+当使用[Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/)时,可以使用admin user或者service principal认证。
+无论哪种情况,认证都通过标准的 Docker authentication 完成。本指南假设使用[azure-cli](https://github.com/azure/azure-cli)命令行工具。
+
+首先,需要创建仓库并获取凭证,完整的文档请参考 [Azure container registry documentation](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-azure-cli)。
+
+创建好容器仓库后,可以使用以下凭证登录:
+
+  * `DOCKER_USER` : service principal, or admin username
+  * `DOCKER_PASSWORD`: service principal password, or admin user password
+  * `DOCKER_REGISTRY_SERVER`: `${some-registry-name}.azurecr.io`
+  * `DOCKER_EMAIL`: `${some-email-address}`
+
+填写以上变量后,就可以 [configure a Kubernetes Secret and use it to deploy a Pod](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod)。
+
+
+### 配置Nodes对私有仓库认证
+
+**注意:** 如果在Google Container Engine (GKE)上运行集群,每个节点上都会有`.dockercfg`文件,它包含对Google Container Registry的凭证。不需要使用以下方法。
+
+**注意:** 如果在AWS EC2上运行集群且准备使用EC2 Container Registry (ECR),每个node上的kubelet会管理和更新ECR的登录凭证。不需要使用以下方法。
+
+**注意:** 该方法适用于能够对节点进行配置的情况。该方法不适用于 GCE 及其它会自动配置(并替换)节点的云平台。
+
+Docker将私有仓库的密钥存放在`$HOME/.dockercfg`或`$HOME/.docker/config.json`文件中。在 kubelet 所在的节点上,docker 会使用 root 用户`$HOME`路径下的密钥。
+
+推荐如下步骤来为node配置私有仓库。以下示例在PC或笔记本电脑中操作
+
+  1. 对于想要使用的每一种凭证,运行 `docker login [server]`,它会更新`$HOME/.docker/config.json`。
+  1. 使用编辑器查看`$HOME/.docker/config.json`,保证文件中包含了想要使用的凭证
+  1. 获取node列表,例如
+      - 如果使用node名称,`nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')`
+      - 如果使用node IP ,`nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')`
+  1. 将本地的`.docker/config.json`拷贝到每个节点root用户目录下
+      - 例如: `for n in $nodes; do scp ~/.docker/config.json root@$n:/root/.docker/config.json; done`
+
+创建使用私有仓库的pod来验证,例如:
+
+```yaml
+$ cat <<EOF > /tmp/private-image-test-1.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: private-image-test-1
+spec:
+  containers:
+    - name: uses-private-image
+      image: $PRIVATE_IMAGE_NAME
+      imagePullPolicy: Always
+      command: [ "echo", "SUCCESS" ]
+EOF
+$ kubectl create -f /tmp/private-image-test-1.yaml
+pod "private-image-test-1" created
+$
+```
+
+如果一切正常,一段时间后,可以看到:
+
+```shell
+$ kubectl logs private-image-test-1
+SUCCESS
+```
+
+如果失败,则可以看到:
+
+```shell
+$ kubectl describe pods/private-image-test-1 | grep "Failed"
+  Fri, 26 Jun 2015 15:36:13 -0700    Fri, 26 Jun 2015 15:39:13 -0700    19    {kubelet node-i2hq}    spec.containers{uses-private-image}    failed        Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
+```
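+排查这类拉取失败时,可以先在对应节点上用同样的镜像名手动验证凭证是否可用,例如(假设您已经 SSH 登录到该节点):
+
+```shell
+$ docker pull user/privaterepo:v1
+```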
+必须保证集群中所有的节点都有相同的`.docker/config.json`文件。否则,pod会在一些节点上正常运行而在另一些节点上无法启动。例如,如果使用node自动弹缩,那么每个实例模板都需要包含`.docker/config.json`,或者挂载一个包含这个文件的驱动器。
+
+在`.docker/config.json`中配置了私有仓库密钥后,所有pod都能读取私有仓库中的镜像。
+
+**该方法已于 6 月 26 日使用 Docker 私有仓库和 Kubernetes v0.19.3 测试通过,其他私有仓库(如 quay.io)应该也可以工作,但未经测试。**
+
+### 提前拉取镜像
+
+**注意:** 如果在Google Container Engine (GKE)上运行集群,每个节点上都会有`.dockercfg`文件,它包含对Google Container Registry的凭证。不需要使用以下方法。
+
+**注意:** 该方法适用于能够对节点进行配置的情况。该方法不适用于 GCE 及其它会自动配置(并替换)节点的云平台。
+
+默认情况下,kubelet会尝试从指定的仓库拉取每一个镜像。
+但是,如果容器属性`imagePullPolicy`设置为`IfNotPresent`或者`Never`,
+则会(分别对应地,优先或者仅)使用本地镜像。
+
+如果依赖提前拉取镜像代替仓库认证,
+必须保证集群所有的节点提前拉取的镜像是相同的。
+
+可以用于提前载入指定的镜像以提高速度,或者作为私有仓库认证的一种替代方案。
+
+所有的pod都可以使用node上缓存的镜像。
+
+### 在pod上指定ImagePullSecrets
+
+**注意:** 在 GKE、GCE 及其他自动创建 node 的云平台上,推荐使用本方法。
+
+Kubernetes 支持在pod中指定仓库密钥。
+
+#### 使用Docker Config创建Secret
+
+运行以下命令,将大写字母部分替换为合适的值:
+
+```shell
+$ kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
+secret "myregistrykey" created.
+```
+
+如果需要接入多个仓库,可以为每个仓库创建一个secret。
+当为pod拉取镜像时,kubelet会将`imagePullSecrets`合入一个独立虚拟的`.docker/config.json`。
+
+Pod只能引用和它相同namespace的ImagePullSecrets,
+所以需要为每一个namespace做配置。
+
+#### 通过kubectl创建secret
+
+如果由于某种原因需要在一个`.docker/config.json`中包含多个条目,或者需要上述命令无法给出的secret,可以[通过 json 或 yaml 手动创建 secret](/docs/user-guide/secrets/#creating-a-secret-manually)。
+
+请保证:
+
+- 设置data项的名称为`.dockerconfigjson`
+- 使用base64对docker文件编码,并将字符准确粘贴到`data[".dockerconfigjson"]`里
+- 设置`type`为`kubernetes.io/dockerconfigjson`
+
+示例:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: myregistrykey
+  namespace: awesomeapps
+data:
+  .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
+type: kubernetes.io/dockerconfigjson
+```
+
+如果收到错误消息`error: no objects passed to create`,可能是 base64 编码后的字符串非法。
+如果收到错误消息类似`Secret "myregistrykey" is invalid: data[.dockerconfigjson]: invalid value ...`,说明数据已经解码成功,但是不满足`.docker/config.json`文件的语法。
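+为避免编码或粘贴出错,可以直接用命令生成需要填入 `data[".dockerconfigjson"]` 的字符串,例如(示意;`-w 0` 是 GNU 版 base64 用于禁止折行的选项,macOS 的 base64 默认不折行,可省略该选项):
+
+```shell
+$ base64 -w 0 ~/.docker/config.json
+```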
+#### 在pod中引用imagePullSecrets
+
+现在,在创建pod时,可以在pod定义中增加`imagePullSecrets`小节来引用secret
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: foo
+  namespace: awesomeapps
+spec:
+  containers:
+    - name: foo
+      image: janedoe/awesomeapp:v1
+  imagePullSecrets:
+    - name: myregistrykey
+```
+
+对每一个使用私有仓库的pod,都需要做以上操作。
+
+也可以在[serviceAccount](/docs/user-guide/service-accounts) 资源中设置imagePullSecrets,来自动为pod设置`imagePullSecrets`。
+
+`imagePullSecrets`可以和每个node上的`.docker/config.json`一起使用,它们将共同生效。本方法在Google Container Engine (GKE)也能正常工作。
+
+### 使用场景
+
+配置私有仓库有多种方案,以下是一些常用场景和建议的解决方案。
+
+1. 集群运行非专有镜像(例如开源镜像),镜像不需要隐藏。
+   - 使用Docker hub上的公有镜像
+     - 无需配置
+     - 在GCE/GKE上会自动使用高稳定性和高速的Docker hub的本地mirror
+1. 集群运行一些专有镜像,这些镜像对外部公司需要隐藏,对集群用户可见
+   - 使用自主的私有[Docker registry](https://docs.docker.com/registry/)。
     - 可以放置在[Docker Hub](https://hub.docker.com/account/signup/),或者其他地方。
     - 按照上面的描述,在每个节点手动配置.docker/config.json
   - 或者,在防火墙内运行一个开放了读取权限的内部私有仓库
     - 不需要配置 Kubernetes
   - 或者,在GCE/GKE上时,使用项目的Google Container Registry
     - 这种方式与集群自动伸缩的配合比手动配置node更好
   - 或者,在更改集群node配置不方便时,使用`imagePullSecrets`
1. 集群使用专有镜像,且需要更严格的访问控制
   - 保证[AlwaysPullImages admission controller](/docs/admin/admission-controllers/#alwayspullimages)开启。否则,所有Pod都可能访问到所有镜像
   - 将敏感数据存储在"Secret"资源中,而不是打包在镜像里
1. 多租户集群下,每个租户需要自己的私有仓库
   - 保证[AlwaysPullImages admission controller](/docs/admin/admission-controllers/#alwayspullimages)开启。否则,任意租户的所有Pod都可能访问到所有镜像
   - 私有仓库开启认证
   - 为每个租户获取仓库凭证,放置在secret中,并发布到每个租户的namespace下
   - 租户将secret增加到每个namespace下的imagePullSecrets中

{% endcapture %}

{% include templates/concept.md %}

From 9d20d15520b7c81b2a673f6bc25f3b6c64510c15 Mon Sep 17 00:00:00 2001
From: Weihua Meng
Date: Thu, 14 Sep 2017 19:18:03 +0800
Subject: [PATCH 19/87] Update podpreset.md

---
 docs/tasks/inject-data-application/podpreset.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/docs/tasks/inject-data-application/podpreset.md b/docs/tasks/inject-data-application/podpreset.md
index b3b88b2412a0b..7c68980415a75 100644
--- a/docs/tasks/inject-data-application/podpreset.md
+++ b/docs/tasks/inject-data-application/podpreset.md
@@ -123,7 +123,7 @@ metadata:
     app: website
     role: frontend
   annotations:
-    podpreset.admission.kubernetes.io/allow-database: "resource version"
+    podpreset.admission.kubernetes.io/podpreset-allow-database: "resource version"
 spec:
   containers:
     - name: website
@@ -229,7 +229,7 @@ metadata:
     app: website
     role: frontend
   annotations:
-    podpreset.admission.kubernetes.io/allow-database: "resource version"
+    podpreset.admission.kubernetes.io/podpreset-allow-database: "resource version"
 spec:
   containers:
     - name: website
@@ -331,7 +331,7 @@ kind: Pod
     app: guestbook
     tier: frontend
   annotations:
-    podpreset.admission.kubernetes.io/allow-database: "resource version"
+    podpreset.admission.kubernetes.io/podpreset-allow-database: "resource version"
 spec:
   containers:
   - name: php-redis
@@ -432,8 +432,8 @@ metadata:
     app: website
     role: frontend
   annotations:
-    podpreset.admission.kubernetes.io/allow-database: "resource version"
-    podpreset.admission.kubernetes.io/proxy: "resource version"
+    podpreset.admission.kubernetes.io/podpreset-allow-database: "resource version"
+    podpreset.admission.kubernetes.io/podpreset-proxy: "resource version"
 spec:
   containers:
     - name: website
@@ -538,7 +538,7 @@ $ kubectl describe ...
 ....
 Events:
   FirstSeen             LastSeen            Count   From                    SubobjectPath   Reason      Message
-  Tue, 07 Feb 2017 16:56:12 -0700   Tue, 07 Feb 2017 16:56:12 -0700 1   {podpreset.admission.kubernetes.io/allow-database }    conflict  Conflict on pod preset. Duplicate mountPath /cache.
+  Tue, 07 Feb 2017 16:56:12 -0700   Tue, 07 Feb 2017 16:56:12 -0700 1   {podpreset.admission.kubernetes.io/podpreset-allow-database }   conflict  Conflict on pod preset. Duplicate mountPath /cache.
 ```
 
 ## Deleting a Pod Preset

From 677cea2ed93954607dd79f3447c6ae6787050b8b Mon Sep 17 00:00:00 2001
From: jianglingxia
Date: Thu, 14 Sep 2017 19:44:09 +0800
Subject: [PATCH 20/87] link error

---
 .../run-application/run-single-instance-stateful-application.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/tasks/run-application/run-single-instance-stateful-application.md b/docs/tasks/run-application/run-single-instance-stateful-application.md
index 6a1c305fa9dc8..c14a42264f17e 100644
--- a/docs/tasks/run-application/run-single-instance-stateful-application.md
+++ b/docs/tasks/run-application/run-single-instance-stateful-application.md
@@ -206,7 +206,7 @@ specific to stateful apps:
 
 * Don't scale the app. This setup is for single-instance apps only. The underlying
   PersistentVolume can only be mounted to one Pod.
For clustered stateful apps, see the - [StatefulSet documentation](/docs/concepts/workloads/controllers/petset/). + [StatefulSet documentation](/docs/concepts/workloads/controllers/statefulset/). * Use `strategy:` `type: Recreate` in the Deployment configuration YAML file. This instructs Kubernetes to _not_ use rolling updates. Rolling updates will not work, as you cannot have more than From fa16762e0b9b9e64e904857cee9b164e6476698f Mon Sep 17 00:00:00 2001 From: jianglingxia Date: Tue, 19 Sep 2017 17:27:28 +0800 Subject: [PATCH 21/87] modify the link of kubelet.md bootstrap-tokens.md federation/index.md --- docs/admin/bootstrap-tokens.md | 2 +- docs/admin/federation/index.md | 2 +- docs/admin/kubelet.md | 4 ++-- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/admin/bootstrap-tokens.md b/docs/admin/bootstrap-tokens.md index d1f13a5585bcb..cdb563bbb6a5d 100644 --- a/docs/admin/bootstrap-tokens.md +++ b/docs/admin/bootstrap-tokens.md @@ -58,7 +58,7 @@ Authorization: Bearer 07401b.f395accd246ae52d Each valid token is backed by a secret in the `kube-system` namespace. You can find the full design doc -[here](https://github.com/kubernetes/community/blob/{{page.githubbranch}}/contributors/design-proposals/bootstrap-discovery.md). +[here](https://github.com/kubernetes/community/blob/{{page.githubbranch}}/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md). Here is what the secret looks like. Note that `base64(string)` indicates the value should be base64 encoded. The undecoded version is provided here for diff --git a/docs/admin/federation/index.md b/docs/admin/federation/index.md index b6f0243b9711c..ecdcca87d974b 100644 --- a/docs/admin/federation/index.md +++ b/docs/admin/federation/index.md @@ -385,4 +385,4 @@ if required. ## For more information - * [Federation proposal](https://git.k8s.io/community/contributors/design-proposals/federation.md) details use cases that motivated this work. + * [Federation proposal](https://git.k8s.io/community/contributors/design-proposals/federation/federation.md) details use cases that motivated this work. diff --git a/docs/admin/kubelet.md b/docs/admin/kubelet.md index cc762ed62bbea..77b1b0caf4977 100644 --- a/docs/admin/kubelet.md +++ b/docs/admin/kubelet.md @@ -70,7 +70,7 @@ kubelet --enable-custom-metrics Support for gathering custom metrics. --enable-debugging-handlers Enables server endpoints for log collection and local running of containers and commands (default true) --enable-server Enable the Kubelet's server (default true) - --enforce-node-allocatable stringSlice A comma separated list of levels of node allocatable enforcement to be enforced by kubelet. Acceptable options are 'pods', 'system-reserved' & 'kube-reserved'. If the latter two options are specified, '--system-reserved-cgroup' & '--kube-reserved-cgroup' must also be set respectively. See https://git.k8s.io/community/contributors/design-proposals/node-allocatable.md for more details. (default [pods]) + --enforce-node-allocatable stringSlice A comma separated list of levels of node allocatable enforcement to be enforced by kubelet. Acceptable options are 'pods', 'system-reserved' & 'kube-reserved'. If the latter two options are specified, '--system-reserved-cgroup' & '--kube-reserved-cgroup' must also be set respectively. See https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md for more details. 
(default [pods])
       --event-burst int32                                       Maximum size of a bursty event records, temporarily allows event records to burst to this number, while still not exceeding event-qps. Only used if --event-qps > 0 (default 10)
       --event-qps int32                                         If > 0, limit event creations per second to this value. If 0, unlimited. (default 5)
       --eviction-hard string                                    A set of eviction thresholds (e.g. memory.available<1Gi) that if met would trigger a pod eviction. (default "memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%")
       --eviction-max-pod-grace-period int32                     Maximum allowed grace period (in seconds) to use when terminating pods in response to a soft eviction threshold being met.
       --eviction-minimum-reclaim string                         A set of minimum reclaims (e.g. imagefs.available=2Gi) that describes the minimum amount of resource the kubelet will reclaim when performing a pod eviction if that resource is under pressure.
       --eviction-pressure-transition-period duration            Duration for which the kubelet has to wait before transitioning out of an eviction pressure condition. (default 5m0s)
       --eviction-soft string                                    A set of eviction thresholds (e.g. memory.available<1.5Gi) that if met over a corresponding grace period would trigger a pod eviction.
       --eviction-soft-grace-period string                       A set of eviction grace periods (e.g. memory.available=1m30s) that correspond to how long a soft eviction threshold must hold before triggering a pod eviction.
       --exit-on-lock-contention                                 Whether kubelet should exit upon lock-file contention.
-      --experimental-allocatable-ignore-eviction                When set to 'true', Hard Eviction Thresholds will be ignored while calculating Node Allocatable. See https://git.k8s.io/community/contributors/design-proposals/node-allocatable.md for more details. [default=false]
+      --experimental-allocatable-ignore-eviction                When set to 'true', Hard Eviction Thresholds will be ignored while calculating Node Allocatable. See https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md for more details. [default=false]
       --experimental-allowed-unsafe-sysctls stringSlice         Comma-separated whitelist of unsafe sysctls or unsafe sysctl patterns (ending in *). Use these at your own risk.
       --experimental-bootstrap-kubeconfig string                deprecated: use --bootstrap-kubeconfig
       --experimental-check-node-capabilities-before-mount       [Experimental] if set true, the kubelet will check the underlying node for required componenets (binaries, etc.) before performing the mount

From 1c00298426e58d8c7f2ca0e4f79bc547ec3203c7 Mon Sep 17 00:00:00 2001
From: YuanJunliang10067740
Date: Wed, 20 Sep 2017 19:24:55 +0800
Subject: [PATCH 22/87] ZTE-SH-CN-debug-application

---
 .../debug-application.md | 182 ++++++++++++++++++
 1 file changed, 182 insertions(+)
 create mode 100644 cn/docs/tasks/debug-application-cluster/debug-application.md

diff --git a/cn/docs/tasks/debug-application-cluster/debug-application.md b/cn/docs/tasks/debug-application-cluster/debug-application.md
new file mode 100644
index 0000000000000..11538f35de87e
--- /dev/null
+++ b/cn/docs/tasks/debug-application-cluster/debug-application.md
@@ -0,0 +1,182 @@
+---
+title: 应用故障排查
+---
+
+本指南帮助用户来调试kubernetes上那些没有正常运行的应用。
+本指南*并不*涉及集群的调试。如果想调试集群的话,请参阅[这里](/docs/admin/cluster-troubleshooting)。
+
+* TOC
+{:toc}
+
+## FAQ
+
+强烈建议用户参考我们的[FAQ](https://github.com/kubernetes/kubernetes/wiki/User-FAQ)。
+
+## 诊断问题
+
+故障排查的第一步是先给问题分下类。出问题的是什么?是 Pod,Replication Controller 还是 Service?
+
+  * [Debugging Pods](#debugging-pods)
+  * [Debugging Replication Controllers](#debugging-replication-controllers)
+  * [Debugging Services](#debugging-services)
+
+### Debugging Pods
+
+调试pod的第一步是看一下这个pod的信息,用如下命令查看一下pod的当前状态和最近的事件:
+
+```shell
+$ kubectl describe pods ${POD_NAME}
+```
+
+查看一下pod中的容器所处的状态。这些容器的状态都是`Running`吗?最近有没有重启过?
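+也可以用下面的命令快速查看 pod 的状态(STATUS)和重启次数(RESTARTS)列,作为 `kubectl describe` 之外的一个简单补充:
+
+```shell
+$ kubectl get pod ${POD_NAME}
+```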
+后面的调试都是要依靠pods的状态的。
+
+#### pod停留在pending状态
+
+如果一个pod卡在`Pending`状态,则表示这个pod没有被调度到一个节点上。通常这是因为资源不足引起的。
+敲一下`kubectl describe ...`这个命令,输出的信息里面应该有显示为什么没被调度的原因。
+常见原因如下:
+
+* **资源不足**:
+你可能耗尽了集群上所有的CPU和内存,此时,你需要删除pods,调整资源请求,或者增加节点。
+更多信息请参阅[Compute Resources document](/docs/user-guide/compute-resources/#my-pods-are-pending-with-event-message-failedscheduling)。
+
+* **使用了`hostPort`**:
+如果绑定一个pod到`hostPort`,那么能创建的pod个数就有限了。
+多数情况下,`hostPort`是非必要的,而应该采用服务来暴露pod。
+如果确实需要使用`hostPort`,那么能创建的pod的数量就限于节点的个数。
+
+
+#### pod停留在waiting状态
+
+如果一个pod卡在`Waiting`状态,则表示这个pod已经调度到节点上,但是没有运行起来。
+再次敲一下`kubectl describe ...`这个命令来查看相关信息。
+最常见的原因是拉取镜像失败。可以通过以下三种方式来检查:
+
+* 使用的镜像名字正确吗?
+* 镜像仓库里有没有这个镜像?
+* 用`docker pull <image>`命令手动拉下镜像试试。
+
+#### pod处于crashing状态或者unhealthy
+
+首先,看一下容器的log:
+
+```shell
+$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}
+```
+
+如果容器是crashed的,用如下命令可以看到crash的log:
+
+```shell
+$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
+```
+
+或者,用`exec`在容器内运行一些命令:
+
+```shell
+$ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
+```
+
+注意:当一个pod内只有一个容器时,可以不带参数`-c ${CONTAINER_NAME}`。
+
+例如,名为Cassandra的pod,处于running态,要查看它的log,可运行如下命令:
+
+```shell
+$ kubectl exec cassandra -- cat /var/log/cassandra/system.log
+```
+
+如果以上方法都不起作用,找到这个pod所在的节点并用SSH登录进去做进一步的分析。
+通常情况下,是不需要在Kubernetes API中再给出另外的工具的。
+因此,如果你发现需要ssh进一个主机来分析问题时,请在GitHub上提一个特性请求,描述你的场景并说明为什么已经提供的工具不能满足需求。
+
+
+#### pod处于running态,但是没有正常工作
+
+如果创建的pod不符合预期,那么创建pod的描述文件应该是存在某种错误的,并且这个错误在创建pod时被忽略掉。
+通常是pod定义中的某些部分被错误地嵌套,或者某个字段名拼写错误,导致这部分内容在创建pod时被忽略掉了。
+例如,希望在pod中用命令行执行某个命令,但是将`command`写成`commnd`,pod虽然可以创建,但命令并没有执行。
+
+如何查出来哪里出错?
+首先,删掉这个pod再重新创建一个,重新创建时,像下面这样带着`--validate`这个参数:
+`kubectl create --validate -f mypod.yaml`,`command`写成`commnd`的拼写错误就会打印出来了。
+
+```shell
+I0805 10:43:25.129850   46757 schema.go:126] unknown field: commnd
+I0805 10:43:25.129973   46757 schema.go:129] this may be a false alarm, see https://github.com/kubernetes/kubernetes/issues/6842
+pods/mypod
+```
+
+
+
+如果上面方法没有看到相关异常的信息,那么接下来就要验证从apiserver获取到的pod是否与期望的一致,比如创建Pod的yaml文件是mypod.yaml。
+
+运行如下命令来获取apiserver创建的pod信息并保存成一个文件:
+`kubectl get pods/mypod -o yaml > mypod-on-apiserver.yaml`。
+
+然后手动对这两个文件进行比较:
+apiserver获得的yaml文件中的一些行,不在创建pod的yaml文件内,这是正常的。
+如果创建Pod的yaml文件内的一些行,在apiserver获得的yaml文件中不存在,则可以说明创建pod的yaml中的定义有问题。
+
+
+### Debugging Replication Controllers
+
+RC相当简单。它们要么能创建pod,要么不能。如果不能创建pod,请参阅上述[Debugging Pods](#debugging-pods)。
+
+也可以使用`kubectl describe rc ${CONTROLLER_NAME}`命令来监视RC相关的事件。
+
+### Debugging Services
+
+服务提供了多个Pod之间的负载均衡功能。
+有一些常见的问题可以造成服务无法正常工作。以下说明将有助于调试服务的问题。
+
+首先,验证服务是否有端点。对于每一个Service对象,apiserver使`endpoints`资源可用。
+
+通过如下命令可以查看endpoints资源:
+
+```shell
+$ kubectl get endpoints ${SERVICE_NAME}
+```
+
+确保endpoints与服务内容器个数一致。
+例如,如果你创建了一个nginx服务,它有3个副本,那么你就会在这个服务的endpoints中看到3个不同的IP地址。
+
+#### 服务缺少endpoints
+
+如果缺少endpoints,请尝试使用服务的labels列出所有的pod。
+假如有一个服务,有如下的label:
+
+```yaml
+...
+spec:
+  - selector:
+     name: nginx
+     type: frontend
+```
+
+你可以使用如下命令列出与selector相匹配的pod,并验证这些pod是否归属于创建的服务:
+
+```shell
+$ kubectl get pods --selector=name=nginx,type=frontend
+```
+
+如果pod列表符合预期,但是endpoints仍然为空,那么可能没有暴露出正确的端口。
+如果服务指定了`containerPort`,但是列表中的Pod没有列出该端口,则不会将其添加到端口列表。
+
+验证该pod的`containerPort`与服务的`containerPort`是否匹配。
+
+#### 网络业务不工作
+
+如果可以连接到服务上,但是连接立即被断开了,并且在endpoints列表中有endpoints,可能是代理和pods之间不通。
+
+确认以下3件事情:
+
+ * Pods工作是否正常?
看一下重启计数,并参阅[Debugging Pods](#debugging-pods); + * 可以直接连接到pod上吗?获取pod的IP地址,然后尝试直接连接到该IP上; + * 应用是否在配置的端口上进行服务?Kubernetes不进行端口重映射,所以如果应用在8080端口上服务,那么`containerPort`字段就需要设定为8080。 + +#### 更多信息 + +如果上述都不能解决你的问题,请按照[Debugging Service document](/docs/user-guide/debugging-services)中的介绍来确保你的`Service`处于running态,有`Endpoints`,`Pods`真正的在服务;你有DNS在工作,安装了iptables规则,kube-proxy也没有异常行为。 + +你也可以访问[troubleshooting document](/docs/troubleshooting/)来获取更多信息。 From 1612f0f86c6528fd3e903ebbb8a17ba51126e06f Mon Sep 17 00:00:00 2001 From: Steve Perry Date: Fri, 22 Sep 2017 18:16:19 -0700 Subject: [PATCH 23/87] Update _redirects. (#5589) --- _redirects | 812 ++++++++++++++++++++++++++--------------------------- 1 file changed, 399 insertions(+), 413 deletions(-) diff --git a/_redirects b/_redirects index 217484f75d6d7..b3b0d56ddb616 100644 --- a/_redirects +++ b/_redirects @@ -4,451 +4,437 @@ # test at https://play.netlify.com/redirects # ############################################### -/docs/admin/addons /docs/concepts/cluster-administration/addons 301 -/docs/admin/apparmor /docs/tutorials/clusters/apparmor 301 -/docs/admin/audit /docs/tasks/debug-application-cluster/audit 301 -/docs/admin/cluster-components /docs/concepts/overview/components 301 -/docs/admin/cluster-management /docs/tasks/administer-cluster/cluster-management 301 -/docs/admin/cluster-troubleshooting /docs/tasks/debug-application-cluster/debug-cluster 301 -/docs/admin/daemons /docs/concepts/workloads/controllers/daemonset 301 -/docs/admin/disruptions /docs/concepts/workloads/pods/disruptions 301 -/docs/admin/dns /docs/concepts/services-networking/dns-pod-service 301 -/docs/admin/etcd /docs/tasks/administer-cluster/configure-upgrade-etcd 301 -/docs/admin/etcd_upgrade /docs/tasks/administer-cluster/configure-upgrade-etcd 301 -/docs/admin/federation/kubefed /docs/tasks/federation/set-up-cluster-federation-kubefed 301 -/docs/admin/garbage-collection /docs/concepts/cluster-administration/kubelet-garbage-collection 301 -/docs/admin/ha-master-gce /docs/tasks/administer-cluster/highly-available-master 301 -/docs/admin/ /docs/concepts/cluster-administration/cluster-administration-overview 301 -/docs/admin/kubeadm-upgrade-1-7 /docs/tasks/administer-cluster/kubeadm-upgrade-1-7 301 -/docs/admin/limitrange/ /docs/tasks/administer-cluster/cpu-memory-limit 301 -/docs/admin/master-node-communication /docs/concepts/architecture/master-node-communication 301 -/docs/admin/multi-cluster /docs/concepts/cluster-administration/federation 301 -/docs/admin/multiple-schedulers /docs/tasks/administer-cluster/configure-multiple-schedulers 301 -/docs/admin/namespaces /docs/tasks/administer-cluster/namespaces 301 -/docs/admin/namespaces/walkthrough /docs/tasks/administer-cluster/namespaces-walkthrough 301 -/docs/admin/network-plugins /docs/concepts/cluster-administration/network-plugins 301 -/docs/admin/networking /docs/concepts/cluster-administration/networking 301 -/docs/admin/node /docs/concepts/architecture/nodes 301 -/docs/admin/node-allocatable /docs/tasks/administer-cluster/reserve-compute-resources 301 -/docs/admin/node-problem /docs/tasks/debug-application-cluster/monitor-node-health 301 -/docs/admin/out-of-resource /docs/tasks/administer-cluster/out-of-resource 301 -/docs/admin/rescheduler /docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods 301 -/docs/admin/resourcequota/limitstorageconsumption /docs/tasks/administer-cluster/limit-storage-consumption 301 -/docs/admin/resourcequota/walkthrough /docs/tasks/administer-cluster/quota-api-object 301 
-/docs/admin/static-pods /docs/tasks/administer-cluster/static-pod 301
-/docs/admin/sysctls /docs/concepts/cluster-administration/sysctl-cluster 301
-/docs/admin/upgrade-1-6 /docs/tasks/administer-cluster/upgrade-1-6 301
-
-/docs/api /docs/concepts/overview/kubernetes-api 301
-
-/docs/concepts/abstractions/controllers/garbage-collection /docs/concepts/workloads/controllers/garbage-collection 301
-/docs/concepts/abstractions/controllers/petsets /docs/concepts/workloads/controllers/petset 301
-/docs/concepts/abstractions/controllers/statefulsets /docs/concepts/workloads/controllers/statefulset 301
-/docs/concepts/abstractions/init-containers /docs/concepts/workloads/pods/init-containers 301
-/docs/concepts/abstractions/overview /docs/concepts/overview/working-with-objects/kubernetes-objects 301
-/docs/concepts/abstractions/pod /docs/concepts/workloads/pods/pod-overview 301
-
-/docs/concepts/cluster-administration/access-cluster /docs/tasks/access-application-cluster/access-cluster 301
-/docs/concepts/cluster-administration/audit /docs/tasks/debug-application-cluster/audit 301
-/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig /docs/tasks/access-application-cluster/authenticate-across-clusters-kubeconfig 301
-/docs/concepts/cluster-administration/cluster-management /docs/tasks/administer-cluster/cluster-management 301
-/docs/concepts/cluster-administration/configure-etcd /docs/tasks/administer-cluster/configure-upgrade-etcd 301
-/docs/concepts/cluster-administration/etcd-upgrade /docs/tasks/administer-cluster/configure-upgrade-etcd 301
-/docs/concepts/cluster-administration/federation-service-discovery /docs/tasks/federation/federation-service-discovery 301
-/docs/concepts/cluster-administration/guaranteed-scheduling-critical-addon-pods /docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods 301
-/docs/concepts/cluster-administration/master-node-communication /docs/concepts/architecture/master-node-communication 301
-/docs/concepts/cluster-administration/multiple-clusters /docs/concepts/cluster-administration/federation 301
-/docs/concepts/cluster-administration/out-of-resource /docs/tasks/administer-cluster/out-of-resource 301
-/docs/concepts/cluster-administration/resource-usage-monitoring /docs/tasks/debug-application-cluster/resource-usage-monitoring 301
-/docs/concepts/cluster-administration/static-pod /docs/tasks/administer-cluster/static-pod 301
-
-/docs/concepts/clusters/logging /docs/concepts/cluster-administration/logging 301
-/docs/concepts/configuration/container-command-arg /docs/tasks/inject-data-application/define-command-argument-container 301
-/docs/concepts/ecosystem/thirdpartyresource /docs/tasks/access-kubernetes-api/extend-api-third-party-resource 301
-/docs/concepts/jobs/cron-jobs /docs/concepts/workloads/controllers/cron-jobs 301
-/docs/concepts/jobs/run-to-completion-finite-workloads /docs/concepts/workloads/controllers/jobs-run-to-completion 301
-/docs/concepts/nodes/node /docs/concepts/architecture/nodes 301
-/docs/concepts/storage/etcd-store-api-object /docs/tasks/administer-cluster/configure-upgrade-etcd 301
-
-/docs/concepts/tools/kubectl/object-management-overview /docs/tutorials/object-management-kubectl/object-management 301
-/docs/concepts/tools/kubectl/object-management-using-declarative-config /docs/tutorials/object-management-kubectl/declarative-object-management-configuration 301
-/docs/concepts/tools/kubectl/object-management-using-imperative-commands
/docs/tutorials/object-management-kubectl/imperative-object-management-command 301 -/docs/concepts/tools/kubectl/object-management-using-imperative-config /docs/tutorials/object-management-kubectl/imperative-object-management-configuration 301 - -/docs/getting-started-guides /docs/setup/pick-right-solution 301 -/docs/getting-started-guides/kubeadm /docs/setup/independent/create-cluster-kubeadm 301 -/docs/getting-started-guides/network-policy/calico /docs/tasks/administer-cluster/calico-network-policy 301 -/docs/getting-started-guides/network-policy/romana /docs/tasks/administer-cluster/romana-network-policy 301 -/docs/getting-started-guides/network-policy/walkthrough /docs/tasks/administer-cluster/declare-network-policy 301 -/docs/getting-started-guides/network-policy/weave /docs/tasks/administer-cluster/weave-network-policy 301 -/docs/getting-started-guides/running-cloud-controller /docs/tasks/administer-cluster/running-cloud-controller 301 -/docs/getting-started-guides/ubuntu/calico /docs/getting-started-guides/ubuntu/ 301 - -/docs/hellonode /docs/tutorials/stateless-application/hello-minikube 301 -/docs/ /docs/home/ 301 -/docs/samples /docs/tutorials/ 301 - -/docs/tasks/administer-cluster/apply-resource-quota-limit /docs/tasks/administer-cluster/quota-api-object 301 -/docs/tasks/administer-cluster/assign-pods-nodes /docs/tasks/configure-pod-container/assign-pods-nodes 301 -/docs/tasks/administer-cluster/overview /docs/concepts/cluster-administration/cluster-administration-overview 301 -/docs/tasks/administer-cluster/cpu-memory-limit /docs/tasks/administer-cluster/memory-default-namespace 301 -/docs/tasks/administer-cluster/share-configuration /docs/tasks/access-application-cluster/configure-access-multiple-clusters 301 - -/docs/tasks/configure-pod-container/apply-resource-quota-limit /docs/tasks/administer-cluster/apply-resource-quota-limit 301 -/docs/tasks/configure-pod-container/calico-network-policy /docs/tasks/administer-cluster/calico-network-policy 301 -/docs/tasks/configure-pod-container/communicate-containers-same-pod /docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume 301 -/docs/tasks/configure-pod-container/declare-network-policy /docs/tasks/administer-cluster/declare-network-policy 301 -/docs/tasks/configure-pod-container/define-environment-variable-container /docs/tasks/inject-data-application/define-environment-variable-container 301 -/docs/tasks/configure-pod-container/distribute-credentials-secure /docs/tasks/inject-data-application/distribute-credentials-secure 301 -/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information /docs/tasks/inject-data-application/downward-api-volume-expose-pod-information 301 -/docs/tasks/configure-pod-container/environment-variable-expose-pod-information /docs/tasks/inject-data-application/environment-variable-expose-pod-information 301 -/docs/tasks/configure-pod-container/limit-range /docs/tasks/administer-cluster/cpu-memory-limit 301 -/docs/tasks/configure-pod-container/romana-network-policy /docs/tasks/administer-cluster/romana-network-policy 301 -/docs/tasks/configure-pod-container/weave-network-policy /docs/tasks/administer-cluster/weave-network-policy 301 -/docs/tasks/configure-pod-container/assign-cpu-ram-container /docs/tasks/configure-pod-container/assign-memory-resource 301 - -/docs/tasks/kubectl/get-shell-running-container /docs/tasks/debug-application-cluster/get-shell-running-container 301 -/docs/tasks/kubectl/install /docs/tasks/tools/install-kubectl 301 
-/docs/tasks/kubectl/list-all-running-container-images /docs/tasks/access-application-cluster/list-all-running-container-images 301 - -/docs/tasks/manage-stateful-set/debugging-a-statefulset /docs/tasks/debug-application-cluster/debug-stateful-set 301 -/docs/tasks/manage-stateful-set/delete-pods /docs/tasks/run-application/force-delete-stateful-set-pod 301 -/docs/tasks/manage-stateful-set/deleting-a-statefulset /docs/tasks/run-application/delete-stateful-set 301 -/docs/tasks/manage-stateful-set/scale-stateful-set /docs/tasks/run-application/scale-stateful-set 301 -/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set /docs/tasks/run-application/upgrade-pet-set-to-stateful-set 301 - -/docs/tasks/run-application/podpreset /docs/tasks/inject-data-application/podpreset 301 -/docs/tasks/troubleshoot/debug-init-containers /docs/tasks/debug-application-cluster/debug-init-containers 301 -/docs/tasks/web-ui-dashboard /docs/tasks/access-application-cluster/web-ui-dashboard 301 -/docs/templatedemos /docs/home/contribute/page-templates 301 -/docs/tools/kompose /docs/tools/kompose/user-guide 301 - -/docs/tutorials/clusters/multiple-schedulers /docs/tasks/administer-cluster/configure-multiple-schedulers 301 -/docs/tutorials/connecting-apps/connecting-frontend-backend /docs/tasks/access-application-cluster/connecting-frontend-backend 301 -/docs/tutorials/federation/set-up-cluster-federation-kubefed /docs/tasks/federation/set-up-cluster-federation-kubefed 301 -/docs/tutorials/federation/set-up-coredns-provider-federation /docs/tasks/federation/set-up-coredns-provider-federation 301 -/docs/tutorials/federation/set-up-placement-policies-federation /docs/tasks/federation/set-up-placement-policies-federation 301 -/docs/tutorials/getting-started/create-cluster /docs/tutorials/kubernetes-basics/cluster-intro 301 -/docs/tutorials/stateful-application/run-replicated-stateful-application /docs/tasks/run-application/run-replicated-stateful-application 301 -/docs/tutorials/stateful-application/run-stateful-application /docs/tasks/run-application/run-single-instance-stateful-application 301 -/docs/tutorials/stateless-application/expose-external-ip-address-service /docs/tasks/access-application-cluster/service-access-application-cluster 301 -/docs/tutorials/stateless-application/run-stateless-ap-replication-controller /docs/tasks/run-application/run-stateless-application-deployment 301 -/docs/tutorials/stateless-application/run-stateless-application-deployment /docs/tasks/run-application/run-stateless-application-deployment 301 - -/docs/user-guide/accessing-the-cluster /docs/tasks/access-application-cluster/access-cluster 301 -/docs/user-guide/add-entries-to-pod-etc-hosts-with-host-aliases /docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases 301 -/docs/user-guide/annotations /docs/concepts/overview/working-with-objects/annotations 301 -/docs/user-guide/application-troubleshooting /docs/tasks/debug-application-cluster/debug-application 301 -/docs/user-guide/compute-resources /docs/concepts/configuration/manage-compute-resources-container 301 -/docs/user-guide/config-best-practices /docs/concepts/configuration/overview 301 -/docs/user-guide/configmap /docs/tasks/configure-pod-container/configmap 301 -/docs/user-guide/configuring-containers /docs/tasks/ 301 -/docs/user-guide/connecting-applications /docs/concepts/services-networking/connect-applications-service 301 -/docs/user-guide/connecting-to-applications-port-forward 
/docs/tasks/access-application-cluster/port-forward-access-application-cluster 301 -/docs/user-guide/connecting-to-applications-proxy /docs/tasks/access-kubernetes-api/http-proxy-access-api 301 -/docs/user-guide/container-environment /docs/concepts/containers/container-lifecycle-hooks 301 -/docs/user-guide/cron-jobs /docs/concepts/workloads/controllers/cron-jobs 301 - -/docs/user-guide/debugging-pods-and-replication-controllers/ /docs/tasks/debug-application-cluster/debug-pod-replication-controller/ 301 - -/docs/user-guide/debugging-services /docs/tasks/debug-application-cluster/debug-service 301 -/docs/user-guide/deploying-applications /docs/tasks/run-application/run-stateless-application-deployment 301 -/docs/user-guide/deployments /docs/concepts/workloads/controllers/deployment 301 -/docs/user-guide/downward-api /docs/tasks/inject-data-application/downward-api-volume-expose-pod-information 301 -/docs/user-guide/downward-api/volume /docs/tasks/inject-data-application/downward-api-volume-expose-pod-information 301 -/docs/user-guide/environment-guide /docs/tasks/inject-data-application/environment-variable-expose-pod-information 301 -/docs/user-guide/federation/cluster /docs/tasks/administer-federation/cluster 301 -/docs/user-guide/federation/configmap /docs/tasks/administer-federation/configmap 301 -/docs/user-guide/federation/daemonsets /docs/tasks/administer-federation/daemonset 301 -/docs/user-guide/federation/deployment /docs/tasks/administer-federation/deployment 301 -/docs/user-guide/federation/events /docs/tasks/administer-federation/events 301 -/docs/user-guide/federation/federated-ingress /docs/tasks/administer-federation/ingress 301 -/docs/user-guide/federation/federated-services /docs/tasks/federation/federation-service-discovery 301 -/docs/user-guide/federation /docs/concepts/cluster-administration/federation 301 -/docs/user-guide/federation/namespaces /docs/tasks/administer-federation/namespaces 301 -/docs/user-guide/federation/replicasets /docs/tasks/administer-federation/replicaset 301 -/docs/user-guide/federation/secrets /docs/tasks/administer-federation/secret 301 -/docs/user-guide/garbage-collection /docs/concepts/workloads/controllers/garbage-collection 301 -/docs/user-guide/getting-into-containers /docs/tasks/debug-application-cluster/get-shell-running-container 301 -/docs/user-guide/gpus /docs/tasks/manage-gpus/scheduling-gpus 301 -/docs/user-guide/horizontal-pod-autoscaling /docs/tasks/run-application/horizontal-pod-autoscale 301 -/docs/user-guide/horizontal-pod-autoscaling/walkthrough /docs/tasks/run-application/horizontal-pod-autoscale-walkthrough 301 -/docs/user-guide/identifiers /docs/concepts/overview/working-with-objects/names 301 -/docs/user-guide/images /docs/concepts/containers/images 301 -/docs/user-guide /docs/home/ 301 -/docs/user-guide/ingress /docs/concepts/services-networking/ingress 301 -/docs/user-guide/introspection-and-debugging /docs/tasks/debug-application-cluster/debug-application-introspection 301 -/docs/user-guide/jobs /docs/concepts/workloads/controllers/jobs-run-to-completion 301 -/docs/user-guide/jobs/expansions /docs/tasks/job/parallel-processing-expansion 301 -/docs/user-guide/jobs/work-queue-1 /docs/tasks/job/coarse-parallel-processing-work-queue/ 301 -/docs/user-guide/jobs/work-queue-2 /docs/tasks/job/fine-parallel-processing-work-queue/ 301 -/docs/user-guide/kubeconfig-file /docs/tasks/access-application-cluster/authenticate-across-clusters-kubeconfig 301 -/docs/user-guide/labels /docs/concepts/overview/working-with-objects/labels 301 
-/docs/user-guide/liveness /docs/tasks/configure-pod-container/configure-liveness-readiness-probes 301 -/docs/user-guide/load-balancer /docs/tasks/access-application-cluster/create-external-load-balancer 301 -/docs/user-guide/logging/elasticsearch /docs/tasks/debug-application-cluster/logging-elasticsearch-kibana 301 -/docs/user-guide/logging/overview /docs/concepts/cluster-administration/logging 301 -/docs/user-guide/logging/stackdriver /docs/tasks/debug-application-cluster/logging-stackdriver 301 -/docs/user-guide/managing-deployments /docs/concepts/cluster-administration/manage-deployment 301 -/docs/user-guide/monitoring /docs/tasks/debug-application-cluster/resource-usage-monitoring 301 -/docs/user-guide/namespaces /docs/concepts/overview/working-with-objects/namespaces 301 -/docs/user-guide/networkpolicies /docs/concepts/services-networking/network-policies 301 -/docs/user-guide/node-selection /docs/concepts/configuration/assign-pod-node 301 -/docs/user-guide/persistent-volumes /docs/concepts/storage/persistent-volumes 301 -/docs/user-guide/persistent-volumes/walkthrough /docs/tasks/configure-pod-container/configure-persistent-volume-storage 301 -/docs/user-guide/petset /docs/concepts/workloads/controllers/petset 301 -/docs/user-guide/petset/bootstrapping /docs/concepts/workloads/controllers/petset 301 -/docs/user-guide/pod-preset /docs/tasks/inject-data-application/podpreset 301 -/docs/user-guide/pod-security-policy /docs/concepts/policy/pod-security-policy 301 -/docs/user-guide/pod-states /docs/concepts/workloads/pods/pod-lifecycle 301 -/docs/user-guide/pod-templates /docs/concepts/workloads/pods/pod-overview 301 -/docs/user-guide/pods /docs/concepts/workloads/pods/pod 301 -/docs/user-guide/pods/init-container /docs/concepts/workloads/pods/init-containers 301 -/docs/user-guide/pods/multi-container /docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume 301 -/docs/user-guide/pods/single-container /docs/tasks/run-application/run-stateless-application-deployment 301 -/docs/user-guide/prereqs /docs/tasks/tools/install-kubectl 301 -/docs/user-guide/production-pods /docs/tasks/ 301 -/docs/user-guide/projected-volume /docs/tasks/configure-pod-container/configure-projected-volume-storage 301 -/docs/user-guide/quick-start /docs/tasks/access-application-cluster/service-access-application-cluster 301 -/docs/user-guide/replicasets /docs/concepts/workloads/controllers/replicaset 301 -/docs/user-guide/replication-controller /docs/concepts/workloads/controllers/replicationcontroller 301 -/docs/user-guide/rolling-updates /docs/tasks/run-application/rolling-update-replication-controller 301 -/docs/user-guide/secrets /docs/concepts/configuration/secret 301 -/docs/user-guide/secrets/walkthrough /docs/tasks/inject-data-application/distribute-credentials-secure 301 -/docs/user-guide/service-accounts /docs/tasks/configure-pod-container/configure-service-account 301 -/docs/user-guide/services-firewalls /docs/tasks/access-application-cluster/configure-cloud-provider-firewall 301 -/docs/user-guide/services /docs/concepts/services-networking/service 301 -/docs/user-guide/services/operations /docs/tasks/access-application-cluster/connecting-frontend-backend 301 -/docs/user-guide/sharing-clusters /docs/tasks/administer-cluster/share-configuration 301 -/docs/user-guide/simple-nginx /docs/tasks/run-application/run-stateless-application-deployment 301 -/docs/user-guide/thirdpartyresources /docs/tasks/access-kubernetes-api/extend-api-third-party-resource 301 -/docs/user-guide/ui 
/docs/tasks/access-application-cluster/web-ui-dashboard 301 -/docs/user-guide/update-demo /docs/tasks/run-application/rolling-update-replication-controller 301 -/docs/user-guide/volumes /docs/concepts/storage/volumes 301 -/docs/user-guide/working-with-resources /docs/tutorials/object-management-kubectl/object-management 301 - -/docs/whatisk8s /docs/concepts/overview/what-is-kubernetes 301 - - -############## -# address 404s -# -/concepts/containers/container-lifecycle-hooks /docs/concepts/containers/container-lifecycle-hooks 301 - -/docs/api-reference/apps/v1alpha1/definitions https://v1-4.docs.kubernetes.io/docs/api-reference/apps/v1alpha1/definitions 301 -/docs/api-reference/apps/v1beta1/operations https://v1-4.docs.kubernetes.io/docs/api-reference/apps/v1beta1/operations 301 -/docs/api-reference/authorization.k8s.io/v1beta1/definitions https://v1-4.docs.kubernetes.io/docs/api-reference/authorization.k8s.io/v1beta1/definitions 301 -/docs/api-reference/authorization.k8s.io/v1beta1/operations https://v1-4.docs.kubernetes.io/docs/api-reference/authorization.k8s.io/v1beta1/operations 301 -/docs/api-reference/autoscaling/v1/operations https://v1-4.docs.kubernetes.io/docs/api-reference/autoscaling/v1/operations 301 -/docs/api-reference/batch/v1/operations https://v1-4.docs.kubernetes.io/docs/api-reference/batch/v1/operations 301 -/docs/api-reference/batch/v2alpha1/definitions https://v1-4.docs.kubernetes.io/docs/api-reference/batch/v2alpha1/definitions 301 -/docs/api-reference/certificates.k8s.io/v1alpha1/definitions https://v1-4.docs.kubernetes.io/docs/api-reference/certificates.k8s.io/v1alpha1/definitions 301 -/docs/api-reference/certificates/v1alpha1/operations https://v1-4.docs.kubernetes.io/docs/api-reference/certificates/v1alpha1/operations 301 -/docs/api-reference/extensions/v1beta1/operations https://v1-4.docs.kubernetes.io/docs/api-reference/extensions/v1beta1/operations 301 -/docs/api-reference/policy/v1alpha1/definitions https://v1-4.docs.kubernetes.io/docs/api-reference/policy/v1alpha1/definitions 301 -/docs/api-reference/policy/v1beta1/definitions https://v1-4.docs.kubernetes.io/docs/api-reference/policy/v1beta1/definitions 301 -/docs/api-reference/README https://v1-4.docs.kubernetes.io/docs/api-reference/README 301 -/docs/api-reference/storage.k8s.io/v1beta1/operations https://v1-4.docs.kubernetes.io/docs/api-reference/storage.k8s.io/v1beta1/operations 301 - -/docs/api-reference/v1/definitions /docs/api-reference/v1.7 301 - -/docs/concepts/cluster /docs/concepts/cluster-administration/cluster-administration-overview/ 301 -/docs/concepts/object-metadata/annotations /docs/concepts/overview/working-with-objects/annotations 301 - -/docs/concepts/workloads/controllers/daemonset/docs/concepts/workloads/pods/pod /docs/concepts/workloads/pods/pod 301 -/docs/concepts/workloads/controllers/deployment/docs/concepts/workloads/pods/pod /docs/concepts/workloads/pods/pod 301 - -/docs/contribute/write-new-topic /docs/home/contribute/write-new-topic 301 - -/docs/getting-started-guides/coreos/azure /docs/getting-started-guides/coreos 301 -/docs/getting-started-guides/coreos/bare_metal_calico /docs/getting-started-guides/coreos 301 -/docs/getting-started-guides/juju /docs/getting-started-guides/ubuntu/installation 301 -/docs/getting-started-guides/kargo /docs/getting-started-guides/kubespray 301 -/docs/getting-started-guides/logging-elasticsearch /docs/tasks/debug-application-cluster/logging-elasticsearch-kibana 301 -/docs/getting-started-guides/logging /docs/concepts/cluster-administration/logging 
301 -/docs/getting-started-guides/rackspace /docs/setup/pick-right-solution 301 -/docs/getting-started-guides/ubuntu-calico /docs/getting-started-guides/ubuntu 301 -/docs/getting-started-guides/ubuntu/automated /docs/getting-started-guides/ubuntu 301 -/docs/getting-started-guides/vagrant /docs/getting-started-guides/alternatives 301 -/docs/getting-started-guides/windows/While /docs/getting-started-guides/windows 301 - -/docs/federation/api-reference/extensions/v1beta1/definitions /docs/reference/federation/extensions/v1beta1/definitions 301 -/docs/federation/api-reference/federation/v1beta1/definitions /docs/reference/federation/extensions/v1beta1/definitions 301 -/docs/federation/api-reference/README /docs/reference/federation 301 -/docs/federation/api-reference/v1/definitions /docs/reference/federation/v1/definitions 301 -/docs/reference/federation/v1beta1/definitions /docs/reference/federation/extensions/v1beta1/definitions 301 -/docs/reference/federation/v1beta1/operations /docs/reference/federation/extensions/v1beta1/operations 301 - -/docs/reporting-security-issues /security 301 - -/docs/stable/user-guide/labels /docs/concepts/overview/working-with-objects/labels 301 -/docs/tasks/access-application-cluster/access-cluster.md /docs/tasks/access-application-cluster/access-cluster 301 -/docs/tasks/access-application-cluster/authenticate-across-clusters-kubeconfig /docs/tasks/access-application-cluster/configure-access-multiple-clusters 301 -/docs/tasks/access-kubernetes-api/access-kubernetes-api/http-proxy-access-api /docs/tasks/access-kubernetes-api/http-proxy-access-api 301 -/docs/tasks/administer-cluster/reserve-compute-resources/out-of-resource.md /docs/tasks/administer-cluster/out-of-resource 301 -/docs/tasks/configure-pod-container/configure-pod-disruption-budget /docs/tasks/run-application/configure-pdb 301 -/docs/tasks/configure-pod-container/define-command-argument-container /docs/tasks/inject-data-application/define-command-argument-container 301 -/docs/tasks/debug-application-cluster/sematext-logging-monitoring https://sematext.com/kubernetes/ 301 -/docs/tasks/job/work-queue-1 /docs/concepts/workloads/controllers/jobs-run-to-completion 301 -/docs/tasks/manage-stateful-set/delete-pods /docs/tasks/run-application/delete-stateful-set 301 - -/docs/tutorials/getting-started/cluster-intro /docs/tutorials/kubernetes-basics/cluster-intro 301 -/docs/tutorials/getting-started/expose-intro /docs/tutorials/kubernetes-basics/expose-intro 301 -/docs/tutorials/getting-started/scale-app /docs/tutorials/kubernetes-basics/scale-interactive 301 -/docs/tutorials/getting-started/scale-intro /docs/tutorials/kubernetes-basics/scale-intro 301 -/docs/tutorials/getting-started/update-interactive /docs/tutorials/kubernetes-basics/update-interactive 301 -/docs/tutorials/getting-started/update-intro /docs/tutorials/kubernetes-basics/ 301 - -/docs/user-guide/containers /docs/tasks/inject-data-application/define-command-argument-container 301 -/docs/user-guide/horizontal-pod-autoscaling/walkthrough.md /docs/tasks/run-application/horizontal-pod-autoscale-walkthrough 301 -/docs/user-guide/ingress.md /docs/concepts/services-networking/ingress 301 -/docs/user-guide/replication-controller/operations /docs/concepts/workloads/controllers/replicationcontroller 301 -/docs/user-guide/resizing-a-replication-controller /docs/concepts/workloads/controllers/replicationcontroller 301 -/docs/user-guide/scheduled-jobs /docs/concepts/workloads/controllers/cron-jobs 301 -/docs/user-guide/security-context 
/docs/tasks/configure-pod-container/security-context 301 - -/kubernetes-bootcamp/2-1.html /docs/tutorials/kubernetes-basics 301 -/kubernetes-bootcamp/2-3-2.html /docs/tutorials/kubernetes-basics 301 -/kubernetes /docs 301 -/kubernetes/swagger-spec https://github.com/kubernetes/kubernetes/tree/master/api/swagger-spec 301 -/serviceaccount/token /docs/tasks/configure-pod-container/configure-service-account 301 - -/v1.1/docs/admin/networking.html /docs/concepts/cluster-administration/networking 301 -/v1.1/docs/getting-started-guides /docs/tutorials/kubernetes-basics/ 301 - - ############################ # pattern matching redirects # -/docs/user-guide/kubectl/kubectl_* /docs/user-guide/kubectl/v1.7/#:splat 200 -/v1.1/docs/* /docs/ 301 +/docs/user-guide/kubectl/kubectl_*/ /docs/user-guide/kubectl/v1.7/#:splat 200 +/v1.1/docs/* /docs/ 301 +/docs/user-guide/kubectl/1_5/* https://v1-5.docs.kubernetes.io/docs/user-guide/kubectl/v1.5/ 301 +/docs/user-guide/kubectl/v1.5/node_modules/* https://v1-5.docs.kubernetes.io/docs/user-guide/kubectl/v1.5/ 301 +/docs/resources-reference/1_5/* https://v1-5.docs.kubernetes.io/docs/resources-reference/v1.5/ 301 +/docs/resources-reference/v1.5/node_modules/* https://v1-5.docs.kubernetes.io/docs/resources-reference/v1.5/ 301 +/docs/api-reference/1_5/* https://v1-5.docs.kubernetes.io/docs/api-reference/v1.5 301 +/docs/api-reference/v1.5/node_modules/* https://v1-5.docs.kubernetes.io/docs/api-reference/v1.5 301 +/docs/user-guide/kubectl/v1.6/node_modules/* https://v1-6.docs.kubernetes.io/docs/user-guide/kubectl/v1.6/ 301 +/docs/api-reference/v1.6/node_modules/* https://v1-6.docs.kubernetes.io/docs/api-reference/v1.6 301 +/docs/api-reference/v1.7/node_modules/* /docs/api-reference/v1.7/ 301 +/docs/getting-started-guides/docker-multinode/* /docs/setup/independent/create-cluster-kubeadm/ 301 +/docs/admin/resourcequota/* /docs/concepts/policy/resource-quotas/ 301 +/docs/getting-started-guide/* /docs/setup/ 301 +/docs/api-reference/1_5/* /docs/api-reference/v1.5/ 301 +/docs/resources-reference/1_5/* /docs/resources-reference/v1.5/ 301 +/docs/resources-reference/1_6/* /docs/resources-reference/v1.6/ 301 +/docs/resources-reference/1_7/* /docs/resources-reference/v1.7/ 301 +/docs/templatedemos/* /docs/home/contribute/page-templates/ 301 +/docs/tutorials/getting-started/*docs/tutorials/kubernetes-basics/ 301 +/docs/user-guide/federation/*/ /docs/concepts/cluster-administration/federation/ 301 +/docs/user-guide/garbage-collector/ /docs/concepts/workloads/controllers/garbage-collection/ 301 +/docs/user-guide/horizontal-pod-autoscaler/* /docs/tasks/run-application/horizontal-pod-autoscale/ 301 +/kubernetes-bootcamp/* /docs/tutorials/kubernetes-basics/ 301 +/swagger-spec/* https://github.com/kubernetes/kubernetes/tree/master/api/swagger-spec/ 301 +/third_party/swagger-ui/* /docs/reference/ 301 -/docs/user-guide/kubectl/1_5/* https://v1-5.docs.kubernetes.io/docs/user-guide/kubectl/v1.5 301 -/docs/user-guide/kubectl/v1.5/node_modules/* https://v1-5.docs.kubernetes.io/docs/user-guide/kubectl/v1.5 301 -/docs/resources-reference/1_5/* https://v1-5.docs.kubernetes.io/docs/resources-reference/v1.5 301 -/docs/resources-reference/v1.5/node_modules/* https://v1-5.docs.kubernetes.io/docs/resources-reference/v1.5 301 -/docs/api-reference/1_5/* https://v1-5.docs.kubernetes.io/docs/api-reference/v1.5 301 -/docs/api-reference/v1.5/node_modules/* https://v1-5.docs.kubernetes.io/docs/api-reference/v1.5 301 - -/docs/user-guide/kubectl/v1.6/node_modules/* 
https://v1-6.docs.kubernetes.io/docs/user-guide/kubectl/v1.6 301 -/docs/api-reference/v1.6/node_modules/* https://v1-6.docs.kubernetes.io/docs/api-reference/v1.6 301 - -/docs/api-reference/v1.7/node_modules/* /docs/api-reference/v1.7 301 +############################ +# individual redirects +# -/docs/getting-started-guides/docker-multinode/* /docs/setup/independent/create-cluster-kubeadm 301 +/docs/admin/addons/ /docs/concepts/cluster-administration/addons/ 301 +/docs/admin/apparmor/ /docs/tutorials/clusters/apparmor/ 301 +/docs/admin/audit/ /docs/tasks/debug-application-cluster/audit/ 301 +/docs/admin/cluster-components/ /docs/concepts/overview/components/ 301 +/docs/admin/cluster-management/ /docs/tasks/administer-cluster/cluster-management/ 301 +/docs/admin/cluster-troubleshooting/ /docs/tasks/debug-application-cluster/debug-cluster/ 301 +/docs/admin/daemons/ /docs/concepts/workloads/controllers/daemonset/ 301 +/docs/admin/disruptions/ /docs/concepts/workloads/pods/disruptions/ 301 +/docs/admin/dns/ /docs/concepts/services-networking/dns-pod-service/ 301 +/docs/admin/etcd/ /docs/tasks/administer-cluster/configure-upgrade-etcd/ 301 +/docs/admin/etcd_upgrade/ /docs/tasks/administer-cluster/configure-upgrade-etcd/ 301 +/docs/admin/federation/kubefed/ /docs/tasks/federation/set-up-cluster-federation-kubefed/ 301 +/docs/admin/garbage-collection/ /docs/concepts/cluster-administration/kubelet-garbage-collection/ 301 +/docs/admin/ha-master-gce/ /docs/tasks/administer-cluster/highly-available-master/ 301 +/docs/admin/ /docs/concepts/cluster-administration/cluster-administration-overview/ 301 +/docs/admin/kubeadm-upgrade-1-7/ /docs/tasks/administer-cluster/kubeadm-upgrade-1-7/ 301 +/docs/admin/limitrange/ /docs/tasks/administer-cluster/cpu-memory-limit/ 301 +/docs/admin/master-node-communication/ /docs/concepts/architecture/master-node-communication/ 301 +/docs/admin/multi-cluster/ /docs/concepts/cluster-administration/federation/ 301 +/docs/admin/multiple-schedulers/ /docs/tasks/administer-cluster/configure-multiple-schedulers/ 301 +/docs/admin/namespaces/ /docs/tasks/administer-cluster/namespaces/ 301 +/docs/admin/namespaces/walkthrough/ /docs/tasks/administer-cluster/namespaces-walkthrough/ 301 +/docs/admin/network-plugins/ /docs/concepts/cluster-administration/network-plugins/ 301 +/docs/admin/networking/ /docs/concepts/cluster-administration/networking/ 301 +/docs/admin/node/ /docs/concepts/architecture/nodes/ 301 +/docs/admin/node-allocatable/ /docs/tasks/administer-cluster/reserve-compute-resources/ 301 +/docs/admin/node-problem/ /docs/tasks/debug-application-cluster/monitor-node-health/ 301 +/docs/admin/out-of-resource/ /docs/tasks/administer-cluster/out-of-resource/ 301 +/docs/admin/rescheduler/ /docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/ 301 +/docs/admin/resourcequota/limitstorageconsumption/ /docs/tasks/administer-cluster/limit-storage-consumption/ 301 +/docs/admin/resourcequota/walkthrough/ /docs/tasks/administer-cluster/quota-api-object/ 301 +/docs/admin/static-pods/ /docs/tasks/administer-cluster/static-pod/ 301 +/docs/admin/sysctls/ /docs/concepts/cluster-administration/sysctl-cluster/ 301 +/docs/admin/upgrade-1-6/ /docs/tasks/administer-cluster/upgrade-1-6/ 301 + +/docs/api/ /docs/concepts/overview/kubernetes-api/ 301 + +/docs/concepts/abstractions/controllers/garbage-collection/ /docs/concepts/workloads/controllers/garbage-collection/ 301 +/docs/concepts/abstractions/controllers/petsets/ /docs/concepts/workloads/controllers/petset/ 301
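Each of these individual rules is a plain `source destination status` triple, matched top to bottom. A quick way to spot-check a single rule once the file is deployed is to request the old path and read the response headers; the host below is an illustrative assumption, not part of the rule set:

```shell
# Sketch: spot-check one 301 rule against a deployed site (host assumed).
curl -sI https://kubernetes.io/docs/concepts/abstractions/controllers/petsets/ \
  | grep -iE '^(HTTP|location)'
# A healthy rule answers with a 301 status and
# "location: /docs/concepts/workloads/controllers/petset/".
```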
+/docs/concepts/abstractions/controllers/statefulsets/ /docs/concepts/workloads/controllers/statefulset/ 301 +/docs/concepts/abstractions/init-containers/ /docs/concepts/workloads/pods/init-containers/ 301 +/docs/concepts/abstractions/overview/ /docs/concepts/overview/working-with-objects/kubernetes-objects/ 301 +/docs/concepts/abstractions/pod/ /docs/concepts/workloads/pods/pod-overview/ 301 + +/docs/concepts/cluster-administration/access-cluster/ /docs/tasks/access-application-cluster/access-cluster/ 301 +/docs/concepts/cluster-administration/audit/ /docs/tasks/debug-application-cluster/audit/ 301 +/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/ /docs/tasks/access-application-cluster/authenticate-across-clusters-kubeconfig/ 301 +/docs/concepts/cluster-administration/cluster-management/ /docs/tasks/administer-cluster/cluster-management/ 301 +/docs/concepts/cluster-administration/configure-etcd/ /docs/tasks/administer-cluster/configure-upgrade-etcd/ 301 +/docs/concepts/cluster-administration/etcd-upgrade/ /docs/tasks/administer-cluster/configure-upgrade-etcd/ 301 +/docs/concepts/cluster-administration/federation-service-discovery/ /docs/tasks/federation/federation-service-discovery/ 301 +/docs/concepts/cluster-administration/guaranteed-scheduling-critical-addon-pods/ /docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/ 301 +/docs/concepts/cluster-administration/master-node-communication/ /docs/concepts/architecture/master-node-communication/ 301 +/docs/concepts/cluster-administration/multiple-clusters/ /docs/concepts/cluster-administration/federation/ 301 +/docs/concepts/cluster-administration/out-of-resource/ /docs/tasks/administer-cluster/out-of-resource/ 301 +/docs/concepts/cluster-administration/resource-usage-monitoring/ /docs/tasks/debug-application-cluster/resource-usage-monitoring/ 301 +/docs/concepts/cluster-administration/static-pod/ /docs/tasks/administer-cluster/static-pod/ 301 +/docs/concepts/clusters/logging/ /docs/concepts/cluster-administration/logging/ 301 +/docs/concepts/configuration/container-command-arg/ /docs/tasks/inject-data-application/define-command-argument-container/ 301 +/docs/concepts/ecosystem/thirdpartyresource/ /docs/tasks/access-kubernetes-api/extend-api-third-party-resource/ 301 +/docs/concepts/jobs/cron-jobs/ /docs/concepts/workloads/controllers/cron-jobs/ 301 +/docs/concepts/jobs/run-to-completion-finite-workloads/ /docs/concepts/workloads/controllers/jobs-run-to-completion/ 301 +/docs/concepts/nodes/node/ /docs/concepts/architecture/nodes/ 301 +/docs/concepts/storage/etcd-store-api-object/ /docs/tasks/administer-cluster/configure-upgrade-etcd/ 301 +/docs/concepts/tools/kubectl/object-management-overview/ /docs/tutorials/object-management-kubectl/object-management/ 301 +/docs/concepts/tools/kubectl/object-management-using-declarative-config/ /docs/tutorials/object-management-kubectl/declarative-object-management-configuration/ 301 +/docs/concepts/tools/kubectl/object-management-using-imperative-commands/ /docs/tutorials/object-management-kubectl/imperative-object-management-command/ 301 +/docs/concepts/tools/kubectl/object-management-using-imperative-config/ /docs/tutorials/object-management-kubectl/imperative-object-management-configuration/ 301 + +/docs/getting-started-guides/ /docs/setup/pick-right-solution/ 301 +/docs/getting-started-guides/kubeadm/ /docs/setup/independent/create-cluster-kubeadm/ 301 +/docs/getting-started-guides/network-policy/calico/ 
/docs/tasks/administer-cluster/calico-network-policy/ 301 +/docs/getting-started-guides/network-policy/romana/ /docs/tasks/administer-cluster/romana-network-policy/ 301 +/docs/getting-started-guides/network-policy/walkthrough/ /docs/tasks/administer-cluster/declare-network-policy/ 301 +/docs/getting-started-guides/network-policy/weave/ /docs/tasks/administer-cluster/weave-network-policy/ 301 +/docs/getting-started-guides/running-cloud-controller/ /docs/tasks/administer-cluster/running-cloud-controller/ 301 +/docs/getting-started-guides/ubuntu/calico/ /docs/getting-started-guides/ubuntu/ 301 + +/docs/hellonode/ /docs/tutorials/stateless-application/hello-minikube/ 301 +/docs/ /docs/home/ 301 +/docs/samples/ /docs/tutorials/ 301 + +/docs/tasks/administer-cluster/apply-resource-quota-limit/ /docs/tasks/administer-cluster/quota-api-object/ 301 +/docs/tasks/administer-cluster/assign-pods-nodes/ /docs/tasks/configure-pod-container/assign-pods-nodes/ 301 +/docs/tasks/administer-cluster/overview/ /docs/concepts/cluster-administration/cluster-administration-overview/ 301 +/docs/tasks/administer-cluster/cpu-memory-limit/ /docs/tasks/administer-cluster/memory-default-namespace/ 301 +/docs/tasks/administer-cluster/share-configuration/ /docs/tasks/access-application-cluster/configure-access-multiple-clusters/ 301 + +/docs/tasks/configure-pod-container/apply-resource-quota-limit/ /docs/tasks/administer-cluster/apply-resource-quota-limit/ 301 +/docs/tasks/configure-pod-container/calico-network-policy/ /docs/tasks/administer-cluster/calico-network-policy/ 301 +/docs/tasks/configure-pod-container/communicate-containers-same-pod/ /docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/ 301 +/docs/tasks/configure-pod-container/declare-network-policy/ /docs/tasks/administer-cluster/declare-network-policy/ 301 +/docs/tasks/configure-pod-container/define-environment-variable-container/ /docs/tasks/inject-data-application/define-environment-variable-container/ 301 +/docs/tasks/configure-pod-container/distribute-credentials-secure/ /docs/tasks/inject-data-application/distribute-credentials-secure/ 301 +/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/ /docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/ 301 +/docs/tasks/configure-pod-container/environment-variable-expose-pod-information/ /docs/tasks/inject-data-application/environment-variable-expose-pod-information/ 301 +/docs/tasks/configure-pod-container/limit-range/ /docs/tasks/administer-cluster/cpu-memory-limit/ 301 +/docs/tasks/configure-pod-container/romana-network-policy/ /docs/tasks/administer-cluster/romana-network-policy/ 301 +/docs/tasks/configure-pod-container/weave-network-policy/ /docs/tasks/administer-cluster/weave-network-policy/ 301 +/docs/tasks/configure-pod-container/assign-cpu-ram-container/ /docs/tasks/configure-pod-container/assign-memory-resource/ 301 + +/docs/tasks/kubectl/get-shell-running-container/ /docs/tasks/debug-application-cluster/get-shell-running-container/ 301 +/docs/tasks/kubectl/install/ /docs/tasks/tools/install-kubectl/ 301 +/docs/tasks/kubectl/list-all-running-container-images/ /docs/tasks/access-application-cluster/list-all-running-container-images/ 301 + +/docs/tasks/manage-stateful-set/debugging-a-statefulset/ /docs/tasks/debug-application-cluster/debug-stateful-set/ 301 +/docs/tasks/manage-stateful-set/delete-pods/ /docs/tasks/run-application/force-delete-stateful-set-pod/ 301 +/docs/tasks/manage-stateful-set/deleting-a-statefulset/ 
/docs/tasks/run-application/delete-stateful-set/ 301 +/docs/tasks/manage-stateful-set/scale-stateful-set/ /docs/tasks/run-application/scale-stateful-set/ 301 +/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set/ /docs/tasks/run-application/upgrade-pet-set-to-stateful-set/ 301 + +/docs/tasks/run-application/podpreset/ /docs/tasks/inject-data-application/podpreset/ 301 +/docs/tasks/troubleshoot/debug-init-containers/ /docs/tasks/debug-application-cluster/debug-init-containers/ 301 +/docs/tasks/web-ui-dashboard/ /docs/tasks/access-application-cluster/web-ui-dashboard/ 301 +/docs/templatedemos/ /docs/home/contribute/page-templates/ 301 +/docs/tools/kompose/ /docs/tools/kompose/user-guide/ 301 + +/docs/tutorials/clusters/multiple-schedulers/ /docs/tasks/administer-cluster/configure-multiple-schedulers/ 301 +/docs/tutorials/connecting-apps/connecting-frontend-backend/ /docs/tasks/access-application-cluster/connecting-frontend-backend/ 301 +/docs/tutorials/federation/set-up-cluster-federation-kubefed/ /docs/tasks/federation/set-up-cluster-federation-kubefed/ 301 +/docs/tutorials/federation/set-up-coredns-provider-federation/ /docs/tasks/federation/set-up-coredns-provider-federation/ 301 +/docs/tutorials/federation/set-up-placement-policies-federation/ /docs/tasks/federation/set-up-placement-policies-federation/ 301 +/docs/tutorials/getting-started/create-cluster/ /docs/tutorials/kubernetes-basics/cluster-intro/ 301 +/docs/tutorials/stateful-application/run-replicated-stateful-application/ /docs/tasks/run-application/run-replicated-stateful-application/ 301 +/docs/tutorials/stateful-application/run-stateful-application/ /docs/tasks/run-application/run-single-instance-stateful-application/ 301 +/docs/tutorials/stateless-application/expose-external-ip-address-service/ /docs/tasks/access-application-cluster/service-access-application-cluster/ 301 +/docs/tutorials/stateless-application/run-stateless-ap-replication-controller/ /docs/tasks/run-application/run-stateless-application-deployment/ 301 +/docs/tutorials/stateless-application/run-stateless-application-deployment/ /docs/tasks/run-application/run-stateless-application-deployment/ 301 + +/docs/user-guide/accessing-the-cluster/ /docs/tasks/access-application-cluster/access-cluster/ 301 +/docs/user-guide/add-entries-to-pod-etc-hosts-with-host-aliases/ /docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/ 301 +/docs/user-guide/annotations/ /docs/concepts/overview/working-with-objects/annotations/ 301 +/docs/user-guide/application-troubleshooting/ /docs/tasks/debug-application-cluster/debug-application/ 301 +/docs/user-guide/compute-resources/ /docs/concepts/configuration/manage-compute-resources-container/ 301 +/docs/user-guide/config-best-practices/ /docs/concepts/configuration/overview/ 301 +/docs/user-guide/configmap/ /docs/tasks/configure-pod-container/configmap/ 301 +/docs/user-guide/configuring-containers/ /docs/tasks/ 301 +/docs/user-guide/connecting-applications/ /docs/concepts/services-networking/connect-applications-service/ 301 +/docs/user-guide/connecting-to-applications-port-forward/ /docs/tasks/access-application-cluster/port-forward-access-application-cluster/ 301 +/docs/user-guide/connecting-to-applications-proxy/ /docs/tasks/access-kubernetes-api/http-proxy-access-api/ 301 +/docs/user-guide/container-environment/ /docs/concepts/containers/container-lifecycle-hooks/ 301 +/docs/user-guide/cron-jobs/ /docs/concepts/workloads/controllers/cron-jobs/ 301 
+/docs/user-guide/debugging-pods-and-replication-controllers/ /docs/tasks/debug-application-cluster/debug-pod-replication-controller/ 301 +/docs/user-guide/debugging-services/ /docs/tasks/debug-application-cluster/debug-service/ 301 +/docs/user-guide/deploying-applications/ /docs/tasks/run-application/run-stateless-application-deployment/ 301 +/docs/user-guide/deployments/ /docs/concepts/workloads/controllers/deployment/ 301 +/docs/user-guide/downward-api/ /docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/ 301 +/docs/user-guide/downward-api/volume/ /docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/ 301 +/docs/user-guide/environment-guide/ /docs/tasks/inject-data-application/environment-variable-expose-pod-information/ 301 +/docs/user-guide/federation/cluster/ /docs/tasks/administer-federation/cluster/ 301 +/docs/user-guide/federation/configmap/ /docs/tasks/administer-federation/configmap/ 301 +/docs/user-guide/federation/daemonsets/ /docs/tasks/administer-federation/daemonset/ 301 +/docs/user-guide/federation/deployment/ /docs/tasks/administer-federation/deployment/ 301 +/docs/user-guide/federation/events/ /docs/tasks/administer-federation/events/ 301 +/docs/user-guide/federation/federated-ingress/ /docs/tasks/administer-federation/ingress/ 301 +/docs/user-guide/federation/federated-services/ /docs/tasks/federation/federation-service-discovery/ 301 +/docs/user-guide/federation/ /docs/concepts/cluster-administration/federation/ 301 +/docs/user-guide/federation/namespaces/ /docs/tasks/administer-federation/namespaces/ 301 +/docs/user-guide/federation/replicasets/ /docs/tasks/administer-federation/replicaset/ 301 +/docs/user-guide/federation/secrets/ /docs/tasks/administer-federation/secret/ 301 +/docs/user-guide/garbage-collection/ /docs/concepts/workloads/controllers/garbage-collection/ 301 +/docs/user-guide/getting-into-containers/ /docs/tasks/debug-application-cluster/get-shell-running-container/ 301 +/docs/user-guide/gpus/ /docs/tasks/manage-gpus/scheduling-gpus/ 301 +/docs/user-guide/horizontal-pod-autoscaling/ /docs/tasks/run-application/horizontal-pod-autoscale/ 301 +/docs/user-guide/horizontal-pod-autoscaling/walkthrough/ /docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/ 301 +/docs/user-guide/identifiers/ /docs/concepts/overview/working-with-objects/names/ 301 +/docs/user-guide/images/ /docs/concepts/containers/images/ 301 +/docs/user-guide/ /docs/home/ 301 +/docs/user-guide/ingress/ /docs/concepts/services-networking/ingress/ 301 +/docs/user-guide/introspection-and-debugging/ /docs/tasks/debug-application-cluster/debug-application-introspection/ 301 +/docs/user-guide/jobs/ /docs/concepts/workloads/controllers/jobs-run-to-completion/ 301 +/docs/user-guide/jobs/expansions/ /docs/tasks/job/parallel-processing-expansion/ 301 +/docs/user-guide/jobs/work-queue-1/ /docs/tasks/job/coarse-parallel-processing-work-queue/ 301 +/docs/user-guide/jobs/work-queue-2/ /docs/tasks/job/fine-parallel-processing-work-queue/ 301 +/docs/user-guide/kubeconfig-file/ /docs/tasks/access-application-cluster/authenticate-across-clusters-kubeconfig/ 301 +/docs/user-guide/labels/ /docs/concepts/overview/working-with-objects/labels/ 301 +/docs/user-guide/liveness/ /docs/tasks/configure-pod-container/configure-liveness-readiness-probes/ 301 +/docs/user-guide/load-balancer/ /docs/tasks/access-application-cluster/create-external-load-balancer/ 301 +/docs/user-guide/logging/elasticsearch/ 
/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/ 301 +/docs/user-guide/logging/overview/ /docs/concepts/cluster-administration/logging/ 301 +/docs/user-guide/logging/stackdriver/ /docs/tasks/debug-application-cluster/logging-stackdriver/ 301 +/docs/user-guide/managing-deployments/ /docs/concepts/cluster-administration/manage-deployment/ 301 +/docs/user-guide/monitoring/ /docs/tasks/debug-application-cluster/resource-usage-monitoring/ 301 +/docs/user-guide/namespaces/ /docs/concepts/overview/working-with-objects/namespaces/ 301 +/docs/user-guide/networkpolicies/ /docs/concepts/services-networking/network-policies/ 301 +/docs/user-guide/node-selection/ /docs/concepts/configuration/assign-pod-node/ 301 +/docs/user-guide/persistent-volumes/ /docs/concepts/storage/persistent-volumes/ 301 +/docs/user-guide/persistent-volumes/walkthrough/ /docs/tasks/configure-pod-container/configure-persistent-volume-storage/ 301 +/docs/user-guide/petset/ /docs/concepts/workloads/controllers/petset/ 301 +/docs/user-guide/petset/bootstrapping/ /docs/concepts/workloads/controllers/petset/ 301 +/docs/user-guide/pod-preset/ /docs/tasks/inject-data-application/podpreset/ 301 +/docs/user-guide/pod-security-policy/ /docs/concepts/policy/pod-security-policy/ 301 +/docs/user-guide/pod-states/ /docs/concepts/workloads/pods/pod-lifecycle/ 301 +/docs/user-guide/pod-templates/ /docs/concepts/workloads/pods/pod-overview/ 301 +/docs/user-guide/pods/ /docs/concepts/workloads/pods/pod/ 301 +/docs/user-guide/pods/init-container/ /docs/concepts/workloads/pods/init-containers/ 301 +/docs/user-guide/pods/multi-container/ /docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/ 301 +/docs/user-guide/pods/single-container/ /docs/tasks/run-application/run-stateless-application-deployment/ 301 +/docs/user-guide/prereqs/ /docs/tasks/tools/install-kubectl/ 301 +/docs/user-guide/production-pods/ /docs/tasks/ 301 +/docs/user-guide/projected-volume/ /docs/tasks/configure-pod-container/configure-projected-volume-storage/ 301 +/docs/user-guide/quick-start/ /docs/tasks/access-application-cluster/service-access-application-cluster/ 301 +/docs/user-guide/replicasets/ /docs/concepts/workloads/controllers/replicaset/ 301 +/docs/user-guide/replication-controller/ /docs/concepts/workloads/controllers/replicationcontroller/ 301 +/docs/user-guide/rolling-updates/ /docs/tasks/run-application/rolling-update-replication-controller/ 301 +/docs/user-guide/secrets/ /docs/concepts/configuration/secret/ 301 +/docs/user-guide/secrets/walkthrough/ /docs/tasks/inject-data-application/distribute-credentials-secure/ 301 +/docs/user-guide/service-accounts/ /docs/tasks/configure-pod-container/configure-service-account/ 301 +/docs/user-guide/services-firewalls/ /docs/tasks/access-application-cluster/configure-cloud-provider-firewall/ 301 +/docs/user-guide/services/ /docs/concepts/services-networking/service/ 301 +/docs/user-guide/services/operations/ /docs/tasks/access-application-cluster/connecting-frontend-backend/ 301 +/docs/user-guide/sharing-clusters/ /docs/tasks/administer-cluster/share-configuration/ 301 +/docs/user-guide/simple-nginx/ /docs/tasks/run-application/run-stateless-application-deployment/ 301 +/docs/user-guide/thirdpartyresources/ /docs/tasks/access-kubernetes-api/extend-api-third-party-resource/ 301 +/docs/user-guide/ui/ /docs/tasks/access-application-cluster/web-ui-dashboard/ 301 +/docs/user-guide/update-dem/ /docs/tasks/run-application/rolling-update-replication-controller/ 301 
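With this many one-to-one mappings, manual checking stops scaling. The sketch below assumes a local copy of the `_redirects` file in the two-column format used here and a reachable production host; both are illustrative assumptions:

```shell
#!/usr/bin/env bash
# Sketch: confirm every plain 301 rule still redirects on the live site.
# Skips comments, blank lines, and splat (*) patterns, which need their
# own handling; prints only the rules that misbehave.
while read -r src dst status _; do
  case "$src" in ''|'#'*|*'*'*) continue ;; esac
  [ "$status" = "301" ] || continue
  code=$(curl -s -o /dev/null -w '%{http_code}' "https://kubernetes.io${src}")
  [ "$code" = "301" ] || echo "unexpected $code: $src -> $dst"
done < _redirects
```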
+/docs/user-guide/volumes/ /docs/concepts/storage/volumes/ 301 +/docs/user-guide/working-with-resources/ /docs/tutorials/object-management-kubectl/object-management/ 301 + +/docs/whatisk8s/ /docs/concepts/overview/what-is-kubernetes/ 301 -/docs/admin/resourcequota/* /docs/concepts/policy/resource-quotas 301 +############## +# address 404s +# +/concepts/containers/container-lifecycle-hooks/ /docs/concepts/containers/container-lifecycle-hooks/ 301 + +/docs/api-reference/apps/v1alpha1/definitions https://v1-4.docs.kubernetes.io/docs/api-reference/apps/v1alpha1/definitions/ 301 +/docs/api-reference/apps/v1beta1/operations https://v1-4.docs.kubernetes.io/docs/api-reference/apps/v1beta1/operations/ 301 +/docs/api-reference/authorization.k8s.io/v1beta1/definitions https://v1-4.docs.kubernetes.io/docs/api-reference/authorization.k8s.io/v1beta1/definitions/ 301 +/docs/api-reference/authorization.k8s.io/v1beta1/operations https://v1-4.docs.kubernetes.io/docs/api-reference/authorization.k8s.io/v1beta1/operations/ 301 +/docs/api-reference/autoscaling/v1/operations https://v1-4.docs.kubernetes.io/docs/api-reference/autoscaling/v1/operations/ 301 +/docs/api-reference/batch/v1/operations https://v1-4.docs.kubernetes.io/docs/api-reference/batch/v1/operations/ 301 +/docs/api-reference/batch/v2alpha1/definitions https://v1-4.docs.kubernetes.io/docs/api-reference/batch/v2alpha1/definitions/ 301 +/docs/api-reference/certificates.k8s.io/v1alpha1/definitions https://v1-4.docs.kubernetes.io/docs/api-reference/certificates.k8s.io/v1alpha1/definitions/ 301 +/docs/api-reference/certificates/v1alpha1/operations https://v1-4.docs.kubernetes.io/docs/api-reference/certificates/v1alpha1/operations/ 301 +/docs/api-reference/extensions/v1beta1/operations https://v1-4.docs.kubernetes.io/docs/api-reference/extensions/v1beta1/operations/ 301 +/docs/api-reference/policy/v1alpha1/definitions https://v1-4.docs.kubernetes.io/docs/api-reference/policy/v1alpha1/definitions/ 301 +/docs/api-reference/policy/v1beta1/definitions https://v1-4.docs.kubernetes.io/docs/api-reference/policy/v1beta1/definitions/ 301 +/docs/api-reference/README https://v1-4.docs.kubernetes.io/docs/api-reference/README/ 301 +/docs/api-reference/storage.k8s.io/v1beta1/operations https://v1-4.docs.kubernetes.io/docs/api-reference/storage.k8s.io/v1beta1/operations/ 301 + +/docs/api-reference/v1/definitions/ /docs/api-reference/v1.7/ 301 + +/docs/concepts/cluster/ /docs/concepts/cluster-administration/cluster-administration-overview/ 301 +/docs/concepts/object-metadata/annotations/ /docs/concepts/overview/working-with-objects/annotations/ 301 + +/docs/concepts/workloads/controllers/daemonset/ /docs/concepts/workloads/pods/pod/ 301 +/docs/concepts/workloads/controllers/deployment/ /docs/concepts/workloads/pods/pod/ 301 + +/docs/contribute/write-new-topic/ /docs/home/contribute/write-new-topic/ 301 + +/docs/getting-started-guides/coreos/azure/ /docs/getting-started-guides/coreos/ 301 +/docs/getting-started-guides/coreos/bare_metal_calico/ /docs/getting-started-guides/coreos/ 301 +/docs/getting-started-guides/juju/ /docs/getting-started-guides/ubuntu/installation/ 301 +/docs/getting-started-guides/kargo/ /docs/getting-started-guides/kubespray/ 301 +/docs/getting-started-guides/logging-elasticsearch/ /docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/ 301 +/docs/getting-started-guides/logging/ /docs/concepts/cluster-administration/logging/ 301 +/docs/getting-started-guides/rackspace/
/docs/setup/pick-right-solution/ 301 +/docs/getting-started-guides/ubuntu-calico/ /docs/getting-started-guides/ubuntu/ 301 +/docs/getting-started-guides/ubuntu/automated/ /docs/getting-started-guides/ubuntu/ 301 +/docs/getting-started-guides/vagrant/ /docs/getting-started-guides/alternatives/ 301 +/docs/getting-started-guides/windows/While/ /docs/getting-started-guides/windows/ 301 + +/docs/federation/api-reference/extensions/v1beta1/definitions/ /docs/reference/federation/extensions/v1beta1/definitions/ 301 +/docs/federation/api-reference/federation/v1beta1/definitions/ /docs/reference/federation/extensions/v1beta1/definitions/ 301 +/docs/federation/api-reference/README/ /docs/reference/federation/ 301 +/docs/federation/api-reference/v1/definitions/ /docs/reference/federation/v1/definitions/ 301 +/docs/reference/federation/v1beta1/definitions/ /docs/reference/federation/extensions/v1beta1/definitions/ 301 +/docs/reference/federation/v1beta1/operations/ /docs/reference/federation/extensions/v1beta1/operations/ 301 + +/docs/reporting-security-issues/ /security/ 301 + +/docs/stable/user-guide/labels/ /docs/concepts/overview/working-with-objects/labels/ 301 +/docs/tasks/access-application-cluster/access-cluster.md /docs/tasks/access-application-cluster/access-cluster/ 301 +/docs/tasks/access-application-cluster/authenticate-across-clusters-kubeconfig/ /docs/tasks/access-application-cluster/configure-access-multiple-clusters/ 301 +/docs/tasks/access-kubernetes-api/access-kubernetes-api/http-proxy-access-api/ /docs/tasks/access-kubernetes-api/http-proxy-access-api/ 301 +/docs/tasks/administer-cluster/reserve-compute-resources/out-of-resource.md /docs/tasks/administer-cluster/out-of-resource/ 301 +/docs/tasks/configure-pod-container/configure-pod-disruption-budget/ /docs/tasks/run-application/configure-pdb/ 301 +/docs/tasks/configure-pod-container/define-command-argument-container/ /docs/tasks/inject-data-application/define-command-argument-container/ 301 + +/docs/tasks/debug-application-cluster/sematext-logging-monitoring/ https://sematext.com/kubernetes/ 301 + +/docs/tasks/job/work-queue-1/ /docs/concepts/workloads/controllers/jobs-run-to-completion/ 301 +/docs/tasks/manage-stateful-set/delete-pods/ /docs/tasks/run-application/delete-stateful-set/ 301 + +/docs/tutorials/getting-started/cluster-intro/ /docs/tutorials/kubernetes-basics/cluster-intro/ 301 +/docs/tutorials/getting-started/expose-intro/ /docs/tutorials/kubernetes-basics/expose-intro/ 301 +/docs/tutorials/getting-started/scale-app/ /docs/tutorials/kubernetes-basics/scale-interactive/ 301 +/docs/tutorials/getting-started/scale-intro/ /docs/tutorials/kubernetes-basics/scale-intro/ 301 +/docs/tutorials/getting-started/update-interactive/ /docs/tutorials/kubernetes-basics/update-interactive/ 301 +/docs/tutorials/getting-started/update-intro/ /docs/tutorials/kubernetes-basics/ 301 + +/docs/user-guide/containers/ /docs/tasks/inject-data-application/define-command-argument-container/ 301 +/docs/user-guide/horizontal-pod-autoscaling/walkthrough.md /docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/ 301 +/docs/user-guide/ingress.md /docs/concepts/services-networking/ingress/ 301 +/docs/user-guide/replication-controller/operations/ /docs/concepts/workloads/controllers/replicationcontroller/ 301 +/docs/user-guide/resizing-a-replication-controller/ /docs/concepts/workloads/controllers/replicationcontroller/ 301 +/docs/user-guide/scheduled-jobs/ /docs/concepts/workloads/controllers/cron-jobs/ 301 +/docs/user-guide/security-context/ 
/docs/tasks/configure-pod-container/security-context/ 301 + +/kubernetes-bootcamp/2-1.html /docs/tutorials/kubernetes-basics/ 301 +/kubernetes-bootcamp/2-3-2.html /docs/tutorials/kubernetes-basics/ 301 +/kubernetes /docs/ 301 +/kubernetes/swagger-spec https://github.com/kubernetes/kubernetes/tree/master/api/swagger-spec/ 301 +/serviceaccount/token/ /docs/tasks/configure-pod-container/configure-service-account/ 301 + +/v1.1/docs/admin/networking.html/ /docs/concepts/cluster-administration/networking/ 301 +/v1.1/docs/getting-started-guides/ /docs/tutorials/kubernetes-basics/ 301 ################################# # redirects from /js/redirects.js # -/resource-quota /docs/concepts/policy/resource-quotas 301 -/horizontal-pod-autoscaler /docs/tasks/run-application/horizontal-pod-autoscale 301 -/docs/roadmap https://github.com/kubernetes/kubernetes/milestones/ 301 -/api-ref https://github.com/kubernetes/kubernetes/milestones/ 301 -/kubernetes/third_party/swagger-ui /docs/reference 301 -/docs/user-guide/overview /docs/concepts/overview/what-is-kubernetes 301 -/docs/troubleshooting /docs/tasks/debug-application-cluster/troubleshooting 301 -/docs/concepts/services-networking/networkpolicies /docs/concepts/services-networking/network-policies 301 -/docs/getting-started-guides/meanstack https://medium.com/google-cloud/running-a-mean-stack-on-google-cloud-platform-with-kubernetes-149ca81c2b5d 301 -/docs/samples /docs/tutorials 301 -/v1.1 / 301 -/v1.0 / 301 +/resource-quota/ /docs/concepts/policy/resource-quotas/ 301 +/horizontal-pod-autoscaler/ /docs/tasks/run-application/horizontal-pod-autoscale/ 301 +/docs/roadmap/ https://github.com/kubernetes/kubernetes/milestones/ 301 +/api-ref/ https://github.com/kubernetes/kubernetes/milestones/ 301 +/kubernetes/third_party/swagger-ui/ /docs/reference/ 301 +/docs/user-guide/overview/ /docs/concepts/overview/what-is-kubernetes/ 301 +/docs/troubleshooting/ /docs/tasks/debug-application-cluster/troubleshooting/ 301 +/docs/concepts/services-networking/networkpolicies/ /docs/concepts/services-networking/network-policies/ 301 +/docs/getting-started-guides/meanstack/ https://medium.com/google-cloud/running-a-mean-stack-on-google-cloud-platform-with-kubernetes-149ca81c2b5d/ 301 +/docs/samples/ /docs/tutorials/ 301 + +/v1.1/ / 301 +/v1.0/ / 301 ######################################################## # Redirect users with chinese language preference to /cn # -#/ /cn 302 Language=zh - +#/ /cn 302 Language=zh ########################### # Fixed 404s from analytics +# -/concepts/containers/container-lifecycle-hooks /docs/concepts/containers/container-lifecycle-hooks 301 -/docs/abstractions/controllers/petset /docs/concepts/workloads/controllers/petset 301 - -/docs/admin/add-ons /docs/concepts/cluster-administration/addons 301 -/docs/admin/limitrange/Limits /docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage 301 - -/docs/api-reference/1_5/* /docs/api-reference/v1.5 301 - -/docs/concepts/cluster-administration/device-plugins /docs/concepts/cluster-administration/network-plugins 301 -/docs/concepts/configuration/container-command-args /docs/tasks/inject-data-application/define-command-argument-container 301 -/docs/concepts/ecosystem/thirdpartyresource /docs/tasks/access-kubernetes-api/extend-api-third-party-resource 301 -/docs/concepts/overview /docs/concepts/overview/what-is-kubernetes 301 -/docs/concepts/policy/container-capabilities /docs/tasks/configure-pod-container/security-context/#set-capabilities-for-a-container 301
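The 404 fixes in this section come from analytics. One sketch of how such a list can be produced — assuming combined-format access logs in the conventional location, which will differ per host — is:

```shell
# Sketch: rank the most frequent 404 paths from combined-format logs.
# In that format, field 9 is the status code and field 7 the request path.
awk '$9 == 404 { print $7 }' /var/log/nginx/access.log \
  | sort | uniq -c | sort -rn | head -20
```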
-/docs/concepts/policy/security-context /docs/tasks/configure-pod-container/security-context 301 -/docs/concepts/storage/volumes/emptyDirapiVersion /docs/concepts/storage/volumes/#emptydir 301 -/docs/concepts/tools/kubectl/object-management-using-commands /docs/tutorials/object-management-kubectl/imperative-object-management-command 301 -/docs/concepts/workload/pods/pod-overview /docs/concepts/workloads/pods/pod-overview 301 -/docs/concepts/workloads/controllers/cron-jobs/deployment /docs/concepts/workloads/controllers/cron-jobs 301 -/docs/concepts/workloads/controllers/statefulsets /docs/concepts/workloads/controllers/statefulset 301 -/docs/concepts/workloads/pods/init-containers/Kubernetes /docs/concepts/workloads/pods/init-containers 301 - -/docs/consumer-guideline/pod-security-coverage /docs/concepts/policy/pod-security-policy 301 - -/docs/contribute/create-pull-request /docs/home/contribute/create-pull-request 301 -/docs/contribute/page-templates /docs/home/contribute/page-templates 301 -/docs/contribute/review-issues /docs/home/contribute/review-issues 301 -/docs/contribute/stage-documentation-changes /docs/home/contribute/stage-documentation-changes 301 -/docs/contribute/style-guide /docs/home/contribute/style-guide 301 - -/docs/deprecated /docs/reference/deprecation-policy 301 -/docs/deprecation-policy /docs/reference/deprecation-policy 301 - - -/docs/federation/api-reference /docs/reference/federation/v1/operations 301 -/docs/federation/api-reference/extensions/v1beta1/operations /docs/reference/federation/extensions/v1beta1/operations 301 -/docs/federation/api-reference/federation/v1beta1/operations /docs/reference/federation/extensions/v1beta1/operations 301 -/docs/federation/api-reference/v1/operations /docs/reference/federation/v1/operations 301 - -/docs/getting-started-guide/* /docs/setup 301 - -/docs/home/deprecation-policy /docs/reference/deprecation-policy 301 - -/docs/resources-reference/1_5/* /docs/resources-reference/v1.5 301 -/docs/resources-reference/1_6/* /docs/resources-reference/v1.6 301 -/docs/resources-reference/1_7/* /docs/resources-reference/v1.7 301 +/concepts/containers/container-lifecycle-hooks/ /docs/concepts/containers/container-lifecycle-hooks/ 301 +/docs/abstractions/controllers/petset/ /docs/concepts/workloads/controllers/petset/ 301 -/docs/stable/user-guide/labels /docs/concepts/overview/working-with-objects/labels 301 +/docs/admin/add-ons/ /docs/concepts/cluster-administration/addons/ 301 +/docs/admin/limitrange/Limits/ /docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage 301 +/docs/concepts/cluster-administration/device-plugins/ /docs/concepts/cluster-administration/network-plugins/ 301 +/docs/concepts/configuration/container-command-args/ /docs/tasks/inject-data-application/define-command-argument-container/ 301 +/docs/concepts/ecosystem/thirdpartyresource/ /docs/tasks/access-kubernetes-api/extend-api-third-party-resource/ 301 +/docs/concepts/overview/ /docs/concepts/overview/what-is-kubernetes/ 301 +/docs/concepts/policy/container-capabilities/ /docs/tasks/configure-pod-container/security-context/#set-capabilities-for-a-container 301 +/docs/concepts/policy/security-context/ /docs/tasks/configure-pod-container/security-context/ 301 +/docs/concepts/storage/volumes/emptyDirapiVersion/ /docs/concepts/storage/volumes/#emptydir 301 +/docs/concepts/tools/kubectl/object-management-using-commands/ /docs/tutorials/object-management-kubectl/imperative-object-management-command/ 301
+/docs/concepts/workload/pods/pod-overview/ /docs/concepts/workloads/pods/pod-overview 301 +/docs/concepts/workloads/controllers/cron-jobs/deployment/ /docs/concepts/workloads/controllers/cron-jobs/ 301 +/docs/concepts/workloads/controllers/statefulsets/ /docs/concepts/workloads/controllers/statefulset/ 301 +/docs/concepts/workloads/pods/init-containers/Kubernetes /docs/concepts/workloads/pods/init-containers/ 301 -/docs/tasks/administer-cluster/apply-resource-quota-limit /docs/tasks/administer-cluster/quota-api-object 301 -/docs/tasks/administer-cluster/configure-namespace-isolation /docs/concepts/services-networking/network-policies 301 -/docs/tasks/administer-cluster/configure-pod-disruption-budget /docs/tasks/run-application/configure-pdb 301 +/docs/consumer-guideline/pod-security-coverage/ /docs/concepts/policy/pod-security-policy/ 301 -/docs/tasks/administer-cluster/cpu-management-policies /docs/concepts/configuration/manage-compute-resources-container 301 -/docs/tasks/administer-cluster/default-cpu-request-limit /docs/tasks/configure-pod-container/assign-cpu-resource/#specify-a-cpu-request-and-a-cpu-limit 301 -/docs/tasks/administer-cluster/default-memory-request-limit /docs/tasks/configure-pod-container/assign-memory-resource/#specify-a-memory-request-and-a-memory-limit 301 +/docs/contribute/create-pull-request/ /docs/home/contribute/create-pull-request 301 +/docs/contribute/page-templates/ /docs/home/contribute/page-templates 301 +/docs/contribute/review-issues/ /docs/home/contribute/review-issues 301 +/docs/contribute/stage-documentation-changes/ /docs/home/contribute/stage-documentation-changes/ 301 +/docs/contribute/style-guide/ /docs/home/contribute/style-guide 301 -/docs/tasks/configure-pod-container/cilium-network-policy /docs/tasks/administer-cluster/cilium-network-policy 301 -/docs/tasks/configure-pod-container/define-command-argument-container /docs/tasks/inject-data-application/define-command-argument-container 301 -/docs/tasks/configure-pod-container/projected-volume /docs/tasks/configure-pod-container/configure-projected-volume-storage 301 +/docs/deprecated/ /docs/reference/deprecation-policy/ 301 +/docs/deprecation-policy/ /docs/reference/deprecation-policy/ 301 -/docs/tasks/stateful-sets/deleting-pods /docs/tasks/run-application/force-delete-stateful-set-pod 301 +/docs/federation/api-reference/ /docs/reference/federation/v1/operations/ 301 +/docs/federation/api-reference/extensions/v1beta1/operations/ /docs/reference/federation/extensions/v1beta1/operations/ 301 +/docs/federation/api-reference/federation/v1beta1/operations/ /docs/reference/federation/extensions/v1beta1/operations/ 301 +/docs/federation/api-reference/v1/operations/ /docs/reference/federation/v1/operations/ 301 -/docs/templatedemos/* /docs/home/contribute/page-templates 301 +/docs/home/deprecation-policy/ /docs/reference/deprecation-policy/ 301 -/docs/tutorials/getting-started/* /docs/tutorials/kubernetes-basics 301 +/docs/stable/user-guide/labels/ /docs/concepts/overview/working-with-objects/labels/ 301 -/docs/user-guide/federation/* /docs/concepts/cluster-administration/federation 301 -/docs/user-guide/garbage-collector /docs/concepts/workloads/controllers/garbage-collection 301 -/docs/user-guide/horizontal-pod-autoscaler/* /docs/tasks/run-application/horizontal-pod-autoscale 301 +/docs/tasks/administer-cluster/apply-resource-quota-limit/ /docs/tasks/administer-cluster/quota-api-object/ 301 +/docs/tasks/administer-cluster/configure-namespace-isolation/
/docs/concepts/services-networking/network-policies/ 301 +/docs/tasks/administer-cluster/configure-pod-disruption-budget/ /docs/tasks/run-application/configure-pdb/ 301 +/docs/tasks/administer-cluster/cpu-management-policies/ /docs/concepts/configuration/manage-compute-resources-container/ 301 +/docs/tasks/administer-cluster/default-cpu-request-limit/ /docs/tasks/configure-pod-container/assign-cpu-resource/#specify-a-cpu-request-and-a-cpu-limit 301 +/docs/tasks/administer-cluster/default-memory-request-limit/ /docs/tasks/configure-pod-container/assign-memory-resource/#specify-a-memory-request-and-a-memory-limit 301 +/docs/tasks/configure-pod-container/cilium-network-policy/ /docs/tasks/administer-cluster/cilium-network-policy/ 301 +/docs/tasks/configure-pod-container/define-command-argument-container/ /docs/tasks/inject-data-application/define-command-argument-container/ 301 +/docs/tasks/configure-pod-container/projected-volume/ /docs/tasks/configure-pod-container/configure-projected-volume-storage/ 301 -/docs/user-guide/liveness /docs/tasks/configure-pod-container/configure-liveness-readiness-probes 301 -/docs/user-guide/logging /docs/concepts/cluster-administration/logging 301 -/docs/user-guide/replication-controller/operations /docs/concepts/workloads/controllers/replicationcontroller 301 -/docs/user-guide/service-accounts/working-with-resources /docs/tutorials/object-management-kubectl/object-management 301 -/docs/user-guide/StatefulSet /docs/concepts/workloads/controllers/statefulset 301 -/docs/user-guide/ui-access /docs/tasks/access-application-cluster/web-ui-dashboard 301 +/docs/tasks/stateful-sets/deleting-pods/ /docs/tasks/run-application/force-delete-stateful-set-pod/ 301 -/kubernetes-bootcamp/* /docs/tutorials/kubernetes-basics 301 +/docs/user-guide/liveness/ /docs/tasks/configure-pod-container/configure-liveness-readiness-probes/ 301 +/docs/user-guide/logging/ /docs/concepts/cluster-administration/logging/ 301 +/docs/user-guide/replication-controller/operations/ /docs/concepts/workloads/controllers/replicationcontroller/ 301 +/docs/user-guide/service-accounts/working-with-resources/ /docs/tutorials/object-management-kubectl/object-management/ 301 +/docs/user-guide/StatefulSet/ /docs/concepts/workloads/controllers/statefulset/ 301 +/docs/user-guide/ui-access/ /docs/tasks/access-application-cluster/web-ui-dashboard/ 301 -/latest/docs /docs/home 301 +/latest/docs/ /docs/home/ 301 -/kubernetes/swagger-spec https://github.com/kubernetes/kubernetes/tree/master/api/swagger-spec 301 -/swagger-spec/* https://github.com/kubernetes/kubernetes/tree/master/api/swagger-spec 301 -/third_party/swagger-ui/* /docs/reference 301 +/kubernetes/swagger-spec https://github.com/kubernetes/kubernetes/tree/master/api/swagger-spec/ 301 From a25b597805324987c1885404d2ae157c57e904b1 Mon Sep 17 00:00:00 2001 From: Dragons Date: Sat, 23 Sep 2017 09:16:39 +0800 Subject: [PATCH 24/87] concepts-overview-components-pr-fix --- cn/docs/concepts/overview/components.md | 32 ++++++++++++------------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/cn/docs/concepts/overview/components.md b/cn/docs/concepts/overview/components.md index 1db0b84346148..897ea8b42c0f2 100644 --- a/cn/docs/concepts/overview/components.md +++ b/cn/docs/concepts/overview/components.md @@ -56,32 +56,32 @@ cloud-controller-manager 允许云供应商代码和 Kubernetes 核心彼此独 [kube-scheduler](/docs/admin/kube-scheduler)监视没有分配节点的新创建的 Pod,选择一个节点供他们运行。 -### 插件 +### 插件(addons) -插件是实现集群功能的 Pod 和 Service。 Pods 可能通过 Deployments,ReplicationControllers
管理。命名空间的插件对象被创建在 `kube-system` 命名空间。 +插件是实现集群功能的 Pod 和 Service。 Pods 可以通过 Deployments,ReplicationControllers 管理。插件对象本身是受命名空间限制的,被创建于 `kube-system` 命名空间。 Addon 管理器用于创建和维护附加资源. 有关详细信息,请参阅[here](http://releases.k8s.io/HEAD/cluster/addons). #### DNS -虽然其他插件并不是严格要求的,但所有 Kubernetes 集群都应该具有[Cluster DNS](/docs/concepts/services-networking/dns-pod-service/),许多示例依赖于它。 +虽然其他插件并不是必需的,但所有 Kubernetes 集群都应该具有[Cluster DNS](/docs/concepts/services-networking/dns-pod-service/),许多示例依赖于它。 -Cluster DNS是一个 DNS 服务器,除了您的环境中的其他 DNS 服务器,它为 Kubernetes 服务提供DNS记录。 +Cluster DNS 是一个 DNS 服务器,和您部署环境中的其他 DNS 服务器一起工作,为 Kubernetes 服务提供DNS记录。 Kubernetes 启动的容器自动将 DNS 服务器包含在 DNS 搜索中。 #### 用户界面 -kube-ui 提供了集群状态的只读概述。有关更多信息,请参阅[使用HTTP代理访问 Kubernetes API](/docs/tasks/access-kubernetes-api/http-proxy-access-api/) +dashboard 提供了集群状态的只读概述。有关更多信息,请参阅[使用HTTP代理访问 Kubernetes API](/docs/tasks/access-kubernetes-api/http-proxy-access-api/) #### 容器资源监控 -[容器资源监控](/docs/user-guide/monitoring)记录关于中央数据库中的容器的通用时间序列指标,并提供用于浏览该数据的 UI。 +[容器资源监控](/docs/user-guide/monitoring)将关于容器的一些常见的时间序列度量值保存到一个集中的数据库中,并提供用于浏览这些数据的界面。 -#### 集群级日志记录 +#### 集群层面日志 -[Cluster-level logging](/docs/user-guide/logging/overview) 负责使用搜索/浏览界面将容器日志保存到中央日志存储。 +[Cluster-level logging](/docs/user-guide/logging/overview) 机制负责将容器的日志数据保存到一个集中的日志存储中,该存储能够提供搜索和浏览接口。 ## 节点组件 @@ -89,13 +89,13 @@ kube-ui 提供了集群状态的只读概述。有关更多信息,请参阅[ ### kubelet -[kubelet](/docs/admin/kubelet)是 Master 节点代理,它监视已分配给其节点的 Pod(通过 apiserver 或通过本地配置文件)和: +[kubelet](/docs/admin/kubelet)是主要的节点代理,它监测已分配给其节点的 Pod(通过 apiserver 或通过本地配置文件),提供如下的功能: -* 安装 Pod 的所需数据卷(Volume)。 +* 挂载 Pod 所需要的数据卷(Volume)。 * 下载 Pod 的 secrets。 -* 通过 Docker码 运行(或通过 rkt)运行 Pod 的容器。 -* 定期对容器生命周期进行探测。 -* 如果需要,通过创建 *mirror pod* 将报告状态报告回系统的其余部分。 +* 通过 Docker 运行(或通过 rkt)运行 Pod 的容器。 +* 周期性的对容器生命周期进行探测。 +* 如果需要,通过创建 *mirror pod* 将 Pod 的状态报告回系统的其余部分。 * 将节点的状态报告回系统的其余部分。 ### kube-proxy @@ -109,15 +109,15 @@ Docker 用于运行容器。 ### rkt -实验中支持 rkt 运行容器作为 Docker 的替代方案。 +支持 rkt 运行容器作为 Docker 的试验性替代方案。 ### supervisord -supervisord 是一个轻量级的过程监控和控制系统,可以用来保证 kubelet 和 docker 运行。 +supervisord 是一个轻量级的过程监控系统,可以用来保证 kubelet 和 docker 运行。 ### fluentd -fluentd 是一个守护进程,它有助于提供[cluster-level logging](#cluster-level-logging) 集群层级的日志。 +fluentd 是一个守护进程,它有助于提供[cluster-level logging](#cluster-level-logging) 集群层面的日志。 {% endcapture %} From eb660976ff7b5b68b98cfe5031bb1a7b7d77a172 Mon Sep 17 00:00:00 2001 From: lichuqiang Date: Fri, 22 Sep 2017 14:30:00 +0800 Subject: [PATCH 25/87] translate doc multiple-zones into chinese --- cn/docs/admin/multiple-zones.md | 292 ++++++++++++++++++++++++++++++++ 1 file changed, 292 insertions(+) create mode 100644 cn/docs/admin/multiple-zones.md diff --git a/cn/docs/admin/multiple-zones.md b/cn/docs/admin/multiple-zones.md new file mode 100644 index 0000000000000..4eac17c49f682 --- /dev/null +++ b/cn/docs/admin/multiple-zones.md @@ -0,0 +1,292 @@ +--- +approvers: +- jlowdermilk +- justinsb +- quinton-hoole +title: 多区域运行 +--- + +## 介绍 + +Kubernetes 从v1.2开始支持将集群运行在多个故障域中。 +(GCE 中称其为 "区(Zones)", AWS 中称其为 "可用区(Availability Zones)",这里我们也称其为 "区")。 +它是广泛意义上的集群联邦特性的轻量级版本 (之前被称为 ["Ubernetes"](https://github.com/kubernetes/community/blob/{{page.githubbranch}}/contributors/design-proposals/federation/federation.md))。 +完整的集群联邦能够将多个分别运行在不同区或云供应商(或本地数据中心)的集群集中管理。 +然而,很多用户只是希望通过将单一云供应商上的Kubernetes集群运行在多个区域,来提高集群的可用性, +这就是1.2版本中提供的对多区域的支持。 +(之前被称为 "Ubernetes Lite")。 + +多区域的支持是有明确限制的: Kubernetes集群能够运行在多个区,但必须在同一个地域内 (云供应商也须一致)。 +目前只有GCE和AWS自动支持 (尽管在其他云甚至裸机上,也很容易通过为节点和卷添加合适的标签来实现类似的支持)。 + + +* TOC +{:toc} + +## 功能 + +节点启动时,Kubelet自动为其添加区信息的标签。 + +在单一区域的集群中,Kubernetes 
会自动将副本管理器或服务的pod分布到各节点上 (以减轻单实例故障的影响)。 +在多区域的集群中,这种分布的行为扩展到了区域级别 +(以减少区域故障对整体的影响)。 (通过 `SelectorSpreadPriority` 来实现)。 +这种分发是尽力而为(best-effort)的,所以如果集群在各个区之间是异构的 +(比如,各区间的节点数量不同、节点类型不同、pod的资源需求不同等)可能导致pod无法完全均匀地分布。 +如果需要的话,用户可以使用同质的区(节点数量和节点类型相同)来减少区域之间分配不均匀的可能。 + +当卷被创建时, `PersistentVolumeLabel`准入控制器会自动为其添加区域的标签。 +调度器 (通过 `VolumeZonePredicate` 断言) 会确保申领该卷的pod被调度到该卷对应的区域, +因为卷是不支持跨区挂载的。 + +## 限制 + +对多区的支持有一些重要的限制: + +* 我们假设不同的区域间在网络上离得很近,所以我们不做任何的区域感知路由。 特别是,通过服务的网络访问可能跨区域 (即使该服务后端pod的其中一些运行在与客户端相同的区域中),这可能导致额外的延迟和损耗。 + +* 卷的区域亲和性只对 `PersistentVolume`有效。 例如,如果你在pod的spec中直接指定一个EBS的卷,则不会生效。 + +* 集群不支持跨云平台或地域 (这些功能需要完整的集群联邦特性支持)。 + +* 尽管节点位于多区域,目前默认情况下 kube-up 创建的管理节点是单实例的。 所以尽管服务是高可用的,并且能够容忍跨区域的性能损耗,管理平面还是单区域的。 需要高可用的管理平面的用户可以按照 [高可用](/docs/admin/high-availability) 指导来操作。 + +* 目前StatefulSet的卷动态创建时的跨区域分配,与pod的亲和性/反亲和性不兼容。 + +* StatefulSet的名称包含破折号 ("-")时,可能影响到卷在区域间的均匀分布。 + +* 为deployment或pod指定多个PVC时,要求其StorageClass处于同一区域内,否则,相应的PV卷需要在一个区域中静态配置。 另一种方式是使用StatefulSet,这可以确保同一副本所挂载的卷位于同一区内。 + + +## 演练 + +接下来我们将介绍如何同时在 GCE 和 AWS 上创建和使用多区域的集群。 为此,你需要创建一个完整的集群 +(指定 `MULTIZONE=true`),然后再次执行 `kube-up`(指定 `KUBE_USE_EXISTING_MASTER=true`)来添加其他区域的节点。 + +### 创建集群 + +按正常方式创建集群,但是传入 MULTIZONE 来通知集群对多区域进行管理。 在 us-central1-a 区域创建节点。 + +GCE: + +```shell +curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-a NUM_NODES=3 bash +``` + +AWS: + +```shell +curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a NUM_NODES=3 bash +``` + +该步骤按正常方式创建了集群,仍然运行在单个区域中。 +但 `MULTIZONE=true` 已经开启了多区域的能力。 + +### 标记节点 + +查看节点,你可以发现节点上打了区域信息的标签。 +节点位于 `us-central1-a` (GCE) 或者 `us-west-2a` (AWS)。 标签 `failure-domain.beta.kubernetes.io/region` 用于区分地域, +标签 `failure-domain.beta.kubernetes.io/zone` 用于区分区域。 + +```shell +> kubectl get nodes --show-labels + + +NAME STATUS AGE VERSION
LABELS +kubernetes-master Ready,SchedulingDisabled 16m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master +kubernetes-minion-281d Ready 2m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d +kubernetes-minion-87j9 Ready 16m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9 +kubernetes-minion-9vlv Ready 16m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv +kubernetes-minion-a12q Ready 17m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q +kubernetes-minion-pp2f Ready 2m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-pp2f +kubernetes-minion-wf8i Ready 2m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-wf8i +``` + +### 卷的亲和性 + +使用动态创建卷的功能创建一个卷 (只有PV持久卷才支持区域亲和性): + +```json +kubectl create -f - < kubectl get pv --show-labels +NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE LABELS +pv-gce-mj4gm 5Gi RWO Bound default/claim1 46s failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a +``` + +现在我们将创建使用这些PVC的pod。 +因为 GCE 的PD存储 / AWS 的EBS 卷 不支持跨区域挂载, +这意味着相应的pod只能创建在卷所在的区域中。 + +```yaml +kubectl create -f - < kubectl describe pod mypod | grep Node +Node: kubernetes-minion-9vlv/10.240.0.5 +> kubectl get node kubernetes-minion-9vlv --show-labels +NAME STATUS AGE VERSION LABELS +kubernetes-minion-9vlv Ready 22m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv +``` + +### Pod的跨区域分布 + +副本管理器或服务的pod被自动创建在了不同的区域。 首先,在第三个区域内启动节点: + +GCE: + +```shell +KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-f NUM_NODES=3 kubernetes/cluster/kube-up.sh +``` + +AWS: + +```shell +KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2c NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.2.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh +``` + +验证你现在在3个区域内拥有节点: + +```shell +kubectl get nodes --show-labels +``` + +创建 guestbook-go 示例应用, 它包含一个副本数为3的RC,运行一个简单的网络应用: + +```shell +find kubernetes/examples/guestbook-go/ -name '*.json' | xargs -I {} kubectl create -f {} +``` + +Pod应该分布在全部3个区域上: + +```shell +> kubectl describe pod -l app=guestbook | grep Node +Node: kubernetes-minion-9vlv/10.240.0.5 +Node: kubernetes-minion-281d/10.240.0.8 +Node: kubernetes-minion-olsh/10.240.0.11 + + > kubectl get node 
kubernetes-minion-9vlv kubernetes-minion-281d kubernetes-minion-olsh --show-labels +NAME STATUS AGE VERSION LABELS +kubernetes-minion-9vlv Ready 34m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv +kubernetes-minion-281d Ready 20m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d +kubernetes-minion-olsh Ready 3m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-f,kubernetes.io/hostname=kubernetes-minion-olsh +``` + + +负载平衡器覆盖集群中的所有区域; guestbook-go 示例包含一个 +负载均衡服务的例子: + +```shell +> kubectl describe service guestbook | grep LoadBalancer.Ingress +LoadBalancer Ingress: 130.211.126.21 + +> ip=130.211.126.21 + +> curl -s http://${ip}:3000/env | grep HOSTNAME + "HOSTNAME": "guestbook-44sep", + +> (for i in `seq 20`; do curl -s http://${ip}:3000/env | grep HOSTNAME; done) | sort | uniq + "HOSTNAME": "guestbook-44sep", + "HOSTNAME": "guestbook-hum5n", + "HOSTNAME": "guestbook-ppm40", +``` + +负载平衡器正确指向了所有的pod,即使它们位于不同的区域内。 + +### 停止集群 + +使用完成后,进行清理: + +GCE: + +```shell +KUBERNETES_PROVIDER=gce KUBE_USE_EXISTING_MASTER=true KUBE_GCE_ZONE=us-central1-f kubernetes/cluster/kube-down.sh +KUBERNETES_PROVIDER=gce KUBE_USE_EXISTING_MASTER=true KUBE_GCE_ZONE=us-central1-b kubernetes/cluster/kube-down.sh +KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-a kubernetes/cluster/kube-down.sh +``` + +AWS: + +```shell +KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2c kubernetes/cluster/kube-down.sh +KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2b kubernetes/cluster/kube-down.sh +KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a kubernetes/cluster/kube-down.sh +``` From 27f78d7d53831b9ef6217828bbd9546a7837405c Mon Sep 17 00:00:00 2001 From: lichuqiang Date: Thu, 21 Sep 2017 16:16:42 +0800 Subject: [PATCH 26/87] translate doc accessing-the-api into chinese --- cn/docs/admin/accessing-the-api.md | 135 +++++++++++++++++++++++++++++ 1 file changed, 135 insertions(+) create mode 100644 cn/docs/admin/accessing-the-api.md diff --git a/cn/docs/admin/accessing-the-api.md b/cn/docs/admin/accessing-the-api.md new file mode 100644 index 0000000000000..930cf620fd15a --- /dev/null +++ b/cn/docs/admin/accessing-the-api.md @@ -0,0 +1,135 @@ +--- +approvers: +- bgrant0607 +- erictune +- lavalamp +title: Kubernetes API访问控制 +--- + +用户通过 `kubectl`、客户端库或者通过发送REST请求[访问API](/docs/user-guide/accessing-the-cluster)。 用户(自然人)和[Kubernetes服务账户](/docs/tasks/configure-pod-container/configure-service-account/) 都可以被授权进行API访问。 +请求到达API服务器后会经过几个阶段,具体说明如图: + +![Diagram of request handling steps for Kubernetes API request](/images/docs/admin/access-control-overview.svg) + +## 传输层安全 + +在典型的Kubernetes集群中,API通过443端口提供服务。 +API服务器会提供一份证书。 该证书一般是自签名的, 所以用户机器上的 `$USER/.kube/config` 文件通常 +包含该API服务器证书的根证书,用来代替系统默认根证书。 当用户使用 `kube-up.sh` 创建集群时,该证书通常会被自动写入用户的`$USER/.kube/config`。 如果集群中存在多个用户,则创建者需要与其他用户共享证书。 + +## 认证 + +一旦 TLS 连接建立,HTTP请求就进入到了认证的步骤。即图中的步骤 **1** 。 +集群创建脚本或集群管理员会为API服务器配置一个或多个认证模块。 +更具体的认证相关的描述详见 [这里](/docs/admin/authentication/)。 + +认证步骤的输入是整个HTTP请求,但这里通常只是检查请求头和/或客户端证书。 + +认证模块支持客户端证书,密码和Plain Tokens, +Bootstrap Tokens,以及JWT Tokens (用于服务账户)。 +
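As a concrete illustration of token authentication against the secure port, a pod's service-account credentials can be replayed through curl; the server address below is a placeholder and the in-pod paths are the conventional defaults:

```shell
# Sketch: authenticate to the secure port with a service-account token.
# Run inside a pod; APISERVER is a placeholder for your master's address.
APISERVER=https://10.0.0.1:6443
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer ${TOKEN}" \
     "${APISERVER}/api/v1/namespaces/default/pods"
```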
+(管理员)可以同时设置多种认证模块,在设置了多个认证模块的情况下,每个模块会依次尝试认证, +直到其中一个认证成功。 + +在 GCE 平台中,客户端证书,密码和Plain Tokens,Bootstrap Tokens,以及JWT Tokens同时被启用。 + +如果请求认证失败,则请求被拒绝,返回401状态码。 +如果认证成功,则被认证为具体的 `username`,该用户名可供随后的步骤中使用。一些认证模块还提供了用户的组成员关系,另一些则没有。 + +尽管Kubernetes使用 "用户名" 来进行访问控制和请求记录,但它实际上并没有 `user` 对象,也不存储用户名称或其他相关信息。 + +## 授权 + +当请求被认证为来自某个特定的用户后,该请求需要被授权。 即图中的步骤 **2** 。 + +请求须包含请求者的用户名,请求动作,以及该动作影响的对象。 如果存在相应策略,声明该用户具有进行相应操作的权限,则该请求会被授权。 + +例如,如果Bob有如下策略,那么他只能够读取`projectCaribou`命名空间下的pod资源: + +```json +{ + "apiVersion": "abac.authorization.kubernetes.io/v1beta1", + "kind": "Policy", + "spec": { + "user": "bob", + "namespace": "projectCaribou", + "resource": "pods", + "readonly": true + } +} +``` +如果Bob发起以下请求,那么请求能够通过授权,因为Bob被允许访问 `projectCaribou` 命名空间下的对象: + +```json +{ + "apiVersion": "authorization.k8s.io/v1beta1", + "kind": "SubjectAccessReview", + "spec": { + "resourceAttributes": { + "namespace": "projectCaribou", + "verb": "get", + "group": "unicorn.example.org", + "resource": "pods" + } + } +} +``` +如果Bob对 `projectCaribou` 命名空间下的对象发起一个写(`create` 或者 `update`)请求,那么他的授权会被拒绝。 如果Bob请求读取(`get`) 其他命名空间,例如 `projectFish`下的对象,其授权也会被拒绝。 + +Kubernetes的授权要求使用通用的REST属性与现有的组织或云服务提供商的访问控制系统进行交互。 采用REST格式是必要的,因为除Kubernetes外,这些访问控制系统还可能与其他的API进行交互。 + +Kubernetes 支持多种授权模块,例如ABAC模式,RBAC模式和 Webhook模式。 管理员创建集群时,会配置API服务器应用的授权模块。 如果多种授权模式同时被启用,Kubernetes将检查所有模块,如果其中一种通过授权,则请求授权通过。 如果所有的模块全部拒绝,则请求被拒绝(HTTP状态码403)。 + +要了解更多的Kubernetes授权相关信息,包括使用授权模块创建策略的具体说明等,可参考[授权概述](/docs/admin/authorization)。 + + +## 准入控制 + +准入控制模块是能够修改或拒绝请求的软件模块。 +作为授权模块的补充,准入控制模块会访问被创建或更新的对象的内容。 +它们作用于对象的创建,删除,更新和连接 (proxy)阶段,但不包括对象的读取。 + +可以同时配置多个准入控制器,它们会按顺序依次被调用。 + +即图中的步骤 **3** 。 + +与认证和授权模块不同的是,如果任一个准入控制器拒绝请求,那么整个请求会立即被拒绝。 + +除了拒绝请求外,准入控制器还可以为对象设置复杂的默认值。 + +可用的准入控制模块描述 [如下](/docs/admin/admission-controllers/)。 + +一旦请求通过所有准入控制器,将使用对应API对象的验证流程对其进行验证,然后写入对象存储 (如步骤 **4**)。 + + +## API的端口和IP + +上述讨论适用于发送请求到API服务器的安全端口(典型情况)。 +实际上API服务器可以通过两个端口提供服务: + +默认情况下,API服务器在2个端口上提供HTTP服务: + + 1. `Localhost Port`: + + - 用于测试和启动,以及管理节点的其他组件 + (scheduler, controller-manager)与API的交互 + - 没有TLS + - 默认值为8080,可以通过 `--insecure-port` 标记来修改。 + - 默认的IP地址为localhost, 可以通过 `--insecure-bind-address`标记来修改。 + - 请求会 **绕过** 认证和鉴权模块。 + - 请求会被准入控制模块处理。 + - 其访问需要主机访问的权限。 + + 2. `Secure Port`: + + - 尽可能使用该端口访问 + - 应用 TLS。 可以通过 `--tls-cert-file` 设置证书, 通过 `--tls-private-key-file` 设置私钥。 + - 默认值为6443,可以通过 `--secure-port` 标记来修改。 + - 默认IP是首个非本地的网络接口地址,可以通过 `--bind-address` 标记来修改。 + - 请求会经过认证和鉴权模块处理。 + - 请求会被准入控制模块处理。 + - 要求认证和授权模块正常运行。 + +通过 `kube-up.sh`创建集群时, 对 Google Compute Engine (GCE) +和一些其他的云供应商来说, API通过443端口提供服务。 对 +GCE而言,项目上配置了防火墙规则,允许外部的HTTPS请求访问API,其他(厂商的)集群设置方法各不相同。 From ed5d92d91e461e58c2635537c51b518fa531f11d Mon Sep 17 00:00:00 2001 From: Steve Perry Date: Fri, 22 Sep 2017 19:54:37 -0700 Subject: [PATCH 27/87] Experiment: Add trailing slash to eliminate redirection. (#5590) --- docs/tasks/federation/set-up-cluster-federation-kubefed.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tasks/federation/set-up-cluster-federation-kubefed.md b/docs/tasks/federation/set-up-cluster-federation-kubefed.md index c613f1e03d8d9..af6e7f83c9080 100644 --- a/docs/tasks/federation/set-up-cluster-federation-kubefed.md +++ b/docs/tasks/federation/set-up-cluster-federation-kubefed.md @@ -373,7 +373,7 @@ For more information see Once you've deployed a federation control plane, you'll need to make that control plane aware of the clusters it should manage.
You can add -a cluster to your federation by using the [`kubefed join`](/docs/admin/kubefed_join) +a cluster to your federation by using the [`kubefed join`](/docs/admin/kubefed_join/) command. To use `kubefed join`, you'll need to provide the name of the cluster From 9b31b9a4d3d8676b0195c932c60720babfbe1a60 Mon Sep 17 00:00:00 2001 From: Steve Perry Date: Sat, 23 Sep 2017 03:41:18 -0700 Subject: [PATCH 28/87] Add trailing slashes to reduce redirects. (#5592) --- .../cluster-administration/cloud-providers.md | 2 +- .../cluster-administration-overview.md | 18 ++-- .../container-environment-variables.md | 4 +- docs/concepts/overview/components.md | 20 ++-- .../workloads/controllers/statefulset.md | 4 +- .../coreos/bare_metal_offline.md | 6 +- docs/getting-started-guides/dcos.md | 2 +- docs/getting-started-guides/mesos/index.md | 4 +- docs/getting-started-guides/scratch.md | 34 +++---- docs/getting-started-guides/ubuntu/index.md | 30 +++--- .../ubuntu/installation.md | 16 +-- docs/getting-started-guides/vsphere.md | 2 +- docs/setup/independent/install-kubeadm.md | 2 +- docs/setup/pick-right-solution.md | 98 +++++++++---------- .../configure-pod-configmap.md | 2 +- .../configure-service-account.md | 2 +- .../set-up-cluster-federation-kubefed.md | 2 +- 17 files changed, 124 insertions(+), 124 deletions(-) diff --git a/docs/concepts/cluster-administration/cloud-providers.md b/docs/concepts/cluster-administration/cloud-providers.md index 4c0034efb9108..e31c0acf23fff 100644 --- a/docs/concepts/cluster-administration/cloud-providers.md +++ b/docs/concepts/cluster-administration/cloud-providers.md @@ -13,7 +13,7 @@ This section describes all the possible configurations which can be used when running Kubernetes on Amazon Web Services. ## Load Balancers -You can setup [external load balancers](/docs/tasks/access-application-cluster/create-external-load-balancer) +You can setup [external load balancers](/docs/tasks/access-application-cluster/create-external-load-balancer/) to use specific features in AWS by configuring the annotations as shown below. ```yaml diff --git a/docs/concepts/cluster-administration/cluster-administration-overview.md b/docs/concepts/cluster-administration/cluster-administration-overview.md index 9f6a9018c077c..f430a389e64af 100644 --- a/docs/concepts/cluster-administration/cluster-administration-overview.md +++ b/docs/concepts/cluster-administration/cluster-administration-overview.md @@ -18,23 +18,23 @@ See the guides in [Picking the Right Solution](/docs/setup/pick-right-solution/) Before choosing a guide, here are some considerations: - Do you just want to try out Kubernetes on your computer, or do you want to build a high-availability, multi-node cluster? Choose distros best suited for your needs. - - **If you are designing for high-availability**, learn about configuring [clusters in multiple zones](/docs/admin/multi-cluster). + - **If you are designing for high-availability**, learn about configuring [clusters in multiple zones](/docs/admin/multi-cluster/). - Will you be using **a hosted Kubernetes cluster**, such as [Google Container Engine (GKE)](https://cloud.google.com/container-engine/), or **hosting your own cluster**? - Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly support hybrid clusters. Instead, you can set up multiple clusters. - - **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/admin/networking) fits best. 
One option for custom networking is [*OpenVSwitch GRE/VxLAN networking*](/docs/admin/ovs-networking/), which uses OpenVSwitch to set up networking between pods across Kubernetes nodes. + - **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/admin/networking/) fits best. One option for custom networking is [*OpenVSwitch GRE/VxLAN networking*](/docs/admin/ovs-networking/), which uses OpenVSwitch to set up networking between pods across Kubernetes nodes. - Will you be running Kubernetes on **"bare metal" hardware** or on **virtual machines (VMs)**? - Do you **just want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? If the latter, choose a actively-developed distro. Some distros only use binary releases, but offer a greater variety of choices. - - Familiarize yourself with the [components](/docs/admin/cluster-components) needed to run a cluster. + - Familiarize yourself with the [components](/docs/admin/cluster-components/) needed to run a cluster. Note: Not all distros are actively maintained. Choose distros which have been tested with a recent version of Kubernetes. -If you are using a guide involving Salt, see [Configuring Kubernetes with Salt](/docs/admin/salt). +If you are using a guide involving Salt, see [Configuring Kubernetes with Salt](/docs/admin/salt/). ## Managing a cluster -* [Managing a cluster](/docs/concepts/cluster-administration/cluster-management/) describes several topics related to the lifecycle of a cluster: creating a new cluster, upgrading your cluster’s master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a running cluster.. +* [Managing a cluster](/docs/concepts/cluster-administration/cluster-management/) describes several topics related to the lifecycle of a cluster: creating a new cluster, upgrading your cluster’s master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a running cluster. * Learn how to [manage nodes](/docs/concepts/nodes/node/). @@ -44,13 +44,13 @@ If you are using a guide involving Salt, see [Configuring Kubernetes with Salt]( * [Kubernetes Container Environment](/docs/concepts/containers/container-environment-variables/) describes the environment for Kubelet managed containers on a Kubernetes node. -* [Controlling Access to the Kubernetes API](/docs/admin/accessing-the-api) describes how to set up permissions for users and service accounts. +* [Controlling Access to the Kubernetes API](/docs/admin/accessing-the-api/) describes how to set up permissions for users and service accounts. -* [Authenticating](/docs/admin/authentication) explains authentication in Kubernetes, including the various authentication options. +* [Authenticating](/docs/admin/authentication/) explains authentication in Kubernetes, including the various authentication options. -* [Authorization](/docs/admin/authorization) is separate from authentication, and controls how HTTP calls are handled. +* [Authorization](/docs/admin/authorization/) is separate from authentication, and controls how HTTP calls are handled. -* [Using Admission Controllers](/docs/admin/admission-controllers) explains plug-ins which intercepts requests to the Kubernetes API server after authentication and authorization. 
+* [Using Admission Controllers](/docs/admin/admission-controllers/) explains plug-ins which intercepts requests to the Kubernetes API server after authentication and authorization. * [Using Sysctls in a Kubernetes Cluster](/docs/concepts/cluster-administration/sysctl-cluster/) describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters . diff --git a/docs/concepts/containers/container-environment-variables.md b/docs/concepts/containers/container-environment-variables.md index 07f2e2caba383..d5d0975cb7669 100644 --- a/docs/concepts/containers/container-environment-variables.md +++ b/docs/concepts/containers/container-environment-variables.md @@ -19,7 +19,7 @@ This page describes the resources available to Containers in the Container envir The Kubernetes Container environment provides several important resources to Containers: -* A filesystem, which is a combination of an [image](/docs/concepts/containers/images) and one or more [volumes](/docs/concepts/storage/volumes). +* A filesystem, which is a combination of an [image](/docs/concepts/containers/images/) and one or more [volumes](/docs/concepts/storage/volumes/). * Information about the Container itself. * Information about other objects in the cluster. @@ -31,7 +31,7 @@ It is available through the `hostname` command or the function call in libc. The Pod name and namespace are available as environment variables through the -[downward API](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information). +[downward API](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/). User defined environment variables from the Pod definition are also available to the Container, as are any environment variables specified statically in the Docker image. diff --git a/docs/concepts/overview/components.md b/docs/concepts/overview/components.md index 3cf64483ca1a8..52fcd42175367 100644 --- a/docs/concepts/overview/components.md +++ b/docs/concepts/overview/components.md @@ -18,20 +18,20 @@ cluster (for example, scheduling), and detecting and responding to cluster event Master components can be run on any node in the cluster. However, for simplicity, set up scripts typically start all master components on the same VM, and do not run user containers on this VM. See -[Building High-Availability Clusters](/docs/admin/high-availability) for an example multi-master-VM setup. +[Building High-Availability Clusters](/docs/admin/high-availability/) for an example multi-master-VM setup. ### kube-apiserver -[kube-apiserver](/docs/admin/kube-apiserver) exposes the Kubernetes API. It is the front-end for the -Kubernetes control plane. It is designed to scale horizontally -- that is, it scales by deploying more instances. See [Building High-Availability Clusters](/docs/admin/high-availability). +[kube-apiserver](/docs/admin/kube-apiserver/) exposes the Kubernetes API. It is the front-end for the +Kubernetes control plane. It is designed to scale horizontally -- that is, it scales by deploying more instances. See [Building High-Availability Clusters](/docs/admin/high-availability/). ### etcd -[etcd](/docs/tasks/administer-cluster/configure-upgrade-etcd) is used as Kubernetes' backing store. All cluster data is stored here. Always have a backup plan for etcd's data for your Kubernetes cluster. +[etcd](/docs/tasks/administer-cluster/configure-upgrade-etcd/) is used as Kubernetes' backing store. All cluster data is stored here. 
Always have a backup plan for etcd's data for your Kubernetes cluster. ### kube-controller-manager -[kube-controller-manager](/docs/admin/kube-controller-manager) runs controllers, which are the background threads that handle routine tasks in the cluster. Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process. +[kube-controller-manager](/docs/admin/kube-controller-manager/) runs controllers, which are the background threads that handle routine tasks in the cluster. Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process. These controllers include: @@ -58,7 +58,7 @@ The following controllers have cloud provider dependencies: ### kube-scheduler -[kube-scheduler](/docs/admin/kube-scheduler) watches newly created pods that have no node assigned, and +[kube-scheduler](/docs/admin/kube-scheduler/) watches newly created pods that have no node assigned, and selects a node for them to run on. ### addons @@ -84,12 +84,12 @@ Containers started by Kubernetes automatically include this DNS server in their #### Container Resource Monitoring -[Container Resource Monitoring](/docs/tasks/debug-application-cluster/resource-usage-monitoring) records generic time-series metrics +[Container Resource Monitoring](/docs/tasks/debug-application-cluster/resource-usage-monitoring/) records generic time-series metrics about containers in a central database, and provides a UI for browsing that data. #### Cluster-level Logging -A [Cluster-level logging](/docs/concepts/cluster-administration/logging) mechanism is responsible for +A [Cluster-level logging](/docs/concepts/cluster-administration/logging/) mechanism is responsible for saving container logs to a central log store with search/browsing interface. ## Node components @@ -98,7 +98,7 @@ Node components run on every node, maintaining running pods and providing the Ku ### kubelet -[kubelet](/docs/admin/kubelet) is the primary node agent. It watches for pods that have been assigned to its node (either by apiserver or via local configuration file) and: +[kubelet](/docs/admin/kubelet/) is the primary node agent. It watches for pods that have been assigned to its node (either by apiserver or via local configuration file) and: * Mounts the pod's required volumes. * Downloads the pod's secrets. @@ -109,7 +109,7 @@ Node components run on every node, maintaining running pods and providing the Ku ### kube-proxy -[kube-proxy](/docs/admin/kube-proxy) enables the Kubernetes service abstraction by maintaining +[kube-proxy](/docs/admin/kube-proxy/) enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding. diff --git a/docs/concepts/workloads/controllers/statefulset.md b/docs/concepts/workloads/controllers/statefulset.md index e06551d6d43c7..1aefb854f8937 100644 --- a/docs/concepts/workloads/controllers/statefulset.md +++ b/docs/concepts/workloads/controllers/statefulset.md @@ -160,7 +160,7 @@ The StatefulSet should not specify a `pod.Spec.TerminationGracePeriodSeconds` of When the nginx example above is created, three Pods will be deployed in the order web-0, web-1, web-2. web-1 will not be deployed before web-0 is -[Running and Ready](/docs/user-guide/pod-states), and web-2 will not be deployed until +[Running and Ready](/docs/user-guide/pod-states/), and web-2 will not be deployed until web-1 is Running and Ready. 
If web-0 should fail, after web-1 is Running and Ready, but before web-2 is launched, web-2 will not be launched until web-0 is successfully relaunched and becomes Running and Ready. @@ -225,7 +225,7 @@ update, roll out a canary, or perform a phased roll out. {% endcapture %} {% capture whatsnext %} -* Follow an example of [deploying a stateful application](/docs/tutorials/stateful-application/basic-stateful-set). +* Follow an example of [deploying a stateful application](/docs/tutorials/stateful-application/basic-stateful-set/). * Follow an example of [deploying Cassandra with Stateful Sets](/docs/tutorials/stateful-application/cassandra/). {% endcapture %} diff --git a/docs/getting-started-guides/coreos/bare_metal_offline.md b/docs/getting-started-guides/coreos/bare_metal_offline.md index c7684f73b22a0..35824a4f03fd9 100644 --- a/docs/getting-started-guides/coreos/bare_metal_offline.md +++ b/docs/getting-started-guides/coreos/bare_metal_offline.md @@ -631,7 +631,7 @@ Reboot these servers to get the images PXEd and ready for running containers! Now that the CoreOS with Kubernetes installed is up and running lets spin up some Kubernetes pods to demonstrate the system. -See [a simple nginx example](/docs/user-guide/simple-nginx) to try out your new cluster. +See [a simple nginx example](/docs/user-guide/simple-nginx/) to try out your new cluster. For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/). @@ -683,6 +683,6 @@ for i in `kubectl get pods | awk '{print $1}'`; do kubectl delete pod $i; done IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/bare_metal_offline) | | Community ([@jeffbean](https://github.com/jeffbean)) +Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/bare_metal_offline/) | | Community ([@jeffbean](https://github.com/jeffbean)) -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions/) chart. diff --git a/docs/getting-started-guides/dcos.md b/docs/getting-started-guides/dcos.md index 07aa56c002f0e..816bd288a5276 100644 --- a/docs/getting-started-guides/dcos.md +++ b/docs/getting-started-guides/dcos.md @@ -138,6 +138,6 @@ $ dcos package uninstall kubernetes IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -DCOS | Marathon | CoreOS/Alpine | custom | [docs](/docs/getting-started-guides/dcos) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) +DCOS | Marathon | CoreOS/Alpine | custom | [docs](/docs/getting-started-guides/dcos/) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. 
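To complement the uninstall command shown above, here is a minimal sketch of the corresponding install-and-verify flow with the DC/OS CLI. The package name matches the one used in this guide; available subcommands and flags vary across DC/OS releases:

```shell
$ dcos package install kubernetes
$ dcos package list
```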
diff --git a/docs/getting-started-guides/mesos/index.md b/docs/getting-started-guides/mesos/index.md index c2fb280fbc4fa..f40c41ad707e2 100644 --- a/docs/getting-started-guides/mesos/index.md +++ b/docs/getting-started-guides/mesos/index.md @@ -309,10 +309,10 @@ Address 1: 10.10.10.1 IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Mesos/GCE | | | | [docs](/docs/getting-started-guides/mesos) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) +Mesos/GCE | | | | [docs](/docs/getting-started-guides/mesos/) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions/) chart. ## What next? diff --git a/docs/getting-started-guides/scratch.md b/docs/getting-started-guides/scratch.md index eebdebb78b4b3..20bcecfef768c 100644 --- a/docs/getting-started-guides/scratch.md +++ b/docs/getting-started-guides/scratch.md @@ -27,7 +27,7 @@ steps that existing cluster setup scripts are making. 1. You should be familiar with using Kubernetes already. We suggest you set up a temporary cluster by following one of the other Getting Started Guides. - This will help you become familiar with the CLI ([kubectl](/docs/user-guide/kubectl/)) and concepts ([pods](/docs/user-guide/pods), [services](/docs/user-guide/services), etc.) first. + This will help you become familiar with the CLI ([kubectl](/docs/user-guide/kubectl/)) and concepts ([pods](/docs/user-guide/pods/), [services](/docs/user-guide/services/), etc.) first. 1. You should have `kubectl` installed on your desktop. This will happen as a side effect of completing one of the other Getting Started Guides. If not, follow the instructions [here](/docs/tasks/kubectl/install/). @@ -58,7 +58,7 @@ on how flags are set on various components. ### Network #### Network Connectivity -Kubernetes has a distinctive [networking model](/docs/admin/networking). +Kubernetes has a distinctive [networking model](/docs/admin/networking/). Kubernetes allocates an IP address to each pod. When creating a cluster, you need to allocate a block of IPs for Kubernetes to use as Pod IPs. The simplest @@ -87,11 +87,11 @@ to implement one of the above options: - [Open vSwitch (OVS)](http://openvswitch.org/) - [Romana](http://romana.io/) - [Weave](http://weave.works/) - - [More found here](/docs/admin/networking#how-to-achieve-this) + - [More found here](/docs/admin/networking#how-to-achieve-this/) - You can also write your own. - **Compile support directly into Kubernetes** - This can be done by implementing the "Routes" interface of a Cloud Provider module. - - The Google Compute Engine ([GCE](/docs/getting-started-guides/gce)) and [AWS](/docs/getting-started-guides/aws) guides use this approach. + - The Google Compute Engine ([GCE](/docs/getting-started-guides/gce)/) and [AWS](/docs/getting-started-guides/aws/) guides use this approach. - **Configure the network external to Kubernetes** - This can be done by manually running commands, or through a set of externally maintained scripts. 
- You have to implement this yourself, but it can give you an extra degree of flexibility. @@ -113,7 +113,7 @@ You will need to select an address range for the Pod IPs. Note that IPv6 is not using `10.10.0.0/24` through `10.10.255.0/24`, respectively. - Need to make these routable or connect with overlay. -Kubernetes also allocates an IP to each [service](/docs/user-guide/services). However, +Kubernetes also allocates an IP to each [service](/docs/user-guide/services/). However, service IPs do not necessarily need to be routable. The kube-proxy takes care of translating Service IPs to Pod IPs before traffic leaves the node. You do need to Allocate a block of IPs for services. Call this @@ -237,7 +237,7 @@ You need to prepare several certs: Unless you plan to have a real CA generate your certs, you will need to generate a root cert and use that to sign the master, kubelet, and kubectl certs. How to do this is described in the [authentication -documentation](/docs/admin/authentication/#creating-certificates). +documentation](/docs/admin/authentication/#creating-certificates/). You will end up with the following files (we will use these variables later on) @@ -263,7 +263,7 @@ The admin user (and any users) need: Your tokens and passwords need to be stored in a file for the apiserver to read. This guide uses `/var/lib/kube-apiserver/known_tokens.csv`. -The format for this file is described in the [authentication documentation](/docs/admin/authentication). +The format for this file is described in the [authentication documentation](/docs/admin/authentication/). For distributing credentials to clients, the convention in Kubernetes is to put the credentials into a [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/). @@ -408,7 +408,7 @@ Arguments to consider: - `--docker-root=` - `--root-dir=` - `--configure-cbr0=` (described below) - - `--register-node` (described in [Node](/docs/admin/node) documentation.) + - `--register-node` (described in [Node](/docs/admin/node/) documentation.) ### kube-proxy @@ -430,7 +430,7 @@ Each node needs to be allocated its own CIDR range for pod networking. Call this `NODE_X_POD_CIDR`. A bridge called `cbr0` needs to be created on each node. The bridge is explained -further in the [networking documentation](/docs/admin/networking). The bridge itself +further in the [networking documentation](/docs/admin/networking/). The bridge itself needs an address from `$NODE_X_POD_CIDR` - by convention the first IP. Call this `NODE_X_BRIDGE_ADDR`. For example, if `NODE_X_POD_CIDR` is `10.0.0.0/16`, then `NODE_X_BRIDGE_ADDR` is `10.0.0.1/16`. NOTE: this retains the `/16` suffix @@ -499,7 +499,7 @@ traffic to the internet, but have no problem with them inside your GCE Project. The previous steps all involved "conventional" system administration techniques for setting up machines. You may want to use a Configuration Management system to automate the node configuration -process. There are examples of [Saltstack](/docs/admin/salt), Ansible, Juju, and CoreOS Cloud Config in the +process. There are examples of [Saltstack](/docs/admin/salt/), Ansible, Juju, and CoreOS Cloud Config in the various Getting Started Guides. ## Bootstrapping the Cluster @@ -524,7 +524,7 @@ You will need to run one or more instances of etcd. - Highly available - Run 3 or 5 etcd instances with non durable storage. **Note:** Log can be written to non-durable storage because storage is replicated. 
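As a concrete sketch for the durability discussion above — assuming Docker and the etcd 2.x image Kubernetes shipped in this era (the image tag and flags are illustrative, not the guide's prescribed procedure) — a single non-clustered member could be started like this:

```shell
# Single-member etcd for a test master; for HA, run 3 or 5 members
# and set the etcd clustering flags accordingly.
docker run --net=host -d -v /var/etcd/data:/var/etcd/data \
  gcr.io/google_containers/etcd:2.2.1 \
  /usr/local/bin/etcd \
  --listen-client-urls=http://127.0.0.1:2379,http://127.0.0.1:4001 \
  --advertise-client-urls=http://127.0.0.1:2379,http://127.0.0.1:4001 \
  --data-dir=/var/etcd/data
```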
-See [cluster-troubleshooting](/docs/admin/cluster-troubleshooting) for more discussion on factors affecting cluster +See [cluster-troubleshooting](/docs/admin/cluster-troubleshooting/) for more discussion on factors affecting cluster availability. To run an etcd instance: @@ -633,7 +633,7 @@ Here are some apiserver flags you may need to set: - `--tls-cert-file=/srv/kubernetes/server.cert` - `--tls-private-key-file=/srv/kubernetes/server.key` - `--admission-control=$RECOMMENDED_LIST` - - See [admission controllers](/docs/admin/admission-controllers) for recommended arguments. + - See [admission controllers](/docs/admin/admission-controllers/) for recommended arguments. - `--allow-privileged=true`, only if you trust your cluster user to run pods as root. If you are following the firewall-only security approach, then use these arguments: @@ -841,9 +841,9 @@ Notes for setting up each cluster service are given below: * [Setup instructions](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/) * [Admin Guide](/docs/concepts/services-networking/dns-pod-service/) * Cluster-level Logging - * [Cluster-level Logging Overview](/docs/user-guide/logging/overview) - * [Cluster-level Logging with Elasticsearch](/docs/user-guide/logging/elasticsearch) - * [Cluster-level Logging with Stackdriver Logging](/docs/user-guide/logging/stackdriver) + * [Cluster-level Logging Overview](/docs/user-guide/logging/overview/) + * [Cluster-level Logging with Elasticsearch](/docs/user-guide/logging/elasticsearch/) + * [Cluster-level Logging with Stackdriver Logging](/docs/user-guide/logging/stackdriver/) * Container Resource Monitoring * [Setup instructions](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/) * GUI @@ -904,7 +904,7 @@ If you run into trouble, please see the section on [troubleshooting](/docs/getti IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -any | any | any | any | [docs](/docs/getting-started-guides/scratch) | | Community ([@erictune](https://github.com/erictune)) +any | any | any | any | [docs](/docs/getting-started-guides/scratch/) | | Community ([@erictune](https://github.com/erictune)) -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions/) chart. 
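Rounding out the from-scratch walkthrough above, one possible smoke test after bringing the cluster up (kubectl subcommands as of this release; output will vary by setup, and the deployment name here is just an example):

```shell
kubectl cluster-info
kubectl get componentstatuses
kubectl get nodes
kubectl run smoke-test --image=nginx --replicas=2
kubectl get pods -o wide
kubectl delete deployment smoke-test
```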
diff --git a/docs/getting-started-guides/ubuntu/index.md b/docs/getting-started-guides/ubuntu/index.md index 576f35d09adcd..c851f32a6e71b 100644 --- a/docs/getting-started-guides/ubuntu/index.md +++ b/docs/getting-started-guides/ubuntu/index.md @@ -36,24 +36,24 @@ conjure-up kubernetes These are more in-depth guides for users choosing to run Kubernetes in production: - - [Installation](/docs/getting-started-guides/ubuntu/installation) - - [Validation](/docs/getting-started-guides/ubuntu/validation) - - [Backups](/docs/getting-started-guides/ubuntu/backups) - - [Upgrades](/docs/getting-started-guides/ubuntu/upgrades) - - [Scaling](/docs/getting-started-guides/ubuntu/scaling) - - [Logging](/docs/getting-started-guides/ubuntu/logging) - - [Monitoring](/docs/getting-started-guides/ubuntu/monitoring) - - [Networking](/docs/getting-started-guides/ubuntu/networking) - - [Security](/docs/getting-started-guides/ubuntu/security) - - [Storage](/docs/getting-started-guides/ubuntu/storage) - - [Troubleshooting](/docs/getting-started-guides/ubuntu/troubleshooting) - - [Decommissioning](/docs/getting-started-guides/ubuntu/decommissioning) - - [Operational Considerations](/docs/getting-started-guides/ubuntu/operational-considerations) - - [Glossary](/docs/getting-started-guides/ubuntu/glossary) + - [Installation](/docs/getting-started-guides/ubuntu/installation/) + - [Validation](/docs/getting-started-guides/ubuntu/validation/) + - [Backups](/docs/getting-started-guides/ubuntu/backups/) + - [Upgrades](/docs/getting-started-guides/ubuntu/upgrades/) + - [Scaling](/docs/getting-started-guides/ubuntu/scaling/) + - [Logging](/docs/getting-started-guides/ubuntu/logging/) + - [Monitoring](/docs/getting-started-guides/ubuntu/monitoring/) + - [Networking](/docs/getting-started-guides/ubuntu/networking/) + - [Security](/docs/getting-started-guides/ubuntu/security/) + - [Storage](/docs/getting-started-guides/ubuntu/storage/) + - [Troubleshooting](/docs/getting-started-guides/ubuntu/troubleshooting/) + - [Decommissioning](/docs/getting-started-guides/ubuntu/decommissioning/) + - [Operational Considerations](/docs/getting-started-guides/ubuntu/operational-considerations/) + - [Glossary](/docs/getting-started-guides/ubuntu/glossary/) ## Developer Guides - - [Localhost using LXD](/docs/getting-started-guides/ubuntu/local) + - [Localhost using LXD](/docs/getting-started-guides/ubuntu/local/) ## Where to find us diff --git a/docs/getting-started-guides/ubuntu/installation.md b/docs/getting-started-guides/ubuntu/installation.md index b30ab81239df1..53245567b0b63 100644 --- a/docs/getting-started-guides/ubuntu/installation.md +++ b/docs/getting-started-guides/ubuntu/installation.md @@ -251,14 +251,14 @@ Feature requests, bug reports, pull requests or any feedback would be much appre IaaS Provider | Config. 
Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Amazon Web Services (AWS) | Juju | Ubuntu | flannel, calico* | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -OpenStack | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -Microsoft Azure | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -Google Compute Engine (GCE) | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -Joyent | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -Rackspace | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -VMWare vSphere | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -Bare Metal (MAAS) | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +Amazon Web Services (AWS) | Juju | Ubuntu | flannel, calico* | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +OpenStack | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +Microsoft Azure | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( 
[@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +Google Compute Engine (GCE) | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +Joyent | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +Rackspace | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +VMWare vSphere | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +Bare Metal (MAAS) | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/vsphere.md b/docs/getting-started-guides/vsphere.md index 1f8e642fdda6b..22207e7d41580 100644 --- a/docs/getting-started-guides/vsphere.md +++ b/docs/getting-started-guides/vsphere.md @@ -201,7 +201,7 @@ For quick support please join VMware Code Slack ([kubernetes](https://vmwarecode IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | --------- | ---------------------------- -Vmware vSphere | Kube-anywhere | Photon OS | Flannel | [docs](/docs/getting-started-guides/vsphere) | | Community ([@abrarshivani](https://github.com/abrarshivani)), ([@kerneltime](https://github.com/kerneltime)), ([@BaluDontu](https://github.com/BaluDontu)), ([@luomiao](https://github.com/luomiao)), ([@divyenpatel](https://github.com/divyenpatel)) +Vmware vSphere | Kube-anywhere | Photon OS | Flannel | [docs](/docs/getting-started-guides/vsphere/) | | Community ([@abrarshivani](https://github.com/abrarshivani)), ([@kerneltime](https://github.com/kerneltime)), ([@BaluDontu](https://github.com/BaluDontu)), ([@luomiao](https://github.com/luomiao)), ([@divyenpatel](https://github.com/divyenpatel)) If you identify any issues/problems using the vSphere cloud provider, you can create an issue in our repo - [VMware Kubernetes](https://github.com/vmware/kubernetes). diff --git a/docs/setup/independent/install-kubeadm.md b/docs/setup/independent/install-kubeadm.md index 69a1d682a59ec..4ff75f4e95e00 100644 --- a/docs/setup/independent/install-kubeadm.md +++ b/docs/setup/independent/install-kubeadm.md @@ -39,7 +39,7 @@ This page shows how to use install kubeadm. 
|-------------|---------------------------------| | 10250 | Kubelet API | | 10255 | Read-only Kubelet API (Heapster)| -| 30000-32767 | Default port range for [NodePort Services](/docs/concepts/services-networking/service). Typically, these ports would need to be exposed to external load-balancers, or other external consumers of the application itself. | +| 30000-32767 | Default port range for [NodePort Services](/docs/concepts/services-networking/service/). Typically, these ports would need to be exposed to external load-balancers, or other external consumers of the application itself. | Any port numbers marked with * are overridable, so you will need to ensure any custom ports you provide are also open. diff --git a/docs/setup/pick-right-solution.md b/docs/setup/pick-right-solution.md index 5c65f1a38cf6d..8a624c15236e6 100644 --- a/docs/setup/pick-right-solution.md +++ b/docs/setup/pick-right-solution.md @@ -29,7 +29,7 @@ a Kubernetes cluster from scratch. * [Minikube](/docs/getting-started-guides/minikube/) is the recommended method for creating a local, single-node Kubernetes cluster for development and testing. Setup is completely automated and doesn't require a cloud provider account. -* [Ubuntu on LXD](/docs/getting-started-guides/ubuntu/local) supports a nine-instance deployment on localhost. +* [Ubuntu on LXD](/docs/getting-started-guides/ubuntu/local/) supports a nine-instance deployment on localhost. * [IBM Cloud private-ce (Community Edition)](https://www.ibm.com/support/knowledgecenter/en/SSBS6K/product_welcome_cloud_private.html) can use VirtualBox on your machine to deploy Kubernetes to one or more VMs for dev and test scenarios. Scales to full multi-node cluster. Free version of the enterprise solution. @@ -62,11 +62,11 @@ a Kubernetes cluster from scratch. These solutions allow you to create Kubernetes clusters on a range of Cloud IaaS providers with only a few commands. These solutions are actively developed and have active community support. -* [Google Compute Engine (GCE)](/docs/getting-started-guides/gce) -* [AWS](/docs/getting-started-guides/aws) -* [Azure](/docs/getting-started-guides/azure) +* [Google Compute Engine (GCE)](/docs/getting-started-guides/gce/) +* [AWS](/docs/getting-started-guides/aws/) +* [Azure](/docs/getting-started-guides/azure/) * [Tectonic by CoreOS](https://coreos.com/tectonic) -* [CenturyLink Cloud](/docs/getting-started-guides/clc) +* [CenturyLink Cloud](/docs/getting-started-guides/clc/) * [IBM Bluemix](https://github.com/patrocinio/kubernetes-softlayer) * [Stackpoint.io](/docs/getting-started-guides/stackpoint/) * [KUBE2GO.io](https://kube2go.io/) @@ -80,7 +80,7 @@ base operating systems. If you can find a guide below that matches your needs, use it. It may be a little out of date, but it will be easier than starting from scratch. If you do want to start from scratch, either because you have special requirements, or just because you want to understand what is underneath a Kubernetes -cluster, try the [Getting Started from Scratch](/docs/getting-started-guides/scratch) guide. +cluster, try the [Getting Started from Scratch](/docs/getting-started-guides/scratch/) guide. If you are interested in supporting Kubernetes on a new platform, see [Writing a Getting Started Guide](https://git.k8s.io/community/contributors/devel/writing-a-getting-started-guide.md). @@ -95,40 +95,40 @@ with a single command per machine. These solutions are combinations of cloud providers and operating systems not covered by the above solutions. 
-* [CoreOS on AWS or GCE](/docs/getting-started-guides/coreos) +* [CoreOS on AWS or GCE](/docs/getting-started-guides/coreos/) * [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/) * [Kubespray](/docs/getting-started-guides/kubespray/) ## On-Premises VMs -* [Vagrant](/docs/getting-started-guides/coreos) (uses CoreOS and flannel) -* [CloudStack](/docs/getting-started-guides/cloudstack) (uses Ansible, CoreOS and flannel) -* [Vmware vSphere](/docs/getting-started-guides/vsphere) (uses Debian) -* [Vmware Photon Controller](/docs/getting-started-guides/photon-controller) (uses Debian) +* [Vagrant](/docs/getting-started-guides/coreos/) (uses CoreOS and flannel) +* [CloudStack](/docs/getting-started-guides/cloudstack/) (uses Ansible, CoreOS and flannel) +* [Vmware vSphere](/docs/getting-started-guides/vsphere/) (uses Debian) +* [Vmware Photon Controller](/docs/getting-started-guides/photon-controller/) (uses Debian) * [Vmware vSphere, OpenStack, or Bare Metal](/docs/getting-started-guides/ubuntu/) (uses Juju, Ubuntu and flannel) -* [Vmware](/docs/getting-started-guides/coreos) (uses CoreOS and flannel) -* [CoreOS on libvirt](/docs/getting-started-guides/libvirt-coreos) (uses CoreOS) -* [oVirt](/docs/getting-started-guides/ovirt) -* [OpenStack Heat](/docs/getting-started-guides/openstack-heat) (uses CentOS and flannel) -* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) (uses Fedora and flannel) +* [Vmware](/docs/getting-started-guides/coreos/) (uses CoreOS and flannel) +* [CoreOS on libvirt](/docs/getting-started-guides/libvirt-coreos/) (uses CoreOS) +* [oVirt](/docs/getting-started-guides/ovirt/) +* [OpenStack Heat](/docs/getting-started-guides/openstack-heat/) (uses CentOS and flannel) +* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) (uses Fedora and flannel) ## Bare Metal -* [Offline](/docs/getting-started-guides/coreos/bare_metal_offline) (no internet required. Uses CoreOS and Flannel) -* [Fedora via Ansible](/docs/getting-started-guides/fedora/fedora_ansible_config) -* [Fedora (Single Node)](/docs/getting-started-guides/fedora/fedora_manual_config) -* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) -* [CentOS](/docs/getting-started-guides/centos/centos_manual_config) +* [Offline](/docs/getting-started-guides/coreos/bare_metal_offline/) (no internet required. Uses CoreOS and Flannel) +* [Fedora via Ansible](/docs/getting-started-guides/fedora/fedora_ansible_config/) +* [Fedora (Single Node)](/docs/getting-started-guides/fedora/fedora_manual_config/) +* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) +* [CentOS](/docs/getting-started-guides/centos/centos_manual_config/) * [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/) -* [CoreOS on AWS or GCE](/docs/getting-started-guides/coreos) +* [CoreOS on AWS or GCE](/docs/getting-started-guides/coreos/) ## Integrations These solutions provide integration with third-party schedulers, resource managers, and/or lower level platforms. 
-* [Kubernetes on Mesos](/docs/getting-started-guides/mesos) +* [Kubernetes on Mesos](/docs/getting-started-guides/mesos/) * Instructions specify GCE, but are generic enough to be adapted to most existing Mesos clusters -* [DCOS](/docs/getting-started-guides/dcos) +* [DCOS](/docs/getting-started-guides/dcos/) * Community Edition DCOS uses AWS * Enterprise Edition DCOS supports cloud hosting, on-premises VMs, and bare metal @@ -146,37 +146,37 @@ KUBE2GO.io | | multi-support | multi-support | [docs](http Madcore.Ai | Jenkins DSL | Ubuntu | flannel | [docs](https://madcore.ai) | Community ([@madcore-ai](https://github.com/madcore-ai)) Platform9 | | multi-support | multi-support | [docs](https://platform9.com/managed-kubernetes/) | Commercial Giant Swarm | | CoreOS | flannel and/or Calico | [docs](https://docs.giantswarm.io/) | Commercial -GCE | Saltstack | Debian | GCE | [docs](/docs/getting-started-guides/gce) | Project +GCE | Saltstack | Debian | GCE | [docs](/docs/getting-started-guides/gce/) | Project Azure Container Service | | Ubuntu | Azure | [docs](https://azure.microsoft.com/en-us/services/container-service/) | Commercial -Azure (IaaS) | | Ubuntu | Azure | [docs](/docs/getting-started-guides/azure) | [Community (Microsoft)](https://github.com/Azure/acs-engine) -Bare-metal | Ansible | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/fedora_ansible_config) | Project -Bare-metal | custom | Fedora | _none_ | [docs](/docs/getting-started-guides/fedora/fedora_manual_config) | Project -Bare-metal | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) -libvirt | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) -KVM | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) -Mesos/Docker | custom | Ubuntu | Docker | [docs](/docs/getting-started-guides/mesos-docker) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) -Mesos/GCE | | | | [docs](/docs/getting-started-guides/mesos) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) -DCOS | Marathon | CoreOS/Alpine | custom | [docs](/docs/getting-started-guides/dcos) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) -AWS | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/aws) | Community -GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | Community ([@pires](https://github.com/pires)) -Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles)) -Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/bare_metal_offline) | Community ([@jeffbean](https://github.com/jeffbean)) -CloudStack | Ansible | CoreOS | flannel | [docs](/docs/getting-started-guides/cloudstack) | Community ([@sebgoa](https://github.com/sebgoa)) -Vmware vSphere | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/vsphere) | Community ([@imkin](https://github.com/imkin)) -Vmware Photon | Saltstack | Debian | OVS | 
[docs](/docs/getting-started-guides/photon-controller) | Community ([@alainroy](https://github.com/alainroy)) -Bare-metal | custom | CentOS | flannel | [docs](/docs/getting-started-guides/centos/centos_manual_config) | Community ([@coolsvap](https://github.com/coolsvap)) +Azure (IaaS) | | Ubuntu | Azure | [docs](/docs/getting-started-guides/azure/) | [Community (Microsoft)](https://github.com/Azure/acs-engine) +Bare-metal | Ansible | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/fedora_ansible_config/) | Project +Bare-metal | custom | Fedora | _none_ | [docs](/docs/getting-started-guides/fedora/fedora_manual_config/) | Project +Bare-metal | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) +libvirt | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) +KVM | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) +Mesos/Docker | custom | Ubuntu | Docker | [docs](/docs/getting-started-guides/mesos-docker/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) +Mesos/GCE | | | | [docs](/docs/getting-started-guides/mesos/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) +DCOS | Marathon | CoreOS/Alpine | custom | [docs](/docs/getting-started-guides/dcos/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) +AWS | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/aws/) | Community +GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires)) +Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles)) +Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/bare_metal_offline/) | Community ([@jeffbean](https://github.com/jeffbean)) +CloudStack | Ansible | CoreOS | flannel | [docs](/docs/getting-started-guides/cloudstack/) | Community ([@sebgoa](https://github.com/sebgoa)) +Vmware vSphere | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/vsphere/) | Community ([@imkin](https://github.com/imkin)) +Vmware Photon | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/photon-controller/) | Community ([@alainroy](https://github.com/alainroy)) +Bare-metal | custom | CentOS | flannel | [docs](/docs/getting-started-guides/centos/centos_manual_config/) | Community ([@coolsvap](https://github.com/coolsvap)) AWS | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) GCE | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) Bare Metal | Juju | Ubuntu | flannel 
| [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) Rackspace | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) Vmware vSphere | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) -AWS | Saltstack | Debian | AWS | [docs](/docs/getting-started-guides/aws) | Community ([@justinsb](https://github.com/justinsb)) -AWS | kops | Debian | AWS | [docs](https://github.com/kubernetes/kops) | Community ([@justinsb](https://github.com/justinsb)) -Bare-metal | custom | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu) | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY)) -libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](/docs/getting-started-guides/libvirt-coreos) | Community ([@lhuard1A](https://github.com/lhuard1A)) -oVirt | | | | [docs](/docs/getting-started-guides/ovirt) | Community ([@simon3z](https://github.com/simon3z)) -OpenStack Heat | Saltstack | CentOS | Neutron + flannel hostgw | [docs](/docs/getting-started-guides/openstack-heat) | Community ([@FujitsuEnablingSoftwareTechnologyGmbH](https://github.com/FujitsuEnablingSoftwareTechnologyGmbH)) -any | any | any | any | [docs](/docs/getting-started-guides/scratch) | Community ([@erictune](https://github.com/erictune)) +AWS | Saltstack | Debian | AWS | [docs](/docs/getting-started-guides/aws/) | Community ([@justinsb](https://github.com/justinsb)) +AWS | kops | Debian | AWS | [docs](https://github.com/kubernetes/kops/) | Community ([@justinsb](https://github.com/justinsb)) +Bare-metal | custom | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY)) +libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](/docs/getting-started-guides/libvirt-coreos/) | Community ([@lhuard1A](https://github.com/lhuard1A)) +oVirt | | | | [docs](/docs/getting-started-guides/ovirt/) | Community ([@simon3z](https://github.com/simon3z)) +OpenStack Heat | Saltstack | CentOS | Neutron + flannel hostgw | [docs](/docs/getting-started-guides/openstack-heat/) | Community ([@FujitsuEnablingSoftwareTechnologyGmbH](https://github.com/FujitsuEnablingSoftwareTechnologyGmbH)) +any | any | any | any | [docs](/docs/getting-started-guides/scratch/) | Community ([@erictune](https://github.com/erictune)) any | any | any | any | [docs](http://docs.projectcalico.org/v2.2/getting-started/kubernetes/installation/) | Commercial and Community **Note**: The above table is ordered by version test/used in nodes, followed by support level. 
diff --git a/docs/tasks/configure-pod-container/configure-pod-configmap.md b/docs/tasks/configure-pod-container/configure-pod-configmap.md
index 6349af7a404ec..d83790d9435d7 100644
--- a/docs/tasks/configure-pod-container/configure-pod-configmap.md
+++ b/docs/tasks/configure-pod-container/configure-pod-configmap.md
@@ -185,7 +185,7 @@ very charm
## Add ConfigMap data to a Volume
-As explained in [Configure Containers Using a ConfigMap](/docs/tasks/configure-pod-container/configmap.html), when you create a ConfigMap using ``--from-file``, the filename becomes a key stored in the `data` section of the ConfigMap. The file contents become the key's value.
+As explained in [Configure Containers Using a ConfigMap](/docs/tasks/configure-pod-container/configmap/), when you create a ConfigMap using ``--from-file``, the filename becomes a key stored in the `data` section of the ConfigMap. The file contents become the key's value.
The examples in this section refer to a ConfigMap named special-config, shown below.
diff --git a/docs/tasks/configure-pod-container/configure-service-account.md b/docs/tasks/configure-pod-container/configure-service-account.md
index 55e115bd100b7..3a3d4c3e7c34d 100644
--- a/docs/tasks/configure-pod-container/configure-service-account.md
+++ b/docs/tasks/configure-pod-container/configure-service-account.md
@@ -9,7 +9,7 @@ title: Configure Service Accounts for Pods
A service account provides an identity for processes that run in a Pod.
*This is a user introduction to Service Accounts. See also the
-[Cluster Admin Guide to Service Accounts](/docs/admin/service-accounts-admin).*
+[Cluster Admin Guide to Service Accounts](/docs/admin/service-accounts-admin/).*
**Note:** This document describes how service accounts behave in a cluster set up
as recommended by the Kubernetes project. Your cluster administrator may have
diff --git a/docs/tasks/federation/set-up-cluster-federation-kubefed.md b/docs/tasks/federation/set-up-cluster-federation-kubefed.md
index af6e7f83c9080..5aef87109b1b8 100644
--- a/docs/tasks/federation/set-up-cluster-federation-kubefed.md
+++ b/docs/tasks/federation/set-up-cluster-federation-kubefed.md
@@ -468,7 +468,7 @@ as described in the
## Removing a cluster from a federation
-To remove a cluster from a federation, run the [`kubefed unjoin`](/docs/admin/kubefed_unjoin)
+To remove a cluster from a federation, run the [`kubefed unjoin`](/docs/admin/kubefed_unjoin/)
command with the cluster name and the federation's `--host-cluster-context`:

From d7ddfdacb91fc78d9a60e36d383be0e7f1b9b6ba Mon Sep 17 00:00:00 2001
From: Steve Perry
Date: Sat, 23 Sep 2017 15:20:49 -0700
Subject: [PATCH 29/87] Update link targets to avoid redirects. (#5597)
---
 docs/tasks/access-application-cluster/access-cluster.md     | 2 +-
 docs/tasks/administer-cluster/access-cluster-api.md         | 2 +-
 docs/tasks/administer-cluster/declare-network-policy.md     | 2 +-
 .../configure-pod-container/configure-pod-initialization.md | 2 +-
 .../environment-variable-expose-pod-information.md          | 2 +-
 5 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/tasks/access-application-cluster/access-cluster.md b/docs/tasks/access-application-cluster/access-cluster.md
index a8c0abcf894e1..a3c1a6ce77ee1 100644
--- a/docs/tasks/access-application-cluster/access-cluster.md
+++ b/docs/tasks/access-application-cluster/access-cluster.md
@@ -136,7 +136,7 @@ If the application is deployed as a Pod in the cluster, please refer to the [nex
To use [Python client](https://github.com/kubernetes-incubator/client-python), run the following command: `pip install kubernetes`. See [Python Client Library page](https://github.com/kubernetes-incubator/client-python) for more installation options.
-The Python client can use the same [kubeconfig file](/docs/user-guide/kubeconfig-file)
+The Python client can use the same [kubeconfig file](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes-incubator/client-python/tree/master/examples/example1.py).
#### Other languages
diff --git a/docs/tasks/administer-cluster/access-cluster-api.md b/docs/tasks/administer-cluster/access-cluster-api.md
index f1fd4ea5c1630..88ef4334cf94e 100644
--- a/docs/tasks/administer-cluster/access-cluster-api.md
+++ b/docs/tasks/administer-cluster/access-cluster-api.md
@@ -147,7 +147,7 @@ If the application is deployed as a Pod in the cluster, please refer to the [nex
To use [Python client](https://github.com/kubernetes-incubator/client-python), run the following command: `pip install kubernetes`. See [Python Client Library page](https://github.com/kubernetes-incubator/client-python) for more installation options.
-The Python client can use the same [kubeconfig file](/docs/user-guide/kubeconfig-file)
+The Python client can use the same [kubeconfig file](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes-incubator/client-python/tree/master/examples/example1.py):
```python
diff --git a/docs/tasks/administer-cluster/declare-network-policy.md b/docs/tasks/administer-cluster/declare-network-policy.md
index a1cffa197bd1c..d1b141c607347 100644
--- a/docs/tasks/administer-cluster/declare-network-policy.md
+++ b/docs/tasks/administer-cluster/declare-network-policy.md
@@ -15,7 +15,7 @@ You'll need to have a Kubernetes cluster in place, with network policy support.
* [Cilium](/docs/tasks/administer-cluster/cilium-network-policy/)
* [Kube-router](/docs/tasks/administer-cluster/kube-router-network-policy/)
* [Romana](/docs/tasks/configure-pod-container/romana-network-policy/)
-* [Weave Net](/docs/tasks/configure-pod-container/weave-network-policy/)
+* [Weave Net](/docs/tasks/administer-cluster/weave-network-policy/)
**Note**: The above list is sorted alphabetically by product name, not by recommendation or preference. This example is valid for a Kubernetes cluster using any of these providers.
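A small sanity check to go with the prerequisites above — a sketch assuming one of the listed plugins is already installed — confirms the cluster recognizes NetworkPolicy objects before you declare any:

```shell
kubectl get networkpolicies --all-namespaces
kubectl explain networkpolicy
```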
{% endcapture %} diff --git a/docs/tasks/configure-pod-container/configure-pod-initialization.md b/docs/tasks/configure-pod-container/configure-pod-initialization.md index be18f82e4dbb9..4147b54854ac0 100644 --- a/docs/tasks/configure-pod-container/configure-pod-initialization.md +++ b/docs/tasks/configure-pod-container/configure-pod-initialization.md @@ -81,7 +81,7 @@ The output shows that nginx is serving the web page that was written by the init {% capture whatsnext %} * Learn more about -[communicating between Containers running in the same Pod](/docs/tasks/configure-pod-container/communicate-containers-same-pod/). +[communicating between Containers running in the same Pod](/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/). * Learn more about [Init Containers](/docs/concepts/workloads/pods/init-containers/). * Learn more about [Volumes](/docs/concepts/storage/volumes/). * Learn more about [Debugging Init Containers](/docs/tasks/debug-application-cluster/debug-init-containers/) diff --git a/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md b/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md index 391b58ee54518..0b652dd52d689 100644 --- a/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md +++ b/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md @@ -161,7 +161,7 @@ The output shows the values of selected environment variables: {% capture whatsnext %} -* [Defining Environment Variables for a Container](/docs/tasks/configure-pod-container/define-environment-variable-container/) +* [Defining Environment Variables for a Container](/docs/tasks/inject-data-application/define-environment-variable-container/) * [PodSpec](/docs/resources-reference/{{page.version}}/#podspec-v1-core) * [Container](/docs/resources-reference/{{page.version}}/#container-v1-core) * [EnvVar](/docs/resources-reference/{{page.version}}/#envvar-v1-core) From b09593e98c63e821fd08662ade27029cc6a772fe Mon Sep 17 00:00:00 2001 From: Steve Perry Date: Sat, 23 Sep 2017 15:51:02 -0700 Subject: [PATCH 30/87] Update link targets to avoid redirects. (#5598) --- docs/admin/authentication.md | 2 +- docs/admin/authorization/index.md | 2 +- docs/admin/authorization/webhook.md | 2 +- .../configuration/manage-compute-resources-container.md | 2 +- .../overview/working-with-objects/kubernetes-objects.md | 2 +- docs/concepts/workloads/controllers/statefulset.md | 2 +- 6 files changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/admin/authentication.md b/docs/admin/authentication.md index c449d511ad7de..bf2c618972581 100644 --- a/docs/admin/authentication.md +++ b/docs/admin/authentication.md @@ -435,7 +435,7 @@ the authentication webhook queries the remote service with a review object containing the token. Kubernetes will not challenge a request that lacks such a header. -Note that webhook API objects are subject to the same [versioning compatibility rules](/docs/api/) +Note that webhook API objects are subject to the same [versioning compatibility rules](/docs/concepts/overview/kubernetes-api/) as other Kubernetes API objects. Implementers should be aware of looser compatibility promises for beta objects and check the "apiVersion" field of the request to ensure correct deserialization. 
Additionally, the API server must diff --git a/docs/admin/authorization/index.md b/docs/admin/authorization/index.md index d7a009ef3bfe7..43299b6163245 100644 --- a/docs/admin/authorization/index.md +++ b/docs/admin/authorization/index.md @@ -37,7 +37,7 @@ Kubernetes reviews only the following API request attributes: --* For resource requests using `get`, `update`, `patch`, and `delete` verbs, you must provide the resource name. * **Subresource** - The subresource that is being accessed (for resource requests only). * **Namespace** - The namespace of the object that is being accessed (for namespaced resource requests only). - * **API group** - The API group being accessed (for resource requests only). An empty string designates the [core API group](/docs/api/). + * **API group** - The API group being accessed (for resource requests only). An empty string designates the [core API group](/docs/concepts/overview/kubernetes-api/). ## Determine the Request Verb To determine the request verb for a resource API endpoint, review the HTTP verb used and whether or not the request acts on an individual resource or a collection of resources: diff --git a/docs/admin/authorization/webhook.md b/docs/admin/authorization/webhook.md index b88fef6357f3a..8a807bc35bbbc 100644 --- a/docs/admin/authorization/webhook.md +++ b/docs/admin/authorization/webhook.md @@ -58,7 +58,7 @@ action. This object contains fields describing the user attempting to make the request, and either details about the resource being accessed or requests attributes. -Note that webhook API objects are subject to the same [versioning compatibility rules](/docs/api/) +Note that webhook API objects are subject to the same [versioning compatibility rules](/docs/concepts/overview/kubernetes-api/) as other Kubernetes API objects. Implementers should be aware of looser compatibility promises for beta objects and check the "apiVersion" field of the request to ensure correct deserialization. Additionally, the API Server must diff --git a/docs/concepts/configuration/manage-compute-resources-container.md b/docs/concepts/configuration/manage-compute-resources-container.md index 19a322f28041f..fa8d93cc5bd92 100644 --- a/docs/concepts/configuration/manage-compute-resources-container.md +++ b/docs/concepts/configuration/manage-compute-resources-container.md @@ -26,7 +26,7 @@ CPU and memory are collectively referred to as *compute resources*, or just *resources*. Compute resources are measurable quantities that can be requested, allocated, and consumed. They are distinct from -[API resources](/docs/api/). API resources, such as Pods and +[API resources](/docs/concepts/overview/kubernetes-api/). API resources, such as Pods and [Services](/docs/user-guide/services) are objects that can be read and modified through the Kubernetes API server. diff --git a/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/docs/concepts/overview/working-with-objects/kubernetes-objects.md index b462fb5f0b7e3..bb8358f617643 100644 --- a/docs/concepts/overview/working-with-objects/kubernetes-objects.md +++ b/docs/concepts/overview/working-with-objects/kubernetes-objects.md @@ -56,7 +56,7 @@ In the `.yaml` file for the Kubernetes object you want to create, you'll need to * `kind` - What kind of object you want to create * `metadata` - Data that helps uniquely identify the object, including a `name` string, UID, and optional `namespace` -You'll also need to provide the object `spec` field. 
The precise format of the object `spec` is different for every Kubernetes object, and contains nested fields specific to that object. The [Kubernetes API reference](/docs/api/) can help you find the spec format for all of the objects you can create using Kubernetes. +You'll also need to provide the object `spec` field. The precise format of the object `spec` is different for every Kubernetes object, and contains nested fields specific to that object. The [Kubernetes API reference](/docs/concepts/overview/kubernetes-api/) can help you find the spec format for all of the objects you can create using Kubernetes. {% endcapture %} diff --git a/docs/concepts/workloads/controllers/statefulset.md b/docs/concepts/workloads/controllers/statefulset.md index 1aefb854f8937..7c77781e5a7c6 100644 --- a/docs/concepts/workloads/controllers/statefulset.md +++ b/docs/concepts/workloads/controllers/statefulset.md @@ -12,7 +12,7 @@ title: StatefulSets {% capture overview %} **StatefulSets are a beta feature in 1.7. This feature replaces the PetSets feature from 1.4. Users of PetSets are referred to the 1.5 -[Upgrade Guide](/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set/) +[Upgrade Guide](/docs/tasks/run-application/upgrade-pet-set-to-stateful-set/) for further information on how to upgrade existing PetSets to StatefulSets.** {% include templates/glossary/snippet.md term="statefulset" length="long" %} From d5ef16ae4e066e17ae5f04e9cff2baa1c68ba365 Mon Sep 17 00:00:00 2001 From: Steve Perry Date: Sat, 23 Sep 2017 16:20:15 -0700 Subject: [PATCH 31/87] Update _redirects to fix 404s. (#5600) --- _redirects | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/_redirects b/_redirects index b3b0d56ddb616..e264bbe627056 100644 --- a/_redirects +++ b/_redirects @@ -27,9 +27,9 @@ /docs/resources-reference/1_6/* /docs/resources-reference/v1.6/ 301 /docs/resources-reference/1_7/* /docs/resources-reference/v1.7/ 301 /docs/templatedemos/* /docs/home/contribute/page-templates/ 301 -/docs/tutorials/getting-started/*docs/tutorials/kubernetes-basics/ 301 -/docs/user-guide/federation/*/ /docs/concepts/cluster-administration/federation/ 301 -/docs/user-guide/garbage-collector/ /docs/concepts/workloads/controllers/garbage-collection/ 301 +/docs/tutorials/getting-started/* /docs/tutorials/kubernetes-basics/ 301 +/docs/user-guide/federation/* /docs/concepts/cluster-administration/federation/ 301 +/docs/user-guide/garbage-collector/* /docs/concepts/workloads/controllers/garbage-collection/ 301 /docs/user-guide/horizontal-pod-autoscaler/* /docs/tasks/run-application/horizontal-pod-autoscale/ 301 /kubernetes-bootcamp/* /docs/tutorials/kubernetes-basics/ 301 /swagger-spec/* https://github.com/kubernetes/kubernetes/tree/master/api/swagger-spec/ 301 @@ -51,8 +51,11 @@ /docs/admin/etcd/ /docs/tasks/administer-cluster/configure-upgrade-etcd/ 301 /docs/admin/etcd_upgrade/ /docs/tasks/administer-cluster/configure-upgrade-etcd/ 301 /docs/admin/federation/kubefed/ /docs/tasks/federation/set-up-cluster-federation-kubefed/ 301 +/docs/admin/federation/kubefed.md /docs/tasks/federation/set-up-cluster-federation-kubefed/ 301 +/docs/tasks/federation/set-up-cluster-federation-kubefed.md /docs/tasks/federation/set-up-cluster-federation-kubefed/ 301 /docs/admin/garbage-collection/ /docs/concepts/cluster-administration/kubelet-garbage-collection/ 301 /docs/admin/ha-master-gce/ /docs/tasks/administer-cluster/highly-available-master/ 301 +/docs/admin/ha-master-gce.md 
/docs/tasks/administer-cluster/highly-available-master/ 301 /docs/admin/ /docs/concepts/cluster-administration/cluster-administration-overview/ 301 /docs/admin/kubeadm-upgrade-1-7/ /docs/tasks/administer-cluster/kubeadm-upgrade-1-7/ 301 /docs/admin/limitrange/docs/tasks/administer-cluster/cpu-memory-limit/ 301 @@ -65,6 +68,7 @@ /docs/admin/networking/ /docs/concepts/cluster-administration/networking/ 301 /docs/admin/node/ /docs/concepts/architecture/nodes/ 301 /docs/admin/node-allocatable/ /docs/tasks/administer-cluster/reserve-compute-resources/ 301 +/docs/admin/node-conformance.md /docs/admin/node-conformance/ 301 /docs/admin/node-problem/ /docs/tasks/debug-application-cluster/monitor-node-health/ 301 /docs/admin/out-of-resource/ /docs/tasks/administer-cluster/out-of-resource/ 301 /docs/admin/rescheduler/ /docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/ 301 @@ -282,13 +286,11 @@ /docs/api-reference/storage.k8s.io/v1beta1/operations https://v1-4.docs.kubernetes.io/docs/api-reference/storage.k8s.io/v1beta1/operations/ 301 /docs/api-reference/v1/definitions/ /docs/api-reference/v1.7/ 301 +/docs/api-reference/v1/operations/ /docs/api-reference/v1.7/ 301 /docs/concepts/cluster/ /docs/concepts/cluster-administration/cluster-administration-overview/ 301 /docs/concepts/object-metadata/annotations/ /docs/concepts/overview/working-with-objects/annotations/ 301 -/docs/concepts/workloads/controllers/daemonset/ /docs/concepts/workloads/pods/poddocs/concepts/workloads/pods/pod/ 301 -/docs/concepts/workloads/controllers/deployment/ /docs/concepts/workloads/pods/poddocs/concepts/workloads/pods/pod/ 301 - /docs/contribute/write-new-topic/ /docs/home/contribute/write-new-topic/ 301 /docs/getting-started-guides/coreos/azure/ /docs/getting-started-guides/coreos/ 301 From 5afbb0d1e00120da8a3db42e09c4662b762f1b20 Mon Sep 17 00:00:00 2001 From: Steve Perry Date: Sun, 24 Sep 2017 09:43:22 -0700 Subject: [PATCH 32/87] Update _redirects to fix 404s. (#5603) --- _redirects | 3 +++ 1 file changed, 3 insertions(+) diff --git a/_redirects b/_redirects index e264bbe627056..8aba4e9d3d98d 100644 --- a/_redirects +++ b/_redirects @@ -39,6 +39,8 @@ # individual redirects # +/gettingstarted/ /docs/home/ 301 + /docs/admin/addons/ /docs/concepts/cluster-administration/addons/ 301 /docs/admin/apparmor/ /docs/tutorials/clusters/apparmor/ 301 /docs/admin/audit/ /docs/tasks/debug-application-cluster/audit/ 301 @@ -230,6 +232,7 @@ /docs/user-guide/networkpolicies/ /docs/concepts/services-networking/network-policies/ 301 /docs/user-guide/node-selection/ /docs/concepts/configuration/assign-pod-node/ 301 /docs/user-guide/persistent-volumes/ /docs/concepts/storage/persistent-volumes/ 301 +/docs/user-guide/persistent-volumes/index /docs/concepts/storage/persistent-volumes/ 301 /docs/user-guide/persistent-volumes/walkthrough/ /docs/tasks/configure-pod-container/configure-persistent-volume-storage/ 301 /docs/user-guide/petset/ /docs/concepts/workloads/controllers/petset/ 301 /docs/user-guide/petset/bootstrapping/ /docs/concepts/workloads/controllers/petset/ 301 From 0e6de2c657115edd4b46a852009cfb538478de7d Mon Sep 17 00:00:00 2001 From: Steve Perry Date: Sun, 24 Sep 2017 11:06:23 -0700 Subject: [PATCH 33/87] Update link targets to avoid redirects. 
(#5604) --- docs/concepts/cluster-administration/proxies.md | 2 +- docs/concepts/configuration/overview.md | 4 ++-- .../services-networking/connect-applications-service.md | 2 +- docs/concepts/services-networking/ingress.md | 4 ++-- docs/concepts/workloads/controllers/daemonset.md | 2 +- docs/concepts/workloads/controllers/petset.md | 2 +- docs/getting-started-guides/scratch.md | 4 ++-- docs/tasks/access-application-cluster/access-cluster.md | 2 +- .../connecting-frontend-backend.md | 2 +- .../create-external-load-balancer.md | 2 +- .../load-balance-access-application-cluster.md | 2 +- .../service-access-application-cluster.md | 2 +- docs/tasks/federation/federation-service-discovery.md | 4 ++-- docs/tasks/federation/set-up-cluster-federation-kubefed.md | 4 ++-- docs/tools/kompose/user-guide.md | 2 +- docs/tutorials/services/source-ip.md | 6 +++--- docs/tutorials/stateful-application/basic-stateful-set.md | 2 +- docs/tutorials/stateful-application/zookeeper.md | 2 +- 18 files changed, 25 insertions(+), 25 deletions(-) diff --git a/docs/concepts/cluster-administration/proxies.md b/docs/concepts/cluster-administration/proxies.md index 41e29d6cef799..13f73e8bbac82 100644 --- a/docs/concepts/cluster-administration/proxies.md +++ b/docs/concepts/cluster-administration/proxies.md @@ -27,7 +27,7 @@ There are several different proxies you may encounter when using Kubernetes: - proxy to target may use HTTP or HTTPS as chosen by proxy using available information - can be used to reach a Node, Pod, or Service - does load balancing when used to reach a Service - 1. The [kube proxy](/docs/user-guide/services/#ips-and-vips): + 1. The [kube proxy](/docs/concepts/services-networking/service/#ips-and-vips): - runs on each node - proxies UDP and TCP - does not understand HTTP diff --git a/docs/concepts/configuration/overview.md b/docs/concepts/configuration/overview.md index 3690f36ea6e35..c354a2a6df100 100644 --- a/docs/concepts/configuration/overview.md +++ b/docs/concepts/configuration/overview.md @@ -50,11 +50,11 @@ This is a living document. If you think of something that is not on this list bu If you only need access to the port for debugging purposes, you can use the [kubectl proxy and apiserver proxy](/docs/tasks/access-kubernetes-api/http-proxy-access-api/) or [kubectl port-forward](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/). You can use a [Service](/docs/concepts/services-networking/service/) object for external service access. - If you explicitly need to expose a pod's port on the host machine, consider using a [NodePort](/docs/user-guide/services/#type-nodeport) service before resorting to `hostPort`. + If you explicitly need to expose a pod's port on the host machine, consider using a [NodePort](/docs/concepts/services-networking/service/#type-nodeport) service before resorting to `hostPort`. - Avoid using `hostNetwork`, for the same reasons as `hostPort`. -- Use _headless services_ for easy service discovery when you don't need kube-proxy load balancing. See [headless services](/docs/user-guide/services/#headless-services). +- Use _headless services_ for easy service discovery when you don't need kube-proxy load balancing. See [headless services](/docs/concepts/services-networking/service/#headless-services). 
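For readers who haven't seen one, a headless Service is simply a Service whose `clusterIP` is explicitly set to `None`. The sketch below is a minimal, hypothetical example (the name, label, and port are made up for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service   # hypothetical name
spec:
  clusterIP: None             # "None" makes the Service headless: no virtual IP, no kube-proxy load balancing
  selector:
    app: my-app               # hypothetical label on the backing Pods
  ports:
  - port: 80
    targetPort: 80
```

With this form, cluster DNS returns the A records of the matching Pods directly rather than a single virtual IP.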
## Using Labels diff --git a/docs/concepts/services-networking/connect-applications-service.md b/docs/concepts/services-networking/connect-applications-service.md index b69c831383480..7079cfbeea8e3 100644 --- a/docs/concepts/services-networking/connect-applications-service.md +++ b/docs/concepts/services-networking/connect-applications-service.md @@ -94,7 +94,7 @@ NAME ENDPOINTS AGE my-nginx 10.244.2.5:80,10.244.3.4:80 1m ``` -You should now be able to curl the nginx Service on `:` from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire, if you're curious about how this works you can read more about the [service proxy](/docs/user-guide/services/#virtual-ips-and-service-proxies). +You should now be able to curl the nginx Service on `:` from any node in your cluster. Note that the Service IP is completely virtual, it never hits the wire, if you're curious about how this works you can read more about the [service proxy](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies). ## Accessing the Service diff --git a/docs/concepts/services-networking/ingress.md b/docs/concepts/services-networking/ingress.md index 115903c240209..6914c898676a1 100644 --- a/docs/concepts/services-networking/ingress.md +++ b/docs/concepts/services-networking/ingress.md @@ -292,7 +292,7 @@ Please track the [L7 and Ingress proposal](https://github.com/kubernetes/kuberne You can expose a Service in multiple ways that don't directly involve the Ingress resource: -* Use [Service.Type=LoadBalancer](/docs/user-guide/services/#type-loadbalancer) -* Use [Service.Type=NodePort](/docs/user-guide/services/#type-nodeport) +* Use [Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#type-loadbalancer) +* Use [Service.Type=NodePort](/docs/concepts/services-networking/service/#type-nodeport) * Use a [Port Proxy](https://git.k8s.io/contrib/for-demos/proxy-to-service) * Deploy the [Service loadbalancer](https://git.k8s.io/contrib/service-loadbalancer). This allows you to share a single IP among multiple Services and achieve more advanced loadbalancing through Service Annotations. diff --git a/docs/concepts/workloads/controllers/daemonset.md b/docs/concepts/workloads/controllers/daemonset.md index 26bc660eefa6f..f1d6dd871b7c8 100644 --- a/docs/concepts/workloads/controllers/daemonset.md +++ b/docs/concepts/workloads/controllers/daemonset.md @@ -124,7 +124,7 @@ Some possible patterns for communicating with pods in a DaemonSet are: - **Push**: Pods in the DaemonSet are configured to send updates to another service, such as a stats database. They do not have clients. - **NodeIP and Known Port**: Pods in the DaemonSet can use a `hostPort`, so that the pods are reachable via the node IPs. Clients know the list of node IPs somehow, and know the port by convention. -- **DNS**: Create a [headless service](/docs/user-guide/services/#headless-services) with the same pod selector, +- **DNS**: Create a [headless service](/docs/concepts/services-networking/service/#headless-services) with the same pod selector, and then discover DaemonSets using the `endpoints` resource or retrieve multiple A records from DNS. 
- **Service**: Create a service with the same pod selector, and use the service to reach a diff --git a/docs/concepts/workloads/controllers/petset.md b/docs/concepts/workloads/controllers/petset.md index 42c90cfd96365..5858da2415d65 100644 --- a/docs/concepts/workloads/controllers/petset.md +++ b/docs/concepts/workloads/controllers/petset.md @@ -37,7 +37,7 @@ This doc assumes familiarity with the following Kubernetes concepts: * [Pods](/docs/user-guide/pods/single-container/) * [Cluster DNS](/docs/concepts/services-networking/dns-pod-service/) -* [Headless Services](/docs/user-guide/services/#headless-services) +* [Headless Services](/docs/concepts/services-networking/service/#headless-services) * [Persistent Volumes](/docs/concepts/storage/volumes/) * [Persistent Volume Provisioning](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/persistent-volume-provisioning/README.md) diff --git a/docs/getting-started-guides/scratch.md b/docs/getting-started-guides/scratch.md index 20bcecfef768c..0e93a85301b35 100644 --- a/docs/getting-started-guides/scratch.md +++ b/docs/getting-started-guides/scratch.md @@ -27,7 +27,7 @@ steps that existing cluster setup scripts are making. 1. You should be familiar with using Kubernetes already. We suggest you set up a temporary cluster by following one of the other Getting Started Guides. - This will help you become familiar with the CLI ([kubectl](/docs/user-guide/kubectl/)) and concepts ([pods](/docs/user-guide/pods/), [services](/docs/user-guide/services/), etc.) first. + This will help you become familiar with the CLI ([kubectl](/docs/user-guide/kubectl/)) and concepts ([pods](/docs/user-guide/pods/), [services](/docs/concepts/services-networking/service/), etc.) first. 1. You should have `kubectl` installed on your desktop. This will happen as a side effect of completing one of the other Getting Started Guides. If not, follow the instructions [here](/docs/tasks/kubectl/install/). @@ -113,7 +113,7 @@ You will need to select an address range for the Pod IPs. Note that IPv6 is not using `10.10.0.0/24` through `10.10.255.0/24`, respectively. - Need to make these routable or connect with overlay. -Kubernetes also allocates an IP to each [service](/docs/user-guide/services/). However, +Kubernetes also allocates an IP to each [service](/docs/concepts/services-networking/service/). However, service IPs do not necessarily need to be routable. The kube-proxy takes care of translating Service IPs to Pod IPs before traffic leaves the node. You do need to Allocate a block of IPs for services. Call this diff --git a/docs/tasks/access-application-cluster/access-cluster.md b/docs/tasks/access-application-cluster/access-cluster.md index a3c1a6ce77ee1..641f3c4ed9f3c 100644 --- a/docs/tasks/access-application-cluster/access-cluster.md +++ b/docs/tasks/access-application-cluster/access-cluster.md @@ -308,7 +308,7 @@ There are several different proxies you may encounter when using Kubernetes: - proxy to target may use HTTP or HTTPS as chosen by proxy using available information - can be used to reach a Node, Pod, or Service - does load balancing when used to reach a Service - 1. The [kube proxy](/docs/user-guide/services/#ips-and-vips): + 1. 
The [kube proxy](/docs/concepts/services-networking/service/#ips-and-vips): - runs on each node - proxies UDP and TCP - does not understand HTTP diff --git a/docs/tasks/access-application-cluster/connecting-frontend-backend.md b/docs/tasks/access-application-cluster/connecting-frontend-backend.md index c1ba86065e00e..7b2798a0e1ad1 100644 --- a/docs/tasks/access-application-cluster/connecting-frontend-backend.md +++ b/docs/tasks/access-application-cluster/connecting-frontend-backend.md @@ -29,7 +29,7 @@ frontend and backend are connected using a Kubernetes Service object. [Services with external load balancers](/docs/tasks/access-application-cluster/create-external-load-balancer/), which require a supported environment. If your environment does not support this, you can use a Service of type - [NodePort](/docs/user-guide/services/#type-nodeport) instead. + [NodePort](/docs/concepts/services-networking/service/#type-nodeport) instead. {% endcapture %} diff --git a/docs/tasks/access-application-cluster/create-external-load-balancer.md b/docs/tasks/access-application-cluster/create-external-load-balancer.md index effd07dd87d21..e3d69fdd581c9 100644 --- a/docs/tasks/access-application-cluster/create-external-load-balancer.md +++ b/docs/tasks/access-application-cluster/create-external-load-balancer.md @@ -25,7 +25,7 @@ cluster nodes _provided your cluster runs in a supported environment and is conf ## Configuration file To create an external load balancer, add the following line to your -[service configuration file](/docs/user-guide/services/operations/#service-configuration-file): +[service configuration file](/docs/concepts/services-networking/service/operations/#service-configuration-file): ```json "type": "LoadBalancer" diff --git a/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md b/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md index 91de9ece98ce5..e4f59bff75faf 100644 --- a/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md +++ b/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md @@ -101,7 +101,7 @@ load-balanced access to an application running in a cluster. ## Using a service configuration file As an alternative to using `kubectl expose`, you can use a -[service configuration file](/docs/user-guide/services/operations) +[service configuration file](/docs/concepts/services-networking/service/operations) to create a Service. diff --git a/docs/tasks/access-application-cluster/service-access-application-cluster.md b/docs/tasks/access-application-cluster/service-access-application-cluster.md index 46bb4c983345d..84909650c7b61 100644 --- a/docs/tasks/access-application-cluster/service-access-application-cluster.md +++ b/docs/tasks/access-application-cluster/service-access-application-cluster.md @@ -117,7 +117,7 @@ provides load balancing for an application that has two running instances. ## Using a service configuration file As an alternative to using `kubectl expose`, you can use a -[service configuration file](/docs/user-guide/services/operations) +[service configuration file](/docs/concepts/services-networking/service/operations) to create a Service. 
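As a rough sketch of what such a service configuration file can look like (the name, selector, and ports here are hypothetical), you might save something like the following as `service.yaml` and run `kubectl create -f service.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service       # hypothetical Service name
spec:
  type: NodePort              # expose the Service on a port of every node
  selector:
    app: example              # hypothetical label matching the application's Pods
  ports:
  - protocol: TCP
    port: 8080                # port the Service serves on inside the cluster
    targetPort: 80            # port the backing containers listen on
```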
{% endcapture %} diff --git a/docs/tasks/federation/federation-service-discovery.md b/docs/tasks/federation/federation-service-discovery.md index 3068ce9ae34ee..a30910af72837 100644 --- a/docs/tasks/federation/federation-service-discovery.md +++ b/docs/tasks/federation/federation-service-discovery.md @@ -125,9 +125,9 @@ underlying Kubernetes services (once these have been allocated - this may take a few seconds). For inter-cluster and inter-cloud-provider networking between service shards to work correctly, your services need to have an externally visible IP address. [Service Type: -Loadbalancer](/docs/user-guide/services/#type-loadbalancer) +Loadbalancer](/docs/concepts/services-networking/service/#type-loadbalancer) is typically used for this, although other options -(e.g. [External IP's](/docs/user-guide/services/#external-ips)) exist. +(e.g. [External IP's](/docs/concepts/services-networking/service/#external-ips)) exist. Note also that we have not yet provisioned any backend Pods to receive the network traffic directed to these addresses (i.e. 'Service diff --git a/docs/tasks/federation/set-up-cluster-federation-kubefed.md b/docs/tasks/federation/set-up-cluster-federation-kubefed.md index 5aef87109b1b8..bbb18529115c8 100644 --- a/docs/tasks/federation/set-up-cluster-federation-kubefed.md +++ b/docs/tasks/federation/set-up-cluster-federation-kubefed.md @@ -261,11 +261,11 @@ kubefed init fellowship \ `kubefed init` exposes the federation API server as a Kubernetes [service](/docs/concepts/services-networking/service/) on the host cluster. By default, this service is exposed as a -[load balanced service](/docs/user-guide/services/#type-loadbalancer). +[load balanced service](/docs/concepts/services-networking/service/#type-loadbalancer). Most on-premises and bare-metal environments, and some cloud environments lack support for load balanced services. `kubefed init` allows exposing the federation API server as a -[`NodePort` service](/docs/user-guide/services/#type-nodeport) on +[`NodePort` service](/docs/concepts/services-networking/service/#type-nodeport) on such environments. This can be accomplished by passing the `--api-server-service-type=NodePort` flag. You can also specify the preferred address to advertise the federation API server by diff --git a/docs/tools/kompose/user-guide.md b/docs/tools/kompose/user-guide.md index 483ea22ea066c..efd2b1c233390 100644 --- a/docs/tools/kompose/user-guide.md +++ b/docs/tools/kompose/user-guide.md @@ -427,7 +427,7 @@ $ kompose up --provider openshift --build build-config ## Alternative Conversions -The default `kompose` transformation will generate Kubernetes [Deployments](http://kubernetes.io/docs/user-guide/deployments/) and [Services](http://kubernetes.io/docs/user-guide/services/), in yaml format. You have alternative option to generate json with `-j`. Also, you can alternatively generate [Replication Controllers](http://kubernetes.io/docs/user-guide/replication-controller/) objects, [Deamon Sets](http://kubernetes.io/docs/admin/daemons/), or [Helm](https://github.com/helm/helm) charts. +The default `kompose` transformation will generate Kubernetes [Deployments](http://kubernetes.io/docs/user-guide/deployments/) and [Services](http://kubernetes.io/docs/concepts/services-networking/service/), in yaml format. You have alternative option to generate json with `-j`. 
Also, you can alternatively generate [Replication Controllers](http://kubernetes.io/docs/user-guide/replication-controller/) objects, [Deamon Sets](http://kubernetes.io/docs/admin/daemons/), or [Helm](https://github.com/helm/helm) charts. ```sh $ kompose convert -j diff --git a/docs/tutorials/services/source-ip.md b/docs/tutorials/services/source-ip.md index bab393d37efca..ba9f8139f054d 100644 --- a/docs/tutorials/services/source-ip.md +++ b/docs/tutorials/services/source-ip.md @@ -53,7 +53,7 @@ deployment "source-ip-app" created ## Source IP for Services with Type=ClusterIP Packets sent to ClusterIP from within the cluster are never source NAT'd if -you're running kube-proxy in [iptables mode](/docs/user-guide/services/#proxy-mode-iptables), +you're running kube-proxy in [iptables mode](/docs/concepts/services-networking/service/#proxy-mode-iptables), which is the default since Kubernetes 1.2. Kube-proxy exposes its mode through a `proxyMode` endpoint: @@ -110,7 +110,7 @@ If the client pod and server pod are in the same node, the client_address is the ## Source IP for Services with Type=NodePort -As of Kubernetes 1.5, packets sent to Services with [Type=NodePort](/docs/user-guide/services/#type-nodeport) +As of Kubernetes 1.5, packets sent to Services with [Type=NodePort](/docs/concepts/services-networking/service/#type-nodeport) are source NAT'd by default. You can test this by creating a `NodePort` Service: ```console @@ -208,7 +208,7 @@ Visually: ## Source IP for Services with Type=LoadBalancer -As of Kubernetes 1.5, packets sent to Services with [Type=LoadBalancer](/docs/user-guide/services/#type-loadbalancer) are +As of Kubernetes 1.5, packets sent to Services with [Type=LoadBalancer](/docs/concepts/services-networking/service/#type-loadbalancer) are source NAT'd by default, because all schedulable Kubernetes nodes in the `Ready` state are eligible for loadbalanced traffic. So if packets arrive at a node without an endpoint, the system proxies it to a node *with* an diff --git a/docs/tutorials/stateful-application/basic-stateful-set.md b/docs/tutorials/stateful-application/basic-stateful-set.md index 52de2e99436a3..ea1b752c7ba16 100644 --- a/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/docs/tutorials/stateful-application/basic-stateful-set.md @@ -55,7 +55,7 @@ After this tutorial, you will be familiar with the following. Begin by creating a StatefulSet using the example below. It is similar to the example presented in the [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) concept. -It creates a [Headless Service](/docs/user-guide/services/#headless-services), +It creates a [Headless Service](/docs/concepts/services-networking/service/#headless-services), `nginx`, to publish the IP addresses of Pods in the StatefulSet, `web`. {% include code.html language="yaml" file="web.yaml" ghlink="/docs/tutorials/stateful-application/web.yaml" %} diff --git a/docs/tutorials/stateful-application/zookeeper.md b/docs/tutorials/stateful-application/zookeeper.md index 71350fedc0256..a7857241d8d1c 100644 --- a/docs/tutorials/stateful-application/zookeeper.md +++ b/docs/tutorials/stateful-application/zookeeper.md @@ -89,7 +89,7 @@ safely discarded. 
## Creating a ZooKeeper Ensemble The manifest below contains a -[Headless Service](/docs/user-guide/services/#headless-services), +[Headless Service](/docs/concepts/services-networking/service/#headless-services), a [ConfigMap](/docs/tasks/configure-pod-container/configmap/), a [PodDisruptionBudget](/docs/admin/disruptions/#specifying-a-poddisruptionbudget), and a [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/). From ad77d693c74551eaf56b1facb387935590ad3be4 Mon Sep 17 00:00:00 2001 From: Steve Perry Date: Sun, 24 Sep 2017 11:41:34 -0700 Subject: [PATCH 34/87] Update links to avoid redirects. (#5605) --- docs/admin/accessing-the-api.md | 2 +- docs/admin/authorization/rbac.md | 2 +- .../cluster-administration-overview.md | 2 +- docs/concepts/policy/resource-quotas.md | 6 +++--- docs/concepts/storage/persistent-volumes.md | 2 +- docs/concepts/storage/volumes.md | 2 +- docs/concepts/workloads/controllers/petset.md | 2 +- .../photon-controller.md | 2 +- .../change-default-storage-class.md | 2 +- .../change-pv-reclaim-policy.md | 2 +- .../administer-cluster/securing-a-cluster.md | 2 +- .../set-up-cluster-federation-kubefed.md | 18 +++++++++--------- .../horizontal-pod-autoscale-walkthrough.md | 2 +- ...run-single-instance-stateful-application.md | 4 ++-- .../stateful-application/zookeeper.md | 6 +++--- 15 files changed, 28 insertions(+), 28 deletions(-) diff --git a/docs/admin/accessing-the-api.md b/docs/admin/accessing-the-api.md index 03447f2488940..a844b0f5fdbd7 100644 --- a/docs/admin/accessing-the-api.md +++ b/docs/admin/accessing-the-api.md @@ -97,7 +97,7 @@ Kubernetes authorization requires that you use common REST attributes to interac Kubernetes supports multiple authorization modules, such as ABAC mode, RBAC Mode, and Webhook mode. When an administrator creates a cluster, they configured the authorization modules that should be used in the API server. If more than one authorization modules are configured, Kubernetes checks each module, and if any module authorizes the request, then the request can proceed. If all of the modules deny the request, then the request is denied (HTTP status code 403). -To learn more about Kubernetes authorization, including details about creating policies using the supported authorization modules, see [Authorization Overview](/docs/admin/authorization). +To learn more about Kubernetes authorization, including details about creating policies using the supported authorization modules, see [Authorization Overview](/docs/admin/authorization/). ## Admission Control diff --git a/docs/admin/authorization/rbac.md b/docs/admin/authorization/rbac.md index 23d36f71854ad..a5bc8e1cfe933 100644 --- a/docs/admin/authorization/rbac.md +++ b/docs/admin/authorization/rbac.md @@ -519,7 +519,7 @@ This is commonly used by add-on API servers for unified authentication and autho system:persistent-volume-provisioner None -Allows access to the resources required by most dynamic volume provisioners. +Allows access to the resources required by most dynamic volume provisioners. 
diff --git a/docs/concepts/cluster-administration/cluster-administration-overview.md b/docs/concepts/cluster-administration/cluster-administration-overview.md index f430a389e64af..97c07725e361f 100644 --- a/docs/concepts/cluster-administration/cluster-administration-overview.md +++ b/docs/concepts/cluster-administration/cluster-administration-overview.md @@ -57,7 +57,7 @@ If you are using a guide involving Salt, see [Configuring Kubernetes with Salt]( * [Auditing](/docs/tasks/debug-application-cluster/audit/) describes how to interact with Kubernetes' audit logs. ### Securing the kubelet - * [Master-Node communication](/docs/concepts/cluster-administration/master-node-communication/) + * [Master-Node communication](/docs/concepts/architecture/master-node-communication/) * [TLS bootstrapping](/docs/admin/kubelet-tls-bootstrapping/) * [Kubelet authentication/authorization](/docs/admin/kubelet-authentication-authorization/) diff --git a/docs/concepts/policy/resource-quotas.md b/docs/concepts/policy/resource-quotas.md index 814567c440b92..2d6935b786a12 100644 --- a/docs/concepts/policy/resource-quotas.md +++ b/docs/concepts/policy/resource-quotas.md @@ -74,9 +74,9 @@ In addition, you can limit consumption of storage resources based on associated | Resource Name | Description | | --------------------- | ----------------------------------------------------------- | | `requests.storage` | Across all persistent volume claims, the sum of storage requests cannot exceed this value. | -| `persistentvolumeclaims` | The total number of [persistent volume claims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. | +| `persistentvolumeclaims` | The total number of [persistent volume claims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. | | `.storageclass.storage.k8s.io/requests.storage` | Across all persistent volume claims associated with the storage-class-name, the sum of storage requests cannot exceed this value. | -| `.storageclass.storage.k8s.io/persistentvolumeclaims` | Across all persistent volume claims associated with the storage-class-name, the total number of [persistent volume claims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. | +| `.storageclass.storage.k8s.io/persistentvolumeclaims` | Across all persistent volume claims associated with the storage-class-name, the total number of [persistent volume claims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. | For example, if an operator wants to quota storage with `gold` storage class separate from `bronze` storage class, the operator can define a quota as follows: @@ -92,7 +92,7 @@ are supported: | Resource Name | Description | | ------------------------------- | ------------------------------------------------- | | `configmaps` | The total number of config maps that can exist in the namespace. | -| `persistentvolumeclaims` | The total number of [persistent volume claims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. | +| `persistentvolumeclaims` | The total number of [persistent volume claims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. | | `pods` | The total number of pods in a non-terminal state that can exist in the namespace. A pod is in a terminal state if `status.phase in (Failed, Succeeded)` is true. 
| | `replicationcontrollers` | The total number of replication controllers that can exist in the namespace. | | `resourcequotas` | The total number of [resource quotas](/docs/admin/admission-controllers/#resourcequota) that can exist in the namespace. | diff --git a/docs/concepts/storage/persistent-volumes.md b/docs/concepts/storage/persistent-volumes.md index d7d1fc428f1ea..19a17e0491462 100644 --- a/docs/concepts/storage/persistent-volumes.md +++ b/docs/concepts/storage/persistent-volumes.md @@ -35,7 +35,7 @@ administrators. Kubernetes itself is unopinionated about what classes represent. This concept is sometimes called "profiles" in other storage systems. -Please see the [detailed walkthrough with working examples](/docs/user-guide/persistent-volumes/walkthrough/). +Please see the [detailed walkthrough with working examples](/docs/concepts/storage/persistent-volumes/walkthrough/). ## Lifecycle of a volume and claim diff --git a/docs/concepts/storage/volumes.md b/docs/concepts/storage/volumes.md index 8c6130bfa6ffb..f6fce622b6ddf 100644 --- a/docs/concepts/storage/volumes.md +++ b/docs/concepts/storage/volumes.md @@ -675,7 +675,7 @@ More details and examples can be found [here](https://github.com/kubernetes/exam ScaleIO is a software-based storage platform that can use existing hardware to create clusters of scalable shared block networked storage. The ScaleIO volume plugin allows deployed pods to access existing ScaleIO volumes (or it can dynamically provision new volumes for persistent volume claims, see -[ScaleIO Persistent Volumes](/docs/user-guide/persistent-volumes/#scaleio)). +[ScaleIO Persistent Volumes](/docs/concepts/storage/persistent-volumes/#scaleio)). **Important:** You must have an existing ScaleIO cluster already setup and running with the volumes created before you can use them. {: .caution} diff --git a/docs/concepts/workloads/controllers/petset.md b/docs/concepts/workloads/controllers/petset.md index 5858da2415d65..e8e7e32fc15bb 100644 --- a/docs/concepts/workloads/controllers/petset.md +++ b/docs/concepts/workloads/controllers/petset.md @@ -24,7 +24,7 @@ Throughout this doc you will see a few terms that are sometimes used interchange * Node: A single virtual or physical machine in a Kubernetes cluster. * Cluster: A group of nodes in a single failure domain, unless mentioned otherwise. -* Persistent Volume Claim (PVC): A request for storage, typically a [persistent volume](/docs/user-guide/persistent-volumes/walkthrough/). +* Persistent Volume Claim (PVC): A request for storage, typically a [persistent volume](/docs/concepts/storage/persistent-volumes/walkthrough/). * Host name: The hostname attached to the UTS namespace of the pod, i.e. the output of `hostname` in the pod. * DNS/Domain name: A *cluster local* domain name resolvable using standard methods (e.g.: [gethostbyname](http://linux.die.net/man/3/gethostbyname)). * Ordinality: the property of being "ordinal", or occupying a position in a sequence. diff --git a/docs/getting-started-guides/photon-controller.md b/docs/getting-started-guides/photon-controller.md index ecdaf13f82eab..e0de503156419 100644 --- a/docs/getting-started-guides/photon-controller.md +++ b/docs/getting-started-guides/photon-controller.md @@ -35,7 +35,7 @@ Mac, you can install this with [brew](http://brew.sh/): 5. You should have an ssh public key installed. This will be used to give you access to the VM's user account, `kube`. -6. Get or build a [binary release](/docs/getting-started-guides/binary_release) +6. 
Get or build a [binary release](/docs/getting-started-guides/binary_release/) ### Download VM Image diff --git a/docs/tasks/administer-cluster/change-default-storage-class.md b/docs/tasks/administer-cluster/change-default-storage-class.md index 8326d49f4fa00..625d599dfcf68 100644 --- a/docs/tasks/administer-cluster/change-default-storage-class.md +++ b/docs/tasks/administer-cluster/change-default-storage-class.md @@ -22,7 +22,7 @@ Depending on the installation method, your Kubernetes cluster may be deployed wi an existing StorageClass that is marked as default. This default StorageClass is then used to dynamically provision storage for PersistentVolumeClaims that do not require any specific storage class. See -[PersistentVolumeClaim documentation](/docs/user-guide/persistent-volumes/#class-1) +[PersistentVolumeClaim documentation](/docs/concepts/storage/persistent-volumes/#class-1) for details. The pre-installed default StorageClass may not fit well with your expected workload; diff --git a/docs/tasks/administer-cluster/change-pv-reclaim-policy.md b/docs/tasks/administer-cluster/change-pv-reclaim-policy.md index 43080789ed324..3cbb8a76426db 100644 --- a/docs/tasks/administer-cluster/change-pv-reclaim-policy.md +++ b/docs/tasks/administer-cluster/change-pv-reclaim-policy.md @@ -68,7 +68,7 @@ the corresponding `PersistentVolume` is not be deleted. Instead, it is moved to {% capture whatsnext %} * Learn more about [PersistentVolumes](/docs/concepts/storage/persistent-volumes/). -* Learn more about [PersistentVolumeClaims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims). +* Learn more about [PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims). ### Reference diff --git a/docs/tasks/administer-cluster/securing-a-cluster.md b/docs/tasks/administer-cluster/securing-a-cluster.md index 2885a27da9c89..6df8e0f1b32fb 100644 --- a/docs/tasks/administer-cluster/securing-a-cluster.md +++ b/docs/tasks/administer-cluster/securing-a-cluster.md @@ -66,7 +66,7 @@ being terminated and recreated on other nodes. The out of the box roles represen between flexibility and the common use cases, but more limited roles should be carefully reviewed to prevent accidental escalation. You can make roles specific to your use case if the out-of-box ones don't meet your needs. -Consult the [authorization reference section](/docs/admin/authorization) for more information. +Consult the [authorization reference section](/docs/admin/authorization/) for more information. ## Controlling the capabilities of a workload or user at runtime diff --git a/docs/tasks/federation/set-up-cluster-federation-kubefed.md b/docs/tasks/federation/set-up-cluster-federation-kubefed.md index bbb18529115c8..8f6b970dde776 100644 --- a/docs/tasks/federation/set-up-cluster-federation-kubefed.md +++ b/docs/tasks/federation/set-up-cluster-federation-kubefed.md @@ -289,17 +289,17 @@ Federation control plane stores its state in [`etcd`](https://coreos.com/etcd/docs/latest/) data must be stored in a persistent storage volume to ensure correct operation across federation control plane restarts. 
On host clusters that support -[dynamic provisioning of storage volumes](/docs/user-guide/persistent-volumes/#dynamic), +[dynamic provisioning of storage volumes](/docs/concepts/storage/persistent-volumes/#dynamic), `kubefed init` dynamically provisions a -[`PersistentVolume`](/docs/user-guide/persistent-volumes/#persistent-volumes) +[`PersistentVolume`](/docs/concepts/storage/persistent-volumes/#persistent-volumes) and binds it to a -[`PersistentVolumeClaim`](/docs/user-guide/persistent-volumes/#persistentvolumeclaims) +[`PersistentVolumeClaim`](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) to store [`etcd`](https://coreos.com/etcd/docs/latest/) data. If your host cluster doesn't support dynamic provisioning, you can also statically provision a -[`PersistentVolume`](/docs/user-guide/persistent-volumes/#persistent-volumes). +[`PersistentVolume`](/docs/concepts/storage/persistent-volumes/#persistent-volumes). `kubefed init` creates a -[`PersistentVolumeClaim`](/docs/user-guide/persistent-volumes/#persistentvolumeclaims) +[`PersistentVolumeClaim`](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that has the following configuration: ```yaml @@ -321,12 +321,12 @@ spec: ``` To statically provision a -[`PersistentVolume`](/docs/user-guide/persistent-volumes/#persistent-volumes), +[`PersistentVolume`](/docs/concepts/storage/persistent-volumes/#persistent-volumes), you must ensure that the -[`PersistentVolume`](/docs/user-guide/persistent-volumes/#persistent-volumes) +[`PersistentVolume`](/docs/concepts/storage/persistent-volumes/#persistent-volumes) that you create has the matching storage class, access mode and at least as much capacity as the requested -[`PersistentVolumeClaim`](/docs/user-guide/persistent-volumes/#persistentvolumeclaims). +[`PersistentVolumeClaim`](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims). Alternatively, you can disable persistent storage completely by passing `--etcd-persistent-storage=false` to `kubefed init`. @@ -342,7 +342,7 @@ kubefed init fellowship \ ``` `kubefed init` still doesn't support attaching an existing -[`PersistentVolumeClaim`](/docs/user-guide/persistent-volumes/#persistentvolumeclaims) +[`PersistentVolumeClaim`](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) to the federation control plane that it bootstraps. We are planning to support this in a future version of `kubefed`. diff --git a/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index 4a7efc9d94ff3..6d23d7d008a91 100644 --- a/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -24,7 +24,7 @@ heapster monitoring will be turned-on by default). To specify multiple resource metrics for a Horizontal Pod Autoscaler, you must have a Kubernetes cluster and kubectl at version 1.6 or later. Furthermore, in order to make use of custom metrics, your cluster must be able to communicate with the API server providing the custom metrics API. -See the [Horizontal Pod Autoscaling user guide](/docs/user-guide/horizontal-pod-autoscaling/#support-for-custom-metrics) for more details. +See the [Horizontal Pod Autoscaling user guide](/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics) for more details. 
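For orientation, a multiple-metrics autoscaler is declared roughly as in the sketch below; the names and thresholds are hypothetical, and the `autoscaling/v2alpha1` group shown here is the in-development API for this release, so check which version your cluster serves:

```yaml
apiVersion: autoscaling/v2alpha1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: php-apache          # hypothetical Deployment to scale
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50   # scale out when average CPU use exceeds 50%
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 100Mi      # or when average memory use exceeds 100Mi
```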
## Step One: Run & expose php-apache server

diff --git a/docs/tasks/run-application/run-single-instance-stateful-application.md b/docs/tasks/run-application/run-single-instance-stateful-application.md
index 6a1c305fa9dc8..4fdc223b42977 100644
--- a/docs/tasks/run-application/run-single-instance-stateful-application.md
+++ b/docs/tasks/run-application/run-single-instance-stateful-application.md
@@ -27,7 +27,7 @@ application is MySQL.

* For data persistence we will create a Persistent Volume that
  references a disk in your environment. See
- [here](/docs/user-guide/persistent-volumes/#types-of-persistent-volumes) for
+ [here](/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes) for
  the types of environments supported. This Tutorial will demonstrate
  `GCEPersistentDisk` but any type will work. `GCEPersistentDisk` volumes
  only work on Google Compute Engine.

@@ -40,7 +40,7 @@ application is MySQL.
## Set up a disk in your environment

You can use any type of persistent volume for your stateful app. See
-[Types of Persistent Volumes](/docs/user-guide/persistent-volumes/#types-of-persistent-volumes)
+[Types of Persistent Volumes](/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes)
for a list of supported environment disks. For Google Compute Engine, run:

```

diff --git a/docs/tutorials/stateful-application/zookeeper.md b/docs/tutorials/stateful-application/zookeeper.md
index a7857241d8d1c..9ad45caef903f 100644
--- a/docs/tutorials/stateful-application/zookeeper.md
+++ b/docs/tutorials/stateful-application/zookeeper.md
@@ -13,7 +13,7 @@ title: Running ZooKeeper, A CP Distributed System
{% capture overview %}
This tutorial demonstrates [Apache Zookeeper](https://zookeeper.apache.org) on
Kubernetes using [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/),
-[PodDisruptionBudgets](/docs/admin/disruptions/#specifying-a-poddisruptionbudget),
+[PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#specifying-a-poddisruptionbudget),
and [PodAntiAffinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature).
{% endcapture %}

@@ -29,7 +29,7 @@ Kubernetes concepts.
* [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/persistent-volume-provisioning/)
* [ConfigMaps](/docs/tasks/configure-pod-container/configmap/)
* [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/)
-* [PodDisruptionBudgets](/docs/admin/disruptions/#specifying-a-poddisruptionbudget)
+* [PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#specifying-a-poddisruptionbudget)
* [PodAntiAffinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature)
* [kubectl CLI](/docs/user-guide/kubectl)

@@ -91,7 +91,7 @@ safely discarded.
The manifest below contains a
[Headless Service](/docs/concepts/services-networking/service/#headless-services),
a [ConfigMap](/docs/tasks/configure-pod-container/configmap/),
-a [PodDisruptionBudget](/docs/admin/disruptions/#specifying-a-poddisruptionbudget),
+a [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/#specifying-a-poddisruptionbudget),
and a [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/).
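Before the full manifest, it may help to see the PodDisruptionBudget piece on its own; a minimal sketch (the name, label, and threshold below are hypothetical, not necessarily the tutorial's actual values) looks like this:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-budget             # hypothetical name
spec:
  minAvailable: 2             # voluntary evictions are refused if fewer than 2 matching Pods would remain
  selector:
    matchLabels:
      app: zk                 # hypothetical label selecting the ensemble's Pods
```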
{% include code.html language="yaml" file="zookeeper.yaml" ghlink="/docs/tutorials/stateful-application/zookeeper.yaml" %} From ab50634f6355e86854633ceb4f61e1cf13d5c1f5 Mon Sep 17 00:00:00 2001 From: zhangmingld Date: Mon, 25 Sep 2017 09:21:43 +0800 Subject: [PATCH 35/87] fix linefeed --- cn/docs/concepts/containers/images.md | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-) diff --git a/cn/docs/concepts/containers/images.md b/cn/docs/concepts/containers/images.md index 5bb9cebfd2868..a50c57b5a0f95 100644 --- a/cn/docs/concepts/containers/images.md +++ b/cn/docs/concepts/containers/images.md @@ -51,14 +51,12 @@ title: 镜像 ### 使用 Google Container Registry Kuberetes运行在Google Compute Engine (GCE)时原生支持[Google ContainerRegistry (GCR)] (https://cloud.google.com/tools/container-registry/)。如果kubernetes集群运行在GCE -或者Google Container Engine (GKE)上,使用镜像全名(e.g. gcr.io/my_project/image:tag) -即可。 +或者Google Container Engine (GKE)上,使用镜像全名(e.g. gcr.io/my_project/image:tag)即可。 集群中的所有pod都会有读取这个仓库中镜像的权限。 Kubelet将使用实例的Google service account向GCR认证。实例的service account拥有 -`https://www.googleapis.com/auth/devstorage.read_only`,所以它可以从项目的GCR拉取,但不能 -推送。 +`https://www.googleapis.com/auth/devstorage.read_only`,所以它可以从项目的GCR拉取,但不能推送。 ### 使用 AWS EC2 Container Registry @@ -94,8 +92,7 @@ Kubelet会获取并且定期刷新ECR的凭证。它需要以下权限 - `provider.go:91] Refreshing cache for provider: *aws_credentials.ecrProvider` ### 使用 Azure Container Registry (ACR) -当使用[Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/)时, -可以使用admin user或者service principal认证。 +当使用[Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/)时,可以使用admin user或者service principal认证。 任何一种情况,认证都通过标准的Dokcer authentication完成。本指南假设使用[azure-cli](https://github.com/azure/azure-cli) 命令行工具。 From a9a81a99bff5d4d43b73a8bc2088e915f90a6f0d Mon Sep 17 00:00:00 2001 From: houjun41544 Date: Thu, 14 Sep 2017 11:00:13 +0800 Subject: [PATCH 36/87] Add inject-data-application and two docs in it. 
--- .../dapi-envars-container.yaml | 45 ++++ .../dapi-envars-pod.yaml | 38 ++++ .../dapi-volume-resources.yaml | 54 +++++ .../inject-data-application/dapi-volume.yaml | 39 ++++ ...nward-api-volume-expose-pod-information.md | 209 ++++++++++++++++++ ...ronment-variable-expose-pod-information.md | 153 +++++++++++++ 6 files changed, 538 insertions(+) create mode 100644 cn/docs/tasks/inject-data-application/dapi-envars-container.yaml create mode 100644 cn/docs/tasks/inject-data-application/dapi-envars-pod.yaml create mode 100644 cn/docs/tasks/inject-data-application/dapi-volume-resources.yaml create mode 100644 cn/docs/tasks/inject-data-application/dapi-volume.yaml create mode 100644 cn/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md create mode 100644 cn/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md diff --git a/cn/docs/tasks/inject-data-application/dapi-envars-container.yaml b/cn/docs/tasks/inject-data-application/dapi-envars-container.yaml new file mode 100644 index 0000000000000..8b3b3a39d3c1b --- /dev/null +++ b/cn/docs/tasks/inject-data-application/dapi-envars-container.yaml @@ -0,0 +1,45 @@ +apiVersion: v1 +kind: Pod +metadata: + name: dapi-envars-resourcefieldref +spec: + containers: + - name: test-container + image: gcr.io/google_containers/busybox:1.24 + command: [ "sh", "-c"] + args: + - while true; do + echo -en '\n'; + printenv MY_CPU_REQUEST MY_CPU_LIMIT; + printenv MY_MEM_REQUEST MY_MEM_LIMIT; + sleep 10; + done; + resources: + requests: + memory: "32Mi" + cpu: "125m" + limits: + memory: "64Mi" + cpu: "250m" + env: + - name: MY_CPU_REQUEST + valueFrom: + resourceFieldRef: + containerName: test-container + resource: requests.cpu + - name: MY_CPU_LIMIT + valueFrom: + resourceFieldRef: + containerName: test-container + resource: limits.cpu + - name: MY_MEM_REQUEST + valueFrom: + resourceFieldRef: + containerName: test-container + resource: requests.memory + - name: MY_MEM_LIMIT + valueFrom: + resourceFieldRef: + containerName: test-container + resource: limits.memory + restartPolicy: Never diff --git a/cn/docs/tasks/inject-data-application/dapi-envars-pod.yaml b/cn/docs/tasks/inject-data-application/dapi-envars-pod.yaml new file mode 100644 index 0000000000000..00762373b3e89 --- /dev/null +++ b/cn/docs/tasks/inject-data-application/dapi-envars-pod.yaml @@ -0,0 +1,38 @@ +apiVersion: v1 +kind: Pod +metadata: + name: dapi-envars-fieldref +spec: + containers: + - name: test-container + image: gcr.io/google_containers/busybox + command: [ "sh", "-c"] + args: + - while true; do + echo -en '\n'; + printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE; + printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT; + sleep 10; + done; + env: + - name: MY_NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + - name: MY_POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: MY_POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: MY_POD_IP + valueFrom: + fieldRef: + fieldPath: status.podIP + - name: MY_POD_SERVICE_ACCOUNT + valueFrom: + fieldRef: + fieldPath: spec.serviceAccountName + restartPolicy: Never diff --git a/cn/docs/tasks/inject-data-application/dapi-volume-resources.yaml b/cn/docs/tasks/inject-data-application/dapi-volume-resources.yaml new file mode 100644 index 0000000000000..65770f283f0cd --- /dev/null +++ b/cn/docs/tasks/inject-data-application/dapi-volume-resources.yaml @@ -0,0 +1,54 @@ +apiVersion: v1 +kind: Pod +metadata: + name: kubernetes-downwardapi-volume-example-2 +spec: 
+ containers: + - name: client-container + image: gcr.io/google_containers/busybox:1.24 + command: ["sh", "-c"] + args: + - while true; do + echo -en '\n'; + if [[ -e /etc/cpu_limit ]]; then + echo -en '\n'; cat /etc/cpu_limit; fi; + if [[ -e /etc/cpu_request ]]; then + echo -en '\n'; cat /etc/cpu_request; fi; + if [[ -e /etc/mem_limit ]]; then + echo -en '\n'; cat /etc/mem_limit; fi; + if [[ -e /etc/mem_request ]]; then + echo -en '\n'; cat /etc/mem_request; fi; + sleep 5; + done; + resources: + requests: + memory: "32Mi" + cpu: "125m" + limits: + memory: "64Mi" + cpu: "250m" + volumeMounts: + - name: podinfo + mountPath: /etc + readOnly: false + volumes: + - name: podinfo + downwardAPI: + items: + - path: "cpu_limit" + resourceFieldRef: + containerName: client-container + resource: limits.cpu + - path: "cpu_request" + resourceFieldRef: + containerName: client-container + resource: requests.cpu + - path: "mem_limit" + resourceFieldRef: + containerName: client-container + resource: limits.memory + - path: "mem_request" + resourceFieldRef: + containerName: client-container + resource: requests.memory + diff --git a/cn/docs/tasks/inject-data-application/dapi-volume.yaml b/cn/docs/tasks/inject-data-application/dapi-volume.yaml new file mode 100644 index 0000000000000..7126cefae5be6 --- /dev/null +++ b/cn/docs/tasks/inject-data-application/dapi-volume.yaml @@ -0,0 +1,39 @@ +apiVersion: v1 +kind: Pod +metadata: + name: kubernetes-downwardapi-volume-example + labels: + zone: us-est-coast + cluster: test-cluster1 + rack: rack-22 + annotations: + build: two + builder: john-doe +spec: + containers: + - name: client-container + image: gcr.io/google_containers/busybox + command: ["sh", "-c"] + args: + - while true; do + if [[ -e /etc/labels ]]; then + echo -en '\n\n'; cat /etc/labels; fi; + if [[ -e /etc/annotations ]]; then + echo -en '\n\n'; cat /etc/annotations; fi; + sleep 5; + done; + volumeMounts: + - name: podinfo + mountPath: /etc + readOnly: false + volumes: + - name: podinfo + downwardAPI: + items: + - path: "labels" + fieldRef: + fieldPath: metadata.labels + - path: "annotations" + fieldRef: + fieldPath: metadata.annotations + diff --git a/cn/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md b/cn/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md new file mode 100644 index 0000000000000..98a738be69efc --- /dev/null +++ b/cn/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md @@ -0,0 +1,209 @@ +--- +title: 通过文件将Pod信息呈现给容器 +--- + +{% capture overview %} + +此页面描述Pod如何使用DownwardAPIVolumeFile把自己的信息呈现给pod中运行的容器。DownwardAPIVolumeFile可以呈现pod的字段和容器字段。 + +{% endcapture %} + + +{% capture prerequisites %} + +{% include task-tutorial-prereqs.md %} + +{% endcapture %} + +{% capture steps %} + +## Downward API + +有两种方式可以将Pod和Container字段呈现给运行中的容器: + +* [环境变量](/docs/tasks/configure-pod-container/environment-variable-expose-pod-information/) +* DownwardAPIVolumeFile + +这两种呈现Pod和Container字段的方式都称为*Downward API*。 + +## 存储Pod字段 + +在这个练习中,你将创建一个包含一个容器的pod。这是该pod的配置文件: + +{% include code.html language="yaml" file="dapi-volume.yaml" ghlink="/cn/docs/tasks/inject-data-application/dapi-volume.yaml" %} + +在配置文件中,你可以看到Pod有一个`downwardAPI`类型的Volume,并且挂载到容器中的`/etc`。 + +查看`downwardAPI`下面的`items`数组。每个数组元素都是一个[DownwardAPIVolumeFile](/docs/resources-reference/{{page.version}}/#downwardapivolumefile-v1-core)。 +第一个元素指示Pod的`metadata.labels`字段的值保存在名为`labels`的文件中。 +第二个元素指示Pod的`annotations`字段的值保存在名为`annotations`的文件中。 + +**注意:** 
本示例中的字段是Pod字段,不是Pod中容器的字段。 +{: .note} + +创建Pod: + +```shell +kubectl create -f https://k8s.io/cn/docs/tasks/inject-data-application/dapi-volume.yaml +``` + +验证Pod中的容器运行正常: + +```shell +kubectl get pods +``` + +查看容器的日志: + +```shell +kubectl logs kubernetes-downwardapi-volume-example +``` + +输出显示`labels`和`annotations`文件的内容: + +```shell +cluster="test-cluster1" +rack="rack-22" +zone="us-est-coast" + +build="two" +builder="john-doe" +``` + +进入Pod中运行的容器,打开一个shell: + +``` +kubectl exec -it kubernetes-downwardapi-volume-example -- sh +``` + +在该shell中,查看`labels`文件: + +```shell +/# cat /etc/labels +``` + +输出显示Pod的所有labels都已写入`labels`文件。 + +```shell +cluster="test-cluster1" +rack="rack-22" +zone="us-est-coast" +``` + +同样,查看`annotations`文件: + +```shell +/# cat /etc/annotations +``` + +查看`/etc`目录下的文件: + +```shell +/# ls -laR /etc +``` + +在输出中可以看到,`labels` 和 `annotations`文件都在一个临时子目录中:这个例子,`..2982_06_02_21_47_53.299460680`。在`/etc`目录中,`..data`是一个指向临时子目录 +的符号链接。`/etc`目录中,`labels` 和 `annotations`也是符号链接。 + +``` +drwxr-xr-x ... Feb 6 21:47 ..2982_06_02_21_47_53.299460680 +lrwxrwxrwx ... Feb 6 21:47 ..data -> ..2982_06_02_21_47_53.299460680 +lrwxrwxrwx ... Feb 6 21:47 annotations -> ..data/annotations +lrwxrwxrwx ... Feb 6 21:47 labels -> ..data/labels + +/etc/..2982_06_02_21_47_53.299460680: +total 8 +-rw-r--r-- ... Feb 6 21:47 annotations +-rw-r--r-- ... Feb 6 21:47 labels +``` + +用符号链接可实现元数据的动态原子刷新;更新将写入一个新的临时目录,然后`..data`符号链接完成原子更新,通过使用[rename(2)](http://man7.org/linux/man-pages/man2/rename.2.html)。 + +退出shell: + +```shell +/# exit +``` + +## 存储容器字段 + +前面的练习中,你将Pod字段保存到DownwardAPIVolumeFile中。接下来这个练习,你将存储容器字段。这里是包含一个容器的pod的配置文件: + +{% include code.html language="yaml" file="dapi-volume-resources.yaml" ghlink="/cn/docs/tasks/inject-data-application/dapi-volume-resources.yaml" %} + +在这个配置文件中,你可以看到Pod有一个`downwardAPI`类型的Volume,并且挂载到容器的`/etc`目录。 + +查看`downwardAPI`下面的`items`数组。每个数组元素都是一个DownwardAPIVolumeFile。 + +第一个元素指定名为`client-container`的容器中`limits.cpu`字段的值应保存在名为`cpu_limit`的文件中。 + +创建Pod: + +```shell +kubectl create -f https://k8s.io/cn/docs/tasks/inject-data-application/dapi-volume-resources.yaml +``` + +进入Pod中运行的容器,打开一个shell: + +``` +kubectl exec -it kubernetes-downwardapi-volume-example-2 -- sh +``` + +在shell中,查看`cpu_limit`文件: + +```shell +/# cat /etc/cpu_limit +``` +你可以使用同样的命令查看`cpu_request`, `mem_limit` 和`mem_request` 文件. + +{% endcapture %} + +{% capture discussion %} + +## Capabilities of the Downward API + +下面这些信息可以通过环境变量和DownwardAPIVolumeFiles提供给容器: + +* node的name +* node的IP +* Pod的name +* Pod的namespace +* Pod的IP address +* Pod的service account name +* Pod的UID +* Container的CPU limit +* Container的CPU request +* Container的memory limit +* Container的memory request + +此外,以下信息可通过DownwardAPIVolumeFiles获得: + +* Pod的labels +* Pod的annotations + +**Note:** 如果容器未指定CPU和memory limits,则Downward API默认为节点可分配值。 +{: .note} + +## 投射密钥到指定路径并且指定文件权限 + +你可以将密钥投射到指定路径并且指定每个文件的访问权限。更多信息,请参阅[Secrets](/docs/concepts/configuration/secret/). 
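结合上面两节的内容，下面给出一个最小化的示意清单(它不是本教程引用的文件；Pod 名称、文件名 `pod_name`、挂载路径和权限取值都是本文为演示而假设的)，展示如何在 `downwardAPI` 卷中用 `defaultMode` 和单个文件的 `mode` 控制所投射文件的访问权限:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-volume-mode-example
spec:
  containers:
    - name: client-container
      image: gcr.io/google_containers/busybox
      command: ["sh", "-c", "ls -l /etc/podinfo; sleep 3600"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
          readOnly: true
  volumes:
    - name: podinfo
      downwardAPI:
        # defaultMode 为未单独指定 mode 的文件设置默认权限，
        # 420 是八进制 0644 的十进制写法
        defaultMode: 420
        items:
          # 文件名 pod_name 与权限 256(即八进制 0400，仅属主可读)均为示意取值
          - path: "pod_name"
            mode: 256
            fieldRef:
              fieldPath: metadata.name
```

与上文的 `labels` 和 `annotations` 文件一样，这些文件也通过 `..data` 符号链接机制原子地更新。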
+ +## Downward API的动机 + +对于容器来说,有时候拥有自己的信息是很有用的,可避免与Kubernetes过度耦合。Downward API使得容器使用自己或者集群的信息,而不必通过Kubernetes client或API server。 + +一个例子是有一个现有的应用假定要用一个非常熟悉的环境变量来保存一个唯一标识。一种可能是给应用增加处理层,但这样是冗余和易出错的,而且它违反了低耦合的目标。更好的选择是使用Pod名称作为标识,把Pod名称注入这个环境变量中。 +{% endcapture %} + + +{% capture whatsnext %} + +* [PodSpec](/docs/resources-reference/{{page.version}}/#podspec-v1-core) +* [Volume](/docs/resources-reference/{{page.version}}/#volume-v1-core) +* [DownwardAPIVolumeSource](/docs/resources-reference/{{page.version}}/#downwardapivolumesource-v1-core) +* [DownwardAPIVolumeFile](/docs/resources-reference/{{page.version}}/#downwardapivolumefile-v1-core) +* [ResourceFieldSelector](/docs/resources-reference/{{page.version}}/#resourcefieldselector-v1-core) + +{% endcapture %} + +{% include templates/task.md %} diff --git a/cn/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md b/cn/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md new file mode 100644 index 0000000000000..1eb2c074e1b51 --- /dev/null +++ b/cn/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md @@ -0,0 +1,153 @@ +--- +title: 通过环境变量将Pod信息呈现给容器 +--- + +{% capture overview %} + +此页面显示了Pod如何使用环境变量把自己的信息呈现给pod中运行的容器。环境变量可以呈现pod的字段和容器字段。 + +有两种方式可以将Pod和Container字段呈现给运行中的容器: +环境变量 和[DownwardAPIVolumeFiles](/docs/resources-reference/{{page.version}}/#downwardapivolumefile-v1-core). +这两种呈现Pod和Container字段的方式都称为*Downward API*。 + +{% endcapture %} + + +{% capture prerequisites %} + +{% include task-tutorial-prereqs.md %} + +{% endcapture %} + + +{% capture steps %} + +## Downward API + +有两种方式可以将Pod和Container字段呈现给运行中的容器: + +* 环境变量 +* [DownwardAPIVolumeFiles](/docs/resources-reference/{{page.version}}/#downwardapivolumefile-v1-core) + +这两种呈现Pod和Container字段的方式都称为*Downward API*。 + + +## 用Pod字段作为环境变量的值 + +在这个练习中,你将创建一个包含一个容器的pod。这是该pod的配置文件: + +{% include code.html language="yaml" file="dapi-envars-pod.yaml" ghlink="/cn/docs/tasks/inject-data-application/dapi-envars-pod.yaml" %} + +这个配置文件中,你可以看到五个环境变量。`env`字段是一个[EnvVars](/docs/resources-reference/{{page.version}}/#envvar-v1-core)类型的数组。 +数组中第一个元素指定`MY_NODE_NAME`这个环境变量从Pod的`spec.nodeName`字段获取变量值。同样,其它环境变量也是从Pod的字段获取它们的变量值。 + +**注意:** 本示例中的字段是Pod字段,不是Pod中容器的字段。 +{: .note} + +创建Pod: + +```shell +kubectl create -f https://k8s.io/cn/docs/tasks/inject-data-application/dapi-envars-pod.yaml +``` + +验证Pod中的容器运行正常: + +``` +kubectl get pods +``` + +查看容器日志: + +``` +kubectl logs dapi-envars-fieldref +``` + +输出信息显示了所选择的环境变量的值: + +``` +minikube +dapi-envars-fieldref +default +172.17.0.4 +default +``` + +要了解为什么这些值在日志中,请查看配置文件中的`command` 和 `args`字段。 当容器启动时,它将五个环境变量的值写入stdout。每十秒重复执行一次。 + +接下来,进入Pod中运行的容器,打开一个shell: + +``` +kubectl exec -it dapi-envars-fieldref -- sh +``` + +在shell中,查看环境变量: + +``` +/# printenv +``` + +输出信息显示环境变量已经指定为Pod的字段的值。 + +``` +MY_POD_SERVICE_ACCOUNT=default +... +MY_POD_NAMESPACE=default +MY_POD_IP=172.17.0.4 +... +MY_NODE_NAME=minikube +... 
MY_POD_NAME=dapi-envars-fieldref
```

## 用容器字段作为环境变量的值

前面的练习中，你将Pod字段作为环境变量的值。接下来这个练习，你将用容器字段作为环境变量的值。这里是包含一个容器的pod的配置文件:

{% include code.html language="yaml" file="dapi-envars-container.yaml" ghlink="/cn/docs/tasks/inject-data-application/dapi-envars-container.yaml" %}

这个配置文件中，你可以看到四个环境变量。`env`字段是一个[EnvVars](/docs/resources-reference/{{page.version}}/#envvar-v1-core)类型的数组。数组中第一个元素指定`MY_CPU_REQUEST`这个环境变量从容器的`requests.cpu`字段获取变量值。同样，其它环境变量也是从容器的字段获取它们的变量值。

创建Pod:

```shell
kubectl create -f https://k8s.io/cn/docs/tasks/inject-data-application/dapi-envars-container.yaml
```

验证Pod中的容器运行正常:

```
kubectl get pods
```

查看容器日志:

```
kubectl logs dapi-envars-resourcefieldref
```

输出信息显示了所选择的环境变量的值:

```
1
1
33554432
67108864
```

{% endcapture %}

{% capture whatsnext %}

* [给容器定义环境变量](/docs/tasks/configure-pod-container/define-environment-variable-container/)
* [PodSpec](/docs/resources-reference/{{page.version}}/#podspec-v1-core)
* [Container](/docs/resources-reference/{{page.version}}/#container-v1-core)
* [EnvVar](/docs/resources-reference/{{page.version}}/#envvar-v1-core)
* [EnvVarSource](/docs/resources-reference/{{page.version}}/#envvarsource-v1-core)
* [ObjectFieldSelector](/docs/resources-reference/{{page.version}}/#objectfieldselector-v1-core)
* [ResourceFieldSelector](/docs/resources-reference/{{page.version}}/#resourcefieldselector-v1-core)

{% endcapture %}


{% include templates/task.md %}
From b23b91a3abfdfe1882aaba25fae55273c9a6373d Mon Sep 17 00:00:00 2001
From: Lion-Wei
Date: Mon, 25 Sep 2017 11:22:16 +0800
Subject: [PATCH 37/87] update network-policy by adding egress and ipBlock usage (#5473)

---
 docs/concepts/services-networking/network-policies.md | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/docs/concepts/services-networking/network-policies.md b/docs/concepts/services-networking/network-policies.md
index 0371cfd3cc70e..22518d48765e7 100644
--- a/docs/concepts/services-networking/network-policies.md
+++ b/docs/concepts/services-networking/network-policies.md
@@ -41,6 +41,10 @@ spec:
       role: db
   ingress:
   - from:
+    - ipBlock:
+        cidr: 172.17.0.0/16
+        except:
+        - 172.17.1.0/24
     - namespaceSelector:
         matchLabels:
           project: myproject
@@ -62,6 +66,11 @@ __podSelector__: Each `NetworkPolicy` includes a `podSelector` which selects the

 __ingress__: Each `NetworkPolicy` includes a list of whitelist `ingress` rules. Each rule allows traffic which matches both the `from` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port, from either of two sources, the first specified via a `namespaceSelector` and the second specified via a `podSelector`.

+__ipBlock__: `ipBlock` describes a particular CIDR range from which traffic is
+allowed to reach the pods matched by a NetworkPolicySpec's `podSelector`. The
+`except` entry is a list of CIDRs that should be excluded from that IP block;
+`except` values are rejected if they fall outside the `cidr` range.
+
 So, the example NetworkPolicy:

 1.
isolates "role=db" pods in the "default" namespace (if they weren't already isolated) From 492d7e32eb5d8168d91f2365bf55c1cd7fc5fdba Mon Sep 17 00:00:00 2001 From: houjun41544 Date: Mon, 25 Sep 2017 11:44:17 +0800 Subject: [PATCH 38/87] modify --- ...nward-api-volume-expose-pod-information.md | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/cn/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md b/cn/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md index 98a738be69efc..bbb7162888535 100644 --- a/cn/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md +++ b/cn/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md @@ -164,22 +164,22 @@ kubectl exec -it kubernetes-downwardapi-volume-example-2 -- sh 下面这些信息可以通过环境变量和DownwardAPIVolumeFiles提供给容器: -* node的name -* node的IP -* Pod的name -* Pod的namespace -* Pod的IP address -* Pod的service account name +* 节点名称 +* 节点IP +* Pod名称 +* Pod名字空间 +* Pod IP地址 +* Pod服务帐号名称 * Pod的UID -* Container的CPU limit -* Container的CPU request -* Container的memory limit -* Container的memory request +* 容器的CPU约束 +* 容器的CPU请求值 +* 容器的内存约束 +* 容器的内存请求值 此外,以下信息可通过DownwardAPIVolumeFiles获得: -* Pod的labels -* Pod的annotations +* Pod的标签 +* Pod的注释 **Note:** 如果容器未指定CPU和memory limits,则Downward API默认为节点可分配值。 {: .note} @@ -190,7 +190,7 @@ kubectl exec -it kubernetes-downwardapi-volume-example-2 -- sh ## Downward API的动机 -对于容器来说,有时候拥有自己的信息是很有用的,可避免与Kubernetes过度耦合。Downward API使得容器使用自己或者集群的信息,而不必通过Kubernetes client或API server。 +对于容器来说,有时候拥有自己的信息是很有用的,可避免与Kubernetes过度耦合。Downward API使得容器使用自己或者集群的信息,而不必通过Kubernetes客户端或API服务器。 一个例子是有一个现有的应用假定要用一个非常熟悉的环境变量来保存一个唯一标识。一种可能是给应用增加处理层,但这样是冗余和易出错的,而且它违反了低耦合的目标。更好的选择是使用Pod名称作为标识,把Pod名称注入这个环境变量中。 {% endcapture %} From 179b35d2e3343cdc96675aa159477ce6a02a76ed Mon Sep 17 00:00:00 2001 From: Fabrizio Milo Date: Sun, 24 Sep 2017 20:46:05 -0700 Subject: [PATCH 39/87] update wrong link (#5596) --- docs/tasks/access-application-cluster/web-ui-dashboard.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tasks/access-application-cluster/web-ui-dashboard.md b/docs/tasks/access-application-cluster/web-ui-dashboard.md index d650f69545401..f77da393e5d94 100644 --- a/docs/tasks/access-application-cluster/web-ui-dashboard.md +++ b/docs/tasks/access-application-cluster/web-ui-dashboard.md @@ -20,7 +20,7 @@ Dashboard also provides information on the state of Kubernetes resources in your The Dashboard UI is not deployed by default. 
To deploy it, run the following command: ``` -kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml +kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml ``` ## Accessing the Dashboard UI From f3279dcc94d5dabc0bef68296bd37fd7802b7925 Mon Sep 17 00:00:00 2001 From: lichuqiang Date: Mon, 25 Sep 2017 11:57:00 +0800 Subject: [PATCH 40/87] translate doc network-policies into chinese --- .../services-networking/network-policies.md | 104 ++++++++++++++++++ 1 file changed, 104 insertions(+) create mode 100644 cn/docs/concepts/services-networking/network-policies.md diff --git a/cn/docs/concepts/services-networking/network-policies.md b/cn/docs/concepts/services-networking/network-policies.md new file mode 100644 index 0000000000000..ae8ccf69ea44b --- /dev/null +++ b/cn/docs/concepts/services-networking/network-policies.md @@ -0,0 +1,104 @@ +--- +approvers: +- thockin +- caseydavenport +- danwinship +title: 网络策略 +--- + +* TOC +{:toc} + +网络策略(NetworkPolicy)是一种关于pod间及pod与其他网络端点间所允许的通信规则的规范。 + +`NetworkPolicy` 资源使用标签选择pod,并定义选定pod所允许的通信规则。 + +## 前提 + +网络策略通过网络插件来实现,所以用户必须使用支持 `NetworkPolicy` 的网络解决方案 - 简单地创建资源对象,而没有控制器来使它生效的话,是没有任何作用的。 + +## 隔离和非隔离的Pod + +默认情况下,Pod是非隔离的,它们接受任何来源的流量。 + +Pod可以通过相关的网络策略进行隔离。一旦命名空间中有网络策略选择了特定的Pod,该Pod会拒绝网络策略所不允许的连接。 (命名空间下其他未被网络策略所选择的Pod会继续接收所有的流量) + +## `NetworkPolicy` 资源 + +通过[api参考](/docs/api-reference/{{page.version}}/#networkpolicy-v1-networking)来了解资源定义。 + +下面是一个 `NetworkPolicy` 的示例: + +```yaml +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: test-network-policy + namespace: default +spec: + podSelector: + matchLabels: + role: db + ingress: + - from: + - namespaceSelector: + matchLabels: + project: myproject + - podSelector: + matchLabels: + role: frontend + ports: + - protocol: TCP + port: 6379 +``` + +除非选择支持网络策略的网络解决方案,否则将上述示例发送到API服务器没有任何效果。 + +__必填字段__: 与所有其他的Kubernetes配置一样,`NetworkPolicy` 需要 `apiVersion`、 `kind`和 `metadata` 字段。 关于配置文件操作的一般信息,请参考 [这里](/docs/user-guide/simple-yaml)、 [这里](/docs/user-guide/configuring-containers)和 [这里](/docs/user-guide/working-with-resources)。 + +__spec__: `NetworkPolicy` [spec](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status) 中包含了在一个命名空间中定义特定网络策略所需的所有信息 + +__podSelector__: 每个 `NetworkPolicy` 都包括一个 `podSelector` ,它对该策略所应用的一组Pod进行选择。因为 `NetworkPolicy` 目前只支持定义 `ingress` 规则,这里的 `podSelector` 本质上是为该策略定义 "目标pod" 。示例中的策略选择带有 "role=db" 标签的pod。空的 `podSelector` 选择命名空间下的所有pod。 + +__ingress__: 每个 `NetworkPolicy` 包含一个 `ingress` 规则的白名单列表。 (其中的)规则允许同时匹配 `from` 和 `ports` 部分的流量。示例策略中包含一条简单的规则: 它匹配一个单一的端口,来自两个来源中的一个, 第一个通过 `namespaceSelector` 指定,第二个通过 `podSelector` 指定。 + +所以,示例网络策略: + +1. 隔离 "default" 命名空间下 "role=db" 的pod (如果它们不是已经被隔离的话)。 +2. 允许从 "default" 命名空间下带有 "role=frontend" 标签的pod到 "default" 命名空间下的pod的6379 TCP端口的连接。 +3. 允许从带有 "project=myproject" 标签的命名空间下的任何pod到 "default" 命名空间下的pod的6379 TCP端口的连接。 + +查看 [网络策略入门指南](/docs/getting-started-guides/network-policy/walkthrough) 了解更多示例。 + +## 默认策略 + +用户可以通过创建一个选择所有Pod,但是不允许任何通信的网络策略,来为一个命名空间创建 "默认的" 隔离策略: + +```yaml +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: default-deny +spec: + podSelector: +``` + +这可以确保即使Pod在未被其他任何网络策略所选择的情况下仍能被隔离。 + +或者,如果用户希望允许一个命名空间下的所有Pod的所有通信 (即使已经添加了策略,使得一些pod被 "隔离"),仍可以创建一个明确允许所有通信的策略: + +```yaml +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: allow-all +spec: + podSelector: + ingress: + - {} +``` + +## 下一步呢? 
- 查看 [声明网络策略](/docs/tasks/administer-cluster/declare-network-policy/)
  来进行更多的示例演练

From b10df80e5537e0af9a1a1321b72d78e5ab45ffd8 Mon Sep 17 00:00:00 2001
From: houjun41544
Date: Thu, 14 Sep 2017 10:39:39 +0800
Subject: [PATCH 41/87] Add static-pod.md

---
 .../tasks/administer-cluster/static-pod.md | 126 ++++++++++++++++++
 1 file changed, 126 insertions(+)
 create mode 100644 cn/docs/tasks/administer-cluster/static-pod.md

diff --git a/cn/docs/tasks/administer-cluster/static-pod.md b/cn/docs/tasks/administer-cluster/static-pod.md
new file mode 100644
index 0000000000000..1a59fd10df1dd
--- /dev/null
+++ b/cn/docs/tasks/administer-cluster/static-pod.md
@@ -0,0 +1,126 @@
---
approvers:
- jsafrane
title: 静态Pods
---

**如果你正在运行Kubernetes集群并且使用静态pods在每个节点上起一个pod，那么最好使用[DaemonSet](/docs/concepts/workloads/controllers/daemonset/)!**

*静态pods*直接由特定节点上的kubelet进程来管理，不通过主控节点上的API服务器。静态pod不关联任何replicationcontroller，它由kubelet进程自己来监控，当pod崩溃时重启该pod。对于静态pod没有健康检查。静态pod始终绑定在某一个kubelet上，并且始终运行在同一个节点上。

Kubelet自动为每一个静态pod在Kubernetes的API服务器上创建一个镜像Pod(Mirror Pod)，因此可以在API服务器查询到该pod，但是不被API 服务器控制(例如不能删除)。

## 静态pod创建

静态pod有两种创建方式：用配置文件或者通过HTTP。

### 配置文件

配置文件就是放在特定目录下的标准的JSON或YAML格式的pod定义文件。用`kubelet --pod-manifest-path=<目录>`来启动kubelet进程，kubelet将会周期扫描这个目录，根据这个目录下出现或消失的YAML/JSON文件来创建或删除静态pod。

下面例子用静态pod的方式启动一个nginx的Web服务器:

1. 选择一个节点来运行静态pod。这个例子中就是`my-node1`。

    ```
    [joe@host ~] $ ssh my-node1
    ```

2. 选择一个目录，例如/etc/kubelet.d，把web服务器的pod定义文件放在这个目录下，例如`/etc/kubelet.d/static-web.yaml`:

    ```
    [root@my-node1 ~] $ mkdir /etc/kubelet.d/
    [root@my-node1 ~] $ cat <<EOF >/etc/kubelet.d/static-web.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: static-web
      labels:
        role: myrole
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - name: web
              containerPort: 80
              protocol: TCP
    EOF
    ```

3. 配置节点上的kubelet使用这个目录，kubelet启动时增加`--pod-manifest-path=/etc/kubelet.d/`参数。
   如果是Fedora系统，在Kubelet配置文件/etc/kubernetes/kubelet中添加下面这行:

   ```
   KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --pod-manifest-path=/etc/kubelet.d/"
   ```

   如果是其它Linux发行版或者其它Kubernetes安装方式，配置方法可能会不一样。

4. 重启kubelet。如果是Fedora系统，就是:

    ```
    [root@my-node1 ~] $ systemctl restart kubelet
    ```

## 通过HTTP创建静态Pods

Kubelet周期地从`--manifest-url=<URL>`参数指定的地址下载文件，并且把它翻译成JSON/YAML格式的pod定义。此后的操作方式与`--pod-manifest-path=<目录>`相同，kubelet会不时地重新下载该文件，当文件变化时对应地终止或启动静态pod(如下)。(下文给出一个最小的配置示意。)

## 静态pods的动作行为

kubelet启动时，由`--pod-manifest-path=<目录>`或`--manifest-url=<URL>`参数指定的所有pod定义都会自动创建，例如，我们示例中的static-web。 (可能要花些时间拉取nginx镜像，耐心等待...)
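如果采用的是上一节的 HTTP 方式，对应的 kubelet 配置大致如下(仅为示意；其中的 URL `http://my-web-server/static-web.yaml` 是本文假设的示例值，并非上文步骤的一部分):

```
# 仅为示意: 用 --manifest-url= 代替 --pod-manifest-path=，URL 为假设值
KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --manifest-url=http://my-web-server/static-web.yaml"
```

无论使用哪种方式，等镜像拉取完成后，都可以在节点上看到对应的容器已经运行: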
```shell
[joe@my-node1 ~] $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f6d05272b57e nginx:latest "nginx" 8 minutes ago Up 8 minutes k8s_web.6f802af4_static-web-fk-node1_default_67e24ed9466ba55986d120c867395f3c_378e5f3c
```

如果我们查看Kubernetes的API服务器(运行在主机 `my-master`)，可以看到这里创建了一个新的mirror-pod:

```shell
[joe@host ~] $ ssh my-master
[joe@my-master ~] $ kubectl get pods
NAME                  READY     STATUS    RESTARTS   AGE
static-web-my-node1   1/1       Running   0          2m
```

静态pod的标签会传递给镜像Pod，可以用来过滤或筛选。

需要注意的是，我们不能通过API服务器来删除静态pod(例如，通过 [`kubectl`](/docs/user-guide/kubectl/) 命令)，kubelet不会删除它。

```shell
[joe@my-master ~] $ kubectl delete pod static-web-my-node1
pods/static-web-my-node1
[joe@my-master ~] $ kubectl get pods
NAME                  READY     STATUS    RESTARTS   AGE
static-web-my-node1   1/1       Running   0          12s
```

返回`my-node1`主机，我们尝试手动终止容器，可以看到kubelet很快就会自动重启容器。

```shell
[joe@host ~] $ ssh my-node1
[joe@my-node1 ~] $ docker stop f6d05272b57e
[joe@my-node1 ~] $ sleep 20
[joe@my-node1 ~] $ docker ps
CONTAINER ID        IMAGE         COMMAND                CREATED       ...
5b920cbaf8b1        nginx:latest  "nginx -g 'daemon of   2 seconds ago ...
```

## 静态pods的动态增加和删除

运行中的kubelet周期扫描配置的目录(我们这个例子中就是`/etc/kubelet.d`)下文件的变化，当这个目录中有文件出现或消失时创建或删除pods。

```shell
[joe@my-node1 ~] $ mv /etc/kubelet.d/static-web.yaml /tmp
[joe@my-node1 ~] $ sleep 20
[joe@my-node1 ~] $ docker ps
// no nginx container is running
[joe@my-node1 ~] $ mv /tmp/static-web.yaml /etc/kubelet.d/
[joe@my-node1 ~] $ sleep 20
[joe@my-node1 ~] $ docker ps
CONTAINER ID        IMAGE         COMMAND                CREATED        ...
e7a62e3427f1        nginx:latest  "nginx -g 'daemon of   27 seconds ago
```
From 4b381be9d1e0db53dfe67b69600080b744954926 Mon Sep 17 00:00:00 2001
From: houjun41544
Date: Mon, 25 Sep 2017 14:40:07 +0800
Subject: [PATCH 42/87] Modify

---
 cn/docs/tasks/administer-cluster/static-pod.md | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/cn/docs/tasks/administer-cluster/static-pod.md b/cn/docs/tasks/administer-cluster/static-pod.md
index 1a59fd10df1dd..7c32e2d26d487 100644
--- a/cn/docs/tasks/administer-cluster/static-pod.md
+++ b/cn/docs/tasks/administer-cluster/static-pod.md
@@ -4,11 +4,11 @@ approvers:
 title: 静态Pods
 ---

-**如果你正在运行Kubernetes集群并且使用静态pods在每个节点上起一个pod，那么最好使用[DaemonSet](/docs/concepts/workloads/controllers/daemonset/)!**
+**如果你正在运行Kubernetes集群并且使用静态pods在每个节点上起一个pod，那么最好使用[DaemonSet](/cn/docs/concepts/workloads/controllers/daemonset/)!**

 *静态pods*直接由特定节点上的kubelet进程来管理，不通过主控节点上的API服务器。静态pod不关联任何replicationcontroller，它由kubelet进程自己来监控，当pod崩溃时重启该pod。对于静态pod没有健康检查。静态pod始终绑定在某一个kubelet上，并且始终运行在同一个节点上。

-Kubelet自动为每一个静态pod在Kubernetes的API服务器上创建一个镜像Pod(Mirror Pod)，因此可以在API服务器查询到该pod，但是不被API 服务器控制(例如不能删除)。
+Kubelet自动为每一个静态pod在Kubernetes的API服务器上创建一个镜像Pod(Mirror Pod)，因此可以在API服务器查询到该pod，但是不被API服务器控制(例如不能删除)。

 ## 静态pod创建

@@ -48,14 +48,13 @@ Kubelet自动为每一个静态pod在Kubernetes的API服务器上创建一个镜
    EOF
    ```

-3. 配置节点上的kubelet使用这个目录，kubelet启动时增加`--pod-manifest-path=/etc/kubelet.d/`参数。
-   如果是Fedora系统，在Kubelet配置文件/etc/kubernetes/kubelet中添加下面这行:
+3.配置节点上的kubelet使用这个目录，kubelet启动时增加`--pod-manifest-path=/etc/kubelet.d/`参数。如果是Fedora系统，在Kubelet配置文件/etc/kubernetes/kubelet中添加下面这行:

 ```
 KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --pod-manifest-path=/etc/kubelet.d/"
 ```

-   如果是其它Linux发行版或者其它Kubernetes安装方式，配置方法可能会不一样。
+如果是其它Linux发行版或者其它Kubernetes安装方式，配置方法可能会不一样。

 4.
重启kubelet。如果是Fedora系统,就是: @@ -77,7 +76,7 @@ CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAME f6d05272b57e nginx:latest "nginx" 8 minutes ago Up 8 minutes k8s_web.6f802af4_static-web-fk-node1_default_67e24ed9466ba55986d120c867395f3c_378e5f3c ``` -如果我们查看Kubernetes的API服务器(运行在主机 `my-master`),可以看到这里创建了一个新的mirror-pod: +如果我们查看Kubernetes的API服务器(运行在主机 `my-master`),可以看到这里创建了一个新的镜像Pod: ```shell [joe@host ~] $ ssh my-master From 771a2a40be4decfa7dc701f5ea955aaf12d80894 Mon Sep 17 00:00:00 2001 From: XuJun00192603 Date: Mon, 25 Sep 2017 22:10:38 +0800 Subject: [PATCH 43/87] ZTE-SH-CN-cluster-administration-federation-2017-09-25-13 --- .../cluster-administration/federation.md | 116 ++++++++++++++++++ 1 file changed, 116 insertions(+) create mode 100644 cn/docs/concepts/cluster-administration/federation.md diff --git a/cn/docs/concepts/cluster-administration/federation.md b/cn/docs/concepts/cluster-administration/federation.md new file mode 100644 index 0000000000000..f9a51dfb6c779 --- /dev/null +++ b/cn/docs/concepts/cluster-administration/federation.md @@ -0,0 +1,116 @@ +--- +title: 联邦 +--- + +{% capture overview %} +本页面阐明了为何以及如何使用联邦创建Kubernetes集群。 +{% endcapture %} + +{% capture body %} +## 为何使用联邦 + +联邦可以使多个集群的管理简单化。它提供了两个主要构件模块: + + * 跨集群同步资源:联邦能够让资源在多个集群中同步。例如,你可以确保在多个集群中存在同样的部署。 + * 跨集群发现:联邦能够在所有集群的后端自动配置DNS服务和负载均衡。例如,通过多个集群的后端,你可以确保全局的VIP或DNS记录可用。 + +联邦技术的其他应用场景: + +* 高可用性:通过跨集群分摊负载,自动配置DNS服务和负载均衡,联邦将集群失败所带来的影响降到最低。 +* 避免供应商锁定:跨集群使迁移应用程序变得更容易,联邦服务避免了供应商锁定。 + + +只有在多个集群的场景下联邦服务才是有帮助的。这里列出了一些你会使用多个集群的原因: + +* 降低延迟:在多个区域含有集群,可使用离用户最近的集群来服务用户,从而最大限度降低延迟。 +* 故障隔离:对于故障隔离,也许有多个小的集群比有一个大的集群要更好一些(例如:一个云供应商的不同可用域里有多个集群)。详细信息请参阅[多集群指南](/docs/admin/multi-cluster)。 +* 可伸缩性:对于单个kubernetes集群是有伸缩性限制的(但对于大多数用户来说并非如此。更多细节参考[Kubernetes扩展和性能目标](https://git.k8s.io/community/sig-scalability/goals.md))。 +* [混合云](#混合云的能力):可以有多个集群,它们分别拥有不同的云供应商或者本地数据中心。 + +### 注意事项 + +虽然联邦有很多吸引人的场景,但这里还是有一些需要关注的事项: + +* 增加网络的带宽和损耗:联邦控制面会监控所有的集群,来确保集群的当前状态与预期一致。那么当这些集群运行在一个或者多个云提供者的不同区域中,则会带来重大的网络损耗。 +* 降低集群的隔离:当联邦控制面中存在一个故障时,会影响所有的集群。把联邦控制面的逻辑降到最小可以缓解这个问题。 无论何时,它都是kubernetes集群里控制面的代表。设计和实现也使其变得更安全,避免多集群运行中断。 +* 完整性:联邦项目相对较新,还不是很成熟。不是所有资源都可用,且很多资源才刚刚开始。[Issue 38893](https://github.com/kubernetes/kubernetes/issues/38893) 列举了一些团队正忙于解决的系统已知问题。 + +### 混合云的能力 + +Kubernetes集群里的联邦包括运行在不同云供应商上的集群(例如,谷歌云、亚马逊),和本地部署的集群(例如,OpenStack)。只需在适当的云供应商和/或位置创建所需的所有集群,并将每个集群的API endpoint和凭据注册到您的联邦API服务中(详情参考[联邦管理指南](/docs/admin/federation/))。 + +在此之后,您的[API资源](#api资源)就可以跨越不同的集群和云供应商。 + +## 建立联邦 + +若要能联合多个集群,首先需要建立一个联邦控制面。参照[安装指南](/docs/tutorials/federation/set-up-cluster-federation-kubefed/) 建立联邦控制面。 + +## API资源 + +控制面建立完成后,就可以开始创建联邦API资源了。 +以下指南详细介绍了一些资源: + +* [Cluster](/docs/tasks/administer-federation/cluster/) +* [ConfigMap](/docs/tasks/administer-federation/configmap/) +* [DaemonSets](/docs/tasks/administer-federation/daemonset/) +* [Deployment](/docs/tasks/administer-federation/deployment/) +* [Events](/docs/tasks/administer-federation/events/) +* [Ingress](/docs/tasks/administer-federation/ingress/) +* [Namespaces](/docs/tasks/administer-federation/namespaces/) +* [ReplicaSets](/docs/tasks/administer-federation/replicaset/) +* [Secrets](/docs/tasks/administer-federation/secret/) +* [Services](/docs/concepts/cluster-administration/federation-service-discovery/) + +[API参考文档](/docs/reference/federation/)列举了联邦API服务支持的所有资源。 + +## 级联删除 + +Kubernetes1.6版本支持联邦资源级联删除。使用级联删除,即当删除联邦控制面的一个资源时,也删除了所有底层集群中的相应资源。 + +当使用REST API时,级联删除功能不是默认开启的。若使用REST API从联邦控制面删除一个资源时,要开启级联删除功能,即需配置选项 `DeleteOptions.orphanDependents=false`。使用`kubectl delete`使级联删除功能默认开启。使用`kubectl 
delete --cascade=false`禁用级联删除功能。 + +注意:Kubernetes1.5版本开始支持联邦资源子集的级联删除。 + +## 单个集群的范围 + +对于IaaS供应商如谷歌计算引擎或亚马逊网络服务,一个虚拟机存在于一个[域](https://cloud.google.com/compute/docs/zones)或[可用域](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html)中。 +我们建议一个Kubernetes集群里的所有虚机应该在相同的可用域里,因为: + + - 与单一的全局Kubernetes集群对比,该方式有较少的单点故障。 + - 与跨可用域的集群对比,该方式更容易推断单区域集群的可用性属性。 + - 当Kubernetes开发者设计一个系统(例如,对延迟、带宽或相关故障进行假设),他们也会假设所有的机器都在一个单一的数据中心,或者以其他方式紧密相连。 + +每个可用区域里包含多个集群当然是可以的,但是总的来说我们认为集群数越少越好。 +偏爱较少集群数的原因是: + + - 在某些情况下,在一个集群里有更多的节点,可以改进Pods的装箱问题(更少的资源碎片)。 + - 减少操作开销(尽管随着OPS工具和流程的成熟而降低了这块的优势)。 + - 为每个集群的固定资源花费降低开销,例如,使用apiserver的虚拟机(但是在全体集群开销中,中小型集群的开销占比要小的多)。 + +多集群的原因包括: + + - 严格的安全性策略要求隔离一类工作与另一类工作(但是,请参见下面的集群分割)。 + - 测试集群或其他集群软件直至最优的新Kubernetes版本发布。 + +## 选择合适的集群数 + +Kubernetes集群数量选择也许是一个相对静止的选择,因为对其重新审核的情况很少。相比之下,一个集群中的节点数和一个服务中的pods数可能会根据负载和增长频繁变化。 + +选择集群的数量,首先,需要决定哪些区域对于将要运行在Kubernetes上的服务,可以有足够的时间到达所有的终端用户(如果使用内容分发网络,则不需要考虑CDN-hosted内容的延迟需求)。法律问题也可能影响这一点。例如,拥有全球客户群的公司可能会对于在美国、欧盟、亚太和南非地区拥有集群起到决定权。使用`R`代表区域的数量。 + +其次,决定有多少集群在同一时间不可用,而一些仍然可用。使用`U`代表不可用的数量。如果不确定,最好选择1。 + +如果允许负载均衡在集群故障发生时将通信引导到任何区域,那么至少需要较大的`R`或`U + 1`集群。若非如此(例如,若要在集群故障发生时确保所有用户的低延迟),则需要`R * (U + 1)`集群(在每一个`R`区域里都有`U + 1`)。在任何情况下,尝试将每个集群放在不同的区域中。 + +最后,如果你的集群需求超过一个Kubernetes集群推荐的最大节点数,那么你可能需要更多的集群。Kubernetes1.3版本支持多达1000个节点的集群规模。 + +{% endcapture %} + +{% capture whatsnext %} +* 进一步学习[联邦提案](https://github.com/kubernetes/community/blob/{{page.githubbranch}}/contributors/design-proposals/federation.md)。 +* 集群联邦参考该[配置指导](/docs/tutorials/federation/set-up-cluster-federation-kubefed/)。 +* 查看[Kubecon2016浅谈联邦](https://www.youtube.com/watch?v=pq9lbkmxpS8) +{% endcapture %} + +{% include templates/concept.md %} + From 35c7393849fb46505ee5a7493a8e7239b4fd0e6f Mon Sep 17 00:00:00 2001 From: Steve Perry Date: Mon, 25 Sep 2017 10:04:09 -0700 Subject: [PATCH 44/87] Update links to avoid redirects. 
(#5614) --- docs/admin/authorization/rbac.md | 2 +- docs/admin/federation/index.md | 4 +- docs/admin/high-availability/index.md | 2 +- .../apps/v1beta1/definitions.html | 2 +- docs/api-reference/batch/v1/definitions.html | 2 +- .../extensions/v1beta1/definitions.html | 2 +- docs/api-reference/v1.5/index.html | 18 +-- docs/api-reference/v1.6/index.html | 16 +-- docs/api-reference/v1.7/index.html | 2 +- docs/concepts/architecture/nodes.md | 2 +- .../cluster-administration-overview.md | 2 +- .../kubelet-garbage-collection.md | 2 +- .../manage-deployment.md | 2 +- .../concepts/configuration/assign-pod-node.md | 2 +- .../manage-compute-resources-container.md | 2 +- docs/concepts/configuration/overview.md | 4 +- .../container-environment-variables.md | 2 +- docs/concepts/overview/what-is-kubernetes.md | 2 +- .../working-with-objects/annotations.md | 2 +- .../overview/working-with-objects/labels.md | 4 +- .../working-with-objects/namespaces.md | 2 +- .../services-networking/network-policies.md | 2 +- docs/concepts/storage/persistent-volumes.md | 2 +- docs/concepts/storage/volumes.md | 4 +- docs/concepts/workloads/controllers/petset.md | 4 +- .../workloads/controllers/replicaset.md | 2 +- .../controllers/replicationcontroller.md | 4 +- docs/concepts/workloads/pods/disruptions.md | 2 +- .../workloads/pods/init-containers.md | 2 +- docs/concepts/workloads/pods/pod-overview.md | 2 +- docs/concepts/workloads/pods/pod.md | 2 +- docs/getting-started-guides/aws.md | 4 +- docs/getting-started-guides/binary_release.md | 2 +- .../centos/centos_manual_config.md | 6 +- docs/getting-started-guides/cloudstack.md | 4 +- .../coreos/bare_metal_offline.md | 6 +- docs/getting-started-guides/coreos/index.md | 6 +- docs/getting-started-guides/dcos.md | 4 +- .../fedora/fedora_ansible_config.md | 4 +- .../fedora/fedora_manual_config.md | 4 +- .../fedora/flannel_multi_node_cluster.md | 10 +- docs/getting-started-guides/gce.md | 8 +- docs/getting-started-guides/libvirt-coreos.md | 4 +- docs/getting-started-guides/mesos-docker.md | 4 +- docs/getting-started-guides/mesos/index.md | 4 +- docs/getting-started-guides/openstack-heat.md | 4 +- docs/getting-started-guides/ovirt.md | 4 +- .../photon-controller.md | 4 +- docs/getting-started-guides/rkt/index.md | 4 +- docs/getting-started-guides/scratch.md | 16 +-- docs/getting-started-guides/stackpoint.md | 6 +- docs/getting-started-guides/ubuntu/index.md | 30 ++--- .../ubuntu/installation.md | 18 +-- .../ubuntu/operational-considerations.md | 2 +- .../getting-started-guides/ubuntu/upgrades.md | 6 +- docs/getting-started-guides/vsphere.md | 4 +- docs/home/index.md | 2 +- .../extensions/v1beta1/definitions.html | 2 +- docs/resources-reference/v1.5/index.html | 18 +-- docs/resources-reference/v1.6/index.html | 16 +-- docs/resources-reference/v1.7/index.html | 2 +- docs/setup/independent/install-kubeadm.md | 2 +- docs/setup/pick-right-solution.md | 122 +++++++++--------- .../access-cluster.md | 6 +- .../web-ui-dashboard.md | 4 +- .../administer-cluster/access-cluster-api.md | 2 +- .../access-cluster-services.md | 4 +- .../calico-network-policy.md | 4 +- .../cilium-network-policy.md | 2 +- .../administer-cluster/cluster-management.md | 2 +- .../kube-router-network-policy.md | 2 +- .../namespaces-walkthrough.md | 2 +- docs/tasks/administer-cluster/namespaces.md | 2 +- .../administer-cluster/out-of-resource.md | 2 +- .../romana-network-policy.md | 4 +- .../weave-network-policy.md | 4 +- docs/tasks/administer-federation/events.md | 2 +- docs/tasks/administer-federation/ingress.md | 2 +- 
.../tasks/administer-federation/replicaset.md | 2 +- docs/tasks/administer-federation/secret.md | 2 +- .../assign-pods-nodes.md | 2 +- .../configure-persistent-volume-storage.md | 2 +- .../debug-application-introspection.md | 2 +- .../debug-stateful-set.md | 2 +- .../resource-usage-monitoring.md | 2 +- .../federation-service-discovery.md | 2 +- .../set-up-cluster-federation-kubefed.md | 4 +- .../set-up-coredns-provider-federation.md | 2 +- .../set-up-placement-policies-federation.md | 2 +- .../job/parallel-processing-expansion.md | 2 +- docs/tasks/manage-daemon/update-daemon-set.md | 2 +- .../horizontal-pod-autoscale-walkthrough.md | 2 +- .../run-replicated-stateful-application.md | 6 +- docs/tasks/tools/install-kubectl.md | 2 +- docs/tasks/tools/install-minikube.md | 2 +- docs/tools/index.md | 4 +- .../basic-stateful-set.md | 8 +- .../stateful-application/cassandra.md | 2 +- .../stateful-application/zookeeper.md | 10 +- .../stateless-application/hello-minikube.md | 4 +- docs/user-guide/docker-cli-to-kubectl.md | 2 +- docs/user-guide/update-demo/index.md.orig | 2 +- docs/user-guide/walkthrough/k8s201.md | 2 +- 103 files changed, 277 insertions(+), 277 deletions(-) diff --git a/docs/admin/authorization/rbac.md b/docs/admin/authorization/rbac.md index a5bc8e1cfe933..5ec06ebef321b 100644 --- a/docs/admin/authorization/rbac.md +++ b/docs/admin/authorization/rbac.md @@ -504,7 +504,7 @@ This is commonly used by add-on API servers for unified authentication and autho system:kube-dns kube-dns service account in the kube-system namespace -Role for the kube-dns component. +Role for the kube-dns component. system:node-bootstrapper diff --git a/docs/admin/federation/index.md b/docs/admin/federation/index.md index ecdcca87d974b..3dd005933d7be 100644 --- a/docs/admin/federation/index.md +++ b/docs/admin/federation/index.md @@ -17,10 +17,10 @@ This guide explains how to set up cluster federation that lets us control multip ## Prerequisites This guide assumes that you have a running Kubernetes cluster. -If you need to start a new cluster, see the [getting started guides](/docs/getting-started-guides/) for instructions on bringing a cluster up. +If you need to start a new cluster, see the [getting started guides](/docs/home/) for instructions on bringing a cluster up. To use the commands in this guide, you must download a Kubernetes release from the -[getting started binary releases](/docs/getting-started-guides/binary_release/) and +[getting started binary releases](/docs/home/binary_release/) and extract into a directory; all the commands in this guide are run from that directory. diff --git a/docs/admin/high-availability/index.md b/docs/admin/high-availability/index.md index 298598d851470..bb5fee0f21432 100644 --- a/docs/admin/high-availability/index.md +++ b/docs/admin/high-availability/index.md @@ -6,7 +6,7 @@ title: Building High-Availability Clusters This document describes how to build a high-availability (HA) Kubernetes cluster. This is a fairly advanced topic. Users who merely want to experiment with Kubernetes are encouraged to use configurations that are simpler to set up such -as [Minikube](/docs/getting-started-guides/minikube/) +as [Minikube](/docs/home/minikube/) or try [Google Container Engine](https://cloud.google.com/container-engine/) for hosted Kubernetes. Also, at this time high availability support for Kubernetes is not continuously tested in our end-to-end (e2e) testing. 
We will diff --git a/docs/api-reference/apps/v1beta1/definitions.html b/docs/api-reference/apps/v1beta1/definitions.html index 2f15ecf9070e7..c743e6d6f7d30 100755 --- a/docs/api-reference/apps/v1beta1/definitions.html +++ b/docs/api-reference/apps/v1beta1/definitions.html @@ -3620,7 +3620,7 @@

v1.PodSpec
nodeSelector
-NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/user-guide/node-selection
+NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/concepts/configuration/assign-pod-node/
false
object

    diff --git a/docs/api-reference/batch/v1/definitions.html b/docs/api-reference/batch/v1/definitions.html index 50f6f28e449bb..26d22a60365df 100755 --- a/docs/api-reference/batch/v1/definitions.html +++ b/docs/api-reference/batch/v1/definitions.html @@ -3609,7 +3609,7 @@

v1.PodSpec
nodeSelector
-NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/user-guide/node-selection
+NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/concepts/configuration/assign-pod-node/
false
object

    diff --git a/docs/api-reference/extensions/v1beta1/definitions.html b/docs/api-reference/extensions/v1beta1/definitions.html index 262b7aed95ca1..c8da39dde1bb6 100755 --- a/docs/api-reference/extensions/v1beta1/definitions.html +++ b/docs/api-reference/extensions/v1beta1/definitions.html @@ -3457,7 +3457,7 @@

v1.PodSpec
nodeSelector
-NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/user-guide/node-selection
+NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/concepts/configuration/assign-pod-node/
false
object

    diff --git a/docs/api-reference/v1.5/index.html b/docs/api-reference/v1.5/index.html index 71b333af828c7..fe6040766a59a 100644 --- a/docs/api-reference/v1.5/index.html +++ b/docs/api-reference/v1.5/index.html @@ -8010,7 +8010,7 @@

    PodSpec v1

    nodeSelector
object
-NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/user-guide/node-selection
+NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/concepts/configuration/assign-pod-node/
restartPolicy
    string @@ -18058,7 +18058,7 @@

    ServiceSpec v1

    clusterIP
    string -clusterIP is the IP address of the service and is usually assigned randomly by the master. If an address is specified manually and is not in use by others, it will be allocated to the service; otherwise, creation of the service will fail. This field can not be changed through updates. Valid values are "None", empty string (""), or a valid IP address. "None" can be specified for headless services when proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies +clusterIP is the IP address of the service and is usually assigned randomly by the master. If an address is specified manually and is not in use by others, it will be allocated to the service; otherwise, creation of the service will fail. This field can not be changed through updates. Valid values are "None", empty string (""), or a valid IP address. "None" can be specified for headless services when proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies deprecatedPublicIPs
    string array @@ -18078,23 +18078,23 @@

    ServiceSpec v1

    loadBalancerSourceRanges
    string array -If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: http://kubernetes.io/docs/user-guide/services-firewalls +If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: http://kubernetes.io/docs/concepts/services-networking/service/-firewalls ports
    ServicePort array -The list of ports that are exposed by this service. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies +The list of ports that are exposed by this service. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies selector
    object -Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/user-guide/services#overview +Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/concepts/services-networking/service/#overview sessionAffinity
    string -Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies +Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies type
    string -type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ExternalName" maps to the specified externalName. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a stable IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the clusterIP. More info: http://kubernetes.io/docs/user-guide/services#overview +type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ExternalName" maps to the specified externalName. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a stable IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the clusterIP. More info: http://kubernetes.io/docs/concepts/services-networking/service/#overview @@ -51143,7 +51143,7 @@

    ServicePort v1

    nodePort
    integer -The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info: http://kubernetes.io/docs/user-guide/services#type--nodeport +The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info: http://kubernetes.io/docs/concepts/services-networking/service/#type--nodeport port
    integer @@ -51155,7 +51155,7 @@

    ServicePort v1

    targetPort
    IntOrString -Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: http://kubernetes.io/docs/user-guide/services#defining-a-service +Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: http://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service diff --git a/docs/api-reference/v1.6/index.html b/docs/api-reference/v1.6/index.html index 64322a85620c0..9db59ca2ed4e0 100644 --- a/docs/api-reference/v1.6/index.html +++ b/docs/api-reference/v1.6/index.html @@ -17950,7 +17950,7 @@

    ServiceSpec v1 core

    clusterIP
    string -clusterIP is the IP address of the service and is usually assigned randomly by the master. If an address is specified manually and is not in use by others, it will be allocated to the service; otherwise, creation of the service will fail. This field can not be changed through updates. Valid values are "None", empty string (""), or a valid IP address. "None" can be specified for headless services when proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies +clusterIP is the IP address of the service and is usually assigned randomly by the master. If an address is specified manually and is not in use by others, it will be allocated to the service; otherwise, creation of the service will fail. This field can not be changed through updates. Valid values are "None", empty string (""), or a valid IP address. "None" can be specified for headless services when proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies deprecatedPublicIPs
    string array @@ -17970,23 +17970,23 @@

    ServiceSpec v1 core

    loadBalancerSourceRanges
    string array -If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: http://kubernetes.io/docs/user-guide/services-firewalls +If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: http://kubernetes.io/docs/concepts/services-networking/service/-firewalls ports
    ServicePort array -The list of ports that are exposed by this service. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies +The list of ports that are exposed by this service. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies selector
    object -Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/user-guide/services#overview +Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/concepts/services-networking/service/#overview sessionAffinity
    string -Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies +Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies type
    string -type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ExternalName" maps to the specified externalName. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a stable IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the clusterIP. More info: http://kubernetes.io/docs/user-guide/services#overview +type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ExternalName" maps to the specified externalName. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a stable IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the clusterIP. More info: http://kubernetes.io/docs/concepts/services-networking/service/#overview @@ -54388,7 +54388,7 @@

    ServicePort v1 core

    nodePort
    integer -The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info: http://kubernetes.io/docs/user-guide/services#type--nodeport +The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info: http://kubernetes.io/docs/concepts/services-networking/service/#type--nodeport port
    integer @@ -54400,7 +54400,7 @@

    ServicePort v1 core

    targetPort -Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: http://kubernetes.io/docs/user-guide/services#defining-a-service +Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: http://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service diff --git a/docs/api-reference/v1.7/index.html b/docs/api-reference/v1.7/index.html index 575999d8cdf2d..80c6572f06cab 100644 --- a/docs/api-reference/v1.7/index.html +++ b/docs/api-reference/v1.7/index.html @@ -191,7 +191,7 @@

    Container v1 core

    securityContext
SecurityContext
-Security options the pod should run with. More info: https://kubernetes.io/docs/concepts/policy/security-context/ More info: https://git.k8s.io/community/contributors/design-proposals/auth/security_context.md
+Security options the pod should run with. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ More info: https://git.k8s.io/community/contributors/design-proposals/auth/security_context.md
stdin
    boolean diff --git a/docs/concepts/architecture/nodes.md b/docs/concepts/architecture/nodes.md index aa75f6e08e5eb..b178a7c97ed2c 100644 --- a/docs/concepts/architecture/nodes.md +++ b/docs/concepts/architecture/nodes.md @@ -81,7 +81,7 @@ The information is gathered by Kubelet from the node. ## Management -Unlike [pods](/docs/user-guide/pods) and [services](/docs/user-guide/services), +Unlike [pods](/docs/user-guide/pods) and [services](/docs/concepts/services-networking/service/), a node is not inherently created by Kubernetes: it is created externally by cloud providers like Google Compute Engine, or exists in your pool of physical or virtual machines. What this means is that when Kubernetes creates a node, it is really diff --git a/docs/concepts/cluster-administration/cluster-administration-overview.md b/docs/concepts/cluster-administration/cluster-administration-overview.md index 97c07725e361f..ec41426044604 100644 --- a/docs/concepts/cluster-administration/cluster-administration-overview.md +++ b/docs/concepts/cluster-administration/cluster-administration-overview.md @@ -21,7 +21,7 @@ Before choosing a guide, here are some considerations: - **If you are designing for high-availability**, learn about configuring [clusters in multiple zones](/docs/admin/multi-cluster/). - Will you be using **a hosted Kubernetes cluster**, such as [Google Container Engine (GKE)](https://cloud.google.com/container-engine/), or **hosting your own cluster**? - Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly support hybrid clusters. Instead, you can set up multiple clusters. - - **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/admin/networking/) fits best. One option for custom networking is [*OpenVSwitch GRE/VxLAN networking*](/docs/admin/ovs-networking/), which uses OpenVSwitch to set up networking between pods across Kubernetes nodes. + - **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/concepts/cluster-administration/networking/) fits best. One option for custom networking is [*OpenVSwitch GRE/VxLAN networking*](/docs/admin/ovs-networking/), which uses OpenVSwitch to set up networking between pods across Kubernetes nodes. - Will you be running Kubernetes on **"bare metal" hardware** or on **virtual machines (VMs)**? - Do you **just want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? If the latter, choose a actively-developed distro. Some distros only use binary releases, but diff --git a/docs/concepts/cluster-administration/kubelet-garbage-collection.md b/docs/concepts/cluster-administration/kubelet-garbage-collection.md index 0a1036cd69ca1..068ee6bd2ab0c 100644 --- a/docs/concepts/cluster-administration/kubelet-garbage-collection.md +++ b/docs/concepts/cluster-administration/kubelet-garbage-collection.md @@ -72,4 +72,4 @@ Including: | `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` | eviction generalizes disk thresholds to other resources | | `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | eviction generalizes disk pressure transition to other resources | -See [Configuring Out Of Resource Handling](/docs/concepts/cluster-administration/out-of-resource/) for more details. +See [Configuring Out Of Resource Handling](/docs/tasks/administer-cluster/out-of-resource/) for more details. 
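As a hedged illustration of the flag migration summarized in the table just above (the threshold values below are invented for the example and are not taken from this patch):

```shell
# Deprecated disk-based garbage collection flags (illustrative values only):
kubelet --low-diskspace-threshold-mb=1024 --outofdisk-transition-frequency=1m

# Roughly equivalent eviction-based flags:
kubelet --eviction-hard=nodefs.available<1Gi \
  --eviction-pressure-transition-period=1m
```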
diff --git a/docs/concepts/cluster-administration/manage-deployment.md b/docs/concepts/cluster-administration/manage-deployment.md index 4a946071255b8..c89990c04421b 100644 --- a/docs/concepts/cluster-administration/manage-deployment.md +++ b/docs/concepts/cluster-administration/manage-deployment.md @@ -256,7 +256,7 @@ my-nginx-2035384211-u3t6x 1/1 Running 0 23m fe This outputs all "app=nginx" pods, with an additional label column of pods' tier (specified with `-L` or `--label-columns`). -For more information, please see [labels](/docs/user-guide/labels/) and [kubectl label](/docs/user-guide/kubectl/{{page.version}}/#label) document. +For more information, please see the [labels](/docs/concepts/overview/working-with-objects/labels/) and [kubectl label](/docs/user-guide/kubectl/{{page.version}}/#label) documents. ## Updating annotations diff --git a/docs/concepts/configuration/assign-pod-node.md b/docs/concepts/configuration/assign-pod-node.md index c3d71ce0c5df5..d626ba98c63c8 100644 --- a/docs/concepts/configuration/assign-pod-node.md +++ b/docs/concepts/configuration/assign-pod-node.md @@ -16,7 +16,7 @@ that a pod ends up on a machine with an SSD attached to it, or to co-locate pods services that communicate a lot into the same availability zone. You can find all the files for these examples [in our docs -repo here](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/user-guide/node-selection). +repo here](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/concepts/configuration/assign-pod-node/). * TOC {:toc} diff --git a/docs/concepts/configuration/manage-compute-resources-container.md b/docs/concepts/configuration/manage-compute-resources-container.md index fa8d93cc5bd92..d6749314078e9 100644 --- a/docs/concepts/configuration/manage-compute-resources-container.md +++ b/docs/concepts/configuration/manage-compute-resources-container.md @@ -27,7 +27,7 @@ CPU and memory are collectively referred to as *compute resources*, or just resources are measurable quantities that can be requested, allocated, and consumed. They are distinct from [API resources](/docs/concepts/overview/kubernetes-api/). API resources, such as Pods and -[Services](/docs/user-guide/services) are objects that can be read and modified +[Services](/docs/concepts/services-networking/service/) are objects that can be read and modified through the Kubernetes API server. ## Resource requests and limits of Pod and Container diff --git a/docs/concepts/configuration/overview.md b/docs/concepts/configuration/overview.md index c354a2a6df100..61149cd3cad93 100644 --- a/docs/concepts/configuration/overview.md +++ b/docs/concepts/configuration/overview.md @@ -58,7 +58,7 @@ This is a living document. If you think of something that is not on this list bu ## Using Labels -- Define and use [labels](/docs/user-guide/labels/) that identify __semantic attributes__ of your application or deployment. For example, instead of attaching a label to a set of pods to explicitly represent some service (For example, `service: myservice`), or explicitly representing the replication controller managing the pods (for example, `controller: mycontroller`), attach labels that identify semantic attributes, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. This will let you select the object groups appropriate to the context— for example, a service for all "tier: frontend" pods, or all "test" phase components of app "myapp".
See the [guestbook](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/) app for an example of this approach. +- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or deployment. For example, instead of attaching a label to a set of pods to explicitly represent some service (For example, `service: myservice`), or explicitly representing the replication controller managing the pods (for example, `controller: mycontroller`), attach labels that identify semantic attributes, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. This will let you select the object groups appropriate to the context— for example, a service for all "tier: frontend" pods, or all "test" phase components of app "myapp". See the [guestbook](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/) app for an example of this approach. A service can be made to span multiple deployments, such as is done across [rolling updates](/docs/tasks/run-application/rolling-update-replication-controller/), by simply omitting release-specific labels from its selector, rather than updating a service's selector to match the replication controller's selector fully. @@ -84,7 +84,7 @@ This is a living document. If you think of something that is not on this list bu - Use `kubectl delete` rather than `stop`. `Delete` has a superset of the functionality of `stop`, and `stop` is deprecated. -- Use kubectl bulk operations (via files and/or labels) for get and delete. See [label selectors](/docs/user-guide/labels/#label-selectors) and [using labels effectively](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively). +- Use kubectl bulk operations (via files and/or labels) for get and delete. See [label selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) and [using labels effectively](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively). - Use `kubectl run` and `expose` to quickly create and expose single container Deployments. See the [quick start guide](/docs/user-guide/quick-start/) for an example. diff --git a/docs/concepts/containers/container-environment-variables.md b/docs/concepts/containers/container-environment-variables.md index d5d0975cb7669..513b09cb46f22 100644 --- a/docs/concepts/containers/container-environment-variables.md +++ b/docs/concepts/containers/container-environment-variables.md @@ -31,7 +31,7 @@ It is available through the `hostname` command or the function call in libc. The Pod name and namespace are available as environment variables through the -[downward API](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/). +[downward API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/). User defined environment variables from the Pod definition are also available to the Container, as are any environment variables specified statically in the Docker image. diff --git a/docs/concepts/overview/what-is-kubernetes.md b/docs/concepts/overview/what-is-kubernetes.md index 0596b10b266a4..4475b323ada93 100644 --- a/docs/concepts/overview/what-is-kubernetes.md +++ b/docs/concepts/overview/what-is-kubernetes.md @@ -121,7 +121,7 @@ The name **Kubernetes** originates from Greek, meaning *helmsman* or *pilot*, an {% endcapture %} {% capture whatsnext %} -* Ready to [Get Started](/docs/getting-started-guides/)? +* Ready to [Get Started](/docs/home/)? 
* For more details, see the [Kubernetes Documentation](/docs/home/). {% endcapture %} {% include templates/concept.md %} diff --git a/docs/concepts/overview/working-with-objects/annotations.md b/docs/concepts/overview/working-with-objects/annotations.md index 2bb89e17e5a50..e0b844325328c 100644 --- a/docs/concepts/overview/working-with-objects/annotations.md +++ b/docs/concepts/overview/working-with-objects/annotations.md @@ -55,7 +55,7 @@ and the like. {% endcapture %} {% capture whatsnext %} -Learn more about [Labels and Selectors](/docs/user-guide/labels/). +Learn more about [Labels and Selectors](/docs/concepts/overview/working-with-objects/labels/). {% endcapture %} {% include templates/concept.md %} diff --git a/docs/concepts/overview/working-with-objects/labels.md b/docs/concepts/overview/working-with-objects/labels.md index a64512cd2d340..2407b910879af 100644 --- a/docs/concepts/overview/working-with-objects/labels.md +++ b/docs/concepts/overview/working-with-objects/labels.md @@ -130,7 +130,7 @@ $ kubectl get pods -l 'environment,environment notin (frontend)' ### Set references in API objects -Some Kubernetes objects, such as [`services`](/docs/user-guide/services) and [`replicationcontrollers`](/docs/user-guide/replication-controller), also use label selectors to specify sets of other resources, such as [pods](/docs/user-guide/pods). +Some Kubernetes objects, such as [`services`](/docs/concepts/services-networking/service/) and [`replicationcontrollers`](/docs/user-guide/replication-controller), also use label selectors to specify sets of other resources, such as [pods](/docs/user-guide/pods). #### Service and ReplicationController @@ -170,4 +170,4 @@ selector: #### Selecting sets of nodes One use case for selecting over labels is to constrain the set of nodes onto which a pod can schedule. -See the documentation on [node selection](/docs/user-guide/node-selection) for more information. +See the documentation on [node selection](/docs/concepts/configuration/assign-pod-node/) for more information. diff --git a/docs/concepts/overview/working-with-objects/namespaces.md b/docs/concepts/overview/working-with-objects/namespaces.md index aa06eff515e7b..5757399b8f156 100644 --- a/docs/concepts/overview/working-with-objects/namespaces.md +++ b/docs/concepts/overview/working-with-objects/namespaces.md @@ -72,7 +72,7 @@ $ kubectl config view | grep namespace: ## Namespaces and DNS -When you create a [Service](/docs/user-guide/services), it creates a corresponding [DNS entry](/docs/admin/dns). +When you create a [Service](/docs/concepts/services-networking/service/), it creates a corresponding [DNS entry](/docs/admin/dns). This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means that if a container just uses `<service-name>`, it will resolve to the service which is local to a namespace. This is useful for using the same configuration across diff --git a/docs/concepts/services-networking/network-policies.md b/docs/concepts/services-networking/network-policies.md index 22518d48765e7..0063c9270e051 100644 --- a/docs/concepts/services-networking/network-policies.md +++ b/docs/concepts/services-networking/network-policies.md @@ -77,7 +77,7 @@ So, the example NetworkPolicy: 2. allows connections to TCP port 6379 of "role=db" pods in the "default" namespace from any pod in the "default" namespace with the label "role=frontend" 3.
allows connections to TCP port 6379 of "role=db" pods in the "default" namespace from any pod in a namespace with the label "project=myproject" -See the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) for further examples. +See the [NetworkPolicy getting started guide](/docs/home/network-policy/walkthrough) for further examples. ## Default policies diff --git a/docs/concepts/storage/persistent-volumes.md b/docs/concepts/storage/persistent-volumes.md index 19a17e0491462..bc53b91e4d7c0 100644 --- a/docs/concepts/storage/persistent-volumes.md +++ b/docs/concepts/storage/persistent-volumes.md @@ -315,7 +315,7 @@ Claims, like pods, can request specific quantities of a resource. In this case, ### Selector -Claims can specify a [label selector](/docs/user-guide/labels/#label-selectors) to further filter the set of volumes. Only the volumes whose labels match the selector can be bound to the claim. The selector can consist of two fields: +Claims can specify a [label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors) to further filter the set of volumes. Only the volumes whose labels match the selector can be bound to the claim. The selector can consist of two fields: * matchLabels - the volume must have a label with this value * matchExpressions - a list of requirements made by specifying key, list of values, and operator that relates the key and values. Valid operators include In, NotIn, Exists, and DoesNotExist. diff --git a/docs/concepts/storage/volumes.md b/docs/concepts/storage/volumes.md index f6fce622b6ddf..02c3b0063ab04 100644 --- a/docs/concepts/storage/volumes.md +++ b/docs/concepts/storage/volumes.md @@ -454,7 +454,7 @@ details. A `downwardAPI` volume is used to make downward API data available to applications. It mounts a directory and writes the requested data in plain text files. -See the [`downwardAPI` volume example](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/) for more details. +See the [`downwardAPI` volume example](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) for more details. ### projected @@ -572,7 +572,7 @@ More details can be found [here](https://github.com/kubernetes/examples/tree/{{p ### vsphereVolume -**Prerequisite:** Kubernetes with vSphere Cloud Provider configured. For cloudprovider configuration please refer [vSphere getting started guide](/docs/getting-started-guides/vsphere/). +**Prerequisite:** Kubernetes with vSphere Cloud Provider configured. For cloudprovider configuration, please refer to the [vSphere getting started guide](/docs/home/vsphere/). {: .note} A `vsphereVolume` is used to mount a vSphere VMDK Volume into your Pod. The contents diff --git a/docs/concepts/workloads/controllers/petset.md b/docs/concepts/workloads/controllers/petset.md index e8e7e32fc15bb..f9089024a6376 100644 --- a/docs/concepts/workloads/controllers/petset.md +++ b/docs/concepts/workloads/controllers/petset.md @@ -227,7 +227,7 @@ web-1 A pet can piece together its own identity: -1. Use the [downward api](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/) to find its pod name +1. Use the [downward api](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) to find its pod name 2. Run `hostname` to find its DNS name 3.
Run `mount` or `df` to find its volumes (usually this is unnecessary) @@ -434,7 +434,7 @@ Deploying one RC of size 1/Service per pod is a popular alternative, as is simpl ## Next steps -* Learn about [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/), +* Learn about [StatefulSet](/docs/concepts/workloads/controllers/statefulset/), the replacement for PetSet introduced in Kubernetes version 1.5. * [Migrate your existing PetSets to StatefulSets](/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set/) when upgrading to Kubernetes version 1.5 or higher. diff --git a/docs/concepts/workloads/controllers/replicaset.md b/docs/concepts/workloads/controllers/replicaset.md index a9247f15aaba3..dfe140601f29e 100644 --- a/docs/concepts/workloads/controllers/replicaset.md +++ b/docs/concepts/workloads/controllers/replicaset.md @@ -12,7 +12,7 @@ ReplicaSet is the next-generation Replication Controller. The only difference between a _ReplicaSet_ and a [_Replication Controller_](/docs/concepts/workloads/controllers/replicationcontroller/) right now is the selector support. ReplicaSet supports the new set-based selector requirements -as described in the [labels user guide](/docs/user-guide/labels/#label-selectors) +as described in the [labels user guide](/docs/concepts/overview/working-with-objects/labels/#label-selectors) whereas a Replication Controller only supports equality-based selector requirements. {% endcapture %} diff --git a/docs/concepts/workloads/controllers/replicationcontroller.md b/docs/concepts/workloads/controllers/replicationcontroller.md index 42f929317f34a..12a37bc4456a7 100644 --- a/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/docs/concepts/workloads/controllers/replicationcontroller.md @@ -129,7 +129,7 @@ different, and the `.metadata.labels` do not affect the behavior of the Replicat ### Pod Selector -The `.spec.selector` field is a [label selector](/docs/user-guide/labels/#label-selectors). A ReplicationController +The `.spec.selector` field is a [label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors). A ReplicationController manages all the pods with labels that match the selector. It does not distinguish between pods that it created or deleted and pods that another person or process created or deleted. This allows the ReplicationController to be replaced without affecting the running pods. @@ -243,7 +243,7 @@ object](/docs/api-reference/{{page.version}}/#replicationcontroller-v1-core). ### ReplicaSet -[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/user-guide/labels/#set-based-requirement). +[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/concepts/overview/working-with-objects/labels/#set-based-requirement). It’s mainly used by [`Deployment`](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod creation, deletion and updates. Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don’t require updates at all. 
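The namespaces.md hunk above gives the DNS form that Services receive. A hedged illustration, using invented service ("redis") and namespace ("prod") names:

```shell
# Hedged example of the DNS form described above; "redis" and "prod" are
# invented. From a pod in namespace "prod", both names resolve to the same
# Service.
nslookup redis                          # short name, local to the namespace
nslookup redis.prod.svc.cluster.local   # fully qualified form
```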
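The network-policies.md hunk above describes its example policy only in prose. A hedged reconstruction of a manifest matching that description; the object name is invented, and the apiVersion depends on the cluster version:

```yaml
apiVersion: networking.k8s.io/v1   # older clusters used extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: test-network-policy        # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db                     # the "role=db" pods the prose protects
  ingress:
  - from:
    - podSelector:                 # any "role=frontend" pod in "default"
        matchLabels:
          role: frontend
    - namespaceSelector:           # any pod in a "project=myproject" namespace
        matchLabels:
          project: myproject
    ports:
    - protocol: TCP
      port: 6379
```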
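The replicaset.md and replicationcontroller.md hunks above contrast equality-based and set-based selectors. A minimal sketch of a set-based selector; labels, image, and apiVersion are illustrative:

```yaml
apiVersion: apps/v1                # extensions/v1beta1 on 2017-era clusters
kind: ReplicaSet
metadata:
  name: frontend                   # illustrative
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend               # equality-based requirement
    matchExpressions:              # set-based requirement; an RC cannot express this
    - {key: environment, operator: In, values: [production, qa]}
  template:
    metadata:
      labels:
        tier: frontend
        environment: production    # satisfies both requirements above
    spec:
      containers:
      - name: nginx
        image: nginx
```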
diff --git a/docs/concepts/workloads/pods/disruptions.md b/docs/concepts/workloads/pods/disruptions.md index 89c324b5bc8fe..80544d73fc779 100644 --- a/docs/concepts/workloads/pods/disruptions.md +++ b/docs/concepts/workloads/pods/disruptions.md @@ -72,7 +72,7 @@ Here are some ways to mitigate involuntary disruptions: and [stateful](/docs/tasks/run-application/run-replicated-stateful-application/) applications.) - For even higher availability when running replicated applications, spread applications across racks (using -[anti-affinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature)) +[anti-affinity](/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature)) or across zones (if using a [multi-zone cluster](/docs/admin/multiple-zones).) diff --git a/docs/concepts/workloads/pods/init-containers.md b/docs/concepts/workloads/pods/init-containers.md index e902b296c61b6..cb8bf04f162ab 100644 --- a/docs/concepts/workloads/pods/init-containers.md +++ b/docs/concepts/workloads/pods/init-containers.md @@ -87,7 +87,7 @@ Here are some ideas for how to use Init Containers: place the POD_IP value in a configuration and generate the main app configuration file using Jinja. -More detailed usage examples can be found in the [StatefulSets documentation](/docs/concepts/abstractions/controllers/statefulsets/) +More detailed usage examples can be found in the [StatefulSets documentation](/docs/concepts/workloads/controllers/statefulset/) and the [Production Pods guide](/docs/tasks/#handling-initialization). ### Init Containers in use diff --git a/docs/concepts/workloads/pods/pod-overview.md b/docs/concepts/workloads/pods/pod-overview.md index 3cfa73ef176c5..760678fc70984 100644 --- a/docs/concepts/workloads/pods/pod-overview.md +++ b/docs/concepts/workloads/pods/pod-overview.md @@ -64,7 +64,7 @@ A Controller can create and manage multiple Pods for you, handling replication a Some examples of Controllers that contain one or more pods include: * [Deployment](/docs/concepts/workloads/controllers/deployment/) -* [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/) +* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) * [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) In general, Controllers use a Pod Template that you provide to create the Pods for which it is responsible.
For prior versions of Kubernetes, best practice for having stateful pods is to create a replication controller with `replicas` equal to `1` and a corresponding service, see [this MySQL deployment example](/docs/tutorials/stateful-application/run-stateful-application/). +There is new first-class support for stateful pods with the [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) controller (currently in beta). The feature was alpha in 1.4 and was called [PetSet](/docs/concepts/workloads/controllers/petset/). For prior versions of Kubernetes, best practice for having stateful pods is to create a replication controller with `replicas` equal to `1` and a corresponding service, see [this MySQL deployment example](/docs/tutorials/stateful-application/run-stateful-application/). ## Termination of Pods diff --git a/docs/getting-started-guides/aws.md b/docs/getting-started-guides/aws.md index f723837295e17..787df11cefe35 100644 --- a/docs/getting-started-guides/aws.md +++ b/docs/getting-started-guides/aws.md @@ -165,9 +165,9 @@ cluster/kube-down.sh IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------------- | ---------- | --------------------------------------------- | ---------| ---------------------------- AWS | kops | Debian | k8s (VPC) | [docs](https://github.com/kubernetes/kops) | | Community ([@justinsb](https://github.com/justinsb)) -AWS | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/aws) | | Community +AWS | CoreOS | CoreOS | flannel | [docs](/docs/home/aws) | | Community -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. ## Further reading diff --git a/docs/getting-started-guides/binary_release.md b/docs/getting-started-guides/binary_release.md index ca1baf2c6d34b..4b08ff5c8ed40 100644 --- a/docs/getting-started-guides/binary_release.md +++ b/docs/getting-started-guides/binary_release.md @@ -57,4 +57,4 @@ Possible values for `YOUR_PROVIDER` include: * `vsphere` - VMWare VSphere * `rackspace` - Rackspace -For the complete, up-to-date list of providers supported by this script, see the [`/cluster`](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/cluster) folder in the main Kubernetes repo, where each folder represents a possible value for `YOUR_PROVIDER`. If you don't see your desired provider, try looking at our [getting started guides](/docs/getting-started-guides); there's a good chance we have docs for them. +For the complete, up-to-date list of providers supported by this script, see the [`/cluster`](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/cluster) folder in the main Kubernetes repo, where each folder represents a possible value for `YOUR_PROVIDER`. If you don't see your desired provider, try looking at our [getting started guides](/docs/home); there's a good chance we have docs for them. diff --git a/docs/getting-started-guides/centos/centos_manual_config.md b/docs/getting-started-guides/centos/centos_manual_config.md index bac68f39f25bb..d7d9e28941bf0 100644 --- a/docs/getting-started-guides/centos/centos_manual_config.md +++ b/docs/getting-started-guides/centos/centos_manual_config.md @@ -9,7 +9,7 @@ title: CentOS ## Warning -This guide [has been deprecated](https://github.com/kubernetes/kubernetes.github.io/issues/1613). 
It was originally written for Kubernetes 1.1.0. Please check [the latest guide](/docs/getting-started-guides/kubeadm/). +This guide [has been deprecated](https://github.com/kubernetes/kubernetes.github.io/issues/1613). It was originally written for Kubernetes 1.1.0. Please check [the latest guide](/docs/home/kubeadm/). ## Prerequisites @@ -233,6 +233,6 @@ centos-minion-3 Ready 3d v1.6.0+fff5156 IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Bare-metal | custom | CentOS | flannel | [docs](/docs/getting-started-guides/centos/centos_manual_config) | | Community ([@coolsvap](https://github.com/coolsvap)) +Bare-metal | custom | CentOS | flannel | [docs](/docs/home/centos/centos_manual_config) | | Community ([@coolsvap](https://github.com/coolsvap)) -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/cloudstack.md b/docs/getting-started-guides/cloudstack.md index c0d0263e60778..bce3f3b69d2a9 100644 --- a/docs/getting-started-guides/cloudstack.md +++ b/docs/getting-started-guides/cloudstack.md @@ -92,6 +92,6 @@ SSH to it using the key that was created and using the _core_ user and you can l IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -CloudStack | Ansible | CoreOS | flannel | [docs](/docs/getting-started-guides/cloudstack) | | Community ([@Guiques](https://github.com/ltupin/)) +CloudStack | Ansible | CoreOS | flannel | [docs](/docs/home/cloudstack) | | Community ([@Guiques](https://github.com/ltupin/)) -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/coreos/bare_metal_offline.md b/docs/getting-started-guides/coreos/bare_metal_offline.md index 35824a4f03fd9..7124c962cc9ea 100644 --- a/docs/getting-started-guides/coreos/bare_metal_offline.md +++ b/docs/getting-started-guides/coreos/bare_metal_offline.md @@ -213,7 +213,7 @@ Now for the good stuff! The following config files are tailored for the OFFLINE version of a Kubernetes deployment. -These are based on the work found here: [master.yml](/docs/getting-started-guides/coreos/cloud-configs/master.yaml), [node.yml](/docs/getting-started-guides/coreos/cloud-configs/node.yaml) +These are based on the work found here: [master.yml](/docs/home/coreos/cloud-configs/master.yaml), [node.yml](/docs/home/coreos/cloud-configs/node.yaml) To make the setup work, you need to replace a few placeholders: @@ -683,6 +683,6 @@ for i in `kubectl get pods | awk '{print $1}'`; do kubectl delete pod $i; done IaaS Provider | Config. 
Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/bare_metal_offline/) | | Community ([@jeffbean](https://github.com/jeffbean)) +Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](/docs/home/coreos/bare_metal_offline/) | | Community ([@jeffbean](https://github.com/jeffbean)) -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions/) chart. +For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions/) chart. diff --git a/docs/getting-started-guides/coreos/index.md b/docs/getting-started-guides/coreos/index.md index c7a5ce0c44bd9..8065a9fe01c19 100644 --- a/docs/getting-started-guides/coreos/index.md +++ b/docs/getting-started-guides/coreos/index.md @@ -86,7 +86,7 @@ Configure a standalone Kubernetes or a Kubernetes cluster with [Foreman](https:/ IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires)) -Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles)) +GCE | CoreOS | CoreOS | flannel | [docs](/docs/home/coreos) | | Community ([@pires](https://github.com/pires)) +Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/home/coreos) | | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles)) -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/dcos.md b/docs/getting-started-guides/dcos.md index 816bd288a5276..7425ae6856472 100644 --- a/docs/getting-started-guides/dcos.md +++ b/docs/getting-started-guides/dcos.md @@ -138,6 +138,6 @@ $ dcos package uninstall kubernetes IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -DCOS | Marathon | CoreOS/Alpine | custom | [docs](/docs/getting-started-guides/dcos/) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) +DCOS | Marathon | CoreOS/Alpine | custom | [docs](/docs/home/dcos/) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. 
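Farther up, the pod-overview.md hunk notes that Controllers such as Deployment create Pods from a Pod Template you provide. A hedged sketch of that relationship, with invented names and image tag, and an apiVersion that depends on the cluster version:

```yaml
apiVersion: apps/v1                # apps/v1beta1 or extensions/v1beta1 on older clusters
kind: Deployment
metadata:
  name: nginx-deployment           # illustrative
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:                        # the Pod Template the controller replicates
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13          # illustrative image tag
```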
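The binary_release.md hunk earlier lists the possible values of `YOUR_PROVIDER`, and the aws.md hunk shows `cluster/kube-down.sh`. A hedged sketch of how those pieces fit together; `vsphere` is just one of the listed example values:

```shell
# Hedged sketch: select a provider from the list above, then bring the
# cluster up or down with the scripts shipped alongside the release.
export KUBERNETES_PROVIDER=vsphere
./cluster/kube-up.sh     # create the cluster
./cluster/kube-down.sh   # tear it down again
```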
diff --git a/docs/getting-started-guides/fedora/fedora_ansible_config.md b/docs/getting-started-guides/fedora/fedora_ansible_config.md index f87a6978937aa..8c3cb2675fc68 100644 --- a/docs/getting-started-guides/fedora/fedora_ansible_config.md +++ b/docs/getting-started-guides/fedora/fedora_ansible_config.md @@ -235,6 +235,6 @@ That's it! IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Bare-metal | Ansible | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/fedora_ansible_config) | | Project +Bare-metal | Ansible | Fedora | flannel | [docs](/docs/home/fedora/fedora_ansible_config) | | Project -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/fedora/fedora_manual_config.md b/docs/getting-started-guides/fedora/fedora_manual_config.md index 6f324096238c2..5516f35ca4fe0 100644 --- a/docs/getting-started-guides/fedora/fedora_manual_config.md +++ b/docs/getting-started-guides/fedora/fedora_manual_config.md @@ -193,7 +193,7 @@ kubectl delete -f ./node.json IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Bare-metal | custom | Fedora | _none_ | [docs](/docs/getting-started-guides/fedora/fedora_manual_config) | | Project +Bare-metal | custom | Fedora | _none_ | [docs](/docs/home/fedora/fedora_manual_config) | | Project -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md b/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md index 51d3aa0db38ab..9ad8acb6a315f 100644 --- a/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md +++ b/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md @@ -9,7 +9,7 @@ title: Fedora (Multi Node) * TOC {:toc} -This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](/docs/getting-started-guides/fedora/fedora_manual_config/) to setup 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on Kubernetes nodes. Flannel on each node configures an overlay network that docker uses. Flannel runs on each node to setup a unique class-C container network. +This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. 
Follow the Fedora [getting started guide](/docs/home/fedora/fedora_manual_config/) to set up 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on Kubernetes nodes. Flannel on each node configures an overlay network that docker uses. Flannel runs on each node to set up a unique class-C container network. ## Prerequisites @@ -188,11 +188,11 @@ Now Kubernetes multi-node cluster is set up with overlay networking set up by fl IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Bare-metal | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) -libvirt | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) -KVM | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) +Bare-metal | custom | Fedora | flannel | [docs](/docs/home/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) +libvirt | custom | Fedora | flannel | [docs](/docs/home/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) +KVM | custom | Fedora | flannel | [docs](/docs/home/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/gce.md b/docs/getting-started-guides/gce.md index 3598d7c46a864..915d63463f1ac 100644 --- a/docs/getting-started-guides/gce.md +++ b/docs/getting-started-guides/gce.md @@ -59,7 +59,7 @@ cluster/kube-up.sh If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster. -If you run into trouble, please see the section on [troubleshooting](/docs/getting-started-guides/gce/#troubleshooting), post to the +If you run into trouble, please see the section on [troubleshooting](/docs/home/gce/#troubleshooting), post to the [kubernetes-users group](https://groups.google.com/forum/#!forum/kubernetes-users), or come ask questions on [Slack](/docs/troubleshooting/#slack). The next few steps will show you: @@ -96,7 +96,7 @@ Once `kubectl` is in your path, you can use it to look at your cluster.
E.g., run: $ kubectl get --all-namespaces services ``` -should show a set of [services](/docs/user-guide/services) that look something like this: +should show a set of [services](/docs/concepts/services-networking/service/) that look something like this: ```shell NAMESPACE NAME CLUSTER_IP EXTERNAL_IP PORT(S) AGE @@ -202,9 +202,9 @@ field values: IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -GCE | Saltstack | Debian | GCE | [docs](/docs/getting-started-guides/gce) | | Project +GCE | Saltstack | Debian | GCE | [docs](/docs/home/gce/) | | Project -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. ## Further reading diff --git a/docs/getting-started-guides/libvirt-coreos.md b/docs/getting-started-guides/libvirt-coreos.md index 4c067f8d65b5e..3c65643089717 100644 --- a/docs/getting-started-guides/libvirt-coreos.md +++ b/docs/getting-started-guides/libvirt-coreos.md @@ -332,8 +332,8 @@ Ensure libvirtd has been restarted since ebtables was installed. IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](/docs/getting-started-guides/libvirt-coreos) | | Community ([@lhuard1A](https://github.com/lhuard1A)) +libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](/docs/home/libvirt-coreos/) | | Community ([@lhuard1A](https://github.com/lhuard1A)) -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/mesos-docker.md b/docs/getting-started-guides/mesos-docker.md index 05a26dac0500c..cfe889b674fa3 100644 --- a/docs/getting-started-guides/mesos-docker.md +++ b/docs/getting-started-guides/mesos-docker.md @@ -314,7 +314,7 @@ Breakdown: IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Mesos/Docker | custom | Ubuntu | Docker | [docs](/docs/getting-started-guides/mesos-docker) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) +Mesos/Docker | custom | Ubuntu | Docker | [docs](/docs/home/mesos-docker) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart.
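The Fedora flannel guide above says flannel carves a unique class-C network per node out of an overlay. A hedged sketch of publishing such an overlay definition to etcd; the key assumes flannel's default prefix, and the CIDR and backend type are invented:

```shell
# Hedged sketch: publish the overlay network definition that flannel reads
# from etcd. Key prefix, CIDR, and backend are illustrative; match them to
# your flannel daemon's configuration. SubnetLen 24 gives each node a /24,
# i.e. the class-C-sized subnet the guide describes.
etcdctl set /coreos.com/network/config \
  '{ "Network": "10.20.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }'
```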
diff --git a/docs/getting-started-guides/mesos/index.md b/docs/getting-started-guides/mesos/index.md index f40c41ad707e2..1102bd98f6639 100644 --- a/docs/getting-started-guides/mesos/index.md +++ b/docs/getting-started-guides/mesos/index.md @@ -309,10 +309,10 @@ Address 1: 10.10.10.1 IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Mesos/GCE | | | | [docs](/docs/getting-started-guides/mesos/) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) +Mesos/GCE | | | | [docs](/docs/home/mesos/) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions/) chart. +For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions/) chart. ## What next? diff --git a/docs/getting-started-guides/openstack-heat.md b/docs/getting-started-guides/openstack-heat.md index 70f20a89f9e71..5e0ec86e261e5 100644 --- a/docs/getting-started-guides/openstack-heat.md +++ b/docs/getting-started-guides/openstack-heat.md @@ -255,6 +255,6 @@ If you have changed the default `$STACK_NAME`, you must specify the name. Note t IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -OpenStack Heat | Saltstack | CentOS | Neutron + flannel hostgw | [docs](/docs/getting-started-guides/openstack-heat) | | Community ([@FujitsuEnablingSoftwareTechnologyGmbH](https://github.com/FujitsuEnablingSoftwareTechnologyGmbH)) +OpenStack Heat | Saltstack | CentOS | Neutron + flannel hostgw | [docs](/docs/home/openstack-heat) | | Community ([@FujitsuEnablingSoftwareTechnologyGmbH](https://github.com/FujitsuEnablingSoftwareTechnologyGmbH)) -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/ovirt.md b/docs/getting-started-guides/ovirt.md index 325a74882f364..04f6e6720dbe0 100644 --- a/docs/getting-started-guides/ovirt.md +++ b/docs/getting-started-guides/ovirt.md @@ -58,6 +58,6 @@ This short screencast demonstrates how the oVirt Cloud Provider can be used to d IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -oVirt | | | | [docs](/docs/getting-started-guides/ovirt) | | Community ([@simon3z](https://github.com/simon3z)) +oVirt | | | | [docs](/docs/home/ovirt) | | Community ([@simon3z](https://github.com/simon3z)) -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. 
diff --git a/docs/getting-started-guides/photon-controller.md b/docs/getting-started-guides/photon-controller.md index e0de503156419..4ef81c63ebca5 100644 --- a/docs/getting-started-guides/photon-controller.md +++ b/docs/getting-started-guides/photon-controller.md @@ -35,7 +35,7 @@ Mac, you can install this with [brew](http://brew.sh/): 5. You should have an ssh public key installed. This will be used to give you access to the VM's user account, `kube`. -6. Get or build a [binary release](/docs/getting-started-guides/binary_release/) +6. Get or build a [binary release](/docs/home/binary_release/) ### Download VM Image @@ -235,4 +235,4 @@ networks such as Weave or Calico. IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Vmware Photon | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/photon-controller) | | Community ([@alainroy](https://github.com/alainroy)) +Vmware Photon | Saltstack | Debian | OVS | [docs](/docs/home/photon-controller) | | Community ([@alainroy](https://github.com/alainroy)) diff --git a/docs/getting-started-guides/rkt/index.md b/docs/getting-started-guides/rkt/index.md index fe4cf7075db6a..a7654cd587975 100644 --- a/docs/getting-started-guides/rkt/index.md +++ b/docs/getting-started-guides/rkt/index.md @@ -19,7 +19,7 @@ This document describes how to run Kubernetes using [rkt](https://github.com/cor * The [rkt API service](https://coreos.com/rkt/docs/latest/subcommands/api-service.html) must be running on the node. -* You will need [kubelet](/docs/getting-started-guides/scratch/#kubelet) installed on the node, and it's recommended that you run [kube-proxy](/docs/getting-started-guides/scratch/#kube-proxy) on all nodes. This document describes how to set the parameters for kubelet so that it uses rkt as the runtime. +* You will need [kubelet](/docs/home/scratch/#kubelet) installed on the node, and it's recommended that you run [kube-proxy](/docs/home/scratch/#kube-proxy) on all nodes. This document describes how to set the parameters for kubelet so that it uses rkt as the runtime. ## Pod networking in rktnetes @@ -201,7 +201,7 @@ Use rkt's [*contained network*](#rkt-contained-network) with the KVM stage1, bec ## Known issues and differences between rkt and Docker -rkt and the default node container engine have very different designs, as do rkt's native ACI and the Docker container image format. Users may experience different behaviors when switching from one container engine to the other. More information can be found [in the Kubernetes rkt notes](/docs/getting-started-guides/rkt/notes/). +rkt and the default node container engine have very different designs, as do rkt's native ACI and the Docker container image format. Users may experience different behaviors when switching from one container engine to the other. More information can be found [in the Kubernetes rkt notes](/docs/home/rkt/notes/). ## Troubleshooting diff --git a/docs/getting-started-guides/scratch.md b/docs/getting-started-guides/scratch.md index 0e93a85301b35..fd7aaf784821a 100644 --- a/docs/getting-started-guides/scratch.md +++ b/docs/getting-started-guides/scratch.md @@ -8,7 +8,7 @@ title: Creating a Custom Cluster from Scratch This guide is for people who want to craft a custom Kubernetes cluster. 
If you can find an existing Getting Started Guide that meets your needs on [this -list](/docs/getting-started-guides/), then we recommend using it, as you will be able to benefit +list](/docs/home/), then we recommend using it, as you will be able to benefit from the experience of others. However, if you have specific IaaS, networking, configuration management, or operating system requirements not met by any of those guides, then this guide will provide an outline of the steps you need to @@ -58,7 +58,7 @@ on how flags are set on various components. ### Network #### Network Connectivity -Kubernetes has a distinctive [networking model](/docs/admin/networking/). +Kubernetes has a distinctive [networking model](/docs/concepts/cluster-administration/networking/). Kubernetes allocates an IP address to each pod. When creating a cluster, you need to allocate a block of IPs for Kubernetes to use as Pod IPs. The simplest @@ -91,7 +91,7 @@ to implement one of the above options: - You can also write your own. - **Compile support directly into Kubernetes** - This can be done by implementing the "Routes" interface of a Cloud Provider module. - - The Google Compute Engine ([GCE](/docs/getting-started-guides/gce)/) and [AWS](/docs/getting-started-guides/aws/) guides use this approach. + - The Google Compute Engine ([GCE](/docs/home/gce/)) and [AWS](/docs/home/aws/) guides use this approach. - **Configure the network external to Kubernetes** - This can be done by manually running commands, or through a set of externally maintained scripts. - You have to implement this yourself, but it can give you an extra degree of flexibility. @@ -430,7 +430,7 @@ Each node needs to be allocated its own CIDR range for pod networking. Call this `NODE_X_POD_CIDR`. A bridge called `cbr0` needs to be created on each node. The bridge is explained -further in the [networking documentation](/docs/admin/networking/). The bridge itself +further in the [networking documentation](/docs/concepts/cluster-administration/networking/). The bridge itself needs an address from `$NODE_X_POD_CIDR` - by convention the first IP. Call this `NODE_X_BRIDGE_ADDR`. For example, if `NODE_X_POD_CIDR` is `10.0.0.0/16`, then `NODE_X_BRIDGE_ADDR` is `10.0.0.1/16`. NOTE: this retains the `/16` suffix @@ -878,7 +878,7 @@ Cluster validation succeeded ### Inspect pods and services -Try to run through the "Inspect your cluster" section in one of the other Getting Started Guides, such as [GCE](/docs/getting-started-guides/gce/#inspect-your-cluster). +Try to run through the "Inspect your cluster" section in one of the other Getting Started Guides, such as [GCE](/docs/home/gce/#inspect-your-cluster). You should see some services. You should also see "mirror pods" for the apiserver, scheduler and controller-manager, plus any add-ons you started. ### Try Examples @@ -896,7 +896,7 @@ pinging or SSH-ing from one node to another. ### Getting Help -If you run into trouble, please see the section on [troubleshooting](/docs/getting-started-guides/gce#troubleshooting), post to the +If you run into trouble, please see the section on [troubleshooting](/docs/home/gce/#troubleshooting), post to the [kubernetes-users group](https://groups.google.com/forum/#!forum/kubernetes-users), or come ask questions on [Slack](/docs/troubleshooting#slack). ## Support Level @@ -904,7 +904,7 @@ IaaS Provider | Config.
Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -any | any | any | any | [docs](/docs/getting-started-guides/scratch/) | | Community ([@erictune](https://github.com/erictune)) +any | any | any | any | [docs](/docs/home/scratch/) | | Community ([@erictune](https://github.com/erictune)) -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions/) chart. +For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions/) chart. diff --git a/docs/getting-started-guides/stackpoint.md b/docs/getting-started-guides/stackpoint.md index 0459472bd3b88..739aca8ae87b0 100644 --- a/docs/getting-started-guides/stackpoint.md +++ b/docs/getting-started-guides/stackpoint.md @@ -38,7 +38,7 @@ Choose any extra options you may want to include with your cluster, then click * You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters). -For information on using and managing a Kubernetes cluster on AWS, [consult the Kubernetes documentation](/docs/getting-started-guides/aws/). +For information on using and managing a Kubernetes cluster on AWS, [consult the Kubernetes documentation](/docs/home/aws/). @@ -70,7 +70,7 @@ Choose any extra options you may want to include with your cluster, then click * You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters). -For information on using and managing a Kubernetes cluster on GCE, [consult the Kubernetes documentation](/docs/getting-started-guides/gce/). +For information on using and managing a Kubernetes cluster on GCE, [consult the Kubernetes documentation](/docs/home/gce/). @@ -168,7 +168,7 @@ Choose any extra options you may want to include with your cluster, then click * You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters). -For information on using and managing a Kubernetes cluster on Azure, [consult the Kubernetes documentation](/docs/getting-started-guides/azure/). +For information on using and managing a Kubernetes cluster on Azure, [consult the Kubernetes documentation](/docs/home/azure/).
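The scratch.md hunks above allocate a `NODE_X_POD_CIDR` per node and give the `cbr0` bridge the first IP of that range. A minimal sketch of those steps on a single node, reusing the example values from that text:

```shell
# Hedged sketch using the example allocation from the scratch guide above;
# each node must get its own, non-overlapping values.
NODE_X_POD_CIDR=10.0.0.0/16
NODE_X_BRIDGE_ADDR=10.0.0.1/16      # by convention, the first IP of the range

ip link add name cbr0 type bridge   # create the bridge
ip link set dev cbr0 up
ip addr add "${NODE_X_BRIDGE_ADDR}" dev cbr0
```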
diff --git a/docs/getting-started-guides/ubuntu/index.md b/docs/getting-started-guides/ubuntu/index.md index c851f32a6e71b..8e9575e6118ec 100644 --- a/docs/getting-started-guides/ubuntu/index.md +++ b/docs/getting-started-guides/ubuntu/index.md @@ -36,24 +36,24 @@ conjure-up kubernetes These are more in-depth guides for users choosing to run Kubernetes in production: - - [Installation](/docs/getting-started-guides/ubuntu/installation/) - - [Validation](/docs/getting-started-guides/ubuntu/validation/) - - [Backups](/docs/getting-started-guides/ubuntu/backups/) - - [Upgrades](/docs/getting-started-guides/ubuntu/upgrades/) - - [Scaling](/docs/getting-started-guides/ubuntu/scaling/) - - [Logging](/docs/getting-started-guides/ubuntu/logging/) - - [Monitoring](/docs/getting-started-guides/ubuntu/monitoring/) - - [Networking](/docs/getting-started-guides/ubuntu/networking/) - - [Security](/docs/getting-started-guides/ubuntu/security/) - - [Storage](/docs/getting-started-guides/ubuntu/storage/) - - [Troubleshooting](/docs/getting-started-guides/ubuntu/troubleshooting/) - - [Decommissioning](/docs/getting-started-guides/ubuntu/decommissioning/) - - [Operational Considerations](/docs/getting-started-guides/ubuntu/operational-considerations/) - - [Glossary](/docs/getting-started-guides/ubuntu/glossary/) + - [Installation](/docs/home/ubuntu/installation/) + - [Validation](/docs/home/ubuntu/validation/) + - [Backups](/docs/home/ubuntu/backups/) + - [Upgrades](/docs/home/ubuntu/upgrades/) + - [Scaling](/docs/home/ubuntu/scaling/) + - [Logging](/docs/home/ubuntu/logging/) + - [Monitoring](/docs/home/ubuntu/monitoring/) + - [Networking](/docs/home/ubuntu/networking/) + - [Security](/docs/home/ubuntu/security/) + - [Storage](/docs/home/ubuntu/storage/) + - [Troubleshooting](/docs/home/ubuntu/troubleshooting/) + - [Decommissioning](/docs/home/ubuntu/decommissioning/) + - [Operational Considerations](/docs/home/ubuntu/operational-considerations/) + - [Glossary](/docs/home/ubuntu/glossary/) ## Developer Guides - - [Localhost using LXD](/docs/getting-started-guides/ubuntu/local/) + - [Localhost using LXD](/docs/home/ubuntu/local/) ## Where to find us diff --git a/docs/getting-started-guides/ubuntu/installation.md b/docs/getting-started-guides/ubuntu/installation.md index 53245567b0b63..5256f825fe756 100644 --- a/docs/getting-started-guides/ubuntu/installation.md +++ b/docs/getting-started-guides/ubuntu/installation.md @@ -251,16 +251,16 @@ Feature requests, bug reports, pull requests or any feedback would be much appre IaaS Provider | Config. 
Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Amazon Web Services (AWS) | Juju | Ubuntu | flannel, calico* | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -OpenStack | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -Microsoft Azure | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -Google Compute Engine (GCE) | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -Joyent | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -Rackspace | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -VMWare vSphere | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -Bare Metal (MAAS) | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +Amazon Web Services (AWS) | Juju | Ubuntu | flannel, calico* | [docs](/docs/home/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +OpenStack | Juju | Ubuntu | flannel, calico | [docs](/docs/home/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +Microsoft Azure | Juju | Ubuntu | flannel | [docs](/docs/home/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), 
[@chuckbutler](https://github.com/chuckbutler) ) +Google Compute Engine (GCE) | Juju | Ubuntu | flannel, calico | [docs](/docs/home/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +Joyent | Juju | Ubuntu | flannel | [docs](/docs/home/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +Rackspace | Juju | Ubuntu | flannel | [docs](/docs/home/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +VMWare vSphere | Juju | Ubuntu | flannel, calico | [docs](/docs/home/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +Bare Metal (MAAS) | Juju | Ubuntu | flannel, calico | [docs](/docs/home/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. {% include templates/task.md %} diff --git a/docs/getting-started-guides/ubuntu/operational-considerations.md b/docs/getting-started-guides/ubuntu/operational-considerations.md index 9e0a24a6660bb..d7bcb08300420 100644 --- a/docs/getting-started-guides/ubuntu/operational-considerations.md +++ b/docs/getting-started-guides/ubuntu/operational-considerations.md @@ -29,7 +29,7 @@ juju bootstrap --contraints "mem=8GB cpu-cores=4 root-disk=128G" Juju will select the cheapest instance type matching your constraints on your target cloud. You can also use the ```instance-type``` constraint in conjunction with ```root-disk``` for strict control. For more information about the constraints available, refer to the [official documentation](https://jujucharms.com/docs/stable/reference-constraints) -Additional information about logging can be found in the [logging section](/docs/getting-started-guides/ubuntu/logging) +Additional information about logging can be found in the [logging section](/docs/home/ubuntu/logging) ### SSHing into the Controller Node diff --git a/docs/getting-started-guides/ubuntu/upgrades.md b/docs/getting-started-guides/ubuntu/upgrades.md index d065993f28a80..e887786f8c923 100644 --- a/docs/getting-started-guides/ubuntu/upgrades.md +++ b/docs/getting-started-guides/ubuntu/upgrades.md @@ -11,7 +11,7 @@ This page assumes you have a working deployed cluster. ## Assumptions -You should always back up all your data before attempting an upgrade. Don't forget to include the workload inside your cluster! Refer to the [backup documentation](/docs/getting-started-guides/ubuntu/backups). +You should always back up all your data before attempting an upgrade. Don't forget to include the workload inside your cluster! 
Refer to the [backup documentation](/docs/home/ubuntu/backups). {% endcapture %} {% capture steps %} @@ -23,7 +23,7 @@ You can use `juju status` to see if an upgrade is available. There will either b # Upgrade etcd
-Backing up etcd requires an export and snapshot, refer to the [backup documentation](/docs/getting-started-guides/ubuntu/backups) to create a snapshot. After the snapshot upgrade the etcd service with:
+Backing up etcd requires an export and snapshot; refer to the [backup documentation](/docs/home/ubuntu/backups) to create a snapshot. After the snapshot, upgrade the etcd service with:
 juju upgrade-charm etcd @@ -96,7 +96,7 @@ Where `x` is the minor version of Kubernetes. For example, `1.6/stable`. See abo `kubectl version` should return the newer version.
-It is recommended to rerun a [cluster validation](/docs/getting-started-guides/ubuntu/validation) to ensure that the cluster upgrade has successfully completed.
+It is recommended to rerun a [cluster validation](/docs/home/ubuntu/validation) to ensure that the cluster upgrade has successfully completed.
 # Upgrade Flannel diff --git a/docs/getting-started-guides/vsphere.md index 22207e7d41580..c63ae81cf10b1 100644 --- a/docs/getting-started-guides/vsphere.md +++ b/docs/getting-started-guides/vsphere.md @@ -201,9 +201,9 @@ For quick support please join VMware Code Slack ([kubernetes](https://vmwarecode IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | --------- | ----------------------------
-Vmware vSphere | Kube-anywhere | Photon OS | Flannel | [docs](/docs/getting-started-guides/vsphere/) | | Community ([@abrarshivani](https://github.com/abrarshivani)), ([@kerneltime](https://github.com/kerneltime)), ([@BaluDontu](https://github.com/BaluDontu)), ([@luomiao](https://github.com/luomiao)), ([@divyenpatel](https://github.com/divyenpatel))
+Vmware vSphere | Kube-anywhere | Photon OS | Flannel | [docs](/docs/home/vsphere/) | | Community ([@abrarshivani](https://github.com/abrarshivani)), ([@kerneltime](https://github.com/kerneltime)), ([@BaluDontu](https://github.com/BaluDontu)), ([@luomiao](https://github.com/luomiao)), ([@divyenpatel](https://github.com/divyenpatel))
 If you identify any issues/problems using the vSphere cloud provider, you can create an issue in our repo - [VMware Kubernetes](https://github.com/vmware/kubernetes).
-For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.
+For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart.
 diff --git a/docs/home/index.md index d9cdc94f34eb1..42969d4165745 100644 --- a/docs/home/index.md +++ b/docs/home/index.md @@ -13,7 +13,7 @@ The [Kubernetes Basics interactive tutorial](/docs/tutorials/kubernetes-basics/) ## Installing/Setting Up Kubernetes
-[Picking the Right Solution](/docs/getting-started-guides/) can help you get a Kubernetes cluster up and running, either for local development, or on your cloud provider of choice.
+[Picking the Right Solution](/docs/home/) can help you get a Kubernetes cluster up and running, either for local development, or on your cloud provider of choice.
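[Editor's note] The upgrades.md hunk above steps through `juju status`, an etcd snapshot, `juju upgrade-charm etcd`, and a final `kubectl version` check. A minimal end-to-end sketch of that flow, assuming a Juju-deployed cluster; the `kubernetes-master`/`kubernetes-worker` charm names and the `1.6/stable` channel are illustrative assumptions, not taken from the hunk:

```shell
# Hedged sketch of the upgrade flow described in the upgrades.md hunk above.
# Charm names and the snap channel are illustrative assumptions.
juju status                                        # check whether newer charm revisions are available
juju upgrade-charm etcd                            # upgrade etcd only after taking a snapshot
juju upgrade-charm kubernetes-master               # assumed master charm name
juju upgrade-charm kubernetes-worker               # assumed worker charm name
juju config kubernetes-worker channel=1.6/stable   # assumed way to pin the Kubernetes snap channel
kubectl version                                    # should now report the newer server version
```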
## Concepts, Tasks, and Tutorials diff --git a/docs/reference/federation/extensions/v1beta1/definitions.html b/docs/reference/federation/extensions/v1beta1/definitions.html index 24da7f55d8d05..9cab569710637 100755 --- a/docs/reference/federation/extensions/v1beta1/definitions.html +++ b/docs/reference/federation/extensions/v1beta1/definitions.html @@ -5778,7 +5778,7 @@

    v1.Container

    securityContext

    -

    Security options the pod should run with. More info: https://kubernetes.io/docs/concepts/policy/security-context/ More info: https://git.k8s.io/community/contributors/design-proposals/auth/security_context.md

    +

    Security options the pod should run with. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ More info: https://git.k8s.io/community/contributors/design-proposals/auth/security_context.md

    false

    v1.SecurityContext
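[Editor's note] The rows above document the `securityContext` field on `v1.Container`, whose "More info" link this hunk repoints. As a hedged illustration of what that field looks like in practice (the pod name, image, and UID are invented for the example):

```shell
# Hedged example of the per-container v1.SecurityContext field documented above.
# The image and the numeric UID are illustrative assumptions.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "id && sleep 3600"]
    securityContext:
      runAsUser: 1000                # run the container process as UID 1000
      readOnlyRootFilesystem: true   # mount the container's root filesystem read-only
EOF
```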

    diff --git a/docs/resources-reference/v1.5/index.html b/docs/resources-reference/v1.5/index.html index aca4c2871c5c9..ca5ef9de356a8 100644 --- a/docs/resources-reference/v1.5/index.html +++ b/docs/resources-reference/v1.5/index.html @@ -1062,7 +1062,7 @@

    PodSpec v1

    nodeSelector
    object -NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/user-guide/node-selection +NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/concepts/configuration/assign-pod-node/ restartPolicy
    string @@ -1967,7 +1967,7 @@

    ServiceSpec v1

    clusterIP
    string -clusterIP is the IP address of the service and is usually assigned randomly by the master. If an address is specified manually and is not in use by others, it will be allocated to the service; otherwise, creation of the service will fail. This field can not be changed through updates. Valid values are "None", empty string (""), or a valid IP address. "None" can be specified for headless services when proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies +clusterIP is the IP address of the service and is usually assigned randomly by the master. If an address is specified manually and is not in use by others, it will be allocated to the service; otherwise, creation of the service will fail. This field can not be changed through updates. Valid values are "None", empty string (""), or a valid IP address. "None" can be specified for headless services when proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies deprecatedPublicIPs
    string array @@ -1987,23 +1987,23 @@

    ServiceSpec v1

    loadBalancerSourceRanges
    string array
-If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: http://kubernetes.io/docs/user-guide/services-firewalls
+If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: http://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/

ports
    ServicePort array -The list of ports that are exposed by this service. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies +The list of ports that are exposed by this service. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies selector
    object -Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/user-guide/services#overview +Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/concepts/services-networking/service/#overview sessionAffinity
    string -Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies +Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies type
    string -type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ExternalName" maps to the specified externalName. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a stable IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the clusterIP. More info: http://kubernetes.io/docs/user-guide/services#overview +type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ExternalName" maps to the specified externalName. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a stable IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the clusterIP. More info: http://kubernetes.io/docs/concepts/services-networking/service/#overview @@ -9654,7 +9654,7 @@

    ServicePort v1

    nodePort
    integer -The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info: http://kubernetes.io/docs/user-guide/services#type--nodeport +The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info: http://kubernetes.io/docs/concepts/services-networking/service/#type--nodeport port
    integer @@ -9666,7 +9666,7 @@

    ServicePort v1

    targetPort
    IntOrString -Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: http://kubernetes.io/docs/user-guide/services#defining-a-service +Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: http://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service diff --git a/docs/resources-reference/v1.6/index.html b/docs/resources-reference/v1.6/index.html index 4c69ee05eb547..e323e7b714d27 100644 --- a/docs/resources-reference/v1.6/index.html +++ b/docs/resources-reference/v1.6/index.html @@ -2056,7 +2056,7 @@

    ServiceSpec v1 core

    clusterIP
    string -clusterIP is the IP address of the service and is usually assigned randomly by the master. If an address is specified manually and is not in use by others, it will be allocated to the service; otherwise, creation of the service will fail. This field can not be changed through updates. Valid values are "None", empty string (""), or a valid IP address. "None" can be specified for headless services when proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies +clusterIP is the IP address of the service and is usually assigned randomly by the master. If an address is specified manually and is not in use by others, it will be allocated to the service; otherwise, creation of the service will fail. This field can not be changed through updates. Valid values are "None", empty string (""), or a valid IP address. "None" can be specified for headless services when proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies deprecatedPublicIPs
    string array @@ -2076,23 +2076,23 @@

    ServiceSpec v1 core

    loadBalancerSourceRanges
    string array
-If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: http://kubernetes.io/docs/user-guide/services-firewalls
+If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: http://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/

ports
    ServicePort array -The list of ports that are exposed by this service. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies +The list of ports that are exposed by this service. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies selector
    object -Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/user-guide/services#overview +Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/concepts/services-networking/service/#overview sessionAffinity
    string -Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies +Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies type
    string -type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ExternalName" maps to the specified externalName. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a stable IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the clusterIP. More info: http://kubernetes.io/docs/user-guide/services#overview +type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ExternalName" maps to the specified externalName. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a stable IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the clusterIP. More info: http://kubernetes.io/docs/concepts/services-networking/service/#overview @@ -11254,7 +11254,7 @@

    ServicePort v1 core

    nodePort
    integer -The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info: http://kubernetes.io/docs/user-guide/services#type--nodeport +The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info: http://kubernetes.io/docs/concepts/services-networking/service/#type--nodeport port
    integer @@ -11266,7 +11266,7 @@

    ServicePort v1 core

    targetPort -Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: http://kubernetes.io/docs/user-guide/services#defining-a-service +Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: http://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service diff --git a/docs/resources-reference/v1.7/index.html b/docs/resources-reference/v1.7/index.html index da873a072418d..2441751bf868a 100644 --- a/docs/resources-reference/v1.7/index.html +++ b/docs/resources-reference/v1.7/index.html @@ -124,7 +124,7 @@

    Container v1 core

    securityContext
    SecurityContext -Security options the pod should run with. More info: https://kubernetes.io/docs/concepts/policy/security-context/ More info: https://git.k8s.io/community/contributors/design-proposals/auth/security_context.md +Security options the pod should run with. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ More info: https://git.k8s.io/community/contributors/design-proposals/auth/security_context.md stdin
    boolean diff --git a/docs/setup/independent/install-kubeadm.md index 4ff75f4e95e00..68d97db9cdefb 100644 --- a/docs/setup/independent/install-kubeadm.md +++ b/docs/setup/independent/install-kubeadm.md @@ -127,7 +127,7 @@ example. You have to do this until SELinux support is improved in the kubelet. {% capture whatsnext %}
-* [Using kubeadm to Create a Cluster](/docs/getting-started-guides/kubeadm/)
+* [Using kubeadm to Create a Cluster](/docs/home/kubeadm/)
 {% endcapture %} diff --git a/docs/setup/pick-right-solution.md index 8a624c15236e6..e7bb7122fc256 100644 --- a/docs/setup/pick-right-solution.md +++ b/docs/setup/pick-right-solution.md @@ -17,7 +17,7 @@ When you are ready to scale up to more machines and higher availability, a [host [Turnkey cloud solutions](#turnkey-cloud-solutions) require only a few commands to create and cover a wide range of cloud providers.
-If you already have a way to configure hosting resources, use [kubeadm](/docs/getting-started-guides/kubeadm/) to easily bring up a cluster with a single command per machine.
+If you already have a way to configure hosting resources, use [kubeadm](/docs/home/kubeadm/) to easily bring up a cluster with a single command per machine.
 [Custom solutions](#custom-solutions) vary from step-by-step instructions to general advice for setting up a Kubernetes cluster from scratch. @@ -27,9 +27,9 @@ a Kubernetes cluster from scratch. # Local-machine Solutions
-* [Minikube](/docs/getting-started-guides/minikube/) is the recommended method for creating a local, single-node Kubernetes cluster for development and testing. Setup is completely automated and doesn't require a cloud provider account.
+* [Minikube](/docs/home/minikube/) is the recommended method for creating a local, single-node Kubernetes cluster for development and testing. Setup is completely automated and doesn't require a cloud provider account.
-* [Ubuntu on LXD](/docs/getting-started-guides/ubuntu/local/) supports a nine-instance deployment on localhost.
+* [Ubuntu on LXD](/docs/home/ubuntu/local/) supports a nine-instance deployment on localhost.
 * [IBM Cloud private-ce (Community Edition)](https://www.ibm.com/support/knowledgecenter/en/SSBS6K/product_welcome_cloud_private.html) can use VirtualBox on your machine to deploy Kubernetes to one or more VMs for dev and test scenarios. Scales to full multi-node cluster. Free version of the enterprise solution. @@ -62,13 +62,13 @@ a Kubernetes cluster from scratch. These solutions allow you to create Kubernetes clusters on a range of Cloud IaaS providers with only a few commands. These solutions are actively developed and have active community support.
-* [Google Compute Engine (GCE)](/docs/getting-started-guides/gce/)
-* [AWS](/docs/getting-started-guides/aws/)
-* [Azure](/docs/getting-started-guides/azure/)
+* [Google Compute Engine (GCE)](/docs/home/gce/)
+* [AWS](/docs/home/aws/)
+* [Azure](/docs/home/azure/)
 * [Tectonic by CoreOS](https://coreos.com/tectonic)
-* [CenturyLink Cloud](/docs/getting-started-guides/clc/)
+* [CenturyLink Cloud](/docs/home/clc/)
 * [IBM Bluemix](https://github.com/patrocinio/kubernetes-softlayer)
-* [Stackpoint.io](/docs/getting-started-guides/stackpoint/)
+* [Stackpoint.io](/docs/home/stackpoint/)
 * [KUBE2GO.io](https://kube2go.io/) * [Madcore.Ai](https://madcore.ai/) @@ -80,7 +80,7 @@ base operating systems. If you can find a guide below that matches your needs, use it.
It may be a little out of date, but it will be easier than starting from scratch. If you do want to start from scratch, either because you have special requirements, or just because you want to understand what is underneath a Kubernetes
-cluster, try the [Getting Started from Scratch](/docs/getting-started-guides/scratch/) guide.
+cluster, try the [Getting Started from Scratch](/docs/home/scratch/) guide.
 If you are interested in supporting Kubernetes on a new platform, see [Writing a Getting Started Guide](https://git.k8s.io/community/contributors/devel/writing-a-getting-started-guide.md). @@ -88,47 +88,47 @@ If you are interested in supporting Kubernetes on a new platform, see ## Universal If you already have a way to configure hosting resources, use
-[kubeadm](/docs/getting-started-guides/kubeadm/) to easily bring up a cluster
+[kubeadm](/docs/home/kubeadm/) to easily bring up a cluster
 with a single command per machine. ## Cloud These solutions are combinations of cloud providers and operating systems not covered by the above solutions.
-* [CoreOS on AWS or GCE](/docs/getting-started-guides/coreos/)
-* [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/)
-* [Kubespray](/docs/getting-started-guides/kubespray/)
+* [CoreOS on AWS or GCE](/docs/home/coreos/)
+* [Kubernetes on Ubuntu](/docs/home/ubuntu/)
+* [Kubespray](/docs/home/kubespray/)
 ## On-Premises VMs
-* [Vagrant](/docs/getting-started-guides/coreos/) (uses CoreOS and flannel)
-* [CloudStack](/docs/getting-started-guides/cloudstack/) (uses Ansible, CoreOS and flannel)
-* [Vmware vSphere](/docs/getting-started-guides/vsphere/) (uses Debian)
-* [Vmware Photon Controller](/docs/getting-started-guides/photon-controller/) (uses Debian)
-* [Vmware vSphere, OpenStack, or Bare Metal](/docs/getting-started-guides/ubuntu/) (uses Juju, Ubuntu and flannel)
-* [Vmware](/docs/getting-started-guides/coreos/) (uses CoreOS and flannel)
-* [CoreOS on libvirt](/docs/getting-started-guides/libvirt-coreos/) (uses CoreOS)
-* [oVirt](/docs/getting-started-guides/ovirt/)
-* [OpenStack Heat](/docs/getting-started-guides/openstack-heat/) (uses CentOS and flannel)
-* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) (uses Fedora and flannel)
+* [Vagrant](/docs/home/coreos/) (uses CoreOS and flannel)
+* [CloudStack](/docs/home/cloudstack/) (uses Ansible, CoreOS and flannel)
+* [Vmware vSphere](/docs/home/vsphere/) (uses Debian)
+* [Vmware Photon Controller](/docs/home/photon-controller/) (uses Debian)
+* [Vmware vSphere, OpenStack, or Bare Metal](/docs/home/ubuntu/) (uses Juju, Ubuntu and flannel)
+* [Vmware](/docs/home/coreos/) (uses CoreOS and flannel)
+* [CoreOS on libvirt](/docs/home/libvirt-coreos/) (uses CoreOS)
+* [oVirt](/docs/home/ovirt/)
+* [OpenStack Heat](/docs/home/openstack-heat/) (uses CentOS and flannel)
+* [Fedora (Multi Node)](/docs/home/fedora/flannel_multi_node_cluster/) (uses Fedora and flannel)
 ## Bare Metal
-* [Offline](/docs/getting-started-guides/coreos/bare_metal_offline/) (no internet required.
Uses CoreOS and Flannel)
-* [Fedora via Ansible](/docs/getting-started-guides/fedora/fedora_ansible_config/)
-* [Fedora (Single Node)](/docs/getting-started-guides/fedora/fedora_manual_config/)
-* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/)
-* [CentOS](/docs/getting-started-guides/centos/centos_manual_config/)
-* [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/)
-* [CoreOS on AWS or GCE](/docs/getting-started-guides/coreos/)
+* [Offline](/docs/home/coreos/bare_metal_offline/) (no internet required. Uses CoreOS and Flannel)
+* [Fedora via Ansible](/docs/home/fedora/fedora_ansible_config/)
+* [Fedora (Single Node)](/docs/home/fedora/fedora_manual_config/)
+* [Fedora (Multi Node)](/docs/home/fedora/flannel_multi_node_cluster/)
+* [CentOS](/docs/home/centos/centos_manual_config/)
+* [Kubernetes on Ubuntu](/docs/home/ubuntu/)
+* [CoreOS on AWS or GCE](/docs/home/coreos/)
 ## Integrations These solutions provide integration with third-party schedulers, resource managers, and/or lower level platforms.
-* [Kubernetes on Mesos](/docs/getting-started-guides/mesos/) * Instructions specify GCE, but are generic enough to be adapted to most existing Mesos clusters
+* [Kubernetes on Mesos](/docs/home/mesos/) * Instructions specify GCE, but are generic enough to be adapted to most existing Mesos clusters
-* [DCOS](/docs/getting-started-guides/dcos/)
+* [DCOS](/docs/home/dcos/)
 * Community Edition DCOS uses AWS * Enterprise Edition DCOS supports cloud hosting, on-premises VMs, and bare metal @@ -146,37 +146,37 @@ KUBE2GO.io | | multi-support | multi-support | [docs](http Madcore.Ai | Jenkins DSL | Ubuntu | flannel | [docs](https://madcore.ai) | Community ([@madcore-ai](https://github.com/madcore-ai)) Platform9 | | multi-support | multi-support | [docs](https://platform9.com/managed-kubernetes/) | Commercial Giant Swarm | | CoreOS | flannel and/or Calico | [docs](https://docs.giantswarm.io/) | Commercial
-GCE | Saltstack | Debian | GCE | [docs](/docs/getting-started-guides/gce/) | Project
+GCE | Saltstack | Debian | GCE | [docs](/docs/home/gce/) | Project
 Azure Container Service | | Ubuntu | Azure | [docs](https://azure.microsoft.com/en-us/services/container-service/) | Commercial
-Azure (IaaS) | | Ubuntu | Azure | [docs](/docs/getting-started-guides/azure/) | [Community (Microsoft)](https://github.com/Azure/acs-engine)
-Bare-metal | Ansible | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/fedora_ansible_config/) | Project
-Bare-metal | custom | Fedora | _none_ | [docs](/docs/getting-started-guides/fedora/fedora_manual_config/) | Project
-Bare-metal | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal))
-libvirt | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal))
-KVM | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal))
-Mesos/Docker | custom | Ubuntu | Docker | [docs](/docs/getting-started-guides/mesos-docker/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
-Mesos/GCE | | | | [docs](/docs/getting-started-guides/mesos/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
-DCOS | Marathon | CoreOS/Alpine | custom | [docs](/docs/getting-started-guides/dcos/) | Community
([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) -AWS | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/aws/) | Community -GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires)) -Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles)) -Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/bare_metal_offline/) | Community ([@jeffbean](https://github.com/jeffbean)) -CloudStack | Ansible | CoreOS | flannel | [docs](/docs/getting-started-guides/cloudstack/) | Community ([@sebgoa](https://github.com/sebgoa)) -Vmware vSphere | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/vsphere/) | Community ([@imkin](https://github.com/imkin)) -Vmware Photon | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/photon-controller/) | Community ([@alainroy](https://github.com/alainroy)) -Bare-metal | custom | CentOS | flannel | [docs](/docs/getting-started-guides/centos/centos_manual_config/) | Community ([@coolsvap](https://github.com/coolsvap)) -AWS | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) -GCE | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) -Bare Metal | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) -Rackspace | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) -Vmware vSphere | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) -AWS | Saltstack | Debian | AWS | [docs](/docs/getting-started-guides/aws/) | Community ([@justinsb](https://github.com/justinsb)) +Azure (IaaS) | | Ubuntu | Azure | [docs](/docs/home/azure/) | [Community (Microsoft)](https://github.com/Azure/acs-engine) +Bare-metal | Ansible | Fedora | flannel | [docs](/docs/home/fedora/fedora_ansible_config/) | Project +Bare-metal | custom | Fedora | _none_ | [docs](/docs/home/fedora/fedora_manual_config/) | Project +Bare-metal | custom | Fedora | flannel | [docs](/docs/home/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) +libvirt | custom | Fedora | flannel | [docs](/docs/home/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) +KVM | 
custom | Fedora | flannel | [docs](/docs/home/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) +Mesos/Docker | custom | Ubuntu | Docker | [docs](/docs/home/mesos-docker/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) +Mesos/GCE | | | | [docs](/docs/home/mesos/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) +DCOS | Marathon | CoreOS/Alpine | custom | [docs](/docs/home/dcos/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) +AWS | CoreOS | CoreOS | flannel | [docs](/docs/home/aws/) | Community +GCE | CoreOS | CoreOS | flannel | [docs](/docs/home/coreos/) | Community ([@pires](https://github.com/pires)) +Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/home/coreos/) | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles)) +Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](/docs/home/coreos/bare_metal_offline/) | Community ([@jeffbean](https://github.com/jeffbean)) +CloudStack | Ansible | CoreOS | flannel | [docs](/docs/home/cloudstack/) | Community ([@sebgoa](https://github.com/sebgoa)) +Vmware vSphere | Saltstack | Debian | OVS | [docs](/docs/home/vsphere/) | Community ([@imkin](https://github.com/imkin)) +Vmware Photon | Saltstack | Debian | OVS | [docs](/docs/home/photon-controller/) | Community ([@alainroy](https://github.com/alainroy)) +Bare-metal | custom | CentOS | flannel | [docs](/docs/home/centos/centos_manual_config/) | Community ([@coolsvap](https://github.com/coolsvap)) +AWS | Juju | Ubuntu | flannel | [docs](/docs/home/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) +GCE | Juju | Ubuntu | flannel | [docs](/docs/home/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) +Bare Metal | Juju | Ubuntu | flannel | [docs](/docs/home/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) +Rackspace | Juju | Ubuntu | flannel | [docs](/docs/home/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) +Vmware vSphere | Juju | Ubuntu | flannel | [docs](/docs/home/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) +AWS | Saltstack | Debian | AWS | [docs](/docs/home/aws/) | Community ([@justinsb](https://github.com/justinsb)) AWS | kops | Debian | AWS | [docs](https://github.com/kubernetes/kops/) | Community ([@justinsb](https://github.com/justinsb)) -Bare-metal | custom | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY)) -libvirt/KVM | 
CoreOS | CoreOS | libvirt/KVM | [docs](/docs/getting-started-guides/libvirt-coreos/) | Community ([@lhuard1A](https://github.com/lhuard1A))
-oVirt | | | | [docs](/docs/getting-started-guides/ovirt/) | Community ([@simon3z](https://github.com/simon3z))
-OpenStack Heat | Saltstack | CentOS | Neutron + flannel hostgw | [docs](/docs/getting-started-guides/openstack-heat/) | Community ([@FujitsuEnablingSoftwareTechnologyGmbH](https://github.com/FujitsuEnablingSoftwareTechnologyGmbH))
-any | any | any | any | [docs](/docs/getting-started-guides/scratch/) | Community ([@erictune](https://github.com/erictune))
+Bare-metal | custom | Ubuntu | flannel | [docs](/docs/home/ubuntu/) | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY))
+libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](/docs/home/libvirt-coreos/) | Community ([@lhuard1A](https://github.com/lhuard1A))
+oVirt | | | | [docs](/docs/home/ovirt/) | Community ([@simon3z](https://github.com/simon3z))
+OpenStack Heat | Saltstack | CentOS | Neutron + flannel hostgw | [docs](/docs/home/openstack-heat/) | Community ([@FujitsuEnablingSoftwareTechnologyGmbH](https://github.com/FujitsuEnablingSoftwareTechnologyGmbH))
+any | any | any | any | [docs](/docs/home/scratch/) | Community ([@erictune](https://github.com/erictune))
 any | any | any | any | [docs](http://docs.projectcalico.org/v2.2/getting-started/kubernetes/installation/) | Commercial and Community **Note**: The above table is ordered by version test/used in nodes, followed by support level. diff --git a/docs/tasks/access-application-cluster/access-cluster.md index 641f3c4ed9f3c..5a278289e8f67 100644 --- a/docs/tasks/access-application-cluster/access-cluster.md +++ b/docs/tasks/access-application-cluster/access-cluster.md @@ -14,7 +14,7 @@ Kubernetes CLI, `kubectl`. To access a cluster, you need to know the location of the cluster and have credentials to access it. Typically, this is automatically set-up when you work through
-a [Getting started guide](/docs/getting-started-guides/),
+a [Getting started guide](/docs/home/),
 or someone else setup the cluster and provided you with credentials and a location. Check the location and credentials that kubectl knows about with this command: @@ -183,7 +183,7 @@ In each case, the credentials of the pod are used to communicate securely with t The previous section was about connecting the Kubernetes API server. This section is about connecting to other services running on Kubernetes cluster. In Kubernetes, the
-[nodes](/docs/admin/node), [pods](/docs/user-guide/pods) and [services](/docs/user-guide/services) all have
+[nodes](/docs/admin/node), [pods](/docs/user-guide/pods) and [services](/docs/concepts/services-networking/service/) all have
 their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be routable, so they will not be reachable from a machine outside the cluster, such as your desktop machine. @@ -194,7 +194,7 @@ You have several options for connecting to nodes, pods and services from outside - Access services through public IPs. - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside
-  the cluster. See the [services](/docs/user-guide/services) and
+  the cluster. See the [services](/docs/concepts/services-networking/service/) and
   [kubectl expose](/docs/user-guide/kubectl/v1.6/#expose) documentation.
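[Editor's note] The `NodePort`/`LoadBalancer` bullet above defers to the `kubectl expose` reference; as a hedged one-liner of the pattern it describes, assuming a Deployment named `my-nginx` already exists in the cluster:

```shell
# Assumption: a Deployment called my-nginx is already running.
kubectl expose deployment my-nginx --type=NodePort --port=80
kubectl get service my-nginx   # the allocated node port appears in the PORT(S) column
```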
- Depending on your cluster environment, this may just expose the service to your corporate network, or it may expose it to the internet. Think about whether the service being exposed is secure. diff --git a/docs/tasks/access-application-cluster/web-ui-dashboard.md b/docs/tasks/access-application-cluster/web-ui-dashboard.md index f77da393e5d94..a6bd934444684 100644 --- a/docs/tasks/access-application-cluster/web-ui-dashboard.md +++ b/docs/tasks/access-application-cluster/web-ui-dashboard.md @@ -64,7 +64,7 @@ To access the deploy wizard from the Welcome page, click the respective button. The deploy wizard expects that you provide the following information: -- **App name** (mandatory): Name for your application. A [label](/docs/user-guide/labels/) with the name will be added to the Deployment and Service, if any, that will be deployed. +- **App name** (mandatory): Name for your application. A [label](/docs/concepts/overview/working-with-objects/labels/) with the name will be added to the Deployment and Service, if any, that will be deployed. The application name must be unique within the selected Kubernetes [namespace](/docs/tasks/administer-cluster/namespaces/). It must start with a lowercase character, and end with a lowercase character or a number, and contain only lowercase letters, numbers and dashes (-). It is limited to 24 characters. Leading and trailing spaces are ignored. @@ -84,7 +84,7 @@ If needed, you can expand the **Advanced options** section where you can specify - **Description**: The text you enter here will be added as an [annotation](/docs/concepts/overview/working-with-objects/annotations/) to the Deployment and displayed in the application's details. -- **Labels**: Default [labels](/docs/user-guide/labels/) to be used for your application are application name and version. You can specify additional labels to be applied to the Deployment, Service (if any), and Pods, such as release, environment, tier, partition, and release track. +- **Labels**: Default [labels](/docs/concepts/overview/working-with-objects/labels/) to be used for your application are application name and version. You can specify additional labels to be applied to the Deployment, Service (if any), and Pods, such as release, environment, tier, partition, and release track. Example: diff --git a/docs/tasks/administer-cluster/access-cluster-api.md b/docs/tasks/administer-cluster/access-cluster-api.md index 88ef4334cf94e..a6af1d3f37dbd 100644 --- a/docs/tasks/administer-cluster/access-cluster-api.md +++ b/docs/tasks/administer-cluster/access-cluster-api.md @@ -22,7 +22,7 @@ Kubernetes command-line tool, `kubectl`. To access a cluster, you need to know the location of the cluster and have credentials to access it. Typically, this is automatically set-up when you work through -a [Getting started guide](/docs/getting-started-guides/), +a [Getting started guide](/docs/home/), or someone else setup the cluster and provided you with credentials and a location. Check the location and credentials that kubectl knows about with this command: diff --git a/docs/tasks/administer-cluster/access-cluster-services.md b/docs/tasks/administer-cluster/access-cluster-services.md index 5c55fa3acaca5..660acdde028a1 100644 --- a/docs/tasks/administer-cluster/access-cluster-services.md +++ b/docs/tasks/administer-cluster/access-cluster-services.md @@ -15,7 +15,7 @@ This page shows how to connect to services running on the Kubernetes cluster. 
## Accessing services running on the cluster -In Kubernetes, [nodes](/docs/admin/node), [pods](/docs/user-guide/pods) and [services](/docs/user-guide/services) all have +In Kubernetes, [nodes](/docs/admin/node), [pods](/docs/user-guide/pods) and [services](/docs/concepts/services-networking/service/) all have their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be routable, so they will not be reachable from a machine outside the cluster, such as your desktop machine. @@ -26,7 +26,7 @@ You have several options for connecting to nodes, pods and services from outside - Access services through public IPs. - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside - the cluster. See the [services](/docs/user-guide/services) and + the cluster. See the [services](/docs/concepts/services-networking/service/) and [kubectl expose](/docs/user-guide/kubectl/v1.6/#expose) documentation. - Depending on your cluster environment, this may just expose the service to your corporate network, or it may expose it to the internet. Think about whether the service being exposed is secure. diff --git a/docs/tasks/administer-cluster/calico-network-policy.md b/docs/tasks/administer-cluster/calico-network-policy.md index 4543aa7069743..f8879f07445cb 100644 --- a/docs/tasks/administer-cluster/calico-network-policy.md +++ b/docs/tasks/administer-cluster/calico-network-policy.md @@ -15,7 +15,7 @@ This page shows how to use Calico for NetworkPolicy. {% capture steps %} ## Deploying a cluster using Calico -You can deploy a cluster using Calico for network policy in the default [GCE deployment](/docs/getting-started-guides/gce) using the following set of commands: +You can deploy a cluster using Calico for network policy in the default [GCE deployment](/docs/home/gce/) using the following set of commands: ```shell export NETWORK_POLICY_PROVIDER=calico @@ -55,7 +55,7 @@ There are two main components to be aware of: {% endcapture %} {% capture whatsnext %} -Once your cluster is running, you can follow the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy. +Once your cluster is running, you can follow the [NetworkPolicy getting started guide](/docs/home/network-policy/walkthrough) to try out Kubernetes NetworkPolicy. {% endcapture %} {% include templates/task.md %} diff --git a/docs/tasks/administer-cluster/cilium-network-policy.md b/docs/tasks/administer-cluster/cilium-network-policy.md index 6db677f313acd..0d881178d0b41 100644 --- a/docs/tasks/administer-cluster/cilium-network-policy.md +++ b/docs/tasks/administer-cluster/cilium-network-policy.md @@ -72,7 +72,7 @@ There are two main components to be aware of: {% endcapture %} {% capture whatsnext %} -Once your cluster is running, you can follow the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy with Cilium. Have fun, and if you have questions, contact us using the [Cilium Slack Channel](https://cilium.herokuapp.com/). +Once your cluster is running, you can follow the [NetworkPolicy getting started guide](/docs/home/network-policy/walkthrough) to try out Kubernetes NetworkPolicy with Cilium. Have fun, and if you have questions, contact us using the [Cilium Slack Channel](https://cilium.herokuapp.com/). 
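[Editor's note] The Calico and Cilium hunks above both end by pointing at the NetworkPolicy walkthrough. A minimal hedged policy of the kind that walkthrough exercises, assuming a cluster recent enough for the `networking.k8s.io/v1` API; the app labels are illustrative:

```shell
# Hedged NetworkPolicy sketch; the app labels are illustrative assumptions.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend        # the policy selects pods carrying this label
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend   # only pods labeled app=frontend may connect
EOF
```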
{% endcapture %} {% include templates/task.md %} diff --git a/docs/tasks/administer-cluster/cluster-management.md b/docs/tasks/administer-cluster/cluster-management.md index 3566ef2c6f212..cdadc8f7d5263 100644 --- a/docs/tasks/administer-cluster/cluster-management.md +++ b/docs/tasks/administer-cluster/cluster-management.md @@ -15,7 +15,7 @@ running cluster. ## Creating and configuring a Cluster -To install Kubernetes on a set of machines, consult one of the existing [Getting Started guides](/docs/getting-started-guides/) depending on your environment. +To install Kubernetes on a set of machines, consult one of the existing [Getting Started guides](/docs/home/) depending on your environment. ## Upgrading a cluster diff --git a/docs/tasks/administer-cluster/kube-router-network-policy.md b/docs/tasks/administer-cluster/kube-router-network-policy.md index 3794bf7d4578b..49d523b6d8368 100644 --- a/docs/tasks/administer-cluster/kube-router-network-policy.md +++ b/docs/tasks/administer-cluster/kube-router-network-policy.md @@ -18,7 +18,7 @@ The Kube-router Addon comes with a Network Policy Controller that watches Kubern {% endcapture %} {% capture whatsnext %} -Once you have installed the Kube-router addon, you can follow the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy. +Once you have installed the Kube-router addon, you can follow the [NetworkPolicy getting started guide](/docs/home/network-policy/walkthrough) to try out Kubernetes NetworkPolicy. {% endcapture %} {% include templates/task.md %} diff --git a/docs/tasks/administer-cluster/namespaces-walkthrough.md b/docs/tasks/administer-cluster/namespaces-walkthrough.md index 6a6e47a37f8ec..d9d79eefbdf8a 100644 --- a/docs/tasks/administer-cluster/namespaces-walkthrough.md +++ b/docs/tasks/administer-cluster/namespaces-walkthrough.md @@ -20,7 +20,7 @@ This example demonstrates how to use Kubernetes namespaces to subdivide your clu This example assumes the following: -1. You have an [existing Kubernetes cluster](/docs/getting-started-guides/). +1. You have an [existing Kubernetes cluster](/docs/home/). 2. You have a basic understanding of Kubernetes _[Pods](/docs/concepts/workloads/pods/pod/)_, _[Services](/docs/concepts/services-networking/service/)_, and _[Deployments](/docs/concepts/workloads/controllers/deployment/)_. ### Step One: Understand the default namespace diff --git a/docs/tasks/administer-cluster/namespaces.md b/docs/tasks/administer-cluster/namespaces.md index d8e67b8999f46..5fadb62eaac69 100644 --- a/docs/tasks/administer-cluster/namespaces.md +++ b/docs/tasks/administer-cluster/namespaces.md @@ -10,7 +10,7 @@ This page shows how to view, work in, and delete namespaces. The page also shows {% endcapture %} {% capture prerequisites %} -* Have an [existing Kubernetes cluster](/docs/getting-started-guides/). +* Have an [existing Kubernetes cluster](/docs/home/). * Have a basic understanding of Kubernetes _[Pods](/docs/concepts/workloads/pods/pod/)_, _[Services](/docs/concepts/services-networking/service/)_, and _[Deployments](/docs/concepts/workloads/controllers/deployment/)_. 
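[Editor's note] namespaces.md, patched just above, shows how to view, work in, and delete namespaces; a hedged sketch of those basic operations (the namespace name is illustrative):

```shell
# Hedged sketch of the namespace operations that page demonstrates.
kubectl get namespaces                    # list namespaces, including "default" and "kube-system"
kubectl create namespace demo-space       # illustrative name
kubectl get pods --namespace=demo-space   # work inside the new namespace
kubectl delete namespace demo-space       # deletes the namespace and everything in it
```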
{% endcapture %} diff --git a/docs/tasks/administer-cluster/out-of-resource.md index a86f70ddf2c4d..02098595df1fc 100644 --- a/docs/tasks/administer-cluster/out-of-resource.md +++ b/docs/tasks/administer-cluster/out-of-resource.md @@ -49,7 +49,7 @@ container, and if users use the [node allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) feature, out of resource decisions are made local to the end user pod part of the cgroup hierarchy as well as the root node. This
-[script](/docs/concepts/cluster-administration/out-of-resource/memory-available.sh)
+[script](/docs/tasks/administer-cluster/out-of-resource/memory-available.sh)
 reproduces the same set of steps that the `kubelet` performs to calculate `memory.available`. The `kubelet` excludes inactive_file (i.e. # of bytes of file-backed memory on inactive LRU list) from its calculation as it assumes that diff --git a/docs/tasks/administer-cluster/romana-network-policy.md index ab98797713c2a..453e9e488a6e6 100644 --- a/docs/tasks/administer-cluster/romana-network-policy.md +++ b/docs/tasks/administer-cluster/romana-network-policy.md @@ -12,7 +12,7 @@ This page shows how to use Romana for NetworkPolicy. {% capture prerequisites %}
-Complete steps 1, 2, and 3 of the [kubeadm getting started guide](/docs/getting-started-guides/kubeadm/).
+Complete steps 1, 2, and 3 of the [kubeadm getting started guide](/docs/home/kubeadm/).
 {% endcapture %} @@ -34,7 +34,7 @@ To apply network policies use one of the following: {% capture whatsnext %}
-Once your have installed Romana, you can follow the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy.
+Once you have installed Romana, you can follow the [NetworkPolicy getting started guide](/docs/home/network-policy/walkthrough) to try out Kubernetes NetworkPolicy.
 {% endcapture %} diff --git a/docs/tasks/administer-cluster/weave-network-policy.md index 85537e93f3647..11f4d8548635c 100644 --- a/docs/tasks/administer-cluster/weave-network-policy.md +++ b/docs/tasks/administer-cluster/weave-network-policy.md @@ -12,7 +12,7 @@ This page shows how to use Weave Net for NetworkPolicy. {% capture prerequisites %}
-Complete steps 1, 2, and 3 of the [kubeadm getting started guide](/docs/getting-started-guides/kubeadm/).
+Complete steps 1, 2, and 3 of the [kubeadm getting started guide](/docs/home/kubeadm/).
 {% endcapture %} @@ -108,7 +108,7 @@ spec: {% capture whatsnext %}
-Once you have installed the Weave Net addon, you can follow the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy.
+Once you have installed the Weave Net addon, you can follow the [NetworkPolicy getting started guide](/docs/home/network-policy/walkthrough) to try out Kubernetes NetworkPolicy.
 {% endcapture %} diff --git a/docs/tasks/administer-federation/events.md index 1d9f72ea0e811..dd7e1af68872e 100644 --- a/docs/tasks/administer-federation/events.md +++ b/docs/tasks/administer-federation/events.md @@ -19,7 +19,7 @@ this for you). Other tutorials, for example by Kelsey Hightower, are also available to help you.
You are also expected to have a basic -[working knowledge of Kubernetes](/docs/getting-started-guides/) in +[working knowledge of Kubernetes](/docs/home/) in general. ## Overview diff --git a/docs/tasks/administer-federation/ingress.md b/docs/tasks/administer-federation/ingress.md index 7909063543bba..66982b02c4439 100644 --- a/docs/tasks/administer-federation/ingress.md +++ b/docs/tasks/administer-federation/ingress.md @@ -66,7 +66,7 @@ this for you). Other tutorials, for example by Kelsey Hightower, are also available to help you. You must also have a basic -[working knowledge of Kubernetes](/docs/getting-started-guides/) in +[working knowledge of Kubernetes](/docs/home/) in general, and [Ingress](/docs/concepts/services-networking/ingress/) in particular. {% endcapture %} diff --git a/docs/tasks/administer-federation/replicaset.md b/docs/tasks/administer-federation/replicaset.md index 896442b35bbcf..fa96f2f3dceb8 100644 --- a/docs/tasks/administer-federation/replicaset.md +++ b/docs/tasks/administer-federation/replicaset.md @@ -16,7 +16,7 @@ replicas exist across the registered clusters. * {% include federated-task-tutorial-prereqs.md %} * You are also expected to have a basic -[working knowledge of Kubernetes](/docs/getting-started-guides/) in +[working knowledge of Kubernetes](/docs/home/) in general and [ReplicaSets](/docs/concepts/workloads/controllers/replicaset/) in particular. {% endcapture %} diff --git a/docs/tasks/administer-federation/secret.md b/docs/tasks/administer-federation/secret.md index 2cd9aa26ea146..ec847af354fd9 100644 --- a/docs/tasks/administer-federation/secret.md +++ b/docs/tasks/administer-federation/secret.md @@ -18,7 +18,7 @@ this for you). Other tutorials, for example by Kelsey Hightower, are also available to help you. You are also expected to have a basic -[working knowledge of Kubernetes](/docs/getting-started-guides/) in +[working knowledge of Kubernetes](/docs/home/) in general and [Secrets](/docs/concepts/configuration/secret/) in particular. ## Overview diff --git a/docs/tasks/configure-pod-container/assign-pods-nodes.md b/docs/tasks/configure-pod-container/assign-pods-nodes.md index 06a29e575ac68..613c731a0ef40 100644 --- a/docs/tasks/configure-pod-container/assign-pods-nodes.md +++ b/docs/tasks/configure-pod-container/assign-pods-nodes.md @@ -75,7 +75,7 @@ a `disktype=ssd` label. {% capture whatsnext %} Learn more about -[labels and selectors](/docs/user-guide/labels/). +[labels and selectors](/docs/concepts/overview/working-with-objects/labels/). {% endcapture %} {% include templates/task.md %} diff --git a/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md b/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md index 71afeaffba6d5..5d8d46c4257f7 100644 --- a/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md +++ b/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md @@ -22,7 +22,7 @@ bound to a suitable PersistentVolume. * You need to have a Kubernetes cluster that has only one Node, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a single-node cluster, you can create one by using -[Minikube](/docs/getting-started-guides/minikube). +[Minikube](/docs/home/minikube). * Familiarize yourself with the material in [Persistent Volumes](/docs/concepts/storage/persistent-volumes/). 
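For readers following the persistent-volume task touched in the hunk above, a minimal sketch of the kind of objects it configures, assuming a single-node test cluster such as Minikube. The `task-pv-volume`/`task-pv-claim` names and the `/tmp/data` path are illustrative, not taken from the patch:

```shell
# Minimal sketch: a hostPath PersistentVolume plus a claim that binds to it,
# suitable only for single-node test clusters.
kubectl create -f - <<EOF
kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume          # illustrative name
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data             # illustrative node-local path
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim           # illustrative name
spec:
  storageClassName: manual      # matches the PV above so the claim can bind
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```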
diff --git a/docs/tasks/debug-application-cluster/debug-application-introspection.md b/docs/tasks/debug-application-cluster/debug-application-introspection.md index 55c4c24c7f8ce..292e86a36305c 100644 --- a/docs/tasks/debug-application-cluster/debug-application-introspection.md +++ b/docs/tasks/debug-application-cluster/debug-application-introspection.md @@ -379,7 +379,7 @@ Learn about additional debugging tools, including: * [Logging](/docs/user-guide/logging/overview) * [Monitoring](/docs/user-guide/monitoring) * [Getting into containers via `exec`](/docs/user-guide/getting-into-containers) -* [Connecting to containers via proxies](/docs/user-guide/connecting-to-applications-proxy) +* [Connecting to containers via proxies](/docs/tasks/access-kubernetes-api/http-proxy-access-api/) * [Connecting to containers via port forwarding](/docs/user-guide/connecting-to-applications-port-forward) diff --git a/docs/tasks/debug-application-cluster/debug-stateful-set.md b/docs/tasks/debug-application-cluster/debug-stateful-set.md index 070141ec9761f..4c36e26f97fa5 100644 --- a/docs/tasks/debug-application-cluster/debug-stateful-set.md +++ b/docs/tasks/debug-application-cluster/debug-stateful-set.md @@ -79,7 +79,7 @@ kubectl annotate pods pod.alpha.kubernetes.io/initialized="true" --ov {% capture whatsnext %} -Learn more about [debugging an init-container](/docs/tasks/troubleshoot/debug-init-containers/). +Learn more about [debugging an init-container](/docs/tasks/debug-application-cluster/debug-init-containers/). {% endcapture %} diff --git a/docs/tasks/debug-application-cluster/resource-usage-monitoring.md b/docs/tasks/debug-application-cluster/resource-usage-monitoring.md index 9ca48d9bd0373..45ac03b5865c8 100644 --- a/docs/tasks/debug-application-cluster/resource-usage-monitoring.md +++ b/docs/tasks/debug-application-cluster/resource-usage-monitoring.md @@ -4,7 +4,7 @@ approvers: title: Tools for Monitoring Compute, Storage, and Network Resources --- -Understanding how an application behaves when deployed is crucial to scaling the application and providing a reliable service. In a Kubernetes cluster, application performance can be examined at many different levels: containers, [pods](/docs/user-guide/pods), [services](/docs/user-guide/services), and whole clusters. As part of Kubernetes we want to provide users with detailed resource usage information about their running applications at all these levels. This will give users deep insights into how their applications are performing and where possible application bottlenecks may be found. In comes [Heapster](https://github.com/kubernetes/heapster), a project meant to provide a base monitoring platform on Kubernetes. +Understanding how an application behaves when deployed is crucial to scaling the application and providing a reliable service. In a Kubernetes cluster, application performance can be examined at many different levels: containers, [pods](/docs/user-guide/pods), [services](/docs/concepts/services-networking/service/), and whole clusters. As part of Kubernetes we want to provide users with detailed resource usage information about their running applications at all these levels. This will give users deep insights into how their applications are performing and where possible application bottlenecks may be found. In comes [Heapster](https://github.com/kubernetes/heapster), a project meant to provide a base monitoring platform on Kubernetes. 
## Overview diff --git a/docs/tasks/federation/federation-service-discovery.md b/docs/tasks/federation/federation-service-discovery.md index a30910af72837..80ba72bb8c1f9 100644 --- a/docs/tasks/federation/federation-service-discovery.md +++ b/docs/tasks/federation/federation-service-discovery.md @@ -25,7 +25,7 @@ this for you). Other tutorials, for example by Kelsey Hightower, are also available to help you. You are also expected to have a basic -[working knowledge of Kubernetes](/docs/getting-started-guides/) in +[working knowledge of Kubernetes](/docs/home/) in general, and [Services](/docs/concepts/services-networking/service/) in particular. ## Overview diff --git a/docs/tasks/federation/set-up-cluster-federation-kubefed.md b/docs/tasks/federation/set-up-cluster-federation-kubefed.md index 8f6b970dde776..1786cf183f338 100644 --- a/docs/tasks/federation/set-up-cluster-federation-kubefed.md +++ b/docs/tasks/federation/set-up-cluster-federation-kubefed.md @@ -21,7 +21,7 @@ using `kubefed`. ## Prerequisites This guide assumes that you have a running Kubernetes cluster. Please -see one of the [getting started](/docs/getting-started-guides/) guides +see one of the [getting started](/docs/home/) guides for installation instructions for your platform. @@ -367,7 +367,7 @@ kubefed init fellowship \ ``` For more information see -[Setting up CoreDNS as DNS provider for Cluster Federation](/docs/tutorials/federation/set-up-coredns-provider-federation/). +[Setting up CoreDNS as DNS provider for Cluster Federation](/docs/tasks/federation/set-up-coredns-provider-federation/). ## Adding a cluster to a federation diff --git a/docs/tasks/federation/set-up-coredns-provider-federation.md b/docs/tasks/federation/set-up-coredns-provider-federation.md index 4268245dbaf73..c0cf27780269b 100644 --- a/docs/tasks/federation/set-up-coredns-provider-federation.md +++ b/docs/tasks/federation/set-up-coredns-provider-federation.md @@ -23,7 +23,7 @@ DNS provider for Cluster Federation. * You need to have a running Kubernetes cluster (which is referenced as host cluster). Please see one of the -[getting started](/docs/getting-started-guides/) guides for +[getting started](/docs/home/) guides for installation instructions for your platform. * Support for `LoadBalancer` services in member clusters of federation is mandatory to enable `CoreDNS` for service discovery across federated clusters. diff --git a/docs/tasks/federation/set-up-placement-policies-federation.md b/docs/tasks/federation/set-up-placement-policies-federation.md index a5dd281593abd..460055f0b6c1d 100644 --- a/docs/tasks/federation/set-up-placement-policies-federation.md +++ b/docs/tasks/federation/set-up-placement-policies-federation.md @@ -12,7 +12,7 @@ resources using an external policy engine. {% capture prerequisites %} You need to have a running Kubernetes cluster (which is referenced as host -cluster). Please see one of the [getting started](/docs/getting-started-guides/) +cluster). Please see one of the [getting started](/docs/home/) guides for installation instructions for your platform. {% endcapture %} diff --git a/docs/tasks/job/parallel-processing-expansion.md b/docs/tasks/job/parallel-processing-expansion.md index f8fac8066ec0f..7feb9c7602a4f 100644 --- a/docs/tasks/job/parallel-processing-expansion.md +++ b/docs/tasks/job/parallel-processing-expansion.md @@ -109,7 +109,7 @@ Processing item cherry In the first example, each instance of the template had one parameter, and that parameter was also used as a label. 
However label keys are limited in [what characters they can -contain](/docs/user-guide/labels/#syntax-and-character-set). +contain](/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set). This slightly more complex example uses the jinja2 template language to generate our objects. We will use a one-line python script to convert the template to a file. diff --git a/docs/tasks/manage-daemon/update-daemon-set.md b/docs/tasks/manage-daemon/update-daemon-set.md index 653eec57a145c..46a5823218b6e 100644 --- a/docs/tasks/manage-daemon/update-daemon-set.md +++ b/docs/tasks/manage-daemon/update-daemon-set.md @@ -159,7 +159,7 @@ causes: The rollout is stuck because new DaemonSet pods can't be scheduled on at least one node. This is possible when the node is -[running out of resources](/docs/concepts/cluster-administration/out-of-resource/). +[running out of resources](/docs/tasks/administer-cluster/out-of-resource/). When this happens, find the nodes that don't have the DaemonSet pods scheduled on by comparing the output of `kubectl get nodes` and the output of: diff --git a/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index 6d23d7d008a91..8416b7ec495ca 100644 --- a/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -18,7 +18,7 @@ This document walks you through an example of enabling Horizontal Pod Autoscalin This example requires a running Kubernetes cluster and kubectl, version 1.2 or later. [Heapster](https://github.com/kubernetes/heapster) monitoring needs to be deployed in the cluster as Horizontal Pod Autoscaler uses it to collect metrics -(if you followed [getting started on GCE guide](/docs/getting-started-guides/gce), +(if you followed [getting started on GCE guide](/docs/home/gce/), heapster monitoring will be turned-on by default). To specify multiple resource metrics for a Horizontal Pod Autoscaler, you must have a Kubernetes cluster diff --git a/docs/tasks/run-application/run-replicated-stateful-application.md b/docs/tasks/run-application/run-replicated-stateful-application.md index 9613bb2437d2d..6c86b8d0428c8 100644 --- a/docs/tasks/run-application/run-replicated-stateful-application.md +++ b/docs/tasks/run-application/run-replicated-stateful-application.md @@ -13,7 +13,7 @@ title: Run a Replicated Stateful Application {% capture overview %} This page shows how to run a replicated stateful application using a -[StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/) controller. +[StatefulSet](/docs/concepts/workloads/controllers/statefulset/) controller. The example is a MySQL single-master topology with multiple slaves running asynchronous replication. @@ -29,7 +29,7 @@ on general patterns for running stateful applications in Kubernetes. * {% include default-storage-class-prereqs.md %} * This tutorial assumes you are familiar with [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) - and [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/), + and [StatefulSets](/docs/concepts/workloads/controllers/statefulset/), as well as other core concepts like [Pods](/docs/concepts/workloads/pods/pod/), [Services](/docs/concepts/services-networking/service/), and [ConfigMaps](/docs/tasks/configure-pod-container/configmap/). 
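As a reference for the parallel-processing-expansion hunk above, a sketch of the one-line render step it mentions, assuming the `jinja2` Python package is installed; the `job.yaml.jinja2` file name is illustrative:

```shell
# Expand a jinja2 template into plain YAML with a one-line python script.
alias render_template='python -c "from jinja2 import Template; import sys; print(Template(sys.stdin.read()).render());"'
cat job.yaml.jinja2 | render_template > jobs.yaml   # rendered objects ready for kubectl
```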
@@ -169,7 +169,7 @@ Because the example topology consists of a single MySQL master and any number of slaves, the script simply assigns ordinal `0` to be the master, and everyone else to be slaves. Combined with the StatefulSet controller's -[deployment order guarantee](/docs/concepts/abstractions/controllers/statefulsets/#deployment-and-scaling-guarantee), +[deployment order guarantee](/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantee), this ensures the MySQL master is Ready before creating slaves, so they can begin replicating. diff --git a/docs/tasks/tools/install-kubectl.md b/docs/tasks/tools/install-kubectl.md index 5d52e04c74868..095f3a1a7d11d 100644 --- a/docs/tasks/tools/install-kubectl.md +++ b/docs/tasks/tools/install-kubectl.md @@ -130,7 +130,7 @@ Edit the config file with a text editor of your choice, such as Notepad for exam ## Configure kubectl -In order for kubectl to find and access a Kubernetes cluster, it needs a [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/), which is created automatically when you create a cluster using kube-up.sh or successfully deploy a Minikube cluster. See the [getting started guides](/docs/getting-started-guides/) for more about creating clusters. If you need access to a cluster you didn't create, see the [Sharing Cluster Access document](/docs/tasks/administer-cluster/share-configuration/). +In order for kubectl to find and access a Kubernetes cluster, it needs a [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/), which is created automatically when you create a cluster using kube-up.sh or successfully deploy a Minikube cluster. See the [getting started guides](/docs/home/) for more about creating clusters. If you need access to a cluster you didn't create, see the [Sharing Cluster Access document](/docs/tasks/administer-cluster/share-configuration/). By default, kubectl configuration is located at `~/.kube/config`. ## Check the kubectl configuration diff --git a/docs/tasks/tools/install-minikube.md b/docs/tasks/tools/install-minikube.md index 3246073871522..fec054e4ab409 100644 --- a/docs/tasks/tools/install-minikube.md +++ b/docs/tasks/tools/install-minikube.md @@ -46,7 +46,7 @@ If you do not already have a hypervisor installed, install one now. {% capture whatsnext %} -* [Running Kubernetes Locally via Minikube](/docs/getting-started-guides/minikube/) +* [Running Kubernetes Locally via Minikube](/docs/home/minikube/) {% endcapture %} diff --git a/docs/tools/index.md b/docs/tools/index.md index b4ba12ece0ed6..66817843f47d4 100644 --- a/docs/tools/index.md +++ b/docs/tools/index.md @@ -16,7 +16,7 @@ Kubernetes contains the following built-in tools: ##### Kubeadm -[`kubeadm`](/docs/getting-started-guides/kubeadm/) is the command line tool for easily provisioning a secure Kubernetes cluster on top of physical or cloud servers or virtual machines (currently in alpha). +[`kubeadm`](/docs/home/kubeadm/) is the command line tool for easily provisioning a secure Kubernetes cluster on top of physical or cloud servers or virtual machines (currently in alpha). ##### Kubefed @@ -25,7 +25,7 @@ to help you administrate your federated clusters. ##### Minikube -[`minikube`](/docs/getting-started-guides/minikube/) is a tool that makes it +[`minikube`](/docs/home/minikube/) is a tool that makes it easy to run a single-node Kubernetes cluster locally on your workstation for development and testing purposes. 
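A sketch of the ordinal-based role assignment described in the run-replicated-stateful-application hunk above, assuming StatefulSet Pod hostnames of the form `<name>-<ordinal>` (e.g. `mysql-0`), with ordinal `0` acting as the master:

```shell
# Derive the replication role from the Pod's ordinal index, which is the
# trailing number in a StatefulSet Pod hostname (e.g. mysql-0, mysql-1).
[[ $(hostname) =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
if [[ $ordinal -eq 0 ]]; then
  echo "ordinal 0: configure this Pod as the MySQL master"
else
  echo "ordinal $ordinal: configure this Pod as a slave replicating from mysql-0"
fi
```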
diff --git a/docs/tutorials/stateful-application/basic-stateful-set.md b/docs/tutorials/stateful-application/basic-stateful-set.md index ea1b752c7ba16..2dc65eabe6971 100644 --- a/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/docs/tutorials/stateful-application/basic-stateful-set.md @@ -11,7 +11,7 @@ title: StatefulSet Basics {% capture overview %} This tutorial provides an introduction to managing applications with -[StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/). It +[StatefulSets](/docs/concepts/workloads/controllers/statefulset/). It demonstrates how to create, delete, scale, and update the Pods of StatefulSets. {% endcapture %} @@ -24,7 +24,7 @@ following Kubernetes concepts. * [Headless Services](/docs/concepts/services-networking/service/#headless-services) * [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) * [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/persistent-volume-provisioning/) -* [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) +* [StatefulSets](/docs/concepts/workloads/controllers/statefulset/) * [kubectl CLI](/docs/user-guide/kubectl) This tutorial assumes that your cluster is configured to dynamically provision @@ -54,7 +54,7 @@ After this tutorial, you will be familiar with the following. Begin by creating a StatefulSet using the example below. It is similar to the example presented in the -[StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) concept. +[StatefulSets](/docs/concepts/workloads/controllers/statefulset/) concept. It creates a [Headless Service](/docs/concepts/services-networking/service/#headless-services), `nginx`, to publish the IP addresses of Pods in the StatefulSet, `web`. @@ -133,7 +133,7 @@ web-1 1/1 Running 0 1m ``` -As mentioned in the [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) +As mentioned in the [StatefulSets](/docs/concepts/workloads/controllers/statefulset/) concept, the Pods in a StatefulSet have a sticky, unique identity. This identity is based on a unique ordinal index that is assigned to each Pod by the StatefulSet controller. The Pods' names take the form diff --git a/docs/tutorials/stateful-application/cassandra.md b/docs/tutorials/stateful-application/cassandra.md index 1b729dfd42816..48e8b3202fb3a 100644 --- a/docs/tutorials/stateful-application/cassandra.md +++ b/docs/tutorials/stateful-application/cassandra.md @@ -45,7 +45,7 @@ To complete this tutorial, you should already have a basic familiarity with [Pod ### Additional Minikube Setup Instructions -**Caution:** [Minikube](/docs/getting-started-guides/minikube/) defaults to 1024MB of memory and 1 CPU which results in an insufficient resource errors during this tutorial. +**Caution:** [Minikube](/docs/home/minikube/) defaults to 1024MB of memory and 1 CPU which results in insufficient resource errors during this tutorial.
{: .caution} To avoid these errors, run minikube with: diff --git a/docs/tutorials/stateful-application/zookeeper.md b/docs/tutorials/stateful-application/zookeeper.md index 9ad45caef903f..64cff77a855f1 100644 --- a/docs/tutorials/stateful-application/zookeeper.md +++ b/docs/tutorials/stateful-application/zookeeper.md @@ -12,9 +12,9 @@ title: Running ZooKeeper, A CP Distributed System {% capture overview %} This tutorial demonstrates [Apache Zookeeper](https://zookeeper.apache.org) on -Kubernetes using [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/), +Kubernetes using [StatefulSets](/docs/concepts/workloads/controllers/statefulset/), [PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#specifying-a-poddisruptionbudget), -and [PodAntiAffinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature). +and [PodAntiAffinity](/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature). {% endcapture %} {% capture prerequisites %} @@ -28,9 +28,9 @@ Kubernetes concepts. * [PersistentVolumes](/docs/concepts/storage/volumes/) * [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/persistent-volume-provisioning/) * [ConfigMaps](/docs/tasks/configure-pod-container/configmap/) -* [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) +* [StatefulSets](/docs/concepts/workloads/controllers/statefulset/) * [PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#specifying-a-poddisruptionbudget) -* [PodAntiAffinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature) +* [PodAntiAffinity](/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature) * [kubectl CLI](/docs/user-guide/kubectl) You will require a cluster with at least four nodes, and each node will require @@ -92,7 +92,7 @@ The manifest below contains a [Headless Service](/docs/concepts/services-networking/service/#headless-services), a [ConfigMap](/docs/tasks/configure-pod-container/configmap/), a [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions//#specifying-a-poddisruptionbudget), -and a [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/). +and a [StatefulSet](/docs/concepts/workloads/controllers/statefulset/). {% include code.html language="yaml" file="zookeeper.yaml" ghlink="/docs/tutorials/stateful-application/zookeeper.yaml" %} diff --git a/docs/tutorials/stateless-application/hello-minikube.md b/docs/tutorials/stateless-application/hello-minikube.md index 0e4e21fc55199..9268b700b5c2d 100644 --- a/docs/tutorials/stateless-application/hello-minikube.md +++ b/docs/tutorials/stateless-application/hello-minikube.md @@ -7,7 +7,7 @@ title: Hello Minikube The goal of this tutorial is for you to turn a simple Hello World Node.js app into an application running on Kubernetes. The tutorial shows you how to take code that you have developed on your machine, turn it into a Docker -container image and then run that image on [Minikube](/docs/getting-started-guides/minikube). +container image and then run that image on [Minikube](/docs/home/minikube). Minikube provides a simple way of running Kubernetes on your local machine for free. {% endcapture %} @@ -45,7 +45,7 @@ create a local cluster. This tutorial also assumes you are using on OS X.
If you are on a different platform like Linux, or using VirtualBox instead of Docker for Mac, the instructions to install Minikube may be slightly different. For general Minikube installation instructions, see -the [Minikube installation guide](/docs/getting-started-guides/minikube/). +the [Minikube installation guide](/docs/home/minikube/). Use `curl` to download and install the latest Minikube release: diff --git a/docs/user-guide/docker-cli-to-kubectl.md b/docs/user-guide/docker-cli-to-kubectl.md index 2f4b4b7948303..0ef2f42878258 100644 --- a/docs/user-guide/docker-cli-to-kubectl.md +++ b/docs/user-guide/docker-cli-to-kubectl.md @@ -43,7 +43,7 @@ $ kubectl expose deployment nginx-app --port=80 --name=nginx-http service "nginx-http" exposed ``` -With kubectl, we create a [Deployment](/docs/concepts/workloads/controllers/deployment/) which will make sure that N pods are running nginx (where N is the number of replicas stated in the spec, which defaults to 1). We also create a [service](/docs/user-guide/services) with a selector that matches the Deployment's selector. See the [Quick start](/docs/user-guide/quick-start) for more information. +With kubectl, we create a [Deployment](/docs/concepts/workloads/controllers/deployment/) which will make sure that N pods are running nginx (where N is the number of replicas stated in the spec, which defaults to 1). We also create a [service](/docs/concepts/services-networking/service/) with a selector that matches the Deployment's selector. See the [Quick start](/docs/user-guide/quick-start) for more information. By default images are run in the background, similar to `docker run -d ...`, if you want to run things in the foreground, use: diff --git a/docs/user-guide/update-demo/index.md.orig b/docs/user-guide/update-demo/index.md.orig index c6fbc3bf8c634..bfb600686ef42 100644 --- a/docs/user-guide/update-demo/index.md.orig +++ b/docs/user-guide/update-demo/index.md.orig @@ -11,7 +11,7 @@ here](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch} ### Step Zero: Prerequisites -This example assumes that you have forked the docs repository and [turned up a Kubernetes cluster](/docs/getting-started-guides/): +This example assumes that you have forked the docs repository and [turned up a Kubernetes cluster](/docs/home/): ```shell $ git clone -b {{page.docsbranch}} https://github.com/kubernetes/kubernetes.github.io diff --git a/docs/user-guide/walkthrough/k8s201.md b/docs/user-guide/walkthrough/k8s201.md index f5f42d7120473..b9d659c05f9a5 100644 --- a/docs/user-guide/walkthrough/k8s201.md +++ b/docs/user-guide/walkthrough/k8s201.md @@ -46,7 +46,7 @@ List all Pods with the label `app=nginx`: kubectl get pods -l app=nginx ``` -For more information, see [Labels](/docs/user-guide/labels/). +For more information, see [Labels](/docs/concepts/overview/working-with-objects/labels/). They are a core concept used by two additional Kubernetes building blocks: Deployments and Services. 
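To recap the Deployment/Service/label relationship covered by the docker-cli-to-kubectl and k8s201 hunks above, a short sketch; the `run=nginx-app` label is an assumption about what `kubectl run` applies in this release, not text from the patch:

```shell
# Deployment pods carry a label; the Service's selector matches that label.
kubectl run nginx-app --image=nginx --port=80                      # creates a Deployment
kubectl expose deployment nginx-app --port=80 --name=nginx-http   # Service selects its pods
kubectl get pods -l run=nginx-app                                  # list pods by that label
```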
From f844a415020f537472f09e7859ca7362ddfe3aa3 Mon Sep 17 00:00:00 2001 From: pao Date: Mon, 18 Sep 2017 14:06:22 +0800 Subject: [PATCH 45/87] Update images.md --- docs/concepts/containers/images.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/containers/images.md b/docs/concepts/containers/images.md index 6d06187dbb972..6519f42a2e282 100644 --- a/docs/concepts/containers/images.md +++ b/docs/concepts/containers/images.md @@ -25,7 +25,7 @@ you can do one of the following: - set the `imagePullPolicy` of the container to `Always`; - use `:latest` as the tag for the image to use; -- enable the [AllwaysPullImages](/docs/admin/admission-controllers/#alwayspullimages) admission controller. +- enable the [AlwaysPullImages](/docs/admin/admission-controllers/#alwayspullimages) admission controller. If you did not specify tag of your image, it will be assumed as `:latest`, with pull image policy of `Always` correspondingly. From 24e53830712b95882af250f9c75e6873e2eb2c98 Mon Sep 17 00:00:00 2001 From: chenhuan12 Date: Fri, 22 Sep 2017 17:18:33 +0800 Subject: [PATCH 46/87] fix typo fix typo --- docs/concepts/workloads/controllers/petset.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/workloads/controllers/petset.md b/docs/concepts/workloads/controllers/petset.md index f9089024a6376..20cf8457585e8 100644 --- a/docs/concepts/workloads/controllers/petset.md +++ b/docs/concepts/workloads/controllers/petset.md @@ -268,7 +268,7 @@ web-0 1/1 Running 0 30s web-1 1/1 Running 0 36s $ kubectl patch petset web -p '{"spec":{"replicas":3}}' -"web" patched +petset "web" patched $ kubectl get po NAME READY STATUS RESTARTS AGE From 8db51e099f84b61d2dd0aeb405eb7e644c167187 Mon Sep 17 00:00:00 2001 From: Slava Semushin Date: Fri, 22 Sep 2017 16:48:54 +0200 Subject: [PATCH 47/87] pod-security-policy.md: fix broken link to PSP proposal. --- docs/concepts/policy/pod-security-policy.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/policy/pod-security-policy.md b/docs/concepts/policy/pod-security-policy.md index 69c59d71d2e2b..4d85b7f1f360f 100644 --- a/docs/concepts/policy/pod-security-policy.md +++ b/docs/concepts/policy/pod-security-policy.md @@ -8,7 +8,7 @@ Objects of type `PodSecurityPolicy` govern the ability to make requests on a pod that affect the `SecurityContext` that will be applied to a pod and container. -See [PodSecurityPolicy proposal](https://git.k8s.io/community/contributors/design-proposals/auth/security-context-constraints.md) for more information. +See [PodSecurityPolicy proposal](https://git.k8s.io/community/contributors/design-proposals/auth/pod-security-policy.md) for more information. * TOC {:toc} From 657734de6f9817d03b10e5427223ec04dbbceeb7 Mon Sep 17 00:00:00 2001 From: Lion-Wei Date: Tue, 26 Sep 2017 04:07:34 +0800 Subject: [PATCH 48/87] add the set of sessionAffinity timeoutseconds (#5474) * add the set of sessionAffinity timeoutseconds * Update service.md --- docs/concepts/services-networking/service.md | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/docs/concepts/services-networking/service.md b/docs/concepts/services-networking/service.md index 1d348bd60cdb7..a081ffd25035c 100644 --- a/docs/concepts/services-networking/service.md +++ b/docs/concepts/services-networking/service.md @@ -176,7 +176,9 @@ or `Services` or `Pods`. By default, the choice of backend is round robin. 
Client-IP based session affinity can be selected by setting `service.spec.sessionAffinity` to `"ClientIP"` (the -default is `"None"`). +default is `"None"`), and you can set the max session sticky time by setting the field +`service.spec.sessionAffinityConfig.clientIP.timeoutSeconds` if you have already set +`service.spec.sessionAffinity` to `"ClientIP"` (the default value of `timeoutSeconds` is 10800, i.e. 3 hours). ![Services overview diagram for userspace proxy](/images/docs/services-userspace-overview.svg) @@ -191,7 +193,9 @@ select a backend `Pod`. By default, the choice of backend is random. Client-IP based session affinity can be selected by setting `service.spec.sessionAffinity` to `"ClientIP"` (the -default is `"None"`). +default is `"None"`), and you can set the max session sticky time by setting the field +`service.spec.sessionAffinityConfig.clientIP.timeoutSeconds` if you have already set +`service.spec.sessionAffinity` to `"ClientIP"` (the default value of `timeoutSeconds` is 10800, i.e. 3 hours). As with the userspace proxy, the net result is that any traffic bound for the `Service`'s IP:Port is proxied to an appropriate backend without the clients From 2872def62b0a3065fa1d86c58ce3afeef0cd57cc Mon Sep 17 00:00:00 2001 From: linzhaoming Date: Sat, 23 Sep 2017 16:22:06 +0800 Subject: [PATCH 49/87] Fix the doc example --- cn/docs/tutorials/stateful-application/basic-stateful-set.md | 2 +- docs/tutorials/stateful-application/basic-stateful-set.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/cn/docs/tutorials/stateful-application/basic-stateful-set.md b/cn/docs/tutorials/stateful-application/basic-stateful-set.md index 3c45196561118..7290344cf329c 100644 --- a/cn/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/cn/docs/tutorials/stateful-application/basic-stateful-set.md @@ -532,7 +532,7 @@ web-2 gcr.io/google_containers/nginx-slim:0.7 Patch `web` StatefulSet 来执行 `RollingUpdate` 更新策略。 ```shell -kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}} +kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}' statefulset "web" patched ``` diff --git a/docs/tutorials/stateful-application/basic-stateful-set.md b/docs/tutorials/stateful-application/basic-stateful-set.md index 2dc65eabe6971..c5287cc5aa251 100644 --- a/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/docs/tutorials/stateful-application/basic-stateful-set.md @@ -545,7 +545,7 @@ reverse ordinal order, while respecting the StatefulSet guarantees. Patch the `web` StatefulSet to apply the `RollingUpdate` update strategy. ```shell -kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}} +kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}' statefulset "web" patched ``` From 7792c801e3d729f8d018208ef792305f47830876 Mon Sep 17 00:00:00 2001 From: Steve Perry Date: Mon, 25 Sep 2017 13:43:15 -0700 Subject: [PATCH 50/87] Revert "Update links to avoid redirects." (#5617) * Revert "Fix the doc example" This reverts commit 2872def62b0a3065fa1d86c58ce3afeef0cd57cc. * Revert "add the set of sessionAffinity timeoutseconds (#5474)" This reverts commit 657734de6f9817d03b10e5427223ec04dbbceeb7. * Revert "pod-security-policy.md: fix broken link to PSP proposal." This reverts commit 8db51e099f84b61d2dd0aeb405eb7e644c167187. * Revert "fix typo" This reverts commit 24e53830712b95882af250f9c75e6873e2eb2c98. * Revert "Update images.md" This reverts commit f844a415020f537472f09e7859ca7362ddfe3aa3. * Revert "Update links to avoid redirects. 
(#5614)" This reverts commit 35c7393849fb46505ee5a7493a8e7239b4fd0e6f. --- docs/admin/authorization/rbac.md | 2 +- docs/admin/federation/index.md | 4 +- docs/admin/high-availability/index.md | 2 +- .../apps/v1beta1/definitions.html | 2 +- docs/api-reference/batch/v1/definitions.html | 2 +- .../extensions/v1beta1/definitions.html | 2 +- docs/api-reference/v1.5/index.html | 18 +-- docs/api-reference/v1.6/index.html | 16 +-- docs/api-reference/v1.7/index.html | 2 +- docs/concepts/architecture/nodes.md | 2 +- .../cluster-administration-overview.md | 2 +- .../kubelet-garbage-collection.md | 2 +- .../manage-deployment.md | 2 +- .../concepts/configuration/assign-pod-node.md | 2 +- .../manage-compute-resources-container.md | 2 +- docs/concepts/configuration/overview.md | 4 +- .../container-environment-variables.md | 2 +- docs/concepts/overview/what-is-kubernetes.md | 2 +- .../working-with-objects/annotations.md | 2 +- .../overview/working-with-objects/labels.md | 4 +- .../working-with-objects/namespaces.md | 2 +- .../services-networking/network-policies.md | 2 +- docs/concepts/storage/persistent-volumes.md | 2 +- docs/concepts/storage/volumes.md | 4 +- docs/concepts/workloads/controllers/petset.md | 4 +- .../workloads/controllers/replicaset.md | 2 +- .../controllers/replicationcontroller.md | 4 +- docs/concepts/workloads/pods/disruptions.md | 2 +- .../workloads/pods/init-containers.md | 2 +- docs/concepts/workloads/pods/pod-overview.md | 2 +- docs/concepts/workloads/pods/pod.md | 2 +- docs/getting-started-guides/aws.md | 4 +- docs/getting-started-guides/binary_release.md | 2 +- .../centos/centos_manual_config.md | 6 +- docs/getting-started-guides/cloudstack.md | 4 +- .../coreos/bare_metal_offline.md | 6 +- docs/getting-started-guides/coreos/index.md | 6 +- docs/getting-started-guides/dcos.md | 4 +- .../fedora/fedora_ansible_config.md | 4 +- .../fedora/fedora_manual_config.md | 4 +- .../fedora/flannel_multi_node_cluster.md | 10 +- docs/getting-started-guides/gce.md | 8 +- docs/getting-started-guides/libvirt-coreos.md | 4 +- docs/getting-started-guides/mesos-docker.md | 4 +- docs/getting-started-guides/mesos/index.md | 4 +- docs/getting-started-guides/openstack-heat.md | 4 +- docs/getting-started-guides/ovirt.md | 4 +- .../photon-controller.md | 4 +- docs/getting-started-guides/rkt/index.md | 4 +- docs/getting-started-guides/scratch.md | 16 +-- docs/getting-started-guides/stackpoint.md | 6 +- docs/getting-started-guides/ubuntu/index.md | 30 ++--- .../ubuntu/installation.md | 18 +-- .../ubuntu/operational-considerations.md | 2 +- .../getting-started-guides/ubuntu/upgrades.md | 6 +- docs/getting-started-guides/vsphere.md | 4 +- docs/home/index.md | 2 +- .../extensions/v1beta1/definitions.html | 2 +- docs/resources-reference/v1.5/index.html | 18 +-- docs/resources-reference/v1.6/index.html | 16 +-- docs/resources-reference/v1.7/index.html | 2 +- docs/setup/independent/install-kubeadm.md | 2 +- docs/setup/pick-right-solution.md | 122 +++++++++--------- .../access-cluster.md | 6 +- .../web-ui-dashboard.md | 4 +- .../administer-cluster/access-cluster-api.md | 2 +- .../access-cluster-services.md | 4 +- .../calico-network-policy.md | 4 +- .../cilium-network-policy.md | 2 +- .../administer-cluster/cluster-management.md | 2 +- .../kube-router-network-policy.md | 2 +- .../namespaces-walkthrough.md | 2 +- docs/tasks/administer-cluster/namespaces.md | 2 +- .../administer-cluster/out-of-resource.md | 2 +- .../romana-network-policy.md | 4 +- .../weave-network-policy.md | 4 +- 
docs/tasks/administer-federation/events.md | 2 +- docs/tasks/administer-federation/ingress.md | 2 +- .../tasks/administer-federation/replicaset.md | 2 +- docs/tasks/administer-federation/secret.md | 2 +- .../assign-pods-nodes.md | 2 +- .../configure-persistent-volume-storage.md | 2 +- .../debug-application-introspection.md | 2 +- .../debug-stateful-set.md | 2 +- .../resource-usage-monitoring.md | 2 +- .../federation-service-discovery.md | 2 +- .../set-up-cluster-federation-kubefed.md | 4 +- .../set-up-coredns-provider-federation.md | 2 +- .../set-up-placement-policies-federation.md | 2 +- .../job/parallel-processing-expansion.md | 2 +- docs/tasks/manage-daemon/update-daemon-set.md | 2 +- .../horizontal-pod-autoscale-walkthrough.md | 2 +- .../run-replicated-stateful-application.md | 6 +- docs/tasks/tools/install-kubectl.md | 2 +- docs/tasks/tools/install-minikube.md | 2 +- docs/tools/index.md | 4 +- .../basic-stateful-set.md | 8 +- .../stateful-application/cassandra.md | 2 +- .../stateful-application/zookeeper.md | 10 +- .../stateless-application/hello-minikube.md | 4 +- docs/user-guide/docker-cli-to-kubectl.md | 2 +- docs/user-guide/update-demo/index.md.orig | 2 +- docs/user-guide/walkthrough/k8s201.md | 2 +- 103 files changed, 277 insertions(+), 277 deletions(-) diff --git a/docs/admin/authorization/rbac.md b/docs/admin/authorization/rbac.md index 5ec06ebef321b..a5bc8e1cfe933 100644 --- a/docs/admin/authorization/rbac.md +++ b/docs/admin/authorization/rbac.md @@ -504,7 +504,7 @@ This is commonly used by add-on API servers for unified authentication and autho system:kube-dns kube-dns service account in the kube-system namespace -Role for the kube-dns component. +Role for the kube-dns component. system:node-bootstrapper diff --git a/docs/admin/federation/index.md b/docs/admin/federation/index.md index 3dd005933d7be..ecdcca87d974b 100644 --- a/docs/admin/federation/index.md +++ b/docs/admin/federation/index.md @@ -17,10 +17,10 @@ This guide explains how to set up cluster federation that lets us control multip ## Prerequisites This guide assumes that you have a running Kubernetes cluster. -If you need to start a new cluster, see the [getting started guides](/docs/home/) for instructions on bringing a cluster up. +If you need to start a new cluster, see the [getting started guides](/docs/getting-started-guides/) for instructions on bringing a cluster up. To use the commands in this guide, you must download a Kubernetes release from the -[getting started binary releases](/docs/home/binary_release/) and +[getting started binary releases](/docs/getting-started-guides/binary_release/) and extract into a directory; all the commands in this guide are run from that directory. diff --git a/docs/admin/high-availability/index.md b/docs/admin/high-availability/index.md index bb5fee0f21432..298598d851470 100644 --- a/docs/admin/high-availability/index.md +++ b/docs/admin/high-availability/index.md @@ -6,7 +6,7 @@ title: Building High-Availability Clusters This document describes how to build a high-availability (HA) Kubernetes cluster. This is a fairly advanced topic. Users who merely want to experiment with Kubernetes are encouraged to use configurations that are simpler to set up such -as [Minikube](/docs/home/minikube/) +as [Minikube](/docs/getting-started-guides/minikube/) or try [Google Container Engine](https://cloud.google.com/container-engine/) for hosted Kubernetes. Also, at this time high availability support for Kubernetes is not continuously tested in our end-to-end (e2e) testing. 
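As a reference for the `sessionAffinity` fields documented in the service.md patch earlier in this series, a hedged sketch; `my-service` is a placeholder Service name, and the JSON mirrors only the fields named in that patch:

```shell
# Pin each client IP to one backend for 3600 seconds instead of the
# 10800-second default.
kubectl patch svc my-service -p '{"spec":{"sessionAffinity":"ClientIP","sessionAffinityConfig":{"clientIP":{"timeoutSeconds":3600}}}}'
```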
We will diff --git a/docs/api-reference/apps/v1beta1/definitions.html b/docs/api-reference/apps/v1beta1/definitions.html index c743e6d6f7d30..2f15ecf9070e7 100755 --- a/docs/api-reference/apps/v1beta1/definitions.html +++ b/docs/api-reference/apps/v1beta1/definitions.html @@ -3620,7 +3620,7 @@

    v1.PodSpec

    nodeSelector

    -

    NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/concepts/configuration/assign-pod-node/

    +

    NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/user-guide/node-selection

    false

    object

    diff --git a/docs/api-reference/batch/v1/definitions.html b/docs/api-reference/batch/v1/definitions.html index 26d22a60365df..50f6f28e449bb 100755 --- a/docs/api-reference/batch/v1/definitions.html +++ b/docs/api-reference/batch/v1/definitions.html @@ -3609,7 +3609,7 @@

    v1.PodSpec

    nodeSelector

    -

    NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/concepts/configuration/assign-pod-node/

    +

    NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/user-guide/node-selection

    false

    object

    diff --git a/docs/api-reference/extensions/v1beta1/definitions.html b/docs/api-reference/extensions/v1beta1/definitions.html index c8da39dde1bb6..262b7aed95ca1 100755 --- a/docs/api-reference/extensions/v1beta1/definitions.html +++ b/docs/api-reference/extensions/v1beta1/definitions.html @@ -3457,7 +3457,7 @@

    v1.PodSpec

    nodeSelector

    -

    NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/concepts/configuration/assign-pod-node/

    +

    NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node’s labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/user-guide/node-selection

    false

    object

    diff --git a/docs/api-reference/v1.5/index.html b/docs/api-reference/v1.5/index.html index fe6040766a59a..71b333af828c7 100644 --- a/docs/api-reference/v1.5/index.html +++ b/docs/api-reference/v1.5/index.html @@ -8010,7 +8010,7 @@

    PodSpec v1

    nodeSelector
    object -NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/concepts/configuration/assign-pod-node/ +NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/user-guide/node-selection restartPolicy
    string @@ -18058,7 +18058,7 @@

    ServiceSpec v1

    clusterIP
    string -clusterIP is the IP address of the service and is usually assigned randomly by the master. If an address is specified manually and is not in use by others, it will be allocated to the service; otherwise, creation of the service will fail. This field can not be changed through updates. Valid values are "None", empty string (""), or a valid IP address. "None" can be specified for headless services when proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies +clusterIP is the IP address of the service and is usually assigned randomly by the master. If an address is specified manually and is not in use by others, it will be allocated to the service; otherwise, creation of the service will fail. This field can not be changed through updates. Valid values are "None", empty string (""), or a valid IP address. "None" can be specified for headless services when proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies deprecatedPublicIPs
    string array @@ -18078,23 +18078,23 @@

    ServiceSpec v1

    loadBalancerSourceRanges
    string array -If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: http://kubernetes.io/docs/concepts/services-networking/service/-firewalls +If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: http://kubernetes.io/docs/user-guide/services-firewalls ports
    ServicePort array -The list of ports that are exposed by this service. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies +The list of ports that are exposed by this service. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies selector
    object -Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/concepts/services-networking/service/#overview +Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/user-guide/services#overview sessionAffinity
    string -Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies +Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies type
    string -type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ExternalName" maps to the specified externalName. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a stable IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the clusterIP. More info: http://kubernetes.io/docs/concepts/services-networking/service/#overview +type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ExternalName" maps to the specified externalName. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a stable IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the clusterIP. More info: http://kubernetes.io/docs/user-guide/services#overview @@ -51143,7 +51143,7 @@

    ServicePort v1

    nodePort
    integer -The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info: http://kubernetes.io/docs/concepts/services-networking/service/#type--nodeport +The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info: http://kubernetes.io/docs/user-guide/services#type--nodeport port
    integer @@ -51155,7 +51155,7 @@

    ServicePort v1

    targetPort
    IntOrString -Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: http://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service +Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: http://kubernetes.io/docs/user-guide/services#defining-a-service diff --git a/docs/api-reference/v1.6/index.html b/docs/api-reference/v1.6/index.html index 9db59ca2ed4e0..64322a85620c0 100644 --- a/docs/api-reference/v1.6/index.html +++ b/docs/api-reference/v1.6/index.html @@ -17950,7 +17950,7 @@

    ServiceSpec v1 core

    clusterIP
    string -clusterIP is the IP address of the service and is usually assigned randomly by the master. If an address is specified manually and is not in use by others, it will be allocated to the service; otherwise, creation of the service will fail. This field can not be changed through updates. Valid values are "None", empty string (""), or a valid IP address. "None" can be specified for headless services when proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies +clusterIP is the IP address of the service and is usually assigned randomly by the master. If an address is specified manually and is not in use by others, it will be allocated to the service; otherwise, creation of the service will fail. This field can not be changed through updates. Valid values are "None", empty string (""), or a valid IP address. "None" can be specified for headless services when proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies deprecatedPublicIPs
    string array @@ -17970,23 +17970,23 @@

    ServiceSpec v1 core

    loadBalancerSourceRanges
    string array -If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: http://kubernetes.io/docs/concepts/services-networking/service/-firewalls +If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: http://kubernetes.io/docs/user-guide/services-firewalls ports
    ServicePort array -The list of ports that are exposed by this service. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies +The list of ports that are exposed by this service. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies selector
    object -Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/concepts/services-networking/service/#overview +Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/user-guide/services#overview sessionAffinity
    string -Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies +Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies type
    string -type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ExternalName" maps to the specified externalName. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a stable IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the clusterIP. More info: http://kubernetes.io/docs/concepts/services-networking/service/#overview +type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ExternalName" maps to the specified externalName. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a stable IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the clusterIP. More info: http://kubernetes.io/docs/user-guide/services#overview @@ -54388,7 +54388,7 @@

    ServicePort v1 core

    nodePort
    integer -The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info: http://kubernetes.io/docs/concepts/services-networking/service/#type--nodeport +The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info: http://kubernetes.io/docs/user-guide/services#type--nodeport port
    integer @@ -54400,7 +54400,7 @@

    ServicePort v1 core

    targetPort -Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: http://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service +Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: http://kubernetes.io/docs/user-guide/services#defining-a-service diff --git a/docs/api-reference/v1.7/index.html b/docs/api-reference/v1.7/index.html index 80c6572f06cab..575999d8cdf2d 100644 --- a/docs/api-reference/v1.7/index.html +++ b/docs/api-reference/v1.7/index.html @@ -191,7 +191,7 @@
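A short sketch of how the three ServicePort fields above (nodePort, port, targetPort) relate may help; the Service name and the named container port `http` are assumptions for illustration, and `http` would have to exist as a named containerPort on the selected pods:

```yaml
# Illustrative NodePort Service tying the three ServicePort fields together.
apiVersion: v1
kind: Service
metadata:
  name: web                   # hypothetical
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80                  # the Service's own port
    targetPort: http          # looked up as a named port on the target pods
    nodePort: 30080           # opened on every node; normally auto-allocated
```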

    Container v1 core

    securityContext
    SecurityContext -Security options the pod should run with. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ More info: https://git.k8s.io/community/contributors/design-proposals/auth/security_context.md +Security options the pod should run with. More info: https://kubernetes.io/docs/concepts/policy/security-context/ More info: https://git.k8s.io/community/contributors/design-proposals/auth/security_context.md stdin
    boolean diff --git a/docs/concepts/architecture/nodes.md b/docs/concepts/architecture/nodes.md index b178a7c97ed2c..aa75f6e08e5eb 100644 --- a/docs/concepts/architecture/nodes.md +++ b/docs/concepts/architecture/nodes.md @@ -81,7 +81,7 @@ The information is gathered by Kubelet from the node. ## Management -Unlike [pods](/docs/user-guide/pods) and [services](/docs/concepts/services-networking/service/), +Unlike [pods](/docs/user-guide/pods) and [services](/docs/user-guide/services), a node is not inherently created by Kubernetes: it is created externally by cloud providers like Google Compute Engine, or exists in your pool of physical or virtual machines. What this means is that when Kubernetes creates a node, it is really diff --git a/docs/concepts/cluster-administration/cluster-administration-overview.md b/docs/concepts/cluster-administration/cluster-administration-overview.md index ec41426044604..97c07725e361f 100644 --- a/docs/concepts/cluster-administration/cluster-administration-overview.md +++ b/docs/concepts/cluster-administration/cluster-administration-overview.md @@ -21,7 +21,7 @@ Before choosing a guide, here are some considerations: - **If you are designing for high-availability**, learn about configuring [clusters in multiple zones](/docs/admin/multi-cluster/). - Will you be using **a hosted Kubernetes cluster**, such as [Google Container Engine (GKE)](https://cloud.google.com/container-engine/), or **hosting your own cluster**? - Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly support hybrid clusters. Instead, you can set up multiple clusters. - - **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/concepts/cluster-administration/networking/) fits best. One option for custom networking is [*OpenVSwitch GRE/VxLAN networking*](/docs/admin/ovs-networking/), which uses OpenVSwitch to set up networking between pods across Kubernetes nodes. + - **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/admin/networking/) fits best. One option for custom networking is [*OpenVSwitch GRE/VxLAN networking*](/docs/admin/ovs-networking/), which uses OpenVSwitch to set up networking between pods across Kubernetes nodes. - Will you be running Kubernetes on **"bare metal" hardware** or on **virtual machines (VMs)**? - Do you **just want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? If the latter, choose an actively-developed distro. Some distros only use binary releases, but diff --git a/docs/concepts/cluster-administration/kubelet-garbage-collection.md b/docs/concepts/cluster-administration/kubelet-garbage-collection.md index 068ee6bd2ab0c..0a1036cd69ca1 100644 --- a/docs/concepts/cluster-administration/kubelet-garbage-collection.md +++ b/docs/concepts/cluster-administration/kubelet-garbage-collection.md @@ -72,4 +72,4 @@ Including: | `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` | eviction generalizes disk thresholds to other resources | | `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | eviction generalizes disk pressure transition to other resources | -See [Configuring Out Of Resource Handling](/docs/tasks/administer-cluster/out-of-resource/) for more details. +See [Configuring Out Of Resource Handling](/docs/concepts/cluster-administration/out-of-resource/) for more details. 
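Since the nodes.md hunk above stresses that Kubernetes only *represents* a node rather than creating the machine, a rough sketch of what manual node registration looks like may help; the name and label are hypothetical, not from the patch:

```yaml
# Sketch of a manually registered Node object; the machine itself
# must already exist (cloud VM, bare metal, etc.).
apiVersion: v1
kind: Node
metadata:
  name: node-1                        # hypothetical machine name
  labels:
    kubernetes.io/hostname: node-1    # conventional hostname label
```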
diff --git a/docs/concepts/cluster-administration/manage-deployment.md b/docs/concepts/cluster-administration/manage-deployment.md index c89990c04421b..4a946071255b8 100644 --- a/docs/concepts/cluster-administration/manage-deployment.md +++ b/docs/concepts/cluster-administration/manage-deployment.md @@ -256,7 +256,7 @@ my-nginx-2035384211-u3t6x 1/1 Running 0 23m fe This outputs all "app=nginx" pods, with an additional label column of pods' tier (specified with `-L` or `--label-columns`). -For more information, please see [labels](/docs/concepts/overview/working-with-objects/labels/) and [kubectl label](/docs/user-guide/kubectl/{{page.version}}/#label) document. +For more information, please see the [labels](/docs/user-guide/labels/) and [kubectl label](/docs/user-guide/kubectl/{{page.version}}/#label) documents. ## Updating annotations diff --git a/docs/concepts/configuration/assign-pod-node.md b/docs/concepts/configuration/assign-pod-node.md index d626ba98c63c8..c3d71ce0c5df5 100644 --- a/docs/concepts/configuration/assign-pod-node.md +++ b/docs/concepts/configuration/assign-pod-node.md @@ -16,7 +16,7 @@ that a pod ends up on a machine with an SSD attached to it, or to co-locate pods services that communicate a lot into the same availability zone. You can find all the files for these examples [in our docs -repo here](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/concepts/configuration/assign-pod-node/). +repo here](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/user-guide/node-selection). * TOC {:toc} diff --git a/docs/concepts/configuration/manage-compute-resources-container.md b/docs/concepts/configuration/manage-compute-resources-container.md index d6749314078e9..fa8d93cc5bd92 100644 --- a/docs/concepts/configuration/manage-compute-resources-container.md +++ b/docs/concepts/configuration/manage-compute-resources-container.md @@ -27,7 +27,7 @@ CPU and memory are collectively referred to as *compute resources*, or just resources are measurable quantities that can be requested, allocated, and consumed. They are distinct from [API resources](/docs/concepts/overview/kubernetes-api/). API resources, such as Pods and -[Services](/docs/concepts/services-networking/service/) are objects that can be read and modified +[Services](/docs/user-guide/services), are objects that can be read and modified through the Kubernetes API server. ## Resource requests and limits of Pod and Container diff --git a/docs/concepts/configuration/overview.md b/docs/concepts/configuration/overview.md index 61149cd3cad93..c354a2a6df100 100644 --- a/docs/concepts/configuration/overview.md +++ b/docs/concepts/configuration/overview.md @@ -58,7 +58,7 @@ This is a living document. If you think of something that is not on this list bu ## Using Labels -- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or deployment. For example, instead of attaching a label to a set of pods to explicitly represent some service (For example, `service: myservice`), or explicitly representing the replication controller managing the pods (for example, `controller: mycontroller`), attach labels that identify semantic attributes, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. This will let you select the object groups appropriate to the context— for example, a service for all "tier: frontend" pods, or all "test" phase components of app "myapp". 
See the [guestbook](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/) app for an example of this approach. +- Define and use [labels](/docs/user-guide/labels/) that identify __semantic attributes__ of your application or deployment. For example, instead of attaching a label to a set of pods to explicitly represent some service (For example, `service: myservice`), or explicitly representing the replication controller managing the pods (for example, `controller: mycontroller`), attach labels that identify semantic attributes, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. This will let you select the object groups appropriate to the context— for example, a service for all "tier: frontend" pods, or all "test" phase components of app "myapp". See the [guestbook](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/) app for an example of this approach. A service can be made to span multiple deployments, such as is done across [rolling updates](/docs/tasks/run-application/rolling-update-replication-controller/), by simply omitting release-specific labels from its selector, rather than updating a service's selector to match the replication controller's selector fully. @@ -84,7 +84,7 @@ This is a living document. If you think of something that is not on this list bu - Use `kubectl delete` rather than `stop`. `Delete` has a superset of the functionality of `stop`, and `stop` is deprecated. -- Use kubectl bulk operations (via files and/or labels) for get and delete. See [label selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) and [using labels effectively](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively). +- Use kubectl bulk operations (via files and/or labels) for get and delete. See [label selectors](/docs/user-guide/labels/#label-selectors) and [using labels effectively](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively). - Use `kubectl run` and `expose` to quickly create and expose single container Deployments. See the [quick start guide](/docs/user-guide/quick-start/) for an example. diff --git a/docs/concepts/containers/container-environment-variables.md b/docs/concepts/containers/container-environment-variables.md index 513b09cb46f22..d5d0975cb7669 100644 --- a/docs/concepts/containers/container-environment-variables.md +++ b/docs/concepts/containers/container-environment-variables.md @@ -31,7 +31,7 @@ It is available through the `hostname` command or the function call in libc. The Pod name and namespace are available as environment variables through the -[downward API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/). +[downward API](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/). User defined environment variables from the Pod definition are also available to the Container, as are any environment variables specified statically in the Docker image. diff --git a/docs/concepts/overview/what-is-kubernetes.md b/docs/concepts/overview/what-is-kubernetes.md index 4475b323ada93..0596b10b266a4 100644 --- a/docs/concepts/overview/what-is-kubernetes.md +++ b/docs/concepts/overview/what-is-kubernetes.md @@ -121,7 +121,7 @@ The name **Kubernetes** originates from Greek, meaning *helmsman* or *pilot*, an {% endcapture %} {% capture whatsnext %} -* Ready to [Get Started](/docs/home/)? +* Ready to [Get Started](/docs/getting-started-guides/)? 
* For more details, see the [Kubernetes Documentation](/docs/home/). {% endcapture %} {% include templates/concept.md %} diff --git a/docs/concepts/overview/working-with-objects/annotations.md b/docs/concepts/overview/working-with-objects/annotations.md index e0b844325328c..2bb89e17e5a50 100644 --- a/docs/concepts/overview/working-with-objects/annotations.md +++ b/docs/concepts/overview/working-with-objects/annotations.md @@ -55,7 +55,7 @@ and the like. {% endcapture %} {% capture whatsnext %} -Learn more about [Labels and Selectors](/docs/concepts/overview/working-with-objects/labels/). +Learn more about [Labels and Selectors](/docs/user-guide/labels/). {% endcapture %} {% include templates/concept.md %} diff --git a/docs/concepts/overview/working-with-objects/labels.md b/docs/concepts/overview/working-with-objects/labels.md index 2407b910879af..a64512cd2d340 100644 --- a/docs/concepts/overview/working-with-objects/labels.md +++ b/docs/concepts/overview/working-with-objects/labels.md @@ -130,7 +130,7 @@ $ kubectl get pods -l 'environment,environment notin (frontend)' ### Set references in API objects -Some Kubernetes objects, such as [`services`](/docs/concepts/services-networking/service/) and [`replicationcontrollers`](/docs/user-guide/replication-controller), also use label selectors to specify sets of other resources, such as [pods](/docs/user-guide/pods). +Some Kubernetes objects, such as [`services`](/docs/user-guide/services) and [`replicationcontrollers`](/docs/user-guide/replication-controller), also use label selectors to specify sets of other resources, such as [pods](/docs/user-guide/pods). #### Service and ReplicationController @@ -170,4 +170,4 @@ selector: #### Selecting sets of nodes One use case for selecting over labels is to constrain the set of nodes onto which a pod can schedule. -See the documentation on [node selection](/docs/concepts/configuration/assign-pod-node/) for more information. +See the documentation on [node selection](/docs/user-guide/node-selection) for more information. diff --git a/docs/concepts/overview/working-with-objects/namespaces.md b/docs/concepts/overview/working-with-objects/namespaces.md index 5757399b8f156..aa06eff515e7b 100644 --- a/docs/concepts/overview/working-with-objects/namespaces.md +++ b/docs/concepts/overview/working-with-objects/namespaces.md @@ -72,7 +72,7 @@ $ kubectl config view | grep namespace: ## Namespaces and DNS -When you create a [Service](/docs/concepts/services-networking/service/), it creates a corresponding [DNS entry](/docs/admin/dns). +When you create a [Service](/docs/user-guide/services), it creates a corresponding [DNS entry](/docs/admin/dns). This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means that if a container just uses `<service-name>`, it will resolve to the service which is local to a namespace. This is useful for using the same configuration across diff --git a/docs/concepts/services-networking/network-policies.md b/docs/concepts/services-networking/network-policies.md index 0063c9270e051..22518d48765e7 100644 --- a/docs/concepts/services-networking/network-policies.md +++ b/docs/concepts/services-networking/network-policies.md @@ -77,7 +77,7 @@ So, the example NetworkPolicy: 2. allows connections to TCP port 6379 of "role=db" pods in the "default" namespace from any pod in the "default" namespace with the label "role=frontend" 3.
allows connections to TCP port 6379 of "role=db" pods in the "default" namespace from any pod in a namespace with the label "project=myproject" -See the [NetworkPolicy getting started guide](/docs/home/network-policy/walkthrough) for further examples. +See the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) for further examples. ## Default policies diff --git a/docs/concepts/storage/persistent-volumes.md b/docs/concepts/storage/persistent-volumes.md index bc53b91e4d7c0..19a17e0491462 100644 --- a/docs/concepts/storage/persistent-volumes.md +++ b/docs/concepts/storage/persistent-volumes.md @@ -315,7 +315,7 @@ Claims, like pods, can request specific quantities of a resource. In this case, ### Selector -Claims can specify a [label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors) to further filter the set of volumes. Only the volumes whose labels match the selector can be bound to the claim. The selector can consist of two fields: +Claims can specify a [label selector](/docs/user-guide/labels/#label-selectors) to further filter the set of volumes. Only the volumes whose labels match the selector can be bound to the claim. The selector can consist of two fields: * matchLabels - the volume must have a label with this value * matchExpressions - a list of requirements made by specifying key, list of values, and operator that relates the key and values. Valid operators include In, NotIn, Exists, and DoesNotExist. diff --git a/docs/concepts/storage/volumes.md b/docs/concepts/storage/volumes.md index 02c3b0063ab04..f6fce622b6ddf 100644 --- a/docs/concepts/storage/volumes.md +++ b/docs/concepts/storage/volumes.md @@ -454,7 +454,7 @@ details. A `downwardAPI` volume is used to make downward API data available to applications. It mounts a directory and writes the requested data in plain text files. -See the [`downwardAPI` volume example](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) for more details. +See the [`downwardAPI` volume example](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/) for more details. ### projected @@ -572,7 +572,7 @@ More details can be found [here](https://github.com/kubernetes/examples/tree/{{p ### vsphereVolume -**Prerequisite:** Kubernetes with vSphere Cloud Provider configured. For cloudprovider configuration please refer [vSphere getting started guide](/docs/home/vsphere/). +**Prerequisite:** Kubernetes with vSphere Cloud Provider configured. For cloud provider configuration, please refer to the [vSphere getting started guide](/docs/getting-started-guides/vsphere/). {: .note} A `vsphereVolume` is used to mount a vSphere VMDK Volume into your Pod. The contents diff --git a/docs/concepts/workloads/controllers/petset.md b/docs/concepts/workloads/controllers/petset.md index 20cf8457585e8..c884679821908 100644 --- a/docs/concepts/workloads/controllers/petset.md +++ b/docs/concepts/workloads/controllers/petset.md @@ -227,7 +227,7 @@ web-1 A pet can piece together its own identity: -1. Use the [downward api](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) to find its pod name +1. Use the [downward api](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/) to find its pod name 2. Run `hostname` to find its DNS name 3.
Run `mount` or `df` to find its volumes (usually this is unnecessary) @@ -434,7 +434,7 @@ Deploying one RC of size 1/Service per pod is a popular alternative, as is simpl ## Next steps -* Learn about [StatefulSet](/docs/concepts/workloads/controllers/statefulset/), +* Learn about [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/), the replacement for PetSet introduced in Kubernetes version 1.5. * [Migrate your existing PetSets to StatefulSets](/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set/) when upgrading to Kubernetes version 1.5 or higher. diff --git a/docs/concepts/workloads/controllers/replicaset.md b/docs/concepts/workloads/controllers/replicaset.md index dfe140601f29e..a9247f15aaba3 100644 --- a/docs/concepts/workloads/controllers/replicaset.md +++ b/docs/concepts/workloads/controllers/replicaset.md @@ -12,7 +12,7 @@ ReplicaSet is the next-generation Replication Controller. The only difference between a _ReplicaSet_ and a [_Replication Controller_](/docs/concepts/workloads/controllers/replicationcontroller/) right now is the selector support. ReplicaSet supports the new set-based selector requirements -as described in the [labels user guide](/docs/concepts/overview/working-with-objects/labels/#label-selectors) +as described in the [labels user guide](/docs/user-guide/labels/#label-selectors) whereas a Replication Controller only supports equality-based selector requirements. {% endcapture %} diff --git a/docs/concepts/workloads/controllers/replicationcontroller.md b/docs/concepts/workloads/controllers/replicationcontroller.md index 12a37bc4456a7..42f929317f34a 100644 --- a/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/docs/concepts/workloads/controllers/replicationcontroller.md @@ -129,7 +129,7 @@ different, and the `.metadata.labels` do not affect the behavior of the Replicat ### Pod Selector -The `.spec.selector` field is a [label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors). A ReplicationController +The `.spec.selector` field is a [label selector](/docs/user-guide/labels/#label-selectors). A ReplicationController manages all the pods with labels that match the selector. It does not distinguish between pods that it created or deleted and pods that another person or process created or deleted. This allows the ReplicationController to be replaced without affecting the running pods. @@ -243,7 +243,7 @@ object](/docs/api-reference/{{page.version}}/#replicationcontroller-v1-core). ### ReplicaSet -[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/concepts/overview/working-with-objects/labels/#set-based-requirement). +[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/user-guide/labels/#set-based-requirement). It’s mainly used by [`Deployment`](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod creation, deletion and updates. Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don’t require updates at all. 
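As a sketch of the set-based selector support that the replicaset.md hunk above contrasts with ReplicationController's equality-based selectors, a manifest might combine both requirement styles as follows; the names, labels, image, and replica count are illustrative assumptions:

```yaml
# Illustrative ReplicaSet mixing equality-based and set-based requirements.
apiVersion: extensions/v1beta1        # assumed API group for this doc's era
kind: ReplicaSet
metadata:
  name: frontend                      # hypothetical
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend                  # equality-based requirement
    matchExpressions:                 # set-based requirement
    - key: environment
      operator: In
      values: [production, staging]
  template:
    metadata:
      labels:
        tier: frontend
        environment: production       # satisfies both requirements
    spec:
      containers:
      - name: app
        image: nginx:1.13             # placeholder image
```

A ReplicationController can only express the `matchLabels` half of this; the `matchExpressions` block is what the ReplicaSet hunk above refers to as set-based selector requirements.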
diff --git a/docs/concepts/workloads/pods/disruptions.md b/docs/concepts/workloads/pods/disruptions.md index 80544d73fc779..89c324b5bc8fe 100644 --- a/docs/concepts/workloads/pods/disruptions.md +++ b/docs/concepts/workloads/pods/disruptions.md @@ -72,7 +72,7 @@ Here are some ways to mitigate involuntary disruptions: and [stateful](/docs/tasks/run-application/run-replicated-stateful-application/) applications.) - For even higher availability when running replicated applications, spread applications across racks (using -[anti-affinity](/docs/concepts/configuration/assign-pod-node//#inter-pod-affinity-and-anti-affinity-beta-feature)) +[anti-affinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature)) or across zones (if using a [multi-zone cluster](/docs/admin/multiple-zones).) diff --git a/docs/concepts/workloads/pods/init-containers.md b/docs/concepts/workloads/pods/init-containers.md index cb8bf04f162ab..e902b296c61b6 100644 --- a/docs/concepts/workloads/pods/init-containers.md +++ b/docs/concepts/workloads/pods/init-containers.md @@ -87,7 +87,7 @@ Here are some ideas for how to use Init Containers: place the POD_IP value in a configuration and generate the main app configuration file using Jinja. -More detailed usage examples can be found in the [StatefulSets documentation](/docs/concepts/workloads/controllers/statefulset/) +More detailed usage examples can be found in the [StatefulSets documentation](/docs/concepts/abstractions/controllers/statefulsets/) and the [Production Pods guide](/docs/tasks/#handling-initialization). ### Init Containers in use diff --git a/docs/concepts/workloads/pods/pod-overview.md b/docs/concepts/workloads/pods/pod-overview.md index 760678fc70984..3cfa73ef176c5 100644 --- a/docs/concepts/workloads/pods/pod-overview.md +++ b/docs/concepts/workloads/pods/pod-overview.md @@ -64,7 +64,7 @@ A Controller can create and manage multiple Pods for you, handling replication a Some examples of Controllers that contain one or more pods include: * [Deployment](/docs/concepts/workloads/controllers/deployment/) -* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) +* [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/) * [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) In general, Controllers use a Pod Template that you provide to create the Pods for which it is responsible. diff --git a/docs/concepts/workloads/pods/pod.md b/docs/concepts/workloads/pods/pod.md index 359ecea6ced4b..dd7dbd5fa3dfb 100644 --- a/docs/concepts/workloads/pods/pod.md +++ b/docs/concepts/workloads/pods/pod.md @@ -150,7 +150,7 @@ Pod is exposed as a primitive in order to facilitate: * clean composition of Kubelet-level functionality with cluster-level functionality — Kubelet is effectively the "pod controller" * high-availability applications, which will expect pods to be replaced in advance of their termination and certainly in advance of deletion, such as in the case of planned evictions, image prefetching, or live pod migration [#3949](http://issue.k8s.io/3949) -There is new first-class support for stateful pods with the [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) controller (currently in beta). The feature was alpha in 1.4 and was called [PetSet](/docs/concepts/workloads/controllers/petset/). 
For prior versions of Kubernetes, best practice for having stateful pods is to create a replication controller with `replicas` equal to `1` and a corresponding service, see [this MySQL deployment example](/docs/tutorials/stateful-application/run-stateful-application/). +There is new first-class support for stateful pods with the [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/) controller (currently in beta). The feature was alpha in 1.4 and was called [PetSet](/docs/concepts/workloads/controllers/petset/). For prior versions of Kubernetes, best practice for having stateful pods is to create a replication controller with `replicas` equal to `1` and a corresponding service, see [this MySQL deployment example](/docs/tutorials/stateful-application/run-stateful-application/). ## Termination of Pods diff --git a/docs/getting-started-guides/aws.md b/docs/getting-started-guides/aws.md index 787df11cefe35..f723837295e17 100644 --- a/docs/getting-started-guides/aws.md +++ b/docs/getting-started-guides/aws.md @@ -165,9 +165,9 @@ cluster/kube-down.sh IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------------- | ---------- | --------------------------------------------- | ---------| ---------------------------- AWS | kops | Debian | k8s (VPC) | [docs](https://github.com/kubernetes/kops) | | Community ([@justinsb](https://github.com/justinsb)) -AWS | CoreOS | CoreOS | flannel | [docs](/docs/home/aws) | | Community +AWS | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/aws) | | Community -For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. ## Further reading diff --git a/docs/getting-started-guides/binary_release.md b/docs/getting-started-guides/binary_release.md index 4b08ff5c8ed40..ca1baf2c6d34b 100644 --- a/docs/getting-started-guides/binary_release.md +++ b/docs/getting-started-guides/binary_release.md @@ -57,4 +57,4 @@ Possible values for `YOUR_PROVIDER` include: * `vsphere` - VMWare VSphere * `rackspace` - Rackspace -For the complete, up-to-date list of providers supported by this script, see the [`/cluster`](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/cluster) folder in the main Kubernetes repo, where each folder represents a possible value for `YOUR_PROVIDER`. If you don't see your desired provider, try looking at our [getting started guides](/docs/home); there's a good chance we have docs for them. +For the complete, up-to-date list of providers supported by this script, see the [`/cluster`](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/cluster) folder in the main Kubernetes repo, where each folder represents a possible value for `YOUR_PROVIDER`. If you don't see your desired provider, try looking at our [getting started guides](/docs/getting-started-guides); there's a good chance we have docs for them. diff --git a/docs/getting-started-guides/centos/centos_manual_config.md b/docs/getting-started-guides/centos/centos_manual_config.md index d7d9e28941bf0..bac68f39f25bb 100644 --- a/docs/getting-started-guides/centos/centos_manual_config.md +++ b/docs/getting-started-guides/centos/centos_manual_config.md @@ -9,7 +9,7 @@ title: CentOS ## Warning -This guide [has been deprecated](https://github.com/kubernetes/kubernetes.github.io/issues/1613). 
It was originally written for Kubernetes 1.1.0. Please check [the latest guide](/docs/home/kubeadm/). +This guide [has been deprecated](https://github.com/kubernetes/kubernetes.github.io/issues/1613). It was originally written for Kubernetes 1.1.0. Please check [the latest guide](/docs/getting-started-guides/kubeadm/). ## Prerequisites @@ -233,6 +233,6 @@ centos-minion-3 Ready 3d v1.6.0+fff5156 IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Bare-metal | custom | CentOS | flannel | [docs](/docs/home/centos/centos_manual_config) | | Community ([@coolsvap](https://github.com/coolsvap)) +Bare-metal | custom | CentOS | flannel | [docs](/docs/getting-started-guides/centos/centos_manual_config) | | Community ([@coolsvap](https://github.com/coolsvap)) -For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/cloudstack.md b/docs/getting-started-guides/cloudstack.md index bce3f3b69d2a9..c0d0263e60778 100644 --- a/docs/getting-started-guides/cloudstack.md +++ b/docs/getting-started-guides/cloudstack.md @@ -92,6 +92,6 @@ SSH to it using the key that was created and using the _core_ user and you can l IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -CloudStack | Ansible | CoreOS | flannel | [docs](/docs/home/cloudstack) | | Community ([@Guiques](https://github.com/ltupin/)) +CloudStack | Ansible | CoreOS | flannel | [docs](/docs/getting-started-guides/cloudstack) | | Community ([@Guiques](https://github.com/ltupin/)) -For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/coreos/bare_metal_offline.md b/docs/getting-started-guides/coreos/bare_metal_offline.md index 7124c962cc9ea..35824a4f03fd9 100644 --- a/docs/getting-started-guides/coreos/bare_metal_offline.md +++ b/docs/getting-started-guides/coreos/bare_metal_offline.md @@ -213,7 +213,7 @@ Now for the good stuff! The following config files are tailored for the OFFLINE version of a Kubernetes deployment. -These are based on the work found here: [master.yml](/docs/home/coreos/cloud-configs/master.yaml), [node.yml](/docs/home/coreos/cloud-configs/node.yaml) +These are based on the work found here: [master.yml](/docs/getting-started-guides/coreos/cloud-configs/master.yaml), [node.yml](/docs/getting-started-guides/coreos/cloud-configs/node.yaml) To make the setup work, you need to replace a few placeholders: @@ -683,6 +683,6 @@ for i in `kubectl get pods | awk '{print $1}'`; do kubectl delete pod $i; done IaaS Provider | Config. 
Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](/docs/home/coreos/bare_metal_offline/) | | Community ([@jeffbean](https://github.com/jeffbean)) +Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/bare_metal_offline/) | | Community ([@jeffbean](https://github.com/jeffbean)) -For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions/) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions/) chart. diff --git a/docs/getting-started-guides/coreos/index.md b/docs/getting-started-guides/coreos/index.md index 8065a9fe01c19..c7a5ce0c44bd9 100644 --- a/docs/getting-started-guides/coreos/index.md +++ b/docs/getting-started-guides/coreos/index.md @@ -86,7 +86,7 @@ Configure a standalone Kubernetes or a Kubernetes cluster with [Foreman](https:/ IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -GCE | CoreOS | CoreOS | flannel | [docs](/docs/home/coreos) | | Community ([@pires](https://github.com/pires)) -Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/home/coreos) | | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles)) +GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires)) +Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles)) -For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/dcos.md b/docs/getting-started-guides/dcos.md index 7425ae6856472..816bd288a5276 100644 --- a/docs/getting-started-guides/dcos.md +++ b/docs/getting-started-guides/dcos.md @@ -138,6 +138,6 @@ $ dcos package uninstall kubernetes IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -DCOS | Marathon | CoreOS/Alpine | custom | [docs](/docs/home/dcos/) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) +DCOS | Marathon | CoreOS/Alpine | custom | [docs](/docs/getting-started-guides/dcos/) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) -For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. 
diff --git a/docs/getting-started-guides/fedora/fedora_ansible_config.md b/docs/getting-started-guides/fedora/fedora_ansible_config.md index 8c3cb2675fc68..f87a6978937aa 100644 --- a/docs/getting-started-guides/fedora/fedora_ansible_config.md +++ b/docs/getting-started-guides/fedora/fedora_ansible_config.md @@ -235,6 +235,6 @@ That's it! IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Bare-metal | Ansible | Fedora | flannel | [docs](/docs/home/fedora/fedora_ansible_config) | | Project +Bare-metal | Ansible | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/fedora_ansible_config) | | Project -For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/fedora/fedora_manual_config.md b/docs/getting-started-guides/fedora/fedora_manual_config.md index 5516f35ca4fe0..6f324096238c2 100644 --- a/docs/getting-started-guides/fedora/fedora_manual_config.md +++ b/docs/getting-started-guides/fedora/fedora_manual_config.md @@ -193,7 +193,7 @@ kubectl delete -f ./node.json IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Bare-metal | custom | Fedora | _none_ | [docs](/docs/home/fedora/fedora_manual_config) | | Project +Bare-metal | custom | Fedora | _none_ | [docs](/docs/getting-started-guides/fedora/fedora_manual_config) | | Project -For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md b/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md index 9ad8acb6a315f..51d3aa0db38ab 100644 --- a/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md +++ b/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md @@ -9,7 +9,7 @@ title: Fedora (Multi Node) * TOC {:toc} -This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](/docs/home/fedora/fedora_manual_config/) to setup 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on Kubernetes nodes. Flannel on each node configures an overlay network that docker uses. Flannel runs on each node to setup a unique class-C container network. +This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. 
Follow the Fedora [getting started guide](/docs/getting-started-guides/fedora/fedora_manual_config/) to set up 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on Kubernetes nodes. Flannel on each node configures an overlay network that docker uses. Flannel runs on each node to set up a unique class-C container network. ## Prerequisites @@ -188,11 +188,11 @@ Now Kubernetes multi-node cluster is set up with overlay networking set up by fl IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Bare-metal | custom | Fedora | flannel | [docs](/docs/home/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) -libvirt | custom | Fedora | flannel | [docs](/docs/home/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) -KVM | custom | Fedora | flannel | [docs](/docs/home/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) +Bare-metal | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) +libvirt | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) +KVM | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster) | | Community ([@aveshagarwal](https://github.com/aveshagarwal)) -For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/gce.md b/docs/getting-started-guides/gce.md index 915d63463f1ac..3598d7c46a864 100644 --- a/docs/getting-started-guides/gce.md +++ b/docs/getting-started-guides/gce.md @@ -59,7 +59,7 @@ cluster/kube-up.sh If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster. -If you run into trouble, please see the section on [troubleshooting](/docs/home/gce//#troubleshooting), post to the +If you run into trouble, please see the section on [troubleshooting](/docs/getting-started-guides/gce/#troubleshooting), post to the [kubernetes-users group](https://groups.google.com/forum/#!forum/kubernetes-users), or come ask questions on [Slack](/docs/troubleshooting/#slack). The next few steps will show you: @@ -96,7 +96,7 @@ Once `kubectl` is in your path, you can use it to look at your cluster. 
E.g., ru $ kubectl get --all-namespaces services ``` -should show a set of [services](/docs/concepts/services-networking/service/) that look something like this: +should show a set of [services](/docs/user-guide/services) that look something like this: ```shell NAMESPACE NAME CLUSTER_IP EXTERNAL_IP PORT(S) AGE @@ -202,9 +202,9 @@ field values: IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -GCE | Saltstack | Debian | GCE | [docs](/docs/home/gce/) | | Project +GCE | Saltstack | Debian | GCE | [docs](/docs/getting-started-guides/gce) | | Project -For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. ## Further reading diff --git a/docs/getting-started-guides/libvirt-coreos.md b/docs/getting-started-guides/libvirt-coreos.md index 3c65643089717..4c067f8d65b5e 100644 --- a/docs/getting-started-guides/libvirt-coreos.md +++ b/docs/getting-started-guides/libvirt-coreos.md @@ -332,8 +332,8 @@ Ensure libvirtd has been restarted since ebtables was installed. IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](/docs/home/libvirt-coreos/) | | Community ([@lhuard1A](https://github.com/lhuard1A)) +libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](/docs/getting-started-guides/libvirt-coreos) | | Community ([@lhuard1A](https://github.com/lhuard1A)) -For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/mesos-docker.md b/docs/getting-started-guides/mesos-docker.md index cfe889b674fa3..05a26dac0500c 100644 --- a/docs/getting-started-guides/mesos-docker.md +++ b/docs/getting-started-guides/mesos-docker.md @@ -314,7 +314,7 @@ Breakdown: IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Mesos/Docker | custom | Ubuntu | Docker | [docs](/docs/home/mesos-docker) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) +Mesos/Docker | custom | Ubuntu | Docker | [docs](/docs/getting-started-guides/mesos-docker) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) -For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. 
diff --git a/docs/getting-started-guides/mesos/index.md b/docs/getting-started-guides/mesos/index.md index 1102bd98f6639..f40c41ad707e2 100644 --- a/docs/getting-started-guides/mesos/index.md +++ b/docs/getting-started-guides/mesos/index.md @@ -309,10 +309,10 @@ Address 1: 10.10.10.1 IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Mesos/GCE | | | | [docs](/docs/home/mesos/) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) +Mesos/GCE | | | | [docs](/docs/getting-started-guides/mesos/) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) -For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions/) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions/) chart. ## What next? diff --git a/docs/getting-started-guides/openstack-heat.md b/docs/getting-started-guides/openstack-heat.md index 5e0ec86e261e5..70f20a89f9e71 100644 --- a/docs/getting-started-guides/openstack-heat.md +++ b/docs/getting-started-guides/openstack-heat.md @@ -255,6 +255,6 @@ If you have changed the default `$STACK_NAME`, you must specify the name. Note t IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -OpenStack Heat | Saltstack | CentOS | Neutron + flannel hostgw | [docs](/docs/home/openstack-heat) | | Community ([@FujitsuEnablingSoftwareTechnologyGmbH](https://github.com/FujitsuEnablingSoftwareTechnologyGmbH)) +OpenStack Heat | Saltstack | CentOS | Neutron + flannel hostgw | [docs](/docs/getting-started-guides/openstack-heat) | | Community ([@FujitsuEnablingSoftwareTechnologyGmbH](https://github.com/FujitsuEnablingSoftwareTechnologyGmbH)) -For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/ovirt.md b/docs/getting-started-guides/ovirt.md index 04f6e6720dbe0..325a74882f364 100644 --- a/docs/getting-started-guides/ovirt.md +++ b/docs/getting-started-guides/ovirt.md @@ -58,6 +58,6 @@ This short screencast demonstrates how the oVirt Cloud Provider can be used to d IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -oVirt | | | | [docs](/docs/home/ovirt) | | Community ([@simon3z](https://github.com/simon3z)) +oVirt | | | | [docs](/docs/getting-started-guides/ovirt) | | Community ([@simon3z](https://github.com/simon3z)) -For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. 
diff --git a/docs/getting-started-guides/photon-controller.md b/docs/getting-started-guides/photon-controller.md index 4ef81c63ebca5..e0de503156419 100644 --- a/docs/getting-started-guides/photon-controller.md +++ b/docs/getting-started-guides/photon-controller.md @@ -35,7 +35,7 @@ Mac, you can install this with [brew](http://brew.sh/): 5. You should have an ssh public key installed. This will be used to give you access to the VM's user account, `kube`. -6. Get or build a [binary release](/docs/home/binary_release/) +6. Get or build a [binary release](/docs/getting-started-guides/binary_release/) ### Download VM Image @@ -235,4 +235,4 @@ networks such as Weave or Calico. IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Vmware Photon | Saltstack | Debian | OVS | [docs](/docs/home/photon-controller) | | Community ([@alainroy](https://github.com/alainroy)) +Vmware Photon | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/photon-controller) | | Community ([@alainroy](https://github.com/alainroy)) diff --git a/docs/getting-started-guides/rkt/index.md b/docs/getting-started-guides/rkt/index.md index a7654cd587975..fe4cf7075db6a 100644 --- a/docs/getting-started-guides/rkt/index.md +++ b/docs/getting-started-guides/rkt/index.md @@ -19,7 +19,7 @@ This document describes how to run Kubernetes using [rkt](https://github.com/cor * The [rkt API service](https://coreos.com/rkt/docs/latest/subcommands/api-service.html) must be running on the node. -* You will need [kubelet](/docs/home/scratch/#kubelet) installed on the node, and it's recommended that you run [kube-proxy](/docs/home/scratch/#kube-proxy) on all nodes. This document describes how to set the parameters for kubelet so that it uses rkt as the runtime. +* You will need [kubelet](/docs/getting-started-guides/scratch/#kubelet) installed on the node, and it's recommended that you run [kube-proxy](/docs/getting-started-guides/scratch/#kube-proxy) on all nodes. This document describes how to set the parameters for kubelet so that it uses rkt as the runtime. ## Pod networking in rktnetes @@ -201,7 +201,7 @@ Use rkt's [*contained network*](#rkt-contained-network) with the KVM stage1, bec ## Known issues and differences between rkt and Docker -rkt and the default node container engine have very different designs, as do rkt's native ACI and the Docker container image format. Users may experience different behaviors when switching from one container engine to the other. More information can be found [in the Kubernetes rkt notes](/docs/home/rkt/notes/). +rkt and the default node container engine have very different designs, as do rkt's native ACI and the Docker container image format. Users may experience different behaviors when switching from one container engine to the other. More information can be found [in the Kubernetes rkt notes](/docs/getting-started-guides/rkt/notes/). ## Troubleshooting diff --git a/docs/getting-started-guides/scratch.md b/docs/getting-started-guides/scratch.md index fd7aaf784821a..0e93a85301b35 100644 --- a/docs/getting-started-guides/scratch.md +++ b/docs/getting-started-guides/scratch.md @@ -8,7 +8,7 @@ title: Creating a Custom Cluster from Scratch This guide is for people who want to craft a custom Kubernetes cluster. 
If you can find an existing Getting Started Guide that meets your needs on [this -list](/docs/home/), then we recommend using it, as you will be able to benefit +list](/docs/getting-started-guides/), then we recommend using it, as you will be able to benefit from the experience of others. However, if you have specific IaaS, networking, configuration management, or operating system requirements not met by any of those guides, then this guide will provide an outline of the steps you need to @@ -58,7 +58,7 @@ on how flags are set on various components. ### Network #### Network Connectivity -Kubernetes has a distinctive [networking model](/docs/concepts/cluster-administration/networking/). +Kubernetes has a distinctive [networking model](/docs/admin/networking/). Kubernetes allocates an IP address to each pod. When creating a cluster, you need to allocate a block of IPs for Kubernetes to use as Pod IPs. The simplest @@ -91,7 +91,7 @@ to implement one of the above options: - You can also write your own. - **Compile support directly into Kubernetes** - This can be done by implementing the "Routes" interface of a Cloud Provider module. - - The Google Compute Engine ([GCE](/docs/home/gce/)/) and [AWS](/docs/home/aws/) guides use this approach. + - The Google Compute Engine ([GCE](/docs/getting-started-guides/gce)/) and [AWS](/docs/getting-started-guides/aws/) guides use this approach. - **Configure the network external to Kubernetes** - This can be done by manually running commands, or through a set of externally maintained scripts. - You have to implement this yourself, but it can give you an extra degree of flexibility. @@ -430,7 +430,7 @@ Each node needs to be allocated its own CIDR range for pod networking. Call this `NODE_X_POD_CIDR`. A bridge called `cbr0` needs to be created on each node. The bridge is explained -further in the [networking documentation](/docs/concepts/cluster-administration/networking/). The bridge itself +further in the [networking documentation](/docs/admin/networking/). The bridge itself needs an address from `$NODE_X_POD_CIDR` - by convention the first IP. Call this `NODE_X_BRIDGE_ADDR`. For example, if `NODE_X_POD_CIDR` is `10.0.0.0/16`, then `NODE_X_BRIDGE_ADDR` is `10.0.0.1/16`. NOTE: this retains the `/16` suffix @@ -878,7 +878,7 @@ Cluster validation succeeded ### Inspect pods and services -Try to run through the "Inspect your cluster" section in one of the other Getting Started Guides, such as [GCE](/docs/home/gce//#inspect-your-cluster). +Try to run through the "Inspect your cluster" section in one of the other Getting Started Guides, such as [GCE](/docs/getting-started-guides/gce/#inspect-your-cluster). You should see some services. You should also see "mirror pods" for the apiserver, scheduler and controller-manager, plus any add-ons you started. ### Try Examples @@ -896,7 +896,7 @@ pinging or SSH-ing from one node to another. ### Getting Help -If you run into trouble, please see the section on [troubleshooting](/docs/home/gce/#troubleshooting), post to the +If you run into trouble, please see the section on [troubleshooting](/docs/getting-started-guides/gce#troubleshooting), post to the [kubernetes-users group](https://groups.google.com/forum/#!forum/kubernetes-users), or come ask questions on [Slack](/docs/troubleshooting#slack). ## Support Level @@ -904,7 +904,7 @@ If you run into trouble, please see the section on [troubleshooting](/docs/home/ IaaS Provider | Config. 
Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -any | any | any | any | [docs](/docs/home/scratch/) | | Community ([@erictune](https://github.com/erictune)) +any | any | any | any | [docs](/docs/getting-started-guides/scratch/) | | Community ([@erictune](https://github.com/erictune)) -For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions/) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions/) chart. diff --git a/docs/getting-started-guides/stackpoint.md b/docs/getting-started-guides/stackpoint.md index 739aca8ae87b0..0459472bd3b88 100644 --- a/docs/getting-started-guides/stackpoint.md +++ b/docs/getting-started-guides/stackpoint.md @@ -38,7 +38,7 @@ Choose any extra options you may want to include with your cluster, then click * You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters). -For information on using and managing a Kubernetes cluster on AWS, [consult the Kubernetes documentation](/docs/home/aws/). +For information on using and managing a Kubernetes cluster on AWS, [consult the Kubernetes documentation](/docs/getting-started-guides/aws/). @@ -70,7 +70,7 @@ Choose any extra options you may want to include with your cluster, then click * You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters). -For information on using and managing a Kubernetes cluster on GCE, [consult the Kubernetes documentation](/docs/home/gce//). +For information on using and managing a Kubernetes cluster on GCE, [consult the Kubernetes documentation](/docs/getting-started-guides/gce/). @@ -168,7 +168,7 @@ Choose any extra options you may want to include with your cluster, then click * You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters). -For information on using and managing a Kubernetes cluster on Azure, [consult the Kubernetes documentation](/docs/home/azure/). +For information on using and managing a Kubernetes cluster on Azure, [consult the Kubernetes documentation](/docs/getting-started-guides/azure/). 
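The scratch guide hunk above derives each node's `cbr0` bridge address from its pod CIDR: by convention the first IP of the range, with the CIDR suffix retained. A minimal sketch of that step, assuming Linux iproute2 tooling and the guide's example values:

```shell
# Hypothetical per-node values; in a real cluster these come from your IP plan.
NODE_X_POD_CIDR="10.0.0.0/16"
NODE_X_BRIDGE_ADDR="10.0.0.1/16"   # first IP of the CIDR, /16 suffix retained

# Create the cbr0 bridge and give it the address (one possible approach).
ip link add name cbr0 type bridge
ip addr add "$NODE_X_BRIDGE_ADDR" dev cbr0
ip link set dev cbr0 up
```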
diff --git a/docs/getting-started-guides/ubuntu/index.md b/docs/getting-started-guides/ubuntu/index.md index 8e9575e6118ec..c851f32a6e71b 100644 --- a/docs/getting-started-guides/ubuntu/index.md +++ b/docs/getting-started-guides/ubuntu/index.md @@ -36,24 +36,24 @@ conjure-up kubernetes These are more in-depth guides for users choosing to run Kubernetes in production: - - [Installation](/docs/home/ubuntu/installation/) - - [Validation](/docs/home/ubuntu/validation/) - - [Backups](/docs/home/ubuntu/backups/) - - [Upgrades](/docs/home/ubuntu/upgrades/) - - [Scaling](/docs/home/ubuntu/scaling/) - - [Logging](/docs/home/ubuntu/logging/) - - [Monitoring](/docs/home/ubuntu/monitoring/) - - [Networking](/docs/home/ubuntu/networking/) - - [Security](/docs/home/ubuntu/security/) - - [Storage](/docs/home/ubuntu/storage/) - - [Troubleshooting](/docs/home/ubuntu/troubleshooting/) - - [Decommissioning](/docs/home/ubuntu/decommissioning/) - - [Operational Considerations](/docs/home/ubuntu/operational-considerations/) - - [Glossary](/docs/home/ubuntu/glossary/) + - [Installation](/docs/getting-started-guides/ubuntu/installation/) + - [Validation](/docs/getting-started-guides/ubuntu/validation/) + - [Backups](/docs/getting-started-guides/ubuntu/backups/) + - [Upgrades](/docs/getting-started-guides/ubuntu/upgrades/) + - [Scaling](/docs/getting-started-guides/ubuntu/scaling/) + - [Logging](/docs/getting-started-guides/ubuntu/logging/) + - [Monitoring](/docs/getting-started-guides/ubuntu/monitoring/) + - [Networking](/docs/getting-started-guides/ubuntu/networking/) + - [Security](/docs/getting-started-guides/ubuntu/security/) + - [Storage](/docs/getting-started-guides/ubuntu/storage/) + - [Troubleshooting](/docs/getting-started-guides/ubuntu/troubleshooting/) + - [Decommissioning](/docs/getting-started-guides/ubuntu/decommissioning/) + - [Operational Considerations](/docs/getting-started-guides/ubuntu/operational-considerations/) + - [Glossary](/docs/getting-started-guides/ubuntu/glossary/) ## Developer Guides - - [Localhost using LXD](/docs/home/ubuntu/local/) + - [Localhost using LXD](/docs/getting-started-guides/ubuntu/local/) ## Where to find us diff --git a/docs/getting-started-guides/ubuntu/installation.md b/docs/getting-started-guides/ubuntu/installation.md index 5256f825fe756..53245567b0b63 100644 --- a/docs/getting-started-guides/ubuntu/installation.md +++ b/docs/getting-started-guides/ubuntu/installation.md @@ -251,16 +251,16 @@ Feature requests, bug reports, pull requests or any feedback would be much appre IaaS Provider | Config. 
Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -Amazon Web Services (AWS) | Juju | Ubuntu | flannel, calico* | [docs](/docs/home/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -OpenStack | Juju | Ubuntu | flannel, calico | [docs](/docs/home/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -Microsoft Azure | Juju | Ubuntu | flannel | [docs](/docs/home/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -Google Compute Engine (GCE) | Juju | Ubuntu | flannel, calico | [docs](/docs/home/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -Joyent | Juju | Ubuntu | flannel | [docs](/docs/home/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -Rackspace | Juju | Ubuntu | flannel | [docs](/docs/home/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -VMWare vSphere | Juju | Ubuntu | flannel, calico | [docs](/docs/home/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -Bare Metal (MAAS) | Juju | Ubuntu | flannel, calico | [docs](/docs/home/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +Amazon Web Services (AWS) | Juju | Ubuntu | flannel, calico* | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +OpenStack | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +Microsoft Azure | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +Google Compute Engine (GCE) | Juju | Ubuntu | flannel, calico 
| [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +Joyent | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +Rackspace | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +VMWare vSphere | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) +Bare Metal (MAAS) | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu/) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) ( [@mbruzek](https://github.com/mbruzek), [@chuckbutler](https://github.com/chuckbutler) ) -For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. {% include templates/task.md %} diff --git a/docs/getting-started-guides/ubuntu/operational-considerations.md b/docs/getting-started-guides/ubuntu/operational-considerations.md index d7bcb08300420..9e0a24a6660bb 100644 --- a/docs/getting-started-guides/ubuntu/operational-considerations.md +++ b/docs/getting-started-guides/ubuntu/operational-considerations.md @@ -29,7 +29,7 @@ juju bootstrap --contraints "mem=8GB cpu-cores=4 root-disk=128G" Juju will select the cheapest instance type matching your constraints on your target cloud. You can also use the ```instance-type``` constraint in conjunction with ```root-disk``` for strict control. For more information about the constraints available, refer to the [official documentation](https://jujucharms.com/docs/stable/reference-constraints) -Additional information about logging can be found in the [logging section](/docs/home/ubuntu/logging) +Additional information about logging can be found in the [logging section](/docs/getting-started-guides/ubuntu/logging) ### SSHing into the Controller Node diff --git a/docs/getting-started-guides/ubuntu/upgrades.md b/docs/getting-started-guides/ubuntu/upgrades.md index e887786f8c923..d065993f28a80 100644 --- a/docs/getting-started-guides/ubuntu/upgrades.md +++ b/docs/getting-started-guides/ubuntu/upgrades.md @@ -11,7 +11,7 @@ This page assumes you have a working deployed cluster. ## Assumptions -You should always back up all your data before attempting an upgrade. Don't forget to include the workload inside your cluster! Refer to the [backup documentation](/docs/home/ubuntu/backups). +You should always back up all your data before attempting an upgrade. Don't forget to include the workload inside your cluster! Refer to the [backup documentation](/docs/getting-started-guides/ubuntu/backups). 
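The upgrade flow in the ubuntu/upgrades.md hunks below reduces to a few commands. A condensed sketch, assuming a Juju-deployed cluster and that backups have already been taken:

```shell
# See whether newer charm revisions are available.
juju status

# Snapshot etcd first (see the backup documentation), then upgrade its charm.
juju upgrade-charm etcd

# After upgrading the Kubernetes charms, confirm the new version is live.
kubectl version
```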
{% endcapture %} {% capture steps %} @@ -23,7 +23,7 @@ You can use `juju status` to see if an upgrade is available. There will either b # Upgrade etcd -Backing up etcd requires an export and snapshot, refer to the [backup documentation](/docs/home/ubuntu/backups) to create a snapshot. After the snapshot upgrade the etcd service with: +Backing up etcd requires an export and snapshot, refer to the [backup documentation](/docs/getting-started-guides/ubuntu/backups) to create a snapshot. After the snapshot upgrade the etcd service with: juju upgrade-charm etcd @@ -96,7 +96,7 @@ Where `x` is the minor version of Kubernetes. For example, `1.6/stable`. See abo `kubectl version` should return the newer version. -It is recommended to rerun a [cluster validation](/docs/home/ubuntu/validation) to ensure that the cluster upgrade has successfully completed. +It is recommended to rerun a [cluster validation](/docs/getting-started-guides/ubuntu/validation) to ensure that the cluster upgrade has successfully completed. # Upgrade Flannel diff --git a/docs/getting-started-guides/vsphere.md b/docs/getting-started-guides/vsphere.md index c63ae81cf10b1..22207e7d41580 100644 --- a/docs/getting-started-guides/vsphere.md +++ b/docs/getting-started-guides/vsphere.md @@ -201,9 +201,9 @@ For quick support please join VMware Code Slack ([kubernetes](https://vmwarecode IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | --------- | ---------------------------- -Vmware vSphere | Kube-anywhere | Photon OS | Flannel | [docs](/docs/home/vsphere/) | | Community ([@abrarshivani](https://github.com/abrarshivani)), ([@kerneltime](https://github.com/kerneltime)), ([@BaluDontu](https://github.com/BaluDontu)), ([@luomiao](https://github.com/luomiao)), ([@divyenpatel](https://github.com/divyenpatel)) +Vmware vSphere | Kube-anywhere | Photon OS | Flannel | [docs](/docs/getting-started-guides/vsphere/) | | Community ([@abrarshivani](https://github.com/abrarshivani)), ([@kerneltime](https://github.com/kerneltime)), ([@BaluDontu](https://github.com/BaluDontu)), ([@luomiao](https://github.com/luomiao)), ([@divyenpatel](https://github.com/divyenpatel)) If you identify any issues/problems using the vSphere cloud provider, you can create an issue in our repo - [VMware Kubernetes](https://github.com/vmware/kubernetes). -For support level information on all solutions, see the [Table of solutions](/docs/home/#table-of-solutions) chart. +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. diff --git a/docs/home/index.md b/docs/home/index.md index 42969d4165745..d9cdc94f34eb1 100644 --- a/docs/home/index.md +++ b/docs/home/index.md @@ -13,7 +13,7 @@ The [Kubernetes Basics interactive tutorial](/docs/tutorials/kubernetes-basics/) ## Installing/Setting Up Kubernetes -[Picking the Right Solution](/docs/home/) can help you get a Kubernetes cluster up and running, either for local development, or on your cloud provider of choice. +[Picking the Right Solution](/docs/getting-started-guides/) can help you get a Kubernetes cluster up and running, either for local development, or on your cloud provider of choice. 
## Concepts, Tasks, and Tutorials

diff --git a/docs/reference/federation/extensions/v1beta1/definitions.html b/docs/reference/federation/extensions/v1beta1/definitions.html
index 9cab569710637..24da7f55d8d05 100755
--- a/docs/reference/federation/extensions/v1beta1/definitions.html
+++ b/docs/reference/federation/extensions/v1beta1/definitions.html
@@ -5778,7 +5778,7 @@
     v1.Container
     securityContext
-    Security options the pod should run with. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ More info: https://git.k8s.io/community/contributors/design-proposals/auth/security_context.md
+    Security options the pod should run with. More info: https://kubernetes.io/docs/concepts/policy/security-context/ More info: https://git.k8s.io/community/contributors/design-proposals/auth/security_context.md
     false
     v1.SecurityContext
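The changed row above documents the container-level `securityContext` field. As an illustrative sketch (not part of this patch; the name and options are hypothetical), a pod manifest sets that field like so:

```shell
# Create a pod whose container sets the securityContext field described above.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      runAsUser: 1000               # run the process as a non-root UID
      readOnlyRootFilesystem: true  # mount the root filesystem read-only
EOF
```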
diff --git a/docs/resources-reference/v1.5/index.html b/docs/resources-reference/v1.5/index.html
index ca5ef9de356a8..aca4c2871c5c9 100644
--- a/docs/resources-reference/v1.5/index.html
+++ b/docs/resources-reference/v1.5/index.html
@@ -1062,7 +1062,7 @@
     PodSpec v1
     nodeSelector
     object
-    NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+    NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: http://kubernetes.io/docs/user-guide/node-selection
     restartPolicy
     string
@@ -1967,7 +1967,7 @@
     ServiceSpec v1
     clusterIP
     string
-    clusterIP is the IP address of the service and is usually assigned randomly by the master. If an address is specified manually and is not in use by others, it will be allocated to the service; otherwise, creation of the service will fail. This field can not be changed through updates. Valid values are "None", empty string (""), or a valid IP address. "None" can be specified for headless services when proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
+    clusterIP is the IP address of the service and is usually assigned randomly by the master. If an address is specified manually and is not in use by others, it will be allocated to the service; otherwise, creation of the service will fail. This field can not be changed through updates. Valid values are "None", empty string (""), or a valid IP address. "None" can be specified for headless services when proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies
     deprecatedPublicIPs
     string array
@@ -1987,23 +1987,23 @@
     ServiceSpec v1
     loadBalancerSourceRanges
     string array
-    If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: http://kubernetes.io/docs/concepts/services-networking/service/-firewalls
+    If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: http://kubernetes.io/docs/user-guide/services-firewalls
     ports
     ServicePort array
-    The list of ports that are exposed by this service. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
+    The list of ports that are exposed by this service. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies
     selector
     object
-    Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/concepts/services-networking/service/#overview
+    Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/user-guide/services#overview
     sessionAffinity
     string
-    Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
+    Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies
     type
     string
-    type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ExternalName" maps to the specified externalName. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a stable IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the clusterIP. More info: http://kubernetes.io/docs/concepts/services-networking/service/#overview
+    type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ExternalName" maps to the specified externalName. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a stable IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the clusterIP. More info: http://kubernetes.io/docs/user-guide/services#overview
@@ -9654,7 +9654,7 @@
     ServicePort v1
     nodePort
     integer
-    The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info: http://kubernetes.io/docs/concepts/services-networking/service/#type--nodeport
+    The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info: http://kubernetes.io/docs/user-guide/services#type--nodeport
     port
     integer
@@ -9666,7 +9666,7 @@
     ServicePort v1
     targetPort
     IntOrString
-    Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: http://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
+    Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: http://kubernetes.io/docs/user-guide/services#defining-a-service
diff --git a/docs/resources-reference/v1.6/index.html b/docs/resources-reference/v1.6/index.html
index e323e7b714d27..4c69ee05eb547 100644
--- a/docs/resources-reference/v1.6/index.html
+++ b/docs/resources-reference/v1.6/index.html
@@ -2056,7 +2056,7 @@
     ServiceSpec v1 core
     clusterIP
     string
-    clusterIP is the IP address of the service and is usually assigned randomly by the master. If an address is specified manually and is not in use by others, it will be allocated to the service; otherwise, creation of the service will fail. This field can not be changed through updates. Valid values are "None", empty string (""), or a valid IP address. "None" can be specified for headless services when proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
+    clusterIP is the IP address of the service and is usually assigned randomly by the master. If an address is specified manually and is not in use by others, it will be allocated to the service; otherwise, creation of the service will fail. This field can not be changed through updates. Valid values are "None", empty string (""), or a valid IP address. "None" can be specified for headless services when proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies
     deprecatedPublicIPs
     string array
@@ -2076,23 +2076,23 @@
     ServiceSpec v1 core
     loadBalancerSourceRanges
     string array
-    If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: http://kubernetes.io/docs/concepts/services-networking/service/-firewalls
+    If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: http://kubernetes.io/docs/user-guide/services-firewalls
     ports
     ServicePort array
-    The list of ports that are exposed by this service. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
+    The list of ports that are exposed by this service. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies
     selector
     object
-    Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/concepts/services-networking/service/#overview
+    Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: http://kubernetes.io/docs/user-guide/services#overview
     sessionAffinity
     string
-    Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: http://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
+    Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies
     type
     string
-    type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ExternalName" maps to the specified externalName. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a stable IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the clusterIP. More info: http://kubernetes.io/docs/concepts/services-networking/service/#overview
+    type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ExternalName" maps to the specified externalName. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a stable IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the clusterIP. More info: http://kubernetes.io/docs/user-guide/services#overview
@@ -11254,7 +11254,7 @@
     ServicePort v1 core
     nodePort
     integer
-    The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info: http://kubernetes.io/docs/concepts/services-networking/service/#type--nodeport
+    The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info: http://kubernetes.io/docs/user-guide/services#type--nodeport
     port
     integer
@@ -11266,7 +11266,7 @@
     ServicePort v1 core
     targetPort
-    Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: http://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
+    Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: http://kubernetes.io/docs/user-guide/services#defining-a-service
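The v1.5 and v1.6 rows above all describe fields of the Service spec (`clusterIP`, `ports`, `selector`, `sessionAffinity`, `type`, `nodePort`, `targetPort`). An illustrative manifest showing how they fit together (hypothetical names; not part of this patch):

```shell
# Create a NodePort service exercising the fields documented above.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort          # builds on ClusterIP and allocates a port on every node
  selector:
    app: my-app           # route traffic to pods carrying this label
  sessionAffinity: None   # the default; ClientIP enables client-IP stickiness
  ports:
  - port: 80              # the service's own port
    targetPort: 8080      # number or named port on the backing pods
    nodePort: 30080       # must fall within the cluster's node-port range
EOF
```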
diff --git a/docs/resources-reference/v1.7/index.html b/docs/resources-reference/v1.7/index.html
index 2441751bf868a..da873a072418d 100644
--- a/docs/resources-reference/v1.7/index.html
+++ b/docs/resources-reference/v1.7/index.html
@@ -124,7 +124,7 @@
     Container v1 core
     securityContext
     SecurityContext
-    Security options the pod should run with. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ More info: https://git.k8s.io/community/contributors/design-proposals/auth/security_context.md
+    Security options the pod should run with. More info: https://kubernetes.io/docs/concepts/policy/security-context/ More info: https://git.k8s.io/community/contributors/design-proposals/auth/security_context.md
     stdin
    boolean diff --git a/docs/setup/independent/install-kubeadm.md b/docs/setup/independent/install-kubeadm.md index 68d97db9cdefb..4ff75f4e95e00 100644 --- a/docs/setup/independent/install-kubeadm.md +++ b/docs/setup/independent/install-kubeadm.md @@ -127,7 +127,7 @@ example. You have to do this until SELinux support is improved in the kubelet. {% capture whatsnext %} -* [Using kubeadm to Create a Cluster](/docs/home/kubeadm/) +* [Using kubeadm to Create a Cluster](/docs/getting-started-guides/kubeadm/) {% endcapture %} diff --git a/docs/setup/pick-right-solution.md b/docs/setup/pick-right-solution.md index e7bb7122fc256..8a624c15236e6 100644 --- a/docs/setup/pick-right-solution.md +++ b/docs/setup/pick-right-solution.md @@ -17,7 +17,7 @@ When you are ready to scale up to more machines and higher availability, a [host [Turnkey cloud solutions](#turnkey-cloud-solutions) require only a few commands to create and cover a wide range of cloud providers. -If you already have a way to configure hosting resources, use [kubeadm](/docs/home/kubeadm/) to easily bring up a cluster with a single command per machine. +If you already have a way to configure hosting resources, use [kubeadm](/docs/getting-started-guides/kubeadm/) to easily bring up a cluster with a single command per machine. [Custom solutions](#custom-solutions) vary from step-by-step instructions to general advice for setting up a Kubernetes cluster from scratch. @@ -27,9 +27,9 @@ a Kubernetes cluster from scratch. # Local-machine Solutions -* [Minikube](/docs/home/minikube/) is the recommended method for creating a local, single-node Kubernetes cluster for development and testing. Setup is completely automated and doesn't require a cloud provider account. +* [Minikube](/docs/getting-started-guides/minikube/) is the recommended method for creating a local, single-node Kubernetes cluster for development and testing. Setup is completely automated and doesn't require a cloud provider account. -* [Ubuntu on LXD](/docs/home/ubuntu/local/) supports a nine-instance deployment on localhost. +* [Ubuntu on LXD](/docs/getting-started-guides/ubuntu/local/) supports a nine-instance deployment on localhost. * [IBM Cloud private-ce (Community Edition)](https://www.ibm.com/support/knowledgecenter/en/SSBS6K/product_welcome_cloud_private.html) can use VirtualBox on your machine to deploy Kubernetes to one or more VMs for dev and test scenarios. Scales to full multi-node cluster. Free version of the enterprise solution. @@ -62,13 +62,13 @@ a Kubernetes cluster from scratch. These solutions allow you to create Kubernetes clusters on a range of Cloud IaaS providers with only a few commands. These solutions are actively developed and have active community support. -* [Google Compute Engine (GCE)](/docs/home/gce//) -* [AWS](/docs/home/aws/) -* [Azure](/docs/home/azure/) +* [Google Compute Engine (GCE)](/docs/getting-started-guides/gce/) +* [AWS](/docs/getting-started-guides/aws/) +* [Azure](/docs/getting-started-guides/azure/) * [Tectonic by CoreOS](https://coreos.com/tectonic) -* [CenturyLink Cloud](/docs/home/clc/) +* [CenturyLink Cloud](/docs/getting-started-guides/clc/) * [IBM Bluemix](https://github.com/patrocinio/kubernetes-softlayer) -* [Stackpoint.io](/docs/home/stackpoint/) +* [Stackpoint.io](/docs/getting-started-guides/stackpoint/) * [KUBE2GO.io](https://kube2go.io/) * [Madcore.Ai](https://madcore.ai/) @@ -80,7 +80,7 @@ base operating systems. If you can find a guide below that matches your needs, use it. 
It may be a little out of date, but it will be easier than starting from scratch. If you do want to start from scratch, either because you have special requirements, or just because you want to understand what is underneath a Kubernetes -cluster, try the [Getting Started from Scratch](/docs/home/scratch/) guide. +cluster, try the [Getting Started from Scratch](/docs/getting-started-guides/scratch/) guide. If you are interested in supporting Kubernetes on a new platform, see [Writing a Getting Started Guide](https://git.k8s.io/community/contributors/devel/writing-a-getting-started-guide.md). @@ -88,47 +88,47 @@ If you are interested in supporting Kubernetes on a new platform, see ## Universal If you already have a way to configure hosting resources, use -[kubeadm](/docs/home/kubeadm/) to easily bring up a cluster +[kubeadm](/docs/getting-started-guides/kubeadm/) to easily bring up a cluster with a single command per machine. ## Cloud These solutions are combinations of cloud providers and operating systems not covered by the above solutions. -* [CoreOS on AWS or GCE](/docs/home/coreos/) -* [Kubernetes on Ubuntu](/docs/home/ubuntu/) -* [Kubespray](/docs/home/kubespray/) +* [CoreOS on AWS or GCE](/docs/getting-started-guides/coreos/) +* [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/) +* [Kubespray](/docs/getting-started-guides/kubespray/) ## On-Premises VMs -* [Vagrant](/docs/home/coreos/) (uses CoreOS and flannel) -* [CloudStack](/docs/home/cloudstack/) (uses Ansible, CoreOS and flannel) -* [Vmware vSphere](/docs/home/vsphere/) (uses Debian) -* [Vmware Photon Controller](/docs/home/photon-controller/) (uses Debian) -* [Vmware vSphere, OpenStack, or Bare Metal](/docs/home/ubuntu/) (uses Juju, Ubuntu and flannel) -* [Vmware](/docs/home/coreos/) (uses CoreOS and flannel) -* [CoreOS on libvirt](/docs/home/libvirt-coreos//) (uses CoreOS) -* [oVirt](/docs/home/ovirt/) -* [OpenStack Heat](/docs/home/openstack-heat/) (uses CentOS and flannel) -* [Fedora (Multi Node)](/docs/home/fedora/flannel_multi_node_cluster/) (uses Fedora and flannel) +* [Vagrant](/docs/getting-started-guides/coreos/) (uses CoreOS and flannel) +* [CloudStack](/docs/getting-started-guides/cloudstack/) (uses Ansible, CoreOS and flannel) +* [Vmware vSphere](/docs/getting-started-guides/vsphere/) (uses Debian) +* [Vmware Photon Controller](/docs/getting-started-guides/photon-controller/) (uses Debian) +* [Vmware vSphere, OpenStack, or Bare Metal](/docs/getting-started-guides/ubuntu/) (uses Juju, Ubuntu and flannel) +* [Vmware](/docs/getting-started-guides/coreos/) (uses CoreOS and flannel) +* [CoreOS on libvirt](/docs/getting-started-guides/libvirt-coreos/) (uses CoreOS) +* [oVirt](/docs/getting-started-guides/ovirt/) +* [OpenStack Heat](/docs/getting-started-guides/openstack-heat/) (uses CentOS and flannel) +* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) (uses Fedora and flannel) ## Bare Metal -* [Offline](/docs/home/coreos/bare_metal_offline/) (no internet required. Uses CoreOS and Flannel) -* [Fedora via Ansible](/docs/home/fedora/fedora_ansible_config/) -* [Fedora (Single Node)](/docs/home/fedora/fedora_manual_config/) -* [Fedora (Multi Node)](/docs/home/fedora/flannel_multi_node_cluster/) -* [CentOS](/docs/home/centos/centos_manual_config/) -* [Kubernetes on Ubuntu](/docs/home/ubuntu/) -* [CoreOS on AWS or GCE](/docs/home/coreos/) +* [Offline](/docs/getting-started-guides/coreos/bare_metal_offline/) (no internet required. 
Uses CoreOS and Flannel) +* [Fedora via Ansible](/docs/getting-started-guides/fedora/fedora_ansible_config/) +* [Fedora (Single Node)](/docs/getting-started-guides/fedora/fedora_manual_config/) +* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) +* [CentOS](/docs/getting-started-guides/centos/centos_manual_config/) +* [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/) +* [CoreOS on AWS or GCE](/docs/getting-started-guides/coreos/) ## Integrations These solutions provide integration with third-party schedulers, resource managers, and/or lower level platforms. -* [Kubernetes on Mesos](/docs/home/mesos/) +* [Kubernetes on Mesos](/docs/getting-started-guides/mesos/) * Instructions specify GCE, but are generic enough to be adapted to most existing Mesos clusters -* [DCOS](/docs/home/dcos/) +* [DCOS](/docs/getting-started-guides/dcos/) * Community Edition DCOS uses AWS * Enterprise Edition DCOS supports cloud hosting, on-premises VMs, and bare metal @@ -146,37 +146,37 @@ KUBE2GO.io | | multi-support | multi-support | [docs](http Madcore.Ai | Jenkins DSL | Ubuntu | flannel | [docs](https://madcore.ai) | Community ([@madcore-ai](https://github.com/madcore-ai)) Platform9 | | multi-support | multi-support | [docs](https://platform9.com/managed-kubernetes/) | Commercial Giant Swarm | | CoreOS | flannel and/or Calico | [docs](https://docs.giantswarm.io/) | Commercial -GCE | Saltstack | Debian | GCE | [docs](/docs/home/gce//) | Project +GCE | Saltstack | Debian | GCE | [docs](/docs/getting-started-guides/gce/) | Project Azure Container Service | | Ubuntu | Azure | [docs](https://azure.microsoft.com/en-us/services/container-service/) | Commercial -Azure (IaaS) | | Ubuntu | Azure | [docs](/docs/home/azure/) | [Community (Microsoft)](https://github.com/Azure/acs-engine) -Bare-metal | Ansible | Fedora | flannel | [docs](/docs/home/fedora/fedora_ansible_config/) | Project -Bare-metal | custom | Fedora | _none_ | [docs](/docs/home/fedora/fedora_manual_config/) | Project -Bare-metal | custom | Fedora | flannel | [docs](/docs/home/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) -libvirt | custom | Fedora | flannel | [docs](/docs/home/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) -KVM | custom | Fedora | flannel | [docs](/docs/home/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) -Mesos/Docker | custom | Ubuntu | Docker | [docs](/docs/home/mesos-docker/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) -Mesos/GCE | | | | [docs](/docs/home/mesos/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) -DCOS | Marathon | CoreOS/Alpine | custom | [docs](/docs/home/dcos/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) -AWS | CoreOS | CoreOS | flannel | [docs](/docs/home/aws/) | Community -GCE | CoreOS | CoreOS | flannel | [docs](/docs/home/coreos/) | Community ([@pires](https://github.com/pires)) -Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/home/coreos/) | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles)) -Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](/docs/home/coreos/bare_metal_offline/) | Community ([@jeffbean](https://github.com/jeffbean)) -CloudStack | Ansible 
| CoreOS | flannel | [docs](/docs/home/cloudstack/) | Community ([@sebgoa](https://github.com/sebgoa)) -Vmware vSphere | Saltstack | Debian | OVS | [docs](/docs/home/vsphere/) | Community ([@imkin](https://github.com/imkin)) -Vmware Photon | Saltstack | Debian | OVS | [docs](/docs/home/photon-controller/) | Community ([@alainroy](https://github.com/alainroy)) -Bare-metal | custom | CentOS | flannel | [docs](/docs/home/centos/centos_manual_config/) | Community ([@coolsvap](https://github.com/coolsvap)) -AWS | Juju | Ubuntu | flannel | [docs](/docs/home/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) -GCE | Juju | Ubuntu | flannel | [docs](/docs/home/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) -Bare Metal | Juju | Ubuntu | flannel | [docs](/docs/home/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) -Rackspace | Juju | Ubuntu | flannel | [docs](/docs/home/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) -Vmware vSphere | Juju | Ubuntu | flannel | [docs](/docs/home/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) -AWS | Saltstack | Debian | AWS | [docs](/docs/home/aws/) | Community ([@justinsb](https://github.com/justinsb)) +Azure (IaaS) | | Ubuntu | Azure | [docs](/docs/getting-started-guides/azure/) | [Community (Microsoft)](https://github.com/Azure/acs-engine) +Bare-metal | Ansible | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/fedora_ansible_config/) | Project +Bare-metal | custom | Fedora | _none_ | [docs](/docs/getting-started-guides/fedora/fedora_manual_config/) | Project +Bare-metal | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) +libvirt | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) +KVM | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) +Mesos/Docker | custom | Ubuntu | Docker | [docs](/docs/getting-started-guides/mesos-docker/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) +Mesos/GCE | | | | [docs](/docs/getting-started-guides/mesos/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) +DCOS | Marathon | CoreOS/Alpine | custom | [docs](/docs/getting-started-guides/dcos/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) +AWS | CoreOS | CoreOS | flannel | 
[docs](/docs/getting-started-guides/aws/) | Community +GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires)) +Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles)) +Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/bare_metal_offline/) | Community ([@jeffbean](https://github.com/jeffbean)) +CloudStack | Ansible | CoreOS | flannel | [docs](/docs/getting-started-guides/cloudstack/) | Community ([@sebgoa](https://github.com/sebgoa)) +Vmware vSphere | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/vsphere/) | Community ([@imkin](https://github.com/imkin)) +Vmware Photon | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/photon-controller/) | Community ([@alainroy](https://github.com/alainroy)) +Bare-metal | custom | CentOS | flannel | [docs](/docs/getting-started-guides/centos/centos_manual_config/) | Community ([@coolsvap](https://github.com/coolsvap)) +AWS | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) +GCE | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) +Bare Metal | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) +Rackspace | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) +Vmware vSphere | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](http://www.ubuntu.com/cloud/kubernetes) and [Community](https://github.com/juju-solutions/bundle-canonical-kubernetes) ( [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) +AWS | Saltstack | Debian | AWS | [docs](/docs/getting-started-guides/aws/) | Community ([@justinsb](https://github.com/justinsb)) AWS | kops | Debian | AWS | [docs](https://github.com/kubernetes/kops/) | Community ([@justinsb](https://github.com/justinsb)) -Bare-metal | custom | Ubuntu | flannel | [docs](/docs/home/ubuntu/) | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY)) -libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](/docs/home/libvirt-coreos//) | Community ([@lhuard1A](https://github.com/lhuard1A)) -oVirt | | | | [docs](/docs/home/ovirt/) | Community ([@simon3z](https://github.com/simon3z)) -OpenStack Heat | Saltstack | CentOS | Neutron + flannel hostgw | [docs](/docs/home/openstack-heat/) | Community ([@FujitsuEnablingSoftwareTechnologyGmbH](https://github.com/FujitsuEnablingSoftwareTechnologyGmbH)) -any | any | any | any | 
[docs](/docs/home/scratch/) | Community ([@erictune](https://github.com/erictune)) +Bare-metal | custom | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY)) +libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](/docs/getting-started-guides/libvirt-coreos/) | Community ([@lhuard1A](https://github.com/lhuard1A)) +oVirt | | | | [docs](/docs/getting-started-guides/ovirt/) | Community ([@simon3z](https://github.com/simon3z)) +OpenStack Heat | Saltstack | CentOS | Neutron + flannel hostgw | [docs](/docs/getting-started-guides/openstack-heat/) | Community ([@FujitsuEnablingSoftwareTechnologyGmbH](https://github.com/FujitsuEnablingSoftwareTechnologyGmbH)) +any | any | any | any | [docs](/docs/getting-started-guides/scratch/) | Community ([@erictune](https://github.com/erictune)) any | any | any | any | [docs](http://docs.projectcalico.org/v2.2/getting-started/kubernetes/installation/) | Commercial and Community **Note**: The above table is ordered by version test/used in nodes, followed by support level. diff --git a/docs/tasks/access-application-cluster/access-cluster.md b/docs/tasks/access-application-cluster/access-cluster.md index 5a278289e8f67..641f3c4ed9f3c 100644 --- a/docs/tasks/access-application-cluster/access-cluster.md +++ b/docs/tasks/access-application-cluster/access-cluster.md @@ -14,7 +14,7 @@ Kubernetes CLI, `kubectl`. To access a cluster, you need to know the location of the cluster and have credentials to access it. Typically, this is automatically set-up when you work through -a [Getting started guide](/docs/home/), +a [Getting started guide](/docs/getting-started-guides/), or someone else setup the cluster and provided you with credentials and a location. Check the location and credentials that kubectl knows about with this command: @@ -183,7 +183,7 @@ In each case, the credentials of the pod are used to communicate securely with t The previous section was about connecting the Kubernetes API server. This section is about connecting to other services running on Kubernetes cluster. In Kubernetes, the -[nodes](/docs/admin/node), [pods](/docs/user-guide/pods) and [services](/docs/concepts/services-networking/service/) all have +[nodes](/docs/admin/node), [pods](/docs/user-guide/pods) and [services](/docs/user-guide/services) all have their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be routable, so they will not be reachable from a machine outside the cluster, such as your desktop machine. @@ -194,7 +194,7 @@ You have several options for connecting to nodes, pods and services from outside - Access services through public IPs. - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside - the cluster. See the [services](/docs/concepts/services-networking/service/) and + the cluster. See the [services](/docs/user-guide/services) and [kubectl expose](/docs/user-guide/kubectl/v1.6/#expose) documentation. - Depending on your cluster environment, this may just expose the service to your corporate network, or it may expose it to the internet. Think about whether the service being exposed is secure. 
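The access-cluster.md hunk above recommends a service of type `NodePort` or `LoadBalancer` to reach a workload from outside the cluster. A small sketch with hypothetical names:

```shell
# Expose a deployment outside the cluster on an explicit NodePort service.
kubectl expose deployment my-app --port=80 --target-port=8080 --type=NodePort

# Look up the allocated node port, then browse to <any-node-ip>:<node-port>.
kubectl get service my-app
```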
diff --git a/docs/tasks/access-application-cluster/web-ui-dashboard.md b/docs/tasks/access-application-cluster/web-ui-dashboard.md index a6bd934444684..f77da393e5d94 100644 --- a/docs/tasks/access-application-cluster/web-ui-dashboard.md +++ b/docs/tasks/access-application-cluster/web-ui-dashboard.md @@ -64,7 +64,7 @@ To access the deploy wizard from the Welcome page, click the respective button. The deploy wizard expects that you provide the following information: -- **App name** (mandatory): Name for your application. A [label](/docs/concepts/overview/working-with-objects/labels/) with the name will be added to the Deployment and Service, if any, that will be deployed. +- **App name** (mandatory): Name for your application. A [label](/docs/user-guide/labels/) with the name will be added to the Deployment and Service, if any, that will be deployed. The application name must be unique within the selected Kubernetes [namespace](/docs/tasks/administer-cluster/namespaces/). It must start with a lowercase character, and end with a lowercase character or a number, and contain only lowercase letters, numbers and dashes (-). It is limited to 24 characters. Leading and trailing spaces are ignored. @@ -84,7 +84,7 @@ If needed, you can expand the **Advanced options** section where you can specify - **Description**: The text you enter here will be added as an [annotation](/docs/concepts/overview/working-with-objects/annotations/) to the Deployment and displayed in the application's details. -- **Labels**: Default [labels](/docs/concepts/overview/working-with-objects/labels/) to be used for your application are application name and version. You can specify additional labels to be applied to the Deployment, Service (if any), and Pods, such as release, environment, tier, partition, and release track. +- **Labels**: Default [labels](/docs/user-guide/labels/) to be used for your application are application name and version. You can specify additional labels to be applied to the Deployment, Service (if any), and Pods, such as release, environment, tier, partition, and release track. Example: diff --git a/docs/tasks/administer-cluster/access-cluster-api.md b/docs/tasks/administer-cluster/access-cluster-api.md index a6af1d3f37dbd..88ef4334cf94e 100644 --- a/docs/tasks/administer-cluster/access-cluster-api.md +++ b/docs/tasks/administer-cluster/access-cluster-api.md @@ -22,7 +22,7 @@ Kubernetes command-line tool, `kubectl`. To access a cluster, you need to know the location of the cluster and have credentials to access it. Typically, this is automatically set-up when you work through -a [Getting started guide](/docs/home/), +a [Getting started guide](/docs/getting-started-guides/), or someone else setup the cluster and provided you with credentials and a location. Check the location and credentials that kubectl knows about with this command: diff --git a/docs/tasks/administer-cluster/access-cluster-services.md b/docs/tasks/administer-cluster/access-cluster-services.md index 660acdde028a1..5c55fa3acaca5 100644 --- a/docs/tasks/administer-cluster/access-cluster-services.md +++ b/docs/tasks/administer-cluster/access-cluster-services.md @@ -15,7 +15,7 @@ This page shows how to connect to services running on the Kubernetes cluster. 
## Accessing services running on the cluster -In Kubernetes, [nodes](/docs/admin/node), [pods](/docs/user-guide/pods) and [services](/docs/concepts/services-networking/service/) all have +In Kubernetes, [nodes](/docs/admin/node), [pods](/docs/user-guide/pods) and [services](/docs/user-guide/services) all have their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be routable, so they will not be reachable from a machine outside the cluster, such as your desktop machine. @@ -26,7 +26,7 @@ You have several options for connecting to nodes, pods and services from outside - Access services through public IPs. - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside - the cluster. See the [services](/docs/concepts/services-networking/service/) and + the cluster. See the [services](/docs/user-guide/services) and [kubectl expose](/docs/user-guide/kubectl/v1.6/#expose) documentation. - Depending on your cluster environment, this may just expose the service to your corporate network, or it may expose it to the internet. Think about whether the service being exposed is secure. diff --git a/docs/tasks/administer-cluster/calico-network-policy.md b/docs/tasks/administer-cluster/calico-network-policy.md index f8879f07445cb..4543aa7069743 100644 --- a/docs/tasks/administer-cluster/calico-network-policy.md +++ b/docs/tasks/administer-cluster/calico-network-policy.md @@ -15,7 +15,7 @@ This page shows how to use Calico for NetworkPolicy. {% capture steps %} ## Deploying a cluster using Calico -You can deploy a cluster using Calico for network policy in the default [GCE deployment](/docs/home/gce/) using the following set of commands: +You can deploy a cluster using Calico for network policy in the default [GCE deployment](/docs/getting-started-guides/gce) using the following set of commands: ```shell export NETWORK_POLICY_PROVIDER=calico @@ -55,7 +55,7 @@ There are two main components to be aware of: {% endcapture %} {% capture whatsnext %} -Once your cluster is running, you can follow the [NetworkPolicy getting started guide](/docs/home/network-policy/walkthrough) to try out Kubernetes NetworkPolicy. +Once your cluster is running, you can follow the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy. {% endcapture %} {% include templates/task.md %} diff --git a/docs/tasks/administer-cluster/cilium-network-policy.md b/docs/tasks/administer-cluster/cilium-network-policy.md index 0d881178d0b41..6db677f313acd 100644 --- a/docs/tasks/administer-cluster/cilium-network-policy.md +++ b/docs/tasks/administer-cluster/cilium-network-policy.md @@ -72,7 +72,7 @@ There are two main components to be aware of: {% endcapture %} {% capture whatsnext %} -Once your cluster is running, you can follow the [NetworkPolicy getting started guide](/docs/home/network-policy/walkthrough) to try out Kubernetes NetworkPolicy with Cilium. Have fun, and if you have questions, contact us using the [Cilium Slack Channel](https://cilium.herokuapp.com/). +Once your cluster is running, you can follow the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy with Cilium. Have fun, and if you have questions, contact us using the [Cilium Slack Channel](https://cilium.herokuapp.com/). 
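Once a policy provider such as Calico or Cilium is running, the walkthrough linked above exercises objects like the following minimal sketch; the label values are illustrative:

```shell
# Allow traffic to "db" pods only from "frontend" pods in the same namespace.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
EOF
```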
{% endcapture %} {% include templates/task.md %} diff --git a/docs/tasks/administer-cluster/cluster-management.md b/docs/tasks/administer-cluster/cluster-management.md index cdadc8f7d5263..3566ef2c6f212 100644 --- a/docs/tasks/administer-cluster/cluster-management.md +++ b/docs/tasks/administer-cluster/cluster-management.md @@ -15,7 +15,7 @@ running cluster. ## Creating and configuring a Cluster -To install Kubernetes on a set of machines, consult one of the existing [Getting Started guides](/docs/home/) depending on your environment. +To install Kubernetes on a set of machines, consult one of the existing [Getting Started guides](/docs/getting-started-guides/) depending on your environment. ## Upgrading a cluster diff --git a/docs/tasks/administer-cluster/kube-router-network-policy.md b/docs/tasks/administer-cluster/kube-router-network-policy.md index 49d523b6d8368..3794bf7d4578b 100644 --- a/docs/tasks/administer-cluster/kube-router-network-policy.md +++ b/docs/tasks/administer-cluster/kube-router-network-policy.md @@ -18,7 +18,7 @@ The Kube-router Addon comes with a Network Policy Controller that watches Kubern {% endcapture %} {% capture whatsnext %} -Once you have installed the Kube-router addon, you can follow the [NetworkPolicy getting started guide](/docs/home/network-policy/walkthrough) to try out Kubernetes NetworkPolicy. +Once you have installed the Kube-router addon, you can follow the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy. {% endcapture %} {% include templates/task.md %} diff --git a/docs/tasks/administer-cluster/namespaces-walkthrough.md b/docs/tasks/administer-cluster/namespaces-walkthrough.md index d9d79eefbdf8a..6a6e47a37f8ec 100644 --- a/docs/tasks/administer-cluster/namespaces-walkthrough.md +++ b/docs/tasks/administer-cluster/namespaces-walkthrough.md @@ -20,7 +20,7 @@ This example demonstrates how to use Kubernetes namespaces to subdivide your clu This example assumes the following: -1. You have an [existing Kubernetes cluster](/docs/home/). +1. You have an [existing Kubernetes cluster](/docs/getting-started-guides/). 2. You have a basic understanding of Kubernetes _[Pods](/docs/concepts/workloads/pods/pod/)_, _[Services](/docs/concepts/services-networking/service/)_, and _[Deployments](/docs/concepts/workloads/controllers/deployment/)_. ### Step One: Understand the default namespace diff --git a/docs/tasks/administer-cluster/namespaces.md b/docs/tasks/administer-cluster/namespaces.md index 5fadb62eaac69..d8e67b8999f46 100644 --- a/docs/tasks/administer-cluster/namespaces.md +++ b/docs/tasks/administer-cluster/namespaces.md @@ -10,7 +10,7 @@ This page shows how to view, work in, and delete namespaces. The page also shows {% endcapture %} {% capture prerequisites %} -* Have an [existing Kubernetes cluster](/docs/home/). +* Have an [existing Kubernetes cluster](/docs/getting-started-guides/). * Have a basic understanding of Kubernetes _[Pods](/docs/concepts/workloads/pods/pod/)_, _[Services](/docs/concepts/services-networking/service/)_, and _[Deployments](/docs/concepts/workloads/controllers/deployment/)_. 
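For readers following the namespace pages touched above, the basic workflow is short; the namespace name `development` is illustrative:

```shell
# Create a namespace, list all namespaces, and scope a query to the new one.
kubectl create namespace development
kubectl get namespaces --show-labels
kubectl get pods --namespace=development
```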
{% endcapture %} diff --git a/docs/tasks/administer-cluster/out-of-resource.md b/docs/tasks/administer-cluster/out-of-resource.md index 02098595df1fc..a86f70ddf2c4d 100644 --- a/docs/tasks/administer-cluster/out-of-resource.md +++ b/docs/tasks/administer-cluster/out-of-resource.md @@ -49,7 +49,7 @@ container, and if users use the [node allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) feature, out of resource decisions are made local to the end user pod part of the cgroup hierarchy as well as the root node. This -[script](/docs/tasks/administer-cluster/out-of-resource/memory-available.sh) +[script](/docs/concepts/cluster-administration/out-of-resource/memory-available.sh) reproduces the same set of steps that the `kubelet` performs to calculate `memory.available`. The `kubelet` excludes inactive_file (i.e. # of bytes of file-backed memory on inactive LRU list) from its calculation as it assumes that diff --git a/docs/tasks/administer-cluster/romana-network-policy.md b/docs/tasks/administer-cluster/romana-network-policy.md index 453e9e488a6e6..ab98797713c2a 100644 --- a/docs/tasks/administer-cluster/romana-network-policy.md +++ b/docs/tasks/administer-cluster/romana-network-policy.md @@ -12,7 +12,7 @@ This page shows how to use Romana for NetworkPolicy. {% capture prerequisites %} -Complete steps 1, 2, and 3 of the [kubeadm getting started guide](/docs/home/kubeadm/). +Complete steps 1, 2, and 3 of the [kubeadm getting started guide](/docs/getting-started-guides/kubeadm/). {% endcapture %} @@ -34,7 +34,7 @@ To apply network policies use one of the following: {% capture whatsnext %} -Once your have installed Romana, you can follow the [NetworkPolicy getting started guide](/docs/home/network-policy/walkthrough) to try out Kubernetes NetworkPolicy. +Once you have installed Romana, you can follow the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy. {% endcapture %} diff --git a/docs/tasks/administer-cluster/weave-network-policy.md b/docs/tasks/administer-cluster/weave-network-policy.md index 11f4d8548635c..85537e93f3647 100644 --- a/docs/tasks/administer-cluster/weave-network-policy.md +++ b/docs/tasks/administer-cluster/weave-network-policy.md @@ -12,7 +12,7 @@ This page shows how to use Weave Net for NetworkPolicy. {% capture prerequisites %} -Complete steps 1, 2, and 3 of the [kubeadm getting started guide](/docs/home/kubeadm/). +Complete steps 1, 2, and 3 of the [kubeadm getting started guide](/docs/getting-started-guides/kubeadm/). {% endcapture %} @@ -108,7 +108,7 @@ spec: {% capture whatsnext %} -Once you have installed the Weave Net addon, you can follow the [NetworkPolicy getting started guide](/docs/home/network-policy/walkthrough) to try out Kubernetes NetworkPolicy. +Once you have installed the Weave Net addon, you can follow the [NetworkPolicy getting started guide](/docs/getting-started-guides/network-policy/walkthrough) to try out Kubernetes NetworkPolicy. {% endcapture %} diff --git a/docs/tasks/administer-federation/events.md b/docs/tasks/administer-federation/events.md index dd7e1af68872e..1d9f72ea0e811 100644 --- a/docs/tasks/administer-federation/events.md +++ b/docs/tasks/administer-federation/events.md @@ -19,7 +19,7 @@ this for you). Other tutorials, for example by Kelsey Hightower, are also available to help you.
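The `memory.available` computation described in the out-of-resource hunk above can be sketched as follows, assuming cgroup v1 paths on the node; the `memory-available.sh` script that the page links to remains the authoritative version:

```shell
# memory.available = capacity - working set, where the working set
# excludes inactive_file, mirroring what the kubelet does.
memory_capacity_in_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
memory_usage_in_bytes=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
total_inactive_file=$(grep -w total_inactive_file /sys/fs/cgroup/memory/memory.stat | awk '{print $2}')
working_set_in_bytes=$((memory_usage_in_bytes - total_inactive_file))
memory_available_in_kb=$((memory_capacity_in_kb - working_set_in_bytes / 1024))
echo "memory.available: ${memory_available_in_kb}Ki"
```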
You are also expected to have a basic -[working knowledge of Kubernetes](/docs/home/) in +[working knowledge of Kubernetes](/docs/getting-started-guides/) in general. ## Overview diff --git a/docs/tasks/administer-federation/ingress.md b/docs/tasks/administer-federation/ingress.md index 66982b02c4439..7909063543bba 100644 --- a/docs/tasks/administer-federation/ingress.md +++ b/docs/tasks/administer-federation/ingress.md @@ -66,7 +66,7 @@ this for you). Other tutorials, for example by Kelsey Hightower, are also available to help you. You must also have a basic -[working knowledge of Kubernetes](/docs/home/) in +[working knowledge of Kubernetes](/docs/getting-started-guides/) in general, and [Ingress](/docs/concepts/services-networking/ingress/) in particular. {% endcapture %} diff --git a/docs/tasks/administer-federation/replicaset.md b/docs/tasks/administer-federation/replicaset.md index fa96f2f3dceb8..896442b35bbcf 100644 --- a/docs/tasks/administer-federation/replicaset.md +++ b/docs/tasks/administer-federation/replicaset.md @@ -16,7 +16,7 @@ replicas exist across the registered clusters. * {% include federated-task-tutorial-prereqs.md %} * You are also expected to have a basic -[working knowledge of Kubernetes](/docs/home/) in +[working knowledge of Kubernetes](/docs/getting-started-guides/) in general and [ReplicaSets](/docs/concepts/workloads/controllers/replicaset/) in particular. {% endcapture %} diff --git a/docs/tasks/administer-federation/secret.md b/docs/tasks/administer-federation/secret.md index ec847af354fd9..2cd9aa26ea146 100644 --- a/docs/tasks/administer-federation/secret.md +++ b/docs/tasks/administer-federation/secret.md @@ -18,7 +18,7 @@ this for you). Other tutorials, for example by Kelsey Hightower, are also available to help you. You are also expected to have a basic -[working knowledge of Kubernetes](/docs/home/) in +[working knowledge of Kubernetes](/docs/getting-started-guides/) in general and [Secrets](/docs/concepts/configuration/secret/) in particular. ## Overview diff --git a/docs/tasks/configure-pod-container/assign-pods-nodes.md b/docs/tasks/configure-pod-container/assign-pods-nodes.md index 613c731a0ef40..06a29e575ac68 100644 --- a/docs/tasks/configure-pod-container/assign-pods-nodes.md +++ b/docs/tasks/configure-pod-container/assign-pods-nodes.md @@ -75,7 +75,7 @@ a `disktype=ssd` label. {% capture whatsnext %} Learn more about -[labels and selectors](/docs/concepts/overview/working-with-objects/labels/). +[labels and selectors](/docs/user-guide/labels/). {% endcapture %} {% include templates/task.md %} diff --git a/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md b/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md index 5d8d46c4257f7..71afeaffba6d5 100644 --- a/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md +++ b/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md @@ -22,7 +22,7 @@ bound to a suitable PersistentVolume. * You need to have a Kubernetes cluster that has only one Node, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a single-node cluster, you can create one by using -[Minikube](/docs/home/minikube). +[Minikube](/docs/getting-started-guides/minikube). * Familiarize yourself with the material in [Persistent Volumes](/docs/concepts/storage/persistent-volumes/). 
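The pod-to-node assignment task referenced above boils down to a label plus a selector; a sketch with an illustrative node name:

```shell
# Label a node, then confirm a selector finds it; a Pod whose spec sets
# nodeSelector disktype=ssd would now schedule onto this node.
kubectl label nodes node-1 disktype=ssd
kubectl get nodes -l disktype=ssd
```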
diff --git a/docs/tasks/debug-application-cluster/debug-application-introspection.md b/docs/tasks/debug-application-cluster/debug-application-introspection.md index 292e86a36305c..55c4c24c7f8ce 100644 --- a/docs/tasks/debug-application-cluster/debug-application-introspection.md +++ b/docs/tasks/debug-application-cluster/debug-application-introspection.md @@ -379,7 +379,7 @@ Learn about additional debugging tools, including: * [Logging](/docs/user-guide/logging/overview) * [Monitoring](/docs/user-guide/monitoring) * [Getting into containers via `exec`](/docs/user-guide/getting-into-containers) -* [Connecting to containers via proxies](/docs/tasks/access-kubernetes-api/http-proxy-access-api/) +* [Connecting to containers via proxies](/docs/user-guide/connecting-to-applications-proxy) * [Connecting to containers via port forwarding](/docs/user-guide/connecting-to-applications-port-forward) diff --git a/docs/tasks/debug-application-cluster/debug-stateful-set.md b/docs/tasks/debug-application-cluster/debug-stateful-set.md index 4c36e26f97fa5..070141ec9761f 100644 --- a/docs/tasks/debug-application-cluster/debug-stateful-set.md +++ b/docs/tasks/debug-application-cluster/debug-stateful-set.md @@ -79,7 +79,7 @@ kubectl annotate pods pod.alpha.kubernetes.io/initialized="true" --ov {% capture whatsnext %} -Learn more about [debugging an init-container](/docs/tasks/debug-application-cluster/debug-init-containers/). +Learn more about [debugging an init-container](/docs/tasks/troubleshoot/debug-init-containers/). {% endcapture %} diff --git a/docs/tasks/debug-application-cluster/resource-usage-monitoring.md b/docs/tasks/debug-application-cluster/resource-usage-monitoring.md index 45ac03b5865c8..9ca48d9bd0373 100644 --- a/docs/tasks/debug-application-cluster/resource-usage-monitoring.md +++ b/docs/tasks/debug-application-cluster/resource-usage-monitoring.md @@ -4,7 +4,7 @@ approvers: title: Tools for Monitoring Compute, Storage, and Network Resources --- -Understanding how an application behaves when deployed is crucial to scaling the application and providing a reliable service. In a Kubernetes cluster, application performance can be examined at many different levels: containers, [pods](/docs/user-guide/pods), [services](/docs/concepts/services-networking/service/), and whole clusters. As part of Kubernetes we want to provide users with detailed resource usage information about their running applications at all these levels. This will give users deep insights into how their applications are performing and where possible application bottlenecks may be found. In comes [Heapster](https://github.com/kubernetes/heapster), a project meant to provide a base monitoring platform on Kubernetes. +Understanding how an application behaves when deployed is crucial to scaling the application and providing a reliable service. In a Kubernetes cluster, application performance can be examined at many different levels: containers, [pods](/docs/user-guide/pods), [services](/docs/user-guide/services), and whole clusters. As part of Kubernetes we want to provide users with detailed resource usage information about their running applications at all these levels. This will give users deep insights into how their applications are performing and where possible application bottlenecks may be found. In comes [Heapster](https://github.com/kubernetes/heapster), a project meant to provide a base monitoring platform on Kubernetes. 
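When Heapster is deployed as described above, `kubectl` can surface per-node and per-pod usage directly; this sketch assumes the monitoring add-on is installed in the cluster:

```shell
# Show current CPU and memory consumption at the node and pod levels.
kubectl top node
kubectl top pod --all-namespaces
```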
## Overview diff --git a/docs/tasks/federation/federation-service-discovery.md b/docs/tasks/federation/federation-service-discovery.md index 80ba72bb8c1f9..a30910af72837 100644 --- a/docs/tasks/federation/federation-service-discovery.md +++ b/docs/tasks/federation/federation-service-discovery.md @@ -25,7 +25,7 @@ this for you). Other tutorials, for example by Kelsey Hightower, are also available to help you. You are also expected to have a basic -[working knowledge of Kubernetes](/docs/home/) in +[working knowledge of Kubernetes](/docs/getting-started-guides/) in general, and [Services](/docs/concepts/services-networking/service/) in particular. ## Overview diff --git a/docs/tasks/federation/set-up-cluster-federation-kubefed.md b/docs/tasks/federation/set-up-cluster-federation-kubefed.md index 1786cf183f338..8f6b970dde776 100644 --- a/docs/tasks/federation/set-up-cluster-federation-kubefed.md +++ b/docs/tasks/federation/set-up-cluster-federation-kubefed.md @@ -21,7 +21,7 @@ using `kubefed`. ## Prerequisites This guide assumes that you have a running Kubernetes cluster. Please -see one of the [getting started](/docs/home/) guides +see one of the [getting started](/docs/getting-started-guides/) guides for installation instructions for your platform. @@ -367,7 +367,7 @@ kubefed init fellowship \ ``` For more information see -[Setting up CoreDNS as DNS provider for Cluster Federation](/docs/tasks/federation/set-up-coredns-provider-federation/). +[Setting up CoreDNS as DNS provider for Cluster Federation](/docs/tutorials/federation/set-up-coredns-provider-federation/). ## Adding a cluster to a federation diff --git a/docs/tasks/federation/set-up-coredns-provider-federation.md b/docs/tasks/federation/set-up-coredns-provider-federation.md index c0cf27780269b..4268245dbaf73 100644 --- a/docs/tasks/federation/set-up-coredns-provider-federation.md +++ b/docs/tasks/federation/set-up-coredns-provider-federation.md @@ -23,7 +23,7 @@ DNS provider for Cluster Federation. * You need to have a running Kubernetes cluster (which is referenced as host cluster). Please see one of the -[getting started](/docs/home/) guides for +[getting started](/docs/getting-started-guides/) guides for installation instructions for your platform. * Support for `LoadBalancer` services in member clusters of federation is mandatory to enable `CoreDNS` for service discovery across federated clusters. diff --git a/docs/tasks/federation/set-up-placement-policies-federation.md b/docs/tasks/federation/set-up-placement-policies-federation.md index 460055f0b6c1d..a5dd281593abd 100644 --- a/docs/tasks/federation/set-up-placement-policies-federation.md +++ b/docs/tasks/federation/set-up-placement-policies-federation.md @@ -12,7 +12,7 @@ resources using an external policy engine. {% capture prerequisites %} You need to have a running Kubernetes cluster (which is referenced as host -cluster). Please see one of the [getting started](/docs/home/) +cluster). Please see one of the [getting started](/docs/getting-started-guides/) guides for installation instructions for your platform. {% endcapture %} diff --git a/docs/tasks/job/parallel-processing-expansion.md b/docs/tasks/job/parallel-processing-expansion.md index 7feb9c7602a4f..f8fac8066ec0f 100644 --- a/docs/tasks/job/parallel-processing-expansion.md +++ b/docs/tasks/job/parallel-processing-expansion.md @@ -109,7 +109,7 @@ Processing item cherry In the first example, each instance of the template had one parameter, and that parameter was also used as a label. 
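That single-parameter expansion can be sketched like this, assuming a `job-tmpl.yaml` containing an `$ITEM` placeholder; the file name and the item list are illustrative:

```shell
# Stamp out one Job manifest per work item, then create them all.
mkdir -p ./jobs
for item in apple banana cherry; do
  sed "s/\$ITEM/$item/" job-tmpl.yaml > ./jobs/job-$item.yaml
done
kubectl create -f ./jobs
```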
However label keys are limited in [what characters they can -contain](/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set). +contain](/docs/user-guide/labels/#syntax-and-character-set). This slightly more complex example uses the jinja2 template language to generate our objects. We will use a one-line python script to convert the template to a file. diff --git a/docs/tasks/manage-daemon/update-daemon-set.md b/docs/tasks/manage-daemon/update-daemon-set.md index 46a5823218b6e..653eec57a145c 100644 --- a/docs/tasks/manage-daemon/update-daemon-set.md +++ b/docs/tasks/manage-daemon/update-daemon-set.md @@ -159,7 +159,7 @@ causes: The rollout is stuck because new DaemonSet pods can't be scheduled on at least one node. This is possible when the node is -[running out of resources](/docs/tasks/administer-cluster/out-of-resource/). +[running out of resources](/docs/concepts/cluster-administration/out-of-resource/). When this happens, find the nodes that don't have the DaemonSet pods scheduled on by comparing the output of `kubectl get nodes` and the output of: diff --git a/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index 8416b7ec495ca..6d23d7d008a91 100644 --- a/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -18,7 +18,7 @@ This document walks you through an example of enabling Horizontal Pod Autoscalin This example requires a running Kubernetes cluster and kubectl, version 1.2 or later. [Heapster](https://github.com/kubernetes/heapster) monitoring needs to be deployed in the cluster as Horizontal Pod Autoscaler uses it to collect metrics -(if you followed [getting started on GCE guide](/docs/home/gce/), +(if you followed [getting started on GCE guide](/docs/getting-started-guides/gce), heapster monitoring will be turned-on by default). To specify multiple resource metrics for a Horizontal Pod Autoscaler, you must have a Kubernetes cluster diff --git a/docs/tasks/run-application/run-replicated-stateful-application.md b/docs/tasks/run-application/run-replicated-stateful-application.md index 6c86b8d0428c8..9613bb2437d2d 100644 --- a/docs/tasks/run-application/run-replicated-stateful-application.md +++ b/docs/tasks/run-application/run-replicated-stateful-application.md @@ -13,7 +13,7 @@ title: Run a Replicated Stateful Application {% capture overview %} This page shows how to run a replicated stateful application using a -[StatefulSet](/docs/concepts/workloads/controllers/statefulset/) controller. +[StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/) controller. The example is a MySQL single-master topology with multiple slaves running asynchronous replication. @@ -29,7 +29,7 @@ on general patterns for running stateful applications in Kubernetes. * {% include default-storage-class-prereqs.md %} * This tutorial assumes you are familiar with [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) - and [StatefulSets](/docs/concepts/workloads/controllers/statefulset/), + and [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/), as well as other core concepts like [Pods](/docs/concepts/workloads/pods/pod/), [Services](/docs/concepts/services-networking/service/), and [ConfigMaps](/docs/tasks/configure-pod-container/configmap/). 
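For the stuck-DaemonSet diagnosis described above, the comparison can start with commands like these; `my-daemonset` and its label are illustrative:

```shell
# Watch the rollout, then compare scheduled pods against the node list.
kubectl rollout status ds/my-daemonset
kubectl get pods -l app=my-daemonset -o wide
kubectl get nodes
```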
@@ -169,7 +169,7 @@ Because the example topology consists of a single MySQL master and any number of slaves, the script simply assigns ordinal `0` to be the master, and everyone else to be slaves. Combined with the StatefulSet controller's -[deployment order guarantee](/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantee), +[deployment order guarantee](/docs/concepts/abstractions/controllers/statefulsets/#deployment-and-scaling-guarantee), this ensures the MySQL master is Ready before creating slaves, so they can begin replicating. diff --git a/docs/tasks/tools/install-kubectl.md b/docs/tasks/tools/install-kubectl.md index 095f3a1a7d11d..5d52e04c74868 100644 --- a/docs/tasks/tools/install-kubectl.md +++ b/docs/tasks/tools/install-kubectl.md @@ -130,7 +130,7 @@ Edit the config file with a text editor of your choice, such as Notepad for exam ## Configure kubectl -In order for kubectl to find and access a Kubernetes cluster, it needs a [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/), which is created automatically when you create a cluster using kube-up.sh or successfully deploy a Minikube cluster. See the [getting started guides](/docs/home/) for more about creating clusters. If you need access to a cluster you didn't create, see the [Sharing Cluster Access document](/docs/tasks/administer-cluster/share-configuration/). +In order for kubectl to find and access a Kubernetes cluster, it needs a [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/), which is created automatically when you create a cluster using kube-up.sh or successfully deploy a Minikube cluster. See the [getting started guides](/docs/getting-started-guides/) for more about creating clusters. If you need access to a cluster you didn't create, see the [Sharing Cluster Access document](/docs/tasks/administer-cluster/share-configuration/). By default, kubectl configuration is located at `~/.kube/config`. ## Check the kubectl configuration diff --git a/docs/tasks/tools/install-minikube.md b/docs/tasks/tools/install-minikube.md index fec054e4ab409..3246073871522 100644 --- a/docs/tasks/tools/install-minikube.md +++ b/docs/tasks/tools/install-minikube.md @@ -46,7 +46,7 @@ If you do not already have a hypervisor installed, install one now. {% capture whatsnext %} -* [Running Kubernetes Locally via Minikube](/docs/home/minikube/) +* [Running Kubernetes Locally via Minikube](/docs/getting-started-guides/minikube/) {% endcapture %} diff --git a/docs/tools/index.md b/docs/tools/index.md index 66817843f47d4..b4ba12ece0ed6 100644 --- a/docs/tools/index.md +++ b/docs/tools/index.md @@ -16,7 +16,7 @@ Kubernetes contains the following built-in tools: ##### Kubeadm -[`kubeadm`](/docs/home/kubeadm/) is the command line tool for easily provisioning a secure Kubernetes cluster on top of physical or cloud servers or virtual machines (currently in alpha). +[`kubeadm`](/docs/getting-started-guides/kubeadm/) is the command line tool for easily provisioning a secure Kubernetes cluster on top of physical or cloud servers or virtual machines (currently in alpha). ##### Kubefed @@ -25,7 +25,7 @@ to help you administrate your federated clusters. ##### Minikube -[`minikube`](/docs/home/minikube/) is a tool that makes it +[`minikube`](/docs/getting-started-guides/minikube/) is a tool that makes it easy to run a single-node Kubernetes cluster locally on your workstation for development and testing purposes. 
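A quick way to verify the kubeconfig setup that the kubectl install page above describes:

```shell
# Print the merged kubeconfig, then confirm the cluster is reachable.
kubectl config view
kubectl cluster-info
```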
diff --git a/docs/tutorials/stateful-application/basic-stateful-set.md b/docs/tutorials/stateful-application/basic-stateful-set.md index c5287cc5aa251..21f63cfbfedee 100644 --- a/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/docs/tutorials/stateful-application/basic-stateful-set.md @@ -11,7 +11,7 @@ title: StatefulSet Basics {% capture overview %} This tutorial provides an introduction to managing applications with -[StatefulSets](/docs/concepts/workloads/controllers/statefulset/). It +[StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/). It demonstrates how to create, delete, scale, and update the Pods of StatefulSets. {% endcapture %} @@ -24,7 +24,7 @@ following Kubernetes concepts. * [Headless Services](/docs/concepts/services-networking/service/#headless-services) * [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) * [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/persistent-volume-provisioning/) -* [StatefulSets](/docs/concepts/workloads/controllers/statefulset/) +* [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) * [kubectl CLI](/docs/user-guide/kubectl) This tutorial assumes that your cluster is configured to dynamically provision @@ -54,7 +54,7 @@ After this tutorial, you will be familiar with the following. Begin by creating a StatefulSet using the example below. It is similar to the example presented in the -[StatefulSets](/docs/concepts/workloads/controllers/statefulset/) concept. +[StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) concept. It creates a [Headless Service](/docs/concepts/services-networking/service/#headless-services), `nginx`, to publish the IP addresses of Pods in the StatefulSet, `web`. @@ -133,7 +133,7 @@ web-1 1/1 Running 0 1m ``` -As mentioned in the [StatefulSets](/docs/concepts/workloads/controllers/statefulset/) +As mentioned in the [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) concept, the Pods in a StatefulSet have a sticky, unique identity. This identity is based on a unique ordinal index that is assigned to each Pod by the StatefulSet controller. The Pods' names take the form diff --git a/docs/tutorials/stateful-application/cassandra.md b/docs/tutorials/stateful-application/cassandra.md index 48e8b3202fb3a..1b729dfd42816 100644 --- a/docs/tutorials/stateful-application/cassandra.md +++ b/docs/tutorials/stateful-application/cassandra.md @@ -45,7 +45,7 @@ To complete this tutorial, you should already have a basic familiarity with [Pod ### Additional Minikube Setup Instructions -**Caution:** [Minikube](/docs/home/minikube/) defaults to 1024MB of memory and 1 CPU which results in an insufficient resource errors during this tutorial. +**Caution:** [Minikube](/docs/getting-started-guides/minikube/) defaults to 1024MB of memory and 1 CPU, which results in insufficient resource errors during this tutorial.
{: .caution} To avoid these errors, run minikube with: diff --git a/docs/tutorials/stateful-application/zookeeper.md b/docs/tutorials/stateful-application/zookeeper.md index 64cff77a855f1..9ad45caef903f 100644 --- a/docs/tutorials/stateful-application/zookeeper.md +++ b/docs/tutorials/stateful-application/zookeeper.md @@ -12,9 +12,9 @@ title: Running ZooKeeper, A CP Distributed System {% capture overview %} This tutorial demonstrates [Apache Zookeeper](https://zookeeper.apache.org) on -Kubernetes using [StatefulSets](/docs/concepts/workloads/controllers/statefulset/), +Kubernetes using [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/), [PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#specifying-a-poddisruptionbudget), -and [PodAntiAffinity](/docs/concepts/configuration/assign-pod-node//#inter-pod-affinity-and-anti-affinity-beta-feature). +and [PodAntiAffinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature). {% endcapture %} {% capture prerequisites %} @@ -28,9 +28,9 @@ Kubernetes concepts. * [PersistentVolumes](/docs/concepts/storage/volumes/) * [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/persistent-volume-provisioning/) * [ConfigMaps](/docs/tasks/configure-pod-container/configmap/) -* [StatefulSets](/docs/concepts/workloads/controllers/statefulset/) +* [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) * [PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#specifying-a-poddisruptionbudget) -* [PodAntiAffinity](/docs/concepts/configuration/assign-pod-node//#inter-pod-affinity-and-anti-affinity-beta-feature) +* [PodAntiAffinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature) * [kubectl CLI](/docs/user-guide/kubectl) You will require a cluster with at least four nodes, and each node will require @@ -92,7 +92,7 @@ The manifest below contains a [Headless Service](/docs/concepts/services-networking/service/#headless-services), a [ConfigMap](/docs/tasks/configure-pod-container/configmap/), a [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions//#specifying-a-poddisruptionbudget), -and a [StatefulSet](/docs/concepts/workloads/controllers/statefulset/). +and a [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/). {% include code.html language="yaml" file="zookeeper.yaml" ghlink="/docs/tutorials/stateful-application/zookeeper.yaml" %} diff --git a/docs/tutorials/stateless-application/hello-minikube.md b/docs/tutorials/stateless-application/hello-minikube.md index 9268b700b5c2d..0e4e21fc55199 100644 --- a/docs/tutorials/stateless-application/hello-minikube.md +++ b/docs/tutorials/stateless-application/hello-minikube.md @@ -7,7 +7,7 @@ title: Hello Minikube The goal of this tutorial is for you to turn a simple Hello World Node.js app into an application running on Kubernetes. The tutorial shows you how to take code that you have developed on your machine, turn it into a Docker -container image and then run that image on [Minikube](/docs/home/minikube). +container image and then run that image on [Minikube](/docs/getting-started-guides/minikube). Minikube provides a simple way of running Kubernetes on your local machine for free. {% endcapture %} @@ -45,7 +45,7 @@ create a local cluster. This tutorial also assumes you are using on OS X. 
If you are on a different platform like Linux, or using VirtualBox instead of Docker for Mac, the instructions to install Minikube may be slightly different. For general Minikube installation instructions, see -the [Minikube installation guide](/docs/home/minikube/). +the [Minikube installation guide](/docs/getting-started-guides/minikube/). Use `curl` to download and install the latest Minikube release: diff --git a/docs/user-guide/docker-cli-to-kubectl.md b/docs/user-guide/docker-cli-to-kubectl.md index 0ef2f42878258..2f4b4b7948303 100644 --- a/docs/user-guide/docker-cli-to-kubectl.md +++ b/docs/user-guide/docker-cli-to-kubectl.md @@ -43,7 +43,7 @@ $ kubectl expose deployment nginx-app --port=80 --name=nginx-http service "nginx-http" exposed ``` -With kubectl, we create a [Deployment](/docs/concepts/workloads/controllers/deployment/) which will make sure that N pods are running nginx (where N is the number of replicas stated in the spec, which defaults to 1). We also create a [service](/docs/concepts/services-networking/service/) with a selector that matches the Deployment's selector. See the [Quick start](/docs/user-guide/quick-start) for more information. +With kubectl, we create a [Deployment](/docs/concepts/workloads/controllers/deployment/) which will make sure that N pods are running nginx (where N is the number of replicas stated in the spec, which defaults to 1). We also create a [service](/docs/user-guide/services) with a selector that matches the Deployment's selector. See the [Quick start](/docs/user-guide/quick-start) for more information. By default images are run in the background, similar to `docker run -d ...`, if you want to run things in the foreground, use: diff --git a/docs/user-guide/update-demo/index.md.orig b/docs/user-guide/update-demo/index.md.orig index bfb600686ef42..c6fbc3bf8c634 100644 --- a/docs/user-guide/update-demo/index.md.orig +++ b/docs/user-guide/update-demo/index.md.orig @@ -11,7 +11,7 @@ here](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch} ### Step Zero: Prerequisites -This example assumes that you have forked the docs repository and [turned up a Kubernetes cluster](/docs/home/): +This example assumes that you have forked the docs repository and [turned up a Kubernetes cluster](/docs/getting-started-guides/): ```shell $ git clone -b {{page.docsbranch}} https://github.com/kubernetes/kubernetes.github.io diff --git a/docs/user-guide/walkthrough/k8s201.md b/docs/user-guide/walkthrough/k8s201.md index b9d659c05f9a5..f5f42d7120473 100644 --- a/docs/user-guide/walkthrough/k8s201.md +++ b/docs/user-guide/walkthrough/k8s201.md @@ -46,7 +46,7 @@ List all Pods with the label `app=nginx`: kubectl get pods -l app=nginx ``` -For more information, see [Labels](/docs/concepts/overview/working-with-objects/labels/). +For more information, see [Labels](/docs/user-guide/labels/). They are a core concept used by two additional Kubernetes building blocks: Deployments and Services. 
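The label queries that walkthrough builds on support both equality-based and set-based selectors; a brief sketch with illustrative values:

```shell
# Equality-based and set-based label selection.
kubectl get pods -l app=nginx
kubectl get pods -l 'app in (nginx,frontend)' --show-labels
```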
From dc3c4305e0f569fbbfa29b987cac47228dbb7495 Mon Sep 17 00:00:00 2001 From: jianglingxia Date: Tue, 26 Sep 2017 04:57:42 +0800 Subject: [PATCH 51/87] update page version and fix 404 error (#5612) --- .../create-external-load-balancer.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/tasks/access-application-cluster/create-external-load-balancer.md b/docs/tasks/access-application-cluster/create-external-load-balancer.md index e3d69fdd581c9..1ac15d13b2ebd 100644 --- a/docs/tasks/access-application-cluster/create-external-load-balancer.md +++ b/docs/tasks/access-application-cluster/create-external-load-balancer.md @@ -25,7 +25,7 @@ cluster nodes _provided your cluster runs in a supported environment and is conf ## Configuration file To create an external load balancer, add the following line to your -[service configuration file](/docs/concepts/services-networking/service/operations/#service-configuration-file): +[service configuration file](/docs/concepts/services-networking/service/#type-loadbalancer): ```json "type": "LoadBalancer" ``` @@ -68,7 +68,7 @@ resource (in the case of the example above, a replication controller named `example`). For more information, including optional flags, refer to the -[`kubectl expose` reference](/docs/user-guide/kubectl/v1.6/#expose). +[`kubectl expose` reference](/docs/user-guide/kubectl/{{page.version}}/#expose). ## Finding your IP address
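Once the cloud provider has provisioned the balancer, one way to find the address; the service name `example-service` is illustrative:

```shell
# The external address appears as "LoadBalancer Ingress" in describe output.
kubectl describe services example-service | grep 'LoadBalancer Ingress'
```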
(#5547) Resolving merge conflicts --- docs/getting-started-guides/dcos.md | 145 ++-------------------------- 1 file changed, 9 insertions(+), 136 deletions(-) diff --git a/docs/getting-started-guides/dcos.md b/docs/getting-started-guides/dcos.md index 816bd288a5276..23ad7912dd65d 100644 --- a/docs/getting-started-guides/dcos.md +++ b/docs/getting-started-guides/dcos.md @@ -1,143 +1,16 @@ --- approvers: -- karlkfi -title: DCOS +- smugcloud +title: Kubernetes on DCOS --- -{% assign for_k8s_version="1.6" %}{% include feature-state-deprecated.md %} +Mesosphere provides an easy option to provision Kubernetes onto [DC/OS](https://mesosphere.com/product/), offering: -This guide will walk you through installing [Kubernetes-Mesos](https://github.com/mesosphere/kubernetes-mesos) on [Datacenter Operating System (DCOS)](https://mesosphere.com/product/) with the [DCOS CLI](https://github.com/mesosphere/dcos-cli) and operating Kubernetes with the [DCOS Kubectl plugin](https://github.com/mesosphere/dcos-kubectl). +* Pure upstream Kubernetes +* Single-click cluster provisioning +* Highly available and secure by default +* Kubernetes running alongside fast-data platforms (e.g. Akka, Cassandra, Kafka, Spark) -* TOC -{:toc} +## Official Mesosphere Guide - -## About Kubernetes on DCOS - -DCOS is system software that manages computer cluster hardware and software resources and provides common services for distributed applications. Among other services, it provides [Apache Mesos](http://mesos.apache.org/) as its cluster kernel and [Marathon](https://mesosphere.github.io/marathon/) as its init system. With DCOS CLI, Mesos frameworks like [Kubernetes-Mesos](https://github.com/mesosphere/kubernetes-mesos) can be installed with a single command. - -Another feature of the DCOS CLI is that it allows plugins like the [DCOS Kubectl plugin](https://github.com/mesosphere/dcos-kubectl). This allows for easy access to a version-compatible Kubectl without having to manually download or install. - -Further information about the benefits of installing Kubernetes on DCOS can be found in the [Kubernetes-Mesos documentation](https://releases.k8s.io/{{page.githubbranch}}/contrib/mesos/README.md). - -For more details about the Kubernetes DCOS packaging, see the [Kubernetes-Mesos project](https://github.com/mesosphere/kubernetes-mesos). - -Since Kubernetes-Mesos is still alpha, it is a good idea to familiarize yourself with the [current known issues](https://releases.k8s.io/{{page.githubbranch}}/contrib/mesos/docs/issues.md) which may limit or modify the behavior of Kubernetes on DCOS. - -If you have problems completing the steps below, please [file an issue against the kubernetes-mesos project](https://github.com/mesosphere/kubernetes-mesos/issues). - - -## Resources - -Explore the following resources for more information about Kubernetes, Kubernetes on Mesos/DCOS, and DCOS itself. 
- -- [DCOS Documentation](https://docs.mesosphere.com/) -- [Managing DCOS Services](https://docs.mesosphere.com/services/kubernetes/) -- [Kubernetes Examples](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/) -- [Kubernetes on Mesos Documentation](https://github.com/kubernetes-incubator/kube-mesos-framework/blob/master/README.md) -- [Kubernetes on Mesos Release Notes](https://github.com/mesosphere/kubernetes-mesos/releases) -- [Kubernetes on DCOS Package Source](https://github.com/mesosphere/kubernetes-mesos) - - -## Prerequisites - -- A running [DCOS cluster](https://mesosphere.com/product/) - - [DCOS Community Edition](https://docs.mesosphere.com/1.7/archived-dcos-enterprise-edition/installing-enterprise-edition-1-6/cloud/) is currently available on [AWS](https://mesosphere.com/amazon/). - - [DCOS Enterprise Edition](https://mesosphere.com/product/) can be deployed on virtual or bare metal machines. Contact sales@mesosphere.com for more info and to set up an engagement. -- [DCOS CLI](https://docs.mesosphere.com/install/cli/) installed locally - - -## Install - -1. Configure and validate the [Mesosphere Multiverse](https://github.com/mesosphere/multiverse) as a package source repository - - ```shell -$ dcos config prepend package.sources https://github.com/mesosphere/multiverse/archive/version-1.x.zip - $ dcos package update --validate - ``` -2. Install etcd - - By default, the Kubernetes DCOS package starts a single-node etcd. In order to avoid state loss in the event of Kubernetes component container failure, install an HA [etcd-mesos](https://github.com/mesosphere/etcd-mesos) cluster on DCOS. - - ```shell -$ dcos package install etcd - ``` -3. Verify that etcd is installed and healthy - - The etcd cluster takes a short while to deploy. Verify that `/etcd` is healthy before going on to the next step. - - ```shell -$ dcos marathon app list - ID MEM CPUS TASKS HEALTH DEPLOYMENT CONTAINER CMD - /etcd 128 0.2 1/1 1/1 --- DOCKER None - ``` -4. Create Kubernetes installation configuration - - Configure Kubernetes to use the HA etcd installed on DCOS. - - ```shell -$ cat >/tmp/options.json < Date: Tue, 26 Sep 2017 00:50:11 +0300 Subject: [PATCH 54/87] Update organize-cluster-access-kubeconfig.md (#5611) * Update organize-cluster-access-kubeconfig.md Explain that context is just a named group for convenience, and that current context is used by default if no other params are present * Update organize-cluster-access-kubeconfig.md --- .../organize-cluster-access-kubeconfig.md | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-) diff --git a/docs/concepts/configuration/organize-cluster-access-kubeconfig.md b/docs/concepts/configuration/organize-cluster-access-kubeconfig.md index 78a63da117d35..3709914bd5ec2 100644 --- a/docs/concepts/configuration/organize-cluster-access-kubeconfig.md +++ b/docs/concepts/configuration/organize-cluster-access-kubeconfig.md @@ -37,16 +37,20 @@ in a variety of ways. For example: - Administrators might have sets of certificates that they provide to individual users. With kubeconfig files, you can organize your clusters, users, and namespaces. -And you can define contexts that enable users to quickly and easily switch between +You can also define contexts to quickly and easily switch between clusters and namespaces. ## Context -A kubeconfig file can have *context* elements. Each context is a triple -(cluster, namespace, user). You can use `kubectl config use-context` to set -the current context. 
The `kubectl` command-line tool communicates with the -cluster and namespace listed in the current context. And it uses the -credentials of the user listed in the current context. +A *context* element in a kubeconfig file is used to group access parameters +under a convenient name. Each context has three parameters: cluster, namespace, and user. +By default, the `kubectl` command-line tool uses parameters from +the *current context* to communicate with the cluster. + +To choose the current context: +``` +kubectl config use-context +``` ## The KUBECONFIG environment variable From d9ccdfa85ca5b07d86adf759b00f858d8d6e599d Mon Sep 17 00:00:00 2001 From: Quentin Revel Date: Tue, 26 Sep 2017 00:46:33 +0200 Subject: [PATCH 55/87] Update mysql-wordpress-persistent-volume.md (#5155) From d28f19900eea9bbc6057f83326615445514e3251 Mon Sep 17 00:00:00 2001 From: Vitaliy Tverdokhlib Date: Tue, 12 Sep 2017 17:12:55 +0200 Subject: [PATCH 56/87] Update zookeeper.md --- docs/tutorials/stateful-application/zookeeper.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tutorials/stateful-application/zookeeper.md b/docs/tutorials/stateful-application/zookeeper.md index 9ad45caef903f..fd1c2987857f6 100644 --- a/docs/tutorials/stateful-application/zookeeper.md +++ b/docs/tutorials/stateful-application/zookeeper.md @@ -1000,7 +1000,7 @@ This is because the Pods in the `zk` StatefulSet have a PodAntiAffinity specifie topologyKey: "kubernetes.io/hostname" ``` -The `requiredDuringSchedulingRequiredDuringExecution` field tells the +The `requiredDuringSchedulingIgnoredDuringExecution` field tells the Kubernetes Scheduler that it should never co-locate two Pods from the `zk-headless` Service in the domain defined by the `topologyKey`. The `topologyKey` `kubernetes.io/hostname` indicates that the domain is an individual node. Using From 56006b8cdcef6c2f9617edb320165dffd024d632 Mon Sep 17 00:00:00 2001 From: Nikhita Raghunath Date: Tue, 26 Sep 2017 02:38:25 +0530 Subject: [PATCH 57/87] Fix link after design proposal move --- docs/concepts/policy/resource-quotas.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/policy/resource-quotas.md b/docs/concepts/policy/resource-quotas.md index 2d6935b786a12..f609be52bf6f6 100644 --- a/docs/concepts/policy/resource-quotas.md +++ b/docs/concepts/policy/resource-quotas.md @@ -237,4 +237,4 @@ See a [detailed example for how to use resource quota](/docs/tasks/administer-cl ## Read More -See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/admission_control_resource_quota.md) for more information. +See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information. 
From fd46c299f4b01896f8499bd228ef65d2d85a24b5 Mon Sep 17 00:00:00 2001 From: jianglingxia Date: Mon, 25 Sep 2017 15:26:15 +0800 Subject: [PATCH 58/87] api-reference add version number --- docs/tasks/configure-pod-container/assign-cpu-resource.md | 2 +- docs/tasks/configure-pod-container/assign-memory-resource.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/tasks/configure-pod-container/assign-cpu-resource.md b/docs/tasks/configure-pod-container/assign-cpu-resource.md index 083b43855b43d..81da766f9827d 100644 --- a/docs/tasks/configure-pod-container/assign-cpu-resource.md +++ b/docs/tasks/configure-pod-container/assign-cpu-resource.md @@ -219,7 +219,7 @@ could use all of the CPU resources available on the Node where it is running. * The Container is running in a namespace that has a default CPU limit, and the Container is automatically assigned the default limit. Cluster administrators can use a -[LimitRange](https://kubernetes.io/docs/api-reference/v1.6/) +[LimitRange](https://kubernetes.io/docs/api-reference/v1.7/#limitrange-v1-core/) to specify a default value for the CPU limit. ## Motivation for CPU requests and limits diff --git a/docs/tasks/configure-pod-container/assign-memory-resource.md b/docs/tasks/configure-pod-container/assign-memory-resource.md index d717828345c21..bc3a4bf90857b 100644 --- a/docs/tasks/configure-pod-container/assign-memory-resource.md +++ b/docs/tasks/configure-pod-container/assign-memory-resource.md @@ -313,7 +313,7 @@ could use all of the memory available on the Node where it is running. * The Container is running in a namespace that has a default memory limit, and the Container is automatically assigned the default limit. Cluster administrators can use a -[LimitRange](https://kubernetes.io/docs/api-reference/v1.6/) +[LimitRange](https://kubernetes.io/docs/api-reference/v1.7/#limitrange-v1-core) to specify a default value for the memory limit. ## Motivation for memory requests and limits From bd1ef6e4647b5f9b4b9bb9974b48744225b60d3d Mon Sep 17 00:00:00 2001 From: Yash Thakkar Date: Mon, 25 Sep 2017 04:09:28 +0530 Subject: [PATCH 59/87] Fixed hyperlinks for different ConfigMap headers ConfigMap header names start with "create", but the links said "creating", so the hyperlinks were not working. --- docs/tasks/configure-pod-container/configmap.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tasks/configure-pod-container/configmap.md b/docs/tasks/configure-pod-container/configmap.md index 352ea85c11bd7..8236fd4ceba71 100644 --- a/docs/tasks/configure-pod-container/configmap.md +++ b/docs/tasks/configure-pod-container/configmap.md @@ -23,7 +23,7 @@ This page shows you how to configure an application using a ConfigMap. ConfigMap ## Use kubectl to create a ConfigMap -Use the `kubectl create configmap` command to create configmaps from [directories](#creating-configmaps-from-directories), [files](#creating-configmaps-from-files), or [literal values](#creating-configmaps-from-literal-values): +Use the `kubectl create configmap` command to create configmaps from [directories](#create-configmaps-from-directories), [files](#create-configmaps-from-files), or [literal values](#create-configmaps-from-literal-values): ```shell kubectl create configmap From cb795036b13644443ace9c40468558c653394f82 Mon Sep 17 00:00:00 2001 From: Michal Skalski Date: Fri, 22 Sep 2017 18:23:04 +0200 Subject: [PATCH 60/87] Remove outdated link Flannel combined RBAC info into main manifest [1].
[1] https://github.com/coreos/flannel/commit/a154d2f68edd511498c948e33c8cbde20a5901ee --- docs/setup/independent/create-cluster-kubeadm.md | 1 - 1 file changed, 1 deletion(-) diff --git a/docs/setup/independent/create-cluster-kubeadm.md b/docs/setup/independent/create-cluster-kubeadm.md index ccd155adaafef..5e24b6e892075 100644 --- a/docs/setup/independent/create-cluster-kubeadm.md +++ b/docs/setup/independent/create-cluster-kubeadm.md @@ -249,7 +249,6 @@ kubectl apply -f https://raw.githubusercontent.com/projectcalico/canal/master/k8 ```shell kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml -kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml ``` {% endcapture %} From 69114e0cfef71b339d292765ac5f87bb4c2cb771 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Michael=20Vorburger=20=E2=9B=91=EF=B8=8F?= Date: Sun, 24 Sep 2017 03:20:08 +0200 Subject: [PATCH 61/87] Remove 3 links in jobs-run-to-completion.md which go nowhere these links don't go anywhere anymore (the respective pages must have been moved?), and are more of a distraction than adding any real value when reading that paragraph. --- docs/concepts/workloads/controllers/jobs-run-to-completion.md | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/docs/concepts/workloads/controllers/jobs-run-to-completion.md index 4ba2f74b006ad..398037fa6426d 100644 --- a/docs/concepts/workloads/controllers/jobs-run-to-completion.md +++ b/docs/concepts/workloads/controllers/jobs-run-to-completion.md @@ -92,9 +92,7 @@ $ kubectl logs $pods ## Writing a Job Spec -As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields. For -general information about working with config files, see [here](/docs/user-guide/simple-yaml), -[here](/docs/user-guide/configuring-containers), and [here](/docs/user-guide/working-with-resources). +As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields. A Job also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status). From 248c1c854c2f598445df02730957e16c7ca2e582 Mon Sep 17 00:00:00 2001 From: lostlivio Date: Fri, 22 Sep 2017 10:18:30 -0700 Subject: [PATCH 62/87] Updated outdated information regarding API deprecation policy with pointer to current information --- docs/concepts/overview/kubernetes-api.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/overview/kubernetes-api.md b/docs/concepts/overview/kubernetes-api.md index a6d38679a59d6..b74650ff8db42 100644 --- a/docs/concepts/overview/kubernetes-api.md +++ b/docs/concepts/overview/kubernetes-api.md @@ -18,7 +18,7 @@ Kubernetes itself is decomposed into multiple components, which interact through ## API changes -In our experience, any system that is successful needs to grow and change as new use cases emerge or existing ones change. Therefore, we expect the Kubernetes API to continuously change and grow. However, we intend to not break compatibility with existing clients, for an extended period of time. In general, new API resources and new resource fields can be expected to be added frequently. Elimination of resources or fields will require following a deprecation process. The precise deprecation policy for eliminating features is TBD, but once we reach our 1.0 milestone, there will be a specific policy. 
+In our experience, any system that is successful needs to grow and change as new use cases emerge or existing ones change. Therefore, we expect the Kubernetes API to continuously change and grow. However, we intend to not break compatibility with existing clients, for an extended period of time. In general, new API resources and new resource fields can be expected to be added frequently. Elimination of resources or fields will require following the [API deprecation policy](https://kubernetes.io/docs/reference/deprecation-policy/). What constitutes a compatible change and how to change the API are detailed by the [API change document](https://git.k8s.io/community/contributors/devel/api_changes.md). From 779c64dd15381b9ed75049638e995badafa88fcc Mon Sep 17 00:00:00 2001 From: Steve Perry Date: Mon, 25 Sep 2017 16:06:20 -0700 Subject: [PATCH 63/87] Update links to avoid redirects. (#5622) --- docs/admin/admission-controllers.md | 2 +- .../apps/v1beta1/definitions.html | 16 ++++---- docs/api-reference/batch/v1/definitions.html | 6 +-- .../extensions/v1beta1/definitions.html | 6 +-- docs/api-reference/v1.5/index.html | 38 +++++++++--------- docs/api-reference/v1.6/index.html | 40 +++++++++---------- docs/concepts/policy/resource-quotas.md | 2 +- docs/concepts/storage/volumes.md | 2 +- docs/resources-reference/v1.5/index.html | 38 +++++++++--------- docs/resources-reference/v1.6/index.html | 40 +++++++++---------- 10 files changed, 95 insertions(+), 95 deletions(-) diff --git a/docs/admin/admission-controllers.md b/docs/admin/admission-controllers.md index e25bc55fcb506..27f7d868f13fe 100644 --- a/docs/admin/admission-controllers.md +++ b/docs/admin/admission-controllers.md @@ -71,7 +71,7 @@ class is marked as default, it rejects any creation of `PersistentVolumeClaim` w must revisit `StorageClass` objects and mark only one as default. This plugin ignores any `PersistentVolumeClaim` updates; it acts only on creation. -See [persistent volume](/docs/user-guide/persistent-volumes) documentation about persistent volume claims and +See [persistent volume](/docs/concepts/storage/persistent-volumes/) documentation about persistent volume claims and storage classes and how to mark a storage class as default. ### DefaultTolerationSeconds diff --git a/docs/api-reference/apps/v1beta1/definitions.html b/docs/api-reference/apps/v1beta1/definitions.html index 2f15ecf9070e7..7043dfcba4b78 100755 --- a/docs/api-reference/apps/v1beta1/definitions.html +++ b/docs/api-reference/apps/v1beta1/definitions.html @@ -367,7 +367,7 @@

v1.PersistentVolumeClaimSpec, accessModes (v1.PersistentVolumeAccessMode array, optional):
-AccessModes contains the desired access modes the volume should have. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#access-modes-1
+AccessModes contains the desired access modes the volume should have. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes-1
@@ -381,7 +381,7 @@ v1.PersistentVolumeClaimSpec, resources (v1.ResourceRequirements, optional):
-Resources represents the minimum resources the volume should have. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#resources
+Resources represents the minimum resources the volume should have. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#resources
@@ -1160,7 +1160,7 @@ v1.Container, resources (v1.ResourceRequirements, optional):
-Compute Resources required by this container. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#resources
+Compute Resources required by this container. Cannot be updated. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#resources
@@ -1896,14 +1896,14 @@ v1.PersistentVolumeClaim, spec (v1.PersistentVolumeClaimSpec, optional):
-Spec defines the desired characteristics of a volume requested by a pod author. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+Spec defines the desired characteristics of a volume requested by a pod author. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
v1.PersistentVolumeClaim, status (v1.PersistentVolumeClaimStatus, optional):
-Status represents the current information/status of a persistent volume claim. Read-only. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+Status represents the current information/status of a persistent volume claim. Read-only. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
@@ -2006,7 +2006,7 @@ v1.PersistentVolumeClaimVolumeSource, claimName (string, required):
-ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
@@ -2194,7 +2194,7 @@ v1.PersistentVolumeClaimStatus, accessModes (v1.PersistentVolumeAccessMode array, optional):
-AccessModes contains the actual access modes the volume backing the PVC has. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#access-modes-1
+AccessModes contains the actual access modes the volume backing the PVC has. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes-1
@@ -2988,7 +2988,7 @@ v1.Volume, persistentVolumeClaim (v1.PersistentVolumeClaimVolumeSource, optional):
-PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
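
For readers cross-referencing the `PersistentVolumeClaimSpec` fields above, a minimal claim exercising `accessModes` and `resources` might look like the following sketch; the name and size are illustrative, not taken from the patches:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim      # illustrative name, not from the patches above
spec:
  accessModes:
    - ReadWriteOnce        # desired access mode (see #access-modes-1)
  resources:
    requests:
      storage: 1Gi         # minimum storage the volume should have (see #resources)
```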

    diff --git a/docs/api-reference/batch/v1/definitions.html b/docs/api-reference/batch/v1/definitions.html index 50f6f28e449bb..20ecb4cf031d1 100755 --- a/docs/api-reference/batch/v1/definitions.html +++ b/docs/api-reference/batch/v1/definitions.html @@ -1066,7 +1066,7 @@

v1.Container, resources (v1.ResourceRequirements, optional):
-Compute Resources required by this container. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#resources
+Compute Resources required by this container. Cannot be updated. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#resources
@@ -1857,7 +1857,7 @@ v1.PersistentVolumeClaimVolumeSource, claimName (string, required):
-ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
@@ -2977,7 +2977,7 @@ v1.Volume, persistentVolumeClaim (v1.PersistentVolumeClaimVolumeSource, optional):
-PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
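
The `claimName` and `persistentVolumeClaim` fields touched in these hunks are consumed from a Pod (or Job Pod template) spec. A minimal sketch, assuming a claim named `example-claim` already exists in the same namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod             # illustrative name
spec:
  containers:
  - name: app
    image: nginx                # any image works; nginx is only an example
    volumeMounts:
    - mountPath: /data
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-claim  # must name a PVC in the same namespace (required)
      readOnly: false           # optional
```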

    diff --git a/docs/api-reference/extensions/v1beta1/definitions.html b/docs/api-reference/extensions/v1beta1/definitions.html index 262b7aed95ca1..7830bf56664e4 100755 --- a/docs/api-reference/extensions/v1beta1/definitions.html +++ b/docs/api-reference/extensions/v1beta1/definitions.html @@ -1938,7 +1938,7 @@

v1.PersistentVolumeClaimVolumeSource, claimName (string, required):
-ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
@@ -2749,7 +2749,7 @@ v1.Volume, persistentVolumeClaim (v1.PersistentVolumeClaimVolumeSource, optional):
-PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
@@ -4993,7 +4993,7 @@ v1.Container, resources (v1.ResourceRequirements, optional):
-Compute Resources required by this container. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#resources
+Compute Resources required by this container. Cannot be updated. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#resources
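
The `v1.Container` `resources` field that recurs in these hunks takes the usual requests/limits structure; a hedged sketch with placeholder values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo      # illustrative
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 100m          # the scheduler reserves at least this much
        memory: 128Mi
      limits:
        cpu: 500m          # the container is throttled above this
        memory: 256Mi
```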

    diff --git a/docs/api-reference/v1.5/index.html b/docs/api-reference/v1.5/index.html index 71b333af828c7..de6a3b57dacce 100644 --- a/docs/api-reference/v1.5/index.html +++ b/docs/api-reference/v1.5/index.html @@ -179,7 +179,7 @@

Container v1, resources (ResourceRequirements):
-Compute Resources required by this container. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#resources
+Compute Resources required by this container. Cannot be updated. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#resources
@@ -23125,11 +23125,11 @@ PersistentVolumeClaim v1, spec (PersistentVolumeClaimSpec):
-Spec defines the desired characteristics of a volume requested by a pod author. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+Spec defines the desired characteristics of a volume requested by a pod author. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
PersistentVolumeClaim v1, status (PersistentVolumeClaimStatus):
-Status represents the current information/status of a persistent volume claim. Read-only. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+Status represents the current information/status of a persistent volume claim. Read-only. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
@@ -23147,11 +23147,11 @@ PersistentVolumeClaimSpec v1, accessModes (string array):
-AccessModes contains the desired access modes the volume should have. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#access-modes-1
+AccessModes contains the desired access modes the volume should have. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes-1
PersistentVolumeClaimSpec v1, resources (ResourceRequirements):
-Resources represents the minimum resources the volume should have. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#resources
+Resources represents the minimum resources the volume should have. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#resources
@@ -23177,7 +23177,7 @@ PersistentVolumeClaimStatus v1, accessModes (string array):
-AccessModes contains the actual access modes the volume backing the PVC has. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#access-modes-1
+AccessModes contains the actual access modes the volume backing the PVC has. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes-1
@@ -23204,7 +23204,7 @@ PersistentVolumeClaimList v1, items (PersistentVolumeClaim array):
-A list of persistent volume claims. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+A list of persistent volume claims. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
@@ -25413,7 +25413,7 @@ Volume v1, persistentVolumeClaim (PersistentVolumeClaimVolumeSource):
-PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
@@ -36717,7 +36717,7 @@ NodeStatus v1, capacity (object):
-Capacity represents the total resources of a node. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#capacity for more details.
+Capacity represents the total resources of a node. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#capacity for more details.
@@ -39039,7 +39039,7 @@ PersistentVolume v1 (resource description):
-PersistentVolume (PV) is a storage resource provisioned by an administrator. It is analogous to a node. More info: http://kubernetes.io/docs/user-guide/persistent-volumes
+PersistentVolume (PV) is a storage resource provisioned by an administrator. It is analogous to a node. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/
@@ -39065,11 +39065,11 @@ PersistentVolume v1, spec (PersistentVolumeSpec):
-Spec defines a specification of a persistent volume owned by the cluster. Provisioned by an administrator. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistent-volumes
+Spec defines a specification of a persistent volume owned by the cluster. Provisioned by an administrator. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes
PersistentVolume v1, status (PersistentVolumeStatus):
-Status represents the current information/status for the persistent volume. Populated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistent-volumes
+Status represents the current information/status for the persistent volume. Populated by the system. Read-only. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes
@@ -39087,7 +39087,7 @@ PersistentVolumeSpec v1, accessModes (string array):
-AccessModes contains all ways the volume can be mounted. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#access-modes
+AccessModes contains all ways the volume can be mounted. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
@@ -39103,7 +39103,7 @@ PersistentVolumeSpec v1, capacity (object):
-A description of the persistent volume's resources and capacity. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#capacity
+A description of the persistent volume's resources and capacity. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#capacity
@@ -39115,7 +39115,7 @@ PersistentVolumeSpec v1, claimRef (ObjectReference):
-ClaimRef is part of a bi-directional binding between PersistentVolume and PersistentVolumeClaim. Expected to be non-nil when bound. claim.VolumeName is the authoritative bind between PV and PVC. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#binding
+ClaimRef is part of a bi-directional binding between PersistentVolume and PersistentVolumeClaim. Expected to be non-nil when bound. claim.VolumeName is the authoritative bind between PV and PVC. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#binding
@@ -39151,7 +39151,7 @@ PersistentVolumeSpec v1, persistentVolumeReclaimPolicy (string):
-What happens to a persistent volume when released from its claim. Valid options are Retain (default) and Recycle. Recycling must be supported by the volume plugin underlying this persistent volume. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#recycling-policy
+What happens to a persistent volume when released from its claim. Valid options are Retain (default) and Recycle. Recycling must be supported by the volume plugin underlying this persistent volume. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#recycling-policy
@@ -39189,7 +39189,7 @@ PersistentVolumeStatus v1, phase (string):
-Phase indicates if a volume is available, bound to a claim, or released by a claim. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#phase
+Phase indicates if a volume is available, bound to a claim, or released by a claim. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#phase
@@ -39212,7 +39212,7 @@ PersistentVolumeList v1, items (PersistentVolume array):
-List of persistent volumes. More info: http://kubernetes.io/docs/user-guide/persistent-volumes
+List of persistent volumes. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/
@@ -50004,7 +50004,7 @@ PersistentVolumeClaimVolumeSource, claimName (string):
-ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
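
The `PersistentVolumeSpec` fields described above (capacity, accessModes, persistentVolumeReclaimPolicy) come together in a manifest like this minimal sketch; the hostPath backing and all values are for illustration only:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv                        # illustrative
spec:
  capacity:
    storage: 5Gi                          # description of the volume's capacity
  accessModes:
    - ReadWriteOnce                       # all ways the volume can be mounted
  persistentVolumeReclaimPolicy: Retain   # Retain (default) or Recycle
  hostPath:
    path: /tmp/example-pv                 # suitable for single-node testing only
```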
    boolean diff --git a/docs/api-reference/v1.6/index.html b/docs/api-reference/v1.6/index.html index 64322a85620c0..37d5d7f9c26af 100644 --- a/docs/api-reference/v1.6/index.html +++ b/docs/api-reference/v1.6/index.html @@ -183,7 +183,7 @@

Container v1 core, resources (ResourceRequirements):
-Compute Resources required by this container. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#resources
+Compute Resources required by this container. Cannot be updated. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#resources
@@ -23025,11 +23025,11 @@ PersistentVolumeClaim v1 core, spec (PersistentVolumeClaimSpec):
-Spec defines the desired characteristics of a volume requested by a pod author. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+Spec defines the desired characteristics of a volume requested by a pod author. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
PersistentVolumeClaim v1 core, status (PersistentVolumeClaimStatus):
-Status represents the current information/status of a persistent volume claim. Read-only. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+Status represents the current information/status of a persistent volume claim. Read-only. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
@@ -23047,11 +23047,11 @@ PersistentVolumeClaimSpec v1 core, accessModes (string array):
-AccessModes contains the desired access modes the volume should have. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#access-modes-1
+AccessModes contains the desired access modes the volume should have. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes-1
PersistentVolumeClaimSpec v1 core, resources (ResourceRequirements):
-Resources represents the minimum resources the volume should have. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#resources
+Resources represents the minimum resources the volume should have. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#resources
@@ -23059,7 +23059,7 @@ PersistentVolumeClaimSpec v1 core, storageClassName (string):
-Name of the StorageClass required by the claim. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#class-1
+Name of the StorageClass required by the claim. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
@@ -23081,7 +23081,7 @@ PersistentVolumeClaimStatus v1 core, accessModes (string array):
-AccessModes contains the actual access modes the volume backing the PVC has. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#access-modes-1
+AccessModes contains the actual access modes the volume backing the PVC has. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes-1
@@ -23108,7 +23108,7 @@ PersistentVolumeClaimList v1 core, items (PersistentVolumeClaim array):
-A list of persistent volume claims. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+A list of persistent volume claims. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
@@ -25328,7 +25328,7 @@ Volume v1 core, persistentVolumeClaim (PersistentVolumeClaimVolumeSource):
-PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
@@ -38720,7 +38720,7 @@ NodeStatus v1 core, capacity (object):
-Capacity represents the total resources of a node. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#capacity for more details.
+Capacity represents the total resources of a node. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#capacity for more details.
@@ -41046,7 +41046,7 @@ PersistentVolume v1 core (resource description):
-PersistentVolume (PV) is a storage resource provisioned by an administrator. It is analogous to a node. More info: http://kubernetes.io/docs/user-guide/persistent-volumes
+PersistentVolume (PV) is a storage resource provisioned by an administrator. It is analogous to a node. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/
@@ -41072,11 +41072,11 @@ PersistentVolume v1 core, spec (PersistentVolumeSpec):
-Spec defines a specification of a persistent volume owned by the cluster. Provisioned by an administrator. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistent-volumes
+Spec defines a specification of a persistent volume owned by the cluster. Provisioned by an administrator. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes
PersistentVolume v1 core, status (PersistentVolumeStatus):
-Status represents the current information/status for the persistent volume. Populated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistent-volumes
+Status represents the current information/status for the persistent volume. Populated by the system. Read-only. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes
@@ -41094,7 +41094,7 @@ PersistentVolumeSpec v1 core, accessModes (string array):
-AccessModes contains all ways the volume can be mounted. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#access-modes
+AccessModes contains all ways the volume can be mounted. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
@@ -41110,7 +41110,7 @@ PersistentVolumeSpec v1 core, capacity (object):
-A description of the persistent volume's resources and capacity. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#capacity
+A description of the persistent volume's resources and capacity. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#capacity
@@ -41122,7 +41122,7 @@ PersistentVolumeSpec v1 core, claimRef (ObjectReference):
-ClaimRef is part of a bi-directional binding between PersistentVolume and PersistentVolumeClaim. Expected to be non-nil when bound. claim.VolumeName is the authoritative bind between PV and PVC. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#binding
+ClaimRef is part of a bi-directional binding between PersistentVolume and PersistentVolumeClaim. Expected to be non-nil when bound. claim.VolumeName is the authoritative bind between PV and PVC. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#binding
@@ -41158,7 +41158,7 @@ PersistentVolumeSpec v1 core, persistentVolumeReclaimPolicy (string):
-What happens to a persistent volume when released from its claim. Valid options are Retain (default) and Recycle. Recycling must be supported by the volume plugin underlying this persistent volume. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#recycling-policy
+What happens to a persistent volume when released from its claim. Valid options are Retain (default) and Recycle. Recycling must be supported by the volume plugin underlying this persistent volume. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#recycling-policy
@@ -41208,7 +41208,7 @@ PersistentVolumeStatus v1 core, phase (string):
-Phase indicates if a volume is available, bound to a claim, or released by a claim. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#phase
+Phase indicates if a volume is available, bound to a claim, or released by a claim. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#phase
@@ -41231,7 +41231,7 @@ PersistentVolumeList v1 core, items (PersistentVolume array):
-List of persistent volumes. More info: http://kubernetes.io/docs/user-guide/persistent-volumes
+List of persistent volumes. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/
@@ -52657,7 +52657,7 @@ PersistentVolumeClaimVolumeSource, claimName (string):
-ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
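
The `storageClassName` field that first appears in these v1.6 tables is what binds a claim to a class. A hedged sketch, where the class name `fast` is a placeholder for whatever StorageClass the cluster defines:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-claim           # illustrative
spec:
  storageClassName: fast     # name of the StorageClass required by the claim (see #class-1)
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```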
    boolean diff --git a/docs/concepts/policy/resource-quotas.md b/docs/concepts/policy/resource-quotas.md index f609be52bf6f6..9a39bb564a2d2 100644 --- a/docs/concepts/policy/resource-quotas.md +++ b/docs/concepts/policy/resource-quotas.md @@ -67,7 +67,7 @@ The following resource types are supported: ## Storage Resource Quota -You can limit the total sum of [storage resources](/docs/user-guide/persistent-volumes) that can be requested in a given namespace. +You can limit the total sum of [storage resources](/docs/concepts/storage/persistent-volumes/) that can be requested in a given namespace. In addition, you can limit consumption of storage resources based on associated storage-class. diff --git a/docs/concepts/storage/volumes.md b/docs/concepts/storage/volumes.md index f6fce622b6ddf..baba44b07e54c 100644 --- a/docs/concepts/storage/volumes.md +++ b/docs/concepts/storage/volumes.md @@ -442,7 +442,7 @@ Secrets are described in more detail [here](/docs/user-guide/secrets). ### persistentVolumeClaim A `persistentVolumeClaim` volume is used to mount a -[PersistentVolume](/docs/user-guide/persistent-volumes) into a pod. PersistentVolumes are a +[PersistentVolume](/docs/concepts/storage/persistent-volumes/) into a pod. PersistentVolumes are a way for users to "claim" durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment. diff --git a/docs/resources-reference/v1.5/index.html b/docs/resources-reference/v1.5/index.html index aca4c2871c5c9..186d452d46250 100644 --- a/docs/resources-reference/v1.5/index.html +++ b/docs/resources-reference/v1.5/index.html @@ -112,7 +112,7 @@

Container v1, resources (ResourceRequirements):
-Compute Resources required by this container. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#resources
+Compute Resources required by this container. Cannot be updated. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#resources
@@ -2268,11 +2268,11 @@ PersistentVolumeClaim v1, spec (PersistentVolumeClaimSpec):
-Spec defines the desired characteristics of a volume requested by a pod author. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+Spec defines the desired characteristics of a volume requested by a pod author. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
PersistentVolumeClaim v1, status (PersistentVolumeClaimStatus):
-Status represents the current information/status of a persistent volume claim. Read-only. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+Status represents the current information/status of a persistent volume claim. Read-only. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
@@ -2290,11 +2290,11 @@ PersistentVolumeClaimSpec v1, accessModes (string array):
-AccessModes contains the desired access modes the volume should have. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#access-modes-1
+AccessModes contains the desired access modes the volume should have. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes-1
PersistentVolumeClaimSpec v1, resources (ResourceRequirements):
-Resources represents the minimum resources the volume should have. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#resources
+Resources represents the minimum resources the volume should have. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#resources
@@ -2320,7 +2320,7 @@ PersistentVolumeClaimStatus v1, accessModes (string array):
-AccessModes contains the actual access modes the volume backing the PVC has. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#access-modes-1
+AccessModes contains the actual access modes the volume backing the PVC has. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes-1
@@ -2347,7 +2347,7 @@ PersistentVolumeClaimList v1, items (PersistentVolumeClaim array):
-A list of persistent volume claims. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+A list of persistent volume claims. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
@@ -2543,7 +2543,7 @@ Volume v1, persistentVolumeClaim (PersistentVolumeClaimVolumeSource):
-PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
@@ -3913,7 +3913,7 @@ NodeStatus v1, capacity (object):
-Capacity represents the total resources of a node. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#capacity for more details.
+Capacity represents the total resources of a node. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#capacity for more details.
@@ -3995,7 +3995,7 @@ PersistentVolume v1 (resource description):
-PersistentVolume (PV) is a storage resource provisioned by an administrator. It is analogous to a node. More info: http://kubernetes.io/docs/user-guide/persistent-volumes
+PersistentVolume (PV) is a storage resource provisioned by an administrator. It is analogous to a node. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/
@@ -4021,11 +4021,11 @@ PersistentVolume v1, spec (PersistentVolumeSpec):
-Spec defines a specification of a persistent volume owned by the cluster. Provisioned by an administrator. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistent-volumes
+Spec defines a specification of a persistent volume owned by the cluster. Provisioned by an administrator. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes
PersistentVolume v1, status (PersistentVolumeStatus):
-Status represents the current information/status for the persistent volume. Populated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistent-volumes
+Status represents the current information/status for the persistent volume. Populated by the system. Read-only. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes
@@ -4043,7 +4043,7 @@ PersistentVolumeSpec v1, accessModes (string array):
-AccessModes contains all ways the volume can be mounted. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#access-modes
+AccessModes contains all ways the volume can be mounted. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
@@ -4059,7 +4059,7 @@ PersistentVolumeSpec v1, capacity (object):
-A description of the persistent volume's resources and capacity. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#capacity
+A description of the persistent volume's resources and capacity. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#capacity
@@ -4071,7 +4071,7 @@ PersistentVolumeSpec v1, claimRef (ObjectReference):
-ClaimRef is part of a bi-directional binding between PersistentVolume and PersistentVolumeClaim. Expected to be non-nil when bound. claim.VolumeName is the authoritative bind between PV and PVC. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#binding
+ClaimRef is part of a bi-directional binding between PersistentVolume and PersistentVolumeClaim. Expected to be non-nil when bound. claim.VolumeName is the authoritative bind between PV and PVC. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#binding
@@ -4107,7 +4107,7 @@ PersistentVolumeSpec v1, persistentVolumeReclaimPolicy (string):
-What happens to a persistent volume when released from its claim. Valid options are Retain (default) and Recycle. Recycling must be supported by the volume plugin underlying this persistent volume. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#recycling-policy
+What happens to a persistent volume when released from its claim. Valid options are Retain (default) and Recycle. Recycling must be supported by the volume plugin underlying this persistent volume. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#recycling-policy
@@ -4145,7 +4145,7 @@ PersistentVolumeStatus v1, phase (string):
-Phase indicates if a volume is available, bound to a claim, or released by a claim. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#phase
+Phase indicates if a volume is available, bound to a claim, or released by a claim. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#phase
@@ -4168,7 +4168,7 @@ PersistentVolumeList v1, items (PersistentVolume array):
-List of persistent volumes. More info: http://kubernetes.io/docs/user-guide/persistent-volumes
+List of persistent volumes. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/
@@ -8515,7 +8515,7 @@ PersistentVolumeClaimVolumeSource, claimName (string):
-ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
    boolean diff --git a/docs/resources-reference/v1.6/index.html b/docs/resources-reference/v1.6/index.html index 4c69ee05eb547..563418f892221 100644 --- a/docs/resources-reference/v1.6/index.html +++ b/docs/resources-reference/v1.6/index.html @@ -116,7 +116,7 @@

Container v1 core, resources (ResourceRequirements):
-Compute Resources required by this container. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#resources
+Compute Resources required by this container. Cannot be updated. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#resources
@@ -2357,11 +2357,11 @@ PersistentVolumeClaim v1 core, spec (PersistentVolumeClaimSpec):
-Spec defines the desired characteristics of a volume requested by a pod author. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+Spec defines the desired characteristics of a volume requested by a pod author. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
PersistentVolumeClaim v1 core, status (PersistentVolumeClaimStatus):
-Status represents the current information/status of a persistent volume claim. Read-only. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+Status represents the current information/status of a persistent volume claim. Read-only. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
@@ -2379,11 +2379,11 @@ PersistentVolumeClaimSpec v1 core, accessModes (string array):
-AccessModes contains the desired access modes the volume should have. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#access-modes-1
+AccessModes contains the desired access modes the volume should have. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes-1
PersistentVolumeClaimSpec v1 core, resources (ResourceRequirements):
-Resources represents the minimum resources the volume should have. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#resources
+Resources represents the minimum resources the volume should have. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#resources
@@ -2391,7 +2391,7 @@ PersistentVolumeClaimSpec v1 core, storageClassName (string):
-Name of the StorageClass required by the claim. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#class-1
+Name of the StorageClass required by the claim. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
@@ -2413,7 +2413,7 @@ PersistentVolumeClaimStatus v1 core, accessModes (string array):
-AccessModes contains the actual access modes the volume backing the PVC has. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#access-modes-1
+AccessModes contains the actual access modes the volume backing the PVC has. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes-1
@@ -2440,7 +2440,7 @@ PersistentVolumeClaimList v1 core, items (PersistentVolumeClaim array):
-A list of persistent volume claims. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+A list of persistent volume claims. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
@@ -2639,7 +2639,7 @@ Volume v1 core, persistentVolumeClaim (PersistentVolumeClaimVolumeSource):
-PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
@@ -4299,7 +4299,7 @@ NodeStatus v1 core, capacity (object):
-Capacity represents the total resources of a node. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#capacity for more details.
+Capacity represents the total resources of a node. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#capacity for more details.
@@ -4381,7 +4381,7 @@ PersistentVolume v1 core (resource description):
-PersistentVolume (PV) is a storage resource provisioned by an administrator. It is analogous to a node. More info: http://kubernetes.io/docs/user-guide/persistent-volumes
+PersistentVolume (PV) is a storage resource provisioned by an administrator. It is analogous to a node. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/
@@ -4407,11 +4407,11 @@ PersistentVolume v1 core, spec (PersistentVolumeSpec):
-Spec defines a specification of a persistent volume owned by the cluster. Provisioned by an administrator. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistent-volumes
+Spec defines a specification of a persistent volume owned by the cluster. Provisioned by an administrator. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes
PersistentVolume v1 core, status (PersistentVolumeStatus):
-Status represents the current information/status for the persistent volume. Populated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistent-volumes
+Status represents the current information/status for the persistent volume. Populated by the system. Read-only. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes
@@ -4429,7 +4429,7 @@ PersistentVolumeSpec v1 core, accessModes (string array):
-AccessModes contains all ways the volume can be mounted. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#access-modes
+AccessModes contains all ways the volume can be mounted. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
@@ -4445,7 +4445,7 @@ PersistentVolumeSpec v1 core, capacity (object):
-A description of the persistent volume's resources and capacity. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#capacity
+A description of the persistent volume's resources and capacity. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#capacity
@@ -4457,7 +4457,7 @@ PersistentVolumeSpec v1 core, claimRef (ObjectReference):
-ClaimRef is part of a bi-directional binding between PersistentVolume and PersistentVolumeClaim. Expected to be non-nil when bound. claim.VolumeName is the authoritative bind between PV and PVC. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#binding
+ClaimRef is part of a bi-directional binding between PersistentVolume and PersistentVolumeClaim. Expected to be non-nil when bound. claim.VolumeName is the authoritative bind between PV and PVC. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#binding
@@ -4493,7 +4493,7 @@ PersistentVolumeSpec v1 core, persistentVolumeReclaimPolicy (string):
-What happens to a persistent volume when released from its claim. Valid options are Retain (default) and Recycle. Recycling must be supported by the volume plugin underlying this persistent volume. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#recycling-policy
+What happens to a persistent volume when released from its claim. Valid options are Retain (default) and Recycle. Recycling must be supported by the volume plugin underlying this persistent volume. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#recycling-policy
@@ -4543,7 +4543,7 @@ PersistentVolumeStatus v1 core, phase (string):
-Phase indicates if a volume is available, bound to a claim, or released by a claim. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#phase
+Phase indicates if a volume is available, bound to a claim, or released by a claim. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#phase
@@ -4566,7 +4566,7 @@ PersistentVolumeList v1 core, items (PersistentVolume array):
-List of persistent volumes. More info: http://kubernetes.io/docs/user-guide/persistent-volumes
+List of persistent volumes. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/
@@ -9523,7 +9523,7 @@ PersistentVolumeClaimVolumeSource, claimName (string):
-ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: http://kubernetes.io/docs/user-guide/persistent-volumes#persistentvolumeclaims
+ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: http://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
    boolean From 9d835058ccd93552a9f2c492e6339a237eff7262 Mon Sep 17 00:00:00 2001 From: jianglingxia Date: Tue, 26 Sep 2017 07:09:16 +0800 Subject: [PATCH 64/87] fix the typo of lable in statefulset (#5555) * fix the typo of lable in statefulset * update it --- .../stateful-application/basic-stateful-set.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/tutorials/stateful-application/basic-stateful-set.md b/docs/tutorials/stateful-application/basic-stateful-set.md index 21f63cfbfedee..c6dcdb6b8d617 100644 --- a/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/docs/tutorials/stateful-application/basic-stateful-set.md @@ -651,7 +651,7 @@ pod "web-2" deleted Wait for the Pod to be Running and Ready. ```shell -kubectl get po -lapp=nginx -w +kubectl get po -l app=nginx -w NAME READY STATUS RESTARTS AGE web-0 1/1 Running 0 4m web-1 1/1 Running 0 4m @@ -686,7 +686,7 @@ statefulset "web" patched Wait for `web-2` to be Running and Ready. ```shell -kubectl get po -lapp=nginx -w +kubectl get po -l app=nginx -w NAME READY STATUS RESTARTS AGE web-0 1/1 Running 0 4m web-1 1/1 Running 0 4m @@ -716,7 +716,7 @@ pod "web-1" deleted Wait for the `web-1` Pod to be Running and Ready. ```shell -kubectl get po -lapp=nginx -w +kubectl get po -l app=nginx -w NAME READY STATUS RESTARTS AGE web-0 1/1 Running 0 6m web-1 0/1 Terminating 0 6m @@ -761,7 +761,7 @@ statefulset "web" patched Wait for all of the Pods in the StatefulSet to become Running and Ready. ```shell -kubectl get po -lapp=nginx -w +kubectl get po -l app=nginx -w NAME READY STATUS RESTARTS AGE web-0 1/1 Running 0 3m web-1 0/1 ContainerCreating 0 11s @@ -1014,7 +1014,7 @@ of the `web` StatefulSet is set to `Parallel`. In one terminal, watch the Pods in the StatefulSet. ```shell -kubectl get po -lapp=nginx -w +kubectl get po -l app=nginx -w ``` In another terminal, create the StatefulSet and Service in the manifest. @@ -1028,7 +1028,7 @@ statefulset "web" created Examine the output of the `kubectl get` command that you executed in the first terminal. ```shell -kubectl get po -lapp=nginx -w +kubectl get po -l app=nginx -w NAME READY STATUS RESTARTS AGE web-0 0/1 Pending 0 0s web-0 0/1 Pending 0 0s From b9fa59644f72d1c564bdf116c8a3e36413d23cb3 Mon Sep 17 00:00:00 2001 From: jianglingxia Date: Mon, 25 Sep 2017 14:45:50 +0800 Subject: [PATCH 65/87] update some redirects and scale type --- docs/admin/federation/index.md | 2 +- .../run-replicated-stateful-application.md | 10 +++++----- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/admin/federation/index.md b/docs/admin/federation/index.md index ecdcca87d974b..d4c524a9261a6 100644 --- a/docs/admin/federation/index.md +++ b/docs/admin/federation/index.md @@ -134,7 +134,7 @@ existing Kubernetes cluster. It also starts a [`type: LoadBalancer`](/docs/concepts/services-networking/service/#type-loadbalancer) [`Service`](/docs/concepts/services-networking/service/) for the `federation-apiserver` and a -[`PVC`](/docs/concepts/storage/persistent-volumes/) backed +[`PVC`](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims/) backed by a dynamically provisioned [`PV`](/docs/concepts/storage/persistent-volumes/) for `etcd`. All these components are created in the `federation` namespace. 
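
The corrected `kubectl get po -l app=nginx -w` invocations above select Pods by label; they match Pod template metadata along the lines of this sketch, which is illustrative rather than the tutorial's full manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-0           # illustrative; the tutorial's Pods are named web-0..web-N
  labels:
    app: nginx          # matched by: kubectl get po -l app=nginx -w
spec:
  containers:
  - name: nginx
    image: nginx
```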
diff --git a/docs/tasks/run-application/run-replicated-stateful-application.md b/docs/tasks/run-application/run-replicated-stateful-application.md index 9613bb2437d2d..b3e7dd45dc4b5 100644 --- a/docs/tasks/run-application/run-replicated-stateful-application.md +++ b/docs/tasks/run-application/run-replicated-stateful-application.md @@ -149,7 +149,7 @@ properties to perform orderly startup of MySQL replication. ### Generating configuration Before starting any of the containers in the Pod spec, the Pod first runs any -[Init Containers](/docs/user-guide/production-pods/#handling-initialization) +[Init Containers](/docs/concepts/workloads/pods/init-containers/) in the order defined. The first Init Container, named `init-mysql`, generates special MySQL config @@ -169,7 +169,7 @@ Because the example topology consists of a single MySQL master and any number of slaves, the script simply assigns ordinal `0` to be the master, and everyone else to be slaves. Combined with the StatefulSet controller's -[deployment order guarantee](/docs/concepts/abstractions/controllers/statefulsets/#deployment-and-scaling-guarantee), +[deployment order guarantee](/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees/), this ensures the MySQL master is Ready before creating slaves, so they can begin replicating. @@ -293,7 +293,7 @@ running while you force a Pod out of the Ready state. ### Break the Readiness Probe -The [readiness probe](/docs/user-guide/production-pods/#liveness-and-readiness-probes-aka-health-checks) +The [readiness probe](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes) for the `mysql` container runs the command `mysql -h 127.0.0.1 -e 'SELECT 1'` to make sure the server is up and able to execute queries. @@ -411,7 +411,7 @@ With MySQL replication, you can scale your read query capacity by adding slaves. 
With StatefulSet, you can do this with a single command: ```shell -kubectl scale --replicas=5 statefulset mysql +kubectl scale statefulset mysql --replicas=5 ``` Watch the new Pods come up by running: @@ -444,7 +444,7 @@ pod "mysql-client" deleted Scaling back down is also seamless: ```shell -kubectl scale --replicas=3 statefulset mysql +kubectl scale statefulset mysql --replicas=3 ``` Note, however, that while scaling up creates new PersistentVolumeClaims From f7edb8254775406bc1894a282be57021f55e3384 Mon Sep 17 00:00:00 2001 From: jianglingxia Date: Mon, 25 Sep 2017 16:40:38 +0800 Subject: [PATCH 66/87] the pod yaml type error and add apiVersion --- .../inject-data-application/podpreset.md | 53 ++++++++++--------- 1 file changed, 27 insertions(+), 26 deletions(-) diff --git a/docs/tasks/inject-data-application/podpreset.md b/docs/tasks/inject-data-application/podpreset.md index 3018fcdefb64f..6e1026a5df707 100644 --- a/docs/tasks/inject-data-application/podpreset.md +++ b/docs/tasks/inject-data-application/podpreset.md @@ -325,34 +325,35 @@ spec: **Pod spec after admission controller:** ```yaml +apiVersion: v1 kind: Pod - metadata: - labels: - app: guestbook - tier: frontend - annotations: +metadata: + labels: + app: guestbook + tier: frontend + annotations: podpreset.admission.kubernetes.io/podpreset-allow-database: "resource version" - spec: - containers: - - name: php-redis - image: gcr.io/google_samples/gb-frontend:v3 - resources: - requests: - cpu: 100m - memory: 100Mi - volumeMounts: - - mountPath: /cache - name: cache-volume - env: - - name: GET_HOSTS_FROM - value: dns - - name: DB_PORT - value: "6379" - ports: - - containerPort: 80 - volumes: - - name: cache-volume - emptyDir: {} +spec: + containers: + - name: php-redis + image: gcr.io/google_samples/gb-frontend:v3 + resources: + requests: + cpu: 100m + memory: 100Mi + volumeMounts: + - mountPath: /cache + name: cache-volume + env: + - name: GET_HOSTS_FROM + value: dns + - name: DB_PORT + value: "6379" + ports: + - containerPort: 80 + volumes: + - name: cache-volume + emptyDir: {} ``` ### Multiple PodPreset Example From 7b305cb11c2f0662850801c5d413f0bf024799c0 Mon Sep 17 00:00:00 2001 From: Adam Fordham Date: Thu, 24 Aug 2017 00:56:49 -0700 Subject: [PATCH 67/87] minor updates tutorial - change afinity field value to match yaml spec - update and reorder steps for cordoning nodes to make more sense. currenlty says "all but four". In reality, want to cordon the three nodes that the pods are scheduled on. not "all but four". - add new line to code snippet for easier "copy/paste" --- .../tutorials/stateful-application/zookeeper.md | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/docs/tutorials/stateful-application/zookeeper.md b/docs/tutorials/stateful-application/zookeeper.md index fd1c2987857f6..2f85f49a10690 100644 --- a/docs/tutorials/stateful-application/zookeeper.md +++ b/docs/tutorials/stateful-application/zookeeper.md @@ -1022,13 +1022,6 @@ Get the nodes in your cluster. kubectl get nodes ``` -Use [`kubectl cordon`](/docs/user-guide/kubectl/{{page.version}}/#cordon) to -cordon all but four of the nodes in your cluster. - -```shell{% raw %} -kubectl cordon < node name > -```{% endraw %} - Get the `zk-budget` PodDisruptionBudget. ```shell @@ -1060,6 +1053,13 @@ kubernetes-minion-group-i4c4 {% endraw %} ``` +Use [`kubectl cordon`](/docs/user-guide/kubectl/{{page.version}}/#cordon) to +cordon the three nodes that the Pods are currently scheduled on. 
+ +```shell{% raw %} +kubectl cordon < node name > +{% endraw %}``` + Use [`kubectl drain`](/docs/user-guide/kubectl/{{page.version}}/#drain) to cordon and drain the node on which the `zk-0` Pod is scheduled. @@ -1095,7 +1095,8 @@ Keep watching the StatefulSet's Pods in the first terminal and drain the node on `zk-1` is scheduled. ```shell{% raw %} -kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data "kubernetes-minion-group-ixsl" cordoned +kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data +node "kubernetes-minion-group-ixsl" cordoned WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-ixsl, kube-proxy-kubernetes-minion-group-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74 pod "zk-1" deleted node "kubernetes-minion-group-ixsl" drained From 5a665549f2b5346dea1dde5bba06b93fb15851ff Mon Sep 17 00:00:00 2001 From: MengZnLee Date: Mon, 25 Sep 2017 18:21:29 -0500 Subject: [PATCH 68/87] Fix index redirects (#5502) * Add command.yaml file * Fix fix index picking the right solution redirects --- docs/home/index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/home/index.md b/docs/home/index.md index d9cdc94f34eb1..983c9fae1e233 100644 --- a/docs/home/index.md +++ b/docs/home/index.md @@ -13,7 +13,7 @@ The [Kubernetes Basics interactive tutorial](/docs/tutorials/kubernetes-basics/) ## Installing/Setting Up Kubernetes -[Picking the Right Solution](/docs/getting-started-guides/) can help you get a Kubernetes cluster up and running, either for local development, or on your cloud provider of choice. +[Picking the Right Solution](/docs/setup/pick-right-solution/) can help you get a Kubernetes cluster up and running, either for local development, or on your cloud provider of choice. ## Concepts, Tasks, and Tutorials From 0ab5bb143219e1ba811f7e6c647f8b9efd775bfd Mon Sep 17 00:00:00 2001 From: Ryan McGinnis Date: Mon, 11 Sep 2017 09:09:46 -0700 Subject: [PATCH 69/87] Edits cpu-constraint-namespace.md - Removes stray link in middle of paragraph - "cpu" becomes "CPU" throughout --- .../administer-cluster/cpu-constraint-namespace.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/tasks/administer-cluster/cpu-constraint-namespace.md b/docs/tasks/administer-cluster/cpu-constraint-namespace.md index 89f771794ede6..25681d77d7027 100644 --- a/docs/tasks/administer-cluster/cpu-constraint-namespace.md +++ b/docs/tasks/administer-cluster/cpu-constraint-namespace.md @@ -195,7 +195,7 @@ resources: Because your Container did not specify its own CPU request and limit, it was given the [default CPU request and limit](/docs/tasks/administer-cluster/cpu-default-namespace/) from the LimitRange. -* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace) + At this point, your Container might be running or it might not be running. Recall that a prerequisite for this task is that your Nodes have at least 1 CPU. If each of your Nodes has only 1 CPU, then there might not be enough allocatable CPU on any Node to accommodate a request @@ -219,12 +219,12 @@ Pods that were created previously. As a cluster administrator, you might want to impose restrictions on the CPU resources that Pods can use. For example: -* Each Node in a cluster has 2 cpu. 
You do not want to accept any Pod that requests -more than 2 cpu, because no Node in the cluster can support the request. +* Each Node in a cluster has 2 CPU. You do not want to accept any Pod that requests +more than 2 CPU, because no Node in the cluster can support the request. * A cluster is shared by your production and development departments. -You want to allow production workloads to consume up to 3 cpu, but you want development workloads to be limited -to 1 cpu. You create separate namespaces for production and development, and you apply CPU constraints to +You want to allow production workloads to consume up to 3 CPU, but you want development workloads to be limited +to 1 CPU. You create separate namespaces for production and development, and you apply CPU constraints to each namespace. ## Clean up From e227272f5c43f81b8ea30c42a980a2c0c88e861a Mon Sep 17 00:00:00 2001 From: Steve Perry Date: Mon, 25 Sep 2017 16:43:35 -0700 Subject: [PATCH 70/87] Fix 404s. (#5623) --- docs/concepts/storage/persistent-volumes.md | 2 +- docs/concepts/workloads/controllers/petset.md | 2 +- .../load-balance-access-application-cluster.md | 2 +- .../service-access-application-cluster.md | 2 +- docs/tasks/administer-cluster/securing-a-cluster.md | 2 +- docs/tools/kompose/user-guide.md | 2 +- 6 files changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/concepts/storage/persistent-volumes.md b/docs/concepts/storage/persistent-volumes.md index 19a17e0491462..96ebfed733bb3 100644 --- a/docs/concepts/storage/persistent-volumes.md +++ b/docs/concepts/storage/persistent-volumes.md @@ -35,7 +35,7 @@ administrators. Kubernetes itself is unopinionated about what classes represent. This concept is sometimes called "profiles" in other storage systems. -Please see the [detailed walkthrough with working examples](/docs/concepts/storage/persistent-volumes/walkthrough/). +Please see the [detailed walkthrough with working examples](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/). ## Lifecycle of a volume and claim diff --git a/docs/concepts/workloads/controllers/petset.md b/docs/concepts/workloads/controllers/petset.md index c884679821908..2ec7151d1b924 100644 --- a/docs/concepts/workloads/controllers/petset.md +++ b/docs/concepts/workloads/controllers/petset.md @@ -24,7 +24,7 @@ Throughout this doc you will see a few terms that are sometimes used interchange * Node: A single virtual or physical machine in a Kubernetes cluster. * Cluster: A group of nodes in a single failure domain, unless mentioned otherwise. -* Persistent Volume Claim (PVC): A request for storage, typically a [persistent volume](/docs/concepts/storage/persistent-volumes/walkthrough/). +* Persistent Volume Claim (PVC): A request for storage, typically a [persistent volume](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/). * Host name: The hostname attached to the UTS namespace of the pod, i.e. the output of `hostname` in the pod. * DNS/Domain name: A *cluster local* domain name resolvable using standard methods (e.g.: [gethostbyname](http://linux.die.net/man/3/gethostbyname)). * Ordinality: the property of being "ordinal", or occupying a position in a sequence. 
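The PersistentVolumeClaim mentioned in the petset glossary above is easiest to grasp from a concrete manifest. The following is an illustrative sketch only, not part of this patch; the claim name, access mode, and size are placeholder values:

```shell
# Illustrative only -- not part of this patch. Creates a minimal
# PersistentVolumeClaim inline; "example-pvc", the access mode, and
# the 1Gi request are placeholder values.
kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
  - ReadWriteOnce        # volume can be mounted read-write by a single node
  resources:
    requests:
      storage: 1Gi       # amount of storage requested
EOF
```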
diff --git a/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md b/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md index e4f59bff75faf..458dc2062e028 100644 --- a/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md +++ b/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md @@ -101,7 +101,7 @@ load-balanced access to an application running in a cluster. ## Using a service configuration file As an alternative to using `kubectl expose`, you can use a -[service configuration file](/docs/concepts/services-networking/service/operations) +[service configuration file](/docs/concepts/services-networking/service/) to create a Service. diff --git a/docs/tasks/access-application-cluster/service-access-application-cluster.md b/docs/tasks/access-application-cluster/service-access-application-cluster.md index 84909650c7b61..212e1be9e6826 100644 --- a/docs/tasks/access-application-cluster/service-access-application-cluster.md +++ b/docs/tasks/access-application-cluster/service-access-application-cluster.md @@ -117,7 +117,7 @@ provides load balancing for an application that has two running instances. ## Using a service configuration file As an alternative to using `kubectl expose`, you can use a -[service configuration file](/docs/concepts/services-networking/service/operations) +[service configuration file](/docs/concepts/services-networking/service/) to create a Service. {% endcapture %} diff --git a/docs/tasks/administer-cluster/securing-a-cluster.md b/docs/tasks/administer-cluster/securing-a-cluster.md index 6df8e0f1b32fb..b967a01138089 100644 --- a/docs/tasks/administer-cluster/securing-a-cluster.md +++ b/docs/tasks/administer-cluster/securing-a-cluster.md @@ -82,7 +82,7 @@ resources granted to a namespace. This is most often used to limit the amount of or persistent disk a namespace can allocate, but can also control how many pods, services, or volumes exist in each namespace. -[Limit ranges](/docs/admin/limitrange) restrict the maximum or minimum size of some of the +[Limit ranges](/docs/tasks/administer-cluster/memory-default-namespace/) restrict the maximum or minimum size of some of the resources above, to prevent users from requesting unreasonably high or low values for commonly reserved resources like memory, or to provide default limits when none are specified. diff --git a/docs/tools/kompose/user-guide.md b/docs/tools/kompose/user-guide.md index efd2b1c233390..2aa6666bde449 100644 --- a/docs/tools/kompose/user-guide.md +++ b/docs/tools/kompose/user-guide.md @@ -572,4 +572,4 @@ Please note that changing service name might break some `docker-compose` files. Kompose supports Docker Compose versions: 1, 2 and 3. We have limited support on versions 2.1 and 3.2 due to their experimental nature. -A full list on compatibility between all three versions is listed in our [conversion document](/docs/conversion.md) including a list of all incompatible Docker Compose keys. +A full list on compatibility between all three versions is listed in our [conversion document](https://github.com/kubernetes/kompose/blob/master/docs/conversion.md) including a list of all incompatible Docker Compose keys. From 39a3da735ff70bb297c4e483a69f40b0831d7109 Mon Sep 17 00:00:00 2001 From: Steve Perry Date: Mon, 25 Sep 2017 17:35:40 -0700 Subject: [PATCH 71/87] Fix 404s. 
(#5624) --- _redirects | 4 ++++ docs/tasks/access-application-cluster/access-cluster.md | 2 +- docs/tasks/administer-cluster/access-cluster-api.md | 2 +- 3 files changed, 6 insertions(+), 2 deletions(-) diff --git a/_redirects b/_redirects index 8aba4e9d3d98d..f91eda55cc484 100644 --- a/_redirects +++ b/_redirects @@ -44,6 +44,7 @@ /docs/admin/addons/ /docs/concepts/cluster-administration/addons/ 301 /docs/admin/apparmor/ /docs/tutorials/clusters/apparmor/ 301 /docs/admin/audit/ /docs/tasks/debug-application-cluster/audit/ 301 +//docs/admin/authorization/rbac.md /docs/admin/authorization/rbac/ 301 /docs/admin/cluster-components/ /docs/concepts/overview/components/ 301 /docs/admin/cluster-management/ /docs/tasks/administer-cluster/cluster-management/ 301 /docs/admin/cluster-troubleshooting/ /docs/tasks/debug-application-cluster/debug-cluster/ 301 @@ -70,6 +71,7 @@ /docs/admin/networking/ /docs/concepts/cluster-administration/networking/ 301 /docs/admin/node/ /docs/concepts/architecture/nodes/ 301 /docs/admin/node-allocatable/ /docs/tasks/administer-cluster/reserve-compute-resources/ 301 +//docs/admin/node-allocatable.md /docs/tasks/administer-cluster/reserve-compute-resources/ 301 /docs/admin/node-conformance.md /docs/admin/node-conformance/ 301 /docs/admin/node-problem/ /docs/tasks/debug-application-cluster/monitor-node-health/ 301 /docs/admin/out-of-resource/ /docs/tasks/administer-cluster/out-of-resource/ 301 @@ -125,6 +127,7 @@ /docs/hellonode/ /docs/tutorials/stateless-application/hello-minikube/ 301 /docs/ /docs/home/ 301 +/docs/home/coreos/ /docs/getting-started-guides/coreos/ 301 /docs/samples/ /docs/tutorials/ 301 /docs/tasks/administer-cluster/apply-resource-quota-limit/ /docs/tasks/administer-cluster/quota-api-object/ 301 @@ -165,6 +168,7 @@ /docs/tutorials/clusters/multiple-schedulers/ /docs/tasks/administer-cluster/configure-multiple-schedulers/ 301 /docs/tutorials/connecting-apps/connecting-frontend-backend/ /docs/tasks/access-application-cluster/connecting-frontend-backend/ 301 /docs/tutorials/federation/set-up-cluster-federation-kubefed/ /docs/tasks/federation/set-up-cluster-federation-kubefed/ 301 +//docs/tutorials/federation/set-up-cluster-federation-kubefed.md /docs/tasks/federation/set-up-cluster-federation-kubefed/ 301 /docs/tutorials/federation/set-up-coredns-provider-federation/ /docs/tasks/federation/set-up-coredns-provider-federation/ 301 /docs/tutorials/federation/set-up-placement-policies-federation/ /docs/tasks/federation/set-up-placement-policies-federation/ 301 /docs/tutorials/getting-started/create-cluster/ /docs/tutorials/kubernetes-basics/cluster-intro/ 301 diff --git a/docs/tasks/access-application-cluster/access-cluster.md b/docs/tasks/access-application-cluster/access-cluster.md index 641f3c4ed9f3c..297ff597d71d4 100644 --- a/docs/tasks/access-application-cluster/access-cluster.md +++ b/docs/tasks/access-application-cluster/access-cluster.md @@ -136,7 +136,7 @@ If the application is deployed as a Pod in the cluster, please refer to the [nex To use [Python client](https://github.com/kubernetes-incubator/client-python), run the following command: `pip install kubernetes`. See [Python Client Library page](https://github.com/kubernetes-incubator/client-python) for more installation options. 
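As context for the Python client mentioned above, a minimal usage sketch follows. It is not part of this patch; it assumes a reachable cluster and a local kubeconfig, and uses only calls documented by the kubernetes-incubator/client-python library:

```shell
# Illustrative only -- not part of this patch. Installs the client and counts
# pods across all namespaces using the local kubeconfig for authentication.
pip install kubernetes
python -c 'from kubernetes import client, config; config.load_kube_config(); print(len(client.CoreV1Api().list_pod_for_all_namespaces().items))'
```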
-The Python client can use the same [kubeconfig file](docs/tasks/access-application-cluster/configure-access-multiple-clusters/) +The Python client can use the same [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes-incubator/client-python/tree/master/examples/example1.py). #### Other languages diff --git a/docs/tasks/administer-cluster/access-cluster-api.md b/docs/tasks/administer-cluster/access-cluster-api.md index 88ef4334cf94e..0eb4d059ca3a0 100644 --- a/docs/tasks/administer-cluster/access-cluster-api.md +++ b/docs/tasks/administer-cluster/access-cluster-api.md @@ -147,7 +147,7 @@ If the application is deployed as a Pod in the cluster, please refer to the [nex To use [Python client](https://github.com/kubernetes-incubator/client-python), run the following command: `pip install kubernetes` See [Python Client Library page](https://github.com/kubernetes-incubator/client-python) for more installation options. -The Python client can use the same [kubeconfig file](docs/tasks/access-application-cluster/configure-access-multiple-clusters/) +The Python client can use the same [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes-incubator/client-python/tree/master/examples/example1.py): ```python From 096fee2c395bcd77b06a0b7186c5828f4b6aa718 Mon Sep 17 00:00:00 2001 From: Steve Perry Date: Mon, 25 Sep 2017 18:43:26 -0700 Subject: [PATCH 72/87] Update links to avoid redirects. (#5625) --- _redirects | 1 + .../cluster-administration/kubelet-garbage-collection.md | 2 +- docs/concepts/cluster-administration/manage-deployment.md | 2 +- docs/concepts/configuration/overview.md | 4 ++-- docs/concepts/overview/working-with-objects/annotations.md | 2 +- docs/concepts/storage/persistent-volumes.md | 2 +- docs/concepts/workloads/controllers/replicaset.md | 2 +- docs/concepts/workloads/controllers/replicationcontroller.md | 4 ++-- docs/tasks/access-application-cluster/web-ui-dashboard.md | 4 ++-- docs/tasks/administer-cluster/out-of-resource.md | 2 +- docs/tasks/configure-pod-container/assign-pods-nodes.md | 2 +- .../debug-application-introspection.md | 2 +- docs/tasks/job/parallel-processing-expansion.md | 2 +- docs/tasks/manage-daemon/update-daemon-set.md | 2 +- docs/user-guide/walkthrough/k8s201.md | 2 +- 15 files changed, 18 insertions(+), 17 deletions(-) diff --git a/_redirects b/_redirects index f91eda55cc484..6d1aa85864f97 100644 --- a/_redirects +++ b/_redirects @@ -237,6 +237,7 @@ /docs/user-guide/node-selection/ /docs/concepts/configuration/assign-pod-node/ 301 /docs/user-guide/persistent-volumes/ /docs/concepts/storage/persistent-volumes/ 301 /docs/user-guide/persistent-volumes/index /docs/concepts/storage/persistent-volumes/ 301 +/docs/user-guide/persistent-volumes/index.md /docs/concepts/storage/persistent-volumes/ 301 /docs/user-guide/persistent-volumes/walkthrough/ /docs/tasks/configure-pod-container/configure-persistent-volume-storage/ 301 /docs/user-guide/petset/ /docs/concepts/workloads/controllers/petset/ 301 /docs/user-guide/petset/bootstrapping/ /docs/concepts/workloads/controllers/petset/ 301 diff --git a/docs/concepts/cluster-administration/kubelet-garbage-collection.md b/docs/concepts/cluster-administration/kubelet-garbage-collection.md index 
0a1036cd69ca1..068ee6bd2ab0c 100644 --- a/docs/concepts/cluster-administration/kubelet-garbage-collection.md +++ b/docs/concepts/cluster-administration/kubelet-garbage-collection.md @@ -72,4 +72,4 @@ Including: | `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` | eviction generalizes disk thresholds to other resources | | `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | eviction generalizes disk pressure transition to other resources | -See [Configuring Out Of Resource Handling](/docs/concepts/cluster-administration/out-of-resource/) for more details. +See [Configuring Out Of Resource Handling](/docs/tasks/administer-cluster/out-of-resource/) for more details. diff --git a/docs/concepts/cluster-administration/manage-deployment.md b/docs/concepts/cluster-administration/manage-deployment.md index 4a946071255b8..c89990c04421b 100644 --- a/docs/concepts/cluster-administration/manage-deployment.md +++ b/docs/concepts/cluster-administration/manage-deployment.md @@ -256,7 +256,7 @@ my-nginx-2035384211-u3t6x 1/1 Running 0 23m fe This outputs all "app=nginx" pods, with an additional label column of pods' tier (specified with `-L` or `--label-columns`). -For more information, please see [labels](/docs/user-guide/labels/) and [kubectl label](/docs/user-guide/kubectl/{{page.version}}/#label) document. +For more information, please see [labels](/docs/concepts/overview/working-with-objects/labels/) and [kubectl label](/docs/user-guide/kubectl/{{page.version}}/#label) document. ## Updating annotations diff --git a/docs/concepts/configuration/overview.md b/docs/concepts/configuration/overview.md index c354a2a6df100..61149cd3cad93 100644 --- a/docs/concepts/configuration/overview.md +++ b/docs/concepts/configuration/overview.md @@ -58,7 +58,7 @@ This is a living document. If you think of something that is not on this list bu ## Using Labels -- Define and use [labels](/docs/user-guide/labels/) that identify __semantic attributes__ of your application or deployment. For example, instead of attaching a label to a set of pods to explicitly represent some service (For example, `service: myservice`), or explicitly representing the replication controller managing the pods (for example, `controller: mycontroller`), attach labels that identify semantic attributes, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. This will let you select the object groups appropriate to the context— for example, a service for all "tier: frontend" pods, or all "test" phase components of app "myapp". See the [guestbook](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/) app for an example of this approach. +- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or deployment. For example, instead of attaching a label to a set of pods to explicitly represent some service (For example, `service: myservice`), or explicitly representing the replication controller managing the pods (for example, `controller: mycontroller`), attach labels that identify semantic attributes, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. This will let you select the object groups appropriate to the context— for example, a service for all "tier: frontend" pods, or all "test" phase components of app "myapp". See the [guestbook](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/) app for an example of this approach. 
A service can be made to span multiple deployments, such as is done across [rolling updates](/docs/tasks/run-application/rolling-update-replication-controller/), by simply omitting release-specific labels from its selector, rather than updating a service's selector to match the replication controller's selector fully. @@ -84,7 +84,7 @@ This is a living document. If you think of something that is not on this list bu - Use `kubectl delete` rather than `stop`. `Delete` has a superset of the functionality of `stop`, and `stop` is deprecated. -- Use kubectl bulk operations (via files and/or labels) for get and delete. See [label selectors](/docs/user-guide/labels/#label-selectors) and [using labels effectively](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively). +- Use kubectl bulk operations (via files and/or labels) for get and delete. See [label selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) and [using labels effectively](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively). - Use `kubectl run` and `expose` to quickly create and expose single container Deployments. See the [quick start guide](/docs/user-guide/quick-start/) for an example. diff --git a/docs/concepts/overview/working-with-objects/annotations.md b/docs/concepts/overview/working-with-objects/annotations.md index 2bb89e17e5a50..e0b844325328c 100644 --- a/docs/concepts/overview/working-with-objects/annotations.md +++ b/docs/concepts/overview/working-with-objects/annotations.md @@ -55,7 +55,7 @@ and the like. {% endcapture %} {% capture whatsnext %} -Learn more about [Labels and Selectors](/docs/user-guide/labels/). +Learn more about [Labels and Selectors](/docs/concepts/overview/working-with-objects/labels/). {% endcapture %} {% include templates/concept.md %} diff --git a/docs/concepts/storage/persistent-volumes.md b/docs/concepts/storage/persistent-volumes.md index 96ebfed733bb3..0ad9d76d069ce 100644 --- a/docs/concepts/storage/persistent-volumes.md +++ b/docs/concepts/storage/persistent-volumes.md @@ -315,7 +315,7 @@ Claims, like pods, can request specific quantities of a resource. In this case, ### Selector -Claims can specify a [label selector](/docs/user-guide/labels/#label-selectors) to further filter the set of volumes. Only the volumes whose labels match the selector can be bound to the claim. The selector can consist of two fields: +Claims can specify a [label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors) to further filter the set of volumes. Only the volumes whose labels match the selector can be bound to the claim. The selector can consist of two fields: * matchLabels - the volume must have a label with this value * matchExpressions - a list of requirements made by specifying key, list of values, and operator that relates the key and values. Valid operators include In, NotIn, Exists, and DoesNotExist. diff --git a/docs/concepts/workloads/controllers/replicaset.md b/docs/concepts/workloads/controllers/replicaset.md index a9247f15aaba3..dfe140601f29e 100644 --- a/docs/concepts/workloads/controllers/replicaset.md +++ b/docs/concepts/workloads/controllers/replicaset.md @@ -12,7 +12,7 @@ ReplicaSet is the next-generation Replication Controller. The only difference between a _ReplicaSet_ and a [_Replication Controller_](/docs/concepts/workloads/controllers/replicationcontroller/) right now is the selector support. 
ReplicaSet supports the new set-based selector requirements -as described in the [labels user guide](/docs/user-guide/labels/#label-selectors) +as described in the [labels user guide](/docs/concepts/overview/working-with-objects/labels/#label-selectors) whereas a Replication Controller only supports equality-based selector requirements. {% endcapture %} diff --git a/docs/concepts/workloads/controllers/replicationcontroller.md b/docs/concepts/workloads/controllers/replicationcontroller.md index 42f929317f34a..12a37bc4456a7 100644 --- a/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/docs/concepts/workloads/controllers/replicationcontroller.md @@ -129,7 +129,7 @@ different, and the `.metadata.labels` do not affect the behavior of the Replicat ### Pod Selector -The `.spec.selector` field is a [label selector](/docs/user-guide/labels/#label-selectors). A ReplicationController +The `.spec.selector` field is a [label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors). A ReplicationController manages all the pods with labels that match the selector. It does not distinguish between pods that it created or deleted and pods that another person or process created or deleted. This allows the ReplicationController to be replaced without affecting the running pods. @@ -243,7 +243,7 @@ object](/docs/api-reference/{{page.version}}/#replicationcontroller-v1-core). ### ReplicaSet -[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/user-guide/labels/#set-based-requirement). +[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/concepts/overview/working-with-objects/labels/#set-based-requirement). It’s mainly used by [`Deployment`](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod creation, deletion and updates. Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don’t require updates at all. diff --git a/docs/tasks/access-application-cluster/web-ui-dashboard.md b/docs/tasks/access-application-cluster/web-ui-dashboard.md index f77da393e5d94..a6bd934444684 100644 --- a/docs/tasks/access-application-cluster/web-ui-dashboard.md +++ b/docs/tasks/access-application-cluster/web-ui-dashboard.md @@ -64,7 +64,7 @@ To access the deploy wizard from the Welcome page, click the respective button. The deploy wizard expects that you provide the following information: -- **App name** (mandatory): Name for your application. A [label](/docs/user-guide/labels/) with the name will be added to the Deployment and Service, if any, that will be deployed. +- **App name** (mandatory): Name for your application. A [label](/docs/concepts/overview/working-with-objects/labels/) with the name will be added to the Deployment and Service, if any, that will be deployed. The application name must be unique within the selected Kubernetes [namespace](/docs/tasks/administer-cluster/namespaces/). It must start with a lowercase character, and end with a lowercase character or a number, and contain only lowercase letters, numbers and dashes (-). It is limited to 24 characters. Leading and trailing spaces are ignored. 
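The app name rule quoted above is, in effect, the DNS label convention. As an illustrative aside (not part of this patch), the constraint can be checked with a regular expression; the pattern below is a sketch of the stated rule, not the Dashboard's actual validation code, and the sample name is made up:

```shell
# Illustrative only -- not part of this patch. Rough check of the stated rule:
# starts with a lowercase letter, contains only lowercase letters, numbers and
# dashes, ends with a lowercase letter or number, at most 24 characters.
name="my-app-3"   # placeholder value
if printf '%s' "$name" | grep -Eq '^[a-z]([-a-z0-9]{0,22}[a-z0-9])?$'; then
  echo "valid app name: $name"
else
  echo "invalid app name: $name"
fi
```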
@@ -84,7 +84,7 @@ If needed, you can expand the **Advanced options** section where you can specify - **Description**: The text you enter here will be added as an [annotation](/docs/concepts/overview/working-with-objects/annotations/) to the Deployment and displayed in the application's details. -- **Labels**: Default [labels](/docs/user-guide/labels/) to be used for your application are application name and version. You can specify additional labels to be applied to the Deployment, Service (if any), and Pods, such as release, environment, tier, partition, and release track. +- **Labels**: Default [labels](/docs/concepts/overview/working-with-objects/labels/) to be used for your application are application name and version. You can specify additional labels to be applied to the Deployment, Service (if any), and Pods, such as release, environment, tier, partition, and release track. Example: diff --git a/docs/tasks/administer-cluster/out-of-resource.md b/docs/tasks/administer-cluster/out-of-resource.md index a86f70ddf2c4d..02098595df1fc 100644 --- a/docs/tasks/administer-cluster/out-of-resource.md +++ b/docs/tasks/administer-cluster/out-of-resource.md @@ -49,7 +49,7 @@ container, and if users use the [node allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) feature, out of resource decisions are made local to the end user pod part of the cgroup hierarchy as well as the root node. This -[script](/docs/concepts/cluster-administration/out-of-resource/memory-available.sh) +[script](/docs/tasks/administer-cluster/out-of-resource/memory-available.sh) reproduces the same set of steps that the `kubelet` performs to calculate `memory.available`. The `kubelet` excludes inactive_file (i.e. # of bytes of file-backed memory on inactive LRU list) from its calculation as it assumes that diff --git a/docs/tasks/configure-pod-container/assign-pods-nodes.md b/docs/tasks/configure-pod-container/assign-pods-nodes.md index 06a29e575ac68..613c731a0ef40 100644 --- a/docs/tasks/configure-pod-container/assign-pods-nodes.md +++ b/docs/tasks/configure-pod-container/assign-pods-nodes.md @@ -75,7 +75,7 @@ a `disktype=ssd` label. {% capture whatsnext %} Learn more about -[labels and selectors](/docs/user-guide/labels/). +[labels and selectors](/docs/concepts/overview/working-with-objects/labels/). 
{% endcapture %} {% include templates/task.md %} diff --git a/docs/tasks/debug-application-cluster/debug-application-introspection.md b/docs/tasks/debug-application-cluster/debug-application-introspection.md index 55c4c24c7f8ce..292e86a36305c 100644 --- a/docs/tasks/debug-application-cluster/debug-application-introspection.md +++ b/docs/tasks/debug-application-cluster/debug-application-introspection.md @@ -379,7 +379,7 @@ Learn about additional debugging tools, including: * [Logging](/docs/user-guide/logging/overview) * [Monitoring](/docs/user-guide/monitoring) * [Getting into containers via `exec`](/docs/user-guide/getting-into-containers) -* [Connecting to containers via proxies](/docs/user-guide/connecting-to-applications-proxy) +* [Connecting to containers via proxies](/docs/tasks/access-kubernetes-api/http-proxy-access-api/) * [Connecting to containers via port forwarding](/docs/user-guide/connecting-to-applications-port-forward) diff --git a/docs/tasks/job/parallel-processing-expansion.md b/docs/tasks/job/parallel-processing-expansion.md index f8fac8066ec0f..7feb9c7602a4f 100644 --- a/docs/tasks/job/parallel-processing-expansion.md +++ b/docs/tasks/job/parallel-processing-expansion.md @@ -109,7 +109,7 @@ Processing item cherry In the first example, each instance of the template had one parameter, and that parameter was also used as a label. However label keys are limited in [what characters they can -contain](/docs/user-guide/labels/#syntax-and-character-set). +contain](/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set). This slightly more complex example uses the jinja2 template language to generate our objects. We will use a one-line python script to convert the template to a file. diff --git a/docs/tasks/manage-daemon/update-daemon-set.md b/docs/tasks/manage-daemon/update-daemon-set.md index 653eec57a145c..46a5823218b6e 100644 --- a/docs/tasks/manage-daemon/update-daemon-set.md +++ b/docs/tasks/manage-daemon/update-daemon-set.md @@ -159,7 +159,7 @@ causes: The rollout is stuck because new DaemonSet pods can't be scheduled on at least one node. This is possible when the node is -[running out of resources](/docs/concepts/cluster-administration/out-of-resource/). +[running out of resources](/docs/tasks/administer-cluster/out-of-resource/). When this happens, find the nodes that don't have the DaemonSet pods scheduled on by comparing the output of `kubectl get nodes` and the output of: diff --git a/docs/user-guide/walkthrough/k8s201.md b/docs/user-guide/walkthrough/k8s201.md index f5f42d7120473..b9d659c05f9a5 100644 --- a/docs/user-guide/walkthrough/k8s201.md +++ b/docs/user-guide/walkthrough/k8s201.md @@ -46,7 +46,7 @@ List all Pods with the label `app=nginx`: kubectl get pods -l app=nginx ``` -For more information, see [Labels](/docs/user-guide/labels/). +For more information, see [Labels](/docs/concepts/overview/working-with-objects/labels/). They are a core concept used by two additional Kubernetes building blocks: Deployments and Services. From cfa77d46707aba890cd04313292ebb824e5b7345 Mon Sep 17 00:00:00 2001 From: Steve Perry Date: Mon, 25 Sep 2017 19:14:20 -0700 Subject: [PATCH 73/87] Update links to avoid redirects. (#5627) * Update links to avoid redirects. * Fix double forward slash. 
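Editorial aside: several pages touched by the preceding patches concern labels and selectors. For quick reference, a hedged sketch of the two selector flavors follows; the label keys, values, and pod name are placeholders, not taken from the patches:

```shell
# Illustrative only -- not part of this patch.
kubectl get pods -l app=nginx                           # equality-based selector
kubectl get pods -l 'environment in (production, qa)'   # set-based selector
kubectl label pod some-pod tier=frontend                # attach a label (hypothetical pod)
```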
--- docs/concepts/containers/container-environment-variables.md | 2 +- docs/concepts/storage/volumes.md | 2 +- docs/concepts/workloads/controllers/petset.md | 4 ++-- docs/getting-started-guides/gce.md | 2 +- docs/getting-started-guides/scratch.md | 4 ++-- docs/tasks/administer-cluster/calico-network-policy.md | 2 +- .../run-application/horizontal-pod-autoscale-walkthrough.md | 2 +- docs/tutorials/index.md | 2 +- 8 files changed, 10 insertions(+), 10 deletions(-) diff --git a/docs/concepts/containers/container-environment-variables.md b/docs/concepts/containers/container-environment-variables.md index d5d0975cb7669..513b09cb46f22 100644 --- a/docs/concepts/containers/container-environment-variables.md +++ b/docs/concepts/containers/container-environment-variables.md @@ -31,7 +31,7 @@ It is available through the `hostname` command or the function call in libc. The Pod name and namespace are available as environment variables through the -[downward API](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/). +[downward API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/). User defined environment variables from the Pod definition are also available to the Container, as are any environment variables specified statically in the Docker image. diff --git a/docs/concepts/storage/volumes.md b/docs/concepts/storage/volumes.md index baba44b07e54c..d919a01d09903 100644 --- a/docs/concepts/storage/volumes.md +++ b/docs/concepts/storage/volumes.md @@ -454,7 +454,7 @@ details. A `downwardAPI` volume is used to make downward API data available to applications. It mounts a directory and writes the requested data in plain text files. -See the [`downwardAPI` volume example](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/) for more details. +See the [`downwardAPI` volume example](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) for more details. ### projected diff --git a/docs/concepts/workloads/controllers/petset.md b/docs/concepts/workloads/controllers/petset.md index 2ec7151d1b924..48e6f6bd81656 100644 --- a/docs/concepts/workloads/controllers/petset.md +++ b/docs/concepts/workloads/controllers/petset.md @@ -10,7 +10,7 @@ approvers: title: PetSets --- -__Warning:__ Starting in Kubernetes version 1.5, PetSet has been renamed to [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets). To use (or continue to use) PetSet in Kubernetes 1.5, you _must_ [migrate](/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set/) your existing PetSets to StatefulSets. For information on working with StatefulSet, see the tutorial on [how to run replicated stateful applications](/docs/tutorials/stateful-application/run-replicated-stateful-application). +__Warning:__ Starting in Kubernetes version 1.5, PetSet has been renamed to [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets). To use (or continue to use) PetSet in Kubernetes 1.5, you _must_ [migrate](/docs/tasks/manage-stateful-set/upgrade-pet-set-to-stateful-set/) your existing PetSets to StatefulSets. For information on working with StatefulSet, see the tutorial on [how to run replicated stateful applications](/docs/tasks/run-application/run-replicated-stateful-application/). __This document has been deprecated__, but can still apply if you're using Kubernetes version 1.4 or earlier. @@ -227,7 +227,7 @@ web-1 A pet can piece together its own identity: -1. 
Use the [downward api](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/) to find its pod name +1. Use the [downward api](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) to find its pod name 2. Run `hostname` to find its DNS name 3. Run `mount` or `df` to find its volumes (usually this is unnecessary) diff --git a/docs/getting-started-guides/gce.md b/docs/getting-started-guides/gce.md index 3598d7c46a864..74823a8ee3321 100644 --- a/docs/getting-started-guides/gce.md +++ b/docs/getting-started-guides/gce.md @@ -202,7 +202,7 @@ field values: IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -GCE | Saltstack | Debian | GCE | [docs](/docs/getting-started-guides/gce) | | Project +GCE | Saltstack | Debian | GCE | [docs](/docs/getting-started-guides/gce/) | | Project For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/scratch.md b/docs/getting-started-guides/scratch.md index 0e93a85301b35..43e0bc7db22b1 100644 --- a/docs/getting-started-guides/scratch.md +++ b/docs/getting-started-guides/scratch.md @@ -91,7 +91,7 @@ to implement one of the above options: - You can also write your own. - **Compile support directly into Kubernetes** - This can be done by implementing the "Routes" interface of a Cloud Provider module. - - The Google Compute Engine ([GCE](/docs/getting-started-guides/gce)/) and [AWS](/docs/getting-started-guides/aws/) guides use this approach. + - The Google Compute Engine ([GCE](/docs/getting-started-guides/gce/)/) and [AWS](/docs/getting-started-guides/aws/) guides use this approach. - **Configure the network external to Kubernetes** - This can be done by manually running commands, or through a set of externally maintained scripts. - You have to implement this yourself, but it can give you an extra degree of flexibility. @@ -896,7 +896,7 @@ pinging or SSH-ing from one node to another. ### Getting Help -If you run into trouble, please see the section on [troubleshooting](/docs/getting-started-guides/gce#troubleshooting), post to the +If you run into trouble, please see the section on [troubleshooting](/docs/getting-started-guides/gce/#troubleshooting), post to the [kubernetes-users group](https://groups.google.com/forum/#!forum/kubernetes-users), or come ask questions on [Slack](/docs/troubleshooting#slack). ## Support Level diff --git a/docs/tasks/administer-cluster/calico-network-policy.md b/docs/tasks/administer-cluster/calico-network-policy.md index 4543aa7069743..424ffea4df518 100644 --- a/docs/tasks/administer-cluster/calico-network-policy.md +++ b/docs/tasks/administer-cluster/calico-network-policy.md @@ -15,7 +15,7 @@ This page shows how to use Calico for NetworkPolicy. 
{% capture steps %} ## Deploying a cluster using Calico -You can deploy a cluster using Calico for network policy in the default [GCE deployment](/docs/getting-started-guides/gce) using the following set of commands: +You can deploy a cluster using Calico for network policy in the default [GCE deployment](/docs/getting-started-guides/gce/) using the following set of commands: ```shell export NETWORK_POLICY_PROVIDER=calico diff --git a/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index 6d23d7d008a91..b13ea7a0ff8ea 100644 --- a/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -18,7 +18,7 @@ This document walks you through an example of enabling Horizontal Pod Autoscalin This example requires a running Kubernetes cluster and kubectl, version 1.2 or later. [Heapster](https://github.com/kubernetes/heapster) monitoring needs to be deployed in the cluster as Horizontal Pod Autoscaler uses it to collect metrics -(if you followed [getting started on GCE guide](/docs/getting-started-guides/gce), +(if you followed [getting started on GCE guide](/docs/getting-started-guides/gce/), heapster monitoring will be turned-on by default). To specify multiple resource metrics for a Horizontal Pod Autoscaler, you must have a Kubernetes cluster diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md index e0a9b24d95aeb..09c9ad91ea8b6 100644 --- a/docs/tutorials/index.md +++ b/docs/tutorials/index.md @@ -31,7 +31,7 @@ each of which has a sequence of steps. * [Running a Single-Instance Stateful Application](/docs/tutorials/stateful-application/run-stateful-application/) -* [Running a Replicated Stateful Application](/docs/tutorials/stateful-application/run-replicated-stateful-application/) +* [Running a Replicated Stateful Application](/docs/tasks/run-application/run-replicated-stateful-application/) * [Example: WordPress and MySQL with Persistent Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/) From 255e3eb83df5356b9ba4000906f5b162e3926ccf Mon Sep 17 00:00:00 2001 From: Dragons Date: Tue, 26 Sep 2017 10:30:48 +0800 Subject: [PATCH 74/87] concepts-overview-components-fix --- cn/docs/concepts/overview/components.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/cn/docs/concepts/overview/components.md b/cn/docs/concepts/overview/components.md index 897ea8b42c0f2..344d25bcfe81a 100644 --- a/cn/docs/concepts/overview/components.md +++ b/cn/docs/concepts/overview/components.md @@ -81,7 +81,7 @@ dashboard 提供了集群状态的只读概述。有关更多信息,请参阅[ #### 集群层面日志 -[Cluster-level logging](/docs/user-guide/logging/overview) 机制负责将容器的日志数据保存到一个集中的日志存储中,该存储能够提供搜索和浏览接口。 +[集群层面日志](/docs/user-guide/logging/overview) 机制负责将容器的日志数据保存到一个集中的日志存储中,该存储能够提供搜索和浏览接口。 ## 节点组件 @@ -89,13 +89,13 @@ dashboard 提供了集群状态的只读概述。有关更多信息,请参阅[ ### kubelet -[kubelet](/docs/admin/kubelet)是主要的节点代理,它监测已分配给其节点的 Pod(通过 apiserver 或通过本地配置文件),提供如下的功能: +[kubelet](/docs/admin/kubelet)是主要的节点代理,它监测已分配给其节点的 Pod(通过 apiserver 或通过本地配置文件),提供如下功能: * 挂载 Pod 所需要的数据卷(Volume)。 * 下载 Pod 的 secrets。 * 通过 Docker 运行(或通过 rkt)运行 Pod 的容器。 * 周期性的对容器生命周期进行探测。 -* 如果需要,通过创建 *mirror pod* 将 Pod 的状态报告回系统的其余部分。 +* 如果需要,通过创建 *镜像 Pod(Mirror Pod)* 将 Pod 的状态报告回系统的其余部分。 * 将节点的状态报告回系统的其余部分。 ### kube-proxy @@ -113,7 +113,7 @@ Docker 用于运行容器。 ### supervisord -supervisord 是一个轻量级的过程监控系统,可以用来保证 kubelet 和 docker 运行。 +supervisord 是一个轻量级的进程监控系统,可以用来保证 kubelet 和 docker 运行。 ### 
fluentd From e8d9bb60f48c3c445effab21c861e2d92732b39e Mon Sep 17 00:00:00 2001 From: Dragons Date: Tue, 26 Sep 2017 10:59:22 +0800 Subject: [PATCH 75/87] concepts-overview-components+abac-fix --- cn/docs/admin/authorization/abac.md | 2 +- cn/docs/concepts/overview/components.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/cn/docs/admin/authorization/abac.md b/cn/docs/admin/authorization/abac.md index c1b71874328a1..1db4103ed8c8c 100644 --- a/cn/docs/admin/authorization/abac.md +++ b/cn/docs/admin/authorization/abac.md @@ -75,7 +75,7 @@ title: ABAC 模式 Kubectl 使用 api-server 的 `/api` 和 `/apis` 端点进行协商客户端/服务器版本。 通过创建/更新来验证发送到API的对象操作,kubectl 查询某些 swagger 资源。 对于API版本"v1", 那就是`/swaggerapi/api/v1` & `/swaggerapi/ experimental/v1`。 -当使用 ABAC 授权时,这些特殊资源必须明确通过策略中的 `nonResourcePath` 属性暴露出来(参见下面的[examples](#examples)): +当使用 ABAC 授权时,这些特殊资源必须明确通过策略中的 `nonResourcePath` 属性暴露出来(参见下面的[例子](#examples)): * `/api`,`/api/*`,`/apis`和`/apis/*` 用于 API 版本协商. * `/version` 通过 `kubectl version` 检索服务器版本. diff --git a/cn/docs/concepts/overview/components.md b/cn/docs/concepts/overview/components.md index 344d25bcfe81a..8275ba4df614c 100644 --- a/cn/docs/concepts/overview/components.md +++ b/cn/docs/concepts/overview/components.md @@ -117,7 +117,7 @@ supervisord 是一个轻量级的进程监控系统,可以用来保证 kubelet ### fluentd -fluentd 是一个守护进程,它有助于提供[cluster-level logging](#cluster-level-logging) 集群层面的日志。 +fluentd 是一个守护进程,它有助于提供[集群层面日志](#cluster-level-logging) 集群层面的日志。 {% endcapture %} From 1309ea3832bfb05ec1c841950f02ab32d6e2d595 Mon Sep 17 00:00:00 2001 From: tanshanshan Date: Tue, 26 Sep 2017 11:05:43 +0800 Subject: [PATCH 76/87] fix typo --- docs/tasks/administer-cluster/configure-upgrade-etcd.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tasks/administer-cluster/configure-upgrade-etcd.md b/docs/tasks/administer-cluster/configure-upgrade-etcd.md index b672375a9ee19..c5b7037185f65 100644 --- a/docs/tasks/administer-cluster/configure-upgrade-etcd.md +++ b/docs/tasks/administer-cluster/configure-upgrade-etcd.md @@ -184,7 +184,7 @@ etcd supports restoring from snapshots that are taken from an etcd process of th Before starting the restore operation, a snapshot file must be present. It can either be a snapshot file from a previous backup operation, or from a remaining [data directory](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md#--data-dir). `datadir` is located at `$DATA_DIR/member/snap/db`. For more information and examples on restoring a cluster from a snapshot file, see [etcd disaster recovery documentation](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/recovery.md#restoring-a-cluster). -If the access URLs of the restored cluster is changed from the previous cluster, the Kubernetes API server must be reconfigured accordingly. In this case, restart Kubernetes API server with the flag `--etcd-servers=$NEW_ETCD_CLUSTER` instead of the flag `--etcd-servers=$OLD__ETCD_CLUSTER`. Replace `$NEW_ETCD_CLUSTER` and `$OLD__ETCD_CLUSTER` with the respective IP addresses. If a load balancer is used in front of an etcd cluster, you might need to update the load balancer instead. +If the access URLs of the restored cluster is changed from the previous cluster, the Kubernetes API server must be reconfigured accordingly. In this case, restart Kubernetes API server with the flag `--etcd-servers=$NEW_ETCD_CLUSTER` instead of the flag `--etcd-servers=$OLD_ETCD_CLUSTER`. Replace `$NEW_ETCD_CLUSTER` and `$OLD_ETCD_CLUSTER` with the respective IP addresses. 
If a load balancer is used in front of an etcd cluster, you might need to update the load balancer instead. If the majority of etcd members have permanently failed, the etcd cluster is considered failed. In this scenario, Kubernetes cannot make any changes to its current state. Although the scheduled pods might continue to run, no new pods can be scheduled. In such cases, recover the etcd cluster and potentially reconfigure Kubernetes API server to fix the issue. From 356d3ad2ff7d26bbd22b3bbc7026c4bee8e6cfbb Mon Sep 17 00:00:00 2001 From: Joe Heck Date: Mon, 25 Sep 2017 20:44:33 -0700 Subject: [PATCH 77/87] minor encoding fix for CN page --- cn/docs/user-guide/kubectl-overview.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/cn/docs/user-guide/kubectl-overview.md b/cn/docs/user-guide/kubectl-overview.md index ef90b69247aa6..704999df096c7 100644 --- a/cn/docs/user-guide/kubectl-overview.md +++ b/cn/docs/user-guide/kubectl-overview.md @@ -2,7 +2,7 @@ approvers: - bgrant0607 - hw-qiaolei -title:kubectl概述 +title: kubectl概述 --- kubectl是用于针对Kubernetes集群运行命令的命令行接口。本概述涵盖`kubectl`语法,描述命令操作,并提供常见的示例。有关每个命令的详细信息,包括所有支持的flags和子命令,请参考[kubectl](/docs/user-guide/kubectl)相关文档。有关安装说明,请参阅[安装kubectl](/docs/tasks/kubectl/install/)。 @@ -22,19 +22,19 @@ kubectl [command] [TYPE] [NAME] [flags] $ kubectl get pod pod1 $ kubectl get pods pod1 $ kubectl get po pod1 - + `NAME`:指定资源的名称。名称区分大小写。如果省略名称,则会显示所有资源的详细信息,比如`$ kubectl get pods`。 在多个资源上执行操作时,可以按类型和名称指定每个资源,或指定一个或多个文件: * 按类型和名称指定资源: - + * 要分组资源,如果它们都是相同的类型:`TYPE1 name1 name2 name<#>`.
    例: `$ kubectl get pod example-pod1 example-pod2` * 要分别指定多种资源类型: `TYPE1/name1 TYPE1/name2 TYPE2/name3 TYPE<#>/name<#>`.
    例: `$ kubectl get pod/example-pod1 replicationcontroller/example-rc1` - + 使用一个或多个文件指定资源: `-f file1 -f file2 -f file<#>` 使用[YAML而不是JSON](/docs/concepts/configuration/overview/#general-config-tips),因为YAML往往更加用户友好,特别是对于配置文件。
    例:$ kubectl get pod -f ./pod.yaml
@@ -286,4 +286,4 @@ $ kubectl logs -f
 
 ## 下一步
 
-开始使用[kubectl](/docs/user-guide/kubectl)命令。
\ No newline at end of file
+开始使用[kubectl](/docs/user-guide/kubectl)命令。

From c88e80b7c031d8afcea13d495ca77d31fecb7f92 Mon Sep 17 00:00:00 2001
From: lichuqiang
Date: Tue, 26 Sep 2017 14:24:04 +0800
Subject: [PATCH 78/87] translate doc resource-quotas into chinese

---
 cn/docs/concepts/policy/resource-quotas.md | 220 +++++++++++++++++++++
 1 file changed, 220 insertions(+)
 create mode 100644 cn/docs/concepts/policy/resource-quotas.md

diff --git a/cn/docs/concepts/policy/resource-quotas.md b/cn/docs/concepts/policy/resource-quotas.md
new file mode 100644
index 0000000000000..5054cb7f25c49
--- /dev/null
+++ b/cn/docs/concepts/policy/resource-quotas.md
@@ -0,0 +1,220 @@
+---
+approvers:
+- derekwaynecarr
+title: 资源配额
+---
+
+当多个用户或团队共享具有固定数目节点的集群时,人们会担心有人使用的资源超出应有的份额。
+
+资源配额是帮助管理员解决这一问题的工具。
+
+资源配额, 通过 `ResourceQuota` 对象来定义, 对每个namespace的资源消耗总量提供限制。 它可以按类型限制namespace下可以创建的对象的数量,也可以限制可被该项目以资源形式消耗的计算资源的总量。
+
+资源配额的工作方式如下:
+
+- 不同的团队在不同的namespace下工作。 目前这是自愿的, 但计划通过ACL (Access Control List 访问控制列表)
+  使其变为强制性的。
+- 管理员为每个namespace创建一个或多个资源配额对象。
+- 用户在namespace下创建资源 (pods、 services等),同时配额系统会跟踪使用情况,来确保其不超过
+  资源配额中定义的硬性资源限额。
+- 如果资源的创建或更新违反了配额约束,则请求会失败,并返回 HTTP状态码 `403 FORBIDDEN` ,以及说明违反配额
+  约束的信息。
+- 如果namespace下的计算资源 (如 `cpu` 和 `memory`)的配额被启用,则用户必须为这些资源设定请求值(request)
+  和约束值(limit),否则配额系统将拒绝Pod的创建。
+  提示: 可使用 LimitRange 准入控制器来为没有设置计算资源需求的Pod设置默认值。
+  作为示例,请参考 [演练](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) 来避免这个问题。
+
+下面是使用namespace和配额构建策略的示例:
+
+- 在具有 32 GiB 内存 和 16 核CPU资源的集群中, 允许A团队使用 20 GiB 内存 和 10 核的CPU资源,
+  允许B团队使用 10GiB 内存和 4 核的CPU资源, 并且预留 2GiB 内存和 2 核的CPU资源供将来分配。
+- 限制 "testing" namespace使用 1 核CPU资源和 1GiB 内存。 允许 "production" namespace使用任意数量。
+
+在集群容量小于各namespace配额总和的情况下,可能存在资源竞争。 Kubernetes采用先到先服务的方式处理这类问题。
+
+无论是资源竞争还是配额的变更都不会影响已经创建的资源。
+
+## 启用资源配额
+
+资源配额的支持在很多Kubernetes版本中是默认开启的。 当 apiserver 的
+`--admission-control=` 参数中包含 `ResourceQuota` 时,资源配额会被启用。
+
+当namespace中存在一个 `ResourceQuota` 对象时,该namespace即开始实施资源配额管理。
+一个namespace中最多只应存在一个 `ResourceQuota` 对象
+
+## 计算资源配额
+
+用户可以对给定namespace下的 [计算资源](/docs/user-guide/compute-resources) 总量进行限制。
+
+配额机制所支持的资源类型:
+
+| 资源名称 | 描述 |
+| --------------------- | ----------------------------------------------------------- |
+| `cpu` | 所有非终止状态的Pod中,其CPU需求总量不能超过该值。 |
+| `limits.cpu` | 所有非终止状态的Pod中,其CPU限额总量不能超过该值。 |
+| `limits.memory` | 所有非终止状态的Pod中,其内存限额总量不能超过该值。 |
+| `memory` | 所有非终止状态的Pod中,其内存需求总量不能超过该值。 |
+| `requests.cpu` | 所有非终止状态的Pod中,其CPU需求总量不能超过该值。 |
+| `requests.memory` | 所有非终止状态的Pod中,其内存需求总量不能超过该值。 |
+
+## 存储资源配额
+
+用户可以对给定namespace下的 [存储资源](/docs/user-guide/persistent-volumes) 总量进行限制。
+
+此外,还可以根据相关的存储类(Storage Class)来限制存储资源的消耗。
+
+| 资源名称 | 描述 |
+| --------------------- | ----------------------------------------------------------- |
+| `requests.storage` | 所有的PVC中,存储资源的需求不能超过该值。 |
+| `persistentvolumeclaims` | namespace中所允许的 [PVC](/docs/user-guide/persistent-volumes/#persistentvolumeclaims) 总量。 |
+| `<storage-class-name>.storageclass.storage.k8s.io/requests.storage` | 所有该storage-class-name相关的PVC中, 存储资源的需求不能超过该值。 |
+| `<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims` | namespace中所允许的该storage-class-name相关的[PVC](/docs/user-guide/persistent-volumes/#persistentvolumeclaims)的总量。 |
+
+例如,如果一个操作人员针对 "黄金" 存储类型与 "铜" 存储类型设置配额,操作员可以
+定义配额如下:
+
+* `gold.storageclass.storage.k8s.io/requests.storage: 500Gi`
+* `bronze.storageclass.storage.k8s.io/requests.storage: 100Gi`
+
+## 对象数量配额
+
+给定类型的对象数量可以被限制。 支持以下类型:
+
+| 资源名称 | 描述 |
+| ------------------------------- | ------------------------------------------------- |
+| `configmaps` | namespace下允许存在的configmap的数量。 |
+| `persistentvolumeclaims` | namespace下允许存在的[PVC](/docs/user-guide/persistent-volumes/#persistentvolumeclaims)的数量。 |
+| `pods` | namespace下允许存在的非终止状态的pod数量。 如果pod 的 `status.phase 为 Failed 或 Succeeded` , 那么其处于终止状态。 |
+| `replicationcontrollers` | namespace下允许存在的replication controllers的数量。 |
+| `resourcequotas` | namespace下允许存在的 [resource quotas](/docs/admin/admission-controllers/#resourcequota) 的数量。 |
+| `services` | namespace下允许存在的service的数量。 |
+| `services.loadbalancers` | namespace下允许存在的load balancer类型的service的数量。 |
+| `services.nodeports` | namespace下允许存在的node port类型的service的数量。 |
+| `secrets` | namespace下允许存在的secret的数量。 |
+
+例如 `pods` 配额统计并保证单个namespace下创建 `pods` 的最大数量。
+
+用户可能希望在namespace中为pod设置配额,来避免有用户创建很多小的pod,从而耗尽集群提供的pod IP地址。
+
+## 配额作用域
+
+每个配额都有一组相关的作用域(scope),配额只会对作用域内的资源生效。
+
+当一个作用域被添加到配额中后,它会对作用域相关的资源数量作限制。
+如配额中指定了允许(作用域)集合之外的资源,会导致验证错误。
+
+| 范围 | 描述 |
+| ----- | ----------- |
+| `Terminating` | 匹配 `spec.activeDeadlineSeconds >= 0` 的pod。 |
+| `NotTerminating` | 匹配 `spec.activeDeadlineSeconds is nil` 的pod。 |
+| `BestEffort` | 匹配"尽力而为(best effort)"服务类型的pod。 |
+| `NotBestEffort` | 匹配非"尽力而为(best effort)"服务类型的pod。 |
+
+`BestEffort` 作用域限制配额跟踪以下资源: `pods`
+
+`Terminating`、 `NotTerminating` 和 `NotBestEffort` 限制配额跟踪以下资源:
+
+* `cpu`
+* `limits.cpu`
+* `limits.memory`
+* `memory`
+* `pods`
+* `requests.cpu`
+* `requests.memory`
+
+## 请求/约束
+
+分配计算资源时,每个容器可以为CPU或内存指定请求和约束。
+也可以设置两者中的任何一个。
+
+如果配额中指定了 `requests.cpu` 或 `requests.memory` 的值,那么它要求每个进来的容器针对这些资源有明确的请求。 如果配额中指定了 `limits.cpu` 或 `limits.memory`的值,那么它要求每个进来的容器针对这些资源指定明确的约束。
+
+## 查看和设置配额
+
+Kubectl 支持创建、更新和查看配额:
+
+```shell
+$ kubectl create namespace myspace
+
+$ cat <<EOF > compute-resources.yaml
+apiVersion: v1
+kind: ResourceQuota
+metadata:
+  name: compute-resources
+spec:
+  hard:
+    pods: "4"
+    requests.cpu: "1"
+    requests.memory: 1Gi
+    limits.cpu: "2"
+    limits.memory: 2Gi
+EOF
+$ kubectl create -f ./compute-resources.yaml --namespace=myspace
+
+$ cat <<EOF > object-counts.yaml
+apiVersion: v1
+kind: ResourceQuota
+metadata:
+  name: object-counts
+spec:
+  hard:
+    configmaps: "10"
+    persistentvolumeclaims: "4"
+    replicationcontrollers: "20"
+    secrets: "10"
+    services: "10"
+    services.loadbalancers: "2"
+EOF
+$ kubectl create -f ./object-counts.yaml --namespace=myspace
+
+$ kubectl get quota --namespace=myspace
+NAME AGE
+compute-resources 30s
+object-counts 32s
+
+$ kubectl describe quota compute-resources --namespace=myspace
+Name: compute-resources
+Namespace: myspace
+Resource Used Hard
+-------- ---- ----
+limits.cpu 0 2
+limits.memory 0 2Gi
+pods 0 4
+requests.cpu 0 1
+requests.memory 0 1Gi
+
+$ kubectl describe quota object-counts --namespace=myspace
+Name: object-counts
+Namespace: myspace
+Resource Used Hard
+-------- ---- ----
+configmaps 0 10
+persistentvolumeclaims 0 4
+replicationcontrollers 0 20
+secrets 1 10
+services 0 10
+services.loadbalancers 0 2
+```
+
+## 配额和集群容量
+
+配额对象是独立于集群容量的。它们通过绝对的单位来表示。 所以,为集群添加节点, *不会*
+自动赋予每个namespace消耗更多资源的能力。
+
+有时可能需要更复杂的策略,比如:
+
+ - 在几个团队中按比例划分总的集群资源。
+ - 允许每个租户根据需要增加资源使用量,但要有足够的限制以防止意外资源耗尽。
+ - 在namespace中添加节点、提高配额的额外需求。
+
+这些策略可以基于 ResourceQuota,通过编写一个检测配额使用,并根据其他信号调整各namespace下的配额硬性限制的 "控制器" 来实现。
+
+注意:资源配额对集群资源总体进行划分,但它对节点没有限制:来自多个namespace的Pod可能在同一节点上运行。
+
+## 示例
+
+查看 [如何使用资源配额的详细示例](/docs/tasks/administer-cluster/quota-api-object/)。
+
+## 更多信息
+
+查看 [资源配额设计文档](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) 了解更多信息。

From 69db088a3fd1c1aee76482662a2bdb880ef7e785 Mon Sep 17 00:00:00 2001
From: Kaitlyn Barnard
Date: Tue, 26 Sep 2017 10:15:28 -0700
Subject: [PATCH 79/87] Updates to case study landing page (#5629)

---
 case-studies/index.html                | 102 +++++++++----------------
 images/case_studies/Video-Clip-Box.png | Bin 0 -> 134595 bytes
 images/case_studies/box-small.png      | Bin 0 -> 8519 bytes
 images/case_studies/golfnow_logo.png   | Bin 0 -> 8858 bytes
 images/case_studies/peardeck_logo.png  | Bin 0 -> 9260 bytes
 images/case_studies/wink.png           | Bin 0 -> 5623 bytes
 6 files changed, 35 insertions(+), 67 deletions(-)
 create mode 100644 images/case_studies/Video-Clip-Box.png
 create mode 100644 images/case_studies/box-small.png
 create mode 100644 images/case_studies/golfnow_logo.png
 create mode 100644 images/case_studies/peardeck_logo.png
 create mode 100644 images/case_studies/wink.png

diff --git a/case-studies/index.html b/case-studies/index.html
index be88512679a62..72d97413c04a4 100644
--- a/case-studies/index.html
+++ b/case-studies/index.html
@@ -38,36 +38,6 @@
    A collection of users running Kubernetes in production.
    Read about Ancestry.com

    -
    - GolfNow -

    "If you haven’t come from the Kubernetes world and I tell you this is what I’ve been doing, you wouldn’t believe me."

    - - Read about GolfNow -
    -
    - Pearson -

    "We chose Kubernetes because of its flexibility, ease of management and the way it improves our engineers' productivity."

    - - Read about Pearson -
    -
    - Wikimedia -

    "With Kubernetes, we're simplifying our environment and making it easier for developers to build the tools that make wikis run better."

    - - Read about Wikimedia -
    -
    - eBay -

    Inside eBay's shift to Kubernetes and containers atop OpenStack

    - - Read about eBay -
    -
    - box -

    "Kubernetes has the opportunity to be the new cloud platform. Because it’s a never-before-seen level of automation and intelligence surrounding infrastructure."

    - - Read about Box -
    @@ -75,18 +45,13 @@
    A collection of users running Kubernetes in production.
    - - - - -
    - -

    SAP's OpenStack, running on Kubernetes in production

    - + +

    "Kubernetes has the opportunity to be the new cloud platform. The amount of innovation that's going to come from being able to standardize on Kubernetes as a platform is incredibly exciting - more exciting than anything I've seen in the last 10 years of working on the cloud."

    +
    -
    +
    @@ -94,39 +59,42 @@

    SAP's OpenStack, running on Kubernetes in production

    Kubernetes Users

    - New York Times - OpenAI - Goldman Sachs - SAP - Samsung SDS - WePay - SoundCloud - UK Home Office - Concur - Amadeus - Ancestry.com - CCP Games - LivePerson - monzo - Box - Pokemon GO - Yahoo! Japan - Philips - buffer - Comcast - Wikimedia - Pearson - zulily - Ebay - JD.COM - Tell your story + Amadeus + Ancestry.com + box + Buffer + CCP Games + Comcast + Concur + Ebay + Goldman Sachs + GolfNow + JD.COM + LivePerson + monzo + New York Times + OpenAI + peardeck + Pearson + Philips + Pokemon GO + Samsung SDS + SAP + SoundCloud + UK Home Office + WePay + Wink + Wikimedia + Yahoo! Japan + zulily + Tell your story
    - - + +
diff --git a/images/case_studies/Video-Clip-Box.png b/images/case_studies/Video-Clip-Box.png
new file mode 100644
index 0000000000000000000000000000000000000000..4c61e7440fc48cdc122d5a014d67f106be58f28c
GIT binary patch
literal 134595
[base85-encoded binary payload for Video-Clip-Box.png omitted]
z=UojYZ^)0R^&6DeqsXAHkMQ>v7eV%e6~Y+1U8`DfxF>YYcl2mhhRsydU4Z zb2qwrmSS|}I+TkQfj)C-#L`YVLXLHEVjlPXbf>DpuW7>v-hD0ldj_z4=`wu$!|x_% za=R)^k>lkohQ-EwRxV3z#v{kZR8x3uA6|dUHq6bq= zBAw0R)Y%CPQ1r|7d0S@;rsfT60G#w-n&!dtssV13Os?zAYc%Z@KiC-^^pCT9Y~0uM zfi77o_F6}+Po8n9D@u4?dtx~3x}JTh2s1UW9W_5>>j7cacd>O1U6MkXnhuH7180QX zb5XEm)2B~r-uE7axFN~in63}G0VzFmJCSMS2FMR5L&~W*nnP4`$U+M|%FnjWP}=Cc zHqRbH|NK4#?KvS~l=&rJv`dqS@QjL?z}(1IoLssN=c*80l5x78lbtJ2UvVV@$wA5* zHHAwv_0wWZoVu5?FBDM7bI@KdhQS~#7RNvehf9jKqIU0y#n3^4M=+=v$9X=E@2gCj z+nShB4Wddf+0x%0kD=k>VrFi?DdFS7lpKWZ3zvDkdENjdpPxs@%OjRm+ACGtlpwas zgU4(~U3Cu~tr#iy&{BLChF2KSS(@o8GPQCrh4jEWOtrSiv*Y=kK&@I)PttH2bO`$7 zq|t42RuU8vMT>{gf|^yy^dM{2MQN7NL}qUIx)!c_0Gdm{^^gdz+hA|dKRbphRWRQT zZ#o`EY$(6Evf)^BzAgidWhU;B@(zI%_nqHH z%aW~F(z~1-X@+h;8Mez-Va=+Qc-8h%^mRqC`@nfJk{KzZ@em|4)1>BO5AkqAo8U;mi8s2M8#kK^NV+2?Bp4Vu31k-9BSA>DiO!S zkM6;3w{F7bHLLMQpScyUyJ;ml+ml$evPa(c!zZ3XiHy=hCWGBE_tHl1pvyf&*+qu$ zOwCc15c++sqFKBH%9w>qz}ERgST?(hjCLAzRyX9#jO+UxJX%_KikxNHO=4l`6_^~} zg0!JHqxe{44lqk^D|If#3R=6d<5fh8b2>xuswZlI(9p!GUcbR>AA35F!s_c~6Pz`1 z-_leiI@bS5Q9#+#*@mMO*sR)KF#$N}*%?)sbDBjU^ba=BY%5(?M|@4_ zJG+@;#cI%d86K@<5Nga5GP9GVHBn$xEX$ue+Rf0q3)t!p*?}{oh2k6Ol~+{VmI)qH@fO*r-q-wmS}5n)V6&X?-9LB;Ki&C+{O*SB zoAFzpd>8iZe-1ys|EGSe${GXbC+qm^pMDL8j~}6^KZp;#XQMb0j?zB&*I&b36m6#x zQOcw$(kK;LYg)+6xivLk#iu^=_qgjvJ5eEH)zcoq@_yw=Cg=0`%P;-_PwyH-XGfnz z9rH|o&JbYMd`YB`EhX_MfA%dr@yyT2IoGiLsuki$7IIm9{u_7Vf#(mPt*a9}%IpRnu|ihWF&zza|L|yj&EUVY8w5i1a4ft9DDb^fQNtf zs4U<@zO1!MfJgU_Y4lSN+c$2&bXJwf7V-tn*`?4bw$@2u*<#kAna?T&*?cCCSTuo0 zp4m^IaRiB=h1YLdgIb}CyMMA1g+fNvt34yj2As=$l#P7wz7~|e=U2audw1=_;L4527HSw9KZj#vKv>wzE91<} zbC%CNo>{ebD6&5>F^joUm98x(j;&aBF+Meeqo>D}e1T>79C`e;ef5w~&I_#l;7RRB zY=yA&rrZ5l5D)r$p~SA+_}TEtIu*r6O_x1toW(H34wou5O494O2m**vUjfPx!*L3l z;_}QqrsBqMvY{!7ya8rc41Saq5b4-SikXs@nit_jzn6aIq5fb<6ORS}O--jB5B+%g z{Anzk+D*~#tTe%0Lk+bmho^u8Tq>;F5#;(dA+u}?7Bp$3W$FeyuSZ(1$LXi%dX{5i z*;W|gPSG;=Xhn=Bh^^wAQx4IlG^R@DSup|<$v1bI*&@~?dotvAX4OHMvXpMJ9Weny zC~6$eNH_rIZ?;i&-D~xRKuy07jZ4lBIhR6jxeWqO*J)IA(8MAveR177U{0|TeJ+11 z^p+{K`9d}2n%)C!lt7&Zcd_S8`KE-O4BF4`MsjvSN|O#dlvP)JG;2Gxq!$4>7s&Q< zQ#qhvgQkc{g+*w}w8BbrSo9OD#i{NhDW66}q_F4>I&r0(s9}GOqjfE8X`(X?d6YI8 zPh0aFLbQMuvNOnJ7m!S}h@Qa0!ZfnkX>iN!EzHlNQp%uA z-yDq%FYCwVO)G?Sarf?nm`PJK(mH^)cKU8xg0_^2)vHsK0^7LnzFpu-(9)r0l7eP4 zv!W^grZ?`OXul1I4jsm$k3EUu;UTQuu$rzhis@NbkU-}rPob;b!R1>m99ze0uN)E#idYVjn3YI;E(UCUvc9VmQ2Qg8u;bbO@Ks-fGB8ngr z=4I1Zk_=$gz!GF>x4v}Z1Okx|db)a1C{~arqseHPa)I9$#qdBMGUXDEoIZniIEwz3 zPGo5}&eT*29jE91FQ0seh(Dd1%i!~0`yOrhDyE7y3K9tgls1LP0WklWQqE%)xg=#R zxtdLpZiv=I39URGP1nWi^zgZ}@$ z*JHzB(Qy)V{&}!&Mog%{<3>f+e}SGI8^&<8h#)ftkg=**F#)`oO;gcd_`nVfP`1Y? 
zu21~aJtDfMw21196D;n11OiQy=UN4|c`(tZJ~4gCSZJXpPkfFYgy{B&c2B5@I>VC# zR5M>(LbJsYRkUoZ;?DDC5ENC2A`uMHx%b5e5L$K<3azbrlt3$i(np+^NLi1$=~6xC z>g(svVeRQ%h!s!Epdk>{>>3aVzYHpJGvA4!+Pw;MtFFa#ERL)$m03-IBBp{ZFl`Eb z#^}UZY#MtSf#P{`Y;}zWbr$t@z2MlT``VI^p=H8FfTehy1(b|AaxMj-4`S-35Z(8X zq1du>lzAMEN{VLd8XDiVf)}n0N-$Ip1g$X-4s7F zu~j+H)%|DF)Ve;NGyvP}kY@GZIhio6-yqtI3|4fOuxX?py@^&j7ia?70JGWAx-vnr zAEqj_=G#+<_IJ}q+B%U?bTmPS*Ioc;W2-SWydC+z?p%N`!hxbWx6FnLn=e|g=RibT zLyJ*4u1baV?4#5Hn|mc@Q|(cFnnr>_r;3*591?HW)O51ZfXnes=G#)URqD&>v90BQ zIqC}v#kgAZys^{8qBQRVmKrGJw3DNzshJFRJ7 z3*fFF?xoF<#RM4zma#O+0mZedA{SU09mdWzm1@K6+ycJ;-S267h5%x1BSQBs>^qbR zoI10B1Bdn_lULB$f!=<(A7w7f(FVFhULae4d<9M*2$0R7#qE zhmRJp@1;X@pJGT*))b=y?W;$y+| zVc0&*mCh$m_i?sVkwk;D6@#8%mfo{_GOan+LU1E(I8(6kEAPAwPBx2&_8rHa`^ah1 zy0OXF@qAFBT%b&?CKfFmQiwtxw6u~fGG#wDzo4_{5W0ufpr?C?oOc>!M*JCVV45?_ za`i@MY7RO#TfG((3ZVcQwU7`TGQ^w*%c^D+c_$!yBPcRMT5gq$%9W!7;z;=G!M(=> zMdTra%qIGHUnbr#8LpxlB(tsY8Wn`LgLKRIOJZ3N@qeDdnOm8QyZo=F&A+$ zTh%=>x|Yr98LU3}G$O@w!kr}l2tYOkvUYwbj< zOvAzsiuuhJa_Bg76lQTo>;{^q!J8#RniXb^AeZ@dGg~0*h7!f{z)MmvQUb$_?n;G1 z=%f8RPQe!oW_iBYrgN`(&$q5r6GN1RhXss5Yc1fY*KPL|@xm7>KKPJsZxq9X-#d|t zjx3i5%1qJZ%nNgdz`k8ZV5;m5S~yM3(WpGc<_w1Inq~T<=&ICZ2~XOx!Z>1M&tc5k z#QcIGHK3xKKz_f&J1JL-%9xg>Hfg4ncbiyK8uG7;Sbv)IrF|lEtpw-z-qy=Z8Z4HU zX?i5|vfz0z$k#a51B$lBL-WCT0-O)P_vrVGc!tseDN8XcuNac5a$1pCOzVO0U>-R-aw?_loMdfEaY$VY zH{tmj-+ z+uO;p#e~wNUQLkGpnFi7!p!*qIy(nBp1}EYCo#7$gBCLOiFkyLSC;cnB>C8Ca-4TA zGZ6Cz;aD%Qw-#N4|Yr)c$y$D3ZQY%-87uV-y;swmv zgcDEb)L0F9aVF##Mw3XAMen23KN8i~rnE~ea&oL&5g=o3L=zH4JwQhD*2_l)z4YFj zuEf3*=g45x^jcI6Mbm7cPHY)ab;*&1)qty`bXvIf%Tbq<#p-q0U>0rM??3PqzDM`@ z#C#RW_FjQj3*3`3w_+iOh(wY>zwD?W@eskZP&A|()J9y&q>RR4VozSDqW&yX;B{YB zl>3YP{Jm=yp*K+Da#P2;`#xd2i0jD$MKKD$S~DUaRidD${&_4t?=~(g-c8T9j;=N zcD)gE-3W|-;J)$aNsP(LbvuZoi z4tQ3}Yfq>Gs;?63RYToiU0AS%Owv(V5eEsD=FqBh@U43!yza|DrU9=ROtCl>glz$9SmSuVYwOxqAvPvt-YjMo zar5XYokH;hQ&x$0H}#A}z}MrpbwuLH_j=J!z^n0jJtB=ocQ2q4b{!8O;mh1eRKY2; zX<2%r)oHL!x;m=Yskyh%J-w5Cy;Xo4`}gvGH8nW(SIIabI} z)|fyq)uEz9`Z*8^tJ)=_8EO@U#||j%b#9jPR5+ivD(9)sXJcP3D!K+C5_I6#Jh|M* zWw}ULt+=XFk$zK&3=Eg!ME~4Y$P}L7XJtjsn&cMO>l*2a*FTs1*w~+nPL^{NtQnk+_4%T{ckRt!vilK#T%|zgTc-g+_ZT$?%#dTA7E4@ zc$P*F*&2$f?wP9z@}%C)wQAS!k%6|Wrl5|d3$iX_lr3?|!LAiiE2mK@%+d8%;c?)( z8t&=(B5$g`%+Irz9Djl`Jnkb2^i?#mZLNXM2EXq24qc})DXzSL+nCtaT~yBWWF2rh z#8670H*Q#tp)OStx_{3REEJTWhs>5J8c4NAjfN+ue!|97^%~?_)!r;K3$W!171H4R1U8KCM|fcj6|165ORa%sl56L+LWVDw z9DqHz6|=E6EHL`sRt~4q)N4>pa|*0Vc?_L8h@PoK^!GVbWIl@-o9q6}oc~|LD-0X50Ck}2=nOECh+}~4YiqRy844nE#iI1jxbRoC zYvSBFh=|htiv&Yzefzx|r%6)AXfO(BPQ!~h4;N)CzV?6?u%cJ!*$npA$h~xy$D^4GKz~kWy6iDEFa;M=OH*s-VAoj{b%tL0V{h zKqf2{!{<+Adz31wcLJ?~Xf{!!Sboz;=_q(01V4NBn{k#Hdeh4|yfJO9Hs8S%MO#!e9ez>#?dph$46^EDnXQ!(tAx zz#Jye9T0J>AU)X5&R!(K388!752;j@ZaiOuWCxalb%ttW0L+*)LD_hMv-fW{P$lWy*Pwab;08Q{;ei6qL--TFNQ_$wi$%!=Is&LCk0hU7bsmOfsz(*#}za zNoF$Kny%;T&ysZ<@aIgFo`^zF*^aH7w6-n>2PCs?jYaXE8x);RzFft9FOJc@4oS8E z%{yf@OPp#LvMngbs(=z;hV}f3>MwB=?F8zQ<<+XRhMAh5YvG2gg{(}i_hS0P$g0{< zmuy_lmp_}Wr?5slQc7im$kZ&}Shaeu?ThLOj*TMv^Rafh zHx+tshW9$%m%iK-UM|hh-%!>;8K6>YY!FF$Km&ff#bO-!mro$?p5zv{9$Zxu_-K+{ zVjI*-#yn7%#w0n!s@X@jL1)>9dl4JDV>E=V`TQeX(FoH%?@kHI&-Jv~4KEYZSqvLn zlS^trR60y4Sg@WtBNjV7yC*Wnu3yz*#Mvxmk8{0Y1mY11f*OdfW*%l}`0#~;il)~c zfzk5fBl9NDVENB$R~oo|#OaxpR9t z=lq`MdEayHoiG6u-2d+X=Cc^w{A*kE*;@pE2B~hBflS&khFa#%Fv(tI=7&8-2KHBqleWt)Q5mz6}Qh z?v4}++Lo$K2tDPzz=3&j8%;E6FZqxpE7q%GOI#+|MT23KvQ?>I1;$A?Z!}5&76~Jt z9n*3hmQN((|1n0NEoymI8Ki$>9*W4t&`B&5t0KGY=$WTNei(?yW0F*rlUqePsZ_EN zQ@c9R*O$gfKaXsZ$Ylo6-Mt&lEuDzcpS?X7Col1hx2rz7MkRRaT39=a9utcvB*C*Z zglD75U5pWT(bPr9An8cQX1aCbPs1x6RLkqw@FmC%EeGT8GK?Dg**0TV&G+ZV;|EW#rRc= 
zThsdCF%|D}H~?}$WK%K*eoVxSKhWu(b3=P$hl~WrvdZ1l(k_y$hQK@xuYUFz=$>?r zCjn$=eL5il+O`h7hK!Oz)Dc^BO@8pSB}hh9XLga)_RF4utK=kIPeFrS5CeE6JV{ZNJ=3fS zco%X{^Yu46f{-_k5#Ou{YdA2{oX`KS3qK=+;A-4HvJ%qH0BeP?&dG zra;jh?*n+;Ic;5QXoMFh=YWons^3wBAfv8p@Cna!4fI4xNKT6&+8EG0HuoSVQtoS7 z>(tU;opQ2a`cQ9$s#S$+S*tL=S`Soj)da;?^+#^6zBhbm2YbNR%vo$Xj@||75t&qG zl8kE_S}V)a-oXLQg16LC-L2uIeVz$RUt0=M;E}nY#rxFFZtI)O)s?!fDK;*BNxRNK zrshUwXz2z~FXm2#0;8)FMv#&t3nubNClyO#cyPb8pO(N*^##KfDRoH&JFDjEqMblN zm$Gz*k^wKjl#vhwgI+2qOn;C9bu5$;9t%HKZ{g&1ggsBU=awW1bXR;ue8F?XGeypZBC%2c?Jskv;={%(f#sX zyc=RA0O>|3fl7D7Dxp;cR#1d)S7~*bAq8Zh&lh#W5){k9q9p7hd)$XAXzQX+lXT>@ zM76ih=S6jwdLpi2-2NR`pR{-cVz|COm#-denl+e!f?gzHao*{nmxi;7=Z)m@pI35!5u zG(jC`1V_*6z%g@_HrsY89G=?RF9^yIC6r(=Dl>QXZ(%k*@8o0@<*GtKt*so95$EF( zrpI&F>v+oDGWJl;ou(OAx^BgsbmrojA<4<`m-C+72geFQHdWP5C7?&Vh7X=m1*d6o zF{pjtCE;7L6XP9+Kxa)QZ7PE>Q=yvF^HsN6)iSoq-pL&=kT}6Yn!l5!J6&iCT+1+E zO;8nh4(X;-oD-Ng+Ayp-HoW04RD8&wCw*?H z1S2&s>)E*Q=iK~U?o{OW9NdCIJ(H{o(uy^-44LJIBxzWIvdjR3;(3xrI2Mr$n7b@~ zjc_O=7kP;~91jJ-?^&dz!Q(6L8WNiN$DG-uwr2422~8%GD6>>YrhpVBEk@7Dh^wsjLkw$8 zkBwk*=M==EZIqPQSG*0Gkr6aEx5@D?Qg_9UlA;1Y9Z)nHmt?~J8N4JHUO*s%1b0T1 z%=YbCFGR~p?d{YBQAHUw(MpNLid7|%h}Vp1XxVC2NO)fEHI{RYB;|Qt)K}!Dk192k zT!(3DZiFNAZL@AH>}sMFSE;U{94~+HyH)t^2i_&TlaHOV96S1l&@-6!7p3*KIj(R1 zW7ioqxxUOg7dJeHVyJjD5|__XrJ{n>pt&RSI)r*1Czg5S<&A;dt)}^At~quBRJUZ57cywPaQA z$lXRT5)f(gVlFE#hPF{#c)JdtGnRBU5oCZ!FrxOC9ubeP51wI|%8bo}0&u}2$zMNK z)&O#(0Hc?_9ga>xxbKc4@qK zE|)VRl~S(c#oDx7E(oifm&6>XIpFZ`D^-uw z$BCpYf{hjMD9^T)ISv24I5BedPT!NiM_6dKHYJHMNov7RR6J35%vE6daCU^Fpv=LC zztiaG0KI?4N4R;3w?vh#Da%L|w8bLTFlEHW%R4?+LbjyinNAPMoSY$nB7;JSkoOi| zRl&)QZw~$}{F;2+!%_Zk2E!wLnA9;Djmb%9oiqdCRI9)~LqX=S4AEoqiZI9gg>Z&C~>yJrn{?R*iLN;euhsk(}D1@6e4-QSNHO6A$r zRj;QK=lZq;^|l4B535vT6BTR%hU#oZOG&+ur!8KyG0<#d+x4WhZS)SOaov4S_(aCf z|J50k)Pq{0r=Do0wavx+LR~8&`{StY#Aks+*%DY3NSPm|U7L@>_B_=dnL>s-CI@y}KsL~Ru4(g3ux!*TKTMt$%qT--9(cuj&nIkD17I*Xsa1*!Jr+ z1+Qz0>%RKWhEZo<_1Z`4k9#v7(GCsu$VFy)o+E~(s|&+7(2)_2pg}D8JulK3m(K_w zMuJ%$S;n;VXvh_m16Ld}OE*>9@7RWh6MATr18soS0SssY55pw6m1n~%uc~6$tON=@P?9-=dp$aLY_HuHR%RrOu$_ax7~~o zi3P(l?qcCc5$j{Nt*CD1smdQ!EhPgM7!ARm5Y+9In41xA5IzU*M(lD~x+Ry#X7ruI z0i_b0&y0yY3*rf|Hd=NpjmcdtvbSR2kqm2Y*?DwOqHAht7IIKFH-=mxL$BeYpUH6% zN>bqzeeVY9yoO~r60U}&vx&yzqPG1JC5Sa8;{qz!xpgyo`?jM*ug80y$&=bq zEiv3NhtAn8h&0D!)Me;Bnd>bxIaSknWmt2G0;$MG(0O8_q%33Qg0l7QL@fhN%hd9m zy5+a3G78M*b7C^Y2s}}vWr%V-BHS2X*G=KXbp;JPyCcl4#}9t2Va8n-b6y07ZN@3NrX+`P;gqA#!y`k!CM{Q^x4f~ohQphp zZd*kn*?_jD7Nv7kq0To?oora~Llm~a8^x+x)0a3gYN$t7Bf>;jF<>fGc|Z{auR47` zIvbUEna*-rm4_Ce#Z^M+%!t7a_|TbutGehlOF zlgS&CzONm@z(~(lr4(#gzKf#aVXLLu;EOCO{trB}Wi{LoRr=fku*?V}M!dxqFV*iT+DsqPsHz~o*aXD{OS%eXn1`9DCWOR*jJp_#73CcW zuR2)bgnn^V#l97}oH8g1Q9xy3N=4XGoeICcEE{bl8zGgCz>V2mP%Z5F_4)qq8|Xta zSy55}^ck|05ZYQhVHY!~6ozHg!*g2R@o-h?DF;NNG5OwXpKCj6EBt~YFKq!^xb*~hGnccf6i^_8J{_S1-^afL#k8Kg@~>FGSm)^=ld0$cgnHY+c%0AHg6S8HC9LvfQPI0Q{_^I z>XNeT0z9?q0|SW^AoM<0o^u*bIrdnLq_c487(e>LRy_Rli#|e280m&8#sq;Rz!Amc za(rgE1}tc0ZST6fYQ=oa?^06zoA#%1+v=SYJLq{n->yP4fjSzxWdv-(=sg29Z9NZY z!kw;VX&Dv6^~Fh*n}ne!18e}7koPL&70S?eOHqQdYJ_Mg5kOO`{Tj9^7+o=?GRKX% zEm0Z|C_g0WmJIxF{Bi#`eHO>5JKz61kJm&$fpV@#0-}%FaBDY{q1o$Pfg2X%sbdnn zM7=cGjJ=$Gkf}TvNsd_%fu3JW- zoI|1#lUX~XpqO2r%VuFwRWIl-yCPhWCaMBCfjfBAk8pDbnR7 zQ_UD0$l}NUbswHxyAewc?ZQW|dN*#k;TAmFKPcZNlVw=0BM7shp1suQq>xB73iqL^ zCOv4TN}4ht9((Y19JXXWZu-p)_?u7tBX(@ti!n+nc}lWNmmh`ZS<{iu<<#!UsNX#( zlu$N3B05=reaVF=w#kl{u8Mx`*m4c$<0GZ8~15anbe{F%N>8f zmoGUR^Sat`>>)Go$#bvGMP;fkHT zzqvCsc`00>&y_|dJtSi{e!rNIQ)_D2hO0=N?a35YELn&x8#m&~7dK$W)JeGLloK&D zIDp62zAU@AWFjer1YZLHXF!<0;Y8FYl?6B#(Q#qKpQFb_0yYt4^8h}6#$h;ox)Pz> 
zJ($6d9@yY>lX!l9u$ba7Qp;8QLcTWzj5YABsVZx@9g)v>P#n9H~@zCz7IWCn~5V)|=vehnQqhHN=95 zfnZEbI!tV z@A^+X_4HcYzw#a|TrdYW{rKznhp+xCzVVNrR5vjb0INBTa!J!`M$p*NMggoq-BeB) zYtet>)GcMhLG<_R7KS{JikL~C9~)I%2!WGU@aa!pfvZ1s5x()S|Bi2THjj2Ge6`H*K}!s`4^9Cb~J;wT`079wZwM!IZ9Mx!DH?`_a_VfwSL!E&?5`=pPyqX21Ae7>=*g##sF#eUF*c zwfy}R7vkyXU&Ie7u@)#%pS5%eKKa2*v8{Iy4?Xz|B4XauMFBN|9ecYmmd%JFv8xe6 zd?_#=W;9pA*M9N~Z0qioqJ#^DL@Ff-qcxep3CoX`=RWbw^B5b+V%n4rk=ftVKOzw3 zpFR3KK7RIz=%ViC#09hP<7+O*8mi=o#H5q*ioL!dbdxG1k1i9)Eaj6(aGdlHk^8Ax3)5K4cDZ zR9IdAhZpF525`u({nVu$LD!8XAdK@*I{}BzUm(fv_WK{h#$9`H*@fp})0VAR_v{P! z=oRnAw*CEh^yxK7#^W;1;T_Pm?>H8Vw1Z?{x`6NByB>Kez8E;+`Fu}^s_`_NB#c{B zgx>sN2X4R}f8fA!J2m(Vk8Z$c}+4MXub31_lLMEBp(m z=DMgHJq=&^lCkeqG2(GG|Dj01Y*cPKWq4p{m0%A&V~F5urQ*u`o)uyFqUH*XI}d(s z4#e!=ZVy(pWVRvWBvWJWZAY(7j$VRk(>id=FK(w!A&D=2?i%dx z>%sTF^$lc)dJv@U!J%%cl*!@nqmGb7@(*AAKECnguT!ZyRnEcgT|@ZhH@}E!lN&LX z8Nk3m8fTw%0xrAaOeB*r{QH0W5_kXV9?@^V=F?Z&;{n`y+k>c-Fg?g2S*T! zgk-M)fv@rmp3ZYDj3~WEBZCg;`p8gMw16u8t1o>!GDH0cB|5Nd{&alhhyR6>4xNqZ z)9Lp=`4w6t7A`sCc-;EvleqMxW6;^sjG67tl2}ezG!uXO{r{riY2)*kUw}P(doW{0 z7Z%N$g1eqshXtK&_#Bb`jQW*^B2(T^UxB3yXXBQ;S7OD{3vl^6PQw@e^=C-X zIXHdEEX-|=W67)z-22#50^{Zx?c|meC6W^`xj89svuj`s*ZqN=BkgfSG1qktI-2nu zP6fhGiH^k?tJKYV3c{W~n35^7YqN60)sCp1thrHVrbUT~3dU-rilU&lDm7Vi*YFEJ z$5n_tL30VGOQXnGX5Cb}Sqk-M3H(^bfR-_VZ^&PM>LJPFK(YUa4v~2q6@-v9yZ#7H}7kEyx17xuIEnII1Fxk(cGVT*Op? zCQ=Ow*~K$@5wr~Y#$1MY+ncgvr zB$AU?tiW^6KacayJsXEDUWEJa`z=25iK{U@(1&Y3@m@UeyZ^@W6OO>yXPk(8@A?_O z@b@1_GUecNpZXxa^X;$D_ZSpiAJ*D}lgIh*JPZ4WGWgMTH=<$EJTx{pBbIES;~K)> z{w;MXtJYI8jZjz7h|^C$5t}w{$E1z`{`t$FkdEYv3(vs!|M^pRc;(}`|Mw5zOP~J; zF1hd&?C;r(>#zS8N%%khk6+-B`SZ}y)+A^Bh97+sZBtwD58wC-_U+h$Q;%6Hof6yR zI=;QKA)RNC678&xc5L6%i?Nc6cruA73p!Ggd~E$Icy7Z+oN?TdIQh84aNTWp;j`bn z9>MY`uD;+b=}7xWhUxs%d7(gEpbEb=Y~l;=Js;os=%x71H6O(JXPqIOoJ&q$PLCbJ zC%=0gzW#6jfjN`gvEqcI!EQ=%OG%_C#a%WSvvJqMkNZT(FeUO9N`mh@?RcDZ^fC;k z^Z3jUZomz9+$q0j{L}JfhawP3;5W}~#qXcnh=#C%6-UmOj%a7^0IvPo*QhIb06=Jo zRkXB?>+yP|K&!c^iUfS^$E zfs@URQe~b%A`z4KE--3^_Z@agF7Z^H{*3~2xm;F!5A7q*K_VPaN!u=@G#tvOe9lmvO?5{$RtD_{LKrcx#R{cnB*quDDlGCYPm?)V)(`{~cgXIpmEGTdCGNVI_4CtMIiieFoS5-9O;! 
zkAD=QXdB9fJe-75f1NR{3p=;(!q{LxnmdwMbm$y>@|q7(*E30Y06+iLqm+=lB4>qPu$w9hWdZe$79j zd+Upog!A~$cmD~8o_G}H%otiaTIu=AuyWOE>>tTU2ghhR52EO5-Ush(1CFCT>jNU^ z640(}7LQ^mNgsa53_P>;Wjy@yHX&c#|L9}5;GB2i1gh4LT)G%7l+Yi0;T4Sb>=A&( z(`#Qu&)_JM@dj+6MB35TfazV6@Z-DhLm||F-D73E^72cVKVvc-mjMATSQf{2!!al z`-jsM-1lR8a})iy7bUvb7zxMlc4j!U0-m8snGMNCESxzF_dop-uDjzt0iG{kV-ZP;DJZ9>0Cyn$7ggjVZ-*_cya3vcJ`eybj?>c3YV4cgFL7KR=> zREQNTd7s2=YN>WhL^Rd@Q@J7rv|@%MgiUWZBl$;L)w(eLk~t2NO#aWcB5y@9QQG-I z6tn$wp_UP8ScJ|Fb}cMpV5A!y^TR<)tX{L3ej(qnN^c|(m1?G{neE<%=7Am^%~=Rq zo+q-hUuE${_Av>EgNh|j7i_8Ih!zzuaCuS7;)Se&%ZQ^yL?M~P3!Y)ag)(?A%X}DN z>>G-itDvW9i713f3Q(~`1JZqcc>m=e!;L?^2g@ip9e?a$_`6SDfrSgFzql=N8!LWE3m4wi#m+yv>lu?U*XyUFir-Rmd^JS7e4|;x#VY~(q z_b)&H*MI&b5OX5gN|ilVAw|TSr(){V1%i&+y6tiFY`!1)k=;}czKC7B)+3pSqOqkF zkysKT*6i`52wcNAW0ZlH>-d^Y6|GeyD-zkEl#pYocpi1}GEo2|wxD2a;R)nrEWxrT z!jNZzl%a@P#cT;d9ULnB!{|H5W`H<*noMn=OY-6QvzI$77CmZ`s!ya zKLvMD_wcqW{u;md)t$KXwBs#8?7K=wIF=nu3$8MapVmUh7n{fQ03$bASe5~8B1%1P#m^*tWj#xMcD~?-= zMbo>4u^yz)7!QYKRI~EA7qNF>2z~TAa;)h)eC6^BaNAe^4ohZE5$&y2&%G?{@)M6f z3d@gNhIgNR62z2&uE8i1OBoWI9mV_KaUR}t_Bkl$aze6UF3hC14zwh~IRB(&_@@uN z1K+#yLR?54-fvf}#meVj!5F>or!G1NU;OZe2vIk9%RP_d(zl(8NewC7ONn}NV;pZ^ zc8H%yw(K9l*X~|})!T;^LXzH})f+@=#86OjNdV3poOOAUBdoh6I&_AwQs7H%X!x|p zX0O4&jT*1UE9?a6WeJyzccPXiGkTg^zKyna2vjM;(M6f!`FJ23FR=*m(p8T${*q7r zfAoQ7F*PDDP#9mxjdhPTmU3iO;3vSYoXlVt`v z=+Df1ND98csxK~Hgj=Xd*VlB1ZDEpg*P(6>HIgcovJ|sQdVIzF6;noGrD8@1gTfl8 zpvG=zZ06!6a25Q}M8J zvf~d;pTD`84X#@7z{=J5^fe#Gz~GpS7BBg$cj4cz|2;Nrsv=!1;RiqX44!&s4KDtx zQxWA(ZzPW+=gh_hXPtsOA9x5$C_z5EV=s2?+lx!iK9dqoJFdU=4p9yWaWDx+v0-l? zesRa|aOH*P;iP%gg%%1}FmDF#TJ|DM@wM1E_5b9~2k_x{pNDT>^*&(^-u2K!c4gDdM$IEmGp_hIM27?2sn-&}H@IM_Au9<5r%Rp*|D z+aG!c_doI^C8a~@8n*D0U)_$EH*Q2TU0=J?Svr=~b!DnZQnk|F(x^H|x`tx0lyo&W z-@lr=D@#yoL-hH7y>b`U?dpXQYsP=y^B9g^vXE*B6Tf(11IDQPYNq7VWL5FWi_i1# ztaQGF+n(NnwYx@?4jK0#o**Xg7!0Mt>$)|IUE>W$UQZs^(siO)8;n3?QcbTL8Lsq;GVgj{Ry%p*Ht%x?=rS9axmH&;GUfzzzrX;3KYsE)C@(FBNzYagY^_RHxl8eyV)+&%p;a1S+Po+}m z?H|=L8xX92KUK@wY+n96I+nriz1_m<_m-Gy+1Y?EeBoP^xT?79@=I{RMW;*GG%z%T ztFHbcetz4NIP0X7@Z8!R^x8lJC7RK62@gK98L{v*+;;nFq#BdB`~w#VoByU;9>l|| zc2IJhhp&D67X0;9=ium*mf+q;p1@dM*~k9q=fA+a&NvyzEj|?K!8C5Y<38$YN(j+8 zXyoJRDnAVo$>arhV=$6YDoo=aNJZb!p$a`IlI6#1a{e ztl6{`d#Ni7CmIAS!H{ka)_eMTaKkOPVzB7Qd9^*x0u=;%b6I@*=G$=CjOiGn-@nqc zAK`{p6x=dy`1SA6)!vSw(P2eOr2yYu8pg&w`$U?b{WO04$Ws*5l#0U#zw;AZvf@~r zv2+o29YM6E;`rG4rwK{(`R)7h_)8n`2z7m_=4L#o9U60)niJ)8`=dK|6>97jlA=p z_ubuBovF7biHH(fRU!Bc%lxy$jE#81%rSb8q<0)0Z~1-T10A(+4be>>jyGY-lzA8) z8pfXO>*yaVh$Ol&Y0@;ow)YP7OGnV$+J&~3L+J14AR2EF%`M&s*dn9hDa|^f+rt%V zF|Ubzl!)v~MJn;(kzNWQ`%uc~(U5GR%Fx8n&^{$cK^JOvYzR#)b11RRLz#k0f8P#4 z`Y6(3P6&a?=E-zphmg(mpg_^Mp=k=@3}X&cm0lR4;8aB`Jtj$2?C5Y0cJJAQ=~LQp z;RWZ=_v}Khkj0(1--V|heG-Yr7Gy`Z<85c3hNF%;4lD2f173dl6`XMDnRxNV%}6Dc z4%ooJFxp$$)Tn^HyS55^xv9BT0@U#E2!g=`f?|)$H*OH&XiNx~9g|uyXYM=-z|4qG zV|RDAj9OBW1g6bgfcDM>BK>dD>+&8coz4reAR05!L0t{I@$K&)rs_EiCrI5^n}tLh zlN(+6?jqS4k4dM-QXtF;5@9{2P*b8l?Z~BYC@??z>{`U*2@(BNlIrTBFBgi6d*CQ@ zCf;L-HLP|sEENlaDq>sUxX9aEYCOW^Jw9h_a>mY+(YSI!;E|)FWZR)x^UPWoJ+MHW zf{B3-D8>Hu>%4DEntdYlhvHQIVqm1k+KRg^uo z=9(j*StsPDx&T)0E>-f#We4c{D_%1${j^x%OSteN9E$iMZeH|3E@9CYb!Y zfA)7a5utlyZ?J&mf(W7wJid3{#Fx-KASUgGmE4hQJ9Pz#`{zh38Ka?X@ff{$(qp9S z;lr!iT{fXgN^l6GY+K@+K16|@^`d0=U`5f<)`?t2kpvUWA!wXL9m!1U4w}Vk60$Tc zNl@_Hcx`3MB74PrR>o%>sCcBsok<~=rs{r=jJcA@HpGG!1^hh}gq4Uj!-HGsJvzE( zQDE)C{+?}gLq~)x$Hc)>v4V!SX~Jqw5ACLaokvsabflUli-4S%$hK9%$l)S zd`kl2Veg_S3-f>w=SD{RF-YfxM^J%C7?b99pk;c4lB5?6GVfAyYF;v|v?P8rhLGDD z=s4xdRSL`=pGOOAYFP~1(4G&Tz@x3$>N36%$(@iOl6Wz$X1f4n;j()*T0?%R@%)6fiX>9Uy| 
z7)sy_N|Yxo=)#=#EVqBXMUFoUALLBei4LK@=F1kG=xGi0a37^Cri^f z+CHKpU>C>^i0vMNR2=Qv##XT=8e1l*a|!pLHNWYJoz{srI@;017?*O~1C2O*sIz;p zj)Qql+;Lkqb>~{TG7p&4Oq9kv!O)36|AE4e?(6nFZ%s1sNRHKFS%6V((>ih55zDY@ z(-!n>+(hA{K|{e0pnJSEc_kAFiO=5LI(kY41m}1+f-QHFQX2 z$TL+YF*ec#8%wlMuxmhqI|-Kdu=PAu+o-~W0M~iIyV`p~WKP7V+EuFt-4io3l(mr@ zdXaKP`yCi+R260aab|$C7YQY^U_4k;Dh?=l>tH~6swg8Oo>T{9hm$V^M4PlkK{3fl zD)wcj8>S)ELe*UnEzONGDjN+9ij)g0Dc4MG3<(enF5vIZG9wl%28W{(toYArhHXe7 z6$w7(06?+tg>5cAEUNnB3n&y;v+kBAmN*}xE@ludZOurfDDh?T)Oqa^DVKP>MdV6? zRzyLq>3j1yf|vCppwHcvK+~j{$kDjke9B+Ox`tkeLPl633(sR0`u-#NiUcMmqRD48 z721utCcMA0^vs+)9S+pxsy5saa6ni*(#1Q*GPV7Hq1^=yk*!e%K>-Z|R~@OcKQ`oh z8HVaO9k&)BT|bFv%_9A83OrljhZs7 zVH_}KdiCx?xm3Pdw%qYj$pH?oY^m#?Z?}GcjXx@hj9bH)%Gw(hU5*In%TX?obZv)1 z3MI*?uAnF=DLpSMqQ)KjclRURpFu-Y+kvUz)o)49FEg2%Rh%75jl&I3+){xBL?c|Q zGW+h|2Ww*t_03IJIf2Ucr*-EoPqCjSkm>8qz2&_DYt6ChMr#vJS+opS zPn%5930N0YF1giWQ5Ck)u-;p}wVxmmq&D@y{OKIzn&Q6?M?=CZVW&QWs>lchPa)sY z;mPLN2fYDSFok5Q4atUP6wB&j3-SoV^s3=%-4f5$%RyKF9ao$cA;p=7b_L_Ly-}6U zhQ~sDf)eq7oLC-_u;e{UKQQ#y(q8ubo_qsv1;U?=*OUPt-N06pu(oD~%&srKoE4a*V3%nkiPj0U6(kmRD6<9`o^W z)$_&o#m3N0LUL#8G-1U7<{AxcOkRZ#23h%DA?%IdD9vcxm+UXtbm^-Bl zvpYM`)zXZP#wJS2air*+M_7NZZtsz!ATpX~Nt+ycN3+;HFp6FMBiJ`I>fbyD>dm-3 z&mZmP=eXK8{nhuGdWu;?N_n=tbz>jHt(h6Y(@yizGN$SVbfI;i%cCdr@4m(k=Cz>5 zo??WnEhLn0y-FzFJ*7GVQ_IjW$ux-meS^sKrV$mpShIdIH=z|Q10M`0gAcyz7%ZII zMMYG}ue@zf4!-JAvFC^qxhG2P9Z#O<#KHy|6TXwWww6^$2aAfmJMP<#Z(cmZItcEGO@KLU;m?6N#{CJ*^Osp z43Vtn3%FQz#ZGooLxQq#l7ep|JuV@fiXwLkEL(4>F&EELRV7+$a$VgTtGO?SHD%ZZ zH>}=Q2#!1tuLuVr7%&8yoTr<*RH{hO32NVsAbma#(!9VIa%D)_mU4w-iN}bm2$F`f zh~>Gy(kfCa$Ba4)MR?pbs!X9+vOLHdWKkK}aH6xsm%Uasit4ZcXcT3d&8K+D?>Gf@ z8=A_8*{Q;Jdy|uISB3Ofk*>m|;}%8#zyNhcjZ)1pG&9wZ!q~_#-8eb={1kL4!2}|4 zF?R}8SwVRWBk8>u8%<+qun}{Qn2Sg|B}6Fcd@qoABr@eaug)iT9E>*O<0Jmhw(_Kb z&u9+#-W;wr_fpAHof8>D=>3{yjt_duFv=0TVX+L*q*O^kuHp(gtlrGY#H}i$HBJh) zvRw6SVm(()MTDz~u$x{&^adcVq2&K&R z&FqH*R)s!iAk>T*GnNS0gH;%0O~L=Om9dw z#l~z5|4;XhVszgq;zpRSgt`vnHFhO*9mJdxj+~XG1W(0oXMDoP<_hc$q>4AQ?ql&~ z#$l?i4tUki>gEW=CI>0Ge_0tki_6ZQkMqx4EMwPvA&76^a4#-@$0BUlUBFL&vqsGB zQXvPQ`tYeZbZ#p`bbQxtJp^C;!S8YA(X+5{(INPoZ`_S}bCS6FuTRB~e)=fRJ$5FJ zKYBVPBL^d+4u1Hvr?9zq7*Q(9-%71Zo_sLj_zQ>5QHjK@T{D$R7DrLXbZJ);9&0@w zkL>S9?~dKb(0?r|K9&0OoBuEYlah&x@uo7S;nXD%6@a9I)>SS4Hzdw&uF}McHzL_c zRk0bRN@MvQ$&TM`74;NM1Uh zt7ygKumoAI{>2$EprrK;D=Jdy1_y#5U8&r4cr7v_NTk78z18)7$n*`bx*wid zd!ya}-E<{Nx)n-_>@gptw%H1r;`+y?T}_jwA{q`;hmuydvD7{f_U)lACMw2Pwjkez zQBLnhWX=+#rYGp8wI$G5hVr3?W>kB2xi*w1Jd@i@;SVU|G5L2}jf&*)LQYlRQMfr( zh1PK$OPPGx%0&;}4JpT=tQPA=J-pO&<#T=Sc0;>uD#>;|X5hqBh2K$_b?72TCuI}k zp{=s(GgT$-4b`B<5BXZKWhu|cG$#M$_w@X@YM$bT@6pYnIDF=?FcjI^MuKbmDtmp|;$*6~32L`5NW@6))kv zfGPUw7j3VNM<(lT* zk*WD8t~)Uoo=51mu=axjq*0%vqSY71Ctu#Hj%93lKQ1og5$#Ry>y8~fS?3)!GOy5c z3)wtQn482EZ(o9&?^}n}&%T8D3l`ITS;mxR3%mQcE2RQDU%)k29)YEE+Hvi-?tob> z;JcrF53YRsp|I>>w8hlD5H<3clA=O2kjIQx15d5qjoa>f1Yf=SEPU+JV{q*c9+Z7x z(67K8J&Hq}!{Hl;^~$wN#cO+WBc`QNzP`4G7VZ;eT}$gmc}IZ30)nD)6d}*@l8)=8qFMra?XWCcLtlinR5B5Q5*T$2>vaRi zL6ePPj#mKI109BE^9Yx19Mp(Vbw#4&ZAH|+!7n2Z-X+vWg#&v}-G6FZCw1O9nzR2< zVX%PSZM}#gA|qhic50lNH)OhG!gF!wz2wH-y?E)BU4jtf?l~MVaM*%*XlZJ|P$rK@ zp4o(fp;0t6Ds|*YfPIxzVOK3%XpR{;>DWaGgu@!2rlOo^uhDT?GQXX=kt%-nn^%x6 zxLCDe4W_r!H5iH@Qyj%`wunT+z@c+Gv2xXR-1SrrDR&5Ko?AnIHw!yrY5u6;ZmCL;7 zQk@HNj~u^JDKEz=l}MtcWs+F^7TL&& z5=(B(MDKnJMM~sz51oh3L#K!$D<@Ykk+|LxTkKm6)e#sqB#=V>s1D7;%6LJ^cC}!i zEbARX+wlz|PqXW0^?qV-d>PkoJk9u_R z;M9?O{UC}uQ!OpQ?z(Q>$hfvb=Y0{|sqj%`vpLx{8P=c7if|{u3h}+gFn+tLMzj9n z-Lsty0rj$_k5u)y|kfH8*`+gXC>W&jZ-B-Rg?6>~C z{8Uw!L0{Z5hVT7+9m0v2G8C)iar~lgeCXoS(LO1GSW_$Rx%U+bnzO`vP zf-mpD)RrVZ*4%`dk%%ZzxQYnrBKbE@()^CCk5sPA@8=*wOl@wd@@*Y6~ 
znyvT^?2F;Ju5X#kqF9VT;;LI@R}57J6$?edSsSKi@(8mi*n(!tKC>;Ol*4DrzM{mrhjk5h^=Pct!&o zEZub(K31O-6rT@;6y)=43Ri^3hT=2MoE4_9iLJ9PI9%w8-xc3+SGyicJmx)zRxy@f z=;*G*t?>ZdI#i3_5bA`l6l;|eA}13*-U{4dc)U9O3bvMK*M9CC==G@1Yufx`%Yk0@ zd(wy=)Lr|lYtkR;l8<))#c#f>ufDJ;&}?snJfR*H44|u3`axJ zU61(knhP7{bB{VysToWml%jhe7E+)ZdhY5?J+Nx#?jD?X)KNHg zUOTpp7IDkX_fVX(@vom*CeK~7u@|>Kuo*_BA15EZ0YClxa`{@nd|(^4?-@omm&RGk zXW+=gPeFPphwFc_mSTc~SSawCJexmmU*gCr;fR(-`SbFYCfpn{vE!B9_~6-RV^w*N zs99u!W$anK2ItH@9g#!?9rW0ahDHoKyR-+5`zAXR!}7sBrcNM+7Z?P2q5arB6tgFX zAQyxs?GcZUOpVlFvBzSusK*N4{ykUYctZy3P_^h3%jBj(!6l?B(vG*C!R>4 zM3r_VoTQ4KzVv9XI{BfvbQmVXa(Nle5Z~OGbRpb1FtR5`6-IJPUqKS$3?0>UK;Xxn7H)HTglVcuQO?$=fR z8nQfhHPgU#YDP~7j)A;A6&DYt+9w!By;p~!?UY^p_omOeL0w8R1fgb0SE|-MY8PSX zl39YI0pA|C{ z?N{xhKi2YvCf{JZGTZ~XA&|a5XeqFY=>`#M?85U~dSTwQ5+8g2={Rd~1DZZ|BCfyr zY3v=yBOZ+?5Dvp@Ol#Z*#hb_Q@#AaKj!egY{(2{#Soc%Rn>!WHzSxg+&c;pmZ^f@y zbpxfac)RQ#is4JwKaY9ywhJ9#?Zz<#;!(O5TJZTFK7qpy+m4<-7rXi@h$WNw-c8S; zrEM=2#8qtHJ%)TWfrjR&4|;Onl4MdvQ>qEm8=K_MeSCeF9bA3hIr#NUFQLDuU#3FG zEISf`qmIVTzCN_go+3!Ewx&jGp;#*(Q`Vd8Obm^3Q8Q|9G@|FQ`erlh%;!DHYSq1% z!l~engvU*M0atil9|#Dfl3&}BQIF$KIg~k-;+&MZI%DNmZS5+sT_$%?u;tcMxEXZO zG>Bzn3NXs3g-U3coCNm`FBJGGbVAPR)hsW(kIZN^Q7Yf`?Ws=2?`oT^Pt*6>_kU z5u3w1`moYRGF2jVt7?zK(h+RVB`{el2wM^+3kybN)CLOTrmjS#+hT-NQ0-hO^#}{x zuH=x<7FDpdTomdsx9~b=ShsZ?5JYm0WBMGxcJf>GK@i zaVRW*H{d4ef^3(qau+J$F#?uShY<@|GwMn~5qyzS)0 zxbe8@K)bMf~))TXFUKPeQij z;?DawQryYm?=C+Aqd6B(-@XbTx#(mpUN{|ly9aUk#qD_Fxvf$_yXHdd-#>y=Po5*n z)a7CZ$DDS8_$ei*Q@HG};%IJ&W6dl5_{Hzm%5+ACBd#*#`%~wfF2#;%O_KytIh-ED zjotm&zqcDNQ5r3?vIUzM27&&OQ4G`P92p(My_D!?Oln1Iq5(mQGv#bXIB;)1nONSk zOWnLa3I!OSNPOgst11UWLqsW<#iDTt_=2rjS@~2G)Ob8#YQx5;& zS+N}au0@UQP6pwV33tQ$sM;NbC< z+?YmP5H)oHVd>6N6ABFj<5sqMui@>*M8d&OTBbsh3F1qQF&)>@&)3XmTkjK$@o`+) z8(4Z3uJ4AO1w(e;e(Z<%ulnks&P>@zK zp3=^QwVjX(=Hf`ouT*8nN=Je&CnCsBvZ~La>N^XQ7}*ocT<$ZXE=pDuR9jN;Q-w@P^yo4;>&O%q&0dZVez6}X&5vS#+Q83B8Jv7b z9G9FnO)zEmKerh>_in`Tb8Nilq-UKF7cA{;1u%p+?P;?CJX6+&_dXAAUjvNE7XC zicjNKuz2oVyg;$iqQ@ROa|RyUxDifkGj60fQ`x&)XbhEnRv``z<4r+?12Qhs0i;$D zj>m1+0Y~)CPMh6pyIuBdXFV$PqWfl$ud{mjz z^a;`H#p%N7z*uGk`9elKO4{055T?J+=Lh5_;ZYJ3C)vNe%&oR8jOlf&?6scIlJixn z-bd*(=A>&8RFudDD`jC7z?cyqlikVK*swtD0;2Hiwb+J6z^JO6M>S=C>FVkPp%(X& z2qp3b1c@~~7!zp~$M6h-TnTuwL|mlH`AG5Nzg!-Zai0}ZqoCaAAcluVF>UHhN}P?@ zGq4X%qKs5$6h-dT_#*KZyN2gz;f@Q=ZrySibzBIwS8@E&mm1@kI(RFNi6o%fCVw1& zT!@fHKqnp#|L2Hjg}P5%=`__jxBUd`>czOvt~bqb7nr5@FCHz~>wtW+om*RIn`Uj~ zBiF5I>0O0_kvckQVH{Yht{>6hfrHPk4IdKkjj|4u#&9R1Uc0qHqj$iiXtMR=O>ZGz zlVX=KFU!i%UsNVPs~fMmcL&y1QM)#e7Lx>bQe@A7nR(- z6{vCP4^$KOzlY*x@yM93Yc3QO!PP(_oI-A}gyDT7!u8?=S7unc!b6X3!}%)~h`!x(uk>Q=<{dbG@e~}=Wn%AO5qGbA8P|Sj zhPa%%ZXC~U+lOkA_4Gn-ddyX&0(;7g88~d#bhJ_oS=`i!LjwkW(bbNJUfP7xvum(o z=@P8(9mFF~KaGxQlX3ZxM^ZvHM2K}ayP#q=wr}2w!sw_>R{r$P#7`tvU0Z6ts`)hy z)F$yBY6Z3B8n=1;F$AT+>b!-bAf_zWgJY^+ssjNstb4?S#-PU9^B76LHWO)FBZ{V$ zN%YrIWOI8^$c-QzO`xf{Q>rWe6^*DxI^PfMX;P_jCs9(ukzjaWNWKr>1U&vKRoJg2 zscvL(?5hX?FAz}Qo8{K??CO{sP>3UF)DhomyucOchar+7W-wXO{0ElQ#WSu_6G{OXs8`0|DO*kF#U zkKqFV;ElozmB?8&L6*&q>s!h?b-NZrBk$@1osY&E&pEG+tqvT}XbQLY%?~WMy-|;+ zFXecnp9#{EVJM{t0k%7a-=)>oKwfax5N^*SM6N%5p7fdH1i{dG9z9!Ttviq|h#t%; z5@p$)l*K2`>;XkQiqml1Q0?3cIzP&y^%pB^j? z1)5M}P6z4X3?_Fq;1ic0gX?dyux{gSG&h74Z%QEHj*yuhL+w1P`7tc*GVqD_orJGk z_Xs7JF(qST8k!65`b-p#QBitMChy?p+aJV5?>Gis?ag@X&JCE|n!pi+A%nUisUp@i!;ZmA=`{%Jp^n$KDq)1;G%4isvKpnj=kyz)vgyrpTK z)`Jk!C|GkPYPiaTiUYDjKAB1;Lo~8X(Y_kXr0KsUb-^&oDx@HiOzPrMQF~j5>_)P= zEOjL#C>HXF#~LUZ#F3@8j2C_ubtvpLAv+j4S!|fZa9X~p+2~3n{;H1VFjO!$IwH7! 
z;ctl4x9uZfc-+U`LYe}5sW>J~e^c)Wm_^S>Fun8m$*BU;rBDTuZrU>ZKhWMVvj{g}b6B@)B%gROtL>*MHpfB@7W#uJ4F z*@@Kok|;efV|>yv6+u?Tepl@T95q&Q#qGrwG?;vU8E=hGjHd3$IEjdoFLmXSNJ6^! zF#S17zmJ9!0$yRENk0C39kCSAgpGrXal}gn$#;C@C%dLrvn~dyBeANcbk?q4Etsh7 z&ub=Jf6N5OuYo9-MCh7dy>bv+U!lqCQMX#RfNcl2WW zo<1BuCyGzM{}fDaX+kDb)YpsGwW)VvOX{Fmrb4w5Yxh)f-OcyO=SY|pT(V-3>}2m+ z`8003=MQL#m+`g}=g~cCBS!Z`oX$^>zBo&$q=F7o$q=4;X*XOd2%}UeKm6<-r0Da6 zgC^olU1)6Sz`{ePAP`l&yEhZQ`DzQsu3#91or4u>|<&wF!;Nrg?eF}x4URdEb2M-Q-FA3wh8 zXmLi{)9})0K}&gBcvZc0#C0%H1p{`qnt|t8?)dJU3X+bxq-FXa-F53tq_hRJfFoRt z2e)dN>VcZF#>7i#MS`sFQ0mcUwcpn(ZTs)&bzQEivOQ2v0TU_7X|sjk5gyEbxB!YT{C&X@l5Cd=mpWCRtoY%PbQfG8gNa$+8ftwAN~ zyRzDLnX34qJ!wRc(BPYDP1w@($gpo-_=%4{bIb*tKsE zS*ozvNe>%T}HSNNUP590; zPhwU>jJl?oavpicam9=oXcd5nixxW9lc&ytLB;ME7v?PfNcZB8)~>;DZ$CoGhBu8Z z3;3Nzt?cq?TQ#W$==rWO4sj!Mc0)-ya|5EFj%78gmMb`{33`g4=u#p{XGf`>%psmk z3U{PjvBeROV@!qGElwJ(tyActPoY>G#n4a>iqvti&|zak2gY(a$*7iQ`0@?Hqpf_F zx{`u=oRUuo$!LXlBixd+O0Z?x$np5iP%vHA_2LeSf6v`YAx|ei-7hzCm6BR85)qA| zEWLJ9W2Yn&VPdn%7u~Fm4%#9cL&-ueOkO_H>+<&*%N8loS!iyVjy=2A!_E)N65o>N zy28c}g__VcB_?PyP86YJMCAf1M9b8gwjZ(xhn;aUjA)D!a7hxfs~z;bQHXJX@~k&X zsju_AV5I__9IMx@$C{l($Ud_NCoY_VAOHQu z*wQnCTOL?L_s2e*ar8Vqzcr1Qc6H;pLpnGk;n64dA=1Ff#>Cyv4bpKNq;4mL-D4&E z@UE8yxyOreo@^v2;fda0y;!F^5ftHd(g{yMSgbah!lzd~ji0u(i%2Qe=aNYHXOzz0 zaJht}r#Y*^M&H}86??aCkwoHi3E#|^OJ?8R-ozhoX(**0OVlo4OK;&#g|_jlQ&-E1 zXk9GF@fot-pqw%PytByW3Sw*9(%dY8mAMG2vQZa0Bd(Hk;V`RwXt;+0`H;YYn^K+1 z>w^-KRJW|cs!I$WmlbjuaqA0()fgzi%Xqq5*z#AA#B8~c7zO4*R+}4|#g|2J^7PsR zl(@>JtU~G0QRh)th7w7oqKJ|V!L=;Sz2TddHM&gdNGc3Rj*GZuFc6dRjVRPQ6?%RF z&CQe1I%$?lu6)yKvLEYW1;Yutu-ef%buOF$cQ^FD;W%=oDq5z^z;WlDjYtd2sZ@Pk zK*Q%wc;~>tYh8vaxzx^=*G3pVf26kDb~Kb%;HI_;G`xHju7o#QaiIxe@D*g`d9JNf zgW)F@?*ok&39Qqua@-oB&Gkk{RUh?a)b`3XxM}?`WZVeK6MeLO7_=n;eIkLv@o)tn zx~jjQ$E>m)cicTT5!<}JKU9xZ>H&`9)`?TP6HBa2NKmw-;PLK9*RZrVd{7dF>_b=@ zC=iu`BtX|3cO%@XGjPJ5VD|0$16JMT=>4Q}8x4&=7QSwlIv#50SXQS+Xo+V{C)D!Y-O1rB)W@qQ>?&-Vx zydg98uDXt}|QVxKq!L4*Ne8M=7i{OBW(f}!&&bqma6&|Pm-VX0n%W3@7zu2(qf`4)s^?f_O@lbrthF=tdW&Gabm<4)=Hv zN;z*ttk?onFqqJxOJ40E1rm1Lqk;My_j)6?)*Z0X6Tvd+v9;QYKt@Tr1S2t<{Ah>N z>T!^vzA)kR=`|A+l3ouFZSlsL@~~g?>m%VflCfW$Z*ss13JNQOHf(J_kB=Q;=2L-6 zy~YRW1P}FEt$`KPM6|z1m=P-_$_Pje83-;(!%5t1&bBpse|lay9|3_pPk(2?qjcfy8l?YJn=R z0-a!GE#%x2-%JKo6?@ICI8}8yz=s`hr?`QMShB}U=e8^4lVjDsTNt*M9HL}#>tq#97ws9o8LepMJRqgwtcAV;5j{)wEA|M13+l^ zE4_b7{zPrSdVd_^8*dK%m$9-x+@d&ipL3N*;*gX8;i0>E=?3(72Cy`@$cMA>(lHLI zt$bLb*BDvZ8l&`!Xx_-Z(7=PQj+J_eiOK<2n6<_dY;^nZ%fJ0mxc82e@XjxNEAQne zFK)o6pS*yHS(Ou4Ie-jQ7O+COumQjPzdr;&_PyT%P2BVNZt#3?twT(52fOf>-*gP# z{?IA-r$75JeDu>7;pEBFcnFW+2e3+c&wHPNfB8$l1K<3m%ka;C@Nd8$z3wj9yxszJ zFTjtz^NsMiC!T`+-V_Gf@@|aDfGj*U)Fi?>FSnZseNM%>1$buXI{fH+-w*G&`z~l( zp3W}b$2Ty@(Q7EL5*i?|I$ zSb3)H;h|3-?2z2}q5+qzB`onOl{(?P#Nen@6sRt#{}S95Z@v;9oOJN^dL4WY($lJO z#X;hU)JCJ0RJl~)a;>}HhT*u!uu6IiWpZU2o|A~;#8}g&0&`Kw&5(wi{lOS-(k8n> z#=MCp(Av4d1jA|#uScYLV^CYESZea~@quZ*>qnCTzK(sy>r*O30dYL+LsXx#C2OU; z2>nR~_WK>!AL9Gy53#IVgXYpY&sa!@tXft0{@VP03Z;4cei2sJ&cgFoF2LHc+u-bj z=ODmLrq>^cTm`f7Op5rWvzaj|(%RF}HFJ!!6S5u@$xR~zbvIac((TGJf}Krxwxg>c zp@q-^G!@;hSeV?zuw~cjAoOz~`VvrwP93~)s+Vpu|8G1g&&*{0&S1;RMaGdA&UCFV zrL)k^&BiLRG`HCDa&mut-w~RkBKCn5p>=iA^jm;3B@qv8-0yaFp0zDj9+89uq&flqw+ajz`l=b}#5idyW% zsVi5Kf#%B)-@Lf%(saWKVNEyBK2g$Ui##W!> ziIGE^!VZf_hA>GQ{5_s=#yu zQBZPi3-aseledW?cfxu6qsSG-JZ3Bb6A;PQoDgFjAnsrv4|;YaqYHz9i~>{@-wU>d6Rh$3RkEoWmeFw7wlC(V%iwVak>?HYrq=tsr9ixY(*nnnb)< zW?Zg+?bsvAA@smI!GV=iF~-fwltm3`Z*~=1ZxTWA+2^tfy>78X{S&{V^adSXrvNNQt@vWMHPiGG_VegA&u+CKLDq(UP z$U=f`B^H}R309{d1MXUj^IuZ9snpy>hUo%U&{i9d;jd<mR^Y5FOg{4 
zx!3P>LKVw+LK*khcCQcr@t58Y@jKoKU-Q=c;a%VHAWJNLgCJfae#EqYW!%KHw)XBhBdz6mr#S3vtds3J`d{knf)o1#YKBbu{1sXt|3+!L`bYxn*s&I=OZ!{>ltBHG9C{Y z{Wa?KpugX;(&}UG-BXK=jHn`Nw-u}p1VL0MQ!jl%*;&Y%UkYDuAG4P(%+DX^Sqbr4 z2-9Dx;CrYoWA$+k8qIm=ciSxM(#0S)CP*^|ro#Lj-1&xka3hc(%)s-Kxd&~Wu%Yr| zjb0WdIYoR@sbWi{1h53kDfd=2c*JXRX!=7=WqRdK(9=JsBS%t}Ox7b4(o`Fl?bHlBIi;X%Nx{Y|6OTY(AB_?`4|ziqd>&~A^Q(-}jD2!*Bq*E$t= z_kaEnyyv}-!%9QJ_kZVo@b(AqfIZC4cUl8zbw;q;8DMfc%%IECJenFNr_Ws*!oT~) z--UaRPv8f?>pr;eL=2mq2%q~=n6xj#PyX!(;Qn*-@a|vuzwr5sd$76J!981sPd_t( zUwqF8;qr|(1ZK?1C0`XQ#{7!JBDEyamB0f<=rTV=Sn zv={2f7NNGbVs#U3n&|jS*5HoY-PHCUP8n@UE~oKicp#h^``IR6*?H>`ZLZ)T%s%~L zo3!ocY;me)A}KJMgkZFUOd6BG9A@jx)nH3kk-DH@K?iwgB_nn_d(iK-F>7DOyh6c< z{0i{ko(@EQp;B)!Dr-0$@TO5D0l>ZNm}eYQ{1B;HaRy36hOA=1;K4qV^8%Q!FR|Mq z6BP>qyZOJ=CbGQfb()g9;lzL^i3LN~LZp z$_&LKH{#h(#0FVY+gif+7YEi?K41hBqlIj6V$&ieq}Fus;^%e}^8)co^sl%AoZ4B7 zkP)fH#w5i+`B{qb-tge70SX8cCX-se3v*-y66rC^nVP}1qLkkd!mqI2C9E$tTiv9S zK;^HcvDaBUIJJi)C%cB&{;~6}H&YL2m{^G%AS>uVTQ3)p)(--wUSMqDqdd1jrA`-P zjj|kSEQ|0QCP4}R)8tQ@}x_ug|F zzWW`og<4g?RXpGc^54gV^z70I;zP*TKJbML@P8iPfX6@gyRfp*gv+}gM>nzU|B33Qyx+`Plm&fv_yZYHmeOL`atcT|gpT_`LVscQ1VN9qX`xAN0ddJ_(yo zU9^cy%m~)7Od_=tw!09)J3Qlx@);jBVhNR+#o7m{d5M`Javf5PrLs-3jifJQb%I>! z%UVfRbBGRGx}c$@?eBN?nd}&|m%G)0qSqc8*zfLh@3>aMi$We0ETvuWOilW5`ojqx z*h6R(tNhr=jQBm!FrE&y5mpN2QXQ5SS9z0BA9li~Sp`OkP%CNB-)Ev?H0fJ3(3AsK zt+5Kt<}$x7YQfQOC3*W19-?$m2?;M?c(GFA7gDQNp|iII*REZLd+&b$nvEv(MxsB~ z?(D)4gW}xcaVS*3fSDdXFJ>Jhjj#Vi38v*K5Bhmy5->oT(j3?*pp-1q4y?IVLN<(S zz_Bn}lOCXgcSEH2MpE$FvCC)3b7rP}b^ux9+dq-m2QauH_L;ha_%iD}Fo|emY`m)> zzt-3m2XR7}^Jp>Ol5?06_fJ9k@`r~Ot!)x z#p99HqSHbkREO;a<(cBiEZ_J_L+jZFN77c1u}3n$5- zKMQc_`AxX}%t`$B7@obd%X7<{UIDfe2;lqA-2uN)AWgCb_-m(@;QQYHB$V+yD^?58 z-)_MIK4<63Wq4D+2WPP|x%~PEVSj5A20J@^#;u+@1@+}cxc1xy*x%TLuYdC!;UC?1 z27d42^H4D*s1cML&*0@#=V0O38eG3}3A)!ept3Lzi)W5O0R!^Z<{k`ocR&@6HgXHq zEXrbrvyhg$II6$NL_m@jFO&q6PAaxq25y9&Vz)*18d1>gi31@SVfDIL8sY~W;#X^T z6B^YfR}rM$#aTOEoMO3=2=CeKI~MH`A;^&fAi2ktL!QxeF-p_Wo{X&MXr(Hr%C^Fb znDGl^JeUS7q^Jol+-t$GyMs} zN&;&&k4OUyBROIHtBo?;*uDhooiosAF2ZQ1&G}ytOfb7?!@|NkUaK062NPI3xd5l} z28?PLOh;ppDG7wTBa$D$;&l9b!XlF|xf(Gl1*auhlt|PA0PI}B_-X@ebrUFe=sG|3 zgSuS@>Q7^5HeRkhL2S_>!Ax~yJS0=ufIyzthOpAuf zY@M`Sv1RTELd;3VXMqKr6s0^r9J8LF@e$s4#BG{}vbTsZtJ#+&NUBtG_*7(erm7D_ z8VjM0r9DPf=9pTRG#kX<*V*sF#?u=x#B6-7GRJ5xnmdp?X)HapBw0!6r77dQPGX;4 zC@c6VF!0#ZjE-)-875jOl_29PGe=OBphM*$CtnPoE@_yU5EDK8efT~~aS=++@JYde ztDv>iDYTEq;~uOZKLPjOc>?;KXW$KYFTs6xzXqQC+<6Qp8kUb8hv9G+7MpY2)(vrQ zc5&}6%wwWkU&cTN@R5&y0peaC1Me=Jy=@83uHV7nptYqEey#8xTE71zoH=_NJn{J} z5MDL1AGEs3HK6M>f=;^y|M=_~xPWKS-}}wqht{<%_`bjTP4GOvy~jTMIQ-C;Jp}*e z{av`;ZNZyP-v-C-It!n;?=JYCzyDsiYk3vE=k>2+1(6RNUx7y+{zLel+fTuNx_A+O zmLe?O zxvRq|&a)asw=8y$?iXu6oF-e{sTKF55?{ppLN>G^jt8~4sG=AQ`xrIJt3<=@?p4@t zU*}THNimGaB*k8IqW8>{r@^^Mco@(ArsuEK8!X_+0(K+JCdxJ3cxB%stW=u#xDwyL zeH(13f+8)fVQ*Jl9fw_E+g55&t~BuVRrD0F>5zDm@zMmv4 z*xNl8jJYvA*xI^b?NbY^S+u%xOlBA~CmZ9&t5@Kjw>$`yd`pY9<8N)Rww5 z`%z(FA-0LPh|$X1zB=JVpRJp;9#wY0=8!Q;9g@zvgHR>Lt*-wqnf}yzv1>nX(|*3@ zq|v3t5xaTDG_qgffaiL^uA%~4Ie=4B)!I5R26B*vA21>4GeB_j1!STLTZq-tg> z`dNU1N@-@&-pKk9tYUKBG<$@#e1e(am4H$$5h z2m-Z;iDnyFE%B~&+)bdP-k<$w{?cG}x*!_IC8?!My$y!o~3 zu)bvAu2W^WYrO{doNmC%TnMe5>+E()mJ&p;K6j=G_ncjT)h0>sPdPxX-*y`&$?`dY zDX`>HtKElAcfghseN2#7mYQsDJ~`qVnv}FRsSgy1go*!YpCI8CBJM`8j#*Njwhf+J zYq8{eqe=z-?WHU5_kZEP!PjGuJ$?2peEZ3D7~(nm{N^V7t+@u=d8`3LJ%$VIHeWAx z>&0vFu6th#zjyT-e8PJVI49-3oJrG~#kZag@k*7^zBV(-Q=iT1Dkbku$1xdk8hWIs5$`*xbCp=1^R{ zkOSi=;=Usd(6z2vqJDun8KX{D23UA-s7QD!jd~L=I6l{~&CZHzD>Xh?h7-fQmTILU zsE(<8cvO9evtbKA3w 
zYPrS^VvWesm{gl7nnW3vvPTV>MCQgZomueG28IMl1et*=w;i;GG;a-?HOjKC7z0@tcmHlT?gJXMpF5K&dHtiKDt3P$@V*t)a@ z{Tn^Z{>!|#3D__+v>zpa6R(4wCkT#KTQ+vCrc=ghPD20Y6g%Fq1a~Nq;aEv!Do2?*k@kEEK6hQ=xly+8Mg=2$Kf4_o z@b__lcHs2MHMsEHH8^%`5q9uAxVCYPgF$x~!~9&8+mMg{(dV$DS;Ay^2pz0S?mD*$ zg&LkaHvrCGdJZ0V;BMI2?!%K9nAg{DgN@IehqG9j^m-8tdIRC^ydbhIfThM9{MwVx zz;S%7@A|sG1V8<|55xXm2kNJ8gT~ql&kpH!t`PVLgH-+03I3dPe*l$69rnu=_#g&` zM?U`yT-?}#aIXbleeZp6ai_zSjL}{j6jncz1_rs~C!s=unUwhBx!cG6*U=Iv3TY0B zIke4{SgjhBbT-^*FYP4fH<2YNMYey+9EZt-4?Yo1)OKJTT2f++nU!RBBo*K5?{icn zc@1g@jrMOqsc`}t_{PQWBWBmdN_C#is3`NGytRf^Lw9!%G_|!#C9V``phRUq@m8p? zp@Y1PWjVo9B{#x@gS^`f^;sjVc4|B@r_Hq6+Gg#qNpFugYQSQS5lgAe*X9}COrGn5 zL5C3!WG6i7wmExaUk9vqnhPhGu}?lE2LD~fzZa$$^rlj=O)xN$&pbWPcof2Xa|xe+ z01tfSTfi)opo>{xkWhzhb}H?ph+8b8uuFU$W!p!Nr4U;w28RwYpsjj=9D+VGh@B9m z&h)wCbYkF9I64a_HL3Z4Rb6!`JYlxWy)cRMVV+PW4-MuxyjVJ04rFyW{&0D!qL`lz{yN@K40IVa?X zXkVkin@#q*9GD>{L_}vB49011S?&un!o3@g9ft&VpR&Llw_w0}fYAGhNtl#j*qHpV z*;_&qi}@)s{iW(~%HWd7!@2E1O>81UkyzQgf|bgRG0YX_aIaK2d#Aug zqJ$(rL{*`f!C`V0#?}YPxb($WB%FL$2^DV5Sy?4zqNM6IN6B&+7(p36a$e@ChOb)| z9_s}-XBhRazvATP535X+*~hb`3YC(`+!J_ab8igWpLiCpkt88cUAxFAx~DgKP z2cku&I{ewunRotjPaG{93vg+(3(x-c$Cw&0#>92w(a-YrA$QhipWk8S@~hj=kUS&L zkOQ4c)Rm};IITk`UK&RwXpJ>|6tDjifAARm_NkL_??Vs3uRZY@_*du7z_)z$SHdQ4 z^}%3(0j352;LK@={>nGN2e#Yr`7fM@U*Fq??_6DmuQ_=VKCstk@0Ne|(LaKJ@l9V3 z_kPveSZ?QMKK~^A)W<#!@BWr=hVOdEH^F23eYo({W%%;5>(HS}1YgrOe)gjY-K61> z^TsfnZYBdXSCU31%Cs_^TC9+XM2^8_9fd+{vmm)gwar{7qY?Wj%+-auPf*=*DZ~t5 z2R5!e&F!1hr|*G!y^jB_#r6CQ0I?F*nbH&~_UG&alq` zg!m|964J%H{sHqix4E(jwFW?2_tc@v{46S`azzIn@>z{p&^fps7?;~SO21(`T6^B`p z#>-1u)SF=dgEHEx15EwnxnS|?w?Rgj(?dd??@45<4?D*&E;*q47WZGJScUElJZyFV z7VF1&KB6tcj_C!Q=0JM9R}!aGT-l8C_?@h!~WV@`+~cwR_S8Xw7+ zQMW|Q%>IP)eu-1)MQVeyIAsz1y-NW$j>DuA;7P0Y-Y5woYw8n)A|tFOq!kvF>>N+H zT0Ao?Dw}X6$jh7ZHUiPU`{73(hEY(2V160i`CIRU(qa>@o8}_)_u;e;y+I4=)iMW`=_G#MSLwS7?Kh(`LtnR z84Nr4n)-G_S7C1cI95w_EOFb=+P%RpkR+{NDwpuF6AXwAZmEqXUAX+*6Rh1ccltG; z<`f4Kn>NZ#Y#H5bDrMQ@;Gw|A`boqI1UF-~3z-ALfX?s{oD-t5L+hj^q7#u%Uvb(a zCr%~xbalqDrcX4FKGTT~PKY9CEc;uWTZN`w%Z%fO&*hi-%^JR~R+&-?s zur-Aqrnt@W5+^^#{T>=rW7SjCqHI9>(6Y|ttWXIq$^kpvOeNs#+j;_-D0%I-qz{q+ z(q9Zc`-Nw|$qhFvIFq36oFqY(CxW@bSF?zP~9H z>kY0hhIl?cF_BIZ(Hx4+RoKU~irgv5jfky$>Ab;;N8`EK9S*JLV32C@&Q>NQA7O{9 z)aD^_APFY2+y)-fp(;oe)(0VBe$#^XPH zn__Y4BqLR>Y+hg)`dXzZR=#u-)65w!&}i6W+&h`kw0dpn9c$vG&;YH-7l2-45ra@< zlxHtfn~cbkbU>Af1xwQK_~DFLCWE|3#*Aj5#}>jGRBNlSx_XjZC*97Tuy9%U5Dygg zY!?nlt$7?)@pV&c@A@W|$NL*lTUf%^G=Uw=3d;Dw&Mz$Tb=tUi9;W?WIQ6kXtV{J9QH3I>mzb;4tG4JY$We9E)9>1Oenc-Hp+nEgpN3 z#FR#yF`jhjvt}c~m;x}_77$!LENGr>74Y5~sg&54nkC&N2LjzEH2kI+nzE!=ofbuO zbBPqWGcCQJz_%^&P&q~7Bu45qBVU76hj!FMew$c81VxB9olc~zi3IzG( z_yLxmL+w7iu7seR?km{e=s`KCdC^t2QgCw~ZdZ}!9A3Mzt&T0cn%Y*{scBgSVo7)` zygf+iuUYu343lnYpdB6g#SZRnp9{n@>Ll^@SB80zxpFGJnYwCAm9ZtHVZb zfWM~(?W>mTOxC+h504!lL|ZBWbi7n$ugspu6xYz zMdFS`R(if*WvJ<#qWY;R)m6@3vh>21plLZ-MMl7Qeg5ar% zdqOQOVBUb3dC~n}YBXVW`6SdZQ`^SqKAjG6=BXYU}YmrPYCcma}wDMor3VkvXZCK&4LG z;H3NA{pAj}Sne@$MvathtJc5YC*9v+r2@6E4Aa7tH4Z7^rP*$!k7q9VFA;BUis$pS8!+76fpt9JTbR_d zk~$^ztr3Ly_xW-WHm7}PZEV0~)c1DSa0tNUrrKPVHN}KK%KnA}F?f=8N$lGLv9g0l z$B2$)phjxFbfXCOLtA%6Vd_Q`!;n4F&Y}aKf({MM+4O4AXTsjx!bzBGtiX*ejEI9a z{=LZ?fwJ(BvnXJ%bHYDQ@WK@;4bH&YcySh+P2qSH$TpXt#C6vAa%^dW{0*mapae>U z5c?F=CL~ot2$88pnBavi&qHl)6)TD|Y-4cP-*032UtnB$gh6FAZ1Dg-EYzSme;bsr z%-q^0WYz|&4mamkV2szU)!Kq`Z5gWgT6+6kxW0P@PQKwBoVe!<^f4g0tWBJHRCn44#Oe`(PB5wgQfU{G8)D;K36b zdw@E(&B*$JxXjpkmS`KW9BD17s#2=NwqGt|6YK8`VPYxmW;x1OK7&O%`GIV504k3{ zBl#OEOc{&k3TdeU!^}gkla=zN%*tC@Y{HN~y9p~^QV0~en&?E1&z;dz}N*{1_PeyI*h8VNsd9uBYt2peOgr1C~eZX35+H0UKu`$pB 
zu@=KR(atj?4V#yCAnF^auz6cnYsc9Z7?%lyMQ4Q~X~V-}V_StLvcD5jGdi%6HUwCi zFt?0gy8+21F=$3${w@ImuW|sX2A3oS4i&^+IfAvYb=i!%l}hA)Zl;O#(&jGzA9Nb| zL=yXES9#n7;^h;p&p-VsFRjwPdYu#8Pwae#KQm&nQ_^*HdyBzB7p`98@9&NVg2FA7 zxdZjgrRVXSE%Kfgj7ugY->6qHHGPeNFBx=D$Y!&2VUL}`}cqoK9~wZND1 zEU$ovIqNEsH##@cF{2-iU@+*3-72B3*oASz407T{jK*F3zymn8b|;tbt@fscrv|LK zHNr}SXB;%Kzym!X7e+jw!xE_u7ofFw1I8y#3VH{B{{S}xaXpHKsuyviRt~ws@j;Bw zGnx!phbyvLQYsdK!LU+?>fCYmYwzRtw|2LoQqZDZLsb_AAnIEZS*pGSwOW(kN4tF$ zA2Z5*x-^ws4#8cwB7fIgMwIN-QBg5v6*fHbOVKNr0Nuy;xB z8C@agbUbh>_}|O;S}_@-#j?1l+oEGNcRv6D5Z~ zySgNnkm(I^BDlHeOoHS{2)mj&ATcYj7x)n*hr*WTq_rwHM!73#z2vC;K z)rw0V-~go(q{GPVKA6yEE0@Yp#gdm0FEopYOzeBNJiKAM(a=Vz|Z0Iv#`*tT1IrUtniUCW5lcfU$@n<43JY`;B&^5D&XOtfZR@ zY}O`8QG&qBiuGlKzCf6jf9}X4BN?tuh^#oWQu#5kK8Ph3Dhmy!zq3;$k<8ff&_crL zx^b)4%Ak<62)zcnIFUnbS=A>a@6c6pbg1Zl46A`80-z)Q*);J|Os}R+o`CuZAU^a{ zNZYzA9VawIZL3-{QxDtZf@goz*g|wk;sme5m@29QOjQ~Gtl+gOVDKn{ELGQdGtz-Qoz`V$!UvaJ9F|-$Q?7O}EHHU8!mkp$ zFqT=&aYU-ZSiz7F{czO6_YhktjaaI*Xvi0>RG!Dz8AEsfI*dmhh9i@^Vgug;>4;IV zD%bF^sg+>v^dc;tU4ikm&pl(6GC9gfnS10=a50FG`aqy`BHdzPt5o?(Kn9+fNjqvPvys?jKr60F@SFsb zSo%7>ErygcEYwPyYvINyi3S!9ZINaa)a)pplN_y5p)=*`$H6jS$StAv=ox6XHpRrH z*T;ivD4Zv-Qv2HSJeO=buqqN&X9_DLd-&%hG=uZ z@;qGregL8aO~RNZOghS3!Nl|@RWzi@hxe#aMNXJ_=udMQ3{ZxhDntHRW!t)fGK{67 zGtz3~e;Z|6kB~*^{RLkJ(yi^3F`XTOICQ0x`Y~4IH74@vc*cH z_bHIfLQ_;w9_(n_T!taJJ5A}M^9(mjm_c&L;ixkMH^_W8 zv!$l8`cuXaa*ZV|icVE%+d4r;z|*y}xZ>2=*8nrt%}y-RRUibx4tUlUGrO8+6x^no z*kGc(1Uuen`1;Wit!zd}#oQ^Cs(^^QBJcs+G~;cUUg~hd zRKR;8aNg!70sCq%aRU6B(!+CQ{~NusFit_6NfE%uq1YqJjBR^xWv$b8XJCn|DFAqi z`!m&PV>71=5G3};skFq;VT^h~%u+^p-yEK`n?q1b85ht@79JdY4t^U2P#L(jWybTp zM`{s$aHJ12wJHH%s}zT3Bt>QnHU=x5NLMic;eDzxdBDnMLO>k)5ZOXP3m=VIKP<)K zW*Ejx$R`t=bbHp^@=<-ue(p!wmnl#$p*Bh!S5=CGq_UNTt&F`b5=r(@tCS>=h9j1~ zkB4L4Gls??`0pB?r**PHpGa`#L}fG<1z5(?Tl{?-C#?l0AQ|zipf(s~_tY|=fjVbS zD)QoUh0w68BF#H!a6XX6TSz|}{+UWMJ(Camrshf-L9a;XU24w+nyV19l9AHOey_t2@_Mlu($>|al|#ywp~UN^Tf)h)+kkgzGY)2%^fMz$(sx0 zI|XTV8FdUcJES89Z`zSaUyB!pw5~`nvc$nE5(y9!r{p`>*p@4v*tgp3K-acA*!PB@ z;dEHxi)wPd8f8Qg6IM%{B5Nkc?|3kh966l?SnnS>^P)o!NPno>Y@JnqDL^`SvPzsT zxC@}1OQR$Skdl9iQ|48Nb!`m5fxJ2`Flqi2Gdh5mhP6^J>hL@4CmhK$0F$cVjiy## zOmsmntSRR|Ryoj@dwA-G^_qjs)D-0&VBTCHEmBrx4taJH2jn7&2?t5aYf;#sKmF&d zX2WY*B_clb!_V46Sfu03_tl}Z2x(`+5i`~_zX}v&{!F*PeEXY`tK^F^urro1n*m1T z=t_ZNM8RPq{ztfX19HBdP|%2>fj?Kr|6ZWP3XGle1zi0}n@d@l1JdBZjJ?|(vZ$du zpd`Du5)9(RW7*=?G9?gaMIwh48I~(%I8!wd<}CeMY7t?e==OG5p>u>Uo@NsD`kW`0 zQR`}eZ>Ui(vHS>t_`4GXxDdo#l9NE$}EB3TlW9m!{$;ZSXx2 z#-Cuq0qGWTmOp}CyA5%%24^0+7Z$K8>0zLy8BuUZAknst)b_@4#!+q%$EYXmpqI|a zLU~CEWtDHUwzCOp%SnN{sJF(gsk&SbmHx6I|xo8nT*`XX|Hq|>}s;; zCB6lWW_qRpX-?1vJ+WI8s?`wPI7y=ogv=$5ZKjL#HyJ2GzJaY8 zRnk4Ht1sz#ALsV(Mst`e%xao4Qc|oqfbMG=L*C(|7pJ(F(L~Mt?WTF_Y%F0gtYQHM zy!=`N`NBTD5#D=nPPLb7${*6?3gKHe70}VtVYr85GyodDLm$R5mU#hPGWoY~>Wxdm zOJSyUDWvDv)vT9h3J!6r_;2hJ^IAY>n^NsvN zSsBTtgI0H}7l22hf2ih;59qv&go0z6uWYk>A4aH_t^`SV*DIP?VU7KBF}m0d+QvD# z!0G}v$!us<-|=)bGLqgi6P%;Oo?M+q^JFRJ#Z4vcom<$z-R98V7`p@|Co7*qB^xN3 z5EvYQL!;(DVU+l(h)>hW{(c<3%i zN(PPo<7(2zLALPD;Pd@0=@-Vvh#O75~bNfi|`!*Aa3Pu)BNW?hI_?(Ap z4?LF$X*~Ja6b@2+Xfs#+=w@t~+=GC;bs@Ka&wMDp_Vhl{%+HQNoS()fqT&~Eh8!pq zYRAGh0c=obx8@!e9W7L1-d#1y+kKas!CVduu@Kq6)2;(8OEb$^`{X=-@(iQ{9nX-p z3cBwt3eb;ko%d>X%?W|VvTMz;GYf(9Nbg_~wMo|b*TM-2TJK7iB$5PZX#{uaxzq*ZI z)T-5~MOSU_&~}-2_cquqW(Vw3*}Q!sQu7xZW6y6CclZ4nG}le86@Ju+MRI5t8eqCo ze9;dP&XY*kOy4t_>I%519M6|XDZ*TTE1;MZ7Bh}`!cqfGye!8xXkD;WjUX}e(N86W0D}L)e!|vD+o{^f>duWas^>A@4NRl^A0`iVK}h~ z(CgX*;|LX&_jb=X$5vse8LYdt;sdkZ0CtDsXu*LPFaWaXGqgA=j0G!s6n}t>un%id|niW`eXmqzjH~qLo9meP)!k) zyA8{YhqUR%?Ozyr;_1KCtjCp_#;zR)BF5%F2c@oD)G9?{W6uMG8Pi9XVkSFRaaQ1Q 
zDja!NOpQJCl0c|x*dGf$i(ie>CvXD-lzWhIcgOqHA@ib5n%d{qNSpK@@ZYP#0}FlFOZ2qogqFVbbDl&pcr7%CV1aAOr5Q{b%A$e?ji02 zac^@>X?}|d%3tL?vb~ZwTchi^tmxYMjDPzJ;vCQ7xQuQnuE}$Ts5>HrxU}eIqgc3x zC!6=;QX9K58%~2NKjBaoZM&=kD%O*+)O*b>X3j|TKKp=v2$grU{e!++^R!)sZ_D4thS)8yjgd& zt0!xqKQiRdi4L~4$e(g5|KhyeMiG9pJam6GOiJ6Q8O(CL5D`O@5=}p#TI0UDVUl?S z+-V0OS%sb7^pBwS%1gAW%L#M41aj1hqAS?|3V9KE@tiK(Ee*tUwMI&T2;$1|HFf^$ z-#6sC?N(lWFaEoU(2KZPep65)Kvq<_X9 zY$p?R_rjUDA3PELiufqsGt!8ZZfUlVZ>1-~5^2)K`QIsmp~K2XXNt(k?-rw=!wCE- zmxJVw>}_ZYqhsvU&>`&v{}d7Z9c-5-eX5DThVM}MuKOcR`iIpY*+0N6csVNYtOE~( vk;u9JYhs_8i(PSN7So&su08u|6~oN*WUypab!gFw>13I~Ew9%bx!wCeFNu>& literal 0 HcmV?d00001 diff --git a/images/case_studies/box-small.png b/images/case_studies/box-small.png new file mode 100644 index 0000000000000000000000000000000000000000..105b66a5832bb63fcc2cefc35996bc98c9e013d0 GIT binary patch literal 8519 zcmaKSWmHt*8ZOcdA>AMiQZv*L(%mI7G(-2$UD7#pBN9^5DM(3&gn)o_gMg%TUOaX0 zk8|!`Yws`C`#kUS#`^aDv7^;h<*)(d00aaCYz28~&F9hT`4&b;eSW5nO#FEs$lPTh z?r&jM?p{z=O9TlEn3*Mwf+N)0QqvM@;o~xFDU5)C#A2%jafhfV3z@?lIiY`TIK3U6 zpV7}`V&2YBb9+m78Z%34TPG2^bTt1kakm$t`==;~iaL!H%+->H zj}y#c&c!1@!!O9m#V07h&C5>14dMa=L4rUoUJekK5J*r6#6|P3i|*N*tA&-2rnKz8 zd_Av3=xp5GorQovFE1}nFCI>qt2K~IP*4yE;s$bab39vcxcNA_L%lhi-01(!AZ_Vp z?rQ7oZVPjw`O643gL$}%&^;^tX9$kY|Dknq`&XKt69)8#Is>^lL4QN~x1oy4|KHTn z@jupX?wXeWSMPr%cGL24wghTgy1_hL&7TWrMgP~8vyhalCDa|}ss)2N{5y;4HZXUX zn+?pFMoQ{$*7#{yRG{XzPJcIF{{y9>BBbEt<_>i-w^Wc8p?l`xw6(Pm5|jahWhG<< z1SR++xwvHb1!Q?7r1%8+r6l;dd1QHI{*9G}nR_@|I=TNFYw?WbkpzLHxuw8@5@4?X ziv8OVj?T}Xr7c};JuNL{U15$i{|H>j_J3=6=KEh-{)x5tFIxT;2mk-X0-wbI|Caav zDfhp(o_px;^gqq}y!cQ1TRJ`Wyz6sw>on&wA|OyTDM(9bc`y7n#PHU-XnI<;+*z5Q zQ@#{Q6sOA>Dny46>*Dr3%oI22818G%sW~bgnsJQuHjD1-G$~fyriK)jMFEEBOcQGv zp%{A1S)&%0+v6QP$GLCF$e2=au>x;9fS;+DN04ax``@47_A=|)FjKbA`fqOx>ygv?oQEoq_@G6aGZ7TUdea%v>_^i}{lm?Kt4ftOC| z9(NgMapdJ!Gj;S|i&Y3vaz+vM)K@X4$Xu#%tSDUKG%>9WP{d25U~VjT(6R^L^*9dPE4V3^q14ja7&=lT=%*_@#!38x&8Y z;nqR{>jZ-Xeud6yln{Mkr}?=JgQGJ{H}O`oj+loZysQVtMKO*F7Nd7Z3YrLeWbn+~ zypW`D0owx{(W7&~T+izPZ+7Edfb=Uw-P)T*bW|HIbM?Bu5{*PAYn!ER@RM1oSUn3V zan-4DT2EYKu2LuolN8j+rxvrw@aV+w`bx?!ZNoV|&o98e`tlf6b#ezo4lS)-p(j>i zY=epTN0U#`$_~JY3SCC-Ba;>$C!(gAgYZ39K}cb_TW;RTp^DLq(y^~y=IU&DGj1_P zWlcj&P5Gv+s2l+Orn*iKDWynQTp|DI{t<>z9Vff0p;n4^O2uvnP7pH9ZHxCWsAv@ig}D65<0+?AQ0ws(>^-#p*5e$??}Y_WHQ1gLP78 z)ytwf23xyf&-7xUBlntpli}a|*&egf8kxIYIXx7=SFw~1tqqk;GSEdTGV($KFoT)f=K|imU`WLuWEE~38n^kwWobZ5tyC?uu6p*- z2}C+`O1_;qptAGjd9@Et^ZQHUD0b+ut-64dPz_EF#~ ztu9QRL0Q;G$n!cUSL3h=u_*0;0c!9u2IQqTz;LB5M`%na))s>^m zmGq)CJ}1LhG%uF%l3EvZ_iiR0W|r2xRLBJ8P&-GjXmP5|Gs7KkyZdrnR&m^kMGFcT zFXK(hm#@y&@rJ47c@BEfk><7_SjAy}&eU2Q0h}kK?(SJx=V;H!fVGL*6ROz1PgJmc z2TmWf6w$GR0hzhBp;|&XlxA8tKkwp?>7CH%xav3l?C=$srpk5qn+1f^0l0h|I&>zr zl1mLln@4#ktab*8tja&Ra@_xNDK43(kHvb~yVSf9few9Lv|TGtBOYyhef}%LjNrKA zpy!NDGOz&nfD$>_D}2Q-1bo~bhx?s2;IdF!i6-W6QFzz;)|lVNexQ6{dpRp#nnV-! zJ(yW?Kv!?2ZF8x|5zylVm~@SnPJNq)o-b#(p>(^3<71MZWYmA&Og%m6^#{WMQ!_4rF}!6|@|LkhBg zcDfMEto(^>^h5joX7+rSq4v{R$?|*tGHm7^Ye`xLz3xYEkBfN%z=c7M5iQvWP1#P+ zt({RvFCKf3Pw8D+;>$~!B8Q;eSqvf>S?wXbs6>B}AYXP=W@02o;ukwFh|dx#n0M7! 
zLFFsGE38pSo#lfJB%v=}r{~b3#^ff|JRMwElgw^GW(+jSvX0_2+daMJ++fGQz46NJ z2D|~3)-Khai`yw@7EfMZz zJ#|-nU#tRZCtOyhZs?tOv$O|$YRBEv2M|RD64GnLZmlBu3#Cj*!w-V~bVJYCWbBYu z2W!xAYiQqnv3Huc)T!bQ!>Se|xvoMEO$^R7*31uT=2r6}aFFc-Mj zap>lz@e|}*%aN|pEf3RiM9NIoLE=u=bvS!w+XyOJL01g_|mzq%bz-6e-@DfiJZ4!sqvzu z0h8FPmkDYv*nU}Ir&DL3i<3SB~pi~Ra)fScm( zJ8^oOXa#*0Yyr=Z?bcqzy%%F0fr^igY5fC0?$@YjxQp{tlntFLg4_3sfAVDBVjjsJ zs9w2vR}Ty&Evp=bUsYEJS(Y|JitnP-=XM@k9CH#3xfgEF`}Qlfp(d*;A1zS|hu#(CErPass%PB0!UOy5 zX55zpj3?9HcY?|fPEr;;#pcZ;N$y1~x`$L>g01|+_g0Ci=Ddf<=p@-W?fZVi1f=|C;0j&DfqOcoZEuGN2LuRdS$JM*B!8%Rqn}drFlZfV_;H4c zHWG?sy|pJFu0ntJSs6_qz+~`^gIqJKn2Em9ZVD-cowov>C)jg8;ck-A2#Eyxa+cT% z1;;J%6XwWKIU=q|UUnOJ>jTI?>ToIP?1q?Y%i|1Fq6l(S6%;E! zr5(P198?tz%#9nFtHKJNp(Fh6%E5wjWu-t1`HCke?*vk7qO`Dr5R;meWlV$o-qmSz zjK!^#$t!Yv%IMnqNrBIWla>rNi~2@FwvL6~Dt;m~u(d~j^}!_Y)cmRZi4~DSZkUJp z!afQSJYVs$&#^$Tp&R{GCD|V~&8!{E?FnNJ_-0 zt&S^q9`*~TmfH2^XYK-ae03ZDZ#xrsq71g_#&y~&c3-o2$HQfwm<*x&jeLN`sv<}p zI3w&wY6E@QM<3AX6@grcwCiP60RQcviY^g9j>~lJ;YVYB)4D9RX+^y$C>w>IoNQJW zIHp}-D%~Oz${*L*0(6kbl|HAWmtq3onDe(WD|XTux5hH(y^E{GA_~5esvS<5q5@NU z90uCsh8kg+Nt`~v$4_{}P~O@fwAF0T{5pK)DPz`Pi?i(S*{xkxiN2Vblpn!D%IY?i z)Z?9RtsAR5aOf?T9X}&ksU#jO-&0$r-E;{f=jWb*AlQY};st*7`13U^=Nc{$B3db_ zNalX4L z=xdCxE1%>_z$$VQsMIuiax~?396@35r13NrQYerW^e)3rHazPaYFFyfugwngA~9REr-qJs zC3aK6OpmY9<|W1l(y9q`O)Au!=#TYk81Irl5GCIuB=|59y#_OuT#wHg@rOwk!VgQA zk_>lCwH99+_0-uQ~A9~0sCVLFUoSflzi(jOG zMYu$OFD8h=X(Jwsi-7~L%aDTf)SK85QxdSwUbhFLAZfK!A@RzjN;7$oOw!)oOTO?2 zlb2aLenL{6rS+tA3oZV>Y8XMWj}R0#Gw5<7l&qY?kl^q>q_MS^Ad&MbdZwN>T%Q=hGkn$yMZSohB_V!PT?f9!2bxIF8?s5@*UUBk zeGuN-LNfrLBvd-1Jreqb&Pr`GU8@*zkZK)yn*&TI6#djBb`e-?IlYCyiPN~HoUcK@ z`v;Xa6Oi*|oC-JlYx_4nxGGs3lSKaTR~^m{R|g#qSN|}aSXqho+B*A}0^Qw8uG^fp z&6McV*zAWzoRJO+*Zz#1gURt!Cxg{G6GRG1m^S)3+X#GoFnW@80U zDc*Q@!m5ZpqX5^J>SQ7L(nSqEreV@dC@tm12n|5bCX*O@&C*H8DCB@QW@q21a4ViN`XTb(1oixrNs8)lTve-N#$IcfUA8H+|5Cb?aR2E>)BpgNa`LQ`)&z zk$xTa+oe7Hf%EI#i)w?HKSLerl>J`U-^jnbsa=uzVv4T}Of6Q$rFU|0S-@yXAQ$oa zh>}m6bMGz5d5YZc5()R&U87SaCNUK$=^TDzE+sYLO@WR?5Gm&;xh3X-Q(wJ;5eP(0 z_iLx;U1@uPN+i|H*E+lk=KGe`zfVb2K0Iyu0v-ta{J{C+QMo_@p0ui$RfRXWWjDxR=#1~zP#3q-A~(0e zpX%S$8+$XdD0_`}j=+HV&dMK^=~dgYObP6S76R_52Ju%y`X6&d#LMS?A27Y5{Eez~ zn#^y1I04O$v{F@N;Nqbml0MsqUy@8_6@BHJYb1csl^EXIM@nA3OAp75-K|Dq>_PAt z6CF%((jAHrLCDH4n=0`cEYS%uaf$aaxBnKSd_g2Mc`B*J+DwL)U_DfNDz0O9rWacW572u;5e3N(A!-mlol{FZ}KydSkBoMqEVbvohO^i@4s!M z1y6%10%7X?`dhxI|TvA6ieE>gxg5$uw{@Vmh~iO|N5NZw9Tv{OwhY z=C%b9*c5lRUISWh(InZLPd<(j1>Vb79Op^PDXx|Y>fwwNYBV^jxaAyIOBr=D|2W~- zA6J;BKMk8n4^C6k^zY8N823uHS6Fozg{i#ef8a70R8VsgxCDOSH;e;i8=G{y3VvD* zXi%)C%qjKYq|AB1tk7Ogt}{U(1gxKCsb$Z)`}VSKiE*l{>dI3Jr%IHG$GlL23AUZs zrpO$kqZJ0&;Xzn>eEUey+E(?0zTY6%awBPOTJ%z+>7)!s(;XsXvoCw}JzPFAT}c71 zyW;-wyNn z1Q_L9opPXk(Pae3?lsJ>i-!?z9Z>SCaTwliTwYdq{dO}9Js4%MVV7VRE58$+o3RAWwJ#}!H-!7WS(lpcWI11jR4aP%td%JfMEIK$(@JVv_VTnRF72_Fw|SbR^~SW~DSTG-sXy})O62fC8B~&6Yv_OXyIzlph{DHVru&dZ zn2CdpX6|qq%H2Fte^iIbM*Jo;+T_R3RIP<#be&QD>*Lt~_@7C|sZDW1FTcnx#R992 zw@Pt~wO~GOC6Siwp!Xt)@^T0{5>Itop;AGsb*nCz;Hry^oQyVg5D!95f#z$R-cG*I z3uIGk<$@?Ja=5&YMyq{Fa+~HpGR3m?AJ*b$2AGW1iCBdkFR%_&diBVuIevTUhy{h0@tR606fL2X2a)l zFnu%*m4T(euo(Stib%;;cm3{7FJ(4g(g}zDAEMt^=R6x9tR6V;@QvN>GlaOEoY<2z zRDYcL1+k*+wcX33@|nKH(Es|;?x3x!v&`%<J9b6>b1KRVh)kw^TcOch-5Rj*%Z&b*A~iuGA{5vf zEit|^Fq{gQmT47Scv)K(mc48;K!K*>2{Bucj9~vU^#Y9gop~|f`vzH37ees z;9&{_vn5dT5eCf8y^55V-**fs*bwoEtx|(=x#K2!9 z&3#Q_>K)jnep1mCNln)HM@_UgRDxC?3kUs?UeqV-rjTp9ga{uCPQkmy)a`BPnJ+M2 zi6Q7UUREpo^Ti{!n**WX;}F_ytf2O++ywc1j~slkFg_tS7$?tH%us8kvEftIgo#*_ z>CyBD{*=;LUc;1YY$_(HWl?3>R0^?nNB#8=b#cLebht6TjlNS~tCU0{%c%&>$a6N` 
ziyg3lby+&ii)88z5w4ofn;yIvJck+Sr#F#1#c_yl#>9Od%Sv=B)+lvIry#uDtEOyu z#j$QtH#Tgg0m~B+wVuipelX!$QGk6)Tt^x|2sxJBw$WkFO%1YwYd`4K*RZ|y+BOxE zO|m6#9GS&r+A(f3^&g+0prZB34Kb`TNOto%75&(Ug>~_$#~7>M-2)epG99DCrAYc2 zpg{%&Jurw_%3bNxj~;$$aLq`dN^3%W&}0o)O^hd9pBY(8w0j&jHiHQ$j#TY`w$ieW z&w|)KRLPEctVvp;u`51+wfoGY;R;L2jgk#XHC60Co-C}IE)3r%FTdMaH5p@S(k#~i znT&m^d+)V_{^uf}vdy9*hQTZkB4!lNBE01AyQj~qD6b)*3iR}t9No&7Hoq`e#}KbQ z!@Q%NXkub`8ll-efLG75?`@5PGH6ms8# zg?9T&xmuJ9)~AiT+GxfDwabl|^_>sP&4@JYR;GqXAV;0XjZAZJ*`=vnT!-IK9i80Z zE~~}M7USd6u!U;GfTflQ)>23D;*)PiKWcXx*l7nWq3Dy!HX}bxyN>4bQqx=d+bmaZ z8qZYx)P;`DJBlRvyo(Yric%z!*V&=Q zaD}E19g9%!6usgLw7aie#@@3@>?I2{$<3W|Q#8nbMT7`^NCJuON9OPdQ5#71FdWA# zxn;a3uouLY3WsHB2w#QA53ZbreY6wC3wdd)-?Xj9!>U~K9^B19;Qk`YUoic0K5j|z z1hWZL{+hGPSyqiE8)>xxBlaUhumP>o1QpMxG{01U9p-NcX14E!90i zHQ1K$pc{pHH0CwGpJWfD?Tsduxw#cCBNfs$E^T7N!Ay)$gHEeTZ{6Lb+4cvW=)ow! z=O~6Xd@3*Gm2Z`wo>I4LK9Min?}jvKw1UTGlk^tSys=9(riPJ?Ne)#W)V1#Wvf1tj zDG2NKw#0p^o%ZHb^1?N^#&a%4Lh0+9f?_X2K|fB8IiElY>#HPW#G@x%?3qt!gHISB ZgcBus$zz-|_rJe3D9ETvS4*0P{2$eNyT1Sc literal 0 HcmV?d00001 diff --git a/images/case_studies/golfnow_logo.png b/images/case_studies/golfnow_logo.png new file mode 100644 index 0000000000000000000000000000000000000000..dbeb127b02a270963c61bb29f21dd9f2d9b576c9 GIT binary patch literal 8858 zcmd6NWmFu&wlxF-1a}DT794_0fZ!Tjf($S?%-|4w2yVd%8iF&p2A2T_hoHfOYjD?( z+;{JN-;a0S@7JrVPgSk6_c?WHudcOvb%chRJQg|`IsyU$mZHLY&1XCN+yc;EJU`Dg zIswn-wVSNIn-<8*%@g1PM3AxsnFDDQ9Rb!rO(4M1+j#&ehJb*?Vymt1rmw0ZVgYjG z1pM{k^l}70qY)6qB)z}@3wxj&jXBWT)=8Z1sHKCB#@14tPVcQMw<=f$Xk)AJ$pxtO zNln}0lf8woC7q-MjhL6nGl3(}4M5}N=-}ik;w4V^4_=Yy{@-aXI+}mDxY>)-{Z~@@ zsv0yhAQvFbTTTHE3m!fp8bM)Bp0~n6y!`AmyxcqjT-?H3Jp3HoJR;n}BHTPQ|Gem) zqq$gGiDQ;0XRVt&{6N%k*3@E-wI>i-(i@ z?~?xIsH*zEyE;1l+uPMm6Zqfx{*T12+TLIwmnP5^73_&8$w>6E-XCneKXKR4BOEojSV<^mk1-L0VCvK*^S)Ju|_ms&oAlDxwntK+-k>7sv4b{Fv zAL>4UwpWKyNxGIge_P%oJDp&U<3Wt2JK+y5oP7xDFzLlf|2qYP|;j=H?j-IFs4OFk+qJ}wDl5~|0jU>&MTThR~l zZGIj*)o+xowhX%Y0>7S@;-yLs<9Lj}Ds84vj!IUq397xhyv z%wMjR{^cW)?rpw5S7fsCG#4?mcK-SFpeKQS@ehA{<&rM6ob1q5LMW_C*<`-@WQxAS__95LLIzrAFqUWsx!_sD zMZ;VVq~j#^d((2PpXfeaE80lmh^6!@#bW38d^o;zyA^=RxjPk3c3bJ^G&|HXdiTO% z>e1qt=#&5ND8-dq7GdrnzZxRVMDhH)Dfii!xdH-r_&ITMnW>l}BrmH4&b~NjicwtUfxCq? 
zHckSU%^w)*=7P2awC~zj{>dXz5tt`!Bi`GOG8K3Pe`t@Slg9iucTsr=+N0=`jv~Qh z!DsGvmlHztj(KHACibA{-XVJYOGgWh0s|LJWc&5J8x4Vpmkwv7zrKtiQ~HHaSv>HC zCI!>gnGPdls@EoT3neT`7{Yq`ClWofKNb|BRIWy&H0 zY?3<5nVUT}I90N;*~;Wl99pPL$Y6ZeK+X6-~ZyX9zXgh zb#=QbTa}GJltjGn8sS<4N4U(-%YoQ~@9lDYpu_yvG> z>8}MV_u4XTo$;9vHqL9PR8hY&pQXV$IzEoei~e7%gOY~1?KPE93fhD&>L#bd7jm|7`x4- z%2*ueR8&#EugiBob;p(X!pEm&t*70RhM}Uu%8bOb`=VsY$9?~zI7GWhEU;Z$x1IYO zW&4VQ*!AJ+*8cWJYrs*$q)axYypfsC%U47Q@7b;}?~rb#BZV}F)=vIsVTH(WuBZ!SL3rO;@kMANCRs8$=zsB1{NSSd zUH5!SBy16GShm2{6336($%{E6?PM_A?4vuH#O4d$FY&N>)odFi1_;46#!c3Ae#nfj z=c`qkh!!arpGFKjQVDb1FBuirF&-xi_QOrZQbXK#zu(ONG(vYpFss(K9z)-1`QBWt z558Ivvgx@)X=A85qJqY~xbvNTObdaXT?so11DAe*!SUUfUt7&2An5fU!M)9IxG}9J zON$RG^#WfdvTDDdxA~-YTmu~)nXgu1dMs$i-m0MH4-Z58RuacW#z(#%vdnl`rpGzq zgm#)G1vFt)OJ4n%Wl&$I2Uo!nA?=)=h8N3G!$~kxi z^oO4U`WpL5e0qN4lh)LH+`M z#8s8{Jv!eqcrVg+_t^CZ>(MICn{pUc3bxtz{mAt)pJVv|@#7ONuIUoAqSW;!Qo$Ed z@?;^)%`Tth(iQn~J-xUM&!Q5N*RMbASEJZFR&Qp6AjirJj4XjD_m5r;r=@xi;df1Z z+DJdmQ$->&h}iZCd)lvmXf)n}qe2rPq8qmc1E9rPx=@GuA7&$7+E#-AI-Ojnz)&0X4{Zf8ow6ZFS zfU8ZQFmcw0ynfXw#*F%xwsOUp416CkS$0*^>8G>!;kzt*?e=Fl+7~HB?0lp&9~5yv z3PTv?Lq~hh^MOB<+OrsFv9@cRfABJWo7}cbz)9-8mWI_m%V5l@5_`X^YFg>5NlUqd zTDC1+jSm8Tfyiz;w-BY8KY7W zL^n+@6)TB8#Gh#y?0!<1l#D0+RR@82)}Vbj$+ybuL>igafSVx#p}upGT^x^W(TQJU zRo!(N{7U*&?DVTQIh=RBMA8VL@l{6Ws>@d zJCxszsaDIJC8)6$sEZsDOf{3A;vISA$6NWGHoAjI4Ac$`{R9iKrdtQD%$`X4_lcT#J0lgd&@R?KEd~zBtRsU({FLe; zKw++t{_>DikE!LWnV*10$$@YF03TJ;92pDklh#FlqE@pu1J@1hJ9Po{@8`Mp=%@!3 zb0zkOhXmWv1SV$9lz85{lA0KOHBP+>2q4xB?RkOH(kq06a>XSqztbaU*y2sdB zHWD$wknjB~ANlNElz~^{))}ZhT}nZR)SRqSEa4G)H4Sw1qP;X%T>zb!rORW;3Bgr7 zV=w7G8r)iAkUoITrb+oidjE@6tC5{)6PTz8rK@7zcPf&=cf7VM7X|n$oR6iU#>Ma! z0%4#)HPv>XojPaXN2kFkLb)g>{k^SKgFhy4`Lvg1L9OT6;nPa}$SBH+vo7<*2euZ(mM!ZOkdJYALJJrZRLwvpShHB&Z6K}8&Pjv{WJD)g0JM(K;uNa| zh2F5!6jQUag*Rqj6tF4fF!c7xs$szW@$as^zV;}8@`h9>dc}MfiaMJbOjvl!L`3dU zO^XMWZ$Er(OIC>wG2^X+yme`?P@t3Qq<%i`NZ%@XGv@|*-PPa+8~(I&e@Bx2t%r2d z28y7@hA5dKQR(kzG(DRvjVlV*=sUq9myRUkrC1VnQKP~g;Z7>tJ7pw7e3~+kOAFQ2 z6NKGtejX7}$$3uyEWX8o$~%zkuvR%Tk?Vf2$XcYMxb_1DB{a~hy&8nPPLC)pUC^j; zc!2EQYN4-fa$1BW&*)5A<}=aE&%dBqwUv9Np5czZTlJo^sTVbD>}kWjwl-nP;unJ9 z%32-NC%rr-VxcYeU9l_slhm{;4!gJe@Jz0KLMhR~7Gi4L_H`8o+75zfHrDN7u2s1j zW2re$14-<9=oNLG)JTQ$)QjkSHw<*`-i=c2v=uJ0DX+NNSeTH_L_OB@&Fe;fmB{?9 zBIBi_y}C(bHyWBAT7;QxJE~Lt5wfAcmZ8;sf(3LGPabp@jA5+~S7H${{~Iz`p=;tf z3Cjwf(EV@8Sj2=i112VHI??c`p8V6qi$qcxtAzLsq#!VyA(fNqR1gr)6o5fdew3Ik zpR@Inh&sDiFWLzM8>^yiVf+4Zq9q$TzuDD&_XQxIw1hf;$v)7=ZlfgIIEs`vh^hLn zF=v`Znku93n~2fm!<2P$hlvqw$i5K@Mdd;9WHD>4$>!Cqb3!1c-fpt_X_&8W!3b;5 z{l^yr`1P!1;C6LnpW#nP6%y7y%@UOZ9~eC&bB}oW{G6j67Jn||^v(ZEc~t#^(u*XXSWySIplOvA&= zW&tWH**&CFgt3bi0*%1QqsOFN|!*@!LaH(^Cm(bH^>@J#1e}IS0yo=!a=SDe9tLPFh$0bN8kNGHI6WIt%RuH;@`E79Md~ z7QPr+6(L61`#Ii^SOMA8Wsq*oc^71<{1Rp6qQEr^T18x!1!bWpEbux}DEX(ecD55_ zAqV|HeNXK|Ap|KezeGj8yw@}8wLjV8W})Gce`2;B`}5|^2@HuY+JuEI;~PlDG(L(IFne>(1=$I$dd!6L_ry!=@%y z``+`a#vV)x>l-DxhZX6#1tvOdR7DaNrQqyy;TjkRy6aOutF>e44~8ty1}!;vy5Pxm z)4M_s3zufPuzt^vh76LTO`QU!Uq4-m!a88VC3fSV$qvuvPbyy@($?~-3UMZ8ssHlR zjbbvcWHP``Y{bag46RsgX0nIKIX{(_7*QFiet6&fp?=GA!Sk@hdbcxX-^%a$I{oG* zg*LTvaw|LD@o{qj{%yO}%HvWpuF=oAO3_05s~hCROt-M#8siYTzkBbIUTjpk+N^jk ziler7n@}e{`0(_F>b)JEzRMNhjWi#;_M!LTV|- z&!^2gGg)aoyV$i@5le5GmW$?h^r=g}`;j}!<1YiDb4G2sRj^9=wf4;^p@5rIlk1Fof%G zR&_dj?yi%hhw}EVluqt5zqLK~W##5I?Qt#;H&WMdkFg5{?Dk2&ExAHoTuo{6*IRf@ z;m}kAjN>8puGaoIt@GY?5G?>m>wWT>WC|N&iqW>vEOl&R({$z07ka~7?P8;^1PPPG z;nzoEm&dcPxs;!cZsF7AKbNTu-ez#etVy`YLK;P=xZChrqD<{&{v|2om94QyTFGZq z`=SYicbgoijvJ~|)VXUdI6SN{uBKLwn&?aK2ugc^N>X;b`7t3qrk^4@aqGOT5zJC4 
[binary PNG data omitted]
literal 0
HcmV?d00001

diff --git a/images/case_studies/peardeck_logo.png b/images/case_studies/peardeck_logo.png
new file mode 100644
index 0000000000000000000000000000000000000000..c1b9772ec45a09baec0c11ff9bdb61f91915d680
GIT binary patch
literal 9260
[binary PNG data omitted]

literal 0
HcmV?d00001

diff --git a/images/case_studies/wink.png b/images/case_studies/wink.png
new file mode 100644
index 0000000000000000000000000000000000000000..ef2ee30bf3a4662f2e673b2ef1d28c78fea3497e
GIT binary patch
literal 5623
[binary PNG data omitted]
literal 0
HcmV?d00001

From [commit hash and author line garbled in the source]
Date: Wed, 27 Sep 2017 02:09:28 +0800
Subject: [PATCH 80/87] relink the persistent volume of petset (#5582)

---
 docs/concepts/workloads/controllers/petset.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/concepts/workloads/controllers/petset.md b/docs/concepts/workloads/controllers/petset.md
index 48e6f6bd81656..ea4a16b653085 100644
--- a/docs/concepts/workloads/controllers/petset.md
+++ b/docs/concepts/workloads/controllers/petset.md
@@ -38,7 +38,7 @@ This doc assumes familiarity with the following Kubernetes concepts:
 * [Pods](/docs/user-guide/pods/single-container/)
 * [Cluster DNS](/docs/concepts/services-networking/dns-pod-service/)
 * [Headless Services](/docs/concepts/services-networking/service/#headless-services)
-* [Persistent Volumes](/docs/concepts/storage/volumes/)
+* [Persistent Volumes](/docs/concepts/storage/persistent-volumes/)
 * [Persistent Volume Provisioning](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/persistent-volume-provisioning/README.md)
 
 You need a working Kubernetes cluster at version >= 1.3, with a healthy DNS [cluster addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/README.md) at version >= 15. You cannot use PetSet on a hosted Kubernetes provider that has disabled `alpha` resources.

From 5b9c1d91ace15174eef6e736a9b4471b4c6c063e Mon Sep 17 00:00:00 2001
From: Nathan LeClaire
Date: Tue, 26 Sep 2017 11:11:15 -0700
Subject: [PATCH 81/87] Correct setup link (#5634)

The current link to https://kubernetes.io/docs/getting-started-guides/kubeadm/
presently ends up redirecting to https://kubernetes.io/docs/setup/. This
corrects the link to what seems to be the correct endpoint.

---
 docs/setup/independent/install-kubeadm.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/docs/setup/independent/install-kubeadm.md b/docs/setup/independent/install-kubeadm.md
index 4ff75f4e95e00..445c4a3108ea1 100644
--- a/docs/setup/independent/install-kubeadm.md
+++ b/docs/setup/independent/install-kubeadm.md
@@ -127,7 +127,8 @@
 example.
 You have to do this until SELinux support is improved in the kubelet.
 
 {% capture whatsnext %}
-* [Using kubeadm to Create a Cluster](/docs/getting-started-guides/kubeadm/)
+* [Using kubeadm to Create a
+  Cluster](/docs/setup/independent/create-cluster-kubeadm/)
 
 {% endcapture %}
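As an illustrative aside, not part of the patch series: the PetSet prerequisites relinked in patch 80 depend on headless Services and PersistentVolumes. A headless Service is an ordinary Service with `clusterIP: None`, so cluster DNS resolves directly to the member Pods rather than to a virtual IP. A minimal sketch, with the `nginx` names assumed purely for illustration:

```yaml
# Minimal headless Service sketch (names are illustrative, not from the patches).
# A PetSet's members get stable, per-Pod DNS entries through a Service like this.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  clusterIP: None   # headless: no virtual IP; DNS returns the backing Pods
  selector:
    app: nginx      # must match the labels on the member Pods
  ports:
  - name: web
    port: 80
```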
From ad4dc7c81da5aeea92a526c7ff250321c01198a6 Mon Sep 17 00:00:00 2001
From: jianglingxia
Date: Wed, 27 Sep 2017 02:40:39 +0800
Subject: [PATCH 82/87] fix the typo of serviceaccount (#5533)

* fix the typo of serviceaccount

* update it

---
 .../configure-service-account.md | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/docs/tasks/configure-pod-container/configure-service-account.md b/docs/tasks/configure-pod-container/configure-service-account.md
index 3a3d4c3e7c34d..65a69ae8b4361 100644
--- a/docs/tasks/configure-pod-container/configure-service-account.md
+++ b/docs/tasks/configure-pod-container/configure-service-account.md
@@ -146,18 +146,19 @@ Any tokens for non-existent service accounts will be cleaned up by the token con
 
 ```shell
 $ kubectl describe secrets/build-robot-secret
-Name:   build-robot-secret
-Namespace:   default
-Labels:   <none>
-Annotations:   kubernetes.io/service-account.name=build-robot,kubernetes.io/service-account.uid=870ef2a5-35cf-11e5-8d06-005056b45392
+Name:           build-robot-secret
+Namespace:      default
+Labels:         <none>
+Annotations:    kubernetes.io/service-account.name=build-robot
+                kubernetes.io/service-account.uid=da68f9c6-9d26-11e7-b84e-002dc52800da
 
-Type:   kubernetes.io/service-account-token
+Type:           kubernetes.io/service-account-token
 
 Data
 ====
-ca.crt: 1220 bytes
-token: ...
-namespace: 7 bytes
+ca.crt:         1338 bytes
+namespace:      7 bytes
+token:          ...
 ```
 
 **Note:** The content of `token` is elided here.
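As an illustrative aside, not part of the patch series: the `kubectl describe` output reformatted in patch 82 comes from a manually created service-account token Secret. A minimal sketch of the manifest that produces it, assuming a `build-robot` ServiceAccount already exists in the `default` namespace:

```yaml
# Token Secret bound to an existing ServiceAccount via the
# kubernetes.io/service-account.name annotation. The token controller
# later fills in ca.crt, namespace, and token, which is exactly what
# `kubectl describe secrets/build-robot-secret` then displays.
apiVersion: v1
kind: Secret
metadata:
  name: build-robot-secret
  annotations:
    kubernetes.io/service-account.name: build-robot
type: kubernetes.io/service-account-token
```

Create it with `kubectl create -f` as usual; if the named ServiceAccount does not exist, the token controller cleans the Secret up, as the hunk context above notes.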
From c34b2a6c399ffd11b6c7e8d5fcd13405d3174849 Mon Sep 17 00:00:00 2001
From: Qiming Teng
Date: Tue, 26 Sep 2017 09:47:09 +0800
Subject: [PATCH 83/87] Remove dangling files related to apparmor

---
 docs/admin/apparmor/deny-write.profile      | 10 ----------
 docs/admin/apparmor/hello-apparmor-pod.yaml | 13 -------------
 2 files changed, 23 deletions(-)
 delete mode 100644 docs/admin/apparmor/deny-write.profile
 delete mode 100644 docs/admin/apparmor/hello-apparmor-pod.yaml

diff --git a/docs/admin/apparmor/deny-write.profile b/docs/admin/apparmor/deny-write.profile
deleted file mode 100644
index c2653c7112865..0000000000000
--- a/docs/admin/apparmor/deny-write.profile
+++ /dev/null
@@ -1,10 +0,0 @@
-#include <tunables/global>
-
-profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
-  #include <abstractions/base>
-
-  file,
-
-  # Deny all file writes.
-  deny /** w,
-}
diff --git a/docs/admin/apparmor/hello-apparmor-pod.yaml b/docs/admin/apparmor/hello-apparmor-pod.yaml
deleted file mode 100644
index 3e9b3b2a9c6be..0000000000000
--- a/docs/admin/apparmor/hello-apparmor-pod.yaml
+++ /dev/null
@@ -1,13 +0,0 @@
-apiVersion: v1
-kind: Pod
-metadata:
-  name: hello-apparmor
-  annotations:
-    # Tell Kubernetes to apply the AppArmor profile "k8s-apparmor-example-deny-write".
-    # Note that this is ignored if the Kubernetes node is not running version 1.4 or greater.
-    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
-spec:
-  containers:
-  - name: hello
-    image: busybox
-    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]

From b176f8e3fb8d73f815c18ee77a794524faa0c519 Mon Sep 17 00:00:00 2001
From: Qiming Teng
Date: Tue, 26 Sep 2017 10:00:59 +0800
Subject: [PATCH 84/87] Polish AppArmor tutorial

---
 docs/tutorials/clusters/apparmor.md | 17 ++---------------
 1 file changed, 2 insertions(+), 15 deletions(-)

diff --git a/docs/tutorials/clusters/apparmor.md b/docs/tutorials/clusters/apparmor.md
index b1c60fc596ae8..81301a9b8a8bb 100644
--- a/docs/tutorials/clusters/apparmor.md
+++ b/docs/tutorials/clusters/apparmor.md
@@ -192,20 +192,7 @@ Next, we'll run a simple "Hello AppArmor" pod with the deny-write profile:
 
 {% include code.html language="yaml" file="hello-apparmor-pod.yaml" ghlink="/docs/tutorials/clusters/hello-apparmor-pod.yaml" %}
 
 ```shell
-$ kubectl create -f /dev/stdin <
[several lines of this patch are garbled in the source]
 Annotations:  container.apparmor.security.beta.kubernetes.io/hello=localhost/k8s-apparmor-example-allow-write
-Status:       Failed
+Status:       Pending
 Reason:       AppArmor
 Message:      Pod Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded
 IP:
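As an illustrative aside, not part of the patch series: the Pod manifest the polished tutorial runs is the same one deleted from `docs/admin/apparmor/` in patch 83, reproduced here for reference. The AppArmor profile named in the annotation must already be loaded on the node, otherwise the Pod does not run (compare the `Status: Pending` / "profile ... is not loaded" hunk in patch 84 above):

```yaml
# hello-apparmor Pod, reproduced from the file deleted in patch 83.
# The beta annotation is keyed by container name ("hello") and points at a
# profile already loaded on the node (the localhost/<profile-name> reference).
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor
  annotations:
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
spec:
  containers:
  - name: hello
    image: busybox
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
```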
From dae523734af58ba5c6403d1f7805bb3d8f9a6f39 Mon Sep 17 00:00:00 2001
From: Ian Chakeres
Date: Sun, 17 Sep 2017 15:46:05 -0700
Subject: [PATCH 85/87] Fixed links to architecture.md and principles.md

---
 docs/concepts/overview/what-is-kubernetes.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/concepts/overview/what-is-kubernetes.md b/docs/concepts/overview/what-is-kubernetes.md
index 0596b10b266a4..93cff0b445464 100644
--- a/docs/concepts/overview/what-is-kubernetes.md
+++ b/docs/concepts/overview/what-is-kubernetes.md
@@ -93,7 +93,7 @@ Even though Kubernetes provides a lot of functionality, there are always new sce
 
 Additionally, the [Kubernetes control plane](/docs/concepts/overview/components/) is built upon the same [APIs](/docs/reference/api-overview/) that are available to developers and users. Users can write their own controllers, such as [schedulers](https://github.com/kubernetes/community/blob/{{page.githubbranch}}/contributors/devel/scheduler.md), with [their own APIs](https://github.com/kubernetes/community/blob/{{page.githubbranch}}/contributors/design-proposals/api-machinery/extending-api.md) that can be targeted by a general-purpose [command-line tool](/docs/user-guide/kubectl-overview/).
 
-This [design](https://github.com/kubernetes/community/blob/{{page.githubbranch}}/contributors/design-proposals/architecture/principles.md) has enabled a number of other systems to build atop Kubernetes.
+This [design](https://git.k8s.io/community/contributors/design-proposals/architecture/principles.md) has enabled a number of other systems to build atop Kubernetes.
 
 #### What Kubernetes is not

From 2d3e488818fe76c219020b270731d6f5f10e1278 Mon Sep 17 00:00:00 2001
From: Andrew Chen
Date: Tue, 26 Sep 2017 11:52:28 -0700
Subject: [PATCH 86/87] Add link to example for CRDs (#5641)

* Add link to example for CRDs

In the CustomResourceDefinitions section, add a link to the [Custom
Resource Example](https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/apiextensions-apiserver/examples/client-go).

* add note

---
 docs/concepts/api-extension/custom-resources.md | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/docs/concepts/api-extension/custom-resources.md b/docs/concepts/api-extension/custom-resources.md
index dce7da2b157ff..aded0427a533d 100644
--- a/docs/concepts/api-extension/custom-resources.md
+++ b/docs/concepts/api-extension/custom-resources.md
@@ -53,8 +53,12 @@
 This frees you from writing your own API server to handle the custom resource,
 but the generic nature of the implementation means you have less flexibility than
 with [API server aggregation](#api-server-aggregation).
 
-CRD is the successor to the deprecated *ThirdPartyResource* (TPR) API, and is available as of
-Kubernetes 1.7.
+Refer to the [Custom Resource Example](https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/apiextensions-apiserver/examples/client-go)
+for a demonstration of how to register a new custom resource, work with instances of your new resource type,
+and setup a controller to handle events.
+
+**Note:** CRD is the successor to the deprecated *ThirdPartyResource* (TPR) API, and is available as of Kubernetes 1.7.
+{: .note}
 
 ## API server aggregation

From 2624ce66ea3e91b37c012cdc201265046cf441be Mon Sep 17 00:00:00 2001
From: jianglingxia
Date: Tue, 26 Sep 2017 18:53:08 +0800
Subject: [PATCH 87/87] fix envFrom in configmap

---
 docs/tasks/configure-pod-container/configure-pod-configmap.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/tasks/configure-pod-container/configure-pod-configmap.md b/docs/tasks/configure-pod-container/configure-pod-configmap.md
index d83790d9435d7..7286accd9ad38 100644
--- a/docs/tasks/configure-pod-container/configure-pod-configmap.md
+++ b/docs/tasks/configure-pod-container/configure-pod-configmap.md
@@ -124,7 +124,7 @@ This page provides a series of usage examples demonstrating how to configure Pod
    SPECIAL_TYPE: charm
    ```
 
-1. Use `env-from` to define all of the ConfigMap's data as Pod environment variables. The key from the ConfigMap becomes the environment variable name in the Pod.
+1. Use `envFrom` to define all of the ConfigMap's data as Pod environment variables. The key from the ConfigMap becomes the environment variable name in the Pod.
 
    ```yaml
   apiVersion: v1