diff --git a/config.toml b/config.toml index 9a4e71fbeff86..db1c633a1d50c 100644 --- a/config.toml +++ b/config.toml @@ -145,4 +145,9 @@ weight = 3 contentDir = "content/no" # A list of language codes to look for untranslated content, ordered from left to right. language_alternatives = ["en"] - +[languages.ko] +title = "Kubernetes" +description = "Production-Grade Container Orchestration" +languageName = "Korean" +weight = 4 +contentDir = "content/ko" diff --git a/content/ko/_common-resources/index.md b/content/ko/_common-resources/index.md new file mode 100644 index 0000000000000..ca03031f1ee91 --- /dev/null +++ b/content/ko/_common-resources/index.md @@ -0,0 +1,3 @@ +--- +headless: true +--- diff --git a/content/ko/_index.html b/content/ko/_index.html new file mode 100644 index 0000000000000..49d18a9b4447b --- /dev/null +++ b/content/ko/_index.html @@ -0,0 +1,62 @@ +--- +title: "운영 수준의 컨테이너 오케스트레이션" +abstract: "자동화된 컨테이너 배포, 스케일링과 관리" +cid: home +--- + +{{< deprecationwarning >}} + +{{< blocks/section id="oceanNodes" >}} +{{% blocks/feature image="flower" %}} +### [쿠버네티스]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}})는 컨테이너화된 애플리케이션을 자동으로 배포, 스케일링 및 관리해주는 오픈소스 시스템입니다. + +It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon [15 years of experience of running production workloads at Google](http://queue.acm.org/detail.cfm?id=2898444), combined with best-of-breed ideas and practices from the community. +{{% /blocks/feature %}} + +{{% blocks/feature image="scalable" %}} +#### 행성 규모 확장성 + +Designed on the same principles that allows Google to run billions of containers a week, Kubernetes can scale without increasing your ops team. + +{{% /blocks/feature %}} + +{{% blocks/feature image="blocks" %}} +#### 무한한 유연성 + +Whether testing locally or running a global enterprise, Kubernetes flexibility grows with you to deliver your applications consistently and easily no matter how complex your need is. + +{{% /blocks/feature %}} + +{{% blocks/feature image="suitcase" %}} +#### 어디서나 동작 + +Kubernetes is open source giving you the freedom to take advantage of on-premises, hybrid, or public cloud infrastructure, letting you effortlessly move workloads to where it matters to you. + +{{% /blocks/feature %}} + +{{< /blocks/section >}} + +{{< blocks/section id="video" background-image="kub_video_banner_homepage" >}} +
+150+ 마이크로서비스를 쿠버네티스로 마이그레이션하는 도전
+By Sarah Wells, Technical Director for Operations and Reliability, Financial Times
+Attend KubeCon in Shanghai on Nov. 13-15, 2018
+Attend KubeCon in Seattle on Dec. 11-13, 2018
+{{< /blocks/section >}} + +{{< blocks/kubernetes-features >}} + +{{< blocks/case-studies >}} diff --git a/content/ko/case-studies/_index.html b/content/ko/case-studies/_index.html new file mode 100644 index 0000000000000..28bdab44afc5f --- /dev/null +++ b/content/ko/case-studies/_index.html @@ -0,0 +1,10 @@ +--- +title: Case Studies +linkTitle: Case Studies +bigheader: Kubernetes User Case Studies +abstract: A collection of users running Kubernetes in production. +layout: basic +class: gridPage +cid: caseStudies +--- + diff --git a/content/ko/docs/concepts/_index.md b/content/ko/docs/concepts/_index.md new file mode 100644 index 0000000000000..00fc1b62c3a51 --- /dev/null +++ b/content/ko/docs/concepts/_index.md @@ -0,0 +1,74 @@ +--- +title: 개념 +main_menu: true +content_template: templates/concept +weight: 40 +--- + +{{% capture overview %}} + +개념 섹션을 통해 쿠버네티스 시스템을 구성하는 요소와 클러스터를 표현하는데 사용되는 추상 개념에 대해 배우고 쿠버네티스가 작동하는 방식에 대해 보다 깊이 이해할 수 있다. + +{{% /capture %}} + +{{% capture body %}} + +## 개요 + +쿠버네티스를 사용하려면, *쿠버네티스 API 오브젝트로* 클러스터에 대해 사용자가 *바라는 상태를* 기술해야 한다. 어떤 애플리케이션이나 워크로드를 구동시키려고 하는지, 어떤 컨테이너 이미지를 쓰는지, 복제의 수는 몇 개인지, 어떤 네트워크와 디스크 자원을 쓸 수 있도록 할 것인지 등을 의미한다. 바라는 상태를 설정하는 방법은 쿠버네티스 API를 사용해서 오브젝트를 만드는 것인데, 대개 `kubectl`이라는 명령줄 인터페이스를 사용한다. 클러스터와 상호 작용하고 바라는 상태를 설정하거나 수정하기 위해서 쿠버네티스 API를 직접 사용할 수도 있다. + +일단 바라는 상태를 설정하고 나면, *쿠버네티스 컨트롤 플레인이* 클러스터의 현재 상태를 바라는 상태와 일치시키기 위한 일을 하게 된다. 그렇게 함으로써, 쿠버네티스가 컨테이너를 시작 또는 재시작 시키거나, 주어진 애플리케이션의 복제 수를 스케일링하는 등의 다양한 작업을 자동으로 수행할 수 있게 된다. 쿠버네티스 컨트롤 플레인은 클러스터에서 돌아가는 프로세스의 집합으로 구성된다. + +* **쿠버네티스 마스터**는 클러스터 내 마스터 노드로 지정된 노드 내에서 구동되는 세 개의 프로세스 집합이다. 해당 프로세스는 [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) 및 [kube-scheduler](/docs/admin/kube-scheduler/)이다. +* 클러스터 내 마스터 노드가 아닌 각각의 노드는 다음 두 개의 프로세스를 구동시킨다. + * 쿠버네티스 마스터와 통신하는 **[kubelet](/docs/admin/kubelet/)**. + * 각 노드의 쿠버네티스 네트워킹 서비스를 반영하는 네트워크 프록시인 **[kube-proxy](/docs/admin/kube-proxy/)**. + +## 쿠버네티스 오브젝트 + +쿠버네티스는 시스템의 상태를 나타내는 추상 개념을 다수 포함하고 있다. 컨테이너화되어 배포된 애플리케이션과 워크로드, 이에 연관된 네트워크와 디스크 자원, 그 밖에 클러스터가 무엇을 하고 있는지에 대한 정보가 이에 해당한다. 이런 추상 개념은 쿠버네티스 API 내 오브젝트로 표현된다. 보다 자세한 내용은 [쿠버네티스 오브젝트 개요](/docs/concepts/abstractions/overview/) 문서를 참조한다. + +기초적인 쿠버네티스 오브젝트에는 다음과 같은 것들이 있다. + +* [파드](/docs/concepts/workloads/pods/pod-overview/) +* [서비스](/docs/concepts/services-networking/service/) +* [볼륨](/docs/concepts/storage/volumes/) +* [네임스페이스](/docs/concepts/overview/working-with-objects/namespaces/) + +추가로, 쿠버네티스에는 컨트롤러라는 보다 높은 수준의 추상 개념도 다수 있다. 컨트롤러는 기초 오브젝트를 기반으로, 부가 기능 및 편의 기능을 제공해준다. 다음이 포함된다. + +* [레플리카 셋](/docs/concepts/workloads/controllers/replicaset/) +* [디플로이먼트](/docs/concepts/workloads/controllers/deployment/) +* [스테이트풀 셋](/docs/concepts/workloads/controllers/statefulset/) +* [데몬 셋](/docs/concepts/workloads/controllers/daemonset/) +* [잡](/docs/concepts/workloads/controllers/jobs-run-to-completion/) + +## 쿠버네티스 컨트롤 플레인 + +쿠버네티스 마스터와 kubelet 프로세스와 같은 쿠버네티스 컨트롤 플레인의 다양한 구성 요소는 쿠버네티스가 클러스터와 통신하는 방식을 관장한다. 컨트롤 플레인은 시스템 내 모든 쿠버네티스 오브젝트의 레코드를 유지하면서, 오브젝트의 상태를 관리하는 제어 루프를 지속적으로 구동시킨다. 컨트롤 플레인의 제어 루프는 클러스터 내 변경이 발생하면 언제라도 응답하고 시스템 내 모든 오브젝트의 실제 상태가 사용자가 바라는 상태와 일치시키기 위한 일을 한다. + +예를 들어, 쿠버네티스 API를 사용해서 디플로이먼트 오브젝트를 만들 때에는, 바라는 상태를 시스템에 신규로 입력해야한다. 쿠버네티스 컨트롤 플레인이 오브젝트 생성을 기록하고, 사용자 지시대로 필요한 애플리케이션을 시작시키고 클러스터 노드에 스케줄링한다. 그래서 결국 클러스터의 실제 상태가 바라는 상태와 일치하게 된다. + +### 쿠버네티스 마스터 + +클러스터에 대해 바라는 상태를 유지할 책임은 쿠버네티스 마스터에 있다. `kubectl` 명령줄 인터페이스와 같은 것을 사용해서 쿠버네티스로 상호 작용할 때에는 쿠버네티스 마스터와 통신하고 있는 셈이다. + +> "마스터"는 클러스터 상태를 관리하는 프로세스의 집합이다. 
주로 이 프로세스는 클러스터 내 단일 노드에서 구동되며, 이 노드가 바로 마스터이다. 마스터는 가용성과 중복을 위해 복제될 수도 있다. + +### 쿠버네티스 노드 + +클러스터 내 노드는 애플리케이션과 클라우드 워크플로우를 구동시키는 머신(VM, 물리 서버 등)이다. 쿠버네티스 마스터는 각 노드를 관리한다. 직접 노드와 직접 상호 작용할 일은 거의 없을 것이다. + +#### 오브젝트 메타데이터 + + +* [어노테이션](/docs/concepts/overview/working-with-objects/annotations/) + +{{% /capture %}} + +{{% capture whatsnext %}} + +개념 페이지를 작성하기를 원하면, 개념 페이지 유형과 개념 템플릿에 대한 정보가 있는 +[페이지 템플릿 사용하기](/docs/home/contribute/page-templates/)를 참조한다. + +{{% /capture %}} diff --git a/content/ko/docs/concepts/overview/what-is-kubernetes.md b/content/ko/docs/concepts/overview/what-is-kubernetes.md new file mode 100644 index 0000000000000..f086ab21f51c9 --- /dev/null +++ b/content/ko/docs/concepts/overview/what-is-kubernetes.md @@ -0,0 +1,207 @@ +--- +reviewers: +- bgrant0607 +- mikedanese +title: What is Kubernetes? +content_template: templates/concept +weight: 10 +--- + +{{% capture overview %}} +This page is an overview of Kubernetes. +{{% /capture %}} + +{{% capture body %}} +Kubernetes is a portable, extensible open-source platform for managing +containerized workloads and services, that facilitates both +declarative configuration and automation. It has a large, rapidly +growing ecosystem. Kubernetes services, support, and tools are widely available. + +Google open-sourced the Kubernetes project in 2014. Kubernetes builds upon +a [decade and a half of experience that Google has with running +production workloads at +scale](https://research.google.com/pubs/pub43438.html), combined with +best-of-breed ideas and practices from the community. + +## Why do I need Kubernetes and what can it do? + +Kubernetes has a number of features. It can be thought of as: + +- a container platform +- a microservices platform +- a portable cloud platform +and a lot more. + +Kubernetes provides a **container-centric** management environment. It +orchestrates computing, networking, and storage infrastructure on +behalf of user workloads. This provides much of the simplicity of +Platform as a Service (PaaS) with the flexibility of Infrastructure as +a Service (IaaS), and enables portability across infrastructure +providers. + +## How is Kubernetes a platform? + +Even though Kubernetes provides a lot of functionality, there are +always new scenarios that would benefit from new +features. Application-specific workflows can be streamlined to +accelerate developer velocity. Ad hoc orchestration that is acceptable +initially often requires robust automation at scale. This is why +Kubernetes was also designed to serve as a platform for building an +ecosystem of components and tools to make it easier to deploy, scale, +and manage applications. + +[Labels](/docs/concepts/overview/working-with-objects/labels/) empower +users to organize their resources however they +please. [Annotations](/docs/concepts/overview/working-with-objects/annotations/) +enable users to decorate resources with custom information to +facilitate their workflows and provide an easy way for management +tools to checkpoint state. + +Additionally, the [Kubernetes control +plane](/docs/concepts/overview/components/) is built upon the same +[APIs](/docs/reference/using-api/api-overview/) that are available to developers +and users. 
Users can write their own controllers, such as +[schedulers](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/devel/scheduler.md), +with [their own +APIs](/docs/concepts/api-extension/custom-resources/) +that can be targeted by a general-purpose [command-line +tool](/docs/user-guide/kubectl-overview/). + +This +[design](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) +has enabled a number of other systems to build atop Kubernetes. + +## What Kubernetes is not + +Kubernetes is not a traditional, all-inclusive PaaS (Platform as a +Service) system. Since Kubernetes operates at the container level +rather than at the hardware level, it provides some generally +applicable features common to PaaS offerings, such as deployment, +scaling, load balancing, logging, and monitoring. However, Kubernetes +is not monolithic, and these default solutions are optional and +pluggable. Kubernetes provides the building blocks for building developer +platforms, but preserves user choice and flexibility where it is +important. + +Kubernetes: + +* Does not limit the types of applications supported. Kubernetes aims + to support an extremely diverse variety of workloads, including + stateless, stateful, and data-processing workloads. If an + application can run in a container, it should run great on + Kubernetes. +* Does not deploy source code and does not build your + application. Continuous Integration, Delivery, and Deployment + (CI/CD) workflows are determined by organization cultures and preferences + as well as technical requirements. +* Does not provide application-level services, such as middleware + (e.g., message buses), data-processing frameworks (for example, + Spark), databases (e.g., mysql), caches, nor cluster storage systems (e.g., + Ceph) as built-in services. Such components can run on Kubernetes, and/or + can be accessed by applications running on Kubernetes through portable + mechanisms, such as the Open Service Broker. +* Does not dictate logging, monitoring, or alerting solutions. It provides + some integrations as proof of concept, and mechanisms to collect and + export metrics. +* Does not provide nor mandate a configuration language/system (e.g., + [jsonnet](https://github.com/google/jsonnet)). It provides a declarative + API that may be targeted by arbitrary forms of declarative specifications. +* Does not provide nor adopt any comprehensive machine configuration, + maintenance, management, or self-healing systems. + +Additionally, Kubernetes is not a mere *orchestration system*. In +fact, it eliminates the need for orchestration. The technical +definition of *orchestration* is execution of a defined workflow: +first do A, then B, then C. In contrast, Kubernetes is comprised of a +set of independent, composable control processes that continuously +drive the current state towards the provided desired state. It +shouldn't matter how you get from A to C. Centralized control is also +not required. This results in a system that is easier to use and more +powerful, robust, resilient, and extensible. + +## Why containers? + +Looking for reasons why you should be using containers? + +![Why Containers?](/images/docs/why_containers.svg) + +The *Old Way* to deploy applications was to install the applications +on a host using the operating-system package manager. This had the +disadvantage of entangling the applications' executables, +configuration, libraries, and lifecycles with each other and with the +host OS. 
One could build immutable virtual-machine images in order to +achieve predictable rollouts and rollbacks, but VMs are heavyweight +and non-portable. + +The *New Way* is to deploy containers based on operating-system-level +virtualization rather than hardware virtualization. These containers +are isolated from each other and from the host: they have their own +filesystems, they can't see each others' processes, and their +computational resource usage can be bounded. They are easier to build +than VMs, and because they are decoupled from the underlying +infrastructure and from the host filesystem, they are portable across +clouds and OS distributions. + +Because containers are small and fast, one application can be packed +in each container image. This one-to-one application-to-image +relationship unlocks the full benefits of containers. With containers, +immutable container images can be created at build/release time rather +than deployment time, since each application doesn't need to be +composed with the rest of the application stack, nor married to the +production infrastructure environment. Generating container images at +build/release time enables a consistent environment to be carried from +development into production. Similarly, containers are vastly more +transparent than VMs, which facilitates monitoring and +management. This is especially true when the containers' process +lifecycles are managed by the infrastructure rather than hidden by a +process supervisor inside the container. Finally, with a single +application per container, managing the containers becomes tantamount +to managing deployment of the application. + +Summary of container benefits: + +* **Agile application creation and deployment**: + Increased ease and efficiency of container image creation compared to VM image use. +* **Continuous development, integration, and deployment**: + Provides for reliable and frequent container image build and + deployment with quick and easy rollbacks (due to image + immutability). +* **Dev and Ops separation of concerns**: + Create application container images at build/release time rather + than deployment time, thereby decoupling applications from + infrastructure. +* **Observability** + Not only surfaces OS-level information and metrics, but also application + health and other signals. +* **Environmental consistency across development, testing, and production**: + Runs the same on a laptop as it does in the cloud. +* **Cloud and OS distribution portability**: + Runs on Ubuntu, RHEL, CoreOS, on-prem, Google Kubernetes Engine, and anywhere else. +* **Application-centric management**: + Raises the level of abstraction from running an OS on virtual + hardware to running an application on an OS using logical resources. +* **Loosely coupled, distributed, elastic, liberated [micro-services](https://martinfowler.com/articles/microservices.html)**: + Applications are broken into smaller, independent pieces and can + be deployed and managed dynamically -- not a fat monolithic stack + running on one big single-purpose machine. +* **Resource isolation**: + Predictable application performance. +* **Resource utilization**: + High efficiency and density. + +## What does Kubernetes mean? K8s? + +The name **Kubernetes** originates from Greek, meaning *helmsman* or +*pilot*, and is the root of *governor* and +[cybernetic](http://www.etymonline.com/index.php?term=cybernetics). *K8s* +is an abbreviation derived by replacing the 8 letters "ubernete" with +"8". 
+ +{{% /capture %}} + +{{% capture whatsnext %}} +* Ready to [Get Started](/docs/setup/)? +* For more details, see the [Kubernetes Documentation](/docs/home/). +{{% /capture %}} + + diff --git a/content/ko/docs/home/_index.md b/content/ko/docs/home/_index.md new file mode 100644 index 0000000000000..da2e4ffd38f61 --- /dev/null +++ b/content/ko/docs/home/_index.md @@ -0,0 +1,18 @@ +--- +title: 쿠버네티스 문서 +layout: docsportal_home +noedit: true +cid: userJourneys +css: /css/style_user_journeys.css +js: /js/user-journeys/home.js, https://use.fontawesome.com/4bcc658a89.js +display_browse_numbers: true +linkTitle: "문서" +main_menu: true +weight: 10 +menu: + main: + title: "문서" + weight: 20 + post: > +

Learn how to use Kubernetes with walkthroughs, samples, and reference documentation. You can even help contribute to the docs!
+--- diff --git a/content/ko/docs/reference/glossary/index.md b/content/ko/docs/reference/glossary/index.md new file mode 100755 index 0000000000000..d462206f54d12 --- /dev/null +++ b/content/ko/docs/reference/glossary/index.md @@ -0,0 +1,8 @@ +--- +title: 표준 용어집 +layout: glossary +noedit: true +default_active_tag: fundamental +weight: 5 +--- + diff --git a/content/ko/docs/setup/_index.md b/content/ko/docs/setup/_index.md new file mode 100644 index 0000000000000..ab50a95e6ea47 --- /dev/null +++ b/content/ko/docs/setup/_index.md @@ -0,0 +1,91 @@ +--- +no_issue: true +title: 설치 +main_menu: true +weight: 30 +content_template: templates/concept +--- + +{{% capture overview %}} + +Use this page to find the type of solution that best fits your needs. + +Deciding where to run Kubernetes depends on what resources you have available +and how much flexibility you need. You can run Kubernetes almost anywhere, +from your laptop to VMs on a cloud provider to a rack of bare metal servers. +You can also set up a fully-managed cluster by running a single command or craft +your own customized cluster on your bare metal servers. + +{{% /capture %}} + +{{% capture body %}} + +## Local-machine Solutions + +A local-machine solution is an easy way to get started with Kubernetes. You +can create and test Kubernetes clusters without worrying about consuming cloud +resources and quotas. + +You should pick a local solution if you want to: + +* Try or start learning about Kubernetes +* Develop and test clusters locally + +Pick a [local-machine solution](/docs/setup/pick-right-solution/#local-machine-solutions). + +## Hosted Solutions + +Hosted solutions are a convenient way to create and maintain Kubernetes clusters. They +manage and operate your clusters so you don’t have to. + +You should pick a hosted solution if you: + +* Want a fully-managed solution +* Want to focus on developing your apps or services +* Don’t have dedicated site reliability engineering (SRE) team but want high availability +* Don't have resources to host and monitor your clusters + +Pick a [hosted solution](/docs/setup/pick-right-solution/#hosted-solutions). + +## Turnkey – Cloud Solutions + + +These solutions allow you to create Kubernetes clusters with only a few commands and +are actively developed and have active community support. They can also be hosted on +a range of Cloud IaaS providers, but they offer more freedom and flexibility in +exchange for effort. + +You should pick a turnkey cloud solution if you: + +* Want more control over your clusters than the hosted solutions allow +* Want to take on more operations ownership + +Pick a [turnkey cloud solution](/docs/setup/pick-right-solution/#turnkey-cloud-solutions) + +## Turnkey – On-Premises Solutions + +These solutions allow you to create Kubernetes clusters on your internal, secure, +cloud network with only a few commands. + +You should pick a on-prem turnkey cloud solution if you: + +* Want to deploy clusters on your private cloud network +* Have a dedicated SRE team +* Have the the resources to host and monitor your clusters + +Pick an [on-prem turnkey cloud solution](/docs/setup/pick-right-solution/#on-premises-turnkey-cloud-solutions). + +## Custom Solutions + +Custom solutions give you the most freedom over your clusters but require the +most expertise. These solutions range from bare-metal to cloud providers on +different operating systems. + +Pick a [custom solution](/docs/setup/pick-right-solution/#custom-solutions). 
+ +{{% /capture %}} + +{{% capture whatsnext %}} +Go to [Picking the Right Solution](/docs/setup/pick-right-solution/) for a complete +list of solutions. +{{% /capture %}} diff --git a/content/ko/docs/setup/building-from-source.md b/content/ko/docs/setup/building-from-source.md new file mode 100644 index 0000000000000..12290041c4cae --- /dev/null +++ b/content/ko/docs/setup/building-from-source.md @@ -0,0 +1,21 @@ +--- +title: 소스로부터 빌드 +--- + +You can either build a release from source or download a pre-built release. If you do not plan on developing Kubernetes itself, we suggest using a pre-built version of the current release, which can be found in the [Release Notes](/docs/setup/release/notes/). + +The Kubernetes source code can be downloaded from the [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repo. + +## 소스로부터 빌드 + +If you are simply building a release from source there is no need to set up a full golang environment as all building happens in a Docker container. + +Building a release is simple. + +```shell +git clone https://github.com/kubernetes/kubernetes.git +cd kubernetes +make release +``` + +For more details on the release process see the kubernetes/kubernetes [`build`](http://releases.k8s.io/{{< param "githubbranch" >}}/build/) directory. diff --git a/content/ko/docs/setup/cluster-large.md b/content/ko/docs/setup/cluster-large.md new file mode 100644 index 0000000000000..731313dbf4a59 --- /dev/null +++ b/content/ko/docs/setup/cluster-large.md @@ -0,0 +1,128 @@ +--- +title: 대형 클러스터 구축 +weight: 80 +--- + +## 지원 + +At {{< param "version" >}}, Kubernetes supports clusters with up to 5000 nodes. More specifically, we support configurations that meet *all* of the following criteria: + +* No more than 5000 nodes +* No more than 150000 total pods +* No more than 300000 total containers +* No more than 100 pods per node + +
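If you want a quick sense of how close an existing cluster already is to these limits, a few standard `kubectl` queries are enough. The commands below are only a minimal sketch and are not part of the setup tooling this page describes; the column holding the node name can shift between `kubectl` versions.

```shell
# Rough snapshot of current cluster scale, for comparison against the limits above.
kubectl get nodes --no-headers | wc -l                   # total nodes
kubectl get pods --all-namespaces --no-headers | wc -l   # total pods

# Approximate pods per node. The node name is usually the 8th field of
# `kubectl get pods -o wide`; verify the column index for your kubectl version.
kubectl get pods --all-namespaces -o wide --no-headers \
  | awk '{print $8}' | sort | uniq -c | sort -rn | head
```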
+ +{{< toc >}} + +## 설치 + +A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by a "master" (the cluster-level control plane). + +Normally the number of nodes in a cluster is controlled by the value `NUM_NODES` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/gce/config-default.sh)). + +Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run in to quota issues and fail to bring the cluster up. + +When setting up a large Kubernetes cluster, the following issues must be considered. + +### 쿼터 문제 + +To avoid running into cloud provider quota issues, when creating a cluster with many nodes, consider: + +* Increase the quota for things like CPU, IPs, etc. + * In [GCE, for example,](https://cloud.google.com/compute/docs/resource-quotas) you'll want to increase the quota for: + * CPUs + * VM instances + * Total persistent disk reserved + * In-use IP addresses + * Firewall Rules + * Forwarding rules + * Routes + * Target pools +* Gating the setup script so that it brings up new node VMs in smaller batches with waits in between, because some cloud providers rate limit the creation of VMs. + +### Etcd 저장소 + +To improve performance of large clusters, we store events in a separate dedicated etcd instance. + +When creating a cluster, existing salt scripts: + +* start and configure additional etcd instance +* configure api-server to use it for storing events + +### 마스터 크기와 마스터 구성 요소 + +On GCE/Google Kubernetes Engine, and AWS, `kube-up` automatically configures the proper VM size for your master depending on the number of nodes +in your cluster. On other providers, you will need to configure it manually. For reference, the sizes we use on GCE are + +* 1-5 nodes: n1-standard-1 +* 6-10 nodes: n1-standard-2 +* 11-100 nodes: n1-standard-4 +* 101-250 nodes: n1-standard-8 +* 251-500 nodes: n1-standard-16 +* more than 500 nodes: n1-standard-32 + +And the sizes we use on AWS are + +* 1-5 nodes: m3.medium +* 6-10 nodes: m3.large +* 11-100 nodes: m3.xlarge +* 101-250 nodes: m3.2xlarge +* 251-500 nodes: c4.4xlarge +* more than 500 nodes: c4.8xlarge + +{{< note >}} +On Google Kubernetes Engine, the size of the master node adjusts automatically based on the size of your cluster. For more information, see [this blog post](https://cloudplatform.googleblog.com/2017/11/Cutting-Cluster-Management-Fees-on-Google-Kubernetes-Engine.html). + +On AWS, master node sizes are currently set at cluster startup time and do not change, even if you later scale your cluster up or down by manually removing or adding nodes or using a cluster autoscaler. +{{< /note >}} + +### 애드온 자원 + +To prevent memory leaks or other resource issues in [cluster addons](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](http://pr.k8s.io/10653/files) and [#10778](http://pr.k8s.io/10778/files)). 
+ +For example: + +```yaml + containers: + - name: fluentd-cloud-logging + image: k8s.gcr.io/fluentd-gcp:1.16 + resources: + limits: + cpu: 100m + memory: 200Mi +``` + +Except for Heapster, these limits are static and are based on data we collected from addons running on 4-node clusters (see [#10335](http://issue.k8s.io/10335#issuecomment-117861225)). The addons consume a lot more resources when running on large deployment clusters (see [#5880](http://issue.k8s.io/5880#issuecomment-113984085)). So, if a large cluster is deployed without adjusting these values, the addons may continuously get killed because they keep hitting the limits. + +To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following: + +* Scale memory and CPU limits for each of the following addons, if used, as you scale up the size of cluster (there is one replica of each handling the entire cluster so memory and CPU usage tends to grow proportionally with size/load on cluster): + * [InfluxDB and Grafana](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml) + * [kubedns, dnsmasq, and sidecar](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/kube-dns/kube-dns.yaml.in) + * [Kibana](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml) +* Scale number of replicas for the following addons, if used, along with the size of cluster (there are multiple replicas of each so increasing replicas should help handle increased load, but, since load per replica also increases slightly, also consider increasing CPU/memory limits): + * [elasticsearch](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml) +* Increase memory and CPU limits slightly for each of the following addons, if used, along with the size of cluster (there is one replica per node but CPU/memory usage increases slightly along with cluster load/size as well): + * [FluentD with ElasticSearch Plugin](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml) + * [FluentD with GCP Plugin](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-gcp/fluentd-gcp-ds.yaml) + +Heapster's resource limits are set dynamically based on the initial size of your cluster (see [#16185](http://issue.k8s.io/16185) +and [#22940](http://issue.k8s.io/22940)). If you find that Heapster is running +out of resources, you should adjust the formulas that compute heapster memory request (see those PRs for details). + +For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](/docs/concepts/configuration/manage-compute-resources-container/#troubleshooting). + +In the [future](http://issue.k8s.io/13048), we anticipate to set all cluster addon resource limits based on cluster size, and to dynamically adjust them if you grow or shrink your cluster. +We welcome PRs that implement those features. + +### 시작 시 사소한 노드 오류 허용 + +For various reasons (see [#18969](https://github.com/kubernetes/kubernetes/issues/18969) for more details) running +`kube-up.sh` with a very large `NUM_NODES` may fail due to a very small number of nodes not coming up properly. 
+Currently you have two choices: restart the cluster (`kube-down.sh` and then `kube-up.sh` again), or before +running `kube-up.sh` set the environment variable `ALLOWED_NOTREADY_NODES` to whatever value you feel comfortable +with. This will allow `kube-up.sh` to succeed with fewer than `NUM_NODES` coming up. Depending on the +reason for the failure, those additional nodes may join later or the cluster may remain at a size of +`NUM_NODES - ALLOWED_NOTREADY_NODES`. diff --git a/content/ko/docs/setup/custom-cloud/_index.md b/content/ko/docs/setup/custom-cloud/_index.md new file mode 100644 index 0000000000000..5ddaaf3f3f7ad --- /dev/null +++ b/content/ko/docs/setup/custom-cloud/_index.md @@ -0,0 +1,4 @@ +--- +title: 사용자 지정 클라우드 솔루션 +weight: 50 +--- diff --git a/content/ko/docs/setup/custom-cloud/coreos.md b/content/ko/docs/setup/custom-cloud/coreos.md new file mode 100644 index 0000000000000..3e978792b6588 --- /dev/null +++ b/content/ko/docs/setup/custom-cloud/coreos.md @@ -0,0 +1,88 @@ +--- +title: CoreOS on AWS or GCE +content_template: templates/concept +--- + +{{% capture overview %}} + +There are multiple guides on running Kubernetes with [CoreOS](https://coreos.com/kubernetes/docs/latest/). + +{{% /capture %}} + +{{% capture body %}} + +## Official CoreOS Guides + +These guides are maintained by CoreOS and deploy Kubernetes the "CoreOS Way" with full TLS, the DNS add-on, and more. These guides pass Kubernetes conformance testing and we encourage you to [test this yourself](https://coreos.com/kubernetes/docs/latest/conformance-tests.html). + +* [**AWS Multi-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html) + + Guide and CLI tool for setting up a multi-node cluster on AWS. + CloudFormation is used to set up a master and multiple workers in auto-scaling groups. + +* [**Bare Metal Multi-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-baremetal.html#automated-provisioning) + + Guide and HTTP/API service for PXE booting and provisioning a multi-node cluster on bare metal. + [Ignition](https://coreos.com/ignition/docs/latest/) is used to provision a master and multiple workers on the first boot from disk. + +* [**Vagrant Multi-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html) + + Guide to setting up a multi-node cluster on Vagrant. + The deployer can independently configure the number of etcd nodes, master nodes, and worker nodes to bring up a fully HA control plane. + +* [**Vagrant Single-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html) + + The quickest way to set up a Kubernetes development environment locally. + As easy as `git clone`, `vagrant up` and configuring `kubectl`. + +* [**Full Step by Step Guide**](https://coreos.com/kubernetes/docs/latest/getting-started.html) + + A generic guide to setting up an HA cluster on any cloud or bare metal, with full TLS. + Repeat the master or worker steps to configure more machines of that role. + +## Community Guides + +These guides are maintained by community members, cover specific platforms and use cases, and experiment with different ways of configuring Kubernetes on CoreOS. + +* [**Easy Multi-node Cluster on Google Compute Engine**](https://github.com/rimusz/coreos-multi-node-k8s-gce/blob/master/README.md) + + Scripted installation of a single master, multi-worker cluster on GCE. + Kubernetes components are managed by [fleet](https://github.com/coreos/fleet). 
+ +* [**Multi-node cluster using cloud-config and Weave on Vagrant**](https://github.com/errordeveloper/weave-demos/blob/master/poseidon/README.md) + + Configure a Vagrant-based cluster of 3 machines with networking provided by Weave. + +* [**Multi-node cluster using cloud-config and Vagrant**](https://github.com/pires/kubernetes-vagrant-coreos-cluster/blob/master/README.md) + + Configure a single master, multi-worker cluster locally, running on your choice of hypervisor: VirtualBox, Parallels, or VMware + +* [**Single-node cluster using a small macOS App**](https://github.com/rimusz/kube-solo-osx/blob/master/README.md) + + Guide to running a solo cluster (master + worker) controlled by an macOS menubar application. + Uses xhyve + CoreOS under the hood. + +* [**Multi-node cluster with Vagrant and fleet units using a small macOS App**](https://github.com/rimusz/coreos-osx-gui-kubernetes-cluster/blob/master/README.md) + + Guide to running a single master, multi-worker cluster controlled by an macOS menubar application. + Uses Vagrant under the hood. + +* [**Multi-node cluster using cloud-config, CoreOS and VMware ESXi**](https://github.com/xavierbaude/VMware-coreos-multi-nodes-Kubernetes) + + Configure a single master, single worker cluster on VMware ESXi. + +* [**Single/Multi-node cluster using cloud-config, CoreOS and Foreman**](https://github.com/johscheuer/theforeman-coreos-kubernetes) + + Configure a standalone Kubernetes or a Kubernetes cluster with [Foreman](https://theforeman.org). + +## Support Level + + +IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level +-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- +GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires)) +Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles)) + +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. + +{{% /capture %}} diff --git a/content/ko/docs/setup/custom-cloud/kops.md b/content/ko/docs/setup/custom-cloud/kops.md new file mode 100644 index 0000000000000..f10985682ff2e --- /dev/null +++ b/content/ko/docs/setup/custom-cloud/kops.md @@ -0,0 +1,174 @@ +--- +title: Installing Kubernetes on AWS with kops +content_template: templates/concept +--- + +{{% capture overview %}} + +This quickstart shows you how to easily install a Kubernetes cluster on AWS. +It uses a tool called [`kops`](https://github.com/kubernetes/kops). + +kops is an opinionated provisioning system: + +* Fully automated installation +* Uses DNS to identify clusters +* Self-healing: everything runs in Auto-Scaling Groups +* Multiple OS support (Debian, Ubuntu 16.04 supported, CentOS & RHEL, Amazon Linux and CoreOS) - see the [images.md](https://github.com/kubernetes/kops/blob/master/docs/images.md) +* High-Availability support - see the [high_availability.md](https://github.com/kubernetes/kops/blob/master/docs/high_availability.md) +* Can directly provision, or generate terraform manifests - see the [terraform.md](https://github.com/kubernetes/kops/blob/master/docs/terraform.md) + +If your opinions differ from these you may prefer to build your own cluster using [kubeadm](/docs/admin/kubeadm/) as +a building block. 
kops builds on the kubeadm work. + +{{% /capture %}} + +{{% capture body %}} + +## Creating a cluster + +### (1/5) Install kops + +#### Requirements + +You must have [kubectl](/docs/tasks/tools/install-kubectl/) installed in order for kops to work. + +#### Installation + +Download kops from the [releases page](https://github.com/kubernetes/kops/releases) (it is also easy to build from source): + +On macOS: + +```shell +curl -OL https://github.com/kubernetes/kops/releases/download/1.10.0/kops-darwin-amd64 +chmod +x kops-darwin-amd64 +mv kops-darwin-amd64 /usr/local/bin/kops +# you can also install using Homebrew +brew update && brew install kops +``` + +On Linux: + +```shell +wget https://github.com/kubernetes/kops/releases/download/1.10.0/kops-linux-amd64 +chmod +x kops-linux-amd64 +mv kops-linux-amd64 /usr/local/bin/kops +``` + +### (2/5) Create a route53 domain for your cluster + +kops uses DNS for discovery, both inside the cluster and so that you can reach the kubernetes API server +from clients. + +kops has a strong opinion on the cluster name: it should be a valid DNS name. By doing so you will +no longer get your clusters confused, you can share clusters with your colleagues unambiguously, +and you can reach them without relying on remembering an IP address. + +You can, and probably should, use subdomains to divide your clusters. As our example we will use +`useast1.dev.example.com`. The API server endpoint will then be `api.useast1.dev.example.com`. + +A Route53 hosted zone can serve subdomains. Your hosted zone could be `useast1.dev.example.com`, +but also `dev.example.com` or even `example.com`. kops works with any of these, so typically +you choose for organization reasons (e.g. you are allowed to create records under `dev.example.com`, +but not under `example.com`). + +Let's assume you're using `dev.example.com` as your hosted zone. You create that hosted zone using +the [normal process](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingNewSubdomain.html), or +with a command such as `aws route53 create-hosted-zone --name dev.example.com --caller-reference 1`. + +You must then set up your NS records in the parent domain, so that records in the domain will resolve. Here, +you would create NS records in `example.com` for `dev`. If it is a root domain name you would configure the NS +records at your domain registrar (e.g. `example.com` would need to be configured where you bought `example.com`). + +This step is easy to mess up (it is the #1 cause of problems!) You can double-check that +your cluster is configured correctly if you have the dig tool by running: + +`dig NS dev.example.com` + +You should see the 4 NS records that Route53 assigned your hosted zone. + +### (3/5) Create an S3 bucket to store your clusters state + +kops lets you manage your clusters even after installation. To do this, it must keep track of the clusters +that you have created, along with their configuration, the keys they are using etc. This information is stored +in an S3 bucket. S3 permissions are used to control access to the bucket. + +Multiple clusters can use the same S3 bucket, and you can share an S3 bucket between your colleagues that +administer the same clusters - this is much easier than passing around kubecfg files. But anyone with access +to the S3 bucket will have administrative access to all your clusters, so you don't want to share it beyond +the operations team. 
+ +So typically you have one S3 bucket for each ops team (and often the name will correspond +to the name of the hosted zone above!) + +In our example, we chose `dev.example.com` as our hosted zone, so let's pick `clusters.dev.example.com` as +the S3 bucket name. + +* Export `AWS_PROFILE` (if you need to select a profile for the AWS CLI to work) + +* Create the S3 bucket using `aws s3 mb s3://clusters.dev.example.com` + +* You can `export KOPS_STATE_STORE=s3://clusters.dev.example.com` and then kops will use this location by default. + We suggest putting this in your bash profile or similar. + + +### (4/5) Build your cluster configuration + +Run "kops create cluster" to create your cluster configuration: + +`kops create cluster --zones=us-east-1c useast1.dev.example.com` + +kops will create the configuration for your cluster. Note that it _only_ creates the configuration, it does +not actually create the cloud resources - you'll do that in the next step with a `kops update cluster`. This +give you an opportunity to review the configuration or change it. + +It prints commands you can use to explore further: + +* List your clusters with: `kops get cluster` +* Edit this cluster with: `kops edit cluster useast1.dev.example.com` +* Edit your node instance group: `kops edit ig --name=useast1.dev.example.com nodes` +* Edit your master instance group: `kops edit ig --name=useast1.dev.example.com master-us-east-1c` + +If this is your first time using kops, do spend a few minutes to try those out! An instance group is a +set of instances, which will be registered as kubernetes nodes. On AWS this is implemented via auto-scaling-groups. +You can have several instance groups, for example if you wanted nodes that are a mix of spot and on-demand instances, or +GPU and non-GPU instances. + + +### (5/5) Create the cluster in AWS + +Run "kops update cluster" to create your cluster in AWS: + +`kops update cluster useast1.dev.example.com --yes` + +That takes a few seconds to run, but then your cluster will likely take a few minutes to actually be ready. +`kops update cluster` will be the tool you'll use whenever you change the configuration of your cluster; it +applies the changes you have made to the configuration to your cluster - reconfiguring AWS or kubernetes as needed. + +For example, after you `kops edit ig nodes`, then `kops update cluster --yes` to apply your configuration, and +sometimes you will also have to `kops rolling-update cluster` to roll out the configuration immediately. + +Without `--yes`, `kops update cluster` will show you a preview of what it is going to do. This is handy +for production clusters! + +### Explore other add-ons + +See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to explore other add-ons, including tools for logging, monitoring, network policy, visualization & control of your Kubernetes cluster. + +## Cleanup + +* To delete your cluster: `kops delete cluster useast1.dev.example.com --yes` + +## Feedback + +* Slack Channel: [#kops-users](https://kubernetes.slack.com/messages/kops-users/) +* [GitHub Issues](https://github.com/kubernetes/kops/issues) + +{{% /capture %}} + +{{% capture whatsnext %}} + +* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/user-guide/kubectl-overview/). +* Learn about `kops` [advanced usage](https://github.com/kubernetes/kops) +* See the `kops` [docs](https://github.com/kubernetes/kops) section for tutorials, best practices and advanced configuration options. 
+ +{{% /capture %}} diff --git a/content/ko/docs/setup/custom-cloud/kubespray.md b/content/ko/docs/setup/custom-cloud/kubespray.md new file mode 100644 index 0000000000000..11671b501c632 --- /dev/null +++ b/content/ko/docs/setup/custom-cloud/kubespray.md @@ -0,0 +1,120 @@ +--- +title: Installing Kubernetes On-premises/Cloud Providers with Kubespray +content_template: templates/concept +--- + +{{% capture overview %}} + +This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-incubator/kubespray). + +Kubespray is a composition of [Ansible](http://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides: + +* a highly available cluster +* composable attributes +* support for most popular Linux distributions + * Container Linux by CoreOS + * Debian Jessie, Stretch, Wheezy + * Ubuntu 16.04, 18.04 + * CentOS/RHEL 7 + * Fedora/CentOS Atomic + * openSUSE Leap 42.3/Tumbleweed +* continuous integration tests + +To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/comparisons.md) to [kubeadm](/docs/admin/kubeadm/) and [kops](../kops). + +{{% /capture %}} + +{{% capture body %}} + +## Creating a cluster + +### (1/5) Meet the underlay requirements + +Provision servers with the following [requirements](https://github.com/kubernetes-incubator/kubespray#requirements): + +* **Ansible v2.4 (or newer) and python-netaddr is installed on the machine that will run Ansible commands** +* **Jinja 2.9 (or newer) is required to run the Ansible Playbooks** +* The target servers must have **access to the Internet** in order to pull docker images +* The target servers are configured to allow **IPv4 forwarding** +* **Your ssh key must be copied** to all the servers part of your inventory +* The **firewalls are not managed**, you'll need to implement your own rules the way you used to. in order to avoid any issue during deployment you should disable your firewall +* If kubespray is ran from non-root user account, correct privilege escalation method should be configured in the target servers. Then the `ansible_become` flag or command parameters `--become` or `-b` should be specified + +Kubespray provides the following utilities to help provision your environment: + +* [Terraform](https://www.terraform.io/) scripts for the following cloud providers: + * [AWS](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform/aws) + * [OpenStack](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform/openstack) + +### (2/5) Compose an inventory file + +After you provision your servers, create an [inventory file for Ansible](http://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)". 
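As one concrete way to take the dynamic-inventory route, the sketch below uses the inventory builder script shipped in the Kubespray repository (`contrib/inventory_builder/inventory.py`, described in the getting-started guide linked above). The `mycluster` directory name and the IP addresses are placeholders for your own environment.

```shell
# Start from the sample inventory bundled with Kubespray and generate hosts.ini
# from a list of node IPs. Directory names and IPs here are placeholders.
cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# Inspect the generated groups (kube-master, etcd, kube-node) before deploying.
cat inventory/mycluster/hosts.ini
```

The generated file can then be passed to `ansible-playbook` with `-i inventory/mycluster/hosts.ini` in the deployment step below.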
+ +### (3/5) Plan your cluster deployment + +Kubespray provides the ability to customize many aspects of the deployment: + +* Choice deployment mode: kubeadm or non-kubeadm +* CNI (networking) plugins +* DNS configuration +* Choice of control plane: native/binary or containerized with docker or rkt +* Component versions +* Calico route reflectors +* Component runtime options + * docker + * rkt + * cri-o +* Certificate generation methods + +Kubespray customizations can be made to a [variable file](http://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes. + +### (4/5) Deploy a Cluster + +Next, deploy your cluster: + +Cluster deployment using [ansible-playbook](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment). + +```shell +ansible-playbook -i your/inventory/hosts.ini cluster.yml -b -v \ + --private-key=~/.ssh/private_key +``` + +Large deployments (100+ nodes) may require [specific adjustments](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/large-deployments.md) for best results. + +### (5/5) Verify the deployment + +Kubespray provides a way to verify inter-pod connectivity and DNS resolve with [Netchecker](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/netcheck.md). Netchecker ensures the netchecker-agents pods can resolve DNS requests and ping each over within the default namespace. Those pods mimic similar behavior of the rest of the workloads and serve as cluster health indicators. + +## Cluster operations + +Kubespray provides additional playbooks to manage your cluster: _scale_ and _upgrade_. + +### Scale your cluster + +You can add worker nodes from your cluster by running the scale playbook. For more information, see "[Adding nodes](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#adding-nodes)". +You can remove worker nodes from your cluster by running the remove-node playbook. For more information, see "[Remove nodes](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#remove-nodes)". + +### Upgrade your cluster + +You can upgrade your cluster by running the upgrade-cluster playbook. For more information, see "[Upgrades](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/upgrades.md)". + +## Cleanup + +You can reset your nodes and wipe out all components installed with Kubespray via the [reset playbook](https://github.com/kubernetes-incubator/kubespray/blob/master/reset.yml). + +{{< caution >}} +**Caution:** When running the reset playbook, be sure not to accidentally target your production cluster! +{{< /caution >}} + +## Feedback + +* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/) +* [GitHub Issues](https://github.com/kubernetes-incubator/kubespray/issues) + +{{% /capture %}} + +{{% capture whatsnext %}} + +Check out planned work on Kubespray's [roadmap](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/roadmap.md). 
+ +{{% /capture %}} diff --git a/content/ko/docs/setup/custom-cloud/master.yaml b/content/ko/docs/setup/custom-cloud/master.yaml new file mode 100644 index 0000000000000..5b7df1bd77d70 --- /dev/null +++ b/content/ko/docs/setup/custom-cloud/master.yaml @@ -0,0 +1,142 @@ +#cloud-config + +--- +write-files: + - path: /etc/conf.d/nfs + permissions: '0644' + content: | + OPTS_RPC_MOUNTD="" + - path: /opt/bin/wupiao + permissions: '0755' + content: | + #!/bin/bash + # [w]ait [u]ntil [p]ort [i]s [a]ctually [o]pen + [ -n "$1" ] && \ + until curl -o /dev/null -sIf http://${1}; do \ + sleep 1 && echo .; + done; + exit $? + +hostname: master +coreos: + etcd2: + name: master + listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001 + advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001 + initial-cluster-token: k8s_etcd + listen-peer-urls: http://$private_ipv4:2380,http://$private_ipv4:7001 + initial-advertise-peer-urls: http://$private_ipv4:2380 + initial-cluster: master=http://$private_ipv4:2380 + initial-cluster-state: new + fleet: + metadata: "role=master" + units: + - name: etcd2.service + command: start + - name: generate-serviceaccount-key.service + command: start + content: | + [Unit] + Description=Generate service-account key file + + [Service] + ExecStartPre=-/usr/bin/mkdir -p /opt/bin + ExecStart=/bin/openssl genrsa -out /opt/bin/kube-serviceaccount.key 2048 2>/dev/null + RemainAfterExit=yes + Type=oneshot + - name: setup-network-environment.service + command: start + content: | + [Unit] + Description=Setup Network Environment + Documentation=https://github.com/kelseyhightower/setup-network-environment + Requires=network-online.target + After=network-online.target + + [Service] + ExecStartPre=-/usr/bin/mkdir -p /opt/bin + ExecStartPre=/usr/bin/curl -L -o /opt/bin/setup-network-environment -z /opt/bin/setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment + ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment + ExecStart=/opt/bin/setup-network-environment + RemainAfterExit=yes + Type=oneshot + - name: fleet.service + command: start + - name: flanneld.service + command: start + drop-ins: + - name: 50-network-config.conf + content: | + [Unit] + Requires=etcd2.service + [Service] + ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}' + - name: docker.service + command: start + - name: kube-apiserver.service + command: start + content: | + [Unit] + Description=Kubernetes API Server + Documentation=https://github.com/kubernetes/kubernetes + Requires=setup-network-environment.service etcd2.service generate-serviceaccount-key.service + After=setup-network-environment.service etcd2.service generate-serviceaccount-key.service + + [Service] + EnvironmentFile=/etc/network-environment + ExecStartPre=-/usr/bin/mkdir -p /opt/bin + ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-apiserver -z /opt/bin/kube-apiserver https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-apiserver + ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver + ExecStartPre=/opt/bin/wupiao 127.0.0.1:2379/v2/machines + ExecStart=/opt/bin/kube-apiserver \ + --service-account-key-file=/opt/bin/kube-serviceaccount.key \ + --service-account-lookup=false \ + --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \ + --runtime-config=api/v1 \ + --allow-privileged=true \ + 
--insecure-bind-address=0.0.0.0 \ + --insecure-port=8080 \ + --kubelet-https=true \ + --secure-port=6443 \ + --service-cluster-ip-range=10.100.0.0/16 \ + --etcd-servers=http://127.0.0.1:2379 \ + --public-address-override=${DEFAULT_IPV4} \ + --logtostderr=true + Restart=always + RestartSec=10 + - name: kube-controller-manager.service + command: start + content: | + [Unit] + Description=Kubernetes Controller Manager + Documentation=https://github.com/kubernetes/kubernetes + Requires=kube-apiserver.service + After=kube-apiserver.service + + [Service] + ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-controller-manager -z /opt/bin/kube-controller-manager https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-controller-manager + ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager + ExecStart=/opt/bin/kube-controller-manager \ + --service-account-private-key-file=/opt/bin/kube-serviceaccount.key \ + --master=127.0.0.1:8080 \ + --logtostderr=true + Restart=always + RestartSec=10 + - name: kube-scheduler.service + command: start + content: | + [Unit] + Description=Kubernetes Scheduler + Documentation=https://github.com/kubernetes/kubernetes + Requires=kube-apiserver.service + After=kube-apiserver.service + + [Service] + ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-scheduler -z /opt/bin/kube-scheduler https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-scheduler + ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler + ExecStart=/opt/bin/kube-scheduler --master=127.0.0.1:8080 + Restart=always + RestartSec=10 + update: + group: alpha + reboot-strategy: off diff --git a/content/ko/docs/setup/custom-cloud/node.yaml b/content/ko/docs/setup/custom-cloud/node.yaml new file mode 100644 index 0000000000000..9f5caff49bc3e --- /dev/null +++ b/content/ko/docs/setup/custom-cloud/node.yaml @@ -0,0 +1,92 @@ +#cloud-config +write-files: + - path: /opt/bin/wupiao + permissions: '0755' + content: | + #!/bin/bash + # [w]ait [u]ntil [p]ort [i]s [a]ctually [o]pen + [ -n "$1" ] && [ -n "$2" ] && while ! curl --output /dev/null \ + --silent --head --fail \ + http://${1}:${2}; do sleep 1 && echo -n .; done; + exit $? 
+coreos: + etcd2: + listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001 + advertise-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001 + initial-cluster: master=http://:2380 + proxy: on + fleet: + metadata: "role=node" + units: + - name: etcd2.service + command: start + - name: fleet.service + command: start + - name: flanneld.service + command: start + - name: docker.service + command: start + - name: setup-network-environment.service + command: start + content: | + [Unit] + Description=Setup Network Environment + Documentation=https://github.com/kelseyhightower/setup-network-environment + Requires=network-online.target + After=network-online.target + + [Service] + ExecStartPre=-/usr/bin/mkdir -p /opt/bin + ExecStartPre=/usr/bin/curl -L -o /opt/bin/setup-network-environment -z /opt/bin/setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment + ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment + ExecStart=/opt/bin/setup-network-environment + RemainAfterExit=yes + Type=oneshot + - name: kube-proxy.service + command: start + content: | + [Unit] + Description=Kubernetes Proxy + Documentation=https://github.com/kubernetes/kubernetes + Requires=setup-network-environment.service + After=setup-network-environment.service + + [Service] + ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-proxy -z /opt/bin/kube-proxy https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kube-proxy + ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy + # wait for kubernetes master to be up and ready + ExecStartPre=/opt/bin/wupiao 8080 + ExecStart=/opt/bin/kube-proxy \ + --master=:8080 \ + --logtostderr=true + Restart=always + RestartSec=10 + - name: kube-kubelet.service + command: start + content: | + [Unit] + Description=Kubernetes Kubelet + Documentation=https://github.com/kubernetes/kubernetes + Requires=setup-network-environment.service + After=setup-network-environment.service + + [Service] + EnvironmentFile=/etc/network-environment + ExecStartPre=/usr/bin/curl -L -o /opt/bin/kubelet -z /opt/bin/kubelet https://storage.googleapis.com/kubernetes-release/release/v1.1.2/bin/linux/amd64/kubelet + ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet + # wait for kubernetes master to be up and ready + ExecStartPre=/opt/bin/wupiao 8080 + ExecStart=/opt/bin/kubelet \ + --address=0.0.0.0 \ + --port=10250 \ + --hostname-override=${DEFAULT_IPV4} \ + --api-servers=:8080 \ + --allow-privileged=true \ + --logtostderr=true \ + --healthz-bind-address=0.0.0.0 \ + --healthz-port=10248 + Restart=always + RestartSec=10 + update: + group: alpha + reboot-strategy: off diff --git a/content/ko/docs/setup/independent/_index.md b/content/ko/docs/setup/independent/_index.md new file mode 100755 index 0000000000000..e87c318721942 --- /dev/null +++ b/content/ko/docs/setup/independent/_index.md @@ -0,0 +1,5 @@ +--- +title: "kubeadm으로 클러스터 부트스트래핑 하기" +weight: 30 +--- + diff --git a/content/ko/docs/setup/independent/control-plane-flags.md b/content/ko/docs/setup/independent/control-plane-flags.md new file mode 100644 index 0000000000000..3f1214435b10c --- /dev/null +++ b/content/ko/docs/setup/independent/control-plane-flags.md @@ -0,0 +1,79 @@ +--- +title: Customizing control plane configuration with kubeadm +content_template: templates/concept +weight: 40 +--- + +{{% capture overview %}} + +The kubeadm configuration exposes the following fields that can override the default flags passed to control plane components such 
as the APIServer, ControllerManager and Scheduler: + +- `APIServerExtraArgs` +- `ControllerManagerExtraArgs` +- `SchedulerExtraArgs` + +These fields consist of `key: value` pairs. To override a flag for a control plane component: + +1. Add the appropriate field to your configuration. +2. Add the flags to override to the field. + +For more details on each field in the configuration you can navigate to our +[API reference pages](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#ClusterConfiguration). + +{{% /capture %}} + +{{% capture body %}} + +## APIServer flags + +For details, see the [reference documentation for kube-apiserver](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/). + +Example usage: +```yaml +apiVersion: kubeadm.k8s.io/v1alpha3 +kind: ClusterConfiguration +kubernetesVersion: v1.12.0 +metadata: + name: 1.12-sample +apiServerExtraArgs: + advertise-address: 192.168.0.103 + anonymous-auth: false + enable-admission-plugins: AlwaysPullImages,DefaultStorageClass + audit-log-path: /home/johndoe/audit.log +``` + +## ControllerManager flags + +For details, see the [reference documentation for kube-controller-manager](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/). + +Example usage: +```yaml +apiVersion: kubeadm.k8s.io/v1alpha3 +kind: ClusterConfiguration +kubernetesVersion: v1.12.0 +metadata: + name: 1.12-sample +controllerManagerExtraArgs: + cluster-signing-key-file: /home/johndoe/keys/ca.key + bind-address: 0.0.0.0 + deployment-controller-sync-period: 50 +``` + +## Scheduler flags + +For details, see the [reference documentation for kube-scheduler](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/). + +Example usage: +```yaml +apiVersion: kubeadm.k8s.io/v1alpha3 +kind: ClusterConfiguration +kubernetesVersion: v1.12.0 +metadata: + name: 1.12-sample +schedulerExtraArgs: + address: 0.0.0.0 + config: /home/johndoe/schedconfig.yaml + kubeconfig: /home/johndoe/kubeconfig.yaml +``` + +{{% /capture %}} diff --git a/content/ko/docs/setup/independent/create-cluster-kubeadm.md b/content/ko/docs/setup/independent/create-cluster-kubeadm.md new file mode 100644 index 0000000000000..f658956e1b6b1 --- /dev/null +++ b/content/ko/docs/setup/independent/create-cluster-kubeadm.md @@ -0,0 +1,625 @@ +--- +title: Creating a single master cluster with kubeadm +content_template: templates/task +weight: 30 +--- + +{{% capture overview %}} + +**kubeadm** helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices. With kubeadm, your cluster should pass [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification). Kubeadm also supports other cluster +lifecycle functions, such as upgrades, downgrade, and managing [bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/). + +Because you can install kubeadm on various types of machine (e.g. laptop, server, +Raspberry Pi, etc.), it's well suited for integration with provisioning systems +such as Terraform or Ansible. + +kubeadm's simplicity means it can serve a wide range of use cases: + +- New users can start with kubeadm to try Kubernetes out for the first time. +- Users familiar with Kubernetes can spin up clusters with kubeadm and test their applications. +- Larger projects can include kubeadm as a building block in a more complex system that can also include other installer tools. 
+
+kubeadm is designed to be a simple way for new users to start trying
+Kubernetes out, possibly for the first time, a way for existing users to
+easily stitch together a cluster and test their applications on it, and also to be
+a building block in other ecosystem and/or installer tools with a larger
+scope.
+
+You can install _kubeadm_ very easily on operating systems that support
+installing deb or rpm packages. The responsible SIG for kubeadm,
+[SIG Cluster Lifecycle](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle), provides these packages pre-built for you,
+but you may also build them from source for other OSes.
+
+
+### kubeadm Maturity
+
+| Area                      | Maturity Level |
+|---------------------------|----------------|
+| Command line UX           | beta           |
+| Implementation            | beta           |
+| Config file API           | alpha          |
+| Self-hosting              | alpha          |
+| kubeadm alpha subcommands | alpha          |
+| CoreDNS                   | GA             |
+| DynamicKubeletConfig      | alpha          |
+
+
+kubeadm's overall feature state is **Beta** and is expected to graduate to
+**General Availability (GA)** during 2018. Some sub-features, like self-hosting
+or the configuration file API, are still under active development. The
+implementation of creating the cluster may change slightly as the tool evolves,
+but the overall implementation should be pretty stable. Any commands under
+`kubeadm alpha` are, by definition, supported on an alpha level.
+
+
+### Support timeframes
+
+Kubernetes releases are generally supported for nine months, and during that
+period a patch release may be issued from the release branch if a severe bug or
+security issue is found. Here are the latest Kubernetes releases and their support
+timeframes, which also apply to `kubeadm`.
+
+| Kubernetes version | Release month  | End-of-life-month |
+|--------------------|----------------|-------------------|
+| v1.6.x             | March 2017     | December 2017     |
+| v1.7.x             | June 2017      | March 2018        |
+| v1.8.x             | September 2017 | June 2018         |
+| v1.9.x             | December 2017  | September 2018    |
+| v1.10.x            | March 2018     | December 2018     |
+| v1.11.x            | June 2018      | March 2019        |
+| v1.12.x            | September 2018 | June 2019         |
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+- One or more machines running a deb/rpm-compatible OS, for example Ubuntu or CentOS
+- 2 GB or more of RAM per machine. Any less leaves little room for your
+  apps.
+- 2 CPUs or more on the master
+- Full network connectivity among all machines in the cluster. A public or
+  private network is fine.
+
+{{% /capture %}}
+
+{{% capture steps %}}
+
+## Objectives
+
+* Install a single master Kubernetes cluster or [high availability cluster](https://kubernetes.io/docs/setup/independent/high-availability/)
+* Install a Pod network on the cluster so that your Pods can
+  talk to each other
+
+## Instructions
+
+### Installing kubeadm on your hosts
+
+See ["Installing kubeadm"](/docs/setup/independent/install-kubeadm/).
+
+{{< note >}}
+**Note:** If you have already installed kubeadm, run `apt-get update &&
+apt-get upgrade` or `yum update` to get the latest version of kubeadm.
+
+When you upgrade, the kubelet restarts every few seconds as it waits in a crashloop for
+kubeadm to tell it what to do. This crashloop is expected and normal.
+After you initialize your master, the kubelet runs normally.
+{{< /note >}}
+
+### Initializing your master
+
+The master is the machine where the control plane components run, including
+etcd (the cluster database) and the API server (which the kubectl CLI
+communicates with).
+
+1. 
Choose a pod network add-on, and verify whether it requires any arguments to +be passed to kubeadm initialization. Depending on which +third-party provider you choose, you might need to set the `--pod-network-cidr` to +a provider-specific value. See [Installing a pod network add-on](#pod-network). +1. (Optional) Unless otherwise specified, kubeadm uses the network interface associated +with the default gateway to advertise the master's IP. To use a different +network interface, specify the `--apiserver-advertise-address=` argument +to `kubeadm init`. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you +must specify an IPv6 address, for example `--apiserver-advertise-address=fd00::101` +1. (Optional) Run `kubeadm config images pull` prior to `kubeadm init` to verify +connectivity to gcr.io registries. + +Now run: + +```bash +kubeadm init +``` + +### More information + +For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm/). + +For a complete list of configuration options, see the [configuration file documentation](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file). + +To customize control plane components, including optional IPv6 assignment to liveness probe for control plane components and etcd server, provide extra arguments to each component as documented in [custom arguments](/docs/admin/kubeadm#custom-args). + +To run `kubeadm init` again, you must first [tear down the cluster](#tear-down). + +If you join a node with a different architecture to your cluster, create a separate +Deployment or DaemonSet for `kube-proxy` and `kube-dns` on the node. This is because the Docker images for these +components do not currently support multi-architecture. + +`kubeadm init` first runs a series of prechecks to ensure that the machine +is ready to run Kubernetes. These prechecks expose warnings and exit on errors. `kubeadm init` +then downloads and installs the cluster control plane components. This may take several minutes. +The output should look like: + +```none +[init] Using Kubernetes version: vX.Y.Z +[preflight] Running pre-flight checks +[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0) +[certificates] Generated ca certificate and key. +[certificates] Generated apiserver certificate and key. +[certificates] apiserver serving cert is signed for DNS names [kubeadm-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4] +[certificates] Generated apiserver-kubelet-client certificate and key. +[certificates] Generated sa key and public key. +[certificates] Generated front-proxy-ca certificate and key. +[certificates] Generated front-proxy-client certificate and key. 
+[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki" +[kubeconfig] Wrote KubeConfig file to disk: "admin.conf" +[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf" +[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf" +[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf" +[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" +[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" +[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" +[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" +[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" +[init] This often takes around a minute; or longer if the control plane images have to be pulled. +[apiclient] All control plane components are healthy after 39.511972 seconds +[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace +[markmaster] Will mark node master as master by adding a label and a taint +[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master="" +[bootstraptoken] Using token: +[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials +[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token +[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace +[addons] Applied essential addon: CoreDNS +[addons] Applied essential addon: kube-proxy + +Your Kubernetes master has initialized successfully! + +To start using your cluster, you need to run (as a regular user): + + mkdir -p $HOME/.kube + sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config + sudo chown $(id -u):$(id -g) $HOME/.kube/config + +You should now deploy a pod network to the cluster. +Run "kubectl apply -f [podnetwork].yaml" with one of the addon options listed at: + http://kubernetes.io/docs/admin/addons/ + +You can now join any number of machines by running the following on each node +as root: + + kubeadm join --token : --discovery-token-ca-cert-hash sha256: +``` + +To make kubectl work for your non-root user, run these commands, which are +also part of the `kubeadm init` output: + +```bash +mkdir -p $HOME/.kube +sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config +sudo chown $(id -u):$(id -g) $HOME/.kube/config +``` + +Alternatively, if you are the `root` user, you can run: + +```bash +export KUBECONFIG=/etc/kubernetes/admin.conf +``` + +Make a record of the `kubeadm join` command that `kubeadm init` outputs. You +need this command to [join nodes to your cluster](#join-nodes). + +The token is used for mutual authentication between the master and the joining +nodes. The token included here is secret. Keep it safe, because anyone with this +token can add authenticated nodes to your cluster. These tokens can be listed, +created, and deleted with the `kubeadm token` command. See the +[kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm-token/). 
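+
+If you lose the join command that `kubeadm init` printed, one way to regenerate it later is sketched below. It assumes your kubeadm version supports the `--print-join-command` flag of `kubeadm token create`; run it on the master as root:
+
+```bash
+# Create a new bootstrap token and print a complete join command,
+# including the discovery token CA certificate hash.
+kubeadm token create --print-join-command
+```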
+ +### Installing a pod network add-on {#pod-network} + +{{< caution >}} +**Caution:** This section contains important information about installation and deployment order. Read it carefully before proceeding. +{{< /caution >}} + +You must install a pod network add-on so that your pods can communicate with +each other. + +**The network must be deployed before any applications. Also, CoreDNS will not start up before a network is installed. +kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).** + +Several projects provide Kubernetes pod networks using CNI, some of which also +support [Network Policy](/docs/concepts/services-networking/networkpolicies/). See the [add-ons page](/docs/concepts/cluster-administration/addons/) for a complete list of available network add-ons. +- IPv6 support was added in [CNI v0.6.0](https://github.com/containernetworking/cni/releases/tag/v0.6.0). +- [CNI bridge](https://github.com/containernetworking/plugins/blob/master/plugins/main/bridge/README.md) and [local-ipam](https://github.com/containernetworking/plugins/blob/master/plugins/ipam/host-local/README.md) are the only supported IPv6 network plugins in Kubernetes version 1.9. + +Note that kubeadm sets up a more secure cluster by default and enforces use of [RBAC](/docs/reference/access-authn-authz/rbac/). +Make sure that your network manifest supports RBAC. + +You can install a pod network add-on with the following command: + +```bash +kubectl apply -f +``` + +You can install only one pod network per cluster. + +{{< tabs name="tabs-pod-install" >}} +{{% tab name="Choose one..." %}} +Please select one of the tabs to see installation instructions for the respective third-party Pod Network Provider. +{{% /tab %}} + +{{% tab name="Calico" %}} +For more information about using Calico, see [Quickstart for Calico on Kubernetes](https://docs.projectcalico.org/latest/getting-started/kubernetes/), [Installing Calico for policy and networking](https://docs.projectcalico.org/latest/getting-started/kubernetes/installation/calico), and other related resources. + +For Calico to work correctly, you need to pass `--pod-network-cidr=192.168.0.0/16` to `kubeadm init` or update the `calico.yml` file to match your Pod network. Note that Calico works on `amd64` only. + +```shell +kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml +kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml +``` + +{{% /tab %}} +{{% tab name="Canal" %}} +Canal uses Calico for policy and Flannel for networking. Refer to the Calico documentation for the [official getting started guide](https://docs.projectcalico.org/latest/getting-started/kubernetes/installation/flannel). + +For Canal to work correctly, `--pod-network-cidr=10.244.0.0/16` has to be passed to `kubeadm init`. Note that Canal works on `amd64` only. 
+ +```shell +kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml +kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml +``` + +{{% /tab %}} + +{{% tab name="Cilium" %}} +For more information about using Cilium with Kubernetes, see [Quickstart for Cilium on Kubernetes](http://docs.cilium.io/en/v1.2/kubernetes/quickinstall/) and [Kubernetes Install guide for Cilium](http://docs.cilium.io/en/v1.2/kubernetes/install/). + +Passing `--pod-network-cidr` option to `kubeadm init` is not required, but highly recommended. + +These commands will deploy Cilium with its own etcd managed by etcd operator. + +```shell +# Download required manifests from Cilium repository +wget https://github.com/cilium/cilium/archive/v1.2.0.zip +unzip v1.2.0.zip +cd cilium-1.2.0/examples/kubernetes/addons/etcd-operator + +# Generate and deploy etcd certificates +export CLUSTER_DOMAIN=$(kubectl get ConfigMap --namespace kube-system coredns -o yaml | awk '/kubernetes/ {print $2}') +tls/certs/gen-cert.sh $CLUSTER_DOMAIN +tls/deploy-certs.sh + +# Label kube-dns with fixed identity label +kubectl label -n kube-system pod $(kubectl -n kube-system get pods -l k8s-app=kube-dns -o jsonpath='{range .items[]}{.metadata.name}{" "}{end}') io.cilium.fixed-identity=kube-dns + +kubectl create -f ./ + +# Wait several minutes for Cilium, coredns and etcd pods to converge to a working state +``` + + +{{% /tab %}} +{{% tab name="Flannel" %}} + +For `flannel` to work correctly, you must pass `--pod-network-cidr=10.244.0.0/16` to `kubeadm init`. + +Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1` +to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information +please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements). + +```shell +kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml +``` +Note that `flannel` works on `amd64`, `arm`, `arm64` and `ppc64le`, but until `flannel v0.11.0` is released +you need to use the following manifest that supports all the architectures: + +```shell +kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml +``` + +For more information about `flannel`, see [the CoreOS flannel repository on GitHub +](https://github.com/coreos/flannel). +{{% /tab %}} + +{{% tab name="Kube-router" %}} +Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1` +to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information +please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements). + +Kube-router relies on kube-controller-manager to allocate pod CIDR for the nodes. Therefore, use `kubeadm init` with the `--pod-network-cidr` flag. + +Kube-router provides pod networking, network policy, and high-performing IP Virtual Server(IPVS)/Linux Virtual Server(LVS) based service proxy. + +For information on setting up Kubernetes cluster with Kube-router using kubeadm, please see official [setup guide](https://github.com/cloudnativelabs/kube-router/blob/master/docs/kubeadm.md). 
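+
+As a minimal sketch of installing kube-router after `kubeadm init`, you can apply the DaemonSet manifest from the kube-router repository. The manifest path below is taken from the kube-router kubeadm guide and may change, so confirm it against the setup guide linked above:
+
+```shell
+# Deploys kube-router (pod networking and network policy) as a DaemonSet
+kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml
+```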
+{{% /tab %}} + +{{% tab name="Romana" %}} +Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1` +to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information +please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements). + +The official Romana set-up guide is [here](https://github.com/romana/romana/tree/master/containerize#using-kubeadm). + +Romana works on `amd64` only. + +```shell +kubectl apply -f https://raw.githubusercontent.com/romana/romana/master/containerize/specs/romana-kubeadm.yml +``` +{{% /tab %}} + +{{% tab name="Weave Net" %}} +Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1` +to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information +please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements). + +The official Weave Net set-up guide is [here](https://www.weave.works/docs/net/latest/kube-addon/). + +Weave Net works on `amd64`, `arm`, `arm64` and `ppc64le` without any extra action required. +Weave Net sets hairpin mode by default. This allows Pods to access themselves via their Service IP address +if they don't know their PodIP. + +```shell +kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" +``` +{{% /tab %}} + +{{% tab name="JuniperContrail/TungstenFabric" %}} +Provides overlay SDN solution, delivering multicloud networking, hybrid cloud networking, +simultaneous overlay-underlay support, network policy enforcement, network isolation, +service chaining and flexible load balancing. + +There are multiple, flexible ways to install JuniperContrail/TungstenFabric CNI. + +Kindly refer to this quickstart: [TungstenFabric](https://tungstenfabric.github.io/website/) +{{% /tab %}} +{{< /tabs >}} + + +Once a pod network has been installed, you can confirm that it is working by +checking that the CoreDNS pod is Running in the output of `kubectl get pods --all-namespaces`. +And once the CoreDNS pod is up and running, you can continue by joining your nodes. + +If your network is not working or CoreDNS is not in the Running state, check +out our [troubleshooting docs](/docs/setup/independent/troubleshooting-kubeadm/). + +### Master Isolation + +By default, your cluster will not schedule pods on the master for security +reasons. If you want to be able to schedule pods on the master, e.g. for a +single-machine Kubernetes cluster for development, run: + +```bash +kubectl taint nodes --all node-role.kubernetes.io/master- +``` + +With output looking something like: + +``` +node "test-01" untainted +taint "node-role.kubernetes.io/master:" not found +taint "node-role.kubernetes.io/master:" not found +``` + +This will remove the `node-role.kubernetes.io/master` taint from any nodes that +have it, including the master node, meaning that the scheduler will then be able +to schedule pods everywhere. + +### Joining your nodes {#join-nodes} + +The nodes are where your workloads (containers and pods, etc) run. To add new nodes to your cluster do the following for each machine: + +* SSH to the machine +* Become root (e.g. `sudo su -`) +* Run the command that was output by `kubeadm init`. 
For example: + +``` bash +kubeadm join --token : --discovery-token-ca-cert-hash sha256: +``` + +If you do not have the token, you can get it by running the following command on the master node: + +``` bash +kubeadm token list +``` + +The output is similar to this: + +``` console +TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS +8ewj1p.9r9hcjoqgajrj4gi 23h 2018-06-12T02:51:28Z authentication, The default bootstrap system: + signing token generated by bootstrappers: + 'kubeadm init'. kubeadm: + default-node-token +``` + +By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, +you can create a new token by running the following command on the master node: + +``` bash +kubeadm token create +``` + +The output is similar to this: + +``` console +5didvk.d09sbcov8ph2amjw +``` + +If you don't have the value of `--discovery-token-ca-cert-hash`, you can get it by running the following command chain on the master node: + +``` bash +openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \ + openssl dgst -sha256 -hex | sed 's/^.* //' +``` + +The output is similar to this: + +``` console +8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78 +``` + +{{< note >}} +**Note:** To specify an IPv6 tuple for `:`, IPv6 address must be enclosed in square brackets, for example: `[fd00::101]:2073`. +{{< /note >}} + +The output should look something like: + +``` +[preflight] Running pre-flight checks + +... (log output of join workflow) ... + +Node join complete: +* Certificate signing request sent to master and response + received. +* Kubelet informed of new secure connection details. + +Run 'kubectl get nodes' on the master to see this machine join. +``` + +A few seconds later, you should notice this node in the output from `kubectl get +nodes` when run on the master. + +### (Optional) Controlling your cluster from machines other than the master + +In order to get a kubectl on some other computer (e.g. laptop) to talk to your +cluster, you need to copy the administrator kubeconfig file from your master +to your workstation like this: + +``` bash +scp root@:/etc/kubernetes/admin.conf . +kubectl --kubeconfig ./admin.conf get nodes +``` + +{{< note >}} +**Note:** The example above assumes SSH access is enabled for root. If that is not the +case, you can copy the `admin.conf` file to be accessible by some other user +and `scp` using that other user instead. + +The `admin.conf` file gives the user _superuser_ privileges over the cluster. +This file should be used sparingly. For normal users, it's recommended to +generate an unique credential to which you whitelist privileges. You can do +this with the `kubeadm alpha phase kubeconfig user --client-name ` +command. That command will print out a KubeConfig file to STDOUT which you +should save to a file and distribute to your user. After that, whitelist +privileges by using `kubectl create (cluster)rolebinding`. +{{< /note >}} + +### (Optional) Proxying API Server to localhost + +If you want to connect to the API Server from outside the cluster you can use +`kubectl proxy`: + +```bash +scp root@:/etc/kubernetes/admin.conf . 
+kubectl --kubeconfig ./admin.conf proxy
+```
+
+You can now access the API Server locally at `http://localhost:8001/api/v1`.
+
+## Tear down {#tear-down}
+
+To undo what kubeadm did, you should first [drain the
+node](/docs/reference/generated/kubectl/kubectl-commands#drain) and make
+sure that the node is empty before shutting it down.
+
+Talking to the master with the appropriate credentials, run:
+
+```bash
+kubectl drain --delete-local-data --force --ignore-daemonsets
+kubectl delete node
+```
+
+Then, on the node being removed, reset all kubeadm installed state:
+
+```bash
+kubeadm reset
+```
+
+If you wish to start over, simply run `kubeadm init` or `kubeadm join` with the
+appropriate arguments.
+
+For more options and information, see the
+[`kubeadm reset` reference documentation](/docs/reference/setup-tools/kubeadm/kubeadm-reset/).
+
+## Maintaining a cluster {#lifecycle}
+
+Instructions for maintaining kubeadm clusters (e.g. upgrades, downgrades, etc.) can be found [here](/docs/tasks/administer-cluster/kubeadm).
+
+## Explore other add-ons {#other-addons}
+
+See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to explore other add-ons,
+including tools for logging, monitoring, network policy, visualization &
+control of your Kubernetes cluster.
+
+## What's next {#whats-next}
+
+* Verify that your cluster is running properly with [Sonobuoy](https://github.com/heptio/sonobuoy)
+* Learn about kubeadm's advanced usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/kubeadm)
+* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/user-guide/kubectl-overview/).
+* Configure log rotation. You can use **logrotate** for that. When using Docker, you can specify log rotation options for the Docker daemon, for example `--log-driver=json-file --log-opt=max-size=10m --log-opt=max-file=5`. See [Configure and troubleshoot the Docker daemon](https://docs.docker.com/engine/admin/) for more details.
+
+## Feedback {#feedback}
+
+* For bugs, visit the [kubeadm GitHub issue tracker](https://github.com/kubernetes/kubeadm/issues)
+* For support, visit the kubeadm Slack channel:
+  [#kubeadm](https://kubernetes.slack.com/messages/kubeadm/)
+* General SIG Cluster Lifecycle development Slack channel:
+  [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
+* SIG Cluster Lifecycle [SIG information](#TODO)
+* SIG Cluster Lifecycle Mailing List:
+  [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
+
+## Version skew policy {#version-skew-policy}
+
+The kubeadm CLI tool of version vX.Y may deploy clusters with a control plane of version vX.Y or vX.(Y-1).
+kubeadm CLI vX.Y can also upgrade an existing kubeadm-created cluster of version vX.(Y-1).
+
+Because we can't see into the future, kubeadm CLI vX.Y may or may not be able to deploy vX.(Y+1) clusters.
+
+Example: kubeadm v1.8 can deploy both v1.7 and v1.8 clusters and upgrade v1.7 kubeadm-created clusters to
+v1.8.
+
+Please also check our [installation guide](/docs/setup/independent/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl)
+for more information on the version skew between kubelets and the control plane.
+
+## kubeadm works on multiple platforms {#multi-platform}
+
+kubeadm deb/rpm packages and binaries are built for amd64, arm (32-bit), arm64, ppc64le, and s390x
+following the [multi-platform
+proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multi-platform.md).
+ +Only some of the network providers offer solutions for all platforms. Please consult the list of +network providers above or the documentation from each provider to figure out whether the provider +supports your chosen platform. + +## Limitations {#limitations} + +Please note: kubeadm is a work in progress and these limitations will be +addressed in due course. + +1. The cluster created here has a single master, with a single etcd database + running on it. This means that if the master fails, your cluster may lose + data and may need to be recreated from scratch. Adding HA support + (multiple etcd servers, multiple API servers, etc) to kubeadm is + still a work-in-progress. + + Workaround: regularly + [back up etcd](https://coreos.com/etcd/docs/latest/admin_guide.html). The + etcd data directory configured by kubeadm is at `/var/lib/etcd` on the master. + +## Troubleshooting {#troubleshooting} + +If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/independent/troubleshooting-kubeadm/). + + + + diff --git a/content/ko/docs/setup/independent/high-availability.md b/content/ko/docs/setup/independent/high-availability.md new file mode 100644 index 0000000000000..d7cde10d82a08 --- /dev/null +++ b/content/ko/docs/setup/independent/high-availability.md @@ -0,0 +1,546 @@ +--- +title: Creating Highly Available Clusters with kubeadm +content_template: templates/task +weight: 50 +--- + +{{% capture overview %}} + +This page explains two different approaches to setting up a highly available Kubernetes +cluster using kubeadm: + +- With stacked masters. This approach requires less infrastructure. etcd members +and control plane nodes are co-located. +- With an external etcd cluster. This approach requires more infrastructure. The +control plane nodes and etcd members are separated. + +Your clusters must run Kubernetes version 1.12 or later. You should also be aware that +setting up HA clusters with kubeadm is still experimental. You might encounter issues +with upgrading your clusters, for example. We encourage you to try either approach, +and provide feedback. + +{{< caution >}} +**Caution**: This page does not address running your cluster on a cloud provider. +In a cloud environment, neither approach documented here works with Service objects +of type LoadBalancer, or with dynamic PersistentVolumes. +{{< /caution >}} + +{{% /capture %}} + +{{% capture prerequisites %}} + +For both methods you need this infrastructure: + +- Three machines that meet [kubeadm's minimum + requirements](/docs/setup/independent/install-kubeadm/#before-you-begin) for + the masters +- Three machines that meet [kubeadm's minimum + requirements](/docs/setup/independent/install-kubeadm/#before-you-begin) for + the workers +- Full network connectivity between all machines in the cluster (public or + private network is fine) +- SSH access from one device to all nodes in the system +- sudo privileges on all machines + +For the external etcd cluster only, you also need: + +- Three additional machines for etcd members + +{{< note >}} +**Note**: The following examples run Calico as the Pod networking provider. If +you run another networking provider, make sure to replace any default values as +needed. +{{< /note >}} + +{{% /capture %}} + +{{% capture steps %}} + +## First steps for both methods + +{{< note >}} +**Note**: All commands in this guide on any control plane or etcd node should be +run as root. +{{< /note >}} + +- Find your pod CIDR. 
For details, see [the CNI network + documentation](/docs/setup/independent/create-cluster-kubeadm/#pod-network). + The example uses Calico, so the pod CIDR is `192.168.0.0/16`. + +### Configure SSH + +1. Enable ssh-agent on your main device that has access to all other nodes in + the system: + + ``` + eval $(ssh-agent) + ``` + +1. Add your SSH identity to the session: + + ``` + ssh-add ~/.ssh/path_to_private_key + ``` + +1. SSH between nodes to check that the connection is working correctly. + + - When you SSH to any node, make sure to add the `-A` flag: + + ``` + ssh -A 10.0.0.7 + ``` + + - When using sudo on any node, make sure to preserve the environment so SSH + forwarding works: + + ``` + sudo -E -s + ``` + +### Create load balancer for kube-apiserver + +{{< note >}} +**Note**: There are many configurations for load balancers. The following +example is only one option. Your cluster requirements may need a +different configuration. +{{< /note >}} + +1. Create a kube-apiserver load balancer with a name that resolves to DNS. + + - In a cloud environment you should place your control plane nodes behind a TCP + forwarding load balancer. This load balancer distributes traffic to all + healthy control plane nodes in its target list. The health check for + an apiserver is a TCP check on the port the kube-apiserver listens on + (default value `:6443`). + + - It is not recommended to use an IP address directly in a cloud environment. + + - The load balancer must be able to communicate with all control plane nodes + on the apiserver port. It must also allow incoming traffic on its + listening port. + +1. Add the first control plane nodes to the load balancer and test the + connection: + + ```sh + nc -v LOAD_BALANCER_IP PORT + ``` + + - A connection refused error is expected because the apiserver is not yet + running. A timeout, however, means the load balancer cannot communicate + with the control plane node. If a timeout occurs, reconfigure the load + balancer to communicate with the control plane node. + +1. Add the remaining control plane nodes to the load balancer target group. + +## Stacked control plane nodes + +### Bootstrap the first stacked control plane node + +{{< note >}} +**Note**: Optionally replace `stable` with a different version of Kubernetes, for example `v1.12.0`. +{{< /note >}} + +1. Create a `kubeadm-config.yaml` template file: + + apiVersion: kubeadm.k8s.io/v1alpha3 + kind: ClusterConfiguration + kubernetesVersion: stable + apiServerCertSANs: + - "LOAD_BALANCER_DNS" + controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" + etcd: + local: + extraArgs: + listen-client-urls: "https://127.0.0.1:2379,https://CP0_IP:2379" + advertise-client-urls: "https://CP0_IP:2379" + listen-peer-urls: "https://CP0_IP:2380" + initial-advertise-peer-urls: "https://CP0_IP:2380" + initial-cluster: "CP0_HOSTNAME=https://CP0_IP:2380" + serverCertSANs: + - CP0_HOSTNAME + - CP0_IP + peerCertSANs: + - CP0_HOSTNAME + - CP0_IP + networking: + # This CIDR is a Calico default. Substitute or remove for your CNI provider. + podSubnet: "192.168.0.0/16" + +1. Replace the following variables in the template with the appropriate + values for your cluster: + + * `LOAD_BALANCER_DNS` + * `LOAD_BALANCER_PORT` + * `CP0_HOSTNAME` + * `CP0_IP` + +1. Run `kubeadm init --config kubeadm-config.yaml` + +### Copy required files to other control plane nodes + +The following certificates and other required files were created when you ran `kubeadm init`. 
+Copy these files to your other control plane nodes: + +- `/etc/kubernetes/pki/ca.crt` +- `/etc/kubernetes/pki/ca.key` +- `/etc/kubernetes/pki/sa.key` +- `/etc/kubernetes/pki/sa.pub` +- `/etc/kubernetes/pki/front-proxy-ca.crt` +- `/etc/kubernetes/pki/front-proxy-ca.key` +- `/etc/kubernetes/pki/etcd/ca.crt` +- `/etc/kubernetes/pki/etcd/ca.key` + +Copy the admin kubeconfig to the other control plane nodes: + +- `/etc/kubernetes/admin.conf` + +In the following example, replace +`CONTROL_PLANE_IPS` with the IP addresses of the other control plane nodes. + +```sh +USER=ubuntu # customizable +CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8" +for host in ${CONTROL_PLANE_IPS}; do + scp /etc/kubernetes/pki/ca.crt "${USER}"@$host: + scp /etc/kubernetes/pki/ca.key "${USER}"@$host: + scp /etc/kubernetes/pki/sa.key "${USER}"@$host: + scp /etc/kubernetes/pki/sa.pub "${USER}"@$host: + scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host: + scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host: + scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt + scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key + scp /etc/kubernetes/admin.conf "${USER}"@$host: +done +``` + +{{< note >}} +**Note**: Remember that your config may differ from this example. +{{< /note >}} + +### Add the second stacked control plane node + +1. Create a second, different `kubeadm-config.yaml` template file: + + apiVersion: kubeadm.k8s.io/v1alpha3 + kind: ClusterConfiguration + kubernetesVersion: stable + apiServerCertSANs: + - "LOAD_BALANCER_DNS" + controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" + etcd: + local: + extraArgs: + listen-client-urls: "https://127.0.0.1:2379,https://CP1_IP:2379" + advertise-client-urls: "https://CP1_IP:2379" + listen-peer-urls: "https://CP1_IP:2380" + initial-advertise-peer-urls: "https://CP1_IP:2380" + initial-cluster: "CP0_HOSTNAME=https://CP0_IP:2380,CP1_HOSTNAME=https://CP1_IP:2380" + initial-cluster-state: existing + serverCertSANs: + - CP1_HOSTNAME + - CP1_IP + peerCertSANs: + - CP1_HOSTNAME + - CP1_IP + networking: + # This CIDR is a calico default. Substitute or remove for your CNI provider. + podSubnet: "192.168.0.0/16" + +1. Replace the following variables in the template with the appropriate values for your cluster: + + - `LOAD_BALANCER_DNS` + - `LOAD_BALANCER_PORT` + - `CP0_HOSTNAME` + - `CP0_IP` + - `CP1_HOSTNAME` + - `CP1_IP` + +1. Move the copied files to the correct locations: + + ```sh + USER=ubuntu # customizable + mkdir -p /etc/kubernetes/pki/etcd + mv /home/${USER}/ca.crt /etc/kubernetes/pki/ + mv /home/${USER}/ca.key /etc/kubernetes/pki/ + mv /home/${USER}/sa.pub /etc/kubernetes/pki/ + mv /home/${USER}/sa.key /etc/kubernetes/pki/ + mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/ + mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/ + mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt + mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key + mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf + ``` + +1. Run the kubeadm phase commands to bootstrap the kubelet: + + ```sh + kubeadm alpha phase certs all --config kubeadm-config.yaml + kubeadm alpha phase kubelet config write-to-disk --config kubeadm-config.yaml + kubeadm alpha phase kubelet write-env-file --config kubeadm-config.yaml + kubeadm alpha phase kubeconfig kubelet --config kubeadm-config.yaml + systemctl start kubelet + ``` + +1. 
Run the commands to add the node to the etcd cluster: + + ```sh + export CP0_IP=10.0.0.7 + export CP0_HOSTNAME=cp0 + export CP1_IP=10.0.0.8 + export CP1_HOSTNAME=cp1 + + export KUBECONFIG=/etc/kubernetes/admin.conf + kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP1_HOSTNAME} https://${CP1_IP}:2380 + kubeadm alpha phase etcd local --config kubeadm-config.yaml + ``` + + - This command causes the etcd cluster to become unavailable for a + brief period, after the node is added to the running cluster, and before the + new node is joined to the etcd cluster. + +1. Deploy the control plane components and mark the node as a master: + + ```sh + kubeadm alpha phase kubeconfig all --config kubeadm-config.yaml + kubeadm alpha phase controlplane all --config kubeadm-config.yaml + kubeadm alpha phase mark-master --config kubeadm-config.yaml + ``` + +### Add the third stacked control plane node + +1. Create a third, different `kubeadm-config.yaml` template file: + + apiVersion: kubeadm.k8s.io/v1alpha3 + kind: ClusterConfiguration + kubernetesVersion: stable + apiServerCertSANs: + - "LOAD_BALANCER_DNS" + controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" + etcd: + local: + extraArgs: + listen-client-urls: "https://127.0.0.1:2379,https://CP2_IP:2379" + advertise-client-urls: "https://CP2_IP:2379" + listen-peer-urls: "https://CP2_IP:2380" + initial-advertise-peer-urls: "https://CP2_IP:2380" + initial-cluster: "CP0_HOSTNAME=https://CP0_IP:2380,CP1_HOSTNAME=https://CP1_IP:2380,CP2_HOSTNAME=https://CP2_IP:2380" + initial-cluster-state: existing + serverCertSANs: + - CP2_HOSTNAME + - CP2_IP + peerCertSANs: + - CP2_HOSTNAME + - CP2_IP + networking: + # This CIDR is a calico default. Substitute or remove for your CNI provider. + podSubnet: "192.168.0.0/16" + +1. Replace the following variables in the template with the appropriate values for your cluster: + + - `LOAD_BALANCER_DNS` + - `LOAD_BALANCER_PORT` + - `CP0_HOSTNAME` + - `CP0_IP` + - `CP1_HOSTNAME` + - `CP1_IP` + - `CP2_HOSTNAME` + - `CP2_IP` + +1. Move the copied files to the correct locations: + + ```sh + USER=ubuntu # customizable + mkdir -p /etc/kubernetes/pki/etcd + mv /home/${USER}/ca.crt /etc/kubernetes/pki/ + mv /home/${USER}/ca.key /etc/kubernetes/pki/ + mv /home/${USER}/sa.pub /etc/kubernetes/pki/ + mv /home/${USER}/sa.key /etc/kubernetes/pki/ + mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/ + mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/ + mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt + mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key + mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf + ``` + +1. Run the kubeadm phase commands to bootstrap the kubelet: + + ```sh + kubeadm alpha phase certs all --config kubeadm-config.yaml + kubeadm alpha phase kubelet config write-to-disk --config kubeadm-config.yaml + kubeadm alpha phase kubelet write-env-file --config kubeadm-config.yaml + kubeadm alpha phase kubeconfig kubelet --config kubeadm-config.yaml + systemctl start kubelet + ``` + +1. 
Run the commands to add the node to the etcd cluster: + + ```sh + export CP0_IP=10.0.0.7 + export CP0_HOSTNAME=cp0 + export CP2_IP=10.0.0.9 + export CP2_HOSTNAME=cp2 + + export KUBECONFIG=/etc/kubernetes/admin.conf + kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP2_HOSTNAME} https://${CP2_IP}:2380 + kubeadm alpha phase etcd local --config kubeadm-config.yaml + ``` + +1. Deploy the control plane components and mark the node as a master: + + ```sh + kubeadm alpha phase kubeconfig all --config kubeadm-config.yaml + kubeadm alpha phase controlplane all --config kubeadm-config.yaml + kubeadm alpha phase mark-master --config kubeadm-config.yaml + ``` + +## External etcd + +### Set up the cluster + +- Follow [these instructions](/docs/setup/independent/setup-ha-etcd-with-kubeadm/) + to set up the etcd cluster. + +#### Copy required files from an etcd node to all control plane nodes + +In the following example, replace `USER` and `CONTROL_PLANE_HOSTS` values with values +for your environment. + +```sh +# Make a list of required etcd certificate files +cat << EOF > etcd-pki-files.txt +/etc/kubernetes/pki/etcd/ca.crt +/etc/kubernetes/pki/apiserver-etcd-client.crt +/etc/kubernetes/pki/apiserver-etcd-client.key +EOF + +# create the archive +tar -czf etcd-pki.tar.gz -T etcd-pki-files.txt + +# copy the archive to the control plane nodes +USER=ubuntu +CONTROL_PLANE_HOSTS="10.0.0.7 10.0.0.8 10.0.0.9" +for host in $CONTROL_PLANE_HOSTS; do + scp etcd-pki.tar.gz "${USER}"@$host: +done +``` + +### Set up the first control plane node + +1. Extract the etcd certificates + + mkdir -p /etc/kubernetes/pki + tar -xzf etcd-pki.tar.gz -C /etc/kubernetes/pki --strip-components=3 + +1. Create a `kubeadm-config.yaml`: + +{{< note >}} +**Note**: Optionally replace `stable` with a different version of Kubernetes, for example `v1.11.3`. +{{< /note >}} + + apiVersion: kubeadm.k8s.io/v1alpha3 + kind: ClusterConfiguration + kubernetesVersion: stable + apiServerCertSANs: + - "LOAD_BALANCER_DNS" + controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" + etcd: + external: + endpoints: + - https://ETCD_0_IP:2379 + - https://ETCD_1_IP:2379 + - https://ETCD_2_IP:2379 + caFile: /etc/kubernetes/pki/etcd/ca.crt + certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt + keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key + networking: + # This CIDR is a calico default. Substitute or remove for your CNI provider. + podSubnet: "192.168.0.0/16" + +1. Replace the following variables in the template with the appropriate values for your cluster: + + - `LOAD_BALANCER_DNS` + - `LOAD_BALANCER_PORT` + - `ETCD_0_IP` + - `ETCD_1_IP` + - `ETCD_2_IP` + +1. Run `kubeadm init --config kubeadm-config.yaml` +1. Copy the output join commamnd. + +### Copy required files to the correct locations + +The following pki files were created during the `kubeadm init` step and must be shared with +all other control plane nodes. + +- `/etc/kubernetes/pki/ca.crt` +- `/etc/kubernetes/pki/ca.key` +- `/etc/kubernetes/pki/sa.key` +- `/etc/kubernetes/pki/sa.pub` +- `/etc/kubernetes/pki/front-proxy-ca.crt` +- `/etc/kubernetes/pki/front-proxy-ca.key` + +In the following example, replace the list of +`CONTROL_PLANE_IPS` values with the IP addresses of the other control plane nodes. 
+ +```sh +# make a list of required kubernetes certificate files +cat << EOF > certificate_files.txt +/etc/kubernetes/pki/ca.crt +/etc/kubernetes/pki/ca.key +/etc/kubernetes/pki/sa.key +/etc/kubernetes/pki/sa.pub +/etc/kubernetes/pki/front-proxy-ca.crt +/etc/kubernetes/pki/front-proxy-ca.key +EOF + +# create the archive +tar -czf control-plane-certificates.tar.gz -T certificate_files.txt + +USER=ubuntu # customizable +CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8" +for host in ${CONTROL_PLANE_IPS}; do + scp control-plane-certificates.tar.gz "${USER}"@$host: +done +``` + +### Set up the other control plane nodes + +1. Extract the required certificates + + mkdir -p /etc/kubernetes/pki + tar -xzf etcd-pki.tar.gz -C /etc/kubernetes/pki --strip-components 3 + tar -xzf control-plane-certificates.tar.gz -C /etc/kubernetes/pki --strip-components 3 + +1. Verify the location of the copied files. + Your `/etc/kubernetes` directory should look like this: + + - `/etc/kubernetes/pki/apiserver-etcd-client.crt` + - `/etc/kubernetes/pki/apiserver-etcd-client.key` + - `/etc/kubernetes/pki/ca.crt` + - `/etc/kubernetes/pki/ca.key` + - `/etc/kubernetes/pki/front-proxy-ca.crt` + - `/etc/kubernetes/pki/front-proxy-ca.key` + - `/etc/kubernetes/pki/sa.key` + - `/etc/kubernetes/pki/sa.pub` + - `/etc/kubernetes/pki/etcd/ca.crt` + +1. Run the copied `kubeadm join` command from above. Add the flag "--experimental-control-plane". + The final command will look something like this: + + kubeadm join ha.k8s.example.com:6443 --token 5ynki1.3erp9i3yo7gqg1nv --discovery-token-ca-cert-hash sha256:a00055bd8c710a9906a3d91b87ea02976334e1247936ac061d867a0f014ecd81 --experimental-control-plane + +## Common tasks after bootstrapping control plane + +### Install a pod network + +[Follow these instructions](/docs/setup/independent/create-cluster-kubeadm/#pod-network) to install +the pod network. Make sure this corresponds to whichever pod CIDR you provided +in the master configuration file. + +### Install workers + +Each worker node can now be joined to the cluster with the command returned from any of the +`kubeadm init` commands. + +{{% /capture %}} diff --git a/content/ko/docs/setup/independent/install-kubeadm.md b/content/ko/docs/setup/independent/install-kubeadm.md new file mode 100644 index 0000000000000..df670972c5c93 --- /dev/null +++ b/content/ko/docs/setup/independent/install-kubeadm.md @@ -0,0 +1,251 @@ +--- +title: Installing kubeadm +content_template: templates/task +weight: 20 +--- + +{{% capture overview %}} + +This page shows how to install the `kubeadm` toolbox. +For information how to create a cluster with kubeadm once you have performed this installation process, +see the [Using kubeadm to Create a Cluster](/docs/setup/independent/create-cluster-kubeadm/) page. + +{{% /capture %}} + +{{% capture prerequisites %}} + +* One or more machines running one of: + - Ubuntu 16.04+ + - Debian 9 + - CentOS 7 + - RHEL 7 + - Fedora 25/26 (best-effort) + - HypriotOS v1.0.1+ + - Container Linux (tested with 1800.6.0) +* 2 GB or more of RAM per machine (any less will leave little room for your apps) +* 2 CPUs or more +* Full network connectivity between all machines in the cluster (public or private network is fine) +* Unique hostname, MAC address, and product_uuid for every node. See [here](#verify-the-mac-address-and-product-uuid-are-unique-for-every-node) for more details. +* Certain ports are open on your machines. See [here](#check-required-ports) for more details. +* Swap disabled. 
You **MUST** disable swap in order for the kubelet to work properly.
+
+{{% /capture %}}
+
+{{% capture steps %}}
+
+## Verify the MAC address and product_uuid are unique for every node
+
+* You can get the MAC address of the network interfaces using the command `ip link` or `ifconfig -a`
+* The product_uuid can be checked by using the command `sudo cat /sys/class/dmi/id/product_uuid`
+
+It is very likely that hardware devices will have unique addresses, although some virtual machines may have
+identical values. Kubernetes uses these values to uniquely identify the nodes in the cluster.
+If these values are not unique to each node, the installation process
+may [fail](https://github.com/kubernetes/kubeadm/issues/31).
+
+## Check network adapters
+
+If you have more than one network adapter, and your Kubernetes components are not reachable on the default
+route, we recommend you add IP route(s) so Kubernetes cluster addresses go via the appropriate adapter.
+
+## Check required ports
+
+### Master node(s)
+
+| Protocol | Direction | Port Range | Purpose                 | Used By                   |
+|----------|-----------|------------|-------------------------|---------------------------|
+| TCP      | Inbound   | 6443*      | Kubernetes API server   | All                       |
+| TCP      | Inbound   | 2379-2380  | etcd server client API  | kube-apiserver, etcd      |
+| TCP      | Inbound   | 10250      | Kubelet API             | Self, Control plane       |
+| TCP      | Inbound   | 10251      | kube-scheduler          | Self                      |
+| TCP      | Inbound   | 10252      | kube-controller-manager | Self                      |
+
+### Worker node(s)
+
+| Protocol | Direction | Port Range  | Purpose               | Used By                 |
+|----------|-----------|-------------|-----------------------|-------------------------|
+| TCP      | Inbound   | 10250       | Kubelet API           | Self, Control plane     |
+| TCP      | Inbound   | 30000-32767 | NodePort Services**   | All                     |
+
+** Default port range for [NodePort Services](/docs/concepts/services-networking/service/).
+
+Any port numbers marked with * are overridable, so you will need to ensure any
+custom ports you provide are also open.
+
+Although etcd ports are included in master nodes, you can also host your own
+etcd cluster externally or on custom ports.
+
+The pod network plugin you use (see below) may also require certain ports to be
+open. Since this differs with each pod network plugin, please see the
+documentation for your plugin about which ports it needs.
+
+## Installing runtime
+
+Since v1.6.0, Kubernetes has enabled the use of CRI, the Container Runtime Interface, by default.
+The container runtime used by default is Docker, which is enabled through the built-in
+`dockershim` CRI implementation inside of the `kubelet`.
+
+Other CRI-based runtimes include:
+
+- [cri-containerd](https://github.com/containerd/cri-containerd)
+- [cri-o](https://github.com/kubernetes-incubator/cri-o)
+- [frakti](https://github.com/kubernetes/frakti)
+- [rkt](https://github.com/kubernetes-incubator/rktlet)
+
+Refer to the [CRI installation instructions](/docs/setup/cri.md) for more information.
+
+## Installing kubeadm, kubelet and kubectl
+
+You will install these packages on all of your machines:
+
+* `kubeadm`: the command to bootstrap the cluster.
+
+* `kubelet`: the component that runs on all of the machines in your cluster
+    and does things like starting pods and containers.
+
+* `kubectl`: the command line utility to talk to your cluster.
+
+kubeadm **will not** install or manage `kubelet` or `kubectl` for you, so you will
+need to ensure they match the version of the Kubernetes control plane you want
+kubeadm to install for you.
If you do not, there is a risk of a version skew occurring that +can lead to unexpected, buggy behaviour. However, _one_ minor version skew between the +kubelet and the control plane is supported, but the kubelet version may never exceed the API +server version. For example, kubelets running 1.7.0 should be fully compatible with a 1.8.0 API server, +but not vice versa. + +{{< warning >}} +These instructions exclude all Kubernetes packages from any system upgrades. +This is because kubeadm and Kubernetes require +[special attention to upgrade](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11/). +{{}} + +For more information on version skews, please read our +[version skew policy](/docs/setup/independent/create-cluster-kubeadm/#version-skew-policy). + +{{< tabs name="k8s_install" >}} +{{% tab name="Ubuntu, Debian or HypriotOS" %}} +```bash +apt-get update && apt-get install -y apt-transport-https curl +curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - +cat </etc/apt/sources.list.d/kubernetes.list +deb http://apt.kubernetes.io/ kubernetes-xenial main +EOF +apt-get update +apt-get install -y kubelet kubeadm kubectl +apt-mark hold kubelet kubeadm kubectl +``` +{{% /tab %}} +{{% tab name="CentOS, RHEL or Fedora" %}} +```bash +cat < /etc/yum.repos.d/kubernetes.repo +[kubernetes] +name=Kubernetes +baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 +enabled=1 +gpgcheck=1 +repo_gpgcheck=1 +gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg +exclude=kube* +EOF +setenforce 0 +yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes +systemctl enable kubelet && systemctl start kubelet +``` + + **Note:** + + - Disabling SELinux by running `setenforce 0` is required to allow containers to access the host filesystem, which is required by pod networks for example. + You have to do this until SELinux support is improved in the kubelet. + - Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure + `net.bridge.bridge-nf-call-iptables` is set to 1 in your `sysctl` config, e.g. 
+ + ```bash + cat < /etc/sysctl.d/k8s.conf + net.bridge.bridge-nf-call-ip6tables = 1 + net.bridge.bridge-nf-call-iptables = 1 + EOF + sysctl --system + ``` +{{% /tab %}} +{{% tab name="Container Linux" %}} +Install CNI plugins (required for most pod network): + +```bash +CNI_VERSION="v0.6.0" +mkdir -p /opt/cni/bin +curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-amd64-${CNI_VERSION}.tgz" | tar -C /opt/cni/bin -xz +``` + +Install crictl (required for kubeadm / Kubelet Container Runtime Interface (CRI)) + +```bash +CRICTL_VERSION="v1.11.1" +mkdir -p /opt/bin +curl -L "https://github.com/kubernetes-incubator/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz" | tar -C /opt/bin -xz +``` + +Install `kubeadm`, `kubelet`, `kubectl` and add a `kubelet` systemd service: + +```bash +RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)" + +mkdir -p /opt/bin +cd /opt/bin +curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl} +chmod +x {kubeadm,kubelet,kubectl} + +curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/kubelet.service" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service +mkdir -p /etc/systemd/system/kubelet.service.d +curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/10-kubeadm.conf" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +``` + +Enable and start `kubelet`: + +```bash +systemctl enable kubelet && systemctl start kubelet +``` +{{% /tab %}} +{{< /tabs >}} + + +The kubelet is now restarting every few seconds, as it waits in a crashloop for +kubeadm to tell it what to do. + +## Configure cgroup driver used by kubelet on Master Node + +When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet +and set it in the `/var/lib/kubelet/kubeadm-flags.env` file during runtime. + +If you are using a different CRI, you have to modify the file +`/etc/default/kubelet` with your `cgroup-driver` value, like so: + +```bash +KUBELET_KUBEADM_EXTRA_ARGS=--cgroup-driver= +``` + +This file will be used by `kubeadm init` and `kubeadm join` to source extra +user defined arguments for the kubelet. + +Please mind, that you **only** have to do that if the cgroup driver of your CRI +is not `cgroupfs`, because that is the default value in the kubelet already. + +Restarting the kubelet is required: + +```bash +systemctl daemon-reload +systemctl restart kubelet +``` + +## Troubleshooting + +If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/independent/troubleshooting-kubeadm/). + +{{% capture whatsnext %}} + +* [Using kubeadm to Create a Cluster](/docs/setup/independent/create-cluster-kubeadm/) + +{{% /capture %}} + + + + diff --git a/content/ko/docs/setup/independent/setup-ha-etcd-with-kubeadm.md b/content/ko/docs/setup/independent/setup-ha-etcd-with-kubeadm.md new file mode 100644 index 0000000000000..2b53adbfa3230 --- /dev/null +++ b/content/ko/docs/setup/independent/setup-ha-etcd-with-kubeadm.md @@ -0,0 +1,263 @@ +--- +title: Set up a High Availability etcd cluster with kubeadm +content_template: templates/task +weight: 60 +--- + +{{% capture overview %}} + +Kubeadm defaults to running a single member etcd cluster in a static pod managed +by the kubelet on the control plane node. 
This is not a high availability setup +as the etcd cluster contains only one member and cannot sustain any members +becoming unavailable. This task walks through the process of creating a high +availability etcd cluster of three members that can be used as an external etcd +when using kubeadm to set up a kubernetes cluster. + +{{% /capture %}} + +{{% capture prerequisites %}} + +* Three hosts that can talk to each other over ports 2379 and 2380. This + document assumes these default ports. However, they are configurable through + the kubeadm config file. +* Each host must [have docker, kubelet, and kubeadm installed][toolbox]. +* Some infrastructure to copy files between hosts. For example `ssh` and `scp` + can satisfy this requirement. + +[toolbox]: /docs/setup/independent/install-kubeadm/ + +{{% /capture %}} + +{{% capture steps %}} + +## Setting up the cluster + +The general approach is to generate all certs on one node and only distribute +the *necessary* files to the other nodes. + +{{< note >}} +**Note:** kubeadm contains all the necessary crytographic machinery to generate +the certificates described below; no other cryptographic tooling is required for +this example. +{{< /note >}} + + +1. Configure the kubelet to be a service manager for etcd. + + Running etcd is simpler than running kubernetes so you must override the + kubeadm-provided kubelet unit file by creating a new one with a higher + precedence. + + ```sh + cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf + [Service] + ExecStart= + ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true + Restart=always + EOF + + systemctl daemon-reload + systemctl restart kubelet + ``` + +1. Create configuration files for kubeadm. + + Generate one kubeadm configuration file for each host that will have an etcd + member running on it using the following script. + + ```sh + # Update HOST0, HOST1, and HOST2 with the IPs or resolvable names of your hosts + export HOST0=10.0.0.6 + export HOST1=10.0.0.7 + export HOST2=10.0.0.8 + + # Create temp directories to store files that will end up on other hosts. + mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/ + + ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2}) + NAMES=("infra0" "infra1" "infra2") + + for i in "${!ETCDHOSTS[@]}"; do + HOST=${ETCDHOSTS[$i]} + NAME=${NAMES[$i]} + cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml + apiVersion: "kubeadm.k8s.io/v1alpha3" + kind: ClusterConfiguration + etcd: + local: + serverCertSANs: + - "${HOST}" + peerCertSANs: + - "${HOST}" + extraArgs: + initial-cluster: infra0=https://${ETCDHOSTS[0]}:2380,infra1=https://${ETCDHOSTS[1]}:2380,infra2=https://${ETCDHOSTS[2]}:2380 + initial-cluster-state: new + name: ${NAME} + listen-peer-urls: https://${HOST}:2380 + listen-client-urls: https://${HOST}:2379 + advertise-client-urls: https://${HOST}:2379 + initial-advertise-peer-urls: https://${HOST}:2380 + EOF + done + ``` + +1. Generate the certificate authority + + If you already have a CA then the only action that is copying the CA's `crt` and + `key` file to `/etc/kubernetes/pki/etcd/ca.crt` and + `/etc/kubernetes/pki/etcd/ca.key`. After those files have been copied, + proceed to the next step, "Create certificates for each member". + + If you do not already have a CA then run this command on `$HOST0` (where you + generated the configuration files for kubeadm). 
+ + ``` + kubeadm alpha phase certs etcd-ca + ``` + + This creates two files + + - `/etc/kubernetes/pki/etcd/ca.crt` + - `/etc/kubernetes/pki/etcd/ca.key` + +1. Create certificates for each member + + ```sh + kubeadm alpha phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml + kubeadm alpha phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml + kubeadm alpha phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml + kubeadm alpha phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml + cp -R /etc/kubernetes/pki /tmp/${HOST2}/ + # cleanup non-reusable certificates + find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete + + kubeadm alpha phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml + kubeadm alpha phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml + kubeadm alpha phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml + kubeadm alpha phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml + cp -R /etc/kubernetes/pki /tmp/${HOST1}/ + find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete + + kubeadm alpha phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml + kubeadm alpha phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml + kubeadm alpha phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml + kubeadm alpha phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml + # No need to move the certs because they are for HOST0 + + # clean up certs that should not be copied off this host + find /tmp/${HOST2} -name ca.key -type f -delete + find /tmp/${HOST1} -name ca.key -type f -delete + ``` + +1. Copy certificates and kubeadm configs + + The certificates have been generated and now they must be moved to their + respective hosts. + + ```sh + USER=ubuntu + HOST=${HOST1} + scp -r /tmp/${HOST}/* ${USER}@${HOST}: + ssh ${USER}@${HOST} + USER@HOST $ sudo -Es + root@HOST $ chown -R root:root pki + root@HOST $ mv pki /etc/kubernetes/ + ``` + +1. Ensure all expected files exist + + The complete list of required files on `$HOST0` is: + + ``` + /tmp/${HOST0} + └── kubeadmcfg.yaml + --- + /etc/kubernetes/pki + ├── apiserver-etcd-client.crt + ├── apiserver-etcd-client.key + └── etcd + ├── ca.crt + ├── ca.key + ├── healthcheck-client.crt + ├── healthcheck-client.key + ├── peer.crt + ├── peer.key + ├── server.crt + └── server.key + ``` + + On `$HOST1`: + + ``` + $HOME + └── kubeadmcfg.yaml + --- + /etc/kubernetes/pki + ├── apiserver-etcd-client.crt + ├── apiserver-etcd-client.key + └── etcd + ├── ca.crt + ├── healthcheck-client.crt + ├── healthcheck-client.key + ├── peer.crt + ├── peer.key + ├── server.crt + └── server.key + ``` + + On `$HOST2` + + ``` + $HOME + └── kubeadmcfg.yaml + --- + /etc/kubernetes/pki + ├── apiserver-etcd-client.crt + ├── apiserver-etcd-client.key + └── etcd + ├── ca.crt + ├── healthcheck-client.crt + ├── healthcheck-client.key + ├── peer.crt + ├── peer.key + ├── server.crt + └── server.key + ``` + +1. Create the static pod manifests + + Now that the certificates and configs are in place it's time to create the + manifests. On each host run the `kubeadm` command to generate a static manifest + for etcd. + + ```sh + root@HOST0 $ kubeadm alpha phase etcd local --config=/tmp/${HOST0}/kubeadmcfg.yaml + root@HOST1 $ kubeadm alpha phase etcd local --config=/home/ubuntu/kubeadmcfg.yaml + root@HOST2 $ kubeadm alpha phase etcd local --config=/home/ubuntu/kubeadmcfg.yaml + ``` + +1. 
Optional: Check the cluster health + + ```sh + docker run --rm -it \ + --net host \ + -v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.18 etcdctl \ + --cert-file /etc/kubernetes/pki/etcd/peer.crt \ + --key-file /etc/kubernetes/pki/etcd/peer.key \ + --ca-file /etc/kubernetes/pki/etcd/ca.crt \ + --endpoints https://${HOST0}:2379 cluster-health + ... + cluster is healthy + ``` + +{{% /capture %}} + +{{% capture whatsnext %}} + +Once your have a working 3 member etcd cluster, you can continue setting up a +highly available control plane using the [external etcd method with +kubeadm](/docs/setup/independent/high-availability/). + +{{% /capture %}} + + diff --git a/content/ko/docs/setup/independent/troubleshooting-kubeadm.md b/content/ko/docs/setup/independent/troubleshooting-kubeadm.md new file mode 100644 index 0000000000000..8a66795d50c69 --- /dev/null +++ b/content/ko/docs/setup/independent/troubleshooting-kubeadm.md @@ -0,0 +1,262 @@ +--- +title: Troubleshooting kubeadm +content_template: templates/concept +weight: 70 +--- + +{{% capture overview %}} + +As with any program, you might run into an error installing or running kubeadm. +This page lists some common failure scenarios and have provided steps that can help you understand and fix the problem. + +If your problem is not listed below, please follow the following steps: + +- If you think your problem is a bug with kubeadm: + - Go to [github.com/kubernetes/kubeadm](https://github.com/kubernetes/kubeadm/issues) and search for existing issues. + - If no issue exists, please [open one](https://github.com/kubernetes/kubeadm/issues/new) and follow the issue template. + +- If you are unsure about how kubeadm works, you can ask on [Slack](http://slack.k8s.io/) in #kubeadm, or open a question on [StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes). Please include + relevant tags like `#kubernetes` and `#kubeadm` so folks can help you. + +{{% /capture %}} + +{{% capture body %}} + +## `ebtables` or some similar executable not found during installation + +If you see the following warnings while running `kubeadm init` + +```sh +[preflight] WARNING: ebtables not found in system path +[preflight] WARNING: ethtool not found in system path +``` + +Then you may be missing `ebtables`, `ethtool` or a similar executable on your node. You can install them with the following commands: + +- For Ubuntu/Debian users, run `apt install ebtables ethtool`. +- For CentOS/Fedora users, run `yum install ebtables ethtool`. + +## kubeadm blocks waiting for control plane during installation + +If you notice that `kubeadm init` hangs after printing out the following line: + +```sh +[apiclient] Created API client, waiting for the control plane to become ready +``` + +This may be caused by a number of problems. The most common are: + +- network connection problems. Check that your machine has full network connectivity before continuing. +- the default cgroup driver configuration for the kubelet differs from that used by Docker. + Check the system log file (e.g. `/var/log/message`) or examine the output from `journalctl -u kubelet`. If you see something like the following: + + ```shell + error: failed to run Kubelet: failed to create kubelet: + misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs" + ``` + + There are two common ways to fix the cgroup driver problem: + + 1. Install docker again following instructions + [here](/docs/setup/independent/install-kubeadm/#installing-docker). + 1. 
Change the kubelet config to match the Docker cgroup driver manually; you can refer to
+     [Configure cgroup driver used by kubelet on Master Node](/docs/setup/independent/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node)
+     for detailed instructions.
+
+- control plane Docker containers are crashlooping or hanging. You can check this by running `docker ps` and investigating each container by running `docker logs`.
+
+## kubeadm blocks when removing managed containers
+
+The following could happen if Docker halts and does not remove any Kubernetes-managed containers:
+
+```bash
+sudo kubeadm reset
+[preflight] Running pre-flight checks
+[reset] Stopping the kubelet service
+[reset] Unmounting mounted directories in "/var/lib/kubelet"
+[reset] Removing kubernetes-managed containers
+(block)
+```
+
+A possible solution is to restart the Docker service and then re-run `kubeadm reset`:
+
+```bash
+sudo systemctl restart docker.service
+sudo kubeadm reset
+```
+
+Inspecting the logs for docker may also be useful:
+
+```sh
+journalctl -ul docker
+```
+
+## Pods in `RunContainerError`, `CrashLoopBackOff` or `Error` state
+
+Right after `kubeadm init` there should not be any pods in these states.
+
+- If there are pods in one of these states _right after_ `kubeadm init`, please open an
+  issue in the kubeadm repo. `coredns` (or `kube-dns`) should be in the `Pending` state
+  until you have deployed the network solution.
+- If you see Pods in the `RunContainerError`, `CrashLoopBackOff` or `Error` state
+  after deploying the network solution and nothing happens to `coredns` (or `kube-dns`),
+  it's very likely that the Pod Network solution that you installed is somehow broken. You
+  might have to grant it more RBAC privileges or use a newer version. Please file
+  an issue in the Pod Network provider's issue tracker and get the issue triaged there.
+
+## `coredns` (or `kube-dns`) is stuck in the `Pending` state
+
+This is **expected** and part of the design. kubeadm is network provider-agnostic, so the admin
+should [install the pod network solution](/docs/concepts/cluster-administration/addons/)
+of choice. You have to install a Pod Network
+before CoreDNS may be deployed fully. Hence the `Pending` state before the network is set up.
+
+## `HostPort` services do not work
+
+The `HostPort` and `HostIP` functionality is available depending on your Pod Network
+provider. Please contact the author of the Pod Network solution to find out whether
+`HostPort` and `HostIP` functionality are available.
+
+Calico, Canal, and Flannel CNI providers are verified to support HostPort.
+
+For more information, see the [CNI portmap documentation](https://github.com/containernetworking/plugins/blob/master/plugins/meta/portmap/README.md).
+
+If your network provider does not support the portmap CNI plugin, you may need to use the [NodePort feature of
+services](/docs/concepts/services-networking/service/#nodeport) or use `HostNetwork=true`.
+
+## Pods are not accessible via their Service IP
+
+- Many network add-ons do not yet enable [hairpin mode](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#a-pod-cannot-reach-itself-via-service-ip)
+  which allows pods to access themselves via their Service IP. This is an issue related to
+  [CNI](https://github.com/containernetworking/cni/issues/476). Please contact the network
+  add-on provider to get the latest status of their support for hairpin mode.
+ +- If you are using VirtualBox (directly or via Vagrant), you will need to + ensure that `hostname -i` returns a routable IP address. By default the first + interface is connected to a non-routable host-only network. A work around + is to modify `/etc/hosts`, see this [Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11) + for an example. + +## TLS certificate errors + +The following error indicates a possible certificate mismatch. + +```none +# kubectl get pods +Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") +``` + +- Verify that the `$HOME/.kube/config` file contains a valid certificate, and + regenerate a certificate if necessary. The certificates in a kubeconfig file + are base64 encoded. The `base64 -d` command can be used to decode the certificate + and `openssl x509 -text -noout` can be used for viewing the certificate information. +- Another workaround is to overwrite the existing `kubeconfig` for the "admin" user: + + ```sh + mv $HOME/.kube $HOME/.kube.bak + sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config + sudo chown $(id -u):$(id -g) $HOME/.kube/config + ``` + +## Default NIC When using flannel as the pod network in Vagrant + +The following error might indicate that something was wrong in the pod network: + +```sh +Error from server (NotFound): the server could not find the requested resource +``` + +- If you're using flannel as the pod network inside Vagrant, then you will have to specify the default interface name for flannel. + + Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed. + + This may lead to problems with flannel, which defaults to the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this, pass the `--iface eth1` flag to flannel so that the second interface is chosen. + +## Non-public IP used for containers + +In some situations `kubectl logs` and `kubectl run` commands may return with the following errors in an otherwise functional cluster: + +```sh +Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc65b868-glc5m/mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host +``` + +- This may be due to Kubernetes using an IP that can not communicate with other IPs on the seemingly same subnet, possibly by policy of the machine provider. +- Digital Ocean assigns a public IP to `eth0` as well as a private one to be used internally as anchor for their floating IP feature, yet `kubelet` will pick the latter as the node's `InternalIP` instead of the public one. + + Use `ip addr show` to check for this scenario instead of `ifconfig` because `ifconfig` will not display the offending alias IP address. Alternatively an API endpoint specific to Digital Ocean allows to query for the anchor IP from the droplet: + + ```sh + curl http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address + ``` + + The workaround is to tell `kubelet` which IP to use using `--node-ip`. When using Digital Ocean, it can be the public one (assigned to `eth0`) or the private one (assigned to `eth1`) should you want to use the optional private network. 
The [`KubeletExtraArgs` section of the kubeadm `NodeRegistrationOptions` structure](https://github.com/kubernetes/kubernetes/blob/release-1.12/cmd/kubeadm/app/apis/kubeadm/v1alpha3/types.go#L163-L166) can be used for this. + + Then restart `kubelet`: + + ```sh + systemctl daemon-reload + systemctl restart kubelet + ``` + +## Services with externalTrafficPolicy=Local are not reachable + +On nodes where the hostname for the kubelet is overridden using the `--hostname-override` option, kube-proxy will default to treating 127.0.0.1 as the node IP, which results in rejecting connections for Services configured for `externalTrafficPolicy=Local`. This situation can be verified by checking the output of `kubectl -n kube-system logs `: + +```sh +W0507 22:33:10.372369 1 server.go:586] Failed to retrieve node info: nodes "ip-10-0-23-78" not found +W0507 22:33:10.372474 1 proxier.go:463] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP +``` + +A workaround for this is to modify the kube-proxy DaemonSet in the following way: + +```sh +kubectl -n kube-system patch --type json daemonset kube-proxy -p "$(cat <<'EOF' +[ + { + "op": "add", + "path": "/spec/template/spec/containers/0/env", + "value": [ + { + "name": "NODE_NAME", + "valueFrom": { + "fieldRef": { + "apiVersion": "v1", + "fieldPath": "spec.nodeName" + } + } + } + ] + }, + { + "op": "add", + "path": "/spec/template/spec/containers/0/command/-", + "value": "--hostname-override=${NODE_NAME}" + } +] +EOF +)" + +``` + +## `coredns` pods have `CrashLoopBackOff` or `Error` state + +If you have nodes that are running SELinux with an older version of Docker you might experience a scenario +where the `coredns` pods are not starting. To solve that you can try one of the following options: + +- Upgrade to a [newer version of Docker](/docs/setup/independent/install-kubeadm/#installing-docker). +- [Disable SELinux](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-enabling_and_disabling_selinux-disabling_selinux). +- Modify the `coredns` deployment to set `allowPrivilegeEscalation` to `true`: + +```bash +kubectl -n kube-system get deployment coredns -o yaml | \ + sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | \ + kubectl apply -f - +``` + +{{< warning >}} +**Warning**: Disabling SELinux or setting `allowPrivilegeEscalation` to `true` can compromise +the security of your cluster. +{{< /warning >}} + +{{% /capture %}} diff --git a/content/ko/docs/setup/minikube.md b/content/ko/docs/setup/minikube.md new file mode 100644 index 0000000000000..1507cce2da0eb --- /dev/null +++ b/content/ko/docs/setup/minikube.md @@ -0,0 +1,382 @@ +--- +title: Minikube로 로컬 상에서 쿠버네티스 구동 +--- + +Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day. + +{{< toc >}} + +### Minikube 특징 + +* Minikube supports Kubernetes features such as: + * DNS + * NodePorts + * ConfigMaps and Secrets + * Dashboards + * Container Runtime: Docker, [rkt](https://github.com/rkt/rkt), [CRI-O](https://github.com/kubernetes-incubator/cri-o) and [containerd](https://github.com/containerd/containerd) + * Enabling CNI (Container Network Interface) + * Ingress + +## 설치 + +See [Installing Minikube](/docs/tasks/tools/install-minikube/). + +## 빠른 시작 + +Here's a brief demo of minikube usage. 
+If you want to change the VM driver add the appropriate `--vm-driver=xxx` flag to `minikube start`. Minikube supports +the following drivers: + +* virtualbox +* vmwarefusion +* kvm2 ([driver installation](https://git.k8s.io/minikube/docs/drivers.md#kvm2-driver)) +* kvm ([driver installation](https://git.k8s.io/minikube/docs/drivers.md#kvm-driver)) +* hyperkit ([driver installation](https://git.k8s.io/minikube/docs/drivers.md#hyperkit-driver)) +* xhyve ([driver installation](https://git.k8s.io/minikube/docs/drivers.md#xhyve-driver)) (deprecated) + +Note that the IP below is dynamic and can change. It can be retrieved with `minikube ip`. + +```shell +$ minikube start +Starting local Kubernetes cluster... +Running pre-create checks... +Creating machine... +Starting local Kubernetes cluster... + +$ kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080 +deployment.apps/hello-minikube created +$ kubectl expose deployment hello-minikube --type=NodePort +service/hello-minikube exposed + +# We have now launched an echoserver pod but we have to wait until the pod is up before curling/accessing it +# via the exposed service. +# To check whether the pod is up and running we can use the following: +$ kubectl get pod +NAME READY STATUS RESTARTS AGE +hello-minikube-3383150820-vctvh 0/1 ContainerCreating 0 3s +# We can see that the pod is still being created from the ContainerCreating status +$ kubectl get pod +NAME READY STATUS RESTARTS AGE +hello-minikube-3383150820-vctvh 1/1 Running 0 13s +# We can see that the pod is now Running and we will now be able to curl it: +$ curl $(minikube service hello-minikube --url) +CLIENT VALUES: +client_address=192.168.99.1 +command=GET +real path=/ +... +$ kubectl delete services hello-minikube +service "hello-minikube" deleted +$ kubectl delete deployment hello-minikube +deployment.extensions "hello-minikube" deleted +$ minikube stop +Stopping local Kubernetes cluster... +Stopping "minikube"... +``` + +### 다른 컨테이너 런타임 + +#### containerd + +To use [containerd](https://github.com/containerd/containerd) as the container runtime, run: + +```bash +$ minikube start \ + --network-plugin=cni \ + --container-runtime=containerd \ + --bootstrapper=kubeadm +``` + +Or you can use the extended version: + +```bash +$ minikube start \ + --network-plugin=cni \ + --extra-config=kubelet.container-runtime=remote \ + --extra-config=kubelet.container-runtime-endpoint=unix:///run/containerd/containerd.sock \ + --extra-config=kubelet.image-service-endpoint=unix:///run/containerd/containerd.sock \ + --bootstrapper=kubeadm +``` + +#### CRI-O + +To use [CRI-O](https://github.com/kubernetes-incubator/cri-o) as the container runtime, run: + +```bash +$ minikube start \ + --network-plugin=cni \ + --container-runtime=cri-o \ + --bootstrapper=kubeadm +``` + +Or you can use the extended version: + +```bash +$ minikube start \ + --network-plugin=cni \ + --extra-config=kubelet.container-runtime=remote \ + --extra-config=kubelet.container-runtime-endpoint=/var/run/crio.sock \ + --extra-config=kubelet.image-service-endpoint=/var/run/crio.sock \ + --bootstrapper=kubeadm +``` + +#### rkt 컨테이너 엔진 + +To use [rkt](https://github.com/rkt/rkt) as the container runtime run: + +```shell +$ minikube start \ + --network-plugin=cni \ + --container-runtime=rkt +``` + +This will use an alternative minikube ISO image containing both rkt, and Docker, and enable CNI networking. 
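
Whichever runtime you choose, it can be useful to confirm that the node actually registered with it. A minimal check, sketched under the assumption that the cluster is already up and uses the default node name `minikube` (neither is guaranteed by the commands above):

```shell
# Print the container runtime the node reported to the API server,
# e.g. something like docker://..., containerd://... or cri-o://... depending on the flags used.
kubectl get node minikube -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}'
```
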
+ +### 드라이버 플러그인 + +See [DRIVERS](https://git.k8s.io/minikube/docs/drivers.md) for details on supported drivers and how to install +plugins, if required. + +### 도커 데몬 재사용 + +When using a single VM of Kubernetes, it's really handy to reuse the minikube's built-in Docker daemon; as this means you don't have to build a docker registry on your host machine and push the image into it - you can just build inside the same docker daemon as minikube which speeds up local experiments. Just make sure you tag your Docker image with something other than 'latest' and use that tag while you pull the image. Otherwise, if you do not specify version of your image, it will be assumed as `:latest`, with pull image policy of `Always` correspondingly, which may eventually result in `ErrImagePull` as you may not have any versions of your Docker image out there in the default docker registry (usually DockerHub) yet. + +To be able to work with the docker daemon on your mac/linux host use the `docker-env command` in your shell: + +``` +eval $(minikube docker-env) +``` +You should now be able to use docker on the command line on your host mac/linux machine talking to the docker daemon inside the minikube VM: + +``` +docker ps +``` + +On Centos 7, docker may report the following error: + +``` +Could not read CA certificate "/etc/docker/ca.pem": open /etc/docker/ca.pem: no such file or directory +``` + +The fix is to update /etc/sysconfig/docker to ensure that minikube's environment changes are respected: + +``` +< DOCKER_CERT_PATH=/etc/docker +--- +> if [ -z "${DOCKER_CERT_PATH}" ]; then +> DOCKER_CERT_PATH=/etc/docker +> fi +``` + +Remember to turn off the imagePullPolicy:Always, as otherwise Kubernetes won't use images you built locally. + +## 클러스터 관리 + +### 클러스터 시작 + +The `minikube start` command can be used to start your cluster. +This command creates and configures a virtual machine that runs a single-node Kubernetes cluster. +This command also configures your [kubectl](/docs/user-guide/kubectl-overview/) installation to communicate with this cluster. + +If you are behind a web proxy, you will need to pass this information in e.g. via + +``` +https_proxy= minikube start --docker-env http_proxy= --docker-env https_proxy= --docker-env no_proxy=192.168.99.0/24 +``` + +Unfortunately just setting the environment variables will not work. + +Minikube will also create a "minikube" context, and set it to default in kubectl. +To switch back to this context later, run this command: `kubectl config use-context minikube`. + +#### 쿠버네티스 버전 지정 + +Minikube supports running multiple different versions of Kubernetes. You can +access a list of all available versions via + +``` +minikube get-k8s-versions +``` + +You can specify the specific version of Kubernetes for Minikube to use by +adding the `--kubernetes-version` string to the `minikube start` command. For +example, to run version `v1.7.3`, you would run the following: + +``` +minikube start --kubernetes-version v1.7.3 +``` + +### 쿠버네티스 구성 + +Minikube has a "configurator" feature that allows users to configure the Kubernetes components with arbitrary values. +To use this feature, you can use the `--extra-config` flag on the `minikube start` command. + +This flag is repeated, so you can pass it several times with several different values to set multiple options. + +This flag takes a string of the form `component.key=value`, where `component` is one of the strings from the below list, `key` is a value on the +configuration struct and `value` is the value to set. 
+ +Valid keys can be found by examining the documentation for the Kubernetes `componentconfigs` for each component. +Here is the documentation for each supported configuration: + +* [kubelet](https://godoc.org/k8s.io/kubernetes/pkg/kubelet/apis/kubeletconfig#KubeletConfiguration) +* [apiserver](https://godoc.org/k8s.io/kubernetes/cmd/kube-apiserver/app/options#ServerRunOptions) +* [proxy](https://godoc.org/k8s.io/kubernetes/pkg/proxy/apis/kubeproxyconfig#KubeProxyConfiguration) +* [controller-manager](https://godoc.org/k8s.io/kubernetes/pkg/apis/componentconfig#KubeControllerManagerConfiguration) +* [etcd](https://godoc.org/github.com/coreos/etcd/etcdserver#ServerConfig) +* [scheduler](https://godoc.org/k8s.io/kubernetes/pkg/apis/componentconfig#KubeSchedulerConfiguration) + +#### 예제 + +To change the `MaxPods` setting to 5 on the Kubelet, pass this flag: `--extra-config=kubelet.MaxPods=5`. + +This feature also supports nested structs. To change the `LeaderElection.LeaderElect` setting to `true` on the scheduler, pass this flag: `--extra-config=scheduler.LeaderElection.LeaderElect=true`. + +To set the `AuthorizationMode` on the `apiserver` to `RBAC`, you can use: `--extra-config=apiserver.Authorization.Mode=RBAC`. + +### 클러스터 중지 +The `minikube stop` command can be used to stop your cluster. +This command shuts down the minikube virtual machine, but preserves all cluster state and data. +Starting the cluster again will restore it to it's previous state. + +### 클러스터 삭제 +The `minikube delete` command can be used to delete your cluster. +This command shuts down and deletes the minikube virtual machine. No data or state is preserved. + +## 클러스터와 상호 작용 + +### Kubectl + +The `minikube start` command creates a "[kubectl context](/docs/reference/generated/kubectl/kubectl-commands/#-em-set-context-em-)" called "minikube". +This context contains the configuration to communicate with your minikube cluster. + +Minikube sets this context to default automatically, but if you need to switch back to it in the future, run: + +`kubectl config use-context minikube`, + +Or pass the context on each command like this: `kubectl get pods --context=minikube`. + +### 대시보드 + +To access the [Kubernetes Dashboard](/docs/tasks/access-application-cluster/web-ui-dashboard/), run this command in a shell after starting minikube to get the address: + +```shell +minikube dashboard +``` + +### 서비스 + +To access a service exposed via a node port, run this command in a shell after starting minikube to get the address: + +```shell +minikube service [-n NAMESPACE] [--url] NAME +``` + +## 네트워킹 + +The minikube VM is exposed to the host system via a host-only IP address, that can be obtained with the `minikube ip` command. +Any services of type `NodePort` can be accessed over that IP address, on the NodePort. + +To determine the NodePort for your service, you can use a `kubectl` command like this: + +`kubectl get service $SERVICE --output='jsonpath="{.spec.ports[0].nodePort}"'` + +## 퍼시스턴트 볼륨 +Minikube supports [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) of type `hostPath`. +These PersistentVolumes are mapped to a directory inside the minikube VM. + +The Minikube VM boots into a tmpfs, so most directories will not be persisted across reboots (`minikube stop`). 
+However, Minikube is configured to persist files stored under the following host directories: + +* `/data` +* `/var/lib/localkube` +* `/var/lib/docker` + +Here is an example PersistentVolume config to persist data in the `/data` directory: + +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: pv0001 +spec: + accessModes: + - ReadWriteOnce + capacity: + storage: 5Gi + hostPath: + path: /data/pv0001/ +``` + +## 호스트 폴더 마운트 +Some drivers will mount a host folder within the VM so that you can easily share files between the VM and host. These are not configurable at the moment and different for the driver and OS you are using. + +**Note:** Host folder sharing is not implemented in the KVM driver yet. + +| Driver | OS | HostFolder | VM | +| --- | --- | --- | --- | +| VirtualBox | Linux | /home | /hosthome | +| VirtualBox | macOS | /Users | /Users | +| VirtualBox | Windows | C://Users | /c/Users | +| VMware Fusion | macOS | /Users | /Users | +| Xhyve | macOS | /Users | /Users | + + +## 프라이빗 컨테이너 레지스트리 + +To access a private container registry, follow the steps on [this page](/docs/concepts/containers/images/). + +We recommend you use `ImagePullSecrets`, but if you would like to configure access on the minikube VM you can place the `.dockercfg` in the `/home/docker` directory or the `config.json` in the `/home/docker/.docker` directory. + +## 애드온 + +In order to have minikube properly start or restart custom addons, +place the addons you wish to be launched with minikube in the `~/.minikube/addons` +directory. Addons in this folder will be moved to the minikube VM and +launched each time minikube is started or restarted. + +## HTTP 프록시 환경에서 Minikube 사용 + +Minikube creates a Virtual Machine that includes Kubernetes and a Docker daemon. +When Kubernetes attempts to schedule containers using Docker, the Docker daemon may require external network access to pull containers. + +If you are behind an HTTP proxy, you may need to supply Docker with the proxy settings. +To do this, pass the required environment variables as flags during `minikube start`. + +For example: + +```shell +$ minikube start --docker-env http_proxy=http://$YOURPROXY:PORT \ + --docker-env https_proxy=https://$YOURPROXY:PORT +``` + +If your Virtual Machine address is 192.168.99.100, then chances are your proxy settings will prevent kubectl from directly reaching it. +To by-pass proxy configuration for this IP address, you should modify your no_proxy settings. You can do so with: + +```shell +$ export no_proxy=$no_proxy,$(minikube ip) +``` + +## 알려진 이슈 +* Features that require a Cloud Provider will not work in Minikube. These include: + * LoadBalancers +* Features that require multiple nodes. These include: + * Advanced scheduling policies + +## 설계 + +Minikube uses [libmachine](https://github.com/docker/machine/tree/master/libmachine) for provisioning VMs, and [localkube](https://git.k8s.io/minikube/pkg/localkube) (originally written and donated to this project by [RedSpread](https://github.com/redspread)) for running the cluster. + +For more information about minikube, see the [proposal](https://git.k8s.io/community/contributors/design-proposals/cluster-lifecycle/local-cluster-ux.md). + +## 추가적인 링크: +* **Goals and Non-Goals**: For the goals and non-goals of the minikube project, please see our [roadmap](https://git.k8s.io/minikube/docs/contributors/roadmap.md). +* **Development Guide**: See [CONTRIBUTING.md](https://git.k8s.io/minikube/CONTRIBUTING.md) for an overview of how to send pull requests. 
+* **Building Minikube**: For instructions on how to build/test minikube from source, see the [build guide](https://git.k8s.io/minikube/docs/contributors/build_guide.md) +* **Adding a New Dependency**: For instructions on how to add a new dependency to minikube see the [adding dependencies guide](https://git.k8s.io/minikube/docs/contributors/adding_a_dependency.md) +* **Adding a New Addon**: For instruction on how to add a new addon for minikube see the [adding an addon guide](https://git.k8s.io/minikube/docs/contributors/adding_an_addon.md) +* **Updating Kubernetes**: For instructions on how to update kubernetes see the [updating Kubernetes guide](https://git.k8s.io/minikube/docs/contributors/updating_kubernetes.md) + +## 커뮤니티 + +Contributions, questions, and comments are all welcomed and encouraged! minikube developers hang out on [Slack](https://kubernetes.slack.com) in the #minikube channel (get an invitation [here](http://slack.kubernetes.io/)). We also have the [kubernetes-dev Google Groups mailing list](https://groups.google.com/forum/#!forum/kubernetes-dev). If you are posting to the list please prefix your subject with "minikube: ". diff --git a/content/ko/docs/setup/multiple-zones.md b/content/ko/docs/setup/multiple-zones.md new file mode 100644 index 0000000000000..ea1d053fc3fef --- /dev/null +++ b/content/ko/docs/setup/multiple-zones.md @@ -0,0 +1,331 @@ +--- +title: 여러 영역에서 구동 +weight: 90 +--- + +## 소개 + +Kubernetes 1.2 adds support for running a single cluster in multiple failure zones +(GCE calls them simply "zones", AWS calls them "availability zones", here we'll refer to them as "zones"). +This is a lightweight version of a broader Cluster Federation feature (previously referred to by the affectionate +nickname ["Ubernetes"](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/design-proposals/multicluster/federation.md)). +Full Cluster Federation allows combining separate +Kubernetes clusters running in different regions or cloud providers +(or on-premises data centers). However, many +users simply want to run a more available Kubernetes cluster in multiple zones +of their single cloud provider, and this is what the multizone support in 1.2 allows +(this previously went by the nickname "Ubernetes Lite"). + +Multizone support is deliberately limited: a single Kubernetes cluster can run +in multiple zones, but only within the same region (and cloud provider). Only +GCE and AWS are currently supported automatically (though it is easy to +add similar support for other clouds or even bare metal, by simply arranging +for the appropriate labels to be added to nodes and volumes). + + +{{< toc >}} + +## 기능 + +When nodes are started, the kubelet automatically adds labels to them with +zone information. + +Kubernetes will automatically spread the pods in a replication controller +or service across nodes in a single-zone cluster (to reduce the impact of +failures.) With multiple-zone clusters, this spreading behavior is +extended across zones (to reduce the impact of zone failures.) (This is +achieved via `SelectorSpreadPriority`). This is a best-effort +placement, and so if the zones in your cluster are heterogeneous +(e.g. different numbers of nodes, different types of nodes, or +different pod resource requirements), this might prevent perfectly +even spreading of your pods across zones. If desired, you can use +homogeneous zones (same number and types of nodes) to reduce the +probability of unequal spreading. 
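
One way to see this spreading in practice is to compare where a controller's pods landed with each node's zone label. The sketch below assumes a running multi-zone cluster and pods labelled `app=guestbook`, as in the walkthrough later on this page:

```shell
# Show which node each pod was scheduled onto.
kubectl get pods -l app=guestbook -o wide
# Show every node together with its zone label, so the spread can be checked by eye.
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone
```
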
+ +When persistent volumes are created, the `PersistentVolumeLabel` +admission controller automatically adds zone labels to them. The scheduler (via the +`VolumeZonePredicate` predicate) will then ensure that pods that claim a +given volume are only placed into the same zone as that volume, as volumes +cannot be attached across zones. + +## 제한 사항 + +There are some important limitations of the multizone support: + +* We assume that the different zones are located close to each other in the +network, so we don't perform any zone-aware routing. In particular, traffic +that goes via services might cross zones (even if pods in some pods backing that service +exist in the same zone as the client), and this may incur additional latency and cost. + +* Volume zone-affinity will only work with a `PersistentVolume`, and will not +work if you directly specify an EBS volume in the pod spec (for example). + +* Clusters cannot span clouds or regions (this functionality will require full +federation support). + +* Although your nodes are in multiple zones, kube-up currently builds +a single master node by default. While services are highly +available and can tolerate the loss of a zone, the control plane is +located in a single zone. Users that want a highly available control +plane should follow the [high availability](/docs/admin/high-availability) instructions. + +### Volume limitations +The following limitations are addressed with [topology-aware volume binding](/docs/concepts/storage/storage-classes/#volume-binding-mode). + +* StatefulSet volume zone spreading when using dynamic provisioning is currently not compatible with + pod affinity or anti-affinity policies. + +* If the name of the StatefulSet contains dashes ("-"), volume zone spreading + may not provide a uniform distribution of storage across zones. + +* When specifying multiple PVCs in a Deployment or Pod spec, the StorageClass + needs to be configured for a specific single zone, or the PVs need to be + statically provisioned in a specific zone. Another workaround is to use a + StatefulSet, which will ensure that all the volumes for a replica are + provisioned in the same zone. + +## 연습 + +We're now going to walk through setting up and using a multi-zone +cluster on both GCE & AWS. To do so, you bring up a full cluster +(specifying `MULTIZONE=true`), and then you add nodes in additional zones +by running `kube-up` again (specifying `KUBE_USE_EXISTING_MASTER=true`). + +### 클러스터 가져오기 + +Create the cluster as normal, but pass MULTIZONE to tell the cluster to manage multiple zones; creating nodes in us-central1-a. + +GCE: + +```shell +curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-a NUM_NODES=3 bash +``` + +AWS: + +```shell +curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a NUM_NODES=3 bash +``` + +This step brings up a cluster as normal, still running in a single zone +(but `MULTIZONE=true` has enabled multi-zone capabilities). + +### 라벨이 지정된 노드 확인 + +View the nodes; you can see that they are labeled with zone information. +They are all in `us-central1-a` (GCE) or `us-west-2a` (AWS) so far. 
The +labels are `failure-domain.beta.kubernetes.io/region` for the region, +and `failure-domain.beta.kubernetes.io/zone` for the zone: + +```shell +> kubectl get nodes --show-labels + + +NAME STATUS ROLES AGE VERSION LABELS +kubernetes-master Ready,SchedulingDisabled 6m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master +kubernetes-minion-87j9 Ready 6m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9 +kubernetes-minion-9vlv Ready 6m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv +kubernetes-minion-a12q Ready 6m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q +``` + +### 두번째 영역에 더 많은 노드 추가하기 + +Let's add another set of nodes to the existing cluster, reusing the +existing master, running in a different zone (us-central1-b or us-west-2b). +We run kube-up again, but by specifying `KUBE_USE_EXISTING_MASTER=true` +kube-up will not create a new master, but will reuse one that was previously +created instead. + +GCE: + +```shell +KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-b NUM_NODES=3 kubernetes/cluster/kube-up.sh +``` + +On AWS we also need to specify the network CIDR for the additional +subnet, along with the master internal IP address: + +```shell +KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2b NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.1.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh +``` + + +View the nodes again; 3 more nodes should have launched and be tagged +in us-central1-b: + +```shell +> kubectl get nodes --show-labels + +NAME STATUS ROLES AGE VERSION LABELS +kubernetes-master Ready,SchedulingDisabled 16m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master +kubernetes-minion-281d Ready 2m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d +kubernetes-minion-87j9 Ready 16m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9 +kubernetes-minion-9vlv Ready 16m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv +kubernetes-minion-a12q Ready 17m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q +kubernetes-minion-pp2f Ready 2m v1.11.1 
beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-pp2f +kubernetes-minion-wf8i Ready 2m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-wf8i +``` + +### 볼륨 어피니티 + +Create a volume using the dynamic volume creation (only PersistentVolumes are supported for zone affinity): + +```json +kubectl create -f - < kubectl get pv --show-labels +NAME CAPACITY ACCESSMODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE LABELS +pv-gce-mj4gm 5Gi RWO Retain Bound default/claim1 manual 46s failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a +``` + +So now we will create a pod that uses the persistent volume claim. +Because GCE PDs / AWS EBS volumes cannot be attached across zones, +this means that this pod can only be created in the same zone as the volume: + +```yaml +kubectl create -f - < kubectl describe pod mypod | grep Node +Node: kubernetes-minion-9vlv/10.240.0.5 +> kubectl get node kubernetes-minion-9vlv --show-labels +NAME STATUS AGE VERSION LABELS +kubernetes-minion-9vlv Ready 22m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv +``` + +### 여러 영역에 파드 분배하기 + +Pods in a replication controller or service are automatically spread +across zones. First, let's launch more nodes in a third zone: + +GCE: + +```shell +KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-f NUM_NODES=3 kubernetes/cluster/kube-up.sh +``` + +AWS: + +```shell +KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2c NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.2.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh +``` + +Verify that you now have nodes in 3 zones: + +```shell +kubectl get nodes --show-labels +``` + +Create the guestbook-go example, which includes an RC of size 3, running a simple web app: + +```shell +find kubernetes/examples/guestbook-go/ -name '*.json' | xargs -I {} kubectl create -f {} +``` + +The pods should be spread across all 3 zones: + +```shell +> kubectl describe pod -l app=guestbook | grep Node +Node: kubernetes-minion-9vlv/10.240.0.5 +Node: kubernetes-minion-281d/10.240.0.8 +Node: kubernetes-minion-olsh/10.240.0.11 + + > kubectl get node kubernetes-minion-9vlv kubernetes-minion-281d kubernetes-minion-olsh --show-labels +NAME STATUS ROLES AGE VERSION LABELS +kubernetes-minion-9vlv Ready 34m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv +kubernetes-minion-281d Ready 20m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d +kubernetes-minion-olsh Ready 3m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-f,kubernetes.io/hostname=kubernetes-minion-olsh +``` + + +Load-balancers span all zones in a 
cluster; the guestbook-go example +includes an example load-balanced service: + +```shell +> kubectl describe service guestbook | grep LoadBalancer.Ingress +LoadBalancer Ingress: 130.211.126.21 + +> ip=130.211.126.21 + +> curl -s http://${ip}:3000/env | grep HOSTNAME + "HOSTNAME": "guestbook-44sep", + +> (for i in `seq 20`; do curl -s http://${ip}:3000/env | grep HOSTNAME; done) | sort | uniq + "HOSTNAME": "guestbook-44sep", + "HOSTNAME": "guestbook-hum5n", + "HOSTNAME": "guestbook-ppm40", +``` + +The load balancer correctly targets all the pods, even though they are in multiple zones. + +### 클러스터 강제 종료 + +When you're done, clean up: + +GCE: + +```shell +KUBERNETES_PROVIDER=gce KUBE_USE_EXISTING_MASTER=true KUBE_GCE_ZONE=us-central1-f kubernetes/cluster/kube-down.sh +KUBERNETES_PROVIDER=gce KUBE_USE_EXISTING_MASTER=true KUBE_GCE_ZONE=us-central1-b kubernetes/cluster/kube-down.sh +KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-a kubernetes/cluster/kube-down.sh +``` + +AWS: + +```shell +KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2c kubernetes/cluster/kube-down.sh +KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2b kubernetes/cluster/kube-down.sh +KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a kubernetes/cluster/kube-down.sh +``` diff --git a/content/ko/docs/setup/node-conformance.md b/content/ko/docs/setup/node-conformance.md new file mode 100644 index 0000000000000..c59e89e8a759f --- /dev/null +++ b/content/ko/docs/setup/node-conformance.md @@ -0,0 +1,97 @@ +--- +title: 노드 구성 검증하기 +--- + +{{< toc >}} + +## 노드 적합성 테스트 + +*Node conformance test* is a containerized test framework that provides a system +verification and functionality test for a node. The test validates whether the +node meets the minimum requirements for Kubernetes; a node that passes the test +is qualified to join a Kubernetes cluster. + +## 제한 사항 + +In Kubernetes version 1.5, node conformance test has the following limitations: + +* Node conformance test only supports Docker as the container runtime. + +## 노드 필수 구성 요소 + +To run node conformance test, a node must satisfy the same prerequisites as a +standard Kubernetes node. At a minimum, the node should have the following +daemons installed: + +* Container Runtime (Docker) +* Kubelet + +## 노드 적합성 테스트 실행 + +To run the node conformance test, perform the following steps: + +1. Point your Kubelet to localhost `--api-servers="http://localhost:8080"`, +because the test framework starts a local master to test Kubelet. There are some +other Kubelet flags you may care: + * `--pod-cidr`: If you are using `kubenet`, you should specify an arbitrary CIDR + to Kubelet, for example `--pod-cidr=10.180.0.0/24`. + * `--cloud-provider`: If you are using `--cloud-provider=gce`, you should + remove the flag to run the test. + +2. Run the node conformance test with command: + +```shell +# $CONFIG_DIR is the pod manifest path of your Kubelet. +# $LOG_DIR is the test output path. +sudo docker run -it --rm --privileged --net=host \ + -v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \ + k8s.gcr.io/node-test:0.2 +``` + +## 다른 아키텍처에서 노드 적합성 테스트 실행 + +Kubernetes also provides node conformance test docker images for other +architectures: + + Arch | Image | +--------|:-----------------:| + amd64 | node-test-amd64 | + arm | node-test-arm | + arm64 | node-test-arm64 | + +## 선택된 테스트 실행 + +To run specific tests, overwrite the environment variable `FOCUS` with the +regular expression of tests you want to run. 
+ +```shell +sudo docker run -it --rm --privileged --net=host \ + -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \ + -e FOCUS=MirrorPod \ # Only run MirrorPod test + k8s.gcr.io/node-test:0.2 +``` + +To skip specific tests, overwrite the environment variable `SKIP` with the +regular expression of tests you want to skip. + +```shell +sudo docker run -it --rm --privileged --net=host \ + -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \ + -e SKIP=MirrorPod \ # Run all conformance tests but skip MirrorPod test + k8s.gcr.io/node-test:0.2 +``` + +Node conformance test is a containerized version of [node e2e test](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/devel/e2e-node-tests.md). +By default, it runs all conformance tests. + +Theoretically, you can run any node e2e test if you configure the container and +mount required volumes properly. But **it is strongly recommended to only run conformance +test**, because it requires much more complex configuration to run non-conformance test. + +## 주의 사항 + +* The test leaves some docker images on the node, including the node conformance + test image and images of containers used in the functionality + test. +* The test leaves dead containers on the node. These containers are created + during the functionality test. diff --git a/content/ko/docs/setup/on-premises-vm/_index.md b/content/ko/docs/setup/on-premises-vm/_index.md new file mode 100644 index 0000000000000..92d67a957cc0b --- /dev/null +++ b/content/ko/docs/setup/on-premises-vm/_index.md @@ -0,0 +1,4 @@ +--- +title: 온-프레미스 VM +weight: 60 +--- diff --git a/content/ko/docs/setup/on-premises-vm/cloudstack.md b/content/ko/docs/setup/on-premises-vm/cloudstack.md new file mode 100644 index 0000000000000..fa2924927af2d --- /dev/null +++ b/content/ko/docs/setup/on-premises-vm/cloudstack.md @@ -0,0 +1,120 @@ +--- +title: Cloudstack +content_template: templates/concept +--- + +{{% capture overview %}} + +[CloudStack](https://cloudstack.apache.org/) is a software to build public and private clouds based on hardware virtualization principles (traditional IaaS). To deploy Kubernetes on CloudStack there are several possibilities depending on the Cloud being used and what images are made available. CloudStack also has a vagrant plugin available, hence Vagrant could be used to deploy Kubernetes either using the existing shell provisioner or using new Salt based recipes. + +[CoreOS](http://coreos.com) templates for CloudStack are built [nightly](http://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions. + +This guide uses a single [Ansible playbook](https://github.com/apachecloudstack/k8s), which is completely automated and can deploy Kubernetes on a CloudStack based Cloud using CoreOS images. The playbook, creates an ssh key pair, creates a security group and associated rules and finally starts coreOS instances configured via cloud-init. 
+ +{{% /capture %}} + +{{% capture body %}} + +## Prerequisites + +```shell +sudo apt-get install -y python-pip libssl-dev +sudo pip install cs +sudo pip install sshpubkeys +sudo apt-get install software-properties-common +sudo apt-add-repository ppa:ansible/ansible +sudo apt-get update +sudo apt-get install ansible +``` + +On CloudStack server you also have to install libselinux-python : + +```shell +yum install libselinux-python +``` + +[_cs_](https://github.com/exoscale/cs) is a python module for the CloudStack API. + +Set your CloudStack endpoint, API keys and HTTP method used. + +You can define them as environment variables: `CLOUDSTACK_ENDPOINT`, `CLOUDSTACK_KEY`, `CLOUDSTACK_SECRET` and `CLOUDSTACK_METHOD`. + +Or create a `~/.cloudstack.ini` file: + +```none +[cloudstack] +endpoint = +key = +secret = +method = post +``` + +We need to use the http POST method to pass the _large_ userdata to the coreOS instances. + +### Clone the playbook + +```shell +git clone https://github.com/apachecloudstack/k8s +cd kubernetes-cloudstack +``` + +### Create a Kubernetes cluster + +You simply need to run the playbook. + +```shell +ansible-playbook k8s.yml +``` + +Some variables can be edited in the `k8s.yml` file. + +```none +vars: + ssh_key: k8s + k8s_num_nodes: 2 + k8s_security_group_name: k8s + k8s_node_prefix: k8s2 + k8s_template: + k8s_instance_type: +``` + +This will start a Kubernetes master node and a number of compute nodes (by default 2). +The `instance_type` and `template` are specific, edit them to specify your CloudStack cloud specific template and instance type (i.e. service offering). + +Check the tasks and templates in `roles/k8s` if you want to modify anything. + +Once the playbook as finished, it will print out the IP of the Kubernetes master: + +```none +TASK: [k8s | debug msg='k8s master IP is {{ k8s_master.default_ip }}'] ******** +``` + +SSH to it using the key that was created and using the _core_ user. + +```shell +ssh -i ~/.ssh/id_rsa_k8s core@ +``` + +And you can list the machines in your cluster: + +```shell +fleetctl list-machines +``` + +```none +MACHINE IP METADATA +a017c422... role=node +ad13bf84... role=master +e9af8293... role=node +``` + +## Support Level + + +IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level +-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- +CloudStack | Ansible | CoreOS | flannel | [docs](/docs/setup/on-premises-vm/cloudstack/) | | Community ([@Guiques](https://github.com/ltupin/)) + +For support level information on all solutions, see the [Table of solutions](/docs/setup/pick-right-solution/#table-of-solutions) chart. + +{{% /capture %}} diff --git a/content/ko/docs/setup/on-premises-vm/dcos.md b/content/ko/docs/setup/on-premises-vm/dcos.md new file mode 100644 index 0000000000000..f9cb4177f1192 --- /dev/null +++ b/content/ko/docs/setup/on-premises-vm/dcos.md @@ -0,0 +1,23 @@ +--- +title: Kubernetes on DC/OS +content_template: templates/concept +--- + +{{% capture overview %}} + +Mesosphere provides an easy option to provision Kubernetes onto [DC/OS](https://mesosphere.com/product/), offering: + +* Pure upstream Kubernetes +* Single-click cluster provisioning +* Highly available and secure by default +* Kubernetes running alongside fast-data platforms (e.g. 
Akka, Cassandra, Kafka, Spark) + +{{% /capture %}} + +{{% capture body %}} + +## Official Mesosphere Guide + +The canonical source of getting started on DC/OS is located in the [quickstart repo](https://github.com/mesosphere/dcos-kubernetes-quickstart). + +{{% /capture %}} diff --git a/content/ko/docs/setup/on-premises-vm/ovirt.md b/content/ko/docs/setup/on-premises-vm/ovirt.md new file mode 100644 index 0000000000000..5e25efec6d4df --- /dev/null +++ b/content/ko/docs/setup/on-premises-vm/ovirt.md @@ -0,0 +1,70 @@ +--- +title: oVirt +content_template: templates/concept +--- + +{{% capture overview %}} + +oVirt is a virtual datacenter manager that delivers powerful management of multiple virtual machines on multiple hosts. Using KVM and libvirt, oVirt can be installed on Fedora, CentOS, or Red Hat Enterprise Linux hosts to set up and manage your virtual data center. + +{{% /capture %}} + +{{% capture body %}} + +## oVirt Cloud Provider Deployment + +The oVirt cloud provider allows to easily discover and automatically add new VM instances as nodes to your Kubernetes cluster. +At the moment there are no community-supported or pre-loaded VM images including Kubernetes but it is possible to [import] or [install] Project Atomic (or Fedora) in a VM to [generate a template]. Any other distribution that includes Kubernetes may work as well. + +It is mandatory to [install the ovirt-guest-agent] in the guests for the VM ip address and hostname to be reported to ovirt-engine and ultimately to Kubernetes. + +Once the Kubernetes template is available it is possible to start instantiating VMs that can be discovered by the cloud provider. + +[import]: https://ovedou.blogspot.it/2014/03/importing-glance-images-as-ovirt.html +[install]: https://www.ovirt.org/documentation/quickstart/quickstart-guide/#create-virtual-machines +[generate a template]: https://www.ovirt.org/documentation/quickstart/quickstart-guide/#using-templates +[install the ovirt-guest-agent]: https://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-fedora/ + +## Using the oVirt Cloud Provider + +The oVirt Cloud Provider requires access to the oVirt REST-API to gather the proper information, the required credential should be specified in the `ovirt-cloud.conf` file: + +```none +[connection] +uri = https://localhost:8443/ovirt-engine/api +username = admin@internal +password = admin +``` + +In the same file it is possible to specify (using the `filters` section) what search query to use to identify the VMs to be reported to Kubernetes: + +```none +[filters] +# Search query used to find nodes +vms = tag=kubernetes +``` + +In the above example all the VMs tagged with the `kubernetes` label will be reported as nodes to Kubernetes. + +The `ovirt-cloud.conf` file then must be specified in kube-controller-manager: + +```shell +kube-controller-manager ... --cloud-provider=ovirt --cloud-config=/path/to/ovirt-cloud.conf ... +``` + +## oVirt Cloud Provider Screencast + +This short screencast demonstrates how the oVirt Cloud Provider can be used to dynamically add VMs to your Kubernetes cluster. + +[![Screencast](https://img.youtube.com/vi/JyyST4ZKne8/0.jpg)](https://www.youtube.com/watch?v=JyyST4ZKne8) + +## Support Level + + +IaaS Provider | Config. 
Mgmt | OS | Networking | Docs | Conforms | Support Level +-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- +oVirt | | | | [docs](/docs/setup/on-premises-vm/ovirt/) | | Community ([@simon3z](https://github.com/simon3z)) + +For support level information on all solutions, see the [Table of solutions](/docs/setup/pick-right-solution/#table-of-solutions) chart. + +{{% /capture %}} diff --git a/content/ko/docs/setup/pick-right-solution.md b/content/ko/docs/setup/pick-right-solution.md new file mode 100644 index 0000000000000..5ce3aefad8458 --- /dev/null +++ b/content/ko/docs/setup/pick-right-solution.md @@ -0,0 +1,250 @@ +--- +title: 알맞은 솔루션 선정 +weight: 10 +content_template: templates/concept +--- + +{{% capture overview %}} + +Kubernetes can run on various platforms: from your laptop, to VMs on a cloud provider, to a rack of +bare metal servers. The effort required to set up a cluster varies from running a single command to +crafting your own customized cluster. Use this guide to choose a solution that fits your needs. + +If you just want to "kick the tires" on Kubernetes, use the [local Docker-based solutions](#local-machine-solutions). + +When you are ready to scale up to more machines and higher availability, a [hosted solution](#hosted-solutions) is the easiest to create and maintain. + +[Turnkey cloud solutions](#turnkey-cloud-solutions) require only a few commands to create +and cover a wide range of cloud providers. [On-Premises turnkey cloud solutions](#on-premises-turnkey-cloud-solutions) have the simplicity of the turnkey cloud solution combined with the security of your own private network. + +If you already have a way to configure hosting resources, use [kubeadm](/docs/setup/independent/create-cluster-kubeadm/) to easily bring up a cluster with a single command per machine. + +[Custom solutions](#custom-solutions) vary from step-by-step instructions to general advice for setting up +a Kubernetes cluster from scratch. + +{{% /capture %}} + +{{% capture body %}} + +## 로컬 머신 솔루션 + +* [Minikube](/docs/setup/minikube/) is the recommended method for creating a local, single-node Kubernetes cluster for development and testing. Setup is completely automated and doesn't require a cloud provider account. + +* [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) can use VirtualBox on your machine to deploy Kubernetes to one or more VMs for development and test scenarios. Scales to full multi-node cluster. + +* [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers) is a Terraform/Packer/BASH based Infrastructure as Code (IaC) scripts to create a seven node (1 Boot, 1 Master, 1 Management, 1 Proxy and 3 Workers) LXD cluster on Linux Host. + +* [Kubeadm-dind](https://github.com/kubernetes-sigs/kubeadm-dind-cluster) is a multi-node (while minikube is single-node) Kubernetes cluster which only requires a docker daemon. It uses docker-in-docker technique to spawn the Kubernetes cluster. + +* [Ubuntu on LXD](/docs/getting-started-guides/ubuntu/local/) supports a nine-instance deployment on localhost. + +## 호스트 된 솔루션 + +* [AppsCode.com](https://appscode.com/products/cloud-deployment/) provides managed Kubernetes clusters for various public clouds, including AWS and Google Cloud Platform. + +* [APPUiO](https://appuio.ch) runs an OpenShift public cloud platform, supporting any Kubernetes workload. 
Additionally APPUiO offers Private Managed OpenShift Clusters, running on any public or private cloud. + +* [Amazon Elastic Container Service for Kubernetes](https://aws.amazon.com/eks/) offers managed Kubernetes service. + +* [Azure Kubernetes Service](https://azure.microsoft.com/services/container-service/) offers managed Kubernetes clusters. + +* [Giant Swarm](https://giantswarm.io/product/) offers managed Kubernetes clusters in their own datacenter, on-premises, or on public clouds. + +* [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) offers managed Kubernetes clusters. + +* [IBM Cloud Kubernetes Service](https://console.bluemix.net/docs/containers/container_index.html) offers managed Kubernetes clusters with isolation choice, operational tools, integrated security insight into images and containers, and integration with Watson, IoT, and data. + +* [Kubermatic](https://www.loodse.com) provides managed Kubernetes clusters for various public clouds, including AWS and Digital Ocean, as well as on-premises with OpenStack integration. + +* [Kublr](https://kublr.com) offers enterprise-grade secure, scalable, highly reliable Kubernetes clusters on AWS, Azure, GCP, and on-premise. It includes out-of-the-box backup and disaster recovery, multi-cluster centralized logging and monitoring, and built-in alerting. + +* [Madcore.Ai](https://madcore.ai) is devops-focused CLI tool for deploying Kubernetes infrastructure in AWS. Master, auto-scaling group nodes with spot-instances, ingress-ssl-lego, Heapster, and Grafana. + +* [OpenShift Dedicated](https://www.openshift.com/dedicated/) offers managed Kubernetes clusters powered by OpenShift. + +* [OpenShift Online](https://www.openshift.com/features/) provides free hosted access for Kubernetes applications. + +* [Oracle Container Engine for Kubernetes](https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengoverview.htm) is a fully-managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud. + +* [Platform9](https://platform9.com/products/kubernetes/) offers managed Kubernetes on-premises or on any public cloud, and provides 24/7 health monitoring and alerting. (Kube2go, a web-UI driven Kubernetes cluster deployment service Platform9 released, has been integrated to Platform9 Sandbox.) + +* [Stackpoint.io](https://stackpoint.io) provides Kubernetes infrastructure automation and management for multiple public clouds. + +## 턴키 클라우드 솔루션 + +These solutions allow you to create Kubernetes clusters on a range of Cloud IaaS providers with only a +few commands. These solutions are actively developed and have active community support. 
+ +* [Agile Stacks](https://www.agilestacks.com/products/kubernetes) +* [Alibaba Cloud](/docs/setup/turnkey/alibaba-cloud/) +* [APPUiO](https://appuio.ch) +* [AWS](/docs/setup/turnkey/aws/) +* [Azure](/docs/setup/turnkey/azure/) +* [CenturyLink Cloud](/docs/setup/turnkey/clc/) +* [Conjure-up Kubernetes with Ubuntu on AWS, Azure, Google Cloud, Oracle Cloud](/docs/getting-started-guides/ubuntu/) +* [Gardener](https://gardener.cloud/) +* [Google Compute Engine (GCE)](/docs/setup/turnkey/gce/) +* [IBM Cloud](https://github.com/patrocinio/kubernetes-softlayer) +* [Kontena Pharos](https://kontena.io/pharos/) +* [Kubermatic](https://cloud.kubermatic.io) +* [Kublr](https://kublr.com/) +* [Madcore.Ai](https://madcore.ai/) +* [Oracle Container Engine for K8s](https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengprerequisites.htm) +* [Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service) +* [Rancher 2.0](https://rancher.com/docs/rancher/v2.x/en/) +* [Stackpoint.io](/docs/setup/turnkey/stackpoint/) +* [Tectonic by CoreOS](https://coreos.com/tectonic) + +## 온-프레미스 턴키 클라우드 솔루션 +These solutions allow you to create Kubernetes clusters on your internal, secure, cloud network with only a +few commands. + +* [Agile Stacks](https://www.agilestacks.com/products/kubernetes) +* [APPUiO](https://appuio.ch) +* [IBM Cloud Private](https://www.ibm.com/cloud-computing/products/ibm-cloud-private/) +* [Kontena Pharos](https://kontena.io/pharos/) +* [Kubermatic](https://www.loodse.com) +* [Kublr](https://kublr.com/) +* [Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service) +* [Rancher 2.0](https://rancher.com/docs/rancher/v2.x/en/) +* [SUSE CaaS Platform](https://www.suse.com/products/caas-platform) +* [SUSE Cloud Application Platform](https://www.suse.com/products/cloud-application-platform/) + +## 사용자 지정 솔루션 + +Kubernetes can run on a wide range of Cloud providers and bare-metal environments, and with many +base operating systems. + +If you can find a guide below that matches your needs, use it. It may be a little out of date, but +it will be easier than starting from scratch. If you do want to start from scratch, either because you +have special requirements, or just because you want to understand what is underneath a Kubernetes +cluster, try the [Getting Started from Scratch](/docs/setup/scratch/) guide. + +If you are interested in supporting Kubernetes on a new platform, see +[Writing a Getting Started Guide](https://git.k8s.io/community/contributors/devel/writing-a-getting-started-guide.md). + +### 일반 + +If you already have a way to configure hosting resources, use +[kubeadm](/docs/setup/independent/create-cluster-kubeadm/) to easily bring up a cluster +with a single command per machine. + +### 클라우드 + +These solutions are combinations of cloud providers and operating systems not covered by the above solutions. 
+ +* [CoreOS on AWS or GCE](/docs/setup/custom-cloud/coreos/) +* [Gardener](https://gardener.cloud/) +* [Kublr](https://kublr.com/) +* [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/) +* [Kubespray](/docs/setup/custom-cloud/kubespray/) +* [Rancher Kubernetes Engine (RKE)](https://github.com/rancher/rke) + +### 온-프레미스 VM + +* [CloudStack](/docs/setup/on-premises-vm/cloudstack/) (uses Ansible, CoreOS and flannel) +* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) (uses Fedora and flannel) +* [oVirt](/docs/setup/on-premises-vm/ovirt/) +* [Vagrant](/docs/setup/custom-cloud/coreos/) (uses CoreOS and flannel) +* [VMware](/docs/setup/custom-cloud/coreos/) (uses CoreOS and flannel) +* [VMware vSphere](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/) +* [VMware vSphere, OpenStack, or Bare Metal](/docs/getting-started-guides/ubuntu/) (uses Juju, Ubuntu and flannel) + +### 베어 메탈 + +* [CoreOS](/docs/setup/custom-cloud/coreos/) +* [Fedora (Single Node)](/docs/getting-started-guides/fedora/fedora_manual_config/) +* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) +* [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/) + +### 통합 + +These solutions provide integration with third-party schedulers, resource managers, and/or lower level platforms. + +* [DCOS](/docs/setup/on-premises-vm/dcos/) + * Community Edition DCOS uses AWS + * Enterprise Edition DCOS supports cloud hosting, on-premises VMs, and bare metal + +## 솔루션 표 + +Below is a table of all of the solutions listed above. + +IaaS Provider | Config. Mgmt. | OS | Networking | Docs | Support Level +-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------------------------- +any | any | multi-support | any CNI | [docs](/docs/setup/independent/create-cluster-kubeadm/) | Project ([SIG-cluster-lifecycle](https://git.k8s.io/community/sig-cluster-lifecycle)) +Google Kubernetes Engine | | | GCE | [docs](https://cloud.google.com/kubernetes-engine/docs/) | Commercial +Stackpoint.io | | multi-support | multi-support | [docs](https://stackpoint.io/) | Commercial +AppsCode.com | Saltstack | Debian | multi-support | [docs](https://appscode.com/products/cloud-deployment/) | Commercial +Madcore.Ai | Jenkins DSL | Ubuntu | flannel | [docs](https://madcore.ai) | Community ([@madcore-ai](https://github.com/madcore-ai)) +Platform9 | | multi-support | multi-support | [docs](https://platform9.com/managed-kubernetes/) | Commercial +Kublr | custom | multi-support | multi-support | [docs](http://docs.kublr.com/) | Commercial +Kubermatic | | multi-support | multi-support | [docs](http://docs.kubermatic.io/) | Commercial +IBM Cloud Kubernetes Service | | Ubuntu | IBM Cloud Networking + Calico | [docs](https://console.bluemix.net/docs/containers/) | Commercial +Giant Swarm | | CoreOS | flannel and/or Calico | [docs](https://docs.giantswarm.io/) | Commercial +GCE | Saltstack | Debian | GCE | [docs](/docs/setup/turnkey/gce/) | Project +Azure Kubernetes Service | | Ubuntu | Azure | [docs](https://docs.microsoft.com/en-us/azure/aks/) | Commercial +Azure (IaaS) | | Ubuntu | Azure | [docs](/docs/setup/turnkey/azure/) | [Community (Microsoft)](https://github.com/Azure/acs-engine) +Bare-metal | custom | Fedora | _none_ | [docs](/docs/getting-started-guides/fedora/fedora_manual_config/) | Project +Bare-metal | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | 
Community ([@aveshagarwal](https://github.com/aveshagarwal)) +libvirt | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) +KVM | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) +DCOS | Marathon | CoreOS/Alpine | custom | [docs](/docs/getting-started-guides/dcos/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) +AWS | CoreOS | CoreOS | flannel | [docs](/docs/setup/turnkey/aws/) | Community +GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires)) +Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles)) +CloudStack | Ansible | CoreOS | flannel | [docs](/docs/getting-started-guides/cloudstack/) | Community ([@sebgoa](https://github.com/sebgoa)) +VMware vSphere | any | multi-support | multi-support | [docs](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/) | [Community](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/contactus.html) +Bare-metal | custom | CentOS | flannel | [docs](/docs/getting-started-guides/centos/centos_manual_config/) | Community ([@coolsvap](https://github.com/coolsvap)) +lxd | Juju | Ubuntu | flannel/canal | [docs](/docs/getting-started-guides/ubuntu/local/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) +AWS | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) +Azure | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) +GCE | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) +Oracle Cloud | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) +Rackspace | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) +VMware vSphere | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) +Bare Metal | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) +AWS | Saltstack | Debian | AWS | [docs](/docs/setup/turnkey/aws/) | Community ([@justinsb](https://github.com/justinsb)) +AWS | kops | Debian | AWS | [docs](https://github.com/kubernetes/kops/) | Community ([@justinsb](https://github.com/justinsb)) +Bare-metal | custom | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY)) 
+oVirt | | | | [docs](/docs/setup/on-premises-vm/ovirt/) | Community ([@simon3z](https://github.com/simon3z)) +any | any | any | any | [docs](/docs/setup/scratch/) | Community ([@erictune](https://github.com/erictune)) +any | any | any | any | [docs](http://docs.projectcalico.org/v2.2/getting-started/kubernetes/installation/) | Commercial and Community +any | RKE | multi-support | flannel or canal | [docs](https://rancher.com/docs/rancher/v2.x/en/quick-start-guide/) | [Commercial](https://rancher.com/what-is-rancher/overview/) and [Community](https://github.com/rancher/rancher) +any | [Gardener Cluster-Operator](https://kubernetes.io/blog/2018/05/17/gardener/) | multi-support | multi-support | [docs](https://gardener.cloud) | [Project/Community](https://github.com/gardener) and [Commercial]( https://cloudplatform.sap.com/) +Alibaba Cloud Container Service For Kubernetes | ROS | CentOS | flannel/Terway | [docs](https://www.aliyun.com/product/containerservice) | Commercial +Agile Stacks | Terraform | CoreOS | multi-support | [docs](https://www.agilestacks.com/products/kubernetes) | Commercial +IBM Cloud Kubernetes Service | | Ubuntu | calico | [docs](https://console.bluemix.net/docs/containers/container_index.html) | Commercial + + +{{< note >}} +**Note:** The above table is ordered by version test/used in nodes, followed by support level. +{{< /note >}} + +### 열 정의 + +* **IaaS Provider** is the product or organization which provides the virtual or physical machines (nodes) that Kubernetes runs on. +* **OS** is the base operating system of the nodes. +* **Config. Mgmt.** is the configuration management system that helps install and maintain Kubernetes on the + nodes. +* **Networking** is what implements the [networking model](/docs/concepts/cluster-administration/networking/). Those with networking type + _none_ may not support more than a single node, or may support multiple VM nodes in a single physical node. +* **Conformance** indicates whether a cluster created with this configuration has passed the project's conformance + tests for supporting the API and base features of Kubernetes v1.0.0. +* **Support Levels** + * **Project**: Kubernetes committers regularly use this configuration, so it usually works with the latest release + of Kubernetes. + * **Commercial**: A commercial offering with its own support arrangements. + * **Community**: Actively supported by community contributions. May not work with recent releases of Kubernetes. + * **Inactive**: Not actively maintained. Not recommended for first-time Kubernetes users, and may be removed. +* **Notes** has other relevant information, such as the version of Kubernetes used. + + + + +[1]: https://gist.github.com/erictune/4cabc010906afbcc5061 + +[2]: https://gist.github.com/derekwaynecarr/505e56036cdf010bf6b6 + +[3]: https://gist.github.com/erictune/2f39b22f72565365e59b + +{{% /capture %}} diff --git a/content/ko/docs/setup/scratch.md b/content/ko/docs/setup/scratch.md new file mode 100644 index 0000000000000..ea0c97c8fcfa9 --- /dev/null +++ b/content/ko/docs/setup/scratch.md @@ -0,0 +1,874 @@ +--- +title: 맨 처음부터 사용자 지정 클러스터 생성 +--- + +This guide is for people who want to craft a custom Kubernetes cluster. If you +can find an existing Getting Started Guide that meets your needs on [this +list](/docs/setup/), then we recommend using it, as you will be able to benefit +from the experience of others. 
However, if you have specific IaaS, networking, +configuration management, or operating system requirements not met by any of +those guides, then this guide will provide an outline of the steps you need to +take. Note that it requires considerably more effort than using one of the +pre-defined guides. + +This guide is also useful for those wanting to understand at a high level some of the +steps that existing cluster setup scripts are making. + +{{< toc >}} + +## 설계 및 준비 + +### 학습 계획 + + 1. You should be familiar with using Kubernetes already. We suggest you set + up a temporary cluster by following one of the other Getting Started Guides. + This will help you become familiar with the CLI ([kubectl](/docs/user-guide/kubectl/)) and concepts ([pods](/docs/user-guide/pods/), [services](/docs/concepts/services-networking/service/), etc.) first. + 1. You should have `kubectl` installed on your desktop. This will happen as a side + effect of completing one of the other Getting Started Guides. If not, follow the instructions + [here](/docs/tasks/kubectl/install/). + +### 클라우드 공급자 + +Kubernetes has the concept of a Cloud Provider, which is a module which provides +an interface for managing TCP Load Balancers, Nodes (Instances) and Networking Routes. +The interface is defined in `pkg/cloudprovider/cloud.go`. It is possible to +create a custom cluster without implementing a cloud provider (for example if using +bare-metal), and not all parts of the interface need to be implemented, depending +on how flags are set on various components. + +### 노드 + +- You can use virtual or physical machines. +- While you can build a cluster with 1 machine, in order to run all the examples and tests you + need at least 4 nodes. +- Many Getting-started-guides make a distinction between the master node and regular nodes. This + is not strictly necessary. +- Nodes will need to run some version of Linux with the x86_64 architecture. It may be possible + to run on other OSes and Architectures, but this guide does not try to assist with that. +- Apiserver and etcd together are fine on a machine with 1 core and 1GB RAM for clusters with 10s of nodes. + Larger or more active clusters may benefit from more cores. +- Other nodes can have any reasonable amount of memory and any number of cores. They need not + have identical configurations. + +### 네트워크 + +#### 네트워크 연결 +Kubernetes has a distinctive [networking model](/docs/concepts/cluster-administration/networking/). + +Kubernetes allocates an IP address to each pod. When creating a cluster, you +need to allocate a block of IPs for Kubernetes to use as Pod IPs. The simplest +approach is to allocate a different block of IPs to each node in the cluster as +the node is added. A process in one pod should be able to communicate with +another pod using the IP of the second pod. This connectivity can be +accomplished in two ways: + +- **Using an overlay network** + - An overlay network obscures the underlying network architecture from the + pod network through traffic encapsulation (for example vxlan). + - Encapsulation reduces performance, though exactly how much depends on your solution. +- **Without an overlay network** + - Configure the underlying network fabric (switches, routers, etc.) to be aware of pod IP addresses. + - This does not require the encapsulation provided by an overlay, and so can achieve + better performance. + +Which method you choose depends on your environment and requirements. 
There are various ways +to implement one of the above options: + +- **Use a network plugin which is called by Kubernetes** + - Kubernetes supports the [CNI](https://github.com/containernetworking/cni) network plugin interface. + - There are a number of solutions which provide plugins for Kubernetes (listed alphabetically): + - [Calico](http://docs.projectcalico.org/) + - [Flannel](https://github.com/coreos/flannel) + - [Open vSwitch (OVS)](http://openvswitch.org/) + - [Romana](http://romana.io/) + - [Weave](http://weave.works/) + - [More found here](/docs/admin/networking#how-to-achieve-this/) + - You can also write your own. +- **Compile support directly into Kubernetes** + - This can be done by implementing the "Routes" interface of a Cloud Provider module. + - The Google Compute Engine ([GCE](/docs/setup/turnkey/gce/)) and [AWS](/docs/setup/turnkey/aws/) guides use this approach. +- **Configure the network external to Kubernetes** + - This can be done by manually running commands, or through a set of externally maintained scripts. + - You have to implement this yourself, but it can give you an extra degree of flexibility. + +You will need to select an address range for the Pod IPs. + +- Various approaches: + - GCE: each project has its own `10.0.0.0/8`. Carve off a `/16` for each + Kubernetes cluster from that space, which leaves room for several clusters. + Each node gets a further subdivision of this space. + - AWS: use one VPC for whole organization, carve off a chunk for each + cluster, or use different VPC for different clusters. +- Allocate one CIDR subnet for each node's PodIPs, or a single large CIDR + from which smaller CIDRs are automatically allocated to each node. + - You need max-pods-per-node * max-number-of-nodes IPs in total. A `/24` per + node supports 254 pods per machine and is a common choice. If IPs are + scarce, a `/26` (62 pods per machine) or even a `/27` (30 pods) may be sufficient. + - For example, use `10.10.0.0/16` as the range for the cluster, with up to 256 nodes + using `10.10.0.0/24` through `10.10.255.0/24`, respectively. + - Need to make these routable or connect with overlay. + +Kubernetes also allocates an IP to each [service](/docs/concepts/services-networking/service/). However, +service IPs do not necessarily need to be routable. The kube-proxy takes care +of translating Service IPs to Pod IPs before traffic leaves the node. You do +need to allocate a block of IPs for services. Call this +`SERVICE_CLUSTER_IP_RANGE`. For example, you could set +`SERVICE_CLUSTER_IP_RANGE="10.0.0.0/16"`, allowing 65534 distinct services to +be active at once. Note that you can grow the end of this range, but you +cannot move it without disrupting the services and pods that already use it. + +Also, you need to pick a static IP for master node. + +- Call this `MASTER_IP`. +- Open any firewalls to allow access to the apiserver ports 80 and/or 443. +- Enable ipv4 forwarding sysctl, `net.ipv4.ip_forward = 1` + +#### 네트워크 폴리시 + +Kubernetes enables the definition of fine-grained network policy between Pods using the [NetworkPolicy](/docs/concepts/services-networking/network-policies/) resource. + +Not all networking providers support the Kubernetes NetworkPolicy API, see [Using Network Policy](/docs/tasks/configure-pod-container/declare-network-policy/) for more information. + +### 클러스터 이름 구성 + +You should pick a name for your cluster. Pick a short name for each cluster +which is unique from future cluster names. 
This will be used in several ways: + + - by kubectl to distinguish between various clusters you have access to. You will probably want a + second one sometime later, such as for testing new Kubernetes releases, running in a different +region of the world, etc. + - Kubernetes clusters can create cloud provider resources (for example, AWS ELBs) and different clusters + need to distinguish which resources each created. Call this `CLUSTER_NAME`. + +### 소프트웨어 바이너리 + +You will need binaries for: + + - etcd + - A container runner, one of: + - docker + - rkt + - Kubernetes + - kubelet + - kube-proxy + - kube-apiserver + - kube-controller-manager + - kube-scheduler + +#### 쿠버네티스 바이너리 다운로드 및 압축 해제 + +A Kubernetes binary release includes all the Kubernetes binaries as well as the supported release of etcd. +You can use a Kubernetes binary release (recommended) or build your Kubernetes binaries following the instructions in the +[Developer Documentation](https://git.k8s.io/community/contributors/devel/). Only using a binary release is covered in this guide. + +Download the [latest binary release](https://github.com/kubernetes/kubernetes/releases/latest) and unzip it. +Server binary tarballs are no longer included in the Kubernetes final tarball, so you will need to locate and run +`./kubernetes/cluster/get-kube-binaries.sh` to download the client and server binaries. +Then locate `./kubernetes/server/kubernetes-server-linux-amd64.tar.gz` and unzip *that*. +Then, within the second set of unzipped files, locate `./kubernetes/server/bin`, which contains +all the necessary binaries. + +#### 이미지 선택 + +You will run docker, kubelet, and kube-proxy outside of a container, the same way you would run any system daemon, so +you just need the bare binaries. For etcd, kube-apiserver, kube-controller-manager, and kube-scheduler, +we recommend that you run these as containers, so you need an image to be built. + +You have several choices for Kubernetes images: + +- Use images hosted on Google Container Registry (GCR): + - For example `k8s.gcr.io/hyperkube:$TAG`, where `TAG` is the latest + release tag, which can be found on the [latest releases page](https://github.com/kubernetes/kubernetes/releases/latest). + - Ensure $TAG is the same tag as the release tag you are using for kubelet and kube-proxy. + - The [hyperkube](https://releases.k8s.io/{{< param "githubbranch" >}}/cmd/hyperkube) binary is an all in one binary + - `hyperkube kubelet ...` runs the kubelet, `hyperkube apiserver ...` runs an apiserver, etc. +- Build your own images. + - Useful if you are using a private registry. + - The release contains files such as `./kubernetes/server/bin/kube-apiserver.tar` which + can be converted into docker images using a command like + `docker load -i kube-apiserver.tar` + - You can verify if the image is loaded successfully with the right repository and tag using + command like `docker images` + +We recommend that you use the etcd version which is provided in the Kubernetes binary distribution. The Kubernetes binaries in the release +were tested extensively with this version of etcd and not with any other version. +The recommended version number can also be found as the value of `TAG` in `kubernetes/cluster/images/etcd/Makefile`. + +For the miniumum recommended version of etcd, refer to +[Configuring and Updating etcd](/docs/tasks/administer-cluster/configure-upgrade-etcd/) + +The remainder of the document assumes that the image identifiers have been chosen and stored in corresponding env vars. 
Examples (replace with latest tags and appropriate registry): + + - `HYPERKUBE_IMAGE=k8s.gcr.io/hyperkube:$TAG` + - `ETCD_IMAGE=k8s.gcr.io/etcd:$ETCD_VERSION` + +### 보안 모델 + +There are two main options for security: + +- Access the apiserver using HTTP. + - Use a firewall for security. + - This is easier to setup. +- Access the apiserver using HTTPS + - Use https with certs, and credentials for user. + - This is the recommended approach. + - Configuring certs can be tricky. + +If following the HTTPS approach, you will need to prepare certs and credentials. + +#### 인증서 준비 + +You need to prepare several certs: + +- The master needs a cert to act as an HTTPS server. +- The kubelets optionally need certs to identify themselves as clients of the master, and when + serving its own API over HTTPS. + +Unless you plan to have a real CA generate your certs, you will need +to generate a root cert and use that to sign the master, kubelet, and +kubectl certs. How to do this is described in the [authentication +documentation](/docs/concepts/cluster-administration/certificates/). + +You will end up with the following files (we will use these variables later on) + +- `CA_CERT` + - put in on node where apiserver runs, for example in `/srv/kubernetes/ca.crt`. +- `MASTER_CERT` + - signed by CA_CERT + - put in on node where apiserver runs, for example in `/srv/kubernetes/server.crt` +- `MASTER_KEY ` + - put in on node where apiserver runs, for example in `/srv/kubernetes/server.key` +- `KUBELET_CERT` + - optional +- `KUBELET_KEY` + - optional + +#### 자격 증명 준비 + +The admin user (and any users) need: + + - a token or a password to identify them. + - tokens are just long alphanumeric strings, 32 chars for example. See + - `TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/[:space:]" | dd bs=32 count=1 2>/dev/null)` + +Your tokens and passwords need to be stored in a file for the apiserver +to read. This guide uses `/var/lib/kube-apiserver/known_tokens.csv`. +The format for this file is described in the [authentication documentation](/docs/reference/access-authn-authz/authentication/#static-token-file). + +For distributing credentials to clients, the convention in Kubernetes is to put the credentials +into a [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/). + +The kubeconfig file for the administrator can be created as follows: + + - If you have already used Kubernetes with a non-custom cluster (for example, used a Getting Started + Guide), you will already have a `$HOME/.kube/config` file. + - You need to add certs, keys, and the master IP to the kubeconfig file: + - If using the firewall-only security option, set the apiserver this way: + - `kubectl config set-cluster $CLUSTER_NAME --server=http://$MASTER_IP --insecure-skip-tls-verify=true` + - Otherwise, do this to set the apiserver ip, client certs, and user credentials. + - `kubectl config set-cluster $CLUSTER_NAME --certificate-authority=$CA_CERT --embed-certs=true --server=https://$MASTER_IP` + - `kubectl config set-credentials $USER --client-certificate=$CLI_CERT --client-key=$CLI_KEY --embed-certs=true --token=$TOKEN` + - Set your cluster as the default cluster to use: + - `kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NAME --user=$USER` + - `kubectl config use-context $CONTEXT_NAME` + +Next, make a kubeconfig file for the kubelets and kube-proxy. There are a couple of options for how +many distinct files to make: + + 1. 
Use the same credential as the admin + - This is simplest to setup. + 1. One token and kubeconfig file for all kubelets, one for all kube-proxy, one for admin. + - This mirrors what is done on GCE today + 1. Different credentials for every kubelet, etc. + - We are working on this but all the pieces are not ready yet. + +You can make the files by copying the `$HOME/.kube/config` or by using the following template: + +```yaml +apiVersion: v1 +kind: Config +users: +- name: kubelet + user: + token: ${KUBELET_TOKEN} +clusters: +- name: local + cluster: + certificate-authority: /srv/kubernetes/ca.crt +contexts: +- context: + cluster: local + user: kubelet + name: service-account-context +current-context: service-account-context +``` + +Put the kubeconfig(s) on every node. The examples later in this +guide assume that there are kubeconfigs in `/var/lib/kube-proxy/kubeconfig` and +`/var/lib/kubelet/kubeconfig`. + +## 노드의 기본 소프트웨어 구성 및 설치 + +This section discusses how to configure machines to be Kubernetes nodes. + +You should run three daemons on every node: + + - docker or rkt + - kubelet + - kube-proxy + +You will also need to do assorted other configuration on top of a +base OS install. + +Tip: One possible starting point is to setup a cluster using an existing Getting +Started Guide. After getting a cluster running, you can then copy the init.d scripts or systemd unit files from that +cluster, and then modify them for use on your custom cluster. + +### Docker + +The minimum required Docker version will vary as the kubelet version changes. The newest stable release is a good choice. Kubelet will log a warning and refuse to start pods if the version is too old, so pick a version and try it. + +If you previously had Docker installed on a node without setting Kubernetes-specific +options, you may have a Docker-created bridge and iptables rules. You may want to remove these +as follows before proceeding to configure Docker for Kubernetes. + +```shell +iptables -t nat -F +ip link set docker0 down +ip link delete docker0 +``` + +The way you configure docker will depend in whether you have chosen the routable-vip or overlay-network approaches for your network. +Some suggested docker options: + + - create your own bridge for the per-node CIDR ranges, call it cbr0, and set `--bridge=cbr0` option on docker. + - set `--iptables=false` so docker will not manipulate iptables for host-ports (too coarse on older docker versions, may be fixed in newer versions) +so that kube-proxy can manage iptables instead of docker. + - `--ip-masq=false` + - if you have setup PodIPs to be routable, then you want this false, otherwise, docker will + rewrite the PodIP source-address to a NodeIP. + - some environments (for example GCE) still need you to masquerade out-bound traffic when it leaves the cloud environment. This is very environment specific. + - if you are using an overlay network, consult those instructions. + - `--mtu=` + - may be required when using Flannel, because of the extra packet size due to udp encapsulation + - `--insecure-registry $CLUSTER_SUBNET` + - to connect to a private registry, if you set one up, without using SSL. + +You may want to increase the number of open files for docker: + + - `DOCKER_NOFILE=1000000` + +Where this config goes depends on your node OS. For example, GCE's Debian-based distro uses `/etc/default/docker`. + +Ensure docker is working correctly on your system before proceeding with the rest of the +installation, by following examples given in the Docker documentation. 
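To make the list of options above concrete, here is a rough sketch of how these settings might be collected in `/etc/default/docker` on a Debian-based node (the file GCE's distro uses, as noted above). The specific values are assumptions for illustration: `cbr0` and the `10.10.0.0/16` registry subnet reuse the examples from earlier in this guide, and you would substitute the values chosen for your own network.

```shell
# Sketch of /etc/default/docker for the cbr0 + routable-PodIP approach described above.
# Add --mtu=<value> here as well if you use an encapsulating overlay such as Flannel.
DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false --insecure-registry=10.10.0.0/16"
DOCKER_NOFILE=1000000
```

Restart the Docker daemon after editing the file, and confirm it still starts and can run a simple container before moving on.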
+ +### rkt + +[rkt](https://github.com/coreos/rkt) is an alternative to Docker. You only need to install one of Docker or rkt. +The minimum version required is [v0.5.6](https://github.com/coreos/rkt/releases/tag/v0.5.6). + +[systemd](http://www.freedesktop.org/wiki/Software/systemd/) is required on your node to run rkt. The +minimum version required to match rkt v0.5.6 is +[systemd 215](http://lists.freedesktop.org/archives/systemd-devel/2014-July/020903.html). + +[rkt metadata service](https://github.com/coreos/rkt/blob/master/Documentation/networking/overview.md) is also required +for rkt networking support. You can start rkt metadata service by using command like +`sudo systemd-run rkt metadata-service` + +Then you need to configure your kubelet with flag: + + - `--container-runtime=rkt` + +### kubelet + +All nodes should run kubelet. See [Software Binaries](#software-binaries). + +Arguments to consider: + + - If following the HTTPS security approach: + - `--kubeconfig=/var/lib/kubelet/kubeconfig` + - Otherwise, if taking the firewall-based security approach + - `--config=/etc/kubernetes/manifests` + - `--cluster-dns=` to the address of the DNS server you will setup (see [Starting Cluster Services](#starting-cluster-services).) + - `--cluster-domain=` to the dns domain prefix to use for cluster DNS addresses. + - `--docker-root=` + - `--root-dir=` + - `--pod-cidr=` The CIDR to use for pod IP addresses, only used in standalone mode. In cluster mode, this is obtained from the master. + - `--register-node` (described in [Node](/docs/admin/node/) documentation.) + +### kube-proxy + +All nodes should run kube-proxy. (Running kube-proxy on a "master" node is not +strictly required, but being consistent is easier.) Obtain a binary as described for +kubelet. + +Arguments to consider: + + - If following the HTTPS security approach: + - `--master=https://$MASTER_IP` + - `--kubeconfig=/var/lib/kube-proxy/kubeconfig` + - Otherwise, if taking the firewall-based security approach + - `--master=http://$MASTER_IP` + +Note that on some Linux platforms, you may need to manually install the +`conntrack` package which is a dependency of kube-proxy, or else kube-proxy +cannot be started successfully. + +For more details about debugging kube-proxy problems, refer to +[Debug Services](/docs/tasks/debug-application-cluster/debug-service/) + +### 네트워킹 + +Each node needs to be allocated its own CIDR range for pod networking. +Call this `NODE_X_POD_CIDR`. + +A bridge called `cbr0` needs to be created on each node. The bridge is explained +further in the [networking documentation](/docs/concepts/cluster-administration/networking/). The bridge itself +needs an address from `$NODE_X_POD_CIDR` - by convention the first IP. Call +this `NODE_X_BRIDGE_ADDR`. For example, if `NODE_X_POD_CIDR` is `10.0.0.0/16`, +then `NODE_X_BRIDGE_ADDR` is `10.0.0.1/16`. NOTE: this retains the `/16` suffix +because of how this is used later. + +If you have turned off Docker's IP masquerading to allow pods to talk to each +other, then you may need to do masquerading just for destination IPs outside +the cluster network. For example: + +```shell +iptables -t nat -A POSTROUTING ! -d ${CLUSTER_SUBNET} -m addrtype ! --dst-type LOCAL -j MASQUERADE +``` + +This will rewrite the source address from +the PodIP to the Node IP for traffic bound outside the cluster, and kernel +[connection tracking](http://www.iptables.info/en/connection-state.html) +will ensure that responses destined to the node still reach +the pod. 
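As a quick, optional sanity check of the masquerading setup (not a required step), you can list the NAT `POSTROUTING` chain on a node and confirm that only the rule added above is present; if Docker was started with `--ip-masq=false`, there should be no Docker-added blanket `MASQUERADE` rule rewriting pod traffic inside the cluster subnet.

```shell
# List NAT POSTROUTING rules with packet counters; expect the cluster-subnet
# exclusion rule added above and no Docker-added MASQUERADE rule.
iptables -t nat -L POSTROUTING -n -v
```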
+ +NOTE: This is environment specific. Some environments will not need +any masquerading at all. Others, such as GCE, will not allow pod IPs to send +traffic to the internet, but have no problem with them inside your GCE Project. + +### 기타 + +- Enable auto-upgrades for your OS package manager, if desired. +- Configure log rotation for all node components (for example using [logrotate](http://linux.die.net/man/8/logrotate)). +- Setup liveness-monitoring (for example using [supervisord](http://supervisord.org/)). +- Setup volume plugin support (optional) + - Install any client binaries for optional volume types, such as `glusterfs-client` for GlusterFS + volumes. + +### 구성 관리 사용 + +The previous steps all involved "conventional" system administration techniques for setting up +machines. You may want to use a Configuration Management system to automate the node configuration +process. There are examples of Ansible, Juju, and CoreOS Cloud Config in the +various Getting Started Guides. + +## 클러스터 부트스트랩 + +While the basic node services (kubelet, kube-proxy, docker) are typically started and managed using +traditional system administration/automation approaches, the remaining *master* components of Kubernetes are +all configured and managed *by Kubernetes*: + + - Their options are specified in a Pod spec (yaml or json) rather than an /etc/init.d file or + systemd unit. + - They are kept running by Kubernetes rather than by init. + +### etcd + +You will need to run one or more instances of etcd. + + - Highly available and easy to restore - Run 3 or 5 etcd instances with, their logs written to a directory backed + by durable storage (RAID, GCE PD) + - Not highly available, but easy to restore - Run one etcd instance, with its log written to a directory backed + by durable storage (RAID, GCE PD). + + {{< note >}}**Note:** May result in operations outages in case of + instance outage. {{< /note >}} + - Highly available - Run 3 or 5 etcd instances with non durable storage. + + {{< note >}}**Note:** Log can be written to non-durable storage + because storage is replicated.{{< /note >}} + +See [cluster-troubleshooting](/docs/admin/cluster-troubleshooting/) for more discussion on factors affecting cluster +availability. + +To run an etcd instance: + +1. Copy [`cluster/gce/manifests/etcd.manifest`](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/manifests/etcd.manifest) +1. Make any modifications needed +1. Start the pod by putting it into the kubelet manifest directory + +### API 서버, 컨트롤러 관리자, 스케줄러 + +The apiserver, controller manager, and scheduler will each run as a pod on the master node. + +For each of these components, the steps to start them running are similar: + +1. Start with a provided template for a pod. +1. Set the `HYPERKUBE_IMAGE` to the values chosen in [Selecting Images](#selecting-images). +1. Determine which flags are needed for your cluster, using the advice below each template. +1. Set the flags to be individual strings in the command array (for example $ARGN below) +1. Start the pod by putting the completed template into the kubelet manifest directory. +1. Verify that the pod is started. + +#### API 서버 파드 템플릿 + +```json +{ + "kind": "Pod", + "apiVersion": "v1", + "metadata": { + "name": "kube-apiserver" + }, + "spec": { + "hostNetwork": true, + "containers": [ + { + "name": "kube-apiserver", + "image": "${HYPERKUBE_IMAGE}", + "command": [ + "/hyperkube", + "apiserver", + "$ARG1", + "$ARG2", + ... 
+ "$ARGN" + ], + "ports": [ + { + "name": "https", + "hostPort": 443, + "containerPort": 443 + }, + { + "name": "local", + "hostPort": 8080, + "containerPort": 8080 + } + ], + "volumeMounts": [ + { + "name": "srvkube", + "mountPath": "/srv/kubernetes", + "readOnly": true + }, + { + "name": "etcssl", + "mountPath": "/etc/ssl", + "readOnly": true + } + ], + "livenessProbe": { + "httpGet": { + "scheme": "HTTP", + "host": "127.0.0.1", + "port": 8080, + "path": "/healthz" + }, + "initialDelaySeconds": 15, + "timeoutSeconds": 15 + } + } + ], + "volumes": [ + { + "name": "srvkube", + "hostPath": { + "path": "/srv/kubernetes" + } + }, + { + "name": "etcssl", + "hostPath": { + "path": "/etc/ssl" + } + } + ] + } +} +``` + +Here are some apiserver flags you may need to set: + +- `--cloud-provider=` see [cloud providers](#cloud-providers) +- `--cloud-config=` see [cloud providers](#cloud-providers) +- `--address=${MASTER_IP}` *or* `--bind-address=127.0.0.1` and `--address=127.0.0.1` if you want to run a proxy on the master node. +- `--service-cluster-ip-range=$SERVICE_CLUSTER_IP_RANGE` +- `--etcd-servers=http://127.0.0.1:4001` +- `--tls-cert-file=/srv/kubernetes/server.cert` +- `--tls-private-key-file=/srv/kubernetes/server.key` +- `--enable-admission-plugins=$RECOMMENDED_LIST` + - See [admission controllers](/docs/reference/access-authn-authz/admission-controllers/) for recommended arguments. +- `--allow-privileged=true`, only if you trust your cluster user to run pods as root. + +If you are following the firewall-only security approach, then use these arguments: + +- `--token-auth-file=/dev/null` +- `--insecure-bind-address=$MASTER_IP` +- `--advertise-address=$MASTER_IP` + +If you are using the HTTPS approach, then set: + +- `--client-ca-file=/srv/kubernetes/ca.crt` +- `--token-auth-file=/srv/kubernetes/known_tokens.csv` +- `--basic-auth-file=/srv/kubernetes/basic_auth.csv` + +This pod mounts several node file system directories using the `hostPath` volumes. Their purposes are: + +- The `/etc/ssl` mount allows the apiserver to find the SSL root certs so it can + authenticate external services, such as a cloud provider. + - This is not required if you do not use a cloud provider (bare-metal for example). +- The `/srv/kubernetes` mount allows the apiserver to read certs and credentials stored on the + node disk. These could instead be stored on a persistent disk, such as a GCE PD, or baked into the image. +- Optionally, you may want to mount `/var/log` as well and redirect output there (not shown in template). + - Do this if you prefer your logs to be accessible from the root filesystem with tools like journalctl. + +*TODO* document proxy-ssh setup. + +##### 클라우드 공급자 + +Apiserver supports several cloud providers. + +- options for `--cloud-provider` flag are `aws`, `azure`, `cloudstack`, `fake`, `gce`, `mesos`, `openstack`, `ovirt`, `rackspace`, `vsphere`, or unset. +- unset used for bare metal setups. +- support for new IaaS is added by contributing code [here](https://releases.k8s.io/{{< param "githubbranch" >}}/pkg/cloudprovider/providers) + +Some cloud providers require a config file. If so, you need to put config file into apiserver image or mount through hostPath. + +- `--cloud-config=` set if cloud provider requires a config file. +- Used by `aws`, `gce`, `mesos`, `openstack`, `ovirt` and `rackspace`. +- You must put config file into apiserver image or mount through hostPath. +- Cloud config file syntax is [Gcfg](https://code.google.com/p/gcfg/). 
+- AWS format defined by type [AWSCloudConfig](https://releases.k8s.io/{{< param "githubbranch" >}}/pkg/cloudprovider/providers/aws/aws.go) +- There is a similar type in the corresponding file for other cloud providers. + +#### 스케줄러 파드 템플릿 + +Complete this template for the scheduler pod: + +```json +{ + "kind": "Pod", + "apiVersion": "v1", + "metadata": { + "name": "kube-scheduler" + }, + "spec": { + "hostNetwork": true, + "containers": [ + { + "name": "kube-scheduler", + "image": "$HYPERKUBE_IMAGE", + "command": [ + "/hyperkube", + "scheduler", + "--master=127.0.0.1:8080", + "$SCHEDULER_FLAG1", + ... + "$SCHEDULER_FLAGN" + ], + "livenessProbe": { + "httpGet": { + "scheme": "HTTP", + "host": "127.0.0.1", + "port": 10251, + "path": "/healthz" + }, + "initialDelaySeconds": 15, + "timeoutSeconds": 15 + } + } + ] + } +} +``` + +Typically, no additional flags are required for the scheduler. + +Optionally, you may want to mount `/var/log` as well and redirect output there. + +#### 컨트롤러 관리자 템플릿 + +Template for controller manager pod: + +```json +{ + "kind": "Pod", + "apiVersion": "v1", + "metadata": { + "name": "kube-controller-manager" + }, + "spec": { + "hostNetwork": true, + "containers": [ + { + "name": "kube-controller-manager", + "image": "$HYPERKUBE_IMAGE", + "command": [ + "/hyperkube", + "controller-manager", + "$CNTRLMNGR_FLAG1", + ... + "$CNTRLMNGR_FLAGN" + ], + "volumeMounts": [ + { + "name": "srvkube", + "mountPath": "/srv/kubernetes", + "readOnly": true + }, + { + "name": "etcssl", + "mountPath": "/etc/ssl", + "readOnly": true + } + ], + "livenessProbe": { + "httpGet": { + "scheme": "HTTP", + "host": "127.0.0.1", + "port": 10252, + "path": "/healthz" + }, + "initialDelaySeconds": 15, + "timeoutSeconds": 15 + } + } + ], + "volumes": [ + { + "name": "srvkube", + "hostPath": { + "path": "/srv/kubernetes" + } + }, + { + "name": "etcssl", + "hostPath": { + "path": "/etc/ssl" + } + } + ] + } +} +``` + +Flags to consider using with controller manager: + + - `--cluster-cidr=`, the CIDR range for pods in cluster. + - `--allocate-node-cidrs=`, if you are using `--cloud-provider=`, allocate and set the CIDRs for pods on the cloud provider. + - `--cloud-provider=` and `--cloud-config` as described in apiserver section. + - `--service-account-private-key-file=/srv/kubernetes/server.key`, used by the [service account](/docs/user-guide/service-accounts) feature. + - `--master=127.0.0.1:8080` + +#### API 서버, 스케줄러, 컨트롤러 관리자 시작 및 확인 + +Place each completed pod template into the kubelet config dir +(whatever `--config=` argument of kubelet is set to, typically +`/etc/kubernetes/manifests`). The order does not matter: scheduler and +controller manager will retry reaching the apiserver until it is up. + +Use `ps` or `docker ps` to verify that each process has started. For example, verify that kubelet has started a container for the apiserver like this: + +```shell +$ sudo docker ps | grep apiserver +5783290746d5 k8s.gcr.io/kube-apiserver:e36bf367342b5a80d7467fd7611ad873 "/bin/sh -c '/usr/lo'" 10 seconds ago Up 9 seconds k8s_kube-apiserver.feb145e7_kube-apiserver-kubernetes-master_default_eaebc600cf80dae59902b44225f2fc0a_225a4695 +``` + +Then try to connect to the apiserver: + +```shell +$ echo $(curl -s http://localhost:8080/healthz) +ok +$ curl -s http://localhost:8080/api +{ + "versions": [ + "v1" + ] +} +``` + +If you have selected the `--register-node=true` option for kubelets, they will now begin self-registering with the apiserver. 
+You should soon be able to see all your nodes by running the `kubectl get nodes` command. +Otherwise, you will need to manually create node objects. + +### 클러스터 서비스 시작 + +You will want to complete your Kubernetes clusters by adding cluster-wide +services. These are sometimes called *addons*, and [an overview +of their purpose is in the admin guide](/docs/admin/cluster-components/#addons). + +Notes for setting up each cluster service are given below: + +* Cluster DNS: + * Required for many Kubernetes examples + * [Setup instructions](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/) + * [Admin Guide](/docs/concepts/services-networking/dns-pod-service/) +* Cluster-level Logging + * [Cluster-level Logging Overview](/docs/user-guide/logging/overview/) + * [Cluster-level Logging with Elasticsearch](/docs/user-guide/logging/elasticsearch/) + * [Cluster-level Logging with Stackdriver Logging](/docs/user-guide/logging/stackdriver/) +* Container Resource Monitoring + * [Setup instructions](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/) +* GUI + * [Setup instructions](https://github.com/kubernetes/dashboard) + +## 문제 해결 + +### validate-cluster 명령 실행 + +`cluster/validate-cluster.sh` is used by `cluster/kube-up.sh` to determine if +the cluster start succeeded. + +Example usage and output: + +```shell +KUBECTL_PATH=$(which kubectl) NUM_NODES=3 KUBERNETES_PROVIDER=local cluster/validate-cluster.sh +Found 3 node(s). +NAME STATUS AGE VERSION +node1.local Ready 1h v1.6.9+a3d1dfa6f4335 +node2.local Ready 1h v1.6.9+a3d1dfa6f4335 +node3.local Ready 1h v1.6.9+a3d1dfa6f4335 +Validate output: +NAME STATUS MESSAGE ERROR +controller-manager Healthy ok +scheduler Healthy ok +etcd-1 Healthy {"health": "true"} +etcd-2 Healthy {"health": "true"} +etcd-0 Healthy {"health": "true"} +Cluster validation succeeded +``` + +### 파드와 서비스 검사 + +Try to run through the "Inspect your cluster" section in one of the other Getting Started Guides, such as [GCE](/docs/setup/turnkey/gce/#inspect-your-cluster). +You should see some services. You should also see "mirror pods" for the apiserver, scheduler and controller-manager, plus any add-ons you started. + +### 예제 실행하기 + +At this point you should be able to run through one of the basic examples, such as the [nginx example](/examples/application/deployment.yaml). + +### 적합성 테스트 실행 + +You may want to try to run the [Conformance test](http://releases.k8s.io/{{< param "githubbranch" >}}/test/e2e_node/conformance/run_test.sh). Any failures may give a hint as to areas that need more attention. + +### 네트워킹 + +The nodes must be able to connect to each other using their private IP. Verify this by +pinging or SSH-ing from one node to another. + +### 도움말 얻기 + +If you run into trouble, see the section on [troubleshooting](/docs/setup/turnkey/gce/#troubleshooting), post to the +[Kubernetes Forum](https://discuss.kubernetes.io), or come ask questions on [Slack](/docs/troubleshooting#slack). + +## 지원 레벨 + + +IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level +-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- +any | any | any | any | [docs](/docs/getting-started-guides/scratch/) | | Community ([@erictune](https://github.com/erictune)) + + +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions/) chart. 
diff --git a/content/ko/docs/setup/turnkey/_index.md b/content/ko/docs/setup/turnkey/_index.md new file mode 100644 index 0000000000000..8abee4413ce5b --- /dev/null +++ b/content/ko/docs/setup/turnkey/_index.md @@ -0,0 +1,4 @@ +--- +title: 턴키 클라우드 솔루션 +weight: 40 +--- diff --git a/content/ko/docs/setup/turnkey/alibaba-cloud.md b/content/ko/docs/setup/turnkey/alibaba-cloud.md new file mode 100644 index 0000000000000..a15951551f214 --- /dev/null +++ b/content/ko/docs/setup/turnkey/alibaba-cloud.md @@ -0,0 +1,20 @@ +--- +reviewers: +- colemickens +- brendandburns +title: Running Kubernetes on Alibaba Cloud +--- + +## Alibaba Cloud Container Service + +The [Alibaba Cloud Container Service](https://www.aliyun.com/product/containerservice) lets you run and manage Docker applications on a cluster of Alibaba Cloud ECS instances. It supports the popular open source container orchestrators: Docker Swarm and Kubernetes. + +To simplify cluster deployment and management, use [Kubernetes Support for Alibaba Cloud Container Service](https://www.aliyun.com/solution/kubernetes/). You can get started quickly by following the [Kubernetes walk-through](https://help.aliyun.com/document_detail/53751.html), and there are some [tutorials for Kubernetes Support on Alibaba Cloud](https://yq.aliyun.com/teams/11/type_blog-cid_200-page_1) in Chinese. + +To use custom binaries or open source Kubernetes, follow the instructions below. + +## Custom Deployments + +The source code for [Kubernetes with Alibaba Cloud provider implementation](https://github.com/AliyunContainerService/kubernetes) is open source and available on GitHub. + +For more information, see "[Quick deployment of Kubernetes - VPC environment on Alibaba Cloud](https://www.alibabacloud.com/forum/read-830)" in English and [Chinese](https://yq.aliyun.com/articles/66474). diff --git a/content/ko/docs/setup/turnkey/aws.md b/content/ko/docs/setup/turnkey/aws.md new file mode 100644 index 0000000000000..fc96678676e24 --- /dev/null +++ b/content/ko/docs/setup/turnkey/aws.md @@ -0,0 +1,89 @@ +--- +title: Running Kubernetes on AWS EC2 +content_template: templates/task +--- + +{{% capture overview %}} + +This page describes how to install a Kubernetes cluster on AWS. + +{{% /capture %}} + +{{% capture prerequisites %}} + +To create a Kubernetes cluster on AWS, you will need an Access Key ID and a Secret Access Key from AWS. + +### Supported Production Grade Tools + +* [conjure-up](/docs/getting-started-guides/ubuntu/) is an open-source installer for Kubernetes that creates Kubernetes clusters with native AWS integrations on Ubuntu. + +* [Kubernetes Operations](https://github.com/kubernetes/kops) - Production Grade K8s Installation, Upgrades, and Management. Supports running Debian, Ubuntu, CentOS, and RHEL in AWS. + +* [CoreOS Tectonic](https://coreos.com/tectonic/) includes the open-source [Tectonic Installer](https://github.com/coreos/tectonic-installer) that creates Kubernetes clusters with Container Linux nodes on AWS. + +* CoreOS originated and the Kubernetes Incubator maintains [a CLI tool, kube-aws](https://github.com/kubernetes-incubator/kube-aws), that creates and manages Kubernetes clusters with [Container Linux](https://coreos.com/why/) nodes, using AWS tools: EC2, CloudFormation and Autoscaling. + +{{% /capture %}} + +{{% capture steps %}} + +## Getting started with your cluster + +### Command line administration tool: kubectl + +The cluster startup script will leave you with a `kubernetes` directory on your workstation. 
+Alternately, you can download the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases). + +Next, add the appropriate binary folder to your `PATH` to access kubectl: + +```shell +# macOS +export PATH=/platforms/darwin/amd64:$PATH + +# Linux +export PATH=/platforms/linux/amd64:$PATH +``` + +An up-to-date documentation page for this tool is available here: [kubectl manual](/docs/user-guide/kubectl/) + +By default, `kubectl` will use the `kubeconfig` file generated during the cluster startup for authenticating against the API. +For more information, please read [kubeconfig files](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) + +### Examples + +See [a simple nginx example](/docs/tasks/run-application/run-stateless-application-deployment/) to try out your new cluster. + +The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) + +For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/) + +## Scaling the cluster + +Adding and removing nodes through `kubectl` is not supported. You can still scale the amount of nodes manually through adjustments of the 'Desired' and 'Max' properties within the [Auto Scaling Group](http://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html), which was created during the installation. + +## Tearing down the cluster + +Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the +`kubernetes` directory: + +```shell +cluster/kube-down.sh +``` + +## Support Level + + +IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level +-------------------- | ------------ | ------------- | ---------- | --------------------------------------------- | ---------| ---------------------------- +AWS | kops | Debian | k8s (VPC) | [docs](https://github.com/kubernetes/kops) | | Community ([@justinsb](https://github.com/justinsb)) +AWS | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/aws) | | Community +AWS | Juju | Ubuntu | flannel, calico, canal | [docs](/docs/getting-started-guides/ubuntu) | 100% | Commercial, Community + +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. + +## Further reading + +Please see the [Kubernetes docs](/docs/) for more details on administering +and using a Kubernetes cluster. + +{{% /capture %}} diff --git a/content/ko/docs/setup/turnkey/azure.md b/content/ko/docs/setup/turnkey/azure.md new file mode 100644 index 0000000000000..534f1a5536a06 --- /dev/null +++ b/content/ko/docs/setup/turnkey/azure.md @@ -0,0 +1,39 @@ +--- +reviewers: +- colemickens +- brendandburns +title: Running Kubernetes on Azure +--- + +## Azure Container Service + +The [Azure Container Service](https://azure.microsoft.com/en-us/services/container-service/) offers simple +deployments of one of three open source orchestrators: DC/OS, Swarm, and Kubernetes clusters. 
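+
+If you prefer working from a terminal, the same kind of cluster can also be requested with the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/). The commands below are only a rough sketch: they assume the CLI is installed and logged in, that a resource group named `my-resource-group` already exists (a hypothetical name), and that exact flag names may differ between CLI versions.
+
+```shell
+# Sketch only: create an ACS cluster that uses Kubernetes as its orchestrator.
+# Assumes an existing resource group called "my-resource-group" (hypothetical name).
+az acs create \
+    --orchestrator-type=kubernetes \
+    --resource-group=my-resource-group \
+    --name=my-kubernetes-cluster \
+    --generate-ssh-keys
+
+# Merge the new cluster's credentials into your local kubeconfig so kubectl can reach it.
+az acs kubernetes get-credentials \
+    --resource-group=my-resource-group \
+    --name=my-kubernetes-cluster
+```
+
+The walkthrough linked below covers the same flow in more detail.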
+
+For an example of deploying a Kubernetes cluster onto Azure via the Azure Container Service:
+
+**[Microsoft Azure Container Service - Kubernetes Walkthrough](https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes)**
+
+## Custom Deployments: ACS-Engine
+
+The core of the Azure Container Service is **open source** and available on GitHub for the community
+to use and contribute to: **[ACS-Engine](https://github.com/Azure/acs-engine)**.
+
+ACS-Engine is a good choice if you need to make customizations to the deployment beyond what the Azure Container
+Service officially supports. These customizations include deploying into existing virtual networks, utilizing multiple
+agent pools, and more. Some community contributions to ACS-Engine may even become features of the Azure Container Service.
+
+The input to ACS-Engine is similar to the ARM template syntax used to deploy a cluster directly with the Azure Container Service.
+The resulting output is an Azure Resource Manager Template that can then be checked into source control and can then be used
+to deploy Kubernetes clusters into Azure.
+
+You can get started quickly by following the **[ACS-Engine Kubernetes Walkthrough](https://github.com/Azure/acs-engine/blob/master/docs/kubernetes.md)**.
+
+## CoreOS Tectonic for Azure
+
+The CoreOS Tectonic Installer for Azure is **open source** and available on GitHub for the community to use and contribute to: **[Tectonic Installer](https://github.com/coreos/tectonic-installer)**.
+
+Tectonic Installer is a good choice when you need to make cluster customizations as it is built on [Hashicorp's Terraform](https://www.terraform.io/docs/providers/azurerm/) Azure Resource Manager (ARM) provider. This enables users to customize or integrate using familiar Terraform tooling.
+
+You can get started using the [Tectonic Installer for Azure Guide](https://coreos.com/tectonic/docs/latest/install/azure/azure-terraform.html).
+
diff --git a/content/ko/docs/setup/turnkey/clc.md b/content/ko/docs/setup/turnkey/clc.md
new file mode 100644
index 0000000000000..463787e4c3abd
--- /dev/null
+++ b/content/ko/docs/setup/turnkey/clc.md
@@ -0,0 +1,342 @@
+---
+title: Running Kubernetes on CenturyLink Cloud
+---
+
+{: toc}
+
+These scripts handle the creation, deletion and expansion of Kubernetes clusters on CenturyLink Cloud.
+
+You can accomplish all these tasks with a single command. We have made the Ansible playbooks used to perform these tasks available [here](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/ansible/README.md).
+
+## Find Help
+
+If you run into any problems or want help with anything, we are here to help. Reach out to us via any of the following ways:
+
+- Submit a GitHub issue
+- Send an email to Kubernetes AT ctl DOT io
+- Visit [http://info.ctl.io/kubernetes](http://info.ctl.io/kubernetes)
+
+## Clusters of VMs or Physical Servers, your choice.
+
+- We support Kubernetes clusters on both Virtual Machines and Physical Servers. If you want to use physical servers for the worker nodes (minions), simply use the `--minion_type=bareMetal` flag.
+- For more information on physical servers, visit: [https://www.ctl.io/bare-metal/](https://www.ctl.io/bare-metal/)
+- Physical servers are only available in the VA1 and GB3 data centers.
+- VMs are available in all 13 of our public cloud locations + +## Requirements + +The requirements to run this script are: + +- A linux administrative host (tested on ubuntu and macOS) +- python 2 (tested on 2.7.11) + - pip (installed with python as of 2.7.9) +- git +- A CenturyLink Cloud account with rights to create new hosts +- An active VPN connection to the CenturyLink Cloud from your linux host + +## Script Installation + +After you have all the requirements met, please follow these instructions to install this script. + +1) Clone this repository and cd into it. + +```shell +git clone https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc +``` + +2) Install all requirements, including + + * Ansible + * CenturyLink Cloud SDK + * Ansible Modules + +```shell +sudo pip install -r ansible/requirements.txt +``` + +3) Create the credentials file from the template and use it to set your ENV variables + +```shell +cp ansible/credentials.sh.template ansible/credentials.sh +vi ansible/credentials.sh +source ansible/credentials.sh + +``` + +4) Grant your machine access to the CenturyLink Cloud network by using a VM inside the network or [ configuring a VPN connection to the CenturyLink Cloud network.](https://www.ctl.io/knowledge-base/network/how-to-configure-client-vpn/) + + +#### Script Installation Example: Ubuntu 14 Walkthrough + +If you use an ubuntu 14, for your convenience we have provided a step by step +guide to install the requirements and install the script. + +```shell +# system +apt-get update +apt-get install -y git python python-crypto +curl -O https://bootstrap.pypa.io/get-pip.py +python get-pip.py + +# installing this repository +mkdir -p ~home/k8s-on-clc +cd ~home/k8s-on-clc +git clone https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc.git +cd adm-kubernetes-on-clc/ +pip install -r requirements.txt + +# getting started +cd ansible +cp credentials.sh.template credentials.sh; vi credentials.sh +source credentials.sh +``` + + + +## Cluster Creation + +To create a new Kubernetes cluster, simply run the ```kube-up.sh``` script. A complete +list of script options and some examples are listed below. + +```shell +CLC_CLUSTER_NAME=[name of kubernetes cluster] +cd ./adm-kubernetes-on-clc +bash kube-up.sh -c="$CLC_CLUSTER_NAME" +``` + +It takes about 15 minutes to create the cluster. Once the script completes, it +will output some commands that will help you setup kubectl on your machine to +point to the new cluster. + +When the cluster creation is complete, the configuration files for it are stored +locally on your administrative host, in the following directory + +```shell +> CLC_CLUSTER_HOME=$HOME/.clc_kube/$CLC_CLUSTER_NAME/ +``` + + +#### Cluster Creation: Script Options + +```shell +Usage: kube-up.sh [OPTIONS] +Create servers in the CenturyLinkCloud environment and initialize a Kubernetes cluster +Environment variables CLC_V2_API_USERNAME and CLC_V2_API_PASSWD must be set in +order to access the CenturyLinkCloud API + +All options (both short and long form) require arguments, and must include "=" +between option name and option value. 
+ + -h (--help) display this help and exit + -c= (--clc_cluster_name=) set the name of the cluster, as used in CLC group names + -t= (--minion_type=) standard -> VM (default), bareMetal -> physical] + -d= (--datacenter=) VA1 (default) + -m= (--minion_count=) number of kubernetes minion nodes + -mem= (--vm_memory=) number of GB ram for each minion + -cpu= (--vm_cpu=) number of virtual cps for each minion node + -phyid= (--server_conf_id=) physical server configuration id, one of + physical_server_20_core_conf_id + physical_server_12_core_conf_id + physical_server_4_core_conf_id (default) + -etcd_separate_cluster=yes create a separate cluster of three etcd nodes, + otherwise run etcd on the master node +``` + +## Cluster Expansion + +To expand an existing Kubernetes cluster, run the ```add-kube-node.sh``` +script. A complete list of script options and some examples are listed [below](#cluster-expansion-script-options). +This script must be run from the same host that created the cluster (or a host +that has the cluster artifact files stored in ```~/.clc_kube/$cluster_name```). + +```shell +cd ./adm-kubernetes-on-clc +bash add-kube-node.sh -c="name_of_kubernetes_cluster" -m=2 +``` + +#### Cluster Expansion: Script Options + +```shell +Usage: add-kube-node.sh [OPTIONS] +Create servers in the CenturyLinkCloud environment and add to an +existing CLC kubernetes cluster + +Environment variables CLC_V2_API_USERNAME and CLC_V2_API_PASSWD must be set in +order to access the CenturyLinkCloud API + + -h (--help) display this help and exit + -c= (--clc_cluster_name=) set the name of the cluster, as used in CLC group names + -m= (--minion_count=) number of kubernetes minion nodes to add +``` + +## Cluster Deletion + +There are two ways to delete an existing cluster: + +1) Use our python script: + +```shell +python delete_cluster.py --cluster=clc_cluster_name --datacenter=DC1 +``` + +2) Use the CenturyLink Cloud UI. To delete a cluster, log into the CenturyLink +Cloud control portal and delete the parent server group that contains the +Kubernetes Cluster. We hope to add a scripted option to do this soon. + +## Examples + +Create a cluster with name of k8s_1, 1 master node and 3 worker minions (on physical machines), in VA1 + +```shell +bash kube-up.sh --clc_cluster_name=k8s_1 --minion_type=bareMetal --minion_count=3 --datacenter=VA1 +``` + +Create a cluster with name of k8s_2, an ha etcd cluster on 3 VMs and 6 worker minions (on VMs), in VA1 + +```shell +bash kube-up.sh --clc_cluster_name=k8s_2 --minion_type=standard --minion_count=6 --datacenter=VA1 --etcd_separate_cluster=yes +``` + +Create a cluster with name of k8s_3, 1 master node, and 10 worker minions (on VMs) with higher mem/cpu, in UC1: + +```shell +bash kube-up.sh --clc_cluster_name=k8s_3 --minion_type=standard --minion_count=10 --datacenter=VA1 -mem=6 -cpu=4 +``` + + + +## Cluster Features and Architecture + +We configure the Kubernetes cluster with the following features: + +* KubeDNS: DNS resolution and service discovery +* Heapster/InfluxDB: For metric collection. Needed for Grafana and auto-scaling. 
+* Grafana: Kubernetes/Docker metric dashboard +* KubeUI: Simple web interface to view Kubernetes state +* Kube Dashboard: New web interface to interact with your cluster + +We use the following to create the Kubernetes cluster: + +* Kubernetes 1.1.7 +* Ubuntu 14.04 +* Flannel 0.5.4 +* Docker 1.9.1-0~trusty +* Etcd 2.2.2 + +## Optional add-ons + +* Logging: We offer an integrated centralized logging ELK platform so that all + Kubernetes and docker logs get sent to the ELK stack. To install the ELK stack + and configure Kubernetes to send logs to it, follow [the log + aggregation documentation](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/log_aggregration.md). Note: We don't install this by default as + the footprint isn't trivial. + +## Cluster management + +The most widely used tool for managing a Kubernetes cluster is the command-line +utility ```kubectl```. If you do not already have a copy of this binary on your +administrative machine, you may run the script ```install_kubectl.sh``` which will +download it and install it in ```/usr/bin/local```. + +The script requires that the environment variable ```CLC_CLUSTER_NAME``` be defined + +```install_kubectl.sh``` also writes a configuration file which will embed the necessary +authentication certificates for the particular cluster. The configuration file is +written to the ```${CLC_CLUSTER_HOME}/kube``` directory + +```shell +export KUBECONFIG=${CLC_CLUSTER_HOME}/kube/config +kubectl version +kubectl cluster-info +``` + +### Accessing the cluster programmatically + +It's possible to use the locally stored client certificates to access the apiserver. For example, you may want to use any of the [Kubernetes API client libraries](/docs/reference/using-api/client-libraries/) to program against your Kubernetes cluster in the programming language of your choice. + +To demonstrate how to use these locally stored certificates, we provide the following example of using ```curl``` to communicate to the master apiserver via https: + +```shell +curl \ + --cacert ${CLC_CLUSTER_HOME}/pki/ca.crt \ + --key ${CLC_CLUSTER_HOME}/pki/kubecfg.key \ + --cert ${CLC_CLUSTER_HOME}/pki/kubecfg.crt https://${MASTER_IP}:6443 +``` + +But please note, this *does not* work out of the box with the ```curl``` binary +distributed with macOS. + +### Accessing the cluster with a browser + +We install [the kubernetes dashboard](/docs/tasks/web-ui-dashboard/). When you +create a cluster, the script should output URLs for these interfaces like this: + +kubernetes-dashboard is running at ```https://${MASTER_IP}:6443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy```. + +Note on Authentication to the UIs: The cluster is set up to use basic +authentication for the user _admin_. Hitting the url at +```https://${MASTER_IP}:6443``` will require accepting the self-signed certificate +from the apiserver, and then presenting the admin password written to file at: + +```> _${CLC_CLUSTER_HOME}/kube/admin_password.txt_``` + + +### Configuration files + +Various configuration files are written into the home directory *CLC_CLUSTER_HOME* under +```.clc_kube/${CLC_CLUSTER_NAME}``` in several subdirectories. You can use these files +to access the cluster from machines other than where you created the cluster from. 
+ +* ```config/```: Ansible variable files containing parameters describing the master and minion hosts +* ```hosts/```: hosts files listing access information for the ansible playbooks +* ```kube/```: ```kubectl``` configuration files, and the basic-authentication password for admin access to the Kubernetes API +* ```pki/```: public key infrastructure files enabling TLS communication in the cluster +* ```ssh/```: SSH keys for root access to the hosts + + +## ```kubectl``` usage examples + +There are a great many features of _kubectl_. Here are a few examples + +List existing nodes, pods, services and more, in all namespaces, or in just one: + +```shell +kubectl get nodes +kubectl get --all-namespaces pods +kubectl get --all-namespaces services +kubectl get --namespace=kube-system replicationcontrollers +``` + +The Kubernetes API server exposes services on web URLs, which are protected by requiring +client certificates. If you run a kubectl proxy locally, ```kubectl``` will provide +the necessary certificates and serve locally over http. + +```shell +kubectl proxy -p 8001 +``` + +Then, you can access urls like ```http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/``` without the need for client certificates in your browser. + + +## What Kubernetes features do not work on CenturyLink Cloud + +These are the known items that don't work on CenturyLink cloud but do work on other cloud providers: + +- At this time, there is no support services of the type [LoadBalancer](/docs/tasks/access-application-cluster/create-external-load-balancer/). We are actively working on this and hope to publish the changes sometime around April 2016. + +- At this time, there is no support for persistent storage volumes provided by + CenturyLink Cloud. However, customers can bring their own persistent storage + offering. We ourselves use Gluster. + + +## Ansible Files + +If you want more information about our Ansible files, please [read this file](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/ansible/README.md) + +## Further reading + +Please see the [Kubernetes docs](/docs/) for more details on administering +and using a Kubernetes cluster. + + + diff --git a/content/ko/docs/setup/turnkey/gce.md b/content/ko/docs/setup/turnkey/gce.md new file mode 100644 index 0000000000000..a69675a40e641 --- /dev/null +++ b/content/ko/docs/setup/turnkey/gce.md @@ -0,0 +1,224 @@ +--- +title: Running Kubernetes on Google Compute Engine +content_template: templates/task +--- + +{{% capture overview %}} + +The example below creates a Kubernetes cluster with 4 worker node Virtual Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient). + +{{% /capture %}} + +{{% capture prerequisites %}} + +If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) for hosted cluster installation and management. + +For an easy way to experiment with the Kubernetes development environment, click the button below +to open a Google Cloud Shell with an auto-cloned copy of the Kubernetes source repo. 
+ +[![Open in Cloud Shell](https://gstatic.com/cloudssh/images/open-btn.png)](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/kubernetes/kubernetes&page=editor&open_in_editor=README.md) + +If you want to use custom binaries or pure open source Kubernetes, please continue with the instructions below. + +### Prerequisites + +1. You need a Google Cloud Platform account with billing enabled. Visit the [Google Developers Console](https://console.cloud.google.com) for more details. +1. Install `gcloud` as necessary. `gcloud` can be installed as a part of the [Google Cloud SDK](https://cloud.google.com/sdk/). +1. Enable the [Compute Engine Instance Group Manager API](https://console.developers.google.com/apis/api/replicapool.googleapis.com/overview) in the [Google Cloud developers console](https://console.developers.google.com/apis/library). +1. Make sure that gcloud is set to use the Google Cloud Platform project you want. You can check the current project using `gcloud config list project` and change it via `gcloud config set project `. +1. Make sure you have credentials for GCloud by running `gcloud auth login`. +1. (Optional) In order to make API calls against GCE, you must also run `gcloud auth application-default login`. +1. Make sure you can start up a GCE VM from the command line. At least make sure you can do the [Create an instance](https://cloud.google.com/compute/docs/instances/#startinstancegcloud) part of the GCE Quickstart. +1. Make sure you can SSH into the VM without interactive prompts. See the [Log in to the instance](https://cloud.google.com/compute/docs/instances/#sshing) part of the GCE Quickstart. + +{{% /capture %}} + +{{% capture steps %}} + +## Starting a cluster + +You can install a client and start a cluster with either one of these commands (we list both in case only one is installed on your machine): + + +```shell +curl -sS https://get.k8s.io | bash +``` + +or + +```shell +wget -q -O - https://get.k8s.io | bash +``` + +Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster. + +By default, some containers will already be running on your cluster. Containers like `fluentd` provide [logging](/docs/concepts/cluster-administration/logging/), while `heapster` provides [monitoring](https://releases.k8s.io/master/cluster/addons/cluster-monitoring/README.md) services. + +The script run by the commands above creates a cluster with the name/prefix "kubernetes". It defines one specific cluster config, so you can't run it more than once. + +Alternately, you can download and install the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases), then run the `/cluster/kube-up.sh` script to start the cluster: + +```shell +cd kubernetes +cluster/kube-up.sh +``` + +If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster. + +If you run into trouble, please see the section on [troubleshooting](/docs/setup/turnkey/gce/#troubleshooting), post to the +[Kubernetes Forum](https://discuss.kubernetes.io), or come ask questions on [Slack](/docs/troubleshooting/#slack). + +The next few steps will show you: + +1. How to set up the command line client on your workstation to manage the cluster +1. Examples of how to use the cluster +1. How to delete the cluster +1. 
How to start clusters with non-default options (like larger clusters) + +## Installing the Kubernetes command line tools on your workstation + +The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation. + +The [kubectl](/docs/user-guide/kubectl/) tool controls the Kubernetes cluster +manager. It lets you inspect your cluster resources, create, delete, and update +components, and much more. You will use it to look at your new cluster and bring +up example apps. + +You can use `gcloud` to install the `kubectl` command-line tool on your workstation: + +```shell +gcloud components install kubectl +``` + +{{< note >}} +**Note:** The kubectl version bundled with `gcloud` may be older than the one +downloaded by the get.k8s.io install script. See [Installing kubectl](/docs/tasks/kubectl/install/) +document to see how you can set up the latest `kubectl` on your workstation. +{{< /note >}} + +## Getting started with your cluster + +### Inspect your cluster + +Once `kubectl` is in your path, you can use it to look at your cluster. E.g., running: + +```shell +kubectl get --all-namespaces services +``` + +should show a set of [services](/docs/user-guide/services) that look something like this: + +```shell +NAMESPACE NAME TYPE CLUSTER_IP EXTERNAL_IP PORT(S) AGE +default kubernetes ClusterIP 10.0.0.1 443/TCP 1d +kube-system kube-dns ClusterIP 10.0.0.2 53/TCP,53/UDP 1d +kube-system kube-ui ClusterIP 10.0.0.3 80/TCP 1d +... +``` + +Similarly, you can take a look at the set of [pods](/docs/user-guide/pods) that were created during cluster startup. +You can do this via the + +```shell +kubectl get --all-namespaces pods +``` + +command. + +You'll see a list of pods that looks something like this (the name specifics will be different): + +```shell +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-system fluentd-cloud-logging-kubernetes-minion-63uo 1/1 Running 0 14m +kube-system fluentd-cloud-logging-kubernetes-minion-c1n9 1/1 Running 0 14m +kube-system fluentd-cloud-logging-kubernetes-minion-c4og 1/1 Running 0 14m +kube-system fluentd-cloud-logging-kubernetes-minion-ngua 1/1 Running 0 14m +kube-system kube-dns-v5-7ztia 3/3 Running 0 15m +kube-system kube-ui-v1-curt1 1/1 Running 0 15m +kube-system monitoring-heapster-v5-ex4u3 1/1 Running 1 15m +kube-system monitoring-influx-grafana-v1-piled 2/2 Running 0 15m +``` + +Some of the pods may take a few seconds to start up (during this time they'll show `Pending`), but check that they all show as `Running` after a short period. + +### Run some examples + +Then, see [a simple nginx example](/docs/user-guide/simple-nginx) to try out your new cluster. + +For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/). The [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) is a good "getting started" walkthrough. + +## Tearing down the cluster + +To remove/delete/teardown the cluster, use the `kube-down.sh` script. + +```shell +cd kubernetes +cluster/kube-down.sh +``` + +Likewise, the `kube-up.sh` in the same directory will bring it back up. You do not need to rerun the `curl` or `wget` command: everything needed to setup the Kubernetes cluster is now on your workstation. + +## Customizing + +The script above relies on Google Storage to stage the Kubernetes release. It +then will start (by default) a single master VM along with 4 worker VMs. 
You +can tweak some of these parameters by editing `kubernetes/cluster/gce/config-default.sh` +You can view a transcript of a successful cluster creation +[here](https://gist.github.com/satnam6502/fc689d1b46db9772adea). + +## Troubleshooting + +### Project settings + +You need to have the Google Cloud Storage API, and the Google Cloud Storage +JSON API enabled. It is activated by default for new projects. Otherwise, it +can be done in the Google Cloud Console. See the [Google Cloud Storage JSON +API Overview](https://cloud.google.com/storage/docs/json_api/) for more +details. + +Also ensure that-- as listed in the [Prerequisites section](#prerequisites)-- you've enabled the `Compute Engine Instance Group Manager API`, and can start up a GCE VM from the command line as in the [GCE Quickstart](https://cloud.google.com/compute/docs/quickstart) instructions. + +### Cluster initialization hang + +If the Kubernetes startup script hangs waiting for the API to be reachable, you can troubleshoot by SSHing into the master and node VMs and looking at logs such as `/var/log/startupscript.log`. + +**Once you fix the issue, you should run `kube-down.sh` to cleanup** after the partial cluster creation, before running `kube-up.sh` to try again. + +### SSH + +If you're having trouble SSHing into your instances, ensure the GCE firewall +isn't blocking port 22 to your VMs. By default, this should work but if you +have edited firewall rules or created a new non-default network, you'll need to +expose it: `gcloud compute firewall-rules create default-ssh --network= +--description "SSH allowed from anywhere" --allow tcp:22` + +Additionally, your GCE SSH key must either have no passcode or you need to be +using `ssh-agent`. + +### Networking + +The instances must be able to connect to each other using their private IP. The +script uses the "default" network which should have a firewall rule called +"default-allow-internal" which allows traffic on any port on the private IPs. +If this rule is missing from the default network or if you change the network +being used in `cluster/config-default.sh` create a new rule with the following +field values: + +* Source Ranges: `10.0.0.0/8` +* Allowed Protocols and Port: `tcp:1-65535;udp:1-65535;icmp` + +## Support Level + + +IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level +-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- +GCE | Saltstack | Debian | GCE | [docs](/docs/setup/turnkey/gce/) | | Project + +For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. + +## Further reading + +Please see the [Kubernetes docs](/docs/) for more details on administering +and using a Kubernetes cluster. + +{{% /capture %}} diff --git a/content/ko/docs/setup/turnkey/stackpoint.md b/content/ko/docs/setup/turnkey/stackpoint.md new file mode 100644 index 0000000000000..cebe925f525c8 --- /dev/null +++ b/content/ko/docs/setup/turnkey/stackpoint.md @@ -0,0 +1,189 @@ +--- +reviewers: +- baldwinspc +title: Running Kubernetes on Multiple Clouds with Stackpoint.io +content_template: templates/concept +--- + +{{% capture overview %}} + +[StackPointCloud](https://stackpoint.io/) is the universal control plane for Kubernetes Anywhere. StackPointCloud allows you to deploy and manage a Kubernetes cluster to the cloud provider of your choice in 3 steps using a web-based interface. 
+ +{{% /capture %}} + +{{% capture body %}} + +## AWS + +To create a Kubernetes cluster on AWS, you will need an Access Key ID and a Secret Access Key from AWS. + +1. Choose a Provider + + a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account. + + b. Click **+ADD A CLUSTER NOW**. + + c. Click to select Amazon Web Services (AWS). + +1. Configure Your Provider + + a. Add your Access Key ID and a Secret Access Key from AWS. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair. + + b. Click **SUBMIT** to submit the authorization information. + +1. Configure Your Cluster + + Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster. + +1. Run the Cluster + + You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters). + + For information on using and managing a Kubernetes cluster on AWS, [consult the Kubernetes documentation](/docs/getting-started-guides/aws/). + + +## GCE + +To create a Kubernetes cluster on GCE, you will need the Service Account JSON Data from Google. + +1. Choose a Provider + + a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account. + + b. Click **+ADD A CLUSTER NOW**. + + c. Click to select Google Compute Engine (GCE). + +1. Configure Your Provider + + a. Add your Service Account JSON Data from Google. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair. + + b. Click **SUBMIT** to submit the authorization information. + +1. Configure Your Cluster + + Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster. + +1. Run the Cluster + + You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters). + + For information on using and managing a Kubernetes cluster on GCE, [consult the Kubernetes documentation](/docs/getting-started-guides/gce/). + + +## Google Kubernetes Engine + +To create a Kubernetes cluster on Google Kubernetes Engine, you will need the Service Account JSON Data from Google. + +1. Choose a Provider + + a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account. + + b. Click **+ADD A CLUSTER NOW**. + + c. Click to select Google Kubernetes Engine. + +1. Configure Your Provider + + a. Add your Service Account JSON Data from Google. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair. + + b. Click **SUBMIT** to submit the authorization information. + +1. Configure Your Cluster + + Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster. + +1. Run the Cluster + + You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters). + + For information on using and managing a Kubernetes cluster on Google Kubernetes Engine, consult [the official documentation](/docs/home/). + + +## DigitalOcean + +To create a Kubernetes cluster on DigitalOcean, you will need a DigitalOcean API Token. + +1. Choose a Provider + + a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account. + + b. Click **+ADD A CLUSTER NOW**. + + c. Click to select DigitalOcean. + +1. Configure Your Provider + + a. Add your DigitalOcean API Token. 
Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair. + + b. Click **SUBMIT** to submit the authorization information. + +1. Configure Your Cluster + + Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster. + +1. Run the Cluster + + You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters). + + For information on using and managing a Kubernetes cluster on DigitalOcean, consult [the official documentation](/docs/home/). + + +## Microsoft Azure + +To create a Kubernetes cluster on Microsoft Azure, you will need an Azure Subscription ID, Username/Email, and Password. + +1. Choose a Provider + + a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account. + + b. Click **+ADD A CLUSTER NOW**. + + c. Click to select Microsoft Azure. + +1. Configure Your Provider + + a. Add your Azure Subscription ID, Username/Email, and Password. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair. + + b. Click **SUBMIT** to submit the authorization information. + +1. Configure Your Cluster + + Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster. + +1. Run the Cluster + + You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters). + + For information on using and managing a Kubernetes cluster on Azure, [consult the Kubernetes documentation](/docs/getting-started-guides/azure/). + + +## Packet + +To create a Kubernetes cluster on Packet, you will need a Packet API Key. + +1. Choose a Provider + + a. Log in to [stackpoint.io](https://stackpoint.io) with a GitHub, Google, or Twitter account. + + b. Click **+ADD A CLUSTER NOW**. + + c. Click to select Packet. + +1. Configure Your Provider + + a. Add your Packet API Key. Select your default StackPointCloud SSH keypair, or click **ADD SSH KEY** to add a new keypair. + + b. Click **SUBMIT** to submit the authorization information. + +1. Configure Your Cluster + + Choose any extra options you may want to include with your cluster, then click **SUBMIT** to create the cluster. + +1. Run the Cluster + + You can monitor the status of your cluster and suspend or delete it from [your stackpoint.io dashboard](https://stackpoint.io/#/clusters). + + For information on using and managing a Kubernetes cluster on Packet, consult [the official documentation](/docs/home/). + +{{% /capture %}} diff --git a/content/ko/docs/tutorials/_index.md b/content/ko/docs/tutorials/_index.md new file mode 100644 index 0000000000000..3fb49affb1c17 --- /dev/null +++ b/content/ko/docs/tutorials/_index.md @@ -0,0 +1,78 @@ +--- +title: 튜토리얼 +main_menu: true +weight: 60 +content_template: templates/concept +--- + +{{% capture overview %}} + +쿠버네티스 문서의 본 섹션은 튜토리얼을 포함하고 있다. +튜토리얼은 개별 [작업](/docs/tasks) 단위보다 더 큰 목표를 달성하기 +위한 방법을 보여준다. 일반적으로 튜토리얼은 각각 순차적 단계가 있는 여러 +섹션으로 구성된다. +각 튜토리얼을 따라하기 전에, 나중에 참조할 수 있도록 +[표준 용어집](/docs/reference/glossary/) 페이지를 북마크하기를 권한다. + +{{% /capture %}} + +{{% capture body %}} + +## 기초 + +* [쿠버네티스 기초](/ko/docs/tutorials/kubernetes-basics/)는 쿠버네티스 시스템을 이해하는데 도움이 되고 기초적인 쿠버네티스 기능을 일부 사용해 볼 수 있는 심도있는 대화형 튜토리얼이다. 
+ +* [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615) + +* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#) + +* [Hello Minikube](/ko/docs/tutorials/hello-minikube/) + +## 구성 + +* [Configuring Redis Using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/) + +## 상태 유지를 하지 않는(stateless) 애플리케이션 + +* [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/) + +* [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/) + +## 상태 유지가 필요한(stateful) 애플리케이션 + +* [StatefulSet Basics](/docs/tutorials/stateful-application/basic-stateful-set/) + +* [Example: WordPress and MySQL with Persistent Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/) + +* [Example: Deploying Cassandra with Stateful Sets](/docs/tutorials/stateful-application/cassandra/) + +* [Running ZooKeeper, A CP Distributed System](/docs/tutorials/stateful-application/zookeeper/) + +## CI/CD 파이프라인 + +* [Set Up a CI/CD Pipeline with Kubernetes Part 1: Overview](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/5/set-cicd-pipeline-kubernetes-part-1-overview) + +* [Set Up a CI/CD Pipeline with a Jenkins Pod in Kubernetes (Part 2)](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/6/set-cicd-pipeline-jenkins-pod-kubernetes-part-2) + +* [Run and Scale a Distributed Crossword Puzzle App with CI/CD on Kubernetes (Part 3)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/run-and-scale-distributed-crossword-puzzle-app-cicd-kubernetes-part-3) + +* [Set Up CI/CD for a Distributed Crossword Puzzle App on Kubernetes (Part 4)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/set-cicd-distributed-crossword-puzzle-app-kubernetes-part-4) + +## 클러스터 + +* [AppArmor](/docs/tutorials/clusters/apparmor/) + +## 서비스 + +* [Using Source IP](/docs/tutorials/services/source-ip/) + +{{% /capture %}} + +{{% capture whatsnext %}} + +튜토리얼을 작성하고 싶다면, +튜토리얼 페이지 유형과 튜토리얼 템플릿에 대한 정보가 있는 +[Using Page Templates](/docs/home/contribute/page-templates/) +페이지를 참조한다. + +{{% /capture %}} diff --git a/content/ko/docs/tutorials/hello-minikube.md b/content/ko/docs/tutorials/hello-minikube.md new file mode 100644 index 0000000000000..e6cedf8a89bcb --- /dev/null +++ b/content/ko/docs/tutorials/hello-minikube.md @@ -0,0 +1,430 @@ +--- +title: Hello Minikube +content_template: templates/tutorial +weight: 5 +menu: + main: + title: "Get Started" + weight: 10 + post: > +

Ready to get your hands dirty? Build a simple Kubernetes cluster that runs "Hello World" for Node.js.

+--- + +{{% capture overview %}} + +이 튜토리얼의 목표는 Node.js 로 작성된 간단한 Hello World 애플리케이션을 쿠버네티스에서 실행되는 +애플리케이션으로 변환하는 것이다. 튜토리얼을 통해 로컬에서 작성된 코드를 Docker 컨테이너 이미지로 +변환한 다음, [Minikube](/docs/getting-started-guides/minikube)에서 해당 이미지를 실행하는 +방법을 보여 준다. Minikube는 무료로 로컬 머신을 이용해서 쿠버네티스를 실행할 수 있는 간단한 방법을 +제공한다. + +{{% /capture %}} + +{{% capture objectives %}} + +* Node.js로 hello world 애플리케이션을 실행한다. +* Minikube에 만들어진 애플리케이션을 배포한다. +* 애플리케이션 로그를 확인한다. +* 애플리케이션 이미지를 업데이트한다. + + +{{% /capture %}} + +{{% capture prerequisites %}} + +* macOS의 경우, [Homebrew](https://brew.sh)를 사용하여 Minikube를 설치할 수 있다. + + {{< note >}} + **참고:** macOS 10.13 버전으로 업데이트 후 `brew update`를 실행 시 Homebrew에서 다음과 같은 오류가 발생할 경우에는, + + ``` + Error: /usr/local is not writable. You should change the ownership + and permissions of /usr/local back to your user account: + sudo chown -R $(whoami) /usr/local + ``` + Homebrew를 다시 설치하여 문제를 해결할 수 있다. + ``` + /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" + ``` + {{< /note >}} + +* 예제 애플리케이션을 실행하기 위해서는 [NodeJS](https://nodejs.org/en/)가 필요하다. + +* Docker를 설치한다. macOS의 경우, +[Docker for Mac](https://docs.docker.com/engine/installation/mac/)를 권장한다. + + +{{% /capture %}} + +{{% capture lessoncontent %}} + +## Minikube 클러스터 만들기 + +이 튜토리얼에서는 [Minikube](https://github.com/kubernetes/minikube)를 사용하여 +로컬 클러스터를 만든다. 이 튜토리얼에서는 macOS에서 +[Docker for Mac](https://docs.docker.com/engine/installation/mac/)을 +사용한다고 가정하였다. Docker for Mac 대신 Linux 혹은 VirtualBox와 같이 다른 플랫폼을 +사용하는 경우, Minikube를 설치하는 방법이 약간 다를 수 있다. 일반적인 Minikube 설치 지침은 +[Minikube installation guide](/docs/getting-started-guides/minikube/) +를 참조한다. + +Homebrew를 사용하여 최신 버전의 Minikube를 설치한다. +```shell +brew cask install minikube +``` + +[Minikube driver installation guide](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#hyperkit-driver)에 +설명한 것과 같이 HyperKit 드라이버를 설치한다. + +Homebrew를 사용하여 쿠버네티스 클러스터와 상호 작용을 위한 +`kubectl` 명령줄 도구를 다운로드한다. + +```shell +brew install kubernetes-cli +``` + +프록시를 거치지않고 직접 [https://cloud.google.com/container-registry/](https://cloud.google.com/container-registry/)같은 사이트에 액세스 할 수 있는지 확인하려면 새 터미널을 열고 다음과 같이 실행한다. + +```shell +curl --proxy "" https://cloud.google.com/container-registry/ +``` + +Docker 데몬이 시작되었는지 확인한다. Docker가 실행 중인지는 다음과 같은 커맨드를 사용하여 확인할 수 있다. + +```shell +docker images +``` + +프록시가 필요하지 않은 경우, Minikube 클러스터를 시작한다. + +```shell +minikube start --vm-driver=hyperkit +``` +프록시가 필요한 경우, 다음 방법을 사용하여 프록시 설정과 함께 Minikube 클러스터를 시작할 수 있다. + +```shell +minikube start --vm-driver=hyperkit --docker-env HTTP_PROXY=http://your-http-proxy-host:your-http-proxy-port --docker-env HTTPS_PROXY=http(s)://your-https-proxy-host:your-https-proxy-port +``` + +`--vm-driver=hyperkit` 플래그는 Docker for Mac을 사용하고 있음을 의미한다. +기본 VM 드라이버는 VirtualBox이다. + +이제 Minikube 컨텍스트를 설정한다. 컨텍스트는 'kubectl'이 어떠한 클러스터와 상호 작용하려고 +하는지를 결정한다. `~/.kube/config` 파일에 사용 가능한 모든 컨텍스트가 들어있다. + +```shell +kubectl config use-context minikube +``` + +`kubectl`이 클러스터와 통신할 수 있도록 설정되어 있는지 확인한다. + +```shell +kubectl cluster-info +``` + +브라우저에서 쿠버네티스 대시보드를 연다. + +```shell +minikube dashboard +``` + +## Node.js 애플리케이션 만들기 + +다음 단계에서는 애플리케이션을 작성해 본다. 아래 코드를 `hellonode` 폴더에 +`server.js`라는 이름으로 저장한다. + +{{< codenew language="js" file="minikube/server.js" >}} + +작성된 애플리케이션을 실행한다. + +```shell +node server.js +``` + +[http://localhost:8080/](http://localhost:8080/)에 접속하면 "Hello World!"라는 메시지를 확인할 수 있을 것이다. + +**Ctrl-C**를 입력하면 실행 중인 Node.js 서버가 중단된다. + +다음 단계는 작성된 애플리케이션을 Docker 컨테이너에 패키지하는 것이다. 
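+
+(참고) 서버가 실행 중인 상태라면, 브라우저 대신 `curl`로도 응답을 확인해 볼 수 있다. 아래는 참고용 예시로, 다른 터미널에서 실행한다고 가정한다.
+
+```shell
+# 참고용 예시: server.js가 로컬의 8080 포트에서 실행 중이라고 가정한다.
+curl http://localhost:8080
+# 예상 출력: Hello World!
+```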
+ +## Docker 컨테이너 이미지 만들기 + +앞에서 사용하였던 `hellonode` 폴더에 `Dockerfile`이라는 이름으로 파일을 만든다. Dockerfile +은 빌드하고자 하는 이미지를 기술한 파일이다. 기존 이미지를 확장하여 Docker +컨테이너 이미지를 빌드할 수 있다. 이 튜토리얼에서는 기존 Node.js 이미지를 확장하여 사용한다. + +{{< codenew language="conf" file="minikube/Dockerfile" >}} + +본 레시피는 Docker 레지스트리에 있는 공식 Node.js LTS 이미지로부터 시작해서, +8080 포트를 열고, `server.js` 파일을 이미지에 복사하고 +Node.js 서버를 시작한다. + +이 튜토리얼은 Minikube를 사용하기 때문에, Docker 이미지를 레지스트리로 Push하는 대신, +Minikube VM과 동일한 Docker 호스트를 사용하면 이미지를 단순히 빌드하기만 해도 +이미지가 자동으로 (역주: Minikube에서 사용할 수 있는 위치에) 생긴다. 이를 위해서, +다음의 커맨드를 사용해서 Minikube Docker 데몬을 사용할 수 있도록 한다. + +```shell +eval $(minikube docker-env) +``` + +{{< note >}} +**참고:** 나중에 Minikube 호스트를 더 이상 사용하고 싶지 않은 경우, +`eval $ (minikube docker-env -u)`를 실행하여 변경을 되돌릴 수 있다. +{{< /note >}} + +Minikube Docker 데몬을 사용하여 Docker 이미지를 빌드한다. (마지막의 점에 주의) + +```shell +docker build -t hello-node:v1 . +``` + +이제 Minikube VM에서 빌드한 이미지를 실행할 수 있다. + +## 디플로이먼트 만들기 + +쿠버네티스 [*파드*](/docs/concepts/workloads/pods/pod/)는 관리 및 네트워크 구성을 목적으로 +함께 묶은 하나 이상의 컨테이너 그룹이다. +이 튜토리얼의 파드에는 단 하나의 컨테이너만 있다. +쿠버네티스 [*디플로이먼트*](/docs/concepts/workloads/controllers/deployment/)는 파드의 +헬스를 검사해서 파드의 컨테이너가 종료되면 다시 시작해준다. +파드의 생성 및 확장을 관리하는 방법으로 디플로이먼트를 권장한다. + +`kubectl run` 커맨드를 사용하여 파드를 관리하는 디플로이먼트를 만든다. +파드는 `hello-node:v1` Docker 이미지를 기반으로 한 컨테이너를 실행한다. +(이미지를 레지스트리에 Push하지 않았기 때문에) Docker 레지스트리에서 이미지를 가져오기 보다는, +항상 로컬 이미지를 사용하기 위해 `--image-pull-policy` 플래그를 `Never`로 설정한다. + +```shell +kubectl run hello-node --image=hello-node:v1 --port=8080 --image-pull-policy=Never +``` + +디플로이먼트를 확인한다. + + +```shell +kubectl get deployments +``` + +출력: + + +```shell +NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE +hello-node 1 1 1 1 3m +``` + +파드를 확인한다. + + +```shell +kubectl get pods +``` + +출력: + + +```shell +NAME READY STATUS RESTARTS AGE +hello-node-714049816-ztzrb 1/1 Running 0 6m +``` + +클러스터 이벤트를 확인한다. + +```shell +kubectl get events +``` + +`kubectl`의 설정을 확인한다. + +```shell +kubectl config view +``` + +`kubectl` 커맨드에 대한 더 많은 정보를 원하는 경우, +[kubectl overview](/docs/user-guide/kubectl-overview/)를 확인한다. + +## 서비스 만들기 + +기본적으로 파드는 쿠버네티스 클러스터 내의 내부 IP 주소로만 접속 가능하다. +쿠버네티스 가상 네트워크 밖에서 `hello-node` 컨테이너에 접속하기 위해서는 파드를 +쿠버네티스 [*서비스*](/docs/concepts/services-networking/service/)로 +노출해야 한다. + +개발 환경에서, `kubectl expose` 커맨드를 사용해서 파드를 퍼블릭 인터넷에 +노출할 수 있다. + +```shell +kubectl expose deployment hello-node --type=LoadBalancer +``` + +방금 생성한 서비스를 확인한다. + +```shell +kubectl get services +``` + +출력: + +```shell +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +hello-node ClusterIP 10.0.0.71 8080/TCP 6m +kubernetes ClusterIP 10.0.0.1 443/TCP 14d +``` + +`--type=LoadBalancer` 플래그는 해당 서비스를 클러스터 바깥으로 노출시키는 +것을 지시한다. 로드 밸런서를 지원하는 클라우드 제공 업체의 경우, 외부 +IP 주소가 프로비저닝되어서 서비스에 접근할 수 있도록 해준다. Minikube에서 +`LoadBalancer` 타입의 서비스는 `minikube service` 커맨드를 통해 접근할 수 있다. + +```shell +minikube service hello-node +``` + +위 커맨드는 앱을 서비스하는 로컬 IP 주소로 브라우저를 자동으로 열어서 +"Hello World" 메세지를 보여준다. + +브라우저 또는 curl을 통해 새 웹서비스에 요청을 보내면, 로그가 쌓이는 +것을 확인할 수 있을 것이다. + +```shell +kubectl logs +``` + +## App 업데이트 + +새로운 메시지를 출력하도록 `server.js` 파일을 수정한다. + +```javascript +response.end('Hello World Again!'); + +``` + +새로운 버전의 이미지를 빌드한다. (마지막의 점에 주의하라) + +```shell +docker build -t hello-node:v2 . +``` + +디플로이먼트의 이미지를 업데이트한다. + +```shell +kubectl set image deployment/hello-node hello-node=hello-node:v2 +``` + +앱을 다시 실행하여 새로운 메시지를 확인한다. + +```shell +minikube service hello-node +``` + +## 애드온 활성화하기 + +Minikube에는 활성화하거나 비활성화할 수 있고 로컬 쿠버네티스 환경에서 접속해 볼 수 있는 내장 애드온이 있다. + +우선 현재 지원되는 애드온 목록을 확인한다. 
+ +```shell +minikube addons list +``` + +출력: + +```shell +- storage-provisioner: enabled +- kube-dns: enabled +- registry: disabled +- registry-creds: disabled +- addon-manager: enabled +- dashboard: disabled +- default-storageclass: enabled +- coredns: disabled +- heapster: disabled +- efk: disabled +- ingress: disabled +``` + +이하의 커맨드를 적용하기 위해서는 Minikube가 실행 중이어야 한다. 예를 들어, `heapster` 애드온을 활성화하기 위해서는 +다음과 같이 실행한다. + +```shell +minikube addons enable heapster +``` + +출력: + +```shell +heapster was successfully enabled +``` + +생성한 파드와 서비스를 확인한다. + +```shell +kubectl get po,svc -n kube-system +``` + +출력: + +```shell +NAME READY STATUS RESTARTS AGE +pod/heapster-zbwzv 1/1 Running 0 2m +pod/influxdb-grafana-gtht9 2/2 Running 0 2m + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/heapster NodePort 10.0.0.52 80:31655/TCP 2m +service/monitoring-grafana NodePort 10.0.0.33 80:30002/TCP 2m +service/monitoring-influxdb ClusterIP 10.0.0.43 8083/TCP,8086/TCP 2m +``` + +브라우저에서 엔드포인트를 열어 heapster와 상호 작용한다. + +```shell +minikube addons open heapster +``` + +출력: + +```shell +Opening kubernetes service kube-system/monitoring-grafana in default browser... +``` + +## 제거하기 + +이제 클러스터에서 만들어진 리소스를 제거한다. + +```shell +kubectl delete service hello-node +kubectl delete deployment hello-node +``` + +필요 시, 생성된 Docker 이미지를 강제로 제거한다. + +```shell +docker rmi hello-node:v1 hello-node:v2 -f +``` + +필요 시, Minikube VM을 정지한다. + +```shell +minikube stop +eval $(minikube docker-env -u) +``` + +필요 시, Minikube VM을 삭제한다. + +```shell +minikube delete +``` + +{{% /capture %}} + + +{{% capture whatsnext %}} + +* [Deployment objects](/docs/concepts/workloads/controllers/deployment/)에 대해서 더 배워 본다. +* [Deploying applications](/docs/user-guide/deploying-applications/)에 대해서 더 배워 본다. +* [Service objects](/docs/concepts/services-networking/service/)에 대해서 더 배워 본다. + +{{% /capture %}} + + diff --git a/content/ko/docs/tutorials/kubernetes-basics/_index.html b/content/ko/docs/tutorials/kubernetes-basics/_index.html new file mode 100644 index 0000000000000..da2487b5d1dfe --- /dev/null +++ b/content/ko/docs/tutorials/kubernetes-basics/_index.html @@ -0,0 +1,114 @@ +--- +title: 쿠버네티스 기초 학습 +linkTitle: 쿠버네티스 기초 학습 +weight: 5 +--- + + + + + + + +
+ +
+ +
+
+

쿠버네티스 기초

+

이 튜토리얼에서는 쿠버네티스 클러스터 오케스트레이션 시스템의 기초를 익힐 수 있는 가이드를 제공한다. 각각의 모듈에는 쿠버네티스의 주요 기능과 개념에 대한 배경 지식이 담겨 있으며 대화형 온라인 튜토리얼도 포함되어 있다. 대화형 튜토리얼에서 간단한 클러스터와 그 클러스터 상의 컨테이너화된 애플리케이션을 직접 관리해볼 수 있다.

+

대화형 튜토리얼을 사용해서 다음의 내용을 배울 수 있다.

+
    +
  • 컨테이너화된 애플리케이션을 클러스터에 배포하기
  • +
  • 디플로이먼트를 스케일링하기
  • +
  • 컨테이너화된 애플리케이션을 새로운 소프트웨어 버전으로 업데이트하기
  • +
  • 컨테이너화된 애플리케이션을 디버그하기
  • +
+

이 튜토리얼에서는 Katacoda를 사용해서 독자의 웹브라우저에서 Minikube가 동작하는 가상 터미널을 구동시킨다. Minikube는 로컬에 설치할 수 있는 작은 규모의 쿠버네티스로써 어디에서든 작동된다. 어떤 소프트웨어도 설치할 필요가 없고, 아무 것도 설정할 필요가 없다. 왜냐하면 대화형 튜토리얼이 웹브라우저 자체에서 바로 동작하기 때문이다.

+
+
+ +
+ +
+
+

쿠버네티스가 어떤 도움이 될까?

+

오늘날의 웹서비스에 대해서, 사용자는 애플리케이션이 24/7 가용하기를 바라고, 개발자는 하루에도 몇 번이고 새로운 버전의 애플리케이션을 배포하기를 바란다. 컨테이너화를 통해 소프트웨어를 패키지하면 애플리케이션을 다운타임 없이 쉽고 빠르게 릴리스 및 업데이트할 수 있게 되어서 이런 목표를 달성하는데 도움이 된다. 쿠버네티스는 이렇게 컨테이너화된 애플리케이션을 원하는 곳 어디에든 또 언제든 구동시킬 수 있다는 확신을 갖는데 도움을 주며, 그 애플리케이션이 작동하는데 필요한 자원과 도구를 찾는 것을 도와준다. 쿠버네티스는 구글의 컨테이너 오케스트레이션 부문의 축적된 경험으로 설계되고 커뮤니티로부터 도출된 최고의 아이디어가 결합된 운영 수준의 오픈 소스 플랫폼이다.

+
+
+ +
+

쿠버네티스 기초 모듈

+ +
+ + + +
+ +
+ + + diff --git a/content/ko/docs/tutorials/kubernetes-basics/create-cluster/_index.md b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/_index.md new file mode 100644 index 0000000000000..3f5c525b0b778 --- /dev/null +++ b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/_index.md @@ -0,0 +1,4 @@ +--- +title: 클러스터 생성하기 +weight: 10 +--- diff --git a/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html new file mode 100644 index 0000000000000..90b0abecaa4b6 --- /dev/null +++ b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html @@ -0,0 +1,41 @@ +--- +title: 대화형 튜토리얼 - 클러스터 생성하기 +weight: 20 +--- + + + + + + + + + + + +
+ +
+ +
+
+ 터미널로 상호 작용하기 위해서, 데스크탑/태블릿 버전을 사용해주세요 +
+
+
+ + +
+ +
+ + + diff --git a/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html new file mode 100644 index 0000000000000..ff101a153115f --- /dev/null +++ b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html @@ -0,0 +1,132 @@ +--- +title: Minikube를 사용해서 클러스터 생성하기 +weight: 10 +--- + + + + + + + + + +
+ +
+ +
+ +
+

목표

+
    +
  • 쿠버네티스 클러스터가 무엇인지 배운다.
  • +
  • Minikube가 무엇인지 배운다.
  • +
  • 온라인 터미널을 사용해서 쿠버네티스 클러스터를 시작한다.
  • +
+
+ +
+

쿠버네티스 클러스터

+

+ 쿠버네티스는 서로 연결되어서 단일 유닛처럼 동작하는 고가용성의 컴퓨터 클러스터를 상호조정한다. + 쿠버네티스의 추상화된 개념을 통해 개별 머신에 얽매이지 않고 컨테이너화된 애플리케이션을 클러스터에 + 배포할 수 있다. 이렇게 새로운 배포 모델을 활용하려면, 애플리케이션을 개별 호스트에 결합되지 않는 + 방식으로 패키지할 필요가 있다. 즉, 컨테이너화 해야 한다. 컨테이너화된 애플리케이션은 호스트에 + 매우 깊이 통합된 패키지로써 특정 머신에 직접 설치되는 예전의 배포 모델보다 유연하고 가용성이 높다. + 쿠버네티스는 애플리케이션 컨테이너를 클러스터에 분산시키고 스케줄링하는 일을 보다 효율적으로 + 자동화한다. 쿠버네티스는 오픈소스 + 플랫폼이고 운영 수준의 안정성을 가졌다. +

+

쿠버네티스 클러스터는 두 가지 형태의 자원으로 구성된다. +

    +
  • 마스터는 클러스터를 상호조정한다
  • +
  • 노드는 애플리케이션을 구동하는 작업자다
  • +
+

+
+ +
+
+

요약:

+
    +
  • 쿠버네티스 클러스터
  • +
  • Minikube
  • +
+
+
+

+ 쿠버네티스는 컴퓨터 클러스터에 걸쳐서 애플리케이션 컨테이너의 위치(스케줄링)와 실행을 + 오케스트레이션하는 운영 수준의 오픈소스 플랫폼이다. +

+
+
+
+
+ +
+
+

클러스터 다이어그램

+
+
+ +
+
+

+
+
+
+ +
+
+

마스터는 클러스터 관리를 담당한다. 마스터는 애플리케이션을 스케줄링하거나, 애플리케이션의 + 항상성을 유지하거나, 애플리케이션을 스케일링하고, 새로운 변경사항을 순서대로 반영하는 일과 같은 + 클러스터 내 모든 활동을 조율한다.

+

노드는 쿠버네티스 클러스터 내 워커 머신으로써 동작하는 VM 또는 물리적인 컴퓨터다. + 각 노드는 노드를 관리하고 쿠버네티스 마스터와 통신하는 Kubelet이라는 에이전트를 갖는다. 노드는 + 컨테이너 운영을 담당하는 Docker 또는 + rkt과 같은 툴도 갖는다. 운영 트래픽을 처리하는 쿠버네티스 + 클러스터는 최소 세 대의 노드를 가져야한다.

+ +
+
+
+

마스터는 클러스터를 관리하고 노드는 구동되는 애플리케이션을 수용하는데 사용된다.

+
+
+
+ +
+
+

애플리케이션을 쿠버네티스에 배포한다는 것은, 마스터에 애플리케이션 컨테이너를 구동하라고 지시하는 + 것이다. 마스터는 컨테이너를 클러스터의 어느 노드에 구동시킬지를 스케줄한다. 노드는 마스터가 + 제공하는 쿠버네티스 API를 통해서 마스터와 통신한다. 최종 사용자도 쿠버네티스 API를 직접 + 사용해서 클러스터와 상호작용할 수 있다.

+ +

쿠버네티스 클러스터는 물리 및 가상 머신 모두에 설치될 수 있다. 쿠버네티스 개발을 시작하려면 + Minikube를 사용할 수 있다. Minikube는 로컬 + 머신에 VM을 만들고 하나의 노드로 구성된 간단한 클러스터를 배포하는 가벼운 쿠버네티스 구현체다. + Minikube는 리눅스, 맥, 그리고 윈도우 시스템에서 구동이 가능하다. Minikube CLI는 클러스터에 대해 + 시작, 중지, 상태 조회 및 삭제 등의 기본적인 부트스트래핑 기능을 제공한다. 하지만, 본 튜토리얼에서는 + Minikube가 미리 설치된 채로 제공되는 온라인 터미널을 사용할 것이다.

+ +

이제 쿠버네티스가 무엇인지 알아봤으니, 온라인 튜토리얼로 이동해서 우리의 첫 번째 클러스터를 + 시작해보자!

+ +
+
+
+ + + +
+ +
+ + + diff --git a/content/ko/docs/tutorials/kubernetes-basics/deploy-app/_index.md b/content/ko/docs/tutorials/kubernetes-basics/deploy-app/_index.md new file mode 100644 index 0000000000000..e1bfe7f9752b6 --- /dev/null +++ b/content/ko/docs/tutorials/kubernetes-basics/deploy-app/_index.md @@ -0,0 +1,4 @@ +--- +title: 앱 배포하기 +weight: 20 +--- diff --git a/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html b/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html new file mode 100644 index 0000000000000..ce3b72a2d460e --- /dev/null +++ b/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html @@ -0,0 +1,45 @@ +--- +title: 대화형 튜토리얼 - 앱 배포하기 +weight: 20 +--- + + + + + + + + + + + +
+ +
+ +
+
+
+ 터미널로 상호 작용하기 위해서, 데스크탑/태블릿 버전을 사용해주세요 +
+ +
+
+ +
+ + +
+ +
+ + + diff --git a/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html new file mode 100644 index 0000000000000..7b524bea4dd00 --- /dev/null +++ b/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html @@ -0,0 +1,129 @@ +--- +title: kubectl을 사용해서 디플로이먼트 생성하기 +weight: 10 +--- + + + + + + + + + +
+ +
+ +
+ +
+

목표

+
    +
  • 애플리케이션 디플로이먼트(Deployment)에 대해 배운다.
  • +
  • kubectl로 첫 애플리케이션을 쿠버네티스에 배포한다.
  • +
+
+ +
+

쿠버네티스 디플로이먼트

+

+ 일단 쿠버네티스 클러스터를 구동시키면, 그 위에 컨테이너화된 애플리케이션을 배포할 수 있다. + 그러기 위해서, 쿠버네티스 디플로이먼트 설정을 만들어야 한다. 디플로이먼트는 쿠버네티스가 + 애플리케이션의 인스턴스를 어떻게 생성하고 업데이트해야 하는지를 지시한다. 디플로이먼트가 만들어지면, + 쿠버네티스 마스터가 해당 애플리케이션 인스턴스를 클러스터의 개별 노드에 스케줄한다. +

+ +

애플리케이션 인스턴스가 생성되면, 쿠버네티스 디플로이먼트 컨트롤러는 지속적으로 이들 인스턴스를 + 모니터링한다. 인스턴스를 구동 중인 노드가 다운되거나 삭제되면, 디플로이먼트 컨트롤러가 인스턴스를 + 교체시켜준다. 이렇게 머신의 장애나 정비에 대응할 수 있는 자동 복구(self-healing) 메커니즘을 + 제공한다.

+ +

오케스트레이션 기능이 없던 환경에서는, 설치 스크립트가 애플리케이션을 시작하는데 종종 사용되곤 + 했지만, 머신의 장애가 발생한 경우 복구를 해주지는 않았다. 쿠버네티스 디플로이먼트는 애플리케이션 + 인스턴스를 생성해주고 여러 노드에 걸쳐서 지속적으로 인스턴스가 구동되도록 하는 두 가지를 모두 + 하기 때문에 애플리케이션 관리를 위한 접근법에서 근본적인 차이를 가져다준다.

+ +
+ +
+
+

요약:

+
    +
  • 디플로이먼트
  • +
  • Kubectl
  • +
+
+
+

+ 디플로이먼트는 애플리케이션 인스턴스를 생성하고 업데이트하는 역할을 담당한다. +

+
+
+
+
+ +
+
+

쿠버네티스에 첫 번째 애플리케이션 배포하기

+
+
+ +
+
+

+
+
+
+ +
+
+ +

Kubectl이라는 쿠버네티스 CLI를 통해 디플로이먼트를 생성하고 관리할 수 있다. + Kubectl은 클러스터와 상호 작용하기 위해 쿠버네티스 API를 사용한다. 이 모듈에서는, 쿠버네티스 + 클러스터 상에 애플리케이션을 구동시키는 디플로이먼트를 생성하기 위해 필요한 가장 일반적인 Kubectl + 명령어를 배우게 된다.

+ +

디플로이먼트를 생성할 때, 애플리케이션에 대한 컨테이너 이미지와 구동시키고자 하는 복제 수를 지정해야 한다. 디플로이먼트를 업데이트해서 이런 정보를 나중에 변경할 수 있다. 모듈 5와 6의 부트캠프에서 디플로이먼트를 어떻게 스케일하고 업데이트하는지에 대해 다룬다.
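예를 들면 아래와 같다. 디플로이먼트 이름과 컨테이너 이미지는 설명을 위한 예시이며, 여기서는 이 튜토리얼이 사용하는 kubectl 버전에서처럼 kubectl run이 디플로이먼트를 생성한다고 가정한다.
  # 예시 이미지로 디플로이먼트 생성
  kubectl run kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080
  # 생성된 디플로이먼트 확인
  kubectl get deployments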

+ + +
+
+
+

애플리케이션이 쿠버네티스 상에 배포되려면 지원되는 컨테이너 형식 중 하나로 패키지 되어야한다. +

+
+
+
+ +
+
+ +

우리의 첫 번째 디플로이먼트로, Docker 컨테이너로 패키지된 Node.js + 애플리케이션을 사용해보자. 소스 코드와 Dockerfile은 GitHub + 저장소에서 찾을 수 있다.
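로컬에서 직접 시도해 보고 싶다면, 저장소의 Dockerfile을 사용해서 아래와 같이 이미지를 만들어 볼 수 있다 (이미지 태그는 예시이다).
  # Dockerfile이 있는 디렉토리에서 이미지 빌드
  docker build -t hello-node:v1 .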

+ +

이제 디플로이먼트를 이해했으니, 온라인 튜토리얼을 통해 우리의 첫 번째 애플리케이션을 배포해보자!

+ +
+
+
+ + + +
+ +
+ + + diff --git a/content/ko/docs/tutorials/kubernetes-basics/explore/_index.md b/content/ko/docs/tutorials/kubernetes-basics/explore/_index.md new file mode 100644 index 0000000000000..fb189f91d3833 --- /dev/null +++ b/content/ko/docs/tutorials/kubernetes-basics/explore/_index.md @@ -0,0 +1,4 @@ +--- +title: 앱 조사하기 +weight: 30 +--- diff --git a/content/ko/docs/tutorials/kubernetes-basics/explore/explore-interactive.html b/content/ko/docs/tutorials/kubernetes-basics/explore/explore-interactive.html new file mode 100644 index 0000000000000..9b90944a825d2 --- /dev/null +++ b/content/ko/docs/tutorials/kubernetes-basics/explore/explore-interactive.html @@ -0,0 +1,45 @@ +--- +title: 대화형 튜토리얼 - 앱 조사하기 +weight: 20 +--- + + + + + + + + + + + +
+ +
+ +
+
+ +
+ 터미널과 상호작용하기 위해, 데스크탑/태블릿 버전을 이용한다. +
+ +
+
+
+ + +
+ +
+ + + diff --git a/content/ko/docs/tutorials/kubernetes-basics/explore/explore-intro.html b/content/ko/docs/tutorials/kubernetes-basics/explore/explore-intro.html new file mode 100644 index 0000000000000..0865caad9f18b --- /dev/null +++ b/content/ko/docs/tutorials/kubernetes-basics/explore/explore-intro.html @@ -0,0 +1,143 @@ +--- +title: 파드와 노드 보기 +weight: 10 +--- + + + + + + + + + + +
+ +
+ +
+ +
+

목표

+
    +
  • 쿠버네티스 파드에 대해 배운다.
  • +
  • 쿠버네티스 노드에 대해 배운다.
  • +
  • 배포된 애플리케이션의 문제를 해결한다.
  • +
+
+ +
+

쿠버네티스 파드

+

모듈 2에서 디플로이먼트를 생성했을 때, 쿠버네티스는 여러분의 애플리케이션 인스턴스를 수용하기 위해 파드를 생성했다. 파드는 하나 또는 그 이상의 애플리케이션 컨테이너 (도커 또는 rkt와 같은)들의 그룹과 해당 컨테이너들이 공유하는 자원을 나타내는 쿠버네티스의 추상적 개념이다. 공유 자원은 다음을 포함한다:

+
    +
  • 볼륨과 같은, 공유 스토리지
  • +
  • 클러스터 IP 주소와 같은, 네트워킹
  • +
  • 컨테이너 이미지 버전 또는 사용할 특정 포트와 같이, 각 컨테이너가 동작하는 방식에 대한 정보
  • +
+

파드는 특유한 "로컬호스트" 애플리케이션 모형을 만들어, 상대적으로 밀접하게 결합된 상이한 애플리케이션 컨테이너들을 수용할 수 있다. 가령, 파드는 Node.js 앱과 더불어 Node.js 웹서버에 의해 발행되는 데이터를 공급하는 다른 컨테이너를 함께 수용할 수 있다. 파드 내 컨테이너는 IP 주소와 포트 스페이스를 공유하고, 항상 함께 위치하고 함께 스케줄링되며, 동일 노드 상에서 컨텍스트를 공유하면서 동작한다.

+ +

파드는 쿠버네티스 플랫폼 상에서 최소 단위가 된다. 우리가 쿠버네티스에서 디플로이먼트를 생성할 때, 그 디플로이먼트는 컨테이너를 내부에 포함한 파드를 생성한다. 각 파드는 스케줄된 노드에 묶이게 되며, (재시작 정책에 따라) 소멸되거나 삭제되기 전까지 그 노드에 유지된다. 노드에 장애가 발생할 경우, 동일한 파드가 클러스터 내 가용한 다른 노드로 스케줄된다.
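예를 들어, 아래 명령으로 각 파드가 어느 노드에 스케줄되었는지 확인해 볼 수 있다 (간단한 예시이다).
  # 파드와 해당 파드가 동작 중인 노드를 함께 출력
  kubectl get pods -o wide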

+ +
+
+
+

요약:

+
    +
  • 파드
  • +
  • 노드
  • +
  • Kubectl 주요 명령어
  • +
+
+
+

+ 파드는 하나 또는 그 이상의 애플리케이션 컨테이너 (도커 또는 rkt와 같은)들의 그룹이고 공유 스토리지 (볼륨), IP 주소 그리고 그것을 동작시키는 방식에 대한 정보를 포함하고 있다. +

+
+
+
+
+ +
+
+

파드 개요

+
+
+ +
+
+

+
+
+
+ +
+
+

노드

+

파드는 언제나 노드 상에서 동작한다. 노드는 쿠버네티스에서 워커 머신을 말하며 클러스터에 따라 가상 또는 물리 머신일 수 있다. 각 노드는 마스터에 의해 관리된다. 하나의 노드는 여러 개의 파드를 가질 수 있고, 쿠버네티스 마스터는 클러스터 내 노드를 통해서 파드에 대한 스케줄링을 자동으로 처리한다.

+ +

모든 쿠버네티스 노드는 최소한 다음과 같이 동작한다.

+
    +
  • Kubelet은, 쿠버네티스 마스터와 노드 간 통신을 책임지는 프로세스이며, 하나의 머신 상에서 동작하는 파드와 컨테이너를 관리한다.
  • +
  • (도커, rkt)와 같은 컨테이너 런타임은 레지스트리에서 컨테이너 이미지를 가져와 묶여 있는 것을 풀고 애플리케이션을 동작시키는 책임을 맡는다.
  • +
+ +
+
+
+

만약 컨테이너들이 밀접하게 결합되어 있고 디스크와 같은 자원을 공유해야 한다면, 하나의 단일 파드에 함께 스케줄되어야 한다.

+
+
+
+ +
+ +
+
+

노드 개요

+
+
+ +
+
+

+
+
+
+ +
+
+

kubectl로 문제해결하기

+

모듈 2에서, 여러분은 Kubectl 커맨드-라인 인터페이스를 사용하였다. 여러분은 배포된 애플리케이션과 그 환경에 대한 정보를 얻기 위해 모듈 3에서도 계속 그것을 사용하게 될 것이다. 가장 보편적인 운용업무는 다음 kubectl 명령어를 이용해 처리될 수 있다:

+
    +
  • kubectl get - 자원을 나열한다
  • +
  • kubectl describe - 자원에 대해 상세한 정보를 보여준다.
  • +
  • kubectl logs - 파드 내 컨테이너의 로그들을 출력한다
  • +
  • kubectl exec - 파드 내 컨테이너에 대한 명령을 실행한다.
  • +
+ +

언제 애플리케이션이 배포되었으며, 현재 상태가 어떠한지, 그것의 구성은 어떠한지 등을 보기 위해 이러한 명령을 이용할 수 있다.
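간단한 사용 예시는 아래와 같다. $POD_NAME은 kubectl get pods로 확인한 실제 파드 이름으로 대체해야 한다 (예시이다).
  # 자원 나열 및 상세 정보 확인
  kubectl get pods
  kubectl describe pods
  # 파드 내 컨테이너의 로그 확인
  kubectl logs $POD_NAME
  # 파드 내 컨테이너에서 명령 실행
  kubectl exec $POD_NAME -- env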

+ +

이제 클러스터 컴포넌트와 커맨드 라인에 대해 알아 보았으니, 애플리케이션을 조사해 보자.

+ +
+
+
+

노드는 쿠버네티스에 있어서 워커 머신이며 클러스터에 따라 VM 또는 물리 머신이 될 수 있다. 여러개의 파드는 하나의 노드 위에서 동작할 수 있다.

+
+
+
+
+ + + +
+ +
+ + + diff --git a/content/ko/docs/tutorials/kubernetes-basics/expose/_index.md b/content/ko/docs/tutorials/kubernetes-basics/expose/_index.md new file mode 100644 index 0000000000000..cbef903578532 --- /dev/null +++ b/content/ko/docs/tutorials/kubernetes-basics/expose/_index.md @@ -0,0 +1,4 @@ +--- +title: 앱 외부로 노출하기 +weight: 40 +--- diff --git a/content/ko/docs/tutorials/kubernetes-basics/expose/expose-interactive.html b/content/ko/docs/tutorials/kubernetes-basics/expose/expose-interactive.html new file mode 100644 index 0000000000000..6d7a569862942 --- /dev/null +++ b/content/ko/docs/tutorials/kubernetes-basics/expose/expose-interactive.html @@ -0,0 +1,38 @@ +--- +title: 대화형 튜토리얼 - 앱 노출하기 +weight: 20 +--- + + + + + + + + + + + +
+ +
+ +
+
+ 터미널과 상호작용하기 위해, 데스크탑/태블릿 버전을 이용한다. +
+
+
+
+ + +
+ +
+ + + diff --git a/content/ko/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/ko/docs/tutorials/kubernetes-basics/expose/expose-intro.html new file mode 100644 index 0000000000000..6ffa2c6126f33 --- /dev/null +++ b/content/ko/docs/tutorials/kubernetes-basics/expose/expose-intro.html @@ -0,0 +1,114 @@ +--- +title: 앱 노출을 위해 서비스 이용하기 +weight: 10 +--- + + + + + + + + + +
+ +
+ +
+
+

목표

+
    +
  • 쿠버네티스의 서비스에 대해 배운다.
  • +
  • 레이블과 레이블 셀렉터 오브젝트가 어떻게 서비스와 연관되는지 이해한다.
  • +
  • 서비스를 이용하여 쿠버네티스 클러스터 외부로 애플리케이션을 노출한다.
  • +
+
+ +
+

쿠버네티스 서비스들에 대한 개요

+ +

쿠버네티스 파드들은 언젠가는 죽게 된다. 실제 파드들은 생명주기를 갖는다. 워커 노드가 죽으면, 노드 상에서 동작하는 파드들 또한 종료된다. 레플리케이션 컨트롤러는 여러분의 애플리케이션이 지속적으로 동작할 수 있도록 새로운 파드들의 생성을 통해 동적으로 클러스터를 미리 지정해 둔 상태로 되돌려 줄 수도 있다. 또 다른 예시로서, 3개의 복제본을 갖는 이미지 처리용 백엔드를 고려해 보자. 그 복제본들은 대체 가능한 상태이다. 그래서 프론트엔드 시스템은 하나의 파드가 소멸되어 재생성이 되더라도, 백엔드 복제본들에 의한 영향을 받아서는 안 된다. 즉, 동일 노드 상의 파드들이라 할지라도, 쿠버네티스 클러스터 내 각 파드는 유일한 IP 주소를 가지며, 여러분의 애플리케이션들이 지속적으로 기능할 수 있도록 파드들 속에서 발생하는 변화에 대해 자동으로 조정해 줄 방법이 있어야 한다.

+ +

쿠버네티스에서 서비스는 하나의 논리적인 파드 셋과 그 파드들에 접근할 수 있는 정책을 정의하는 추상적 개념이다. 서비스는 종속적인 파드들 사이를 느슨하게 결합되도록 해준다. 서비스는 모든 쿠버네티스 오브젝트들과 같이 YAML (보다 선호하는) 또는 JSON을 이용하여 정의된다. 서비스가 대상으로 하는 파드 셋은 보통 LabelSelector에 의해 결정된다 (여러분이 왜 스펙에 selector가 포함되지 않은 서비스를 필요로 하게 될 수도 있는지에 대해 아래에서 확인해 보자).

+ +

비록 각 파드들이 고유의 IP를 갖고 있기는 하지만, 그 IP들은 서비스의 도움없이 클러스터 외부로 노출되어질 수 없다. 서비스들은 여러분의 애플리케이션들에게 트래픽이 실릴 수 있도록 허용해준다. 서비스들은 ServiceSpec에서 type을 지정함으로써 다양한 방식들로 노출시킬 수 있다:

+
    +
  • ClusterIP (기본값) - 클러스터 내에서 내부 IP 에 대해 서비스를 노출해준다. 이 방식은 오직 클러스터 내에서만 서비스가 접근될 수 있도록 해준다.
  • +
  • NodePort - NAT가 이용되는 클러스터 내에서 각각 선택된 노드들의 동일한 포트에 서비스를 노출시켜준다. <NodeIP>:<NodePort>를 이용하여 클러스터 외부로부터 서비스에 접근할 수 있도록 해준다. ClusterIP의 상위 집합이다.
  • +
  • LoadBalancer - (지원 가능한 경우) 기존 클라우드에서 외부용 로드밸런서를 생성하고 서비스에 고정된 공인 IP를 할당해준다. NodePort의 상위 집합이다.
  • +
  • ExternalName - 이름으로 CNAME 레코드를 반환함으로써 임의의 이름(스펙에서 externalName으로 명시)을 이용하여 서비스를 노출시켜준다. 프록시는 사용되지 않는다. 이 방식은 kube-dns 버전 1.7 이상에서 지원 가능하다.
  • +
+

다른 서비스 타입들에 대한 추가 정보는 소스 IP 이용하기 튜토리얼에서 확인 가능하다. 또한 서비스들로 애플리케이션에 접속하기도 참고해 보자.

+

부가적으로, spec에 selector를 정의하지 않는 서비스 유즈케이스도 몇 가지 있음을 주의하자. selector 없이 생성된 서비스는 상응하는 엔드포인트 오브젝트 또한 생성하지 않는다. 이로써 사용자들은 하나의 서비스를 특정한 엔드포인트에 매핑시킬 수 있다. selector를 생략하는 또 다른 경우는 여러분이 type: ExternalName을 의도적으로 이용하려는 경우이다.
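예를 들어, 기존 디플로이먼트를 NodePort 타입의 서비스로 노출하려면 아래와 같이 할 수 있다 (디플로이먼트 이름과 포트는 예시이다).
  # 디플로이먼트를 NodePort 서비스로 노출
  kubectl expose deployment/kubernetes-bootcamp --type=NodePort --port=8080
  # 생성된 서비스 확인
  kubectl get services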

+
+
+
+

요약

+
    +
  • 파드들을 외부 트래픽에 노출하기
  • +
  • 여러 파드에 걸쳐서 트래픽 로드밸런싱 하기
  • +
  • 레이블 사용하기
  • +
+
+
+

쿠버네티스 서비스는 논리적 파드 셋을 정의하고 외부 트래픽 노출, 로드밸런싱 그리고 그 파드들에 대한 서비스 디스커버리를 가능하게 해주는 추상 계층이다.

+
+
+
+
+ +
+
+

서비스와 레이블

+
+
+ +
+
+

+
+
+ +
+
+

서비스는 파드 셋에 걸쳐서 트래픽을 라우트한다. 여러분의 애플리케이션에 영향을 주지 않으면서 쿠버네티스에서 파드들이 죽게도 하고, 복제가 되게도 해주는 추상적 개념이다. 종속적인 파드들 사이에서의 디스커버리와 라우팅은 (하나의 애플리케이션에서 프론트엔드와 백엔드 컴포넌트와 같은) 쿠버네티스 서비스들에 의해 처리된다.

+

서비스는 쿠버네티스의 객체들에 대해 논리 연산을 허용해주는 기본 그룹핑 단위인, 레이블과 셀렉터를 이용하여 파드 셋과 매치시킨다. 레이블은 오브젝트들에 붙여진 키/밸류 쌍으로 다양한 방식으로 이용 가능하다:

+
    +
  • 개발, 테스트, 그리고 상용환경에 대한 객체들의 지정
  • +
  • 임베디드된 버전 태그들
  • +
  • 태그들을 이용하는 객체들에 대한 분류
  • +
+ +
+
+
+

여러분은 kubectl 명령에
--expose 옵션을 사용함으로써 디플로이먼트 생성과 동일 시점에 서비스를 생성할 수 있다.
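예를 들면 아래와 같다 (디플로이먼트 이름과 이미지는 예시이다).
  # 디플로이먼트 생성과 동시에 서비스 생성
  kubectl run kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080 --expose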

+
+
+
+ +
+ +
+
+

+
+
+
+
+
+

레이블은 오브젝트의 생성 시점 또는 이후 시점에 붙여질 수 있다. 언제든지 수정이 가능하다. 이제 서비스를 이용하여 우리의 애플리케이션을 노출도 시켜보고 레이블도 적용해 보자.
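레이블은 예를 들어 아래와 같이 조회하거나 적용해 볼 수 있다. 레이블 키/값과 $POD_NAME은 예시이다.
  # 특정 레이블을 가진 파드와 서비스 조회
  kubectl get pods -l app=kubernetes-bootcamp
  kubectl get services -l app=kubernetes-bootcamp
  # 파드에 새로운 레이블 적용
  kubectl label pod $POD_NAME version=v1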

+
+
+
+ +
+
+ + + diff --git a/content/ko/docs/tutorials/kubernetes-basics/scale/_index.md b/content/ko/docs/tutorials/kubernetes-basics/scale/_index.md new file mode 100644 index 0000000000000..d23618457f365 --- /dev/null +++ b/content/ko/docs/tutorials/kubernetes-basics/scale/_index.md @@ -0,0 +1,4 @@ +--- +title: 앱 스케일링하기 +weight: 50 +--- diff --git a/content/ko/docs/tutorials/kubernetes-basics/scale/scale-interactive.html b/content/ko/docs/tutorials/kubernetes-basics/scale/scale-interactive.html new file mode 100644 index 0000000000000..41dc8ce052830 --- /dev/null +++ b/content/ko/docs/tutorials/kubernetes-basics/scale/scale-interactive.html @@ -0,0 +1,44 @@ +--- +title: 대화형 튜토리얼 - 앱 스케일링하기 +weight: 20 +--- + + + + + + + + + + + +
+ +
+ +
+
+ 터미널로 상호 작용하기 위해서, 데스크탑/태블릿 버전을 사용해주세요 +
+
+
+
+ + +
+ + + +
+ + + diff --git a/content/ko/docs/tutorials/kubernetes-basics/scale/scale-intro.html b/content/ko/docs/tutorials/kubernetes-basics/scale/scale-intro.html new file mode 100644 index 0000000000000..a3f7da1d4d336 --- /dev/null +++ b/content/ko/docs/tutorials/kubernetes-basics/scale/scale-intro.html @@ -0,0 +1,136 @@ +--- +title: 복수의 앱 인스턴스를 구동하기 +weight: 10 +--- + + + + + + + + + +
+ +
+ +
+ +
+

목표

+
    +
  • kubectl을 사용해서 애플리케이션을 스케일한다.
  • +
+
+ +
+

애플리케이션을 스케일하기

+ +

지난 모듈에서 디플로이먼트를 만들고, 서비스를 통해서 디플로이먼트를 외부에 노출시켜 봤다. 해당 디플로이먼트는 애플리케이션을 구동하기 위해 단 하나의 파드(Pod)만을 생성했었다. 트래픽이 증가하면, 사용자 요청에 맞추어 애플리케이션의 규모를 조정할 필요가 있다.

+ +

디플로이먼트의 복제 수를 변경하면 스케일링이 수행된다

+ +
+
+
+

요약:

+
    +
  • 디플로이먼트 스케일링하기
  • +
+
+
+

kubectl run 명령에 --replicas 파라미터를 사용해서 처음부터 복수의 인스턴스로 구동되는 + 디플로이먼트를 만들 수도 있다
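예를 들면 아래와 같다 (디플로이먼트 이름과 이미지는 예시이다).
  # 처음부터 4개의 인스턴스로 디플로이먼트 생성
  kubectl run kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080 --replicas=4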

+
+
+
+
+ +
+
+

스케일링 개요

+
+
+ +
+
+
+ +
+
+ +
+ +
+
+ +

디플로이먼트를 스케일 아웃하면 신규 파드가 생성되어서 가용한 자원이 있는 노드에 스케줄된다. 스케일링 기능은 파드의 수를 새로 바라는 상태(desired state)까지 늘린다. 쿠버네티스는 파드의 오토스케일링도 지원하지만 본 튜토리얼에서는 다루지 않는다. 0까지 스케일링하는 것도 가능하다. 이 경우 해당 디플로이먼트의 모든 파드가 종료된다.
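예를 들어, 기존 디플로이먼트는 아래와 같이 스케일할 수 있다 (디플로이먼트 이름은 예시이다).
  # 레플리카 수를 4로 늘리기
  kubectl scale deployments/kubernetes-bootcamp --replicas=4
  # 변경된 복제 수와 파드 확인
  kubectl get deployments
  kubectl get pods -o wide
  # 0으로 스케일하면 해당 디플로이먼트의 모든 파드가 종료된다
  kubectl scale deployments/kubernetes-bootcamp --replicas=0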

+ +

애플리케이션의 인스턴스를 복수로 구동하게 되면 트래픽을 해당 인스턴스 모두에 분산시킬 방법이 + 필요해진다. 서비스는 노출된 디플로이먼트의 모든 파드에 네트워크 트래픽을 분산시켜줄 통합된 + 로드밸런서를 갖는다. 서비스는 엔드포인트를 이용해서 구동중인 파드를 지속적으로 모니터링함으로써 + 가용한 파드에만 트래픽이 전달되도록 해준다.

+ +
+
+
+

디플로이먼트의 복제 수를 변경하면 스케일링이 수행된다.

+
+
+
+ +
+ +
+
+

일단 복수의 애플리케이션의 인스턴스가 구동 중이면, 다운타임 없이 롤링 업데이트를 할 수 있다. + 다음 모듈에서 이 내용을 다루도록 하겠다. 이제 온라인 터미널로 가서 애플리케이션을 스케일해보자.

+
+
+
+ + + +
+ +
+ + + diff --git a/content/ko/docs/tutorials/kubernetes-basics/update/_index.md b/content/ko/docs/tutorials/kubernetes-basics/update/_index.md new file mode 100644 index 0000000000000..60fdea6b89e27 --- /dev/null +++ b/content/ko/docs/tutorials/kubernetes-basics/update/_index.md @@ -0,0 +1,4 @@ +--- +title: 앱 업데이트하기 +weight: 60 +--- diff --git a/content/ko/docs/tutorials/kubernetes-basics/update/update-interactive.html b/content/ko/docs/tutorials/kubernetes-basics/update/update-interactive.html new file mode 100644 index 0000000000000..83ba8f581f8e1 --- /dev/null +++ b/content/ko/docs/tutorials/kubernetes-basics/update/update-interactive.html @@ -0,0 +1,37 @@ +--- +title: 대화형 튜토리얼 - 앱 업데이트 하기 +weight: 20 +--- + + + + + + + + + + + +
+ +
+ +
+
+ 터미널과 상호작용하기 위해, 데스크탑/태블릿 버전을 이용한다. +
+
+
+
+ +
+ +
+ + + diff --git a/content/ko/docs/tutorials/kubernetes-basics/update/update-intro.html b/content/ko/docs/tutorials/kubernetes-basics/update/update-intro.html new file mode 100644 index 0000000000000..ea011672a044f --- /dev/null +++ b/content/ko/docs/tutorials/kubernetes-basics/update/update-intro.html @@ -0,0 +1,141 @@ +--- +title: 롤링 업데이트 수행하기 +weight: 10 +--- + + + + + + + + + +
+ +
+ +
+ +
+

목표

+
    +
  • kubectl을 이용하여 롤링 업데이트 수행하기
  • +
+
+ +
+

애플리케이션 업데이트하기

+ +

사용자들은 애플리케이션이 항상 가용한 상태일 것이라 여기고, 개발자들은 하루에도 여러 번씩 새로운 버전을 배포하도록 요구받고 있다. 쿠버네티스에서는 이것을 롤링 업데이트를 통해 이루고 있다. 롤링 업데이트는 파드 인스턴스를 점진적으로 새로운 것으로 업데이트하여 디플로이먼트 업데이트가 서비스 중단 없이 이루어질 수 있도록 해준다. 새로운 파드는 가용한 자원을 보유한 노드로 스케줄될 것이다.

+ +

이전 모듈에서 여러 개의 인스턴스를 동작시키도록 애플리케이션을 스케일했다. 이는 애플리케이션의 가용성에 영향을 주지 않으면서 업데이트를 수행하기 위한 필요조건이다. 기본적으로, 업데이트가 이루어지는 동안 이용 불가한 파드의 최대 개수와 생성 가능한 새로운 파드의 최대 개수는 각각 하나다. 두 옵션은 (파드에 대한) 개수 또는 백분율로 구성될 수 있다. 쿠버네티스에서 업데이트는 버전으로 관리되고, 어떠한 디플로이먼트 업데이트라도 이전의 (안정적인) 버전으로 원복이 가능하다.

+ +
+
+
+

요약:

+
    +
  • 앱 업데이트하기
  • +
+
+
+

롤링 업데이트는 파드 인스턴스를 점진적으로 새로운 것으로 업데이트하여 디플로이먼트 업데이트가 서비스 중단 없이 이루어질 수 있도록 해준다.

+
+
+
+
+ +
+
+

롤링 업데이트 개요

+
+
+
+
+
+ +
+
+
+ +
+
+ +

애플리케이션 스케일링과 유사하게, 디플로이먼트가 외부로 노출되면, 서비스는 업데이트가 이루어지는 동안 오직 가용한 파드에게만 트래픽을 로드밸런스 할 것이다. 가용한 파드란 애플리케이션의 사용자들에게 가용한 상태의 인스턴스를 말한다.

+ +

롤링 업데이트는 다음 동작들을 허용해준다:

+
    +
  • 하나의 환경에서 또 다른 환경으로의 애플리케이션 프로모션 (컨테이너 이미지 업데이트를 통해)
  • +
  • 이전 버전으로의 롤백
  • +
  • 서비스 중단 없는 애플리케이션의 지속적인 통합과 지속적인 전달
  • + +
+ +
+
+
+

디플로이먼트가 외부로 노출되면, 서비스는 업데이트가 이루어지는 동안 오직 가용한 파드에게만 트래픽을 로드밸런스 할 것이다.

+
+
+
+ +
+ +
+
+

다음 대화형 튜토리얼에서, 새로운 버전으로 애플리케이션을 업데이트하고, 롤백 또한 수행해 볼 것이다.
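참고로, 롤링 업데이트와 롤백은 대략 아래와 같은 명령으로 수행할 수 있다. 디플로이먼트 이름, 컨테이너 이름, 이미지 태그는 모두 예시이다.
  # 컨테이너 이미지를 새 버전으로 교체해서 롤링 업데이트 시작
  kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2
  # 롤아웃 진행 상태 확인
  kubectl rollout status deployments/kubernetes-bootcamp
  # 문제가 있을 경우 이전의 (안정적인) 버전으로 롤백
  kubectl rollout undo deployments/kubernetes-bootcamp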

+
+
+
+ + + +
+ +
+ + + diff --git a/content/ko/examples/minikube/Dockerfile b/content/ko/examples/minikube/Dockerfile new file mode 100644 index 0000000000000..34b1f40f528ca --- /dev/null +++ b/content/ko/examples/minikube/Dockerfile @@ -0,0 +1,4 @@ +FROM node:6.9.2 +EXPOSE 8080 +COPY server.js . +CMD node server.js diff --git a/content/ko/examples/minikube/server.js b/content/ko/examples/minikube/server.js new file mode 100644 index 0000000000000..76345a17d81db --- /dev/null +++ b/content/ko/examples/minikube/server.js @@ -0,0 +1,9 @@ +var http = require('http'); + +var handleRequest = function(request, response) { + console.log('Received request for URL: ' + request.url); + response.writeHead(200); + response.end('Hello World!'); +}; +var www = http.createServer(handleRequest); +www.listen(8080);