diff --git a/_data/setup.yml b/_data/setup.yml index 009dcde78a298..5803ac17ca092 100644 --- a/_data/setup.yml +++ b/_data/setup.yml @@ -11,6 +11,11 @@ toc: - docs/imported/release/notes.md - docs/setup/building-from-source.md +- title: Version 1.10 Troubleshooting + landing_page: /docs/reference/pvc-finalizer-downgrade-issue/ + section: + - docs/reference/pvc-finalizer-downgrade-issue.md + + - title: Independent Solutions landing_page: /docs/getting-started-guides/minikube/ section: diff --git a/case-studies/amadeus.html b/case-studies/amadeus.html new file mode 100644 index 0000000000000..0e3b0726d3a2c --- /dev/null +++ b/case-studies/amadeus.html @@ -0,0 +1,105 @@ +--- +title: Amadeus Case Study +layout: basic +case_study_styles: true +cid: caseStudies +css: /css/style_amadeus.css +--- +
+

CASE STUDY:
Another Technical Evolution for a 30-Year-Old Company +

+
+
+ Company  Amadeus IT Group     Location  Madrid, Spain     Industry  Travel Technology +
+
+
+
+
+

Challenge

+ In the past few years, Amadeus, which provides IT solutions to the travel industry around the world, found itself in need of a new platform for the 5,000 services supported by its service-oriented architecture. The 30-year-old company operates its own data center in Germany, and there were growing demands internally and externally for solutions that needed to be geographically dispersed. And more generally, "we had objectives of being even more highly available," says Eric Mountain, Senior Expert, Distributed Systems at Amadeus. Among the company’s goals: to increase automation in managing its infrastructure, optimize the distribution of workloads, use data center resources more efficiently, and adopt new technologies more easily. +
+
+

Solution

+ Mountain has been overseeing the company’s migration to Kubernetes, using OpenShift Container Platform, Red Hat’s enterprise container platform. +

+

Impact

+ One of the first projects the team deployed in Kubernetes was the Amadeus Airline Cloud Availability solution, which helps manage ever-increasing flight-search volume. "It’s now handling in production several thousand transactions per second, and it’s deployed in multiple data centers throughout the world," says Mountain. "It’s not a migration of an existing workload; it’s a whole new workload that we couldn’t have done otherwise. [This platform] gives us access to market opportunities that we didn’t have before." +
+
+
+ +
+
+ "We want multi-data center capabilities, and we want them for our mainstream system as well. We didn’t think that we could achieve them with our existing system. We need new automation, things that Kubernetes and OpenShift bring."

- Eric Mountain, Senior Expert, Distributed Systems at Amadeus IT Group
+
+
+
+ +
+

In his two decades at Amadeus, Eric Mountain has been the migrations guy.

+ Back in the day, he worked on the company’s move from Unix to Linux, and now he’s overseeing the journey to cloud native. "Technology just keeps changing, and we embrace it," he says. "We are celebrating our 30 years this year, and we continue evolving and innovating to stay cost-efficient and enhance everyone’s travel experience, without interrupting workflows for the customers who depend on our technology."

+ That was the challenge that Amadeus—which provides IT solutions to the travel industry around the world, from flight searches to hotel bookings to customer feedback—faced in 2014. The technology team realized it was in need of a new platform for the 5,000 services supported by its service-oriented architecture.

+ The tipping point occurred when they began receiving many requests, internally and externally, for solutions that needed to be geographically outside the company’s main data center in Germany. "Some requests were for running our applications on customer premises," Mountain says. "There were also new services we were looking to offer that required response time to the order of a few hundred milliseconds, which we couldn’t achieve with transatlantic traffic. Or at least, not without eating into a considerable portion of the time available to our applications for them to process individual queries."

+ More generally, the company was interested in leveling up on high availability, increasing automation in managing infrastructure, optimizing the distribution of workloads and using data center resources more efficiently. "We have thousands and thousands of servers," says Mountain. "These servers are assigned roles, so even if the setup is highly automated, the machine still has a given role. It’s wasteful on many levels. For instance, an application doesn’t necessarily use the machine very optimally. Virtualization can help a bit, but it’s not a silver bullet. If that machine breaks, you still want to repair it because it has that role and you can’t simply say, ‘Well, I’ll bring in another machine and give it that role.’ It’s not fast. It’s not efficient. So we wanted the next level of automation."

+
+
+ +
+
+ "We hope that if we build on what others have built, what we do might actually be upstream-able. As Kubernetes and OpenShift progress, we see that we are indeed able to remove some of the additional layers we implemented to compensate for gaps we perceived earlier." +
+
+ +
+
+ While mainly a C++ and Java shop, Amadeus also wanted to be able to adopt new technologies more easily. Some of its developers had started using languages like Python and databases like Couchbase, but Mountain wanted still more options, he says, "in order to better adapt our technical solutions to the products we offer, and open up entirely new possibilities to our developers." Working with recent technologies and cool new things would also make it easier to attract new talent. +

+ All of those needs led Mountain and his team on a search for a new platform. "We did a set of studies and proofs of concept over a fairly short period, and we considered many technologies," he says. "In the end, we were left with three choices: build everything on premise, build on top of Kubernetes whatever happens to be missing from our point of view, or go with OpenShift and build whatever remains there." +

+ The team decided against building everything themselves—though they’d done that sort of thing in the past—because "people were already inventing things that looked good," says Mountain. +

+ Ultimately, they went with OpenShift Container Platform, Red Hat’s Kubernetes-based enterprise offering, instead of building on top of Kubernetes because "there was a lot of synergy between what we wanted and the way Red Hat was anticipating going with OpenShift," says Mountain. "They were clearly developing Kubernetes, and developing certain things ahead of time in OpenShift, which were important to us, such as more security." +

+ The hope was that those particular features would eventually be built into Kubernetes, and, in the case of security, Mountain feels that has happened. "We realize that there’s always a certain amount of automation that we will probably have to develop ourselves to compensate for certain gaps," says Mountain. "The less we do that, the better for us. We hope that if we build on what others have built, what we do might actually be upstream-able. As Kubernetes and OpenShift progress, we see that we are indeed able to remove some of the additional layers we implemented to compensate for gaps we perceived earlier." +
+
+ +
+
+ "It’s not a migration of an existing workload; it’s a whole new workload that we couldn’t have done otherwise. [This platform] gives us access to market opportunities that we didn’t have before." +
+
+ +
+
+ The first project the team tackled was one that they knew had to run outside the data center in Germany. Because of the project’s needs, "We couldn’t rely only on the built-in Kubernetes service discovery; we had to layer on top of that an extra service discovery level that allows us to load balance at the operation level within our system," says Mountain. They also built a stream dedicated to monitoring, which at the time wasn’t offered in the Kubernetes or OpenShift ecosystem. Now that Prometheus and other products are available, Mountain says the company will likely re-evaluate their monitoring system: "We obviously always like to leverage what Kubernetes and OpenShift can offer." +

+ The second project ended up going into production first: the Amadeus Airline Cloud Availability solution, which helps manage ever-increasing flight-search volume and was deployed in public cloud. Launched in early 2016, it is "now handling in production several thousand transactions per second, and it’s deployed in multiple data centers throughout the world," says Mountain. "It’s not a migration of an existing workload; it’s a whole new workload that we couldn’t have done otherwise. [This platform] gives us access to market opportunities that we didn’t have before." +

+ Having been through this kind of technical evolution more than once, Mountain has advice on how to handle the cultural changes. "That’s one aspect that we can tackle progressively," he says. "We have to go on supplying our customers with new features on our pre-existing products, and we have to keep existing products working. So we can’t simply do absolutely everything from one day to the next. And we mustn’t sell it that way." +

+ The first order of business, then, is to pick one or two applications to demonstrate that the technology works. Rather than choosing a high-impact, high-risk project, Mountain’s team selected a smaller application that was representative of all the company’s other applications in its complexity: "We just made sure we picked something that’s complex enough, and we showed that it can be done." +
+
+ +
+
+ "The bottom line is we want these multi-data center capabilities, and we want them as well for our mainstream system," he says. "And we don’t think that we can implement them with our previous system. We need the new automation, homogeneity, and scale that Kubernetes and OpenShift bring." +
+
+ +
+
+ Next comes convincing people. "On the operations side and on the R&D side, there will be people who say quite rightly, ‘There is a system, and it works, so why change?’" Mountain says. "The only thing that really convinces people is showing them the value." For Amadeus, people realized that the Airline Cloud Availability product could not have been made available on the public cloud with the company’s existing system. The question then became, he says, "Do we go into a full-blown migration? Is that something that is justified?" +

+ "The bottom line is we want these multi-data center capabilities, and we want them as well for our mainstream system," he says. "And we don’t think that we can implement them with our previous system. We need the new automation, homogeneity, and scale that Kubernetes and OpenShift bring." +

+ So how do you get everyone on board? "Make sure you have good links between your R&D and your operations," he says. "Also make sure you’re going to talk early on to the investors and stakeholders. Figure out what it is that they will be expecting from you, that will convince them or not, that this is the right way for your company." +

+ His other advice is simply to make the technology available for people to try it. "Kubernetes and OpenShift Origin are open source software, so there’s no complicated license key for the evaluation period and you’re not limited to 30 days," he points out. "Just go and get it running." Along with that, he adds, "You’ve got to be prepared to rethink how you do things. Of course making your applications as cloud native as possible is how you’ll reap the most benefits: 12 factors, CI/CD, which is continuous integration, continuous delivery, but also continuous deployment." +

+ And while they explore that aspect of the technology, Mountain and his team will likely be practicing what he preaches to others taking the cloud native journey. "See what happens when you break it, because it’s important to understand the limits of the system," he says. Or rather, he notes, the advantages of it. "Breaking things on Kube is actually one of the nice things about it—it recovers. It’s the only real way that you’ll see that you might be able to do things." +
+
diff --git a/css/style_amadeus.css b/css/style_amadeus.css new file mode 100644 index 0000000000000..7a91c4f692fee --- /dev/null +++ b/css/style_amadeus.css @@ -0,0 +1,469 @@ +#caseStudyTitle { + margin-top: 1em !important; + font-family:"Roboto", sans-serif; +} + +p { + font-family:"Roboto", sans-serif; + padding:5%; +} + +.header_logo { + + width:23%; + margin-bottom:-0.6%; + margin-left:10px; +} + +a { + text-decoration:none; + color:#3366ff; +} + +body { + margin:0; + +} + +h1 { + font-family:"Roboto", sans-serif; + font-weight:bold; + letter-spacing:0.025em; + font-size:42px; + padding-bottom:0px; +} + +.subhead { + font-size:26px; + font-weight:100; + line-height:40px; + padding-bottom:1%; + padding-top:0.5%; + +} + +.banner1 { + font-family:"Roboto", sans-serif; + font-weight:300; + color:#ffffff; + padding-top:12%; + padding-bottom:0.5%; + padding-left:10%; + font-size:32px; + background: url('/images/CaseStudy_amadeus_banner1.jpg'); + background-size:100% auto; + background-repeat:no-repeat; +} + +.banner2 { + font-family:"Roboto", sans-serif; + font-weight:300; + color:#ffffff; + padding-top:4%; + padding-bottom:4%; + width:100%; + font-size:24px; + letter-spacing:0.03em; + line-height:34px; + float:left; + background-size:100% auto; + background-color:#666666; + background-repeat:repeat; + +} + +.banner3 { + font-family:"Roboto", sans-serif; + font-weight:300; + color:#ffffff; + padding-left:5%; + padding-right:5%; + padding-top:4%; + padding-bottom:4%; + font-size:24px; + letter-spacing:0.03em; + line-height:34px; + float:left; + background: url('/images/CaseStudy_amadeus_banner3.jpg'); + background-size:100% auto; +} + +.banner4 { + font-family:"Roboto", sans-serif; + font-weight:300; + color:#ffffff; + padding-top:5%; + padding-bottom:5%; + font-size:24px; + letter-spacing:0.03em; + line-height:34px; + float:left; + background: url('/images/CaseStudy_amadeus_banner4.jpg'); + background-size:100% auto; +} + +.banner5 { + font-family:"Roboto", 
sans-serif; + font-weight:300; + color:#ffffff; + padding-top:3%; + padding-bottom:3%; + font-size:24px; + letter-spacing:0.03em; + line-height:35px; + float:left; + background-size:100% auto; + background-color:#666666; + background-repeat:no-repeat; +} + +.banner2text { + font-family:"Roboto", sans-serif; + font-weight:300; + color:#ffffff; + width:70%; + padding-left:15%; + float:left; + text-align:center; +} + +.banner3text { + font-family:"Roboto", sans-serif; + font-weight:300; + color:#ffffff; + width:75%; + padding-left:13%; + text-align:center; +} + +.banner4text { + font-family:"Roboto", sans-serif; + font-weight:300; + color:#ffffff; + width:65%; + padding-left:17%; + text-align:center; +} + +.banner5text { + font-family:"Roboto", sans-serif; + font-weight:300; + color:#ffffff; + width:68%; + padding-left:16%; + float:left; + text-align:center; +} + + +h2 { + font-family:"Roboto", sans-serif; + font-weight:300; + font-size:24px; + line-height:34px; + color:#3366ff; +} + +.quote { + font-family:"Roboto", sans-serif; + font-weight:300; + font-size:22px; + line-height:32px; + color:#3366ff; +} + +.details { + font-family:"Roboto", sans-serif; + font-weight:300; + font-size:18px; + color:#3366ff; + letter-spacing:0.03em; + padding-bottom:1.5%; + padding-top:2%; + padding-left:10%; +} + + +hr { + border-bottom:0px solid; + width:100%; + opacity:0.5; + color:#aaaaaa; + height:1px; +} + +.col1 { + width: 42%; + padding-right:8%; + float:left; + font-family:"Roboto", sans-serif; + font-weight:100; + color:#606060; + line-height:20px; + letter-spacing:0.03em; + font-size:14px; + +} + +.col2 { + width: 45%; + font-family:"Roboto", sans-serif; + font-weight:300; + float:left; + line-height:20px; + color:#606060; + letter-spacing:0.03em; + font-size:14px; + +} + +.fullcol { + width:77%; + margin-left:11%; + margin-right:10%; + margin-top:4%; + margin-bottom:4%; + font-family:"Roboto", sans-serif; + font-weight:300; + float:left; + line-height:22px; + color:#606060; 
+ letter-spacing:0.03em; + font-size:14px; +} + +.cols { + width:80%; + margin-left:10%; + margin-top:1%; + margin-bottom:4%; + font-family:"Roboto", sans-serif; + font-weight:300; + float:left; + +} + +h4 { + font-family:"Roboto", sans-serif; + font-weight:400; + letter-spacing:0.9; + font-size:20px; + padding-bottom:0px; +} + + +@media screen and (max-width: 910px){ + +h1 { + font-family:"Roboto", sans-serif; + font-weight:bold; + line-height:36px; + letter-spacing:0.03em; + font-size:30px !important; + padding-bottom:0px; + width:80%; +} + +.header_logo { + width:35%; + margin-bottom:-.5%; + margin-left:10px; +} + +.subhead { + font-size:18px; + font-weight:100; + line-height:27px; +} + +.details { + font-family:"Roboto", sans-serif; + font-weight:300; + font-size:16px; + color:#3366ff; + letter-spacing:0.03em; + padding-bottom:2%; + line-height:28px; + padding-top:4%; + padding-left:10%; +} + +.logo { + width:8%; +} + +.col1 { + width: 95%; + padding-right:8%; + float:left; + font-family:"Roboto", sans-serif; + font-weight:300; + color:#606060; + line-height:20px; + letter-spacing:0.03em; + font-size:14px; +} + +.col2 { + width: 95%; + padding-top:2%; + padding-bottom:5%; + font-family:"Roboto", sans-serif; + font-weight:300; + float:left; + line-height:20px; + color:#606060; + letter-spacing:0.03em; + font-size:14px; +} + +.banner1 { + font-family:"Roboto", sans-serif; + font-weight:300; + color:#ffffff; + padding-top:15%; + padding-bottom:2%; + padding-left:10%; + font-size:18px; + background: url('/images/CaseStudy_amadeus_banner1.jpg'); + background-size:100% auto; +} + +.banner2 { + font-family:"Roboto", sans-serif; + font-weight:300; + color:#ffffff; + padding-top:4%; + padding-bottom:4%; + font-size:18px; + letter-spacing:0.03em; + line-height:24px; + width:100%; + float:left; + background:none; + background-color:#666666; +} + +.banner3 { + font-family:"Roboto", sans-serif; + font-weight:300; + color:#ffffff; + padding-top:5%; + padding-bottom:5%; +
font-size:16px; + letter-spacing:0.03em; + line-height:23px; + width:90%; + float:left; + background: url('/images/CaseStudy_amadeus_banner3.jpg'); +} + +.banner4 { + font-family:"Roboto", sans-serif; + font-weight:300; + color:#ffffff; + padding-top:4%; + padding-bottom:4%; + font-size:18px; + letter-spacing:0.03em; + line-height:24px; + width:100%; + float:left; + background: url('/images/CaseStudy_amadeus_banner4.jpg'); +} + +.banner5 { + font-family:"Roboto", sans-serif; + font-weight:300; + color:#ffffff; + padding-top:4%; + padding-bottom:4%; + font-size:16px; + letter-spacing:0.03em; + line-height:23px; + width:100%; + float:left; + background:none; + background-color:#666666; +} + +.banner2text { + font-family:"Roboto", sans-serif; + font-weight:300; + color:#ffffff; + width:90%; + padding-left:5%; + padding-bottom:1%; + padding-top:1%; + float:left; + text-align:center; +} + +.banner3text { + font-family:"Roboto", sans-serif; + font-weight:300; + color:#ffffff; + width:90%; + padding-left:5%; + padding-top:5%; + padding-bottom:5%; + text-align:center; +} + +.banner4text { + font-family:"Roboto", sans-serif; + font-weight:300; + color:#ffffff; + width:90%; + padding-left:5%; + padding-top:3%; + padding-bottom:3%; + text-align:center; +} + +.banner5text { + font-family:"Roboto", sans-serif; + font-weight:300; + color:#ffffff; + width:90%; + padding-left:5%; + padding-top:3%; + padding-bottom:3%; + text-align:center; +} + +.fullcol { + margin-top:6%; + margin-bottom:8%; +} + +h2 { + line-height:26px; + font-size:18px; +} + +.quote { + font-size:18px; + line-height:24px; +} + +.logo { + width:35%; +} + +@media screen and (max-width: 580px){ + + .header_logo { + width:60%; + margin-bottom:1%; + margin-left:0%; + margin-top:2%; + + } + + .banner1 { + background: url('/images/CaseStudy_amadeus_banner_mobile.jpg'); + } + +} +} diff --git a/docs/admin/high-availability/building.md index
4175c3d6f61e3..fde5b3946939e 100644 --- a/docs/admin/high-availability/building.md +++ b/docs/admin/high-availability/building.md @@ -186,7 +186,7 @@ This code is called the "reconciler," because it reconciles the list of endpoints stored in etcd, and the list of endpoints that are actually up and running. -Prior Kubernetes 1.9, the reconciler expects you to provide the +Prior to Kubernetes 1.9, the reconciler expects you to provide the number of endpoints (i.e., the number of apiserver replicas) through a command-line flag (e.g. `--apiserver-count=3`). If more replicas are available, the reconciler trims down the list of endpoints. diff --git a/docs/concepts/overview/object-management-kubectl/declarative-config.md b/docs/concepts/overview/object-management-kubectl/declarative-config.md index 1772c4874d2d4..710bf23cf47a5 100644 --- a/docs/concepts/overview/object-management-kubectl/declarative-config.md +++ b/docs/concepts/overview/object-management-kubectl/declarative-config.md @@ -617,7 +617,7 @@ by `name`. # configuration file value containers: - name: nginx - image: nginx:1.11 + image: nginx:1.10 - name: nginx-helper-b image: helper:1.3 - name: nginx-helper-c # key: nginx-helper-c; will be added in result diff --git a/docs/concepts/overview/working-with-objects/namespaces.md b/docs/concepts/overview/working-with-objects/namespaces.md index b7e5329ba5da0..cae681936f678 100644 --- a/docs/concepts/overview/working-with-objects/namespaces.md +++ b/docs/concepts/overview/working-with-objects/namespaces.md @@ -86,5 +86,4 @@ across namespaces, you need to use the fully qualified domain name (FQDN). Most Kubernetes resources (e.g. pods, services, replication controllers, and others) are in some namespaces. However namespace resources are not themselves in a namespace. And low-level resources, such as [nodes](/docs/admin/node) and -persistentVolumes, are not in any namespace. 
Events are an exception: they may or may not -have a namespace, depending on the object the event is about. +persistentVolumes, are not in any namespace. diff --git a/docs/concepts/policy/pod-security-policy.md b/docs/concepts/policy/pod-security-policy.md index 1700f3801b7a5..d17a5e700fe73 100644 --- a/docs/concepts/policy/pod-security-policy.md +++ b/docs/concepts/policy/pod-security-policy.md @@ -523,9 +523,8 @@ for the default list of capabilities when using the Docker runtime. ### SELinux -- *MustRunAs* - Requires `seLinuxOptions` to be configured if not using -pre-allocated values. Uses `seLinuxOptions` as the default. Validates against -`seLinuxOptions`. +- *MustRunAs* - Requires `seLinuxOptions` to be configured. Uses +`seLinuxOptions` as the default. Validates against `seLinuxOptions`. - *RunAsAny* - No default provided. Allows any `seLinuxOptions` to be specified. diff --git a/docs/concepts/storage/persistent-volumes.md b/docs/concepts/storage/persistent-volumes.md index 7a0e88796473e..3632604e2d364 100644 --- a/docs/concepts/storage/persistent-volumes.md +++ b/docs/concepts/storage/persistent-volumes.md @@ -196,7 +196,7 @@ parameters: allowVolumeExpansion: true ``` -Once both feature gate and the aforementioned admission plug-in are turned on, an user can request larger volume for their `PersistentVolumeClaim` +Once both feature gate and the aforementioned admission plug-in are turned on, a user can request larger volume for their `PersistentVolumeClaim` by simply editing the claim and requesting a larger size. This in turn will trigger expansion of the volume that is backing the underlying `PersistentVolume`. Under no circumstances will a new `PersistentVolume` be created to satisfy the claim. Kubernetes will instead attempt to resize the existing volume. 
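The resize flow described in the persistent-volumes.md hunk above amounts to editing one field of the claim. A minimal sketch, assuming the feature gate and the `PersistentVolumeClaimResize` admission plug-in are enabled and the claim uses the `standard` storage class defined earlier with `allowVolumeExpansion: true` (the claim name and sizes are illustrative):

```yaml
# Hypothetical claim, edited e.g. via `kubectl edit pvc myclaim` to raise
# spec.resources.requests.storage. Kubernetes attempts to resize the
# existing backing PersistentVolume; it never provisions a new one.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim                # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard   # must have allowVolumeExpansion: true
  resources:
    requests:
      storage: 16Gi            # previously 8Gi; only increases are allowed
```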
diff --git a/docs/concepts/storage/volumes.md b/docs/concepts/storage/volumes.md index 1e141129bfc1a..21a9f0014105f 100644 --- a/docs/concepts/storage/volumes.md +++ b/docs/concepts/storage/volumes.md @@ -286,7 +286,7 @@ See the [FC example](https://github.com/kubernetes/examples/tree/{{page.githubbr ### flocker -[Flocker](https://clusterhq.com/flocker) is an open-source clustered container data volume manager. It provides management +[Flocker](https://github.com/ClusterHQ/flocker) is an open-source clustered container data volume manager. It provides management and orchestration of data volumes backed by a variety of storage backends. A `flocker` volume allows a Flocker dataset to be mounted into a pod. If the diff --git a/docs/getting-started-guides/gce.md b/docs/getting-started-guides/gce.md index 8fd45b26e1394..8497db17c9f92 100644 --- a/docs/getting-started-guides/gce.md +++ b/docs/getting-started-guides/gce.md @@ -51,7 +51,7 @@ wget -q -O - https://get.k8s.io | bash Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster. -By default, some containers will already be running on your cluster. Containers like `fluentd` provide [logging](/docs/concepts/cluster-administration/logging/), while `heapster` provides [monitoring](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/README.md) services. +By default, some containers will already be running on your cluster. Containers like `fluentd` provide [logging](/docs/concepts/cluster-administration/logging/), while `heapster` provides [monitoring](http://releases.k8s.io/master/cluster/addons/cluster-monitoring/README.md) services. The script run by the commands above creates a cluster with the name/prefix "kubernetes". It defines one specific cluster config, so you can't run it more than once. 
diff --git a/docs/getting-started-guides/ubuntu/installation.md b/docs/getting-started-guides/ubuntu/installation.md index f02bcb5bdb3fc..7c788380d0e19 100644 --- a/docs/getting-started-guides/ubuntu/installation.md +++ b/docs/getting-started-guides/ubuntu/installation.md @@ -10,15 +10,17 @@ Ubuntu 16.04 introduced the [Canonical Distribution of Kubernetes](https://www.u {% endcapture %} {% capture prerequisites %} -- A working [Juju client](https://jujucharms.com/docs/2.2/reference-install); this does not have to be a Linux machine, it can also be Windows or OSX. +- A working [Juju client](https://jujucharms.com/docs/2.3/reference-install); this does not have to be a Linux machine, it can also be Windows or OSX. - A [supported cloud](#cloud-compatibility). - Bare Metal deployments are supported via [MAAS](http://maas.io). Refer to the [MAAS documentation](http://maas.io/docs/) for configuration instructions. - OpenStack deployments are currently only tested on Icehouse and newer. -- Network access to the following domains - - *.jujucharms.com - - gcr.io - - github.com - - Access to an Ubuntu mirror (public or private) +- One of the following: + - Network access to the following domains + - *.jujucharms.com + - gcr.io + - github.com + - Access to an Ubuntu mirror (public or private) + - Offline deployment prepared with [these](https://github.com/juju-solutions/bundle-canonical-kubernetes/wiki/Running-CDK-in-a-restricted-environment) instructions. {% endcapture %} @@ -27,7 +29,7 @@ Ubuntu 16.04 introduced the [Canonical Distribution of Kubernetes](https://www.u Out of the box the deployment comes with the following components on 9 machines: - Kubernetes (automated deployment, operations, and scaling) - - Three node Kubernetes cluster with one master and two worker nodes. + - Four node Kubernetes cluster with one master and three worker nodes. - TLS used for communication between units for security. 
- Flannel Software Defined Network (SDN) plugin - A load balancer for HA kubernetes-master (Experimental) @@ -60,13 +62,26 @@ Bare Metal (MAAS) | Juju | Ubuntu | flannel, calico | [doc For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. -## Configure Juju to use your cloud provider +## Installation options + +You can launch a cluster in one of two ways: [conjure-up](#conjure-up) or [juju deploy](#juju-deploy). Conjure-up is just a convenience wrapper over juju and simplifies the installation. As such, it is the preferred method of install. Deployment of the cluster is [supported on a wide variety of public clouds](#cloud-compatibility), private OpenStack clouds, or raw bare metal clusters. Bare metal deployments are supported via [MAAS](http://maas.io/). +## Conjure-up +To install Kubernetes with conjure-up, you need only to run the following commands and then follow the prompts: + +``` +sudo snap install conjure-up --classic +conjure-up kubernetes +``` +## Juju deploy + +### Configure Juju to use your cloud provider + After deciding which cloud to deploy to, follow the [cloud setup page](https://jujucharms.com/docs/devel/getting-started) to configure deploying to that cloud. -Load your [cloud credentials](https://jujucharms.com/docs/2.2/credentials) for each +Load your [cloud credentials](https://jujucharms.com/docs/2.3/credentials) for each cloud provider you would like to use. In this example @@ -99,11 +114,11 @@ ERROR failed to bootstrap model: instance provisioning failed (Failed) ``` -You will need a controller node for each cloud or region you are deploying to. See the [controller documentation](https://jujucharms.com/docs/2.2/controllers) for more information. +You will need a controller node for each cloud or region you are deploying to. See the [controller documentation](https://jujucharms.com/docs/2.3/controllers) for more information. 
Note that each controller can host multiple Kubernetes clusters in a given cloud or region. -## Launch a Kubernetes cluster +### Launch a Kubernetes cluster The following command will deploy the initial 9-node starter cluster. The speed of execution is very dependent of the performance of the cloud you're deploying to: @@ -122,50 +137,64 @@ The `juju status` command provides information about each unit in the cluster. U Output: ``` -Model Controller Cloud/Region Version -default aws-us-east-2 aws/us-east-2 2.0.1 - -App Version Status Scale Charm Store Rev OS Notes -easyrsa 3.0.1 active 1 easyrsa jujucharms 3 ubuntu -etcd 3.1.2 active 3 etcd jujucharms 14 ubuntu -flannel 0.6.1 maintenance 4 flannel jujucharms 5 ubuntu -kubeapi-load-balancer 1.10.0 active 1 kubeapi-load-balancer jujucharms 3 ubuntu exposed -kubernetes-master 1.6.1 active 1 kubernetes-master jujucharms 6 ubuntu -kubernetes-worker 1.6.1 active 3 kubernetes-worker jujucharms 8 ubuntu exposed -topbeat active 3 topbeat jujucharms 5 ubuntu - -Unit Workload Agent Machine Public address Ports Message -easyrsa/0* active idle 0 52.15.95.92 Certificate Authority connected. -etcd/0 active idle 3 52.15.79.127 2379/tcp Healthy with 3 known peers. -etcd/1* active idle 4 52.15.111.66 2379/tcp Healthy with 3 known peers. (leader) -etcd/2 active idle 5 52.15.144.25 2379/tcp Healthy with 3 known peers. -kubeapi-load-balancer/0* active idle 7 52.15.84.179 443/tcp Loadbalancer ready. -kubernetes-master/0* active idle 8 52.15.106.225 6443/tcp Kubernetes master services ready. - flannel/3 active idle 52.15.106.225 Flannel subnet 10.1.48.1/24 -kubernetes-worker/0* active idle 9 52.15.153.246 Kubernetes worker running. - flannel/2 active idle 52.15.153.246 Flannel subnet 10.1.53.1/24 -kubernetes-worker/1 active idle 10 52.15.52.103 Kubernetes worker running. - flannel/0* active idle 52.15.52.103 Flannel subnet 10.1.31.1/24 -kubernetes-worker/2 active idle 11 52.15.104.181 Kubernetes worker running. 
- flannel/1 active idle 52.15.104.181 Flannel subnet 10.1.83.1/24 - -Machine State DNS Inst id Series AZ -0 started 52.15.95.92 i-06e66414008eca61c xenial us-east-2c -3 started 52.15.79.127 i-0038186d2c5103739 xenial us-east-2b -4 started 52.15.111.66 i-0ac66c86a8ec93b18 xenial us-east-2a -5 started 52.15.144.25 i-078cfe79313d598c9 xenial us-east-2c -7 started 52.15.84.179 i-00fd70321a51b658b xenial us-east-2c -8 started 52.15.106.225 i-0109a5fc942c53ed7 xenial us-east-2b -9 started 52.15.153.246 i-0ab63e34959cace8d xenial us-east-2b -10 started 52.15.52.103 i-0108a8cc0978954b5 xenial us-east-2a -11 started 52.15.104.181 i-0f5562571c649f0f2 xenial us-east-2c +Model Controller Cloud/Region Version SLA +conjure-canonical-kubern-f48 conjure-up-aws-650 aws/us-east-2 2.3.2 unsupported + +App Version Status Scale Charm Store Rev OS Notes +easyrsa 3.0.1 active 1 easyrsa jujucharms 27 ubuntu +etcd 2.3.8 active 3 etcd jujucharms 63 ubuntu +flannel 0.9.1 active 4 flannel jujucharms 40 ubuntu +kubeapi-load-balancer 1.10.3 active 1 kubeapi-load-balancer jujucharms 43 ubuntu exposed +kubernetes-master 1.9.3 active 1 kubernetes-master jujucharms 13 ubuntu +kubernetes-worker 1.9.3 active 3 kubernetes-worker jujucharms 81 ubuntu exposed + +Unit Workload Agent Machine Public address Ports Message +easyrsa/0* active idle 3 18.219.190.99 Certificate Authority connected. +etcd/0 active idle 5 18.219.56.23 2379/tcp Healthy with 3 known peers +etcd/1* active idle 0 18.219.212.151 2379/tcp Healthy with 3 known peers +etcd/2 active idle 6 13.59.240.210 2379/tcp Healthy with 3 known peers +kubeapi-load-balancer/0* active idle 1 18.222.61.65 443/tcp Loadbalancer ready. +kubernetes-master/0* active idle 4 18.219.105.220 6443/tcp Kubernetes master running. + flannel/3 active idle 18.219.105.220 Flannel subnet 10.1.78.1/24 +kubernetes-worker/0 active idle 2 18.219.221.98 80/tcp,443/tcp Kubernetes worker running. 
+ flannel/1 active idle 18.219.221.98 Flannel subnet 10.1.38.1/24
+kubernetes-worker/1* active idle 7 18.219.249.103 80/tcp,443/tcp Kubernetes worker running.
+ flannel/2 active idle 18.219.249.103 Flannel subnet 10.1.68.1/24
+kubernetes-worker/2 active idle 8 52.15.89.16 80/tcp,443/tcp Kubernetes worker running.
+ flannel/0* active idle 52.15.89.16 Flannel subnet 10.1.73.1/24
+
+Machine State DNS Inst id Series AZ Message
+0 started 18.219.212.151 i-065eab4eabc691b25 xenial us-east-2a running
+1 started 18.222.61.65 i-0b332955f028d6281 xenial us-east-2b running
+2 started 18.219.221.98 i-0879ef1ed95b569bc xenial us-east-2a running
+3 started 18.219.190.99 i-08a7b364fc008fc85 xenial us-east-2c running
+4 started 18.219.105.220 i-0f92d3420b01085af xenial us-east-2a running
+5 started 18.219.56.23 i-0271f6448cebae352 xenial us-east-2c running
+6 started 13.59.240.210 i-0789ef5837e0669b3 xenial us-east-2b running
+7 started 18.219.249.103 i-02f110b0ab042f7ac xenial us-east-2b running
+8 started 52.15.89.16 i-086852bf1bee63d4e xenial us-east-2c running
+
+Relation provider Requirer Interface Type Message
+easyrsa:client etcd:certificates tls-certificates regular
+easyrsa:client kubeapi-load-balancer:certificates tls-certificates regular
+easyrsa:client kubernetes-master:certificates tls-certificates regular
+easyrsa:client kubernetes-worker:certificates tls-certificates regular
+etcd:cluster etcd:cluster etcd peer
+etcd:db flannel:etcd etcd regular
+etcd:db kubernetes-master:etcd etcd regular
+kubeapi-load-balancer:loadbalancer kubernetes-master:loadbalancer public-address regular
+kubeapi-load-balancer:website kubernetes-worker:kube-api-endpoint http regular
+kubernetes-master:cni flannel:cni kubernetes-cni subordinate
+kubernetes-master:kube-api-endpoint kubeapi-load-balancer:apiserver http regular
+kubernetes-master:kube-control kubernetes-worker:kube-control kube-control regular
+kubernetes-worker:cni flannel:cni kubernetes-cni subordinate
```

## Interacting with the cluster

After the cluster is deployed you may assume control over the cluster from any kubernetes-master or kubernetes-worker node.

-First you need to download the credentials and client application to your local workstation:
+If you didn't use conjure-up, you will first need to download the credentials and client application to your local workstation:

Create the kubectl config directory.

@@ -211,7 +240,7 @@ resources from Juju by using **constraints**.
You can increase the amount of CPU or memory (RAM) in any of the systems requested by Juju. This allows you to fine tune the Kubernetes cluster to fit your workload. Use flags on the bootstrap command or as a separate `juju constraints` command. Look to the
-[Juju documentation for machine](https://jujucharms.com/docs/2.2/charms-constraints)
+[Juju documentation for machine](https://jujucharms.com/docs/2.3/charms-constraints)
details.

## Scale out cluster

@@ -243,8 +272,7 @@ It is strongly recommended to run an odd number of units for quorum.

## Tear down cluster

-If you want stop the servers you can destroy the Juju model or the
-controller. Use the `juju switch` command to get the current controller name:
+If you used conjure-up to create your cluster, you can tear it down with `conjure-down`. If you used juju directly, you can tear it down by destroying the Juju model or the controller. Use the `juju switch` command to get the current controller name:

```shell
juju switch

diff --git a/docs/reference/kubectl/cheatsheet.md b/docs/reference/kubectl/cheatsheet.md
index 42c9277984b2a..58003cb0d9f38 100644
--- a/docs/reference/kubectl/cheatsheet.md
+++ b/docs/reference/kubectl/cheatsheet.md
@@ -7,7 +7,7 @@ approvers:
title: kubectl Cheat Sheet
---

-See also: [Kubectl Overview](/docs/user-guide/kubectl-overview/) and [JsonPath Guide](/docs/user-guide/jsonpath).
+See also: [Kubectl Overview](/docs/reference/kubectl/overview/) and [JsonPath Guide](/docs/reference/kubectl/jsonpath).
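The JsonPath Guide referenced in the "See also" line above covers `-o=jsonpath` output in depth. As a quick, hedged illustration (this assumes `kubectl` is configured against a reachable cluster; the queries themselves use standard kubectl JSONPath syntax):

```shell
# Print the name of every pod in the current namespace, space-separated
kubectl get pods -o=jsonpath='{.items[*].metadata.name}'

# Print each pod's name and host IP on its own line
kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.hostIP}{"\n"}{end}'
```

The `range`/`end` form iterates over the `items` list so you can emit one formatted line per object.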
## Kubectl Autocomplete

@@ -292,8 +292,8 @@ Output format | Description
`-o=custom-columns=<spec>` | Print a table using a comma separated list of custom columns
`-o=custom-columns-file=<filename>` | Print a table using the custom columns template in the `<filename>` file
`-o=json` | Output a JSON formatted API object
-`-o=jsonpath=
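To make the custom-columns rows above concrete, a sketch (assuming a configured `kubectl` and at least one running pod; the column names `NAME` and `STATUS` are arbitrary labels, not reserved words):

```shell
# Two-column table: pod name and phase, each column defined by a JSONPath expression
kubectl get pods -o=custom-columns='NAME:.metadata.name,STATUS:.status.phase'

# The same columns supplied as a template file: a header row of
# column names followed by a row of the matching expressions
printf 'NAME          STATUS\nmetadata.name status.phase\n' > columns.txt
kubectl get pods -o=custom-columns-file=columns.txt
```

The file form is convenient when the same column layout is reused across many invocations.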