**content/en/docs/concepts/cluster-administration/cloud-providers.md** (+63 −11)
```diff
@@ -10,11 +10,15 @@ cloud provider.
 {{% /capture %}}
 
 {{% capture body %}}
-# AWS
+## AWS
 This section describes all the possible configurations which can
 be used when running Kubernetes on Amazon Web Services.
 
-## Load Balancers
+### Node Name
+
+The AWS cloud provider uses the private DNS name of the AWS instance as the name of the Kubernetes Node object.
+
+### Load Balancers
 You can set up [external load balancers](/docs/tasks/access-application-cluster/create-external-load-balancer/)
 to use specific features in AWS by configuring the annotations as shown below.
 
```
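As a sketch of the annotation mechanism described above, a `LoadBalancer` Service can carry AWS-specific annotations in its metadata. The keys shown here are examples from the `service.beta.kubernetes.io/aws-load-balancer-*` family; verify the exact set against the comments in aws.go.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
  annotations:
    # Provision an internal (not internet-facing) ELB -- example annotation key
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
    # Enable ELB connection draining -- example annotation key
    service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true"
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
```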
```diff
@@ -58,9 +62,39 @@ Different settings can be applied to a load balancer service in AWS using _annotations_.
 
 The information for the annotations for AWS is taken from the comments on [aws.go](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/aws/aws.go)
 
-# OpenStack
+## Azure
+
+### Node Name
+
+The Azure cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object.
+Note that the Kubernetes Node name must match the Azure VM name.
+
+## CloudStack
+
+### Node Name
+
+The CloudStack cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object.
+Note that the Kubernetes Node name must match the CloudStack VM name.
+
+## GCE
+
+### Node Name
+
+The GCE cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object.
+Note that the first segment of the Kubernetes Node name must match the GCE instance name (e.g. a Node named `kubernetes-node-2.c.my-proj.internal` must correspond to an instance named `kubernetes-node-2`).
+
+## OpenStack
 This section describes all the possible configurations which can
-be used when using OpenStack with Kubernetes. The OpenStack cloud provider
+be used when using OpenStack with Kubernetes.
+
+### Node Name
+
+The OpenStack cloud provider uses the instance name (as determined from OpenStack metadata) as the name of the Kubernetes Node object.
+Note that the instance name must be a valid Kubernetes Node name in order for the kubelet to successfully register its Node object.
+
+### Services
+
+The OpenStack cloud provider
 implementation for Kubernetes supports the use of these OpenStack services from
 the underlying cloud, where available:
 
```
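The GCE naming rule added above ("the first segment of the Node name must match the instance name") can be sketched with shell parameter expansion; the node name is the example from the diff:

```shell
#!/bin/sh
# On GCE, the instance name is the first DNS segment of the Node name.
node_name="kubernetes-node-2.c.my-proj.internal"
instance_name="${node_name%%.*}"   # strip everything after the first dot
echo "$instance_name"              # prints kubernetes-node-2
```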
```diff
@@ -88,12 +122,12 @@ OpenStack services other than Keystone are not available and simply disclaim
 support for impacted features. Certain features are also enabled or disabled
 based on the list of extensions published by Neutron in the underlying cloud.
 
-## cloud.conf
+### cloud.conf
 Kubernetes knows how to interact with OpenStack via the file cloud.conf. It is
 the file that will provide Kubernetes with credentials and location for the OpenStack auth endpoint.
 You can create a cloud.conf file by specifying the following details in it
 
-### Typical configuration
+#### Typical configuration
 This is an example of a typical configuration that touches the values that most
 often need to be set. It points the provider at the OpenStack cloud's Keystone
 endpoint, provides details for how to authenticate with it, and configures the
```
```diff
 These configuration options for the OpenStack provider pertain to its global
 configuration and should appear in the `[Global]` section of the `cloud.conf`
 file:
```
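A minimal sketch of such a `[Global]` section; the endpoint and credential values are placeholders, and the key names (`auth-url`, `tenant-id`, etc.) should be checked against the provider's documented option list:

```ini
[Global]
# Keystone endpoint used for authentication -- placeholder URL
auth-url=https://keystone.example.com:5000/v2.0
username=admin
password=changeme
# Tenant (mapped to "project" under Keystone V3) owning the cluster resources
tenant-id=c869168a828847f39f7f06edd7305637
region=RegionOne
```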
```diff
@@ -146,7 +180,7 @@ file:
 When using Keystone V3 - which changes tenant to project - the `tenant-id` value
 is automatically mapped to the project construct in the API.
 
-#### Load Balancer
+##### Load Balancer
 These configuration options for the OpenStack provider pertain to the load
 balancer and should appear in the `[LoadBalancer]` section of the `cloud.conf`
 file:
```
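A sketch of a `[LoadBalancer]` section using the `node-security-group` option described in this hunk; the IDs are placeholders and the other keys are assumptions to be checked against the provider's option list:

```ini
[LoadBalancer]
# Neutron subnet in which load balancer members live -- placeholder ID
subnet-id=6193abb0-1f35-4b36-8a5c-7c0f1d8f0f79
# Let the provider manage security group rules for Services
manage-security-groups=true
# Security group to manage; required when the setting above is enabled
node-security-group=3a6d4b7f-92c1-4f10-9e44-0d5a7b1c2e33
```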
```diff
@@ -190,7 +224,7 @@ file:
 `node-security-group` must also be supplied.
 * `node-security-group` (Optional): ID of the security group to manage.
 
-#### Block Storage
+##### Block Storage
 These configuration options for the OpenStack provider pertain to block storage
 and should appear in the `[BlockStorage]` section of the `cloud.conf` file:
```
````diff
@@ -228,7 +262,7 @@ provider configuration:
 bs-version=v2
 ```
 
-#### Metadata
+##### Metadata
 These configuration options for the OpenStack provider pertain to metadata and
 should appear in the `[Metadata]` section of the `cloud.conf` file:
````
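A sketch of the `[Metadata]` section discussed here; `search-order` is the assumed key name for choosing between the configuration drive and the metadata service, matching the "default is to check both" behavior in the surrounding text:

```ini
[Metadata]
# Where to look for instance metadata; the default checks both sources.
# "configDrive" is consulted before the network metadata service here.
search-order=configDrive,metadataService
```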
```diff
@@ -250,7 +284,7 @@ should appear in the `[Metadata]` section of the `cloud.conf` file:
 both configuration drive and metadata service though and only one or the other
 may be available which is why the default is to check both.
 
-#### Router
+##### Router
 
 These configuration options for the OpenStack provider pertain to the [kubenet]
 Kubernetes network plugin and should appear in the `[Router]` section of the
```
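The hunk above introduces the `[Router]` section used by kubenet; a minimal sketch, assuming `router-id` is the key name and using a placeholder ID:

```ini
[Router]
# Neutron router kubenet uses to set up routes between nodes -- placeholder ID
router-id=62b1f71c-f9fc-4e5c-b9f0-1d0f4b2a9b11
```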
```diff
@@ -267,4 +301,22 @@ Kubernetes network plugin and should appear in the `[Router]` section of the
 
 {{% /capture %}}
 
+## OVirt
+
+### Node Name
+
+The OVirt cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object.
+Note that the Kubernetes Node name must match the VM FQDN (reported by OVirt under `<vm><guest_info><fqdn>...</fqdn></guest_info></vm>`).
+
+## Photon
+
+### Node Name
+
+The Photon cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object.
+Note that the Kubernetes Node name must match the Photon VM name (or if `overrideIP` is set to true in the `--cloud-config`, the Kubernetes Node name must match the Photon VM IP address).
+
+## VSphere
+
+### Node Name
 
+The VSphere cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object.
```
**content/en/docs/concepts/cluster-administration/networking.md** (+3 −3)
```diff
@@ -94,7 +94,7 @@ very niche operation. In this case a port will be allocated on the host `Node`
 and traffic will be forwarded to the `Pod`. The `Pod` itself is blind to the
 existence or non-existence of host ports.
 
-## How to achieve this
+## How to implement the Kubernetes networking model
 
 There are a number of ways that this network model can be implemented. This
 document is not an exhaustive study of the various methods, but hopefully serves
```

```diff
@@ -121,11 +121,11 @@ Details on how the AOS system works can be accessed here: http://www.apstra.com/
 
 ### Big Cloud Fabric from Big Switch Networks
 
-[Big Cloud Fabric](https://www.bigswitch.com/container-network-automation) is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premise environments. Using unified physical & virtual SDN, Big Cloud Fabric tackles inherent container networking problems such as load balancing, visibility, troubleshooting, security policies & container traffic monitoring.
+[Big Cloud Fabric](https://www.bigswitch.com/container-network-automation) is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premises environments. Using unified physical & virtual SDN, Big Cloud Fabric tackles inherent container networking problems such as load balancing, visibility, troubleshooting, security policies & container traffic monitoring.
 
 With the help of the Big Cloud Fabric's virtual pod multi-tenant architecture, container orchestration systems such as Kubernetes, RedHat Openshift, Mesosphere DC/OS & Docker Swarm will be natively integrated along side with VM orchestration systems such as VMware, OpenStack & Nutanix. Customers will be able to securely inter-connect any number of these clusters and enable inter-tenant communication between them if needed.
 
-BCF was recognized by Gartner as a visionary in the latest [Magic Quadrant](http://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html). One of the BCF Kubernetes on premise deployments (which includes Kubernetes, DC/OS & VMware running on multiple DCs across different geographic regions) is also referenced [here](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/).
+BCF was recognized by Gartner as a visionary in the latest [Magic Quadrant](http://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html). One of the BCF Kubernetes on-premises deployments (which includes Kubernetes, DC/OS & VMware running on multiple DCs across different geographic regions) is also referenced [here](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/).
```
**content/en/docs/concepts/configuration/assign-pod-node.md** (+14 −1)
```diff
@@ -4,9 +4,14 @@ reviewers:
 - kevin-wangzefeng
 - bsalamat
 title: Assigning Pods to Nodes
+content_template: templates/concept
 weight: 30
 ---
 
+{{< toc >}}
+
+{{% capture overview %}}
+
 You can constrain a [pod](/docs/concepts/workloads/pods/pod/) to only be able to run on particular [nodes](/docs/concepts/architecture/nodes/) or to prefer to
 run on particular nodes. There are several ways to do this, and they all use
 [label selectors](/docs/concepts/overview/working-with-objects/labels/) to make the selection.
```
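The label-selector mechanism in the overview above can be sketched with a `nodeSelector` in the PodSpec; the `disktype: ssd` label is a made-up example and must be attached to a node first (e.g. `kubectl label nodes <node-name> disktype=ssd`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
  # Schedule only onto nodes carrying the label disktype=ssd
  nodeSelector:
    disktype: ssd
```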
```diff
@@ -19,7 +24,9 @@ services that communicate a lot into the same availability zone.
 You can find all the files for these examples [in our docs
```

```diff
+**Note:** Special characters such as `$`, `\*`, and `!` require escaping.
+If the password you are using has special characters, you need to escape them using the `\\` character. For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way:
```
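The command itself is truncated above, so as a sketch under that caveat: escaping each special character with a backslash lets the shell pass the literal password through unchanged (`printf` stands in for whatever command actually consumes the value):

```shell
#!/bin/sh
# Escape !, \ and $ so the shell delivers the literal password S!B\*d$zDsb.
printf '%s\n' S\!B\\\*d\$zDsb    # prints S!B\*d$zDsb
```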
**content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md** (+15 −1)
```diff
@@ -4,18 +4,26 @@ reviewers:
 - freehan
 - thockin
 title: Network Plugins
+content_template: templates/concept
 weight: 10
 ---
 
 {{< toc >}}
 
-__Disclaimer__: Network plugins are in alpha. Its contents will change rapidly.
+{{% capture overview %}}
+
+{{< feature-state state="alpha" >}}
+{{< warning >}}Alpha features change rapidly. {{< /warning >}}
 
 Network plugins in Kubernetes come in a few flavors:
 
 * CNI plugins: adhere to the appc/CNI specification, designed for interoperability.
 * Kubenet plugin: implements basic `cbr0` using the `bridge` and `host-local` CNI plugins
+
+{{% /capture %}}
+
+{{% capture body %}}
+
 ## Installation
 
 The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it found, and executes the selected plugin at appropriate times in the pod lifecycle (this is only true for Docker, as rkt manages its own CNI plugins). There are two Kubelet command line parameters to keep in mind when using plugins:
```

```diff
@@ -71,3 +79,9 @@ This option is provided to the network-plugin; currently **only kubenet supports
 * `--network-plugin=cni` specifies that we use the `cni` network plugin with actual CNI plugin binaries located in `--cni-bin-dir` (default `/opt/cni/bin`) and CNI plugin configuration located in `--cni-conf-dir` (default `/etc/cni/net.d`).
 * `--network-plugin=kubenet` specifies that we use the `kubenet` network plugin with CNI `bridge` and `host-local` plugins placed in `/opt/cni/bin` or `cni-bin-dir`.
 * `--network-plugin-mtu=9001` specifies the MTU to use, currently only used by the `kubenet` network plugin.
```
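With `--network-plugin=cni`, the kubelet reads a network configuration from `--cni-conf-dir`; a minimal sketch of such a file, using the standard `bridge` and `host-local` plugins mentioned above (the network name, bridge name, and subnet are placeholders):

```json
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
```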
**content/en/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md** (+11)
```diff
@@ -3,14 +3,19 @@ reviewers:
 - rickypai
 - thockin
 title: Adding entries to Pod /etc/hosts with HostAliases
+content_template: templates/concept
 weight: 60
 ---
 
 {{< toc >}}
 
+{{% capture overview %}}
 Adding entries to a Pod's /etc/hosts file provides Pod-level override of hostname resolution when DNS and other options are not applicable. In 1.7, users can add these custom entries with the HostAliases field in PodSpec.
 
 Modification not using HostAliases is not suggested because the file is managed by Kubelet and can be overwritten during Pod creation/restart.
+{{% /capture %}}
+
+{{% capture body %}}
 
 ## Default Hosts File Content
```

```diff
@@ -91,3 +96,9 @@ In 1.8, HostAlias is supported for all Pods regardless of network configuration.
 Kubelet [manages](https://github.com/kubernetes/kubernetes/issues/14633) the hosts file for each container of the Pod to prevent Docker from [modifying](https://github.com/moby/moby/issues/17190) the file after the containers have already been started.
 
 Because of the managed-nature of the file, any user-written content will be overwritten whenever the hosts file is remounted by Kubelet in the event of a container restart or a Pod reschedule. Thus, it is not suggested to modify the contents of the file.
```
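A sketch of the `hostAliases` field in a PodSpec, as described in this file's overview; the IPs and hostnames are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  hostAliases:
  # Each entry becomes a line in the Pod's Kubelet-managed /etc/hosts
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
  containers:
  - name: cat-hosts
    image: busybox
    command: ["cat", "/etc/hosts"]
```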