OSDOCS-773 Azure BYO VNet #18133
@@ -0,0 +1,46 @@
[id="installing-azure-vnet"]
= Installing a cluster on Azure to an existing VNet
include::modules/common-attributes.adoc[]
:context: installing-azure-vnet

toc::[]

In {product-title} version {product-version}, you can install a cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the `install-config.yaml` file before you install the cluster.

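For orientation, the following is a minimal sketch, not part of the documented procedure, of the typical command flow that the included modules below walk through; `<installation_directory>` is a placeholder:

[source,terminal]
----
$ ./openshift-install create install-config --dir <installation_directory>
# Edit <installation_directory>/install-config.yaml to reference your existing VNet and subnets
$ ./openshift-install create cluster --dir <installation_directory>
----
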
.Prerequisites

* Review details about the
xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update]
processes.
* xref:../../installing/installing_azure/installing-azure-account.adoc#installing-azure-account[Configure an Azure account] to host the cluster and determine the tested and validated region to deploy the cluster to.
* If you use a firewall, you must
xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configure it to allow the sites] that your cluster requires access to.

include::modules/installation-about-custom-azure-vnet.adoc[leveloffset=+1]

include::modules/cluster-entitlements.adoc[leveloffset=+1]

include::modules/ssh-agent-using.adoc[leveloffset=+1]

include::modules/installation-obtaining-installer.adoc[leveloffset=+1]

include::modules/installation-initializing.adoc[leveloffset=+1]

include::modules/installation-configuration-parameters.adoc[leveloffset=+2]

include::modules/installation-azure-config-yaml.adoc[leveloffset=+2]

// Removing; Proxy not supported for Azure IPI for 4.2
// include::modules/installation-configure-proxy.adoc[leveloffset=+2]

include::modules/installation-launching-installer.adoc[leveloffset=+1]

include::modules/cli-installing-cli.adoc[leveloffset=+1]

include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1]

.Next steps

* xref:../../installing/install_config/customizations.adoc#customizations[Customize your cluster].
* If necessary, you can
xref:../../support/remote_health_monitoring/opting-out-of-remote-health-reporting.adoc#opting-out-remote-health-reporting_opting-out-remote-health-reporting[opt out of remote health reporting].

@@ -0,0 +1,103 @@
// Module included in the following assemblies:
//
// * installing/installing_azure/installing-azure-vnet.adoc

[id="installation-about-custom-azure-vnet_{context}"]
= About reusing a VNet for your {product-title} cluster

In {product-title} {product-version}, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use the existing subnets and routing rules within the VNet.

By deploying {product-title} into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet.

[IMPORTANT]
====
The use of an existing VNet requires the use of the updated Azure Private DNS (preview) feature. See link:https://azure.microsoft.com/en-us/updates/announcing-preview-refresh-for-azure-dns-private-zones-2/[Announcing Preview Refresh for Azure DNS Private Zones] for more information about the limitations of this feature.
====

| [id="installation-about-custom-azure-vnet-requirements_{context}"] | ||
| == Requirements for using your VNet | ||
|
|
||
When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installation program usually creates the following components, but it does not create them when you install into an existing VNet:

* Subnets
* Route tables
* VNets
* Network security groups

If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster.

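The following is a minimal sketch, not taken from the documented procedure, of pre-creating a VNet and two subnets with the Azure CLI before installation. The resource group, VNet, subnet names, and CIDR ranges are illustrative placeholders:

[source,terminal]
----
# Resource group that holds the existing networking components
$ az group create --name example-vnet-rg --location centralus

# VNet whose CIDR block contains the machine CIDR that you plan to use
$ az network vnet create \
    --resource-group example-vnet-rg \
    --name example-vnet \
    --address-prefixes 10.0.0.0/16

# One subnet for control plane machines and one for compute machines
$ az network vnet subnet create \
    --resource-group example-vnet-rg \
    --vnet-name example-vnet \
    --name example-control-plane-subnet \
    --address-prefixes 10.0.0.0/24

$ az network vnet subnet create \
    --resource-group example-vnet-rg \
    --vnet-name example-vnet \
    --name example-compute-subnet \
    --address-prefixes 10.0.1.0/24
----
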
The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICs for the virtual machines that it creates to subnets from the networking resource group.

Your VNet must meet the following characteristics:

* The VNet’s CIDR block must contain the `Networking.MachineCIDR` range, which is the IP address pool for cluster machines.
* The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses.

You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default.

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:

* All the subnets that you specify exist.
* You provide two private subnets for each availability zone.
* The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them.

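As a minimal sketch, you can verify the existing subnets before you run the installation program; the resource group and VNet names are the illustrative placeholders used above:

[source,terminal]
----
$ az network vnet subnet list \
    --resource-group example-vnet-rg \
    --vnet-name example-vnet \
    --output table
----
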
If you destroy a cluster that uses an existing VNet, the VNet is not deleted.

| [id="installation-about-custom-azure-vnet-nsg-requirements_{context}"] | ||
| === Network security group requirements | ||
|
|
||
The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that cluster communication works correctly. You must create rules to allow access to the required cluster communication ports.

[IMPORTANT]
====
The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails.
====

.Required ports
[options="header",cols="1,3,1,1"]
|===
|Port
|Description
|Control plane
|Compute

|`80`
|Allows HTTP traffic
|x
|

|`443`
|Allows HTTPS traffic
|x
|

|`6443`
|Allows communication to the control plane machines.
|x
|x
|===

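As an example of opening one of these ports, the following is a minimal sketch, not taken from the documented procedure, of adding an inbound rule for port `6443` to an existing network security group with the Azure CLI. The resource group, network security group, and rule names are illustrative placeholders:

[source,terminal]
----
$ az network nsg rule create \
    --resource-group example-vnet-rg \
    --nsg-name example-nsg \
    --name allow-openshift-api \
    --priority 100 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges 6443
----
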
| [id="installation-about-custom-azure-permissions_{context}"] | ||
| == Division of permissions | ||
|
|
||
Starting with {product-title} 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnets, or ingress rules.

The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and networking core components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage, and nodes.

| [id="installation-about-custom-azure-vnet-isolation_{context}"] | ||
| == Isolation between clusters | ||
|
|
||
Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet.
////
These are some of the details from the AWS version, and if any of them are relevant to Azure, they can be included.
If you deploy {product-title} to an existing network, the isolation of cluster services is reduced in the following ways:

* You can install multiple {product-title} clusters in the same VNet.
* ICMP ingress is allowed to the entire network.
* TCP 22 ingress (SSH) is allowed to the entire network.
* Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.
* Control plane TCP 22623 ingress (MCS) is allowed to the entire network.
////