2 changes: 1 addition & 1 deletion modules/configuring-hybrid-ovnkubernetes.adoc
@@ -82,6 +82,6 @@ status: {}
<2> Specify the CIDR configuration used when adding nodes.
<3> Specify `OVNKubernetes` as the Container Network Interface (CNI) cluster network provider.
<4> Specify the CIDR configuration used for nodes on the additional overlay network. The `hybridClusterNetwork` CIDR cannot overlap with the `clusterNetwork` CIDR.
-<5> Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere; the custom port can be any open port excluding the default `4789` port. For more information on this requirement, see the Microsoft documentation on link:https://docs.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/common-problems#pod-to-pod-connectivity-between-hosts-is-broken-on-my-kubernetes-cluster-running-on-vsphere[Pod to pod connectivity between hosts is broken].
+<5> Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default `4789` port. For more information on this requirement, see the Microsoft documentation on link:https://docs.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/common-problems#pod-to-pod-connectivity-between-hosts-is-broken-on-my-kubernetes-cluster-running-on-vsphere[Pod-to-pod connectivity between hosts is broken].

. Optional: Back up the `<installation_directory>/manifests/cluster-network-03-config.yml` file. The installation program deletes the `manifests/` directory when creating the cluster.
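
For reference, the callouts above annotate a `Network` custom resource along these lines. This is a sketch reconstructed from the callout descriptions; the CIDR values and the VXLAN port shown are illustrative, not prescribed values:

```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:            # <2> CIDR configuration used when adding nodes
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  defaultNetwork:
    type: OVNKubernetes      # <3> the CNI cluster network provider
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork:          # <4> must not overlap with clusterNetwork
        - cidr: 10.132.0.0/14
          hostPrefix: 23
        hybridOverlayVXLANPort: 9898   # <5> any open port except the default 4789
status: {}
```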
5 changes: 3 additions & 2 deletions modules/creating-the-vsphere-windows-vm-golden-image.adoc
@@ -9,11 +9,12 @@ Create a vSphere Windows virtual machine (VM) golden image.

.Prerequisites

-* You have installed a cluster on vSphere.
+* You have installed a cluster on vSphere configured with hybrid networking using OVN-Kubernetes.
+* You have defined a custom VXLAN port in your hybrid networking configuration to work around the link:https://docs.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/common-problems#pod-to-pod-connectivity-between-hosts-is-broken-on-my-kubernetes-cluster-running-on-vsphere[pod-to-pod connectivity issue between hosts].

.Procedure

-. Create the VM from an updated version of the Windows Server 1909 VM image that includes the following link:https://support.microsoft.com/en-us/help/4565351/windows-10-update-kb4565351[Microsoft patch].
+. Create the VM from an updated version of the Windows Server 1909 VM image that includes the link:https://support.microsoft.com/en-us/help/4565351/windows-10-update-kb4565351[Microsoft patch KB4565351]. This patch is required to set the VXLAN UDP port, which is required for clusters installed on vSphere. This patch is not available for the `Windows Server 2019` VM image.

. Create the `C:\Users\Administrator\.ssh\authorized_keys` file in the Windows VM containing the public key that corresponds to the private key that resides in the secret you created in the `openshift-windows-machine-config-operator` namespace. The private key of the secret was created when first installing the Windows Machine Config Operator (WMCO) to give {product-title} access to Windows VMs. The `authorized_keys` file is used to configure SSH in the Windows VM.

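The `authorized_keys` content is simply the OpenSSH public key derived from that private key. The following sketch demonstrates the derivation; the throwaway key pair generated here is purely illustrative, and in practice you would run `ssh-keygen -y` against the private key stored in the WMCO secret:

```shell
# Illustrative only: generate a throwaway key pair to demonstrate the
# derivation. In practice, use the private key from the secret in the
# openshift-windows-machine-config-operator namespace instead.
ssh-keygen -t rsa -b 4096 -N "" -f ./demo_key -q

# Derive the public key in the format that the Windows VM's
# authorized_keys file expects, and save it for copying to the VM.
ssh-keygen -y -f ./demo_key > ./authorized_keys
```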
25 changes: 20 additions & 5 deletions windows_containers/windows-containers-release-notes-2-x.adoc
@@ -28,9 +28,24 @@ WMCO supports self-managed clusters built using installer-provisioned infrastructure on the following platforms:
* Microsoft Azure
* VMware vSphere

-The following Windows Server operating systems are supported in this release of the WMCO:
+The following Windows Server operating systems are supported in this release of the WMCO, depending on which platform your cluster is installed on:

-* Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019
+[cols="1,4",options="header"]
+|===
+
+|WMCO platform
+|Windows Server version
+
+|AWS
+|Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 version 10.0.17763.1457 or earlier
+
+|Azure
+|Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 version 10.0.17763.1457 or earlier
+
+|vSphere
+|Windows Server Semi-Annual Channel (SAC): Windows Server 1909 with link:https://support.microsoft.com/en-us/help/4565351/windows-10-update-kb4565351[Microsoft patch KB4565351]
+
+|===

Version 2.x of the WMCO is only compatible with {product-title} 4.7.

@@ -56,7 +71,7 @@ Windows nodes are now fully integrated with most of the monitoring capabilities

* The Prometheus windows_exporter used by the WMCO currently collects metrics through HTTP, so it is considered unsafe. You must ensure that only trusted users can retrieve metrics from the endpoint. The windows_exporter feature recently added support for HTTPS configuration, but this configuration has not been implemented for WMCO. Support for HTTPS configuration in the WMCO will be added in a future release.

-* If you have a cluster with two Windows nodes, and you create a web server deployment with two replicas, the pods each land on a Windows compute node. In this scenario, if you create a `Service` object with type `LoadBalancer`, communication with the load balancer endpoint is not reliable. To mitigate this issue, you must use Windows Server 2019 with a version 10.0.17763.1457 or earlier. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1905950[*BZ#1905950*])
+* If you have a cluster with two Windows nodes, and you create a web server deployment with two replicas, each pod lands on a Windows compute node. In this scenario, if you create a `Service` object with type `LoadBalancer`, communication with the load balancer endpoint is not reliable. To mitigate this issue for clusters installed on AWS or Azure, you must use Windows Server 2019 version 10.0.17763.1457 or earlier. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1905950[*BZ#1905950*])
+
To pick the correct image for the `MachineSet` object, follow the instructions based on your cloud provider:
+
@@ -68,7 +83,7 @@ To pick the correct image for the `MachineSet` object, follow the instructions based on your cloud provider:
[source,terminal]
----
$ aws ec2 describe-images \
-    --filters Name=name,Values=Windows_Server-2019-English-Full-ContainersLatest-2021.01.13 \
+    --filters Name=name,Values=Windows_Server-2019-English-Full-ContainersLatest-2020.09.09 \
--region <region> \// <1>
--query 'Images[*].[ImageId]' \
--output=json | jq .[0][0]
@@ -106,4 +121,4 @@ $ az vm image list --all --location <location> \// <1>
----
--

-* There is currently an issue in Windows Server 2019 versions released after version `10.0.17763.1457` where Windows workloads behind a load balancer are unreachable for clusters installed on AWS. The Windows Server 2019 version `10.0.17763.1457` and earlier are recommended to work around this issue; however, these earlier images are no longer available. This image version unavailability prevents the ability to run Windows workloads behind a load balancer on clusters installed on AWS at this time. See this link:https://github.com/microsoft/Windows-Containers/issues/78[Microsoft Windows Containers issue] for more details.
+* There is currently an issue in Windows Server 2019 versions released after version `10.0.17763.1457` where Windows workloads behind a load balancer are unreachable for clusters installed on AWS and Azure. You must use Windows Server 2019 version `10.0.17763.1457` or earlier to work around this issue; however, these earlier images are no longer available for AWS. This image version unavailability prevents the ability to run Windows workloads behind a load balancer on clusters installed on AWS at this time. See the link:https://github.com/microsoft/Windows-Containers/issues/78[Microsoft Windows Containers issue] for more details.
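
The affected/unaffected boundary above is the dotted build number `10.0.17763.1457`. As an illustrative helper (not part of the product documentation), a shell version-sort comparison can tell whether a given build falls after the cutoff:

```shell
# Compare a Windows build number against the 10.0.17763.1457 cutoff using
# sort -V (version-aware sort). A build strictly newer than the cutoff is
# affected by the load balancer issue described above.
build="10.0.17763.1577"     # illustrative build to check
cutoff="10.0.17763.1457"
newest=$(printf '%s\n' "$build" "$cutoff" | sort -V | tail -n1)
if [ "$newest" = "$build" ] && [ "$build" != "$cutoff" ]; then
  echo "affected"
else
  echo "ok"
fi
```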