
AKS on Azure Stack HCI and Windows Server 2024-06-11 update

Released by @walterov on 11 Jun 2024 at 23:28

Announcements

  • With this release, we are retiring the older AKS-HCI versions from January 2023, March 2023, and May 2023. Please update your clusters to remain in support (a minimal upgrade sketch follows the version table below).
| Component | 2024-02-08 | 2024-06-11 |
| --- | --- | --- |
| AKS hybrid | 1.0.22.10209 | 1.0.23.10605 |
| Kubernetes versions | 1.25.6, 1.25.11, 1.26.3, 1.26.6, 1.27.1, 1.27.3 | 1.26.10, 1.26.12, 1.27.7, 1.27.9, 1.28.3, 1.28.5 |
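
To move off a retired release, you can check the versions currently deployed and then upgrade the AKS host and your workload clusters. The following is a minimal sketch using the AksHci PowerShell module; the cluster name mycluster-01 and the target Kubernetes version are illustrative placeholders, and the exact cmdlets and parameters available may vary with your installed module version.

```powershell
# Minimal upgrade sketch (assumes the AksHci PowerShell module is installed).
Import-Module AksHci

# Show the currently deployed AKS hybrid version and the workload clusters.
Get-AksHciVersion
Get-AksHciCluster

# Update the AKS host (management cluster) to the latest available release.
Update-AksHci

# List the Kubernetes versions offered by this release, then upgrade a
# workload cluster. "mycluster-01" and "v1.28.5" are placeholders.
Get-AksHciKubernetesVersion
Update-AksHciCluster -name mycluster-01 -kubernetesVersion v1.28.5
```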

Release Notes

Version Numbers

  • KVA version: 1.28.5
  • PowerShell: 1.2.4
  • Containerd: 1.6.26
  • WAC: 2306 GA version
  • AKS Extension in WAC: 4.11.0

What's New

Features

  • Improvements in fail-back functionality allow AKS to efficiently use the recovered servers after a fail-over event has been resolved and the HCI cluster has been restored.

Software updates

  • We have updated several components and dependencies to the latest versions to fix the following CVEs (a version-check sketch follows the list):
    • CVE-2023-5528 Kubernetes Improper Input Validation vulnerability
    • CVE-2023-3955 Insufficient input sanitization on Windows nodes leads to privilege escalation
    • CVE-2023-3676 Insufficient input sanitization on Windows nodes leads to privilege escalation
    • CVE-2023-45288 - Bumped golang.org/x/net to v0.23.0 to address this
    • CVE-2024-24786 - Updated google.golang.org/protobuf to v1.33.0 to resolve this
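
After updating, you can confirm that the cluster is running the patched builds. This is a minimal sketch run from a PowerShell session with kubectl configured against the workload cluster; it only surfaces the reported component versions and does not scan for the CVEs themselves.

```powershell
# Report the API server version after the update.
kubectl version

# The wide output includes the kubelet version and the container runtime;
# for this release, containerd 1.6.26 is expected on the nodes.
kubectl get nodes -o wide
```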

Bug Fixes

  - Fixed a bug that allowed a VHD to be deleted while it was still attached to a VM.
  - Fixed several MOC bugs and made enhancements to the VHD attach/detach/cleanup processes.

Some important bug fixes and regressions addressed in Kubernetes 1.28 are called out below

(please check the full list in the Kubernetes 1.28 release notes, CHANGELOG-1.28.md)

- Fixed pod restart after node reboot when the NewVolumeManagerReconstruction feature gate is enabled and SELinuxMountReadWriteOncePod is disabled.
- Fixed a race condition in kube-proxy when using LocalModeNodeCIDR, to avoid dropping Services traffic if the Node object is recreated while kube-proxy is starting.
- Fixed a race condition between Run() and SetTransform() and SetWatchErrorHandler() in shared informers. 
- Fixed a regression in default configurations, which enabled PodDisruptionConditions by default, that prevented the control plane's pod garbage collector from deleting pods that contained duplicated field keys (env. variables with repeated keys or container ports). 
- Fixed an issue where, when a pod with an ordinal number lower than the rolling update partition was deleted, it came back up with the updated image.
- Fixed the requeue time calculation in the cronjob controller, which results in proper handling of failed/stuck jobs.
- Service Controller: update load balancer hosts after node's ProviderID is updated 
- Fix a bug in cronjob controller where already created jobs may be missing from the status.
- Fixed a bug where containers would not start on cgroupv2 systems where swap is disabled. 
- Fixed a regression in kube-proxy where it might refuse to start if given single-stack IPv6 configuration options on a node that has both IPv4 and IPv6 IPs. 
- Fixed an issue so that all the pods in a namespace are no longer drained when an empty selector, i.e. "{}", is specified in a Pod Disruption Budget (PDB); a sketch of such a PDB follows this list.
- Fixed attaching volumes after detach errors. Now volumes that failed to detach are not treated as attached, Kubernetes will make sure they are fully attached before they can be used by pods. 
- Fixed a bug to surface events for the following metrics: apiserver_encryption_config_controller_automatic_reload_failures_total, apiserver_encryption_config_controller_automatic_reload_last_timestamp_seconds, apiserver_encryption_config_controller_automatic_reload_success_total.
- Fixed a bug where Services using finalizers could hold onto ClusterIP and/or NodePort allocated resources for longer than expected if the finalizer was removed using the status subresource.
- Revised the logic for DaemonSet rolling update to exclude nodes if scheduling constraints are not met. This eliminates the problem of rolling updates to a DaemonSet getting stuck around tolerations
- Sometimes, the scheduler incorrectly placed a pod in the "unschedulable" queue instead of the "backoff" queue. This happened when some plugin previously declared the pod as "unschedulable" and then in a later attempt encounters some other error. Scheduling of that pod then got delayed by up to five minutes, after which periodic flushing moved the pod back into the "active" queue.
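
For context on the Pod Disruption Budget fix above, this is a minimal sketch of a PDB with an empty selector; under policy/v1, selector: {} matches every pod in its namespace, which is why the earlier behavior could affect all pods during a drain. The namespace, name, and minAvailable value are illustrative placeholders, and the here-string piped to kubectl assumes a PowerShell session with access to the cluster.

```powershell
# Apply a PDB whose selector is {} (matches all pods in the "demo" namespace).
# Namespace, name, and minAvailable are illustrative placeholders.
@"
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: all-pods-pdb
  namespace: demo
spec:
  minAvailable: 1
  selector: {}
"@ | kubectl apply -f -

# Inspect the PDB and the pods it currently covers.
kubectl get pdb all-pods-pdb -n demo
```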