diff --git a/_topic_map.yml b/_topic_map.yml index c4a33c222f60..270836b5976a 100644 --- a/_topic_map.yml +++ b/_topic_map.yml @@ -40,8 +40,8 @@ Topics: - Name: Overview File: index Distros: openshift-enterprise,openshift-dedicated -- Name: OpenShift Container Platform 3.6 Release Notes - File: ocp_3_6_release_notes +- Name: OpenShift Container Platform 3.7 Release Notes + File: ocp_3_7_release_notes Distros: openshift-enterprise - Name: Latest Product Updates File: osd_latest_product_updates diff --git a/admin_guide/managing_networking.adoc b/admin_guide/managing_networking.adoc index dda0528cb904..d19d6698fd49 100644 --- a/admin_guide/managing_networking.adoc +++ b/admin_guide/managing_networking.adoc @@ -30,7 +30,7 @@ plugin], you can manage the separate pod overlay networks for projects using the administrator CLI. ifdef::openshift-enterprise,openshift-origin[] See the xref:../install_config/configuring_sdn.adoc#install-config-configuring-sdn[Configuring the SDN] section -for plugin configuration steps, if necessary. +for plug-in configuration steps, if necessary. endif::openshift-enterprise,openshift-origin[] [[joining-project-networks]] @@ -1023,7 +1023,7 @@ the case with unicast. The *ovs-subnet* and *ovs-multitenant* plugins have their own legacy models of network isolation, and don't support Kubernetes `NetworkPolicy`. However, `NetworkPolicy` support -is available by using the *ovs-networkpolicy* plugin. +is available by using the *ovs-networkpolicy* plug-in. In a cluster xref:../install_config/configuring_sdn.adoc#install-config-configuring-sdn[configured @@ -1260,4 +1260,4 @@ services to include this site in their HSTS preload lists. For example, sites such as Google can construct a list of sites that have `preload` set. Browsers can then use these lists to determine which sites to only talk to over HTTPS, even before they have interacted with the site. Without `preload` set, they need -to have talked to the site over HTTPS to get the header. \ No newline at end of file +to have talked to the site over HTTPS to get the header. diff --git a/architecture/service_catalog/index.adoc b/architecture/service_catalog/index.adoc index 9d186a20308f..8f71e16cee41 100644 --- a/architecture/service_catalog/index.adoc +++ b/architecture/service_catalog/index.adoc @@ -77,7 +77,7 @@ known as _unbinding_. Part of the deletion process includes deleting the secret that references the service binding being deleted. Once all the service bindings are removed, the service instance may be deleted. -Deleting the service instance is known as _deprovisioning_. +Deleting the service instance is known as _deprovisioning_. [[service-catalog-concepts-terminology]] == Concepts and Terminology diff --git a/install_config/install/advanced_install.adoc b/install_config/install/advanced_install.adoc index 52d102bb9b97..ccc50da913e5 100644 --- a/install_config/install/advanced_install.adoc +++ b/install_config/install/advanced_install.adoc @@ -2171,7 +2171,7 @@ Technology Preview feature. The installer uses modularized playbooks allowing administrators to install specific components as needed. By breaking up the roles and playbooks, there is better targeting of ad hoc administration tasks. This results in an increased -level of control during installations and results in time savings. +level of control during installations and results in time savings. 
[NOTE] ==== diff --git a/release_notes/images/3.7-add-to-project-options.png b/release_notes/images/3.7-add-to-project-options.png new file mode 100644 index 000000000000..91551b149403 Binary files /dev/null and b/release_notes/images/3.7-add-to-project-options.png differ diff --git a/release_notes/images/3.7-add-to-project-wizard-animated.gif b/release_notes/images/3.7-add-to-project-wizard-animated.gif new file mode 100644 index 000000000000..a069d99f9554 Binary files /dev/null and b/release_notes/images/3.7-add-to-project-wizard-animated.gif differ diff --git a/release_notes/images/3.7-bind-mongodb-nodejs-at-creation.gif b/release_notes/images/3.7-bind-mongodb-nodejs-at-creation.gif new file mode 100644 index 000000000000..af755c312888 Binary files /dev/null and b/release_notes/images/3.7-bind-mongodb-nodejs-at-creation.gif differ diff --git a/release_notes/images/3.7-notification-drawer.png b/release_notes/images/3.7-notification-drawer.png new file mode 100644 index 000000000000..17bc7983da30 Binary files /dev/null and b/release_notes/images/3.7-notification-drawer.png differ diff --git a/release_notes/images/3.7-search-filter-catalog.gif b/release_notes/images/3.7-search-filter-catalog.gif new file mode 100644 index 000000000000..1297824afffd Binary files /dev/null and b/release_notes/images/3.7-search-filter-catalog.gif differ diff --git a/release_notes/images/37-quota-warning.png b/release_notes/images/37-quota-warning.png new file mode 100644 index 000000000000..b4e36161f3a7 Binary files /dev/null and b/release_notes/images/37-quota-warning.png differ diff --git a/release_notes/images/37-statefulset-page-envar-editor.png b/release_notes/images/37-statefulset-page-envar-editor.png new file mode 100644 index 000000000000..6398891d99e3 Binary files /dev/null and b/release_notes/images/37-statefulset-page-envar-editor.png differ diff --git a/release_notes/images/crio-3-7.png b/release_notes/images/crio-3-7.png new file mode 100644 index 000000000000..8ca341e7d98e Binary files /dev/null and b/release_notes/images/crio-3-7.png differ diff --git a/release_notes/index.adoc b/release_notes/index.adoc index 707f60fa9d19..6a4e60901b22 100644 --- a/release_notes/index.adoc +++ b/release_notes/index.adoc @@ -7,7 +7,7 @@ :experimental: ifdef::openshift-enterprise[] -The following release notes for {product-title} 3.6 summarize all new features, +The following release notes for {product-title} 3.7 summarize all new features, major corrections from the previous version, and any known bugs upon general availability. endif::[] @@ -16,8 +16,8 @@ ifdef::openshift-dedicated[] The following release notes for {product-title} summarize key features upon general availability. OpenShift Dedicated uses the same code base as OpenShift Container Platform 3; for more detailed technical notes, see the -link:https://docs.openshift.com/container-platform/3.6/release_notes/ocp_3_6_release_notes.html[OpenShift -Container Platform 3.6 Release Notes]. +link:https://docs.openshift.com/container-platform/3.7/release_notes/ocp_3_7_release_notes.html[OpenShift +Container Platform 3.7 Release Notes]. endif::[] [[release-versioning-policy]] @@ -29,13 +29,13 @@ beta APIs (which may occasionally be changed in a non-backwards compatible manner). The {product-title} version must match between master and node hosts, excluding -temporary mismatches during cluster upgrades. For example, in a 3.6 cluster, all -masters must be 3.6 and all nodes must be 3.6. 
However, {product-title} will
-continue to support older `oc` clients against newer servers. For example, a 3.6
-`oc` will work against 3.3, 3.4, 3.5, and 3.6 servers.
+temporary mismatches during cluster upgrades. For example, in a 3.7 cluster, all
+masters must be 3.7 and all nodes must be 3.7. However, {product-title} will
+continue to support older `oc` clients against newer servers. For example, a 3.4
+`oc` will work against 3.3, 3.4, and 3.5 servers.
Changes of APIs for non-security related reasons will involve, at minimum, two
-minor releases (3.1 to 3.2 to 3.3, for example) to allow older `oc` to update.
+minor releases (3.4 to 3.5 to 3.6, for example) to allow older `oc` to update.
Using new capabilities may require newer `oc`. A 3.2 server may have additional
capabilities that a 3.1 `oc` cannot use and a 3.2 `oc` may have additional
capabilities that are not supported by a 3.1 server.
diff --git a/release_notes/ocp_3_6_release_notes.adoc b/release_notes/ocp_3_6_release_notes.adoc
index 93448627b15e..f1e4f692b6b4 100644
--- a/release_notes/ocp_3_6_release_notes.adoc
+++ b/release_notes/ocp_3_6_release_notes.adoc
@@ -30,7 +30,7 @@ security, privacy, compliance, and governance requirements.
== About This Release
Red Hat {product-title} version 3.6
-(link:https://access.redhat.com/errata/RHBA-2017:1716[RHBA-2017:1716]) is now
+(link:https://access.redhat.com/errata/RHEA-2017:1716[RHEA-2017:1716]) is now
available. This release is based on
link:https://github.com/openshift/origin/releases/tag/v3.6.0-rc.0[OpenShift
Origin 3.6]. New features, changes, bug fixes, and known issues that pertain to
diff --git a/release_notes/ocp_3_7_release_notes.adoc b/release_notes/ocp_3_7_release_notes.adoc
new file mode 100644
index 000000000000..36a958601a80
--- /dev/null
+++ b/release_notes/ocp_3_7_release_notes.adoc
@@ -0,0 +1,2660 @@
+[[release-notes-ocp-3-7-release-notes]]
+= {product-title} 3.7 Release Notes
+{product-author}
+{product-version}
+:data-uri:
+:icons:
+:experimental:
+:toc: macro
+:toc-title:
+:prewrap!:
+
+toc::[]
+
+== Overview
+
+Red Hat {product-title} is a Platform as a Service (PaaS) that provides
+developers and IT organizations with a cloud application platform for deploying
+new applications on secure, scalable resources with minimal configuration and
+management overhead. {product-title} supports a wide selection of
+programming languages and frameworks, such as Java, Ruby, and PHP.
+
+Built on Red Hat Enterprise Linux and Kubernetes, {product-title}
+provides a secure and scalable multi-tenant operating system for today’s
+enterprise-class applications, while providing integrated application runtimes
+and libraries. {product-title} brings the OpenShift PaaS platform to customer
+data centers, enabling organizations to implement a private PaaS that meets
+security, privacy, compliance, and governance requirements.
+
+[[ocp-37-about-this-release]]
+== About This Release
+
+Red Hat {product-title} version 3.7
+(link:https://access.redhat.com/errata/RHSA-2017:3188[RHSA-2017:3188]) is now
+available. This release is based on
+link:https://github.com/openshift/origin/releases/tag/v3.7.0-rc.0[OpenShift
+Origin 3.7]. New features, changes, bug fixes, and known issues that pertain to
+{product-title} 3.7 are included in this topic.
+
+{product-title} 3.7 is supported on RHEL 7.3, 7.4.2, and Atomic Host 7.4.2 and
+newer with the latest packages from Extras, including Docker 1.12.
+
+For initial installations, see the
+xref:../install_config/install/planning.adoc#install-config-install-planning[Installing
+a Cluster] topics in the
+xref:../install_config/index.adoc#install-config-index[Installation and
+Configuration] documentation.
+
+To upgrade to this release from a previous version, see the
+xref:../install_config/upgrading/index.adoc#install-config-upgrading-index[Upgrading
+a Cluster] topics in the
+xref:../install_config/index.adoc#install-config-index[Installation and
+Configuration] documentation.
+
+[[ocp-37-new-features-and-enhancements]]
+== New Features and Enhancements
+
+This release adds improvements related to the following components and concepts.
+
+[[ocp-37-container-orchestration]]
+=== Container Orchestration
+
+[[ocp-37-kubernetes-upstream]]
+==== Kubernetes Upstream
+
+Many core features Google announced in June for Kubernetes 1.7 were the result
+of OpenShift engineering. Red Hat continues to influence the product in the
+areas of storage, networking, resource management, authentication and
+authorization, multi-tenancy, security, service deployments, templating, and
+controller functionality.
+
+[[ocp-37-crio]]
+==== CRI-O (Technology Preview)
+
+This feature is currently in xref:ocp-37-technology-preview[Technology Preview]
+and not for production workloads. Builds do not yet work with CRI-O.
+
+CRI-O v1.0 is a lightweight, native Kubernetes container runtime interface. By
+design, it provides only the runtime capabilities needed by the kubelet. CRI-O is
+designed to be part of Kubernetes and evolve in lock-step with the platform.
+
+CRI-O brings:
+
+* A minimal and secure architecture.
+* Excellent scale and performance.
+* The ability to run any Open Container Initiative (OCI) or Docker image.
+* Familiar operational tooling and commands.
+
+image::crio-3-7.png[CRI-O]
+
+[[ocp-37-cluster-wide-tolerations-per-namespace-tolerations]]
+==== Cluster-wide Tolerations and Per-namespace Tolerations to Control Pod Placement
+
+In a multi-tenant environment, you can leverage admission controllers to define
+rules that govern a cluster when a tenant does not set a toleration for
+placement.
+
+The following options are offered to administrators, with the namespace setting
+overriding the cluster setting:
+
+* Cluster-wide and per-namespace default tolerations for pods.
+* Cluster-wide and per-namespace white-listing of tolerations for pods.
+
+.Cluster-wide Off Example
+----
+admissionConfig:
+  pluginConfig:
+    PodTolerationRestriction:
+      configuration:
+        kind: DefaultAdmissionConfig
+        apiVersion: v1
+        disable: true
+----
+
+.Cluster-wide On Example
+----
+admissionConfig:
+  pluginConfig:
+    PodTolerationRestriction:
+      configuration:
+        apiVersion: podtolerationrestriction.admission.k8s.io/v1alpha1
+        kind: Configuration
+        default:
+        - key: key3
+          value: value3
+        whitelist:
+        - key: key1
+          value: value1
+        - key: key3
+          value: value3
+----
+
+.Namespace-specific Example
+----
+apiVersion: v1
+kind: Namespace
+metadata:
+  annotations:
+    openshift.io/description: ""
+    openshift.io/display-name: ""
+    openshift.io/sa.scc.mcs: s0:c8,c7
+    openshift.io/sa.scc.supplemental-groups: 1000070000/10000
+    openshift.io/sa.scc.uid-range: 1000070000/10000
+    scheduler.alpha.kubernetes.io/defaultTolerations: '[ { "key": "key1", "value":"value1" }]'
+    scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[ { "key": "key1", "value":
+      "value1" }, { "key": "key2", "value": "value2" } ]'
+  generateName: dma-
+spec:
+  finalizers:
+  - openshift.io/origin
+  - kubernetes
+----
+
+[[ocp-37-security]]
+=== Security
+
+[[ocp-37-documented-private-public-key-configurations-and-crypto-levels]]
+==== Documented Private and Public Key Configurations and Crypto Levels
+
+While {product-title} is a secured-by-default implementation of Kubernetes,
+there is now documentation on what security protocols and ciphers are used.
+
+{product-title} leverages Transport Layer Security (TLS) cipher suites and JSON
+Web Algorithms (JWA) crypto algorithms, and offers external libraries such as
+the Generic Security Service Application Program Interface (GSSAPI) and
+libgpgme.
+
+xref:../architecture/index.adoc#architecture-index[Private and public key
+configurations and Crypto levels] are now documented for {product-title}.
+
+[[ocp-37-node-authorizer-node-restriction-admission-plug-in]]
+==== Node Authorizer and Node Restriction Admission Plug-in
+
+Pods can no longer try to gain information from secrets, configuration maps,
+PVs, PVCs, or API objects from other nodes.
+
+The link:https://kubernetes.io/docs/admin/authorization/node/[Node authorizer]
+governs which API operations a kubelet can perform, spanning read-, write-, and
+auth-related operations. In order for the admission controller to know the
+identity of the node to enforce the rules, nodes are provisioned with
+credentials that identify them with the user name `system:node:<node_name>` and
+group `system:nodes`.
+
+These enforcements are in place by default on all new installations of
+{product-title} 3.7. For upgrades from {product-title} 3.6, they are not in
+place due to the `system:nodes` RBAC grant carried over from {product-title}
+3.6. To turn the enforcements on, run:
+
+----
+# oadm policy remove-cluster-role-from-group system:node system:nodes
+----
+
+[[ocp-37-advanced-auditing]]
+==== Advanced Auditing (Technology Preview)
+
+This feature is currently in xref:ocp-37-technology-preview[Technology Preview]
+and not for production workloads.
+
+With Advanced Auditing (currently in Technology Preview), administrators are now
+exposed to more information from the API call within the audit trail. This
+provides deeper traceability of what is occurring across the cluster. All login
+events, as well as modifications to role bindings and SCCs, are also captured at
+the default logging level.
+
+{product-title} now has an audit `policyFile` or `policyConfiguration` where
+administrators can filter what they want to capture.
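+
+For illustration, a minimal sketch of what this can look like in
+*_master-config.yaml_* follows. The file paths and retention values here are
+assumptions for this example, and the policy file itself follows the upstream
+Kubernetes audit policy format:
+
+----
+auditConfig:
+  enabled: true
+  auditFilePath: /var/log/openshift-audit.log
+  maximumFileRetentionDays: 10
+  maximumFileSizeMegabytes: 10
+  maximumRetainedFiles: 10
+  policyFile: /etc/origin/master/audit-policy.yaml
+  logFormat: json
+----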
+
+See
+xref:../install_config/master_node_configuration.adoc#master-node-config-advanced-audit[Advanced
+Audit] for more information.
+
+[[ocp-37-complete-upstreaming-of-rbac-then-downstreaming]]
+==== Complete Upstreaming of RBAC, Then Downstreaming it Back into OpenShift
+
+The rolebinding and RBAC experience is now the same across all Kubernetes
+distributions.
+
+Administrators do not have to do anything for this migration to occur. The
+upgrade process to {product-title} 3.7 offers a seamless experience. Now, the
+user experience is consistent with upstream.
+
+A role can be defined within a namespace with a `Role`, or cluster-wide with a
+`ClusterRole`.
+
+A `RoleBinding` or `ClusterRoleBinding` binds a role to subjects. Subjects can
+be groups, users, or service accounts. A role binding grants the permissions
+defined in a role.
+
+[[ocp-37-longer-lived-api-tokens-to-oauth-clients]]
+==== Issue Longer-lived API Tokens to OAuth Clients
+
+Administrators now have the ability to set different token timeouts for the
+different ways users connect to {product-title} (for example, via the `oc` command
+line, from a GitHub authentication, or from the web console).
+
+Administrators can edit `oauthclients` and set the `accessTokenMaxAgeSeconds` to
+a time value in seconds that meets their needs.
+
+There are three possible OAuth client types:
+
+. `openshift-web-console` - The client used to request tokens for the OpenShift web console.
+
+. `openshift-browser-client` - The client used to request tokens at
+*_/oauth/token/request_* with a user-agent that can handle interactive logins,
+such as using Auth from GitHub, Google Authenticator, and so on.
+
+. `openshift-challenging-client` - The client used to request tokens with a user-agent that can
+  handle WWW-Authenticate challenges, such as the `oc` command line.
+
+- When `accessTokenMaxAgeSeconds` is set to `0`, tokens do not expire.
+- When left blank, {product-title} uses the definition in `master-config`.
+- Edit the client of interest via:
++
+----
+# oc edit oauthclients openshift-browser-client
+----
+
+- Set `accessTokenMaxAgeSeconds` to `600`.
+- Check the setting via:
++
+----
+# oc get oauthaccesstoken
+----
+
+See
+xref:../architecture/additional_concepts/other_api_objects.adoc#accessTokenMaxAgeSeconds[Other
+API Objects] for more information.
+
+[[ocp-37-scc-now-supports-flexvolume]]
+==== Security Context Constraints Now Support flexVolume
+
+flexVolumes allow users to integrate with new APIs easily by mounting in the
+items needed for integration. For example, they make it possible to bind mount
+certain files, without overwriting whole directories, to integrate with
+Kerberos.
+
+Administrators are now able to grant users access to specific flexVolume
+driver names. Previously, the only way administrators could restrict flexVolumes
+was by setting them as `on` or `off`.
+
+[[ocp-37-storage]]
+=== Storage
+
+[[ocp-37-local-persistent-volumes]]
+==== Local Storage Persistent Volumes (Technology Preview)
+
+Local storage persistent volumes is a feature currently in
+xref:ocp-37-technology-preview[Technology Preview] and not for production
+workloads.
+
+Local persistent volumes (PVs) now allow tenants to request storage that is
+local to a node through the regular persistent volume claim (PVC) process
+without needing to know the node. Local storage is commonly used
+in data store applications.
+
+The administrator needs to create the local storage devices on the nodes, mount
+them under directories, and then manually create the persistent volumes (PVs).
+Alternatively, they can use an external provisioner and feed it the node
+configuration via `configMaps`.
+
+Example persistent volume named `example-local-pv` that tenants can now claim:
+
+----
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: example-local-pv
+  annotations:
+    "volume.alpha.kubernetes.io/node-affinity": '{
+      "requiredDuringSchedulingIgnoredDuringExecution": {
+        "nodeSelectorTerms": [
+          { "matchExpressions": [
+            { "key": "kubernetes.io/hostname",
+              "operator": "In",
+              "values": ["my-node"]
+            }
+          ]}
+        ]}
+      }'
+spec:
+  capacity:
+    storage: 5Gi
+  accessModes:
+  - ReadWriteOnce
+  persistentVolumeReclaimPolicy: Retain
+  storageClassName: local-storage
+  local:
+    path: /mnt/disks/vol1
+----
+
+See
+xref:../install_config/configuring_local.adoc#install-config-configuring-local[Configuring
+for Local Volume] and
+xref:../install_config/persistent_storage/persistent_storage_local.adoc#install-config-persistent-storage-persistent-storage-local[Persistent
+Storage Using Local Volume] for more information.
+
+[[ocp-37-tenant-driven-storage-snapshotting]]
+==== Tenant-driven Storage Snapshotting (Technology Preview)
+
+Tenant-driven storage snapshotting is currently in
+xref:ocp-37-technology-preview[Technology Preview] and not for production
+workloads.
+
+Tenants now have the ability to leverage the underlying storage technology
+backing the persistent volume (PV) assigned to them to make a snapshot of their
+application data. Tenants can also now restore a given snapshot from the past to
+their current application.
+
+An external provisioner is used to access the EBS, GCE pDisk, HostPath, and
+Cinder snapshotting APIs. This Technology Preview feature has been tested with
+EBS and HostPath. The tenant must stop the pods and start them manually.
+
+. The administrator runs an external provisioner for the cluster. These are images
+from the Red Hat Container Catalog.
+
+. The tenant creates a PVC and owns a PV from one of the supported storage
+solutions. The administrator must create a new `StorageClass` in the cluster with:
++
+----
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: snapshot-promoter
+provisioner: volumesnapshot.external-storage.k8s.io/snapshot-promoter
+----
+
+. The tenant can create a snapshot of a PVC named `gce-pvc` and the resulting
+snapshot will be called `snapshot-demo`.
++
+----
+$ oc create -f snapshot.yaml
+
+apiVersion: volumesnapshot.external-storage.k8s.io/v1
+kind: VolumeSnapshot
+metadata:
+  name: snapshot-demo
+  namespace: myns
+spec:
+  persistentVolumeClaimName: gce-pvc
+----
+
+. Now, they can restore their pod to that snapshot.
++
+----
+$ oc create -f restore.yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: snapshot-pv-provisioning-demo
+  annotations:
+    snapshot.alpha.kubernetes.io/snapshot: snapshot-demo
+spec:
+  storageClassName: snapshot-promoter
+----
+
+[[ocp-37-storage-classes-get-zones]]
+==== Storage Classes Get Zones
+
+Public clouds typically do not allow storage to cross zones or regions, so
+tenants sometimes need the ability to specify a particular zone.
+
+In {product-title} 3.7, administrators can now leverage a `zones` definition
+within the `StorageClass`:
+
+----
+kind: StorageClass
+apiVersion: storage.k8s.io/v1beta1
+metadata:
+  name: slow
+provisioner: kubernetes.io/gce-pd
+parameters:
+  type: pd-standard
+  zones: zone1,zone2
+----
+
+See
+xref:../install_config/persistent_storage/dynamically_provisioning_pvs.adoc#install-config-persistent-storage-dynamically-provisioning-pvs[Dynamic
+Provisioning and Creating Storage Classes] for more information.
+
+[[ocp-37-increased-volume-density]]
+==== Increased Persistent Volume Density Support by CNS
+
+Container-native storage (CNS) on {product-title} 3.7 now supports much higher
+persistent volume density (three times more) to support a large number of
+applications at scale. This is due to the introduction of brick-multiplexing
+support in GlusterFS.
+
+Over 1,000 volumes in a 3-node cluster with 32 GB of RAM per node available to
+GlusterFS have been successfully tested. Also, 300 Block PVs are supported now on
+3-node CNS.
+
+[[ocp-37-cns-multi-protocol-support]]
+==== CNS Multi-protocol (File, Block, and S3) Support for OpenShift
+
+Container-native storage (CNS) is now extended to support iSCSI and S3 back ends
+for {product-title}. Heketi is enhanced to support persistent volume (PV)
+expansion, volume options, and HA.
+
+A block device-based RWO implementation is added to CNS to improve the
+performance of Elasticsearch, PostgreSQL, and so on. With {product-title} 3.7,
+Elasticsearch and Cassandra are fully supported.
+
+[[ocp-37-cns-full-support-infrastructure-services]]
+==== CNS Full Support for Infrastructure Services
+
+Container-native storage (CNS) now fully supports all {product-title}
+infrastructure services: registry, logging, and metrics.
+
+{product-title} logging (with Elasticsearch) and {product-title} metrics (with
+Cassandra) are fully supported on persistent volumes backed by CNS/CRS iSCSI
+block storage.
+
+The {product-title} registry is hosted on CNS/CRS by RWX persistent volumes,
+providing high availability and redundancy through Gluster architecture.
+
+Logging and metrics were tested at scale with 1000+ pods.
+
+[[ocp-37-automated-cns-deployment-with-openshift-advanced-installation]]
+==== Automated Container Native Storage Deployment with OpenShift Advanced Installation
+
+{product-title} 3.7 now includes an integrated and simplified installation of
+container-native storage (CNS) through the advanced installer. The advanced
+installer is enhanced with automated and integrated support for deploying CNS,
+including the block provisioner, the S3 provisioner, and files for a correctly
+configured out-of-the-box {product-title} and CNS. The CNS storage device
+details are added to the installer’s inventory file. The installer manages
+configuration and deployment of CNS, its dynamic provisioners, and other
+pertinent details.
+
+[[ocp-37-flexvolume-support-for-non-storage-use-cases]]
+==== Official FlexVolume Support for Non-storage Use Cases
+
+There is now a supported interface to allow you to bind and mount content into a
+running pod. FlexVolume is a script interface that runs on the kubelet and
+offers five main functions to help you mount in content such as device drivers,
+secrets, and certificates as bind mounts to the container from the host:
+
+* `init` - Initialize the volume driver.
+* `attach` - Attach the volume to the host.
+* `mount` - Mount the volume on the host. This is the part that makes the volume
+available to the host by mounting it in *_/var/lib/kubelet_*.
+* `unmount` - Unmount the volume. +* `detach` - Detach the volume from the host. + +[[ocp-37-scale]] +=== Scale + +[[ocp-37-scale-cluster-limits]] +==== Cluster Limits + +Updated guidance around +xref:../scaling_performance/cluster_limits.adoc#scaling-performance-cluster-limits[Cluster +Limits] for {product-title} 3.7 is now available. + +[[ocp-37-scale-tuned-profile-hierarchy]] +==== Updated Tuned Profile Hierarchy + +The xref:../scaling_performance/host_practices.adoc#scaling-performance-capacity-tuned-profile[Tuned Profile Hierarchy] +is updated as of 3.7. + +[[ocp-37-scale-cluster-loader]] +==== Cluster Loader + +Guidance regarding use of +xref:../scaling_performance/using_cluster_loader.adoc#scaling-performance-using-cluster-loader[Cluster +Loader] is now available with the release of {product-title} 3.7. Cluster Loader +is a tool that deploys large numbers of various objects to a cluster, which +creates user-defined cluster objects. Build, configure, and run Cluster Loader +to measure performance metrics of your {product-title} deployment at various +cluster states. + +[[ocp-37-scale-benefits-of-using-the-overlay-graph-driver]] +==== Guidance on Overlay Graph Driver with SELinux + +In {product-title} 3.7, guidance about the +xref:../scaling_performance/optimizing_storage.adoc#benefits-of-using-the-overlay-graph-driver[benefits +of using the Overlay Graph Driver with SELinux] is now available. + +[[ocp-37-scale-providing-storage-to-an-etcd-node-using-pci-passthrough-with-openstack]] +==== Providing Storage to an etcd Node Using PCI Passthrough with OpenStack + +Guidance on +xref:../scaling_performance/host_practices.adoc#providing-storage-to-an-etcd-node-using-pci-passthrough-with-openstack[Providing +Storage to an etcd Node Using PCI Passthrough with OpenStack] is now available. + +[[ocp-37-networking]] +=== Networking + +[[ocp-37-network-policy]] +==== Network Policy +Network Policy is now fully supported in {product-title} 3.7. + +Network Policy is an optional plug-in specification of how selections of pods +are allowed to communicate with each other and other network endpoints. It +provides fine-grained network namespace isolation using labels and port +specifications. + +After installing the Network Policy plug-in, an annotation that flips the +namespace from `allow all traffic` to `deny all traffic` must first be set on +the namespace. At that point, `NetworkPolicies` can be created that define what +traffic to allow. The annotation is as follows: + +---- +$ oc annotate namespace ${ns} 'net.beta.kubernetes.io/network-policy={"ingress":{"isolation":"DefaultDeny"}}' +---- + +[NOTE] +==== +The annotation is not needed when using the v1 API. +==== + +The allow-to-red policy specifies "all red pods in namespace `project-a` allow +traffic from any pods in any namespace." This does not apply to the red pod in +namespace `project-b` because `podSelector` only applies to the namespace in +which it was applied. + +.Policy applied to project +---- +kind: NetworkPolicy +apiVersion: extensions/v1beta1 +metadata: + name: allow-to-red +spec: + podSelector: + matchLabels: + type: red + ingress: + - {} +---- + +See +xref:../admin_guide/managing_networking.adoc#admin-guide-manage-networking[Managing +Networking] for more information. + +[[ocp-37-cluster-ip-range-more-flexible]] +==== Cluster IP Range Now More Flexible + +Cluster IP ranges are now more flexible by allowing multiple subnets for hosts. +This provides the capability to allocate multiple, smaller IP address ranges for +the cluster. 
This makes it easier to migrate from one allocated IP range to
+another.
+
+There are multiple comma-delimited CIDRs in the configuration file. Each node is
+allocated only a single subnet from within any of the available ranges. You
+cannot allocate different-sized host subnets from a single CIDR, or use this to
+change the host subnet size. The `clusterNetworkCIDRs` can be different sizes,
+but must be equal to or larger than the host subnet size. Nodes cannot use
+subnets that are not part of the `clusterNetworkCIDRs`. Nodes can be allocated
+different-sized subnets by setting different `hostSubnetLength` values on
+different `clusterNetworks` entries.
+
+In regard to migration or edits, networks can be added to the list, CIDRs in the
+list may be re-ordered, and a CIDR can be removed from the list when there are
+no nodes that have an SDN allocation from that CIDR.
+
+Example:
+
+----
+networkConfig:
+  clusterNetworkCIDR: 10.128.0.0/24
+  clusterNetworks:
+  - cidr: 11.128.0.0/24
+    hostSubnetLength: 6
+  - cidr: 12.128.0.0/24
+    hostSubnetLength: 6
+  - cidr: 13.128.0.0/24
+    hostSubnetLength: 4
+  externalIPNetworkCIDRs:
+  - 0.0.0.0/0
+  hostSubnetLength: 6
+----
+
+[[ocp-37-routes-allowed-to-set-cookie-names-for-session-stickiness]]
+==== Routes Allowed to Set Cookie Names for Session Stickiness
+
+The HAProxy router can look for a cookie in a client request. Based on that
+cookie's name and value, it always routes requests that have that cookie to the
+same pod, instead of relying upon the client source IP, which can be obscured by
+an F5 doing load balancing.
+
+A cookie with a unique name is used to handle session persistence.
+
+. Set a per-route configuration to set the cookie name used for the session.
+. Add an `env` variable to set a router-wide default.
+. Ensure that the cookie is set and honored by the router to control access.
+
+Example scenario:
+
+. Set a default cookie name for the HAProxy router:
++
+----
+$ oc env dc/router ROUTER_COOKIE_NAME=default-cookie
+----
+
+. Log in as a normal user and create the project/pod/svc/route:
++
+----
+$ oc login user1
+$ oc new-project project1
+$ oc create -f https://example.com/myhttpd.json
+$ oc create -f https://example.com/service_unsecure.json
+$ oc expose service service-unsecure
+----
+
+. Access the route:
++
+----
+$ curl $route -v
+----
++
+The HTTP response will contain the cookie name. For example:
++
+----
+Set-Cookie: default-cookie=[a-z0-9]+
+----
+
+. Modify the cookie name using route annotation:
++
+----
+$ oc annotate route service-unsecure router.openshift.io/cookie_name="route-cookie"
+----
+
+. Re-access the route:
++
+----
+$ curl $route -v
+----
++
+The HTTP response will contain the new cookie name:
++
+----
+Set-Cookie: route-cookie=[a-z0-9]+
+----
+
+See
+xref:../architecture/networking/routes.adoc#route-specific-annotations[Route-specific
+Annotations] for more information.
+
+[[ocp-37-hsts-policy-support]]
+==== HSTS Policy Support
+
+xref:../architecture/networking/routes.adoc#hsts[HTTP Strict Transport Security
+(HSTS)] ensures all communication between the server and client is encrypted and
+that all sent and received responses are delivered to and received from the
+authenticated server.
+
+An HSTS policy is provided to the client via an HTTPS header (HSTS headers over
+HTTP are ignored) using an `haproxy.router.openshift.io/hsts_header` annotation
+to the route. When the `Strict-Transport-Security` response header is received
+by a client, it observes the policy until it is updated by another response from
+the host, or it times out (`max-age=0`).
+
+Example using a reencrypt route:
+
+. Create the pod/svc/route:
++
+----
+$ oc create -f https://example.com/test.yaml
+----
+
+. Set the Strict-Transport-Security header:
++
+----
+$ oc annotate route serving-cert haproxy.router.openshift.io/hsts_header="max-age=300;includeSubDomains;preload"
+----
+
+. Access the route using `https`:
++
+----
+$ curl --head https://$route -k
+
+ ...
+ Strict-Transport-Security: max-age=300;includeSubDomains;preload
+ ...
+----
+
+[[ocp-37-semi-automatic-namespace-wide-egress-ip]]
+==== Semi-automatic Namespace-wide Egress IP
+
+All outgoing external connections from a project will share a single fixed
+source IP address and will send all traffic via that IP so that external
+firewalls can recognize the application associated with a packet.
+
+See
+xref:../admin_guide/managing_networking.adoc#admin-guide-manage-networking[Managing
+Networking] for more information.
+
+[[ocp-37-master]]
+=== Master
+
+[[ocp-37-public-pull-url-provided-for-images]]
+==== Public Pull URL Provided for Images
+
+A public pull URL is now provided for images, so users no longer need to know
+the internal in-cluster IP or DNS name of the registry service.
+
+A new API field for the image stream with the public URL of the image was added,
+and a public URL is configured in the *_master-config.yaml_* file. The web
+console will understand this new field and generate the public pull
+specifications automatically for users (so users can just copy and paste the
+pull URL).
+
+Example:
+
+. Check the `internalRegistryHostname` setting in the *_master-config.yaml_* file:
++
+----
+ ...
+ imagePolicyConfig:
+   internalRegistryHostname: docker-registry.default.svc:5000
+ ...
+----
+
+. Delete the `OPENSHIFT_DEFAULT_REGISTRY` variable in both:
++
+----
+/etc/sysconfig/atomic-openshift-master-api
+/etc/sysconfig/atomic-openshift-master-controllers
+----
+
+. Start a build and check the push URL. It should push the new build image with
+`internalRegistryHostname` to the `docker-registry`.
+
+[[ocp-37-custom-resource-definitions]]
+==== Custom Resource Definitions
+
+A _resource_ is an endpoint in the Kubernetes API that stores a collection of
+API objects of a certain kind (for example, pod objects). A _custom resource
+definition_ is a built-in API that enables you to plug in your own custom,
+managed object and application as if it were native to Kubernetes. Therefore,
+you can leverage Kubernetes cluster management, RBAC and authentication
+services, API services, CLI, security, and so on, without having to know
+Kubernetes internals or modifying Kubernetes itself in any way.
+
+Custom Resource Definitions (CRDs) deprecate Third Party Resources in Kubernetes
+1.7.
+
+How it works:
+
+. Define a CRD class (your custom objects) and register the new resource type.
+This defines how it fits into the hierarchy and how it will be referenced from
+the CLI and API.
+
+. Define a function to create a custom client, which is aware of the new resource
+schema.
+
+. Once completed, it can be accessed from the CLI. However, in order to build
+controllers or custom functionality, you need API access to the objects, and so
+you need to build a set of CRUD functions (library) to access the objects and
+the event-driven listener for controllers.
+
+. Create a client that:
++
+* Connects to the Kubernetes cluster.
+* Creates the new CRD (if it does not exist).
+* Creates a new custom client.
+* Creates a new test object using the client library.
+* Creates a controller that listens to events associated with new resources.
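+
+As a sketch only, a minimal CRD for a hypothetical `CronTab` resource might look
+like the following (the group and names are illustrative, using the
+`apiextensions.k8s.io/v1beta1` API introduced with Kubernetes 1.7):
+
+----
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: crontabs.stable.example.com
+spec:
+  group: stable.example.com
+  version: v1
+  scope: Namespaced
+  names:
+    plural: crontabs
+    singular: crontab
+    kind: CronTab
+    shortNames:
+    - ct
+----
+
+Once registered, instances of the new kind can be managed like native objects,
+for example with `oc get crontabs`.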
+
+See
+xref:../admin_guide/custom_resource_definitions.adoc#admin-guide-custom-resources[Extending
+the Kubernetes API with Custom Resources] for more information.
+
+[[ocp-37-api-aggregation]]
+==== API Aggregation
+
+There is now Kubernetes documentation on how API aggregation works in
+{product-title} 3.7 and how other users can add third-party APIs:
+
+* link:https://github.com/kubernetes/website/blob/master/docs/tasks/access-kubernetes-api/setup-extension-api-server.md[Set up an extension `api-server` to work with the aggregation layer]
+* link:https://github.com/kubernetes/website/blob/master/docs/concepts/api-extension/apiserver-aggregation.md[Kubernetes aggregation layer]
+
+[[ocp-37-master-prometheus-endpoint-coverage]]
+==== Master Prometheus Endpoint Coverage
+
+Prometheus endpoint logic was added to upstream components so that monitoring
+and health indicators can be added around deployment configurations.
+
+[[ocp-37-installation]]
+=== Installation
+
+[[ocp-37-migrate-etcd-before-upgrade]]
+==== Migrate etcd Before OpenShift Container Platform 3.7 Upgrade
+Starting in {product-title} 3.7, the use of the etcd3 v3 data model is required.
+
+{product-title} gains performance improvements with the v3 data model. In order
+to upgrade the data model, the embedded etcd configuration option is no longer
+allowed. Embedded etcd is distinct from co-located etcd and was mainly used in
+single-master deployments. Migration scripts will convert the data to the v3
+data model and allow you to move an embedded etcd to an external etcd, either on
+the same host or a different host than the masters. In addition, there is a new
+scale-up capability for etcd clusters.
+
+See
+xref:../install_config/upgrading/migrating_embedded_etcd.adoc#install-config-upgrading-etcd-data-migration[Migrating
+Embedded etcd to External etcd] for more information.
+
+[[ocp-37-modular-installer]]
+==== Modular Installer to Allow Playbooks to Run Independently
+
+The installer has been enhanced to allow administrators to install specific
+components. By breaking up the roles and playbooks, there is better targeting of
+ad hoc administration tasks.
+
+[[new-install-experience-around-phases]]
+==== New Installation Experience Around Phases
+When you run the installer, {product-title} now reports back at the end what
+phases you have gone through.
+
+If the installation fails during a phase, you will be notified on the screen
+along with the errors from the Ansible run. Once you resolve the issue, rather
+than run the entire installation over again, you can pick up from the failed
+phase. This results in an increased level of control during installations and
+saves time.
+
+[[ocp-37-increased-control-over-image-stream-templates]]
+==== Increased Control Over Image Streams and Templates
+With {product-title} 3.7, there is added control over whether or not your cluster
+automatically upgrades all the content provided during cluster upgrades.
+
+Edit the `openshift_install_examples` variable in the hosted file, or set it as
+a variable in the installer:
+
+----
+RPM = /etc/origin/examples /etc/origin/hosted
+Container = /usr/share/openshift/examples /usr/share/openshift/hosted
+
+openshift_install_examples=false
+----
+
+Setting `openshift_install_examples` to `false` will cause the installer to not
+upgrade the image streams and templates. `true` is the default behavior.
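+
+For example, a minimal inventory sketch (the `[OSEv3:vars]` section is the
+standard location for installer variables):
+
+----
+[OSEv3:vars]
+openshift_install_examples=false
+----
+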
[[ocp-37-install-config-cfme-from-ocp-installer]]
+==== Installation and Configuration of CFME 4.6 from the OpenShift Installer
+
+CloudForms Management Engine (CFME) 4.6 is now fully supported running on
+{product-title} 3.7 as a set of containers.
+
+[IMPORTANT]
+====
+CloudForms (CFME) 4.6 is not yet released. Until it is available, this role is
+limited to installing ManageIQ (MIQ), the open source project that CFME is based
+on.
+====
+
+CFME is an available API endpoint on
+all {product-title} clusters that choose to use it. More cluster administrators
+are now able to leverage CFME and begin experiencing the insight and automations
+available to them in {product-title}.
+
+To install CFME 4.6:
+
+----
+# ansible-playbook -v -i <inventory_file> playbooks/byo/openshift-management/config.yml
+----
+
+[NOTE]
+====
+There is a link:https://bugzilla.redhat.com/show_bug.cgi?id=1506951[known issue] with this playbook.
+====
+
+To configure CFME 4.6 to consume the {product-title} installation it is running on:
+
+----
+# ansible-playbook -v -i <inventory_file> playbooks/byo/openshift-management/add_container_provider.yml
+----
+
+You can also automate the configuration of the provider to point to multiple OpenShift clusters:
+
+----
+# ansible-playbook -v -e container_providers_config=/tmp/cp.yml playbooks/byo/openshift-management/add_many_container_providers.yml
+----
+
+[NOTE]
+====
+The *_/tmp/cp.yml_* file requires some manual configurations to create and use
+it correctly. See
+xref:../install_config/cfme/container_provider.adoc#cfme-container-provider-multiple[Multiple
+Container Providers] for more information.
+====
+
+See xref:../install_config/cfme/index.adoc#install-config-cfme-intro[Deploying
+Red Hat CloudForms on OpenShift Container Platform] for more information.
+
+[[ocp-37-diagnostics]]
+=== Diagnostics
+
+[[ocp-37-additional-health-checks]]
+==== Additional Health Checks
+
+More health checks are now available for administrators to run after
+installations and upgrades. Administrators need the ability to run tests
+periodically to help determine the health of the framework components within the
+cluster. {product-title} 3.7 offers test functionality via Ansible playbooks
+that can be run, with output optionally written to a file.
+
+----
+$ ansible-playbook playbooks/byo/openshift-checks/adhoc.yml
+ curator
+ diagnostics
+ disk_availability
+ docker_image_availability
+ docker_storage
+ elasticsearch
+ etcd_imagedata_size
+ etcd_traffic
+ etcd_volume
+ fluentd
+ fluentd_config
+ kibana
+ logging
+ logging_index_time
+ memory_availability
+ ovs_version
+ package_availability
+ package_update
+ package_version
+
+$ ansible-playbook playbooks/byo/openshift-checks/adhoc.yml -e openshift_checks=fluentd_config,logging_index_time,docker_storage
+----
+
+Alternatively, they are included in the health playbook:
+
+----
+$ ansible-playbook playbooks/byo/openshift-checks/health.yml
+----
+
+To capture the output:
+
+----
+$ ansible-playbook playbooks/byo/openshift-checks/health.yml -e openshift_checks_output_dir=/tmp/checks
+----
+
+[[ocp-37-metrics-and-logging]]
+=== Metrics and Logging
+
+[[ocp-37-journald-system-logs]]
+==== Journald for System Logs and JSON File for Container Logs
+
+The Docker log driver is set to `json-file` as the default for all nodes. The
+Docker `log-driver` can be set to `journal`, but there is no log rate throttling
+with the journal driver, so there is always a risk of denial-of-service attacks
+from rogue containers.
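+
+For instance, a sketch of capping `json-file` log growth with Docker daemon
+options (the values are illustrative and are set in *_/etc/sysconfig/docker_*):
+
+----
+OPTIONS='--log-driver=json-file --log-opt max-size=50m --log-opt max-file=3'
+----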
+
+Fluentd will automatically determine which log driver (`journald` or
+`json-file`) the container runtime is using. Fluentd will now always read logs
+from journald and also *_/var/log/containers_* (if `log-driver` is set to
+`json-file`). Fluentd will no longer read from *_/var/log/messages_*.
+
+See
+xref:../install_config/aggregate_logging.adoc#install-config-aggregate-logging[Aggregating
+Container Logs] for more information.
+
+[[ocp-37-docker-events-and-api-calls-aggregated-to-efk-as-logs]]
+==== Docker Events and API Calls Aggregated to EFK as Logs
+
+Fluentd captures standard error and standard out from the running containers on
+the node. With this change, fluentd collects all the errors and events coming
+from the Docker daemon running on the node and sends them to Elasticsearch (ES).
+
+Enable this via the {product-title} installer:
+
+----
+openshift_logging_fluentd_audit_container_engine=true
+----
+
+The collected information is in the operations indices of ES, and only cluster
+administrators have visual access. The event message includes action, pod name,
+image name, user, and timestamp.
+
+[[ocp-37-master-events-aggregated-to-efk-as-logs]]
+==== Master Events are Aggregated to EFK as Logs
+
+The *eventrouter* pod scrapes events from the Kubernetes API and outputs them to
+*STDOUT*. The *fluentd* plug-in transforms the log message and sends it to
+Elasticsearch (ES).
+
+Enable `openshift_logging_install_eventrouter` by setting it to `true`. It is
+off by default. *Eventrouter* is deployed to the default namespace. Collected
+information is in the operations indices of ES, and only cluster administrators
+have visual access.
+
+See the
+link:https://github.com/openshift/origin-aggregated-logging/blob/master/docs/proposals/kube_events_design_doc.md[design
+documentation] for more information.
+
+[[ocp-37-kibana-dashboards-for-ops-now-shareable]]
+==== Kibana Dashboards for Operations Are Now Shareable
+
+This gives {product-title} administrators the ability to share saved Kibana
+searches, visualizations, and dashboards.
+
+When `openshift_logging_elasticsearch_kibana_index_mode` is set to `shared_ops`, one
+`admin` user can create queries and visualizations for other `admin` users.
+Other users cannot see those same queries and visualizations.
+
+When `openshift_logging_elasticsearch_kibana_index_mode` is set to `unique`,
+users can only see saved queries and visualizations they created. This is the
+default behavior.
+
+See
+xref:../install_config/aggregate_logging.adoc#aggregate-logging-ansible-variables[Aggregating
+Container Logs] for more information.
+
+[[ocp-37-removed-es-copy-method]]
+==== Removed ES_Copy Method for Sending Logs to External ES
+
+`ES_Copy` was replaced with the *secure_forward* plug-in for fluentd to send
+logs from fluentd to an external fluentd (which can then ingest them into ES).
+`ES_COPY` is removed from the installer and the documentation.
+
+When the installer is run to upgrade logging to 3.7, it now checks for `ES_COPY`
+in the inventory and fails the upgrade with:
+
+----
+msg: The ES_COPY feature is no longer supported. Please remove the variable from your inventory
+----
+
+See
+xref:../install_config/aggregate_logging.adoc#fluentd-log-external-elasticsearch[Aggregating
+Container Logs] for more information.
+
+[[ocp-37-expose-es-as-a-route]]
+==== Expose Elasticsearch as a Route
+
+By default, Elasticsearch (ES) deployed with OpenShift aggregated logging is not
+accessible from outside the logging cluster.
This feature enables a route for external
+access to ES for those tools that want to access its data.
+
+You now have direct access to ES using only your OpenShift token and have the
+ability to provide the external ES and ES Ops hostnames when creating the server
+certificate (similar to Kibana). Ansible tasks now simplify route deployment.
+
+[[ocp-37-removed-metrics-and-logging-deployers]]
+==== Removed Metrics and Logging Deployers
+
+The metrics and logging deployers are now replaced with `playbook2image` for
+`oc cluster up` so that `openshift-ansible` is used to install logging and
+metrics:
+
+----
+$ oc cluster up --logging --metrics
+----
+
+Check metrics and pod status:
+
+----
+$ oc get pod -n openshift-infra
+$ oc get pod -n logging
+----
+
+[[ocp-37-prometheus]]
+==== Prometheus (Technology Preview)
+
+{product-title} operators deploy Prometheus (currently in
+xref:ocp-37-technology-preview[Technology Preview] and not for production
+workloads) on a {product-title} cluster, collect Kubernetes and infrastructure
+metrics, and get alerts. Operators can see and query metrics and alerts on the
+Prometheus web dashboard, or bring their own Grafana and hook it up to
+Prometheus.
+
+See xref:../install_config/cluster_metrics.adoc#openshift-prometheus[Prometheus
+on OpenShift] for more information.
+
+[[ocp-37-integrated-approach-to-adding-hosa]]
+==== Integrated Approach to Adding Hawkular OpenShift Agent (Technology Preview)
+
+Hawkular OpenShift Agent (HOSA) remains in
+xref:ocp-37-technology-preview[Technology Preview] and not for production
+workloads. It is packaged and can now be installed with the
+`openshift_metrics_install_hawkular_agent` option in the installer by setting it
+to `true`.
+
+See
+xref:../install_config/cluster_metrics.adoc#metrics-ansible-variable[Enabling
+Cluster Metrics] for more information.
+
+[[ocp-37-developer-experience]]
+=== Developer Experience
+
+[[ocp-37-template-instantiation-api]]
+==== Template Instantiation API
+
+Clients can now easily invoke a server API instead of relying on client logic.
+
+See xref:../rest_api/examples.adoc#template-instantiation[Template
+Instantiation] for more information.
+
+[[ocp-37-dev-experience-metrics]]
+==== Metrics
+
+{product-title} now includes:
+
+* Prometheus metrics that show you the health of builds in the system (number
+running, failing, failure reasons, and so on).
+
+* Timing information on build objects themselves to show how long they spent in
+various steps (not exposed as Prometheus metrics).
+
+[[ocp-37-web-console]]
+=== Web Console
+
+[[ocp-37-openshift-ansible-broker]]
+==== OpenShift Ansible Broker
+
+In {product-title} 3.7, the Open Service Broker API is implemented, enabling
+users to leverage Ansible for provisioning and managing services from the
+Service Catalog. This is a standardized approach for delivering simple to
+complex multi-container OpenShift services via Ansible. It works in conjunction
+with Ansible Playbook Bundles (APBs) for lightweight application definition.
+APBs can be used to deliver and orchestrate on-platform services, but could also
+be used to provision and orchestrate off-platform services (from cloud
+providers, IaaS, and so on).
+
+The OpenShift Ansible Broker supports production workloads and multiple service
+plans. There is now secure connectivity between the Service Catalog and the
+Service Broker.
+
+You can interact with the Service Catalog to provision and manage services while
+the details of the broker remain largely hidden.
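+
+As a quick, hedged illustration (resource names from the upstream service
+catalog API; this assumes the service catalog is enabled on the cluster), you
+can list the registered brokers and the service classes they advertise:
+
+----
+$ oc get clusterservicebrokers
+$ oc get clusterserviceclasses
+----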
+
+[[ocp-37-ansible-playbook-bundles]]
+==== Ansible Playbook Bundles
+
+Ansible Playbook Bundles (APBs) are short-lived, lightweight container images
+consisting of:
+
+* a simple directory structure with named action playbooks.
+* metadata (required and optional parameters, as well as dependencies).
+* an Ansible runtime environment.
+
+Developer tooling is included, providing a guided approach to APB creation.
+There is also support for the *_test_* playbook, allowing for functional testing
+of the service. Two new APBs are introduced for MariaDB (SCL) and MySQL DB
+(SCL).
+
+When a user provisions an application from the Service Catalog, the Ansible
+Service Broker will download the associated APB image from the registry and run
+it.
+
+Developing APBs can be done in one of two ways: creating the APB container image
+manually using standardized container creation tooling, or using APB tooling
+that Red Hat will deliver, which provides a guided approach to creation.
+
+[[ocp-37-openshift-template-broker]]
+==== OpenShift Template Broker
+
+The OpenShift Template Broker exposes templates through an Open Service Broker
+API to the Service Catalog.
+
+The Template Broker matches the lifecycles of `provision`, `deprovision`,
+`bind`, and `unbind` with existing templates. No changes are required to
+templates, unless you expose `bind`. Your application will get injected with
+configuration details.
+
+[[ocp-37-initial-experience]]
+==== Initial Experience
+
+{product-title} 3.7 provides a better initial user experience with the Service
+Catalog. This includes:
+
+* A task-focused interface
+* Key call-outs
+* Unified search
+* Streamlined navigation
+
+The new user interface is designed to streamline the getting started process, in
+addition to incorporating the new Service Catalog items. It shows the existing
+content (for example, builder images and templates) as well as catalog items (if
+the catalog is enabled).
+
+[NOTE]
+====
+The new user experience can be enabled as a Technology Preview feature without
+the Service Catalog being active. A cluster with this user interface (UI)
+would still be supported. Running the catalog UI without the Service Catalog
+enabled will work, but access to templates without the catalog will require a
+few extra steps.
+====
+
+[[ocp-37-search-catalog]]
+==== Search Catalog
+
+{product-title} 3.7 provides a simple way to quickly get what you want. The new
+Search Catalog user interface is designed to make it much easier to find items
+in a number of ways, making it even faster to find the items you want to
+deploy.
+
+image::3.7-search-filter-catalog.gif[search catalog]
+
+[[ocp-37-add-from-catalog]]
+==== Add from Catalog
+
+Provision a service from the catalog. Select the desired service and follow
+prompts for the desired project and configuration details.
+
+image::3.7-add-to-project-wizard-animated.gif[add to project]
+
+[[ocp-37-connect-a-service]]
+==== Connect a Service
+Once a service is deployed, get coordinates to connect the application to it.
+
+The broker returns a secret, which is stored in the project for use. You are
+guided through a process to update the deployment to inject a secret.
+
+image::3.7-bind-mongodb-nodejs-at-creation.gif[connect a service]
+
+[[ocp-37-include-templates-from-other-projects]]
+==== Include Templates from Other Projects
+
+Since templates are now served through a broker, you can now
+deploy templates from other projects.
+
+Upload the template, then select the template from a project.
+
+image::3.7-add-to-project-options.png[Add to Project Options]
+
+[[ocp-37-notifications]]
+==== Notifications
+Key notifications are now under a single UI element, the notification drawer.
+
+The bell icon is decorated when new notifications exist. You can mark all as
+read, clear all, view all, or dismiss individual ones. Key notifications are
+represented with the level of information, warning, or error.
+
+image::3.7-notification-drawer.png[Notification drawer]
+
+[[ocp-37-improved-quota-warnings]]
+==== Improved Quota Warnings
+Quota notifications are now put in the notification drawer and are less intrusive.
+
+image::37-quota-warning.png[quota warning]
+
+There are now separate notifications for each quota type instead of one generic
+warning. When at quota but not over quota, this is displayed as an informative
+message. Usage and maximum are displayed in the message. You can mark *Don't Show
+Me Again* per quota type. Administrators can add custom messages to the quota
+warning.
+
+[[ocp-37-environment-variable-editor-added-to-stateful-sets-page]]
+==== Environment Variable Editor Added to the Stateful Sets Page
+
+An environment variable editor is now added to the *Stateful Sets* page.
+
+image::37-statefulset-page-envar-editor.png[Stateful Sets Page]
+
+[[ocp-37-support-for-envfrom]]
+==== Support for the EnvFrom Construct
+
+Anything with a pod template now supports the `EnvFrom` construct that lets you
+break down an entire configuration map or secret into environment variables
+without explicitly setting individual `env` name-to-key mappings.
+
+[[ocp-37-notable-technical-changes]]
+== Notable Technical Changes
+
+{product-title} 3.7 introduces the following notable technical changes.
+
+[discrete]
+[[api-connectivity-variables-now-deprecated]]
+=== API Connectivity Variables OPENSHIFT_MASTER and KUBERNETES_MASTER Are Now Deprecated
+
+{product-title} deployments using a
+xref:../dev_guide/deployments/deployment_strategies.adoc#custom-strategy[custom
+strategy] or
+xref:../dev_guide/deployments/deployment_strategies.adoc#lifecycle-hooks[hooks]
+are provided with a container environment, which includes two variables for API
+connectivity:
+
+* `OPENSHIFT_MASTER`: A URL to the OpenShift API.
+* `KUBERNETES_MASTER`: A URL to the Kubernetes API exposed by OpenShift.
+
+These variables are now deprecated, as they refer to internal endpoints rather
+than the published OpenShift API service endpoints. To connect to the OpenShift
+API in these contexts, use
+xref:../dev_guide/service_accounts.adoc#dev-guide-service-accounts[service DNS]
+or the automatically exposed `KUBERNETES`
+xref:../dev_guide/environment_variables.adoc#automatically-added-environment-variables[service
+environment variables].
+
+The `OPENSHIFT_MASTER` and `KUBERNETES_MASTER` environment variables are removed
+from deployment container environments as of {product-title} 3.7.
+
+[discrete]
+[[openshift-hosted-ansible-variables-now-deprecated]]
+=== openshift_hosted_{logging,metrics}_* Ansible Variables for the Installer Are Now Deprecated
+
+The `openshift_hosted_{logging,metrics}_*` Ansible variables used by the
+installer have been deprecated. The
+xref:../install_config/install/advanced_install.adoc#install-config-install-advanced-install[installation
+documentation] has been updated to use the newer variable names. The deprecated
+variable names are planned for removal in the next minor release of OpenShift
+Container Platform.
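+
+As a hedged illustration of the rename (the replacement names below are the
+variable names used by the openshift-ansible 3.7 playbooks), an inventory entry
+such as:
+
+----
+openshift_hosted_logging_deploy=true
+openshift_hosted_metrics_deploy=true
+----
+
+becomes:
+
+----
+openshift_logging_install_logging=true
+openshift_metrics_install_metrics=true
+----
+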
+
+[discrete]
+[[openshift-hosted-ansible-variables-now-deprecated]]
+=== openshift_hosted_{logging,metrics}_* Ansible Variables for the Installer Are Now Deprecated
+
+The `openshift_hosted_{logging,metrics}_*` Ansible variables used by the
+installer have been deprecated. The
+xref:../install_config/install/advanced_install.adoc#install-config-install-advanced-install[installation
+documentation] has been updated to use the newer variable names. The deprecated
+variable names are planned for removal in the next minor release of OpenShift
+Container Platform.
+
+[discrete]
+[[removed-generatedeploymentconfig-api-endpoint]]
+=== Removed generatedeploymentconfig API Endpoint
+
+The `generatedeploymentconfig` API endpoint is now removed.
+
+[discrete]
+[[deprecating-some-policy-related-apis]]
+=== Deprecated Policy-Related APIs and Commands
+
+A large number of policy-related APIs and commands are now deprecated. In
+{product-title} 3.7, the policy objects are completely removed and native RBAC
+is used instead. Any command trying to directly manipulate a policy object will
+fail. Roles and rolebindings endpoints are still available, and they proxy the
+operation to create native RBAC objects instead. The following commands do not
+work against a 3.7 server:
+
+----
+$ oadm overwrite-policy
+$ oadm migrate authorization
+$ oc create policybinding
+----
+
+[NOTE]
+====
+A 3.7 client will display an error message when trying these commands against a
+3.7 server. The commands will still work against a previous server version, but
+an old client will fail hard against a 3.7 server.
+====
+
+[discrete]
+[[RHELAH-version-7-4-2-1-required-containerized-installations]]
+=== Red Hat Enterprise Linux Atomic Host Version 7.4.2.1 or Newer Required for Containerized Installations
+
+In {product-title} 3.7, containerized installations require Red Hat Enterprise
+Linux Atomic Host version 7.4.2.1 or newer.
+
+[discrete]
+[[installer-labeling-clusters-for-aws]]
+=== Labeling Clusters for Amazon Web Services
+
+Starting with 3.7 versions of the installer, if you configured AWS provider
+credentials, you must also ensure that all instances are labeled. Then, set the
+`openshift_clusterid` variable to the cluster ID. See
+xref:../admin_guide/aws_cluster_labeling.adoc#admin-guide-aws-cluster-labeling[Labeling
+Clusters for Amazon Web Services (AWS)] for more information.
+
+[discrete]
+[[stricter-sccs]]
+=== Stricter Security Context Constraints (SCCs)
+
+With the release of {product-title} 3.7, there are now some stricter security
+context constraints (SCCs). The following capabilities are now removed:
+
+- *nonroot* drops `KILL`, `MKNOD`, `SETUID`, and `SETGID`.
+- *hostaccess* drops `KILL`, `MKNOD`, `SETUID`, and `SETGID`.
+- *hostmount-anyuid* drops `MKNOD`.
+
+It is possible that pods that were previously admitted by these SCCs and were
+using such capabilities will fail after upgrade. In these rare cases, the
+cluster administrator should create a custom SCC for such pods.
+
+[discrete]
+[[updated-installer-support-for-cfme]]
+=== Updated Installer Support for CFME 4.6
+
+There is now updated installer support for CloudForms Management Engine (CFME)
+4.6 on {product-title} 3.7.
+
+[discrete]
+[[node-authorizer-and-admission-plug-in-for-managing-node-permissions]]
+=== Node Authorizer and Admission Plug-in for Managing Node Permissions
+
+In {product-title} 3.7, the node authorizer and admission plug-in are used to
+manage and limit a node's permissions. Therefore, nodes should be removed from
+the group that previously granted them broad permissions across the cluster:
+
+----
+$ oc adm policy remove-cluster-role-from-group system:node system:nodes
+----
+
+In {product-title} 3.8, this step should be performed automatically via Ansible
+as a post-upgrade step.
+
+[discrete]
+[[kube-service-catalog-global]]
+=== The kube-service-catalog Namespace Is Global
+
+The `kube-service-catalog` namespace is now made global by Ansible. Therefore,
+if you want multicast to work in vnid 0, you must set the
+`netnamespace.network.openshift.io/multicast-enabled=true` annotation on both
+namespaces (`default` and `kube-service-catalog`).
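+
+For example, this can be done with `oc annotate` (a minimal sketch using the
+annotation named above):
+
+----
+$ oc annotate netnamespace default \
+    netnamespace.network.openshift.io/multicast-enabled=true
+$ oc annotate netnamespace kube-service-catalog \
+    netnamespace.network.openshift.io/multicast-enabled=true
+----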
+
+[discrete]
+[[migration-to-kubernetes-rbac]]
+=== Migration to Kubernetes Role-based Access Control (RBAC)
+
+[discrete]
+[[steps-taken-during-3-6-release]]
+==== Steps Taken During the 3.6 Release
+
+A custom migration controller was created to automatically migrate OpenShift
+authorization policy resources to the equivalent RBAC resources:
+
+. If an OpenShift authorization policy resource was created, modified, or
+deleted, the action was automatically mirrored to the corresponding RBAC
+resource.
+
+. Changes directly applied to RBAC resources were, generally, automatically rolled
+back and forced to match the corresponding OpenShift authorization policy
+resource. If no corresponding resource existed, the RBAC resource would be
+deleted.
+
+In essence, OpenShift authorization policy objects were the source of truth, and
+the RBAC objects were forced into matching these objects.
+
+[discrete]
+[[release-3-6-pre-upgrade-steps-before-upgrading-to-3-7]]
+==== Release 3.6 Pre-upgrade Steps Before Upgrading to 3.7
+
+There is a small set of configurations that are possible in OpenShift
+authorization policy resources that are not supported by RBAC. Such
+configurations require manual migration based on the use case. To guarantee that
+all OpenShift authorization policy objects are in sync with RBAC, the `oc adm
+migrate authorization` command has been added. This read-only command emulates
+the migration controller logic, and reports if any resource is out of sync. It
+is run as a pre-upgrade step via an Ansible playbook and will cause the upgrade
+to fail if the objects are not in sync.
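+
+Cluster administrators can also run the check manually on a 3.6 master before
+starting the upgrade (the command is read-only and makes no changes):
+
+----
+$ oc adm migrate authorization
+----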
+
+[discrete]
+[[during-a-rolling-upgrade-from-release-3-6-to-3-7]]
+==== During a Rolling Upgrade from Release 3.6 to 3.7
+
+The following scenario describes a rolling upgrade:
+
+. One master is upgraded and starts proxying OpenShift authorization policy
+resources and authorizing against RBAC objects.
+
+. Old masters are still running the migration controller and one of them holds the
+controller leader election lock (either because it already had it or because it
+gained it by the first master being upgraded).
+
+. The new master cannot modify any RBAC or proxied OpenShift authorization policy
+objects because the migration controller will undo all changes.
+
+. Old masters can change OpenShift authorization policy resources and the
+migration controller will sync these to RBAC, making the changes visible to the
+new master.
+
+. The new master does not have the migration controller.
+
+. In OpenShift installations performed via Ansible, controllers only speak to
+their local masters, so the migration controller is guaranteed to communicate
+only with the old masters.
+
+. There is a small chance that a 3.7 controller process will become the leader
+once two masters have been upgraded (meaning no migrations of policy objects
+will occur after this point).
+
+. Once all masters have been upgraded from 3.6 to 3.7, OpenShift authorization
+policy objects will always be proxied to RBAC objects.
+
+. The migration controller will be gone and it will be possible to make changes to
+RBAC objects directly.
+
+*Considerations for Administrators During Rolling Upgrade*
+
+Avoid actions that require changes to OpenShift authorization policy resources,
+such as the creation of new projects. If a project is created against a new
+master, the RBAC resources it creates will be deleted by the migration
+controller since they will be seen as out of sync from the OpenShift
+authorization policy resources. If a project is created against an old master
+and the migration controller is no longer present due to a 3.7 controller
+process being the leader, then its policy objects will not be synced and it will
+have no RBAC resources. After the 3.7 upgrade is complete, the following
+read-only script can be used to determine which namespaces lack RBAC role
+bindings (it is up to the cluster administrator to decide how to remediate these
+namespaces):
+
+----
+#!/bin/bash
+
+set -o errexit
+set -o nounset
+set -o pipefail
+
+# Report any namespace that has no RBAC role bindings.
+for namespace in $(oc get namespace -o name); do
+  ns=$(echo "${namespace}" | cut -d / -f 2)
+  rolebindings_count=$(oc get rolebinding.rbac -o name -n "${ns}" | wc -l)
+  if [[ "${rolebindings_count}" == "0" ]]; then
+    echo "Namespace ${ns} has no role bindings which may require further investigation"
+  else
+    echo "Namespace ${ns}: ok"
+  fi
+done
+----
+
+[discrete]
+[[rbac-and-openshift-authorization-policy-in-3-7]]
+==== RBAC and OpenShift Authorization Policy in Release 3.7
+
+In 3.7, the RBAC objects become the source of truth. The OpenShift authorization
+policy objects no longer exist as real objects; the APIs are proxied to the RBAC
+resources. Therefore, creating, modifying, or deleting OpenShift authorization
+policy resources seamlessly results in actions against RBAC objects. The API
+master handles the conversion between these resources, and legacy clients will
+continue to work as if nothing has changed. The RBAC objects also support
+watches, unlike the OpenShift authorization policy resources.
+
+Policy-based resources have been removed in 3.7. However, RBAC role and binding
+objects are available and provide equivalent functionality.
+
+[[ocp-37-bug-fixes]]
+== Bug Fixes
+
+This release fixes bugs for the following components:
+
+*Authentication*
+
+* The secret for the private browser OAuth client was not correctly initialized.
+Therefore, the request token endpoint did not work. This bug fix correctly
+initializes the browser OAuth client on server start. The request endpoint can
+now be used to request tokens.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1491193[*BZ#1491193*])
+
+* The LDAP sync/prune command did not take into account the use of
+`groupUIDNameMapping` with a whitelist. The sync/prune command would fail with
+"group not found" errors because it would query for the wrong group name. With
+this bug fix, the command was updated to take `groupUIDNameMapping` into account
+when using a whitelist. Now, the command queries for the correct group name when
+`groupUIDNameMapping` and a whitelist are used together.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1484831[*BZ#1484831*])
+
+* `RoleBinding` objects can now be created without first creating a
+`PolicyBinding` object.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1477956[*BZ#1477956*])
+
+*Builds*
+
+* `ImageStream` output references and their corresponding secrets were resolved
+during build creation time. If the output imagestream did not exist yet, no push
+secret would be computed, resulting in a build failure during push. With this
+bug fix, the `ImageStream` output and push secret will be computed when
+preparing to run the build, under logic which will retry until the `imagestream`
+is available. Builds that are started before the output `imagestream` exists
+will no longer fail during the push phase.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1443163[*BZ#1443163*])
+
+* Build delete and watch events, as well as cancellation of the current Jenkins
+job, were not handled when a build was canceled in OpenShift. Various negative,
+inconsistent Jenkins job results occurred along with many exception stack traces
+in the Jenkins system log. With this bug fix, Jenkins jobs are halted as soon as
+the build watch event detects that a build was deleted as the result of a build
+cancel action taken within OpenShift. There is now consistent, sensible behavior
+for the Jenkins users when builds are canceled or deleted.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1473329[*BZ#1473329*])
+
+* Source-to-image was not closing stdin/out/err pipes correctly in some error
+cases, causing a hang to occur. This was causing some OpenShift builds to hang
+in *running* status.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1442875[*BZ#1442875*])
+
+* The *openshift jenkins sync* plug-in was updating Jenkins pipeline build status
+annotations every second, regardless of whether the status changed. The
+frequency of updates would put unnecessary stress on the etcd instance backing
+the OpenShift master. Now, Jenkins pipeline build status annotations are only
+updated if the status actually changes, or 30 seconds have passed.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1475867[*BZ#1475867*])
+
+* Directories injected as a build input via the image source input mechanism had
+user-only access permissions. The resulting application image could not access
+the content when run as a random user ID. The directories will now be injected
+with group permissions, which allows the container user to access the
+directories. The directories will now be accessible at runtime as desired.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1480312[*BZ#1480312*])
+
+* When no tag is explicitly set, docker pulls all images. Builds would pull more
+images than necessary and take longer than needed. With this bug fix, a default
+tag will be set when the user does not supply a tag. Only a single image will be
+pulled for the build.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1498178[*BZ#1498178*])
+
+* The Bitbucket build trigger webhook did not handle older versions of the webhook
+payload. Builds could not be triggered by older versions of the Bitbucket
+server. This bug fix adds support for the older payload format. Builds can now
+be triggered by older versions of Bitbucket.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1500731[*BZ#1500731*])
+
+* A regression bug was reported whereby source-to-image builds would fail if the
+source repository file system contained a broken symlink (pointing to a
+non-existent item). This is now resolved.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1506173[*BZ#1506173*])
+
+*Command Line Interface*
+
+* The `oc` binary for macOS is not signed. Some customers' company policies do
+not allow users to install unsigned binaries. This bug fix signs the `oc`
+binary using a Red Hat certificate. The `oc` binary is now trusted by companies
+that restrict the installation of unsigned binaries.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1436093[*BZ#1436093*])
+
+* The `git clone` command was being run without a timeout. Therefore, the `oc
+new-app` command could hang indefinitely. With this bug fix, `oc new-app` now
+uses `git ls-remote` with a timeout, and the `oc new-app` command will not hang.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1488283[*BZ#1488283*])
+
+*Containers*
+
+* The `POOL_META_SIZE` configuration item is now added. The thin pool metadata
+size was previously set to 0.1% of the free space of the volume group.
+`POOL_META_SIZE` allows the operator to customize the thin pool metadata volume
+size to meet their workload needs.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1451769[*BZ#1451769*])
+
+*Deployments*
+
+* Shortly after OpenShift starts, the caches might not yet be synchronized. As a
+result, scaling the replication controllers might fail. The scaling is now
+retried when there is a cache miss. With this bug fix, the replication
+controllers are scaled properly.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1427992[*BZ#1427992*])
+
+*Image*
+
+* A .NET Jenkins slave image for performing .NET CI/CD flows is now offered. This
+makes it easier to build and test .NET code bases using Jenkins. A .NET slave
+image is provided and configured out of the box in the Jenkins master image.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1451403[*BZ#1451403*])
+
+* Jenkins now installs all plug-ins via one RPM, and the missing plug-in is now
+included.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1481010[*BZ#1481010*])
+
+* `importPolicy.insecure` was ignored by `oc import-image`. As a result,
+re-import from an insecure registry failed because a valid SSL certificate was
+expected. Now, when the image stream tag exists, its `importPolicy.insecure`
+setting is used. With this bug fix, re-import succeeds.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1494231[*BZ#1494231*])
+
+*Image Registry*
+
+* Previously, images younger than the threshold were not added to the dependency
+graph. A blob that was used by a young image and by a prunable image was deleted
+because it had no references in the graph. This bug fix adds young images to the
+graph and marks them as non-prunable. As a result, the blob has references and
+is not deleted.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1487408[*BZ#1487408*])
+
+* The image pruning algorithm would consider only managed images for pruning. As a
+result, mirrored blobs for unmanaged images could not be pruned, and external
+images could not be removed using pruning. With this bug fix, the pruning
+algorithm evaluates all the images, not just managed images. External images and
+their blobs can now be pruned.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1441028[*BZ#1441028*])
+
+* Previously, a bug in a regulator of concurrent file system access could cause a
+routine to hang. This caused many builds to hang during the registry push. This
+bug fix corrects the regulator. As a result, concurrent pushes no longer hang.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1436841[*BZ#1436841*])
+
+* Users can now prune images, including images outside of the OpenShift
+cluster. Previously, issuing the `oadm prune images` command would print
+confusing errors (for example, operation timeout). This bug fix enables errors
+to be printed with hints.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1469654[*BZ#1469654*])
+
+* The registry previously appended forwarded target ports to redirected location
+URLs. The client’s new request to the target location lacked credentials, and as
+a result, image push failed due to an authorization error. This bug fix rebased
+the registry to a newer version that fixes the forwarding processing logic. As a
+result, clients can push images successfully to the exposed registry using
+arbitrary TLS termination.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1471707[*BZ#1471707*])
+
+* Previously, `imagestreamtags` were not checked for dangling image references.
+This caused references to deleted images to be retained. This bug fix removes
+references to deleted images. As a result, deleting an image should allow
+references to the image to be deleted from `imagestreamtags`.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1386917[*BZ#1386917*])
+
+* Documentation and command help are now updated to include information on
+troubleshooting insecure connections to the secured registry. Error messages are
+now printed with hints, and new flags have been added to allow for insecure
+fall-back. As a result, users can now easily enforce both secure and insecure
+connections.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1448595[*BZ#1448595*])
+
+*Installer*
+
+* Previously, the installation would fail when creating the Heketi secret because
+the key file was not copied to the first master host. This bug fix enables the
+installer to copy the SSH private key to the master node.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1477718[*BZ#1477718*])
+
+* The Ansible quick install would previously fail if the hostname was manually
+defined containing an uppercase letter. As a result, Kubernetes converted the
+names of the nodes to lowercase and did not recognize a node name with an
+uppercase letter. This bug fix ensures that hostnames for node objects are
+created with lowercase letters.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1396350[*BZ#1396350*])
+
+* When upgrading between versions (specifically 3.3/1.3 or earlier to 3.4 or
+later) the default values for `clusterNetworkCIDR` and `hostSubnetLength`
+changed. If the inventory file did not specify corresponding inventory
+variables, the upgrade would fail. This caused the controller service to not
+start back up. This bug fix requires that the inventory variables be set before
+upgrading or installing. As a result, if the required inventory variables are
+not set, the upgrade or installation will stop and tell the administrator to set
+the variables.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1451023[*BZ#1451023*])
+
+* Previously, the node service was not restarted when Open vSwitch was restarted,
+which could result in a misconfigured networking environment. This bug fix
+updates the services to ensure that the node service is restarted whenever Open
+vSwitch is restarted.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1453113[*BZ#1453113*])
+
+* Previously, Ansible facts added the `svc` domain to the `NO_PROXY` settings. As
+a result, users behind proxies were not able to push to the registry by DNS.
+This bug fix adds the `svc` domain to the Ansible facts code. As a result, users
+behind a proxy can now push to the registry by DNS.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1467776[*BZ#1467776*])
+
+* The flannel network was previously defined using the same subnet as the
+Kubernetes services subnet. This caused a conflict between services and SDN
+networks. The flannel network is now correctly defined by the
+`osm_cluster_network_cidr` variable.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1473858[*BZ#1473858*])
+
+* The necessary role for role binding in `openshift_metrics` was missing due to
+being processed out of order in the role. The role binding creation would fail,
+and the role would fail to install. This bug fix updates the metrics to create
+the role immediately. As a result, role binding can be created during
+installation.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1476195[*BZ#1476195*])
+
+* The etcd scaleup playbook had an error where it attempted to run commands on
+hosts other than the host that was currently being scaled up, resulting in an
+error if the other hosts did not yet have certain dependencies met. The
+playbooks now properly target only the host currently being scaled up.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1490739[*BZ#1490739*])
+
+* The stand-alone entry point for the `openshift_storage_nfs` task did not have
+the `os_firewall` role included. This resulted in the firewall not being
+properly installed and configured. The `os_firewall` role has been added to the
+play. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1491657[*BZ#1491657*])
+
+* The etcd quota backend was set to 2GB by default. This resulted in a cluster
+going into a hold state, blocking all writes into the etcd storage. The default
+quota backend was increased to 4GB to encompass the storage needs of bigger
+clusters.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1492891[*BZ#1492891*])
+
+* When a company CA is added as a named certificate, the CA is added to
+`ca-bundle.crt` as well. This can cause client certificate pop-ups when using
+IE, Safari, or Chrome if the user has client certificates configured via the
+browser. The code has been changed to not use `ca-bundle.crt` and to use the
+internal CA as the client certificate CA.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1493276[*BZ#1493276*])
+
+* As part of deprecating the use of `openshift_hosted_{logging,metrics}_*`
+variables, a default size for the storage volume was not set for an NFS
+installation. As a result, the playbook would fail because the variable was not
+defined at runtime. The code was changed to use a default of `10Gi` if not
+specified. The installer now runs as expected.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1495203[*BZ#1495203*])
+
+* The disconnected installer did not have a way to specify a username/password to
+ log in to the Docker repository to access downloaded images, requiring the user
+ to disable authentication. The installation script now includes a mechanism for
+ entering credentials.
+ (link:https://bugzilla.redhat.com/show_bug.cgi?id=1500642[*BZ#1500642*])
+
+* A new Docker option, `--signature-enabled`, introduced in a recent Docker
+release, is set to `False` by default. The {product-title} installation removed
+the parameter during the installation, and Docker would get the default value of
+`True`. The Ansible scripts have been changed to include this option.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1502560[*BZ#1502560*])
+
+* Upgrading the logging component from 3.4.1 to 3.5.0 using Ansible failed with a
+`No Elasticsearch pods found running` error. The logging upgrade has been
+disabled, as the EFK stack used for 3.4 and 3.5 is the same. The upgrade
+functionality is not necessary.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1435144[*BZ#1435144*])
+
+* Using Ansible to configure the OpenID Connect provider for the OpenID and
+GitLab providers resulted in an error when setting `challenge` to `true`. This
+happened because the validate function did not allow it. The Ansible validate
+function was removed for the OpenID and GitLab providers. The installation can
+now complete successfully, and login succeeds.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1444367[*BZ#1444367*])
+
+* Docker 1.12.6-34 uses *_/etc/containers/registries.conf_* to define registries,
+but the {product-title} installer uses *_/etc/sysconfig/docker_*. As a result,
+system containers were reading registry information from the incorrect file. The
+code was changed to duplicate the registries in both locations to ensure
+additional/blocked/insecure registries are honored.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1460930[*BZ#1460930*])
+
+* A containerized installation with system containers enabled
+(`use_system_containers=true`) failed due to missing mounts. The code was
+updated so that the install performs as expected.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1463574[*BZ#1463574*])
+
+* The {product-title} installation would correctly fail if the public host name
+was 64 characters or longer. However, the error message displayed did not report
+the source of the failure. The installer has been changed to report if the
+installation failed due to hostname length.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1467790[*BZ#1467790*])
+
+* When installing the service catalog, the template service broker (TSB) was not
+getting created. As a result, the TSB had to be created manually. The code has
+been changed so that the TSB is created automatically.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1470623[*BZ#1470623*])
+
+* Input for `include_granted_scopes`, which was expected to be a single quoted
+boolean string, was instead being interpreted and written to the file
+incorrectly. The resulting configuration file could have the wrong value for
+`include_granted_scopes`. The code block that attempted to interpret the input
+for `include_granted_scopes` has been removed. Input provided via
+`include_granted_scopes` now passes to *_master-config.yml_* as expected.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1488505[*BZ#1488505*])
+
+* Because the Docker image availability health check did not support
+authenticated registries, checks failed when running against an authenticated
+registry. The code was changed to allow Docker to health check authenticated
+registries.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1488833[*BZ#1488833*])
+
+* Running the `redeploy-router-certificates.yml` playbook caused the
+router pod to fail (`CrashLoopBackOff`). The code was changed so that after
+running the `redeploy-router-certificates.yml` playbook, the router pod runs as
+expected.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1490186[*BZ#1490186*])
+
+* With Ansible 2.3, warnings are issued when using Jinja delimiters in `when`
+conditions. The delimiters have been removed from the code base to avoid these
+warnings.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1490268[*BZ#1490268*])
+
+* Due to an earlier code change, the installation failed when giving a wildcard
+certificate to the installer. The code has been changed to properly copy a
+wildcard certificate during installation.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1492786[*BZ#1492786*])
+
+* Because of internal refactoring, the list of hostnames in the `NO_PROXY` file
+was empty. The facts have been restored, and the list of `NO_PROXY` names is
+correctly defined.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1495142[*BZ#1495142*])
+
+* When `openshift_docker_use_system_container` was set to `false`, the installer
+was incorrectly attempting to start the container engine, resulting in the
+installation failing. The installer code was changed and the installation
+proceeds as expected.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1496725[*BZ#1496725*])
+
+* The installer can now use an inventory specified as a directory rather than just
+a single file. This adds a parameter, `INVENTORY_DIR`, to the openshift-ansible
+image so that the user can indicate that `ansible-playbook` should use a mounted
+inventory directory.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1498908[*BZ#1498908*])
+
+* The logic for selecting the Enterprise registry was moved to a location that
+was never read when installing system containers. Enterprise installs using
+system containers would fail because the openshift-ansible image could not be
+found in the Docker Hub registry. The enterprise registry logic was moved into a
+high-level playbook so that it is set for all runtime setups. The enterprise
+images can now be found, and installation works.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1503860[*BZ#1503860*])
+
+* Due to recent simplification and refactoring, there was a possibility of
+*_/etc/atomic.conf_* not being updated with proxy values before the first
+`atomic` command was executed. Proxy use with the `atomic` command did not work
+during the install. A new `openshift_atomic` role has been created for
+Atomic-specific tasks. The first task added handles updating
+*_/etc/atomic.conf_* to ensure the proper proxy configuration is set. This task
+file is then included (via `include_role`) in system container related task
+files. The `atomic` command is now always able to use the properly defined proxy
+settings.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1503903[*BZ#1503903*])
+
+* An undefined variable was used in a task. The undefined variable caused a Jinja
+template evaluation error, which would crash the installation. The undefined
+variable has been removed and replaced with more informative error text. The
+playbook no longer errors out for external NFS storage class installations.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1504535[*BZ#1504535*])
+
+* The OpenShift Health Checker was not part of an Installer Phase and was not
+reported after playbook execution. The OpenShift Health Checker section of the
+primary installer path has been moved to its own section and an installer
+'phase' has been added to report on installer status.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1504593[*BZ#1504593*])
+
+* When updating the `openshift-ansible` package, all subpackages are now updated
+in order to keep them in sync.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1506971[*BZ#1506971*])
+
+* The NetworkManager dispatcher script responsible for configuring a host to use
+dnsmasq operated in a non-atomic manner, resulting in failed DNS queries during
+boot up. The script has been refactored to ensure that required services are
+verified before *_/etc/resolv.conf_* is reconfigured.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1410288[*BZ#1410288*])
+
+* Using the Ansible installer to install metrics with dynamic storage failed.
+Installation now fails if the parameter `storage kind = 'dynamic'` is set
+without enabling dynamic provisioning.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1415297[*BZ#1415297*])
+
+* An error occurred from the yum module during the upgrade process. Yum
+transactions are now retried.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1479533[*BZ#1479533*])
+
+* The 'registry-console' image stream did not have a source tag specified, causing
+it to be improperly imported. The source tag has been added to the image stream,
+ensuring that it imports properly.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1480442[*BZ#1480442*])
+
+* When enabling API aggregation with the ovs-multitenant SDN driver, creating a
+ global project failed due to a performance latency issue. While creating a
+ global project, the netnamespace is now checked to ensure availability and the
+ Ansible Playbook Bundle finishes the operation.
+ (link:https://bugzilla.redhat.com/show_bug.cgi?id=1487959[*BZ#1487959*])
+
+* The device mapper kernel modules may not have been loaded on a host if
+`overlay2` storage was used, which prevented the gluster storage system from
+working properly. With this fix, the installer now ensures that when gluster is
+used the `dm_thin_pool`, `dm_snapshot`, and `dm_mirror` modules are loaded.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1490905[*BZ#1490905*])
+
+* Previously, if there was no DNS search path in *_/etc/resolv.conf_*, then the
+NetworkManager dispatcher would omit adding `cluster.local` to the search path.
+With this bug fix, the dispatcher script was updated to ensure that a search
+path is created if one did not already exist.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1496593[*BZ#1496593*])
+
+* The example inventories have been updated to clearly indicate that the NFS
+export directory must only consist of lowercase alphanumeric characters, hyphens
+or periods, and must start and end with an alphanumeric character.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1488366[*BZ#1488366*])
+
+*Logging*
+
+* Messages were read into Fluentd’s memory buffer and were lost if the pod was
+restarted, because Fluentd considered them read but they were not pushed to
+storage. This caused the loss of any message not stored but already read by
+Fluentd. This fix replaced the memory buffer with a file-based buffer. As a
+result, the file-buffered messages are pushed to storage once Fluentd restarts.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1460749[*BZ#1460749*])
+
+* Kibana visualizations and a dashboard for monitoring container and pod logs
+allow administrator users, cluster-admin or cluster-reader, to view logs by
+deployment, namespace, pod, and container. The script
+`es_load_kibana_ui_objects` is used to load dashboards and other Kibana UI
+objects for the given user. To use it, run `oc exec $espod --
+es_load_kibana_ui_objects user-name`. It exists inside the Elasticsearch and
+ES-OPS pods, and must be run inside those pods. Additionally, it requires some
+indices and other objects set up by the OpenShift Elasticsearch plug-in, so the
+user must log in to Kibana or Elasticsearch before using this script. This will
+also add an index pattern for `project.*` and load the necessary index pattern
+file. The Kibana visualizations and dashboard give administrators an easier way
+to view Kubernetes/OpenShift related logs in the cluster, allowing admin users
+to have graphs and a dashboard to use to view logs from OpenShift pods and
+containers.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1467963[*BZ#1467963*])
+
+* The execute bit in the downstream repo was previously not set for `run.sh`.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1474715[*BZ#1474715*])
+
+* The value of the `buffer_chunk_limit` is now configurable, and defaults to 1M.
+To configure the `buffer_chunk_limit`, set the value of the environment variable
+`BUFFER_SIZE_LIMIT` or `openshift_logging_fluentd_buffer_size_limit` in the
+Ansible inventory file. To cover various types of input, `buffer_chunk_limit`
+needs to be configurable. The "size of the emitted data exceeds
+buffer_chunk_limit" error can be fixed by configuring `buffer_chunk_limit`.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1413147[*BZ#1413147*])
+
+* Role permissions were generated based upon the project, causing queries to be
+disallowed if they involved multiple indices. This fix generates role
+permissions based on the user and not the project, allowing users to query
+across multiple indices.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1445425[*BZ#1445425*])
+
+* The `openshift-elasticsearch-plugin` was creating ACL roles based on the
+provided name, which could include slashes and commas. This caused the dependent
+`lib` to not properly evaluate roles. This fix hashes the name when creating ACL
+roles so they no longer contain the invalid characters. Now, users can use
+Kibana and logging.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1456584[*BZ#1456584*])
+
+* The `ansible` parameter name was confusing and did not properly reflect how
+it was consumed by Fluentd. This fix removed the parameter, allowing Fluentd to
+consistently collect logs based on the source it detects.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1466152[*BZ#1466152*])
+
+* Elasticsearch was logging to console logs, resulting in Elasticsearch ending up
+in a feedback loop ingesting its own logs. This fix turned off console logs in
+favor of file logs. As a result, the feedback loop is broken, but users will
+need to set up the Elasticsearch log volume with file rotation to get
+Elasticsearch logs. Additionally, `oc logs` against an Elasticsearch pod will no
+longer be sufficient to retrieve Elasticsearch pod logs.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1432607[*BZ#1432607*])
+
+* The Elasticsearch default value for sharing storage between Elasticsearch
+instances was wrong. The incorrect default value allowed an Elasticsearch pod
+starting up (when another Elasticsearch pod was shutting down) to create a new
+location on the PV for managing the storage volume, duplicating data, and in
+some instances, potentially causing data loss. With this fix, all Elasticsearch
+pods now run with `node.max_local_storage_nodes` set to `1`. As a result,
+Elasticsearch pods starting up and shutting down will no longer share the same
+storage, preventing the data duplication and data loss.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1460564[*BZ#1460564*])
+
+* Underscores are now used instead of dashes when providing memory switches to
+the Node.js runtime. As a result, the Node.js interpreter understands the
+request.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1464020[*BZ#1464020*])
+
+* The `openshift_logging_purge_logging` Ansible variable was introduced to purge
+logging persistent data. Because `openshift_logging_install_logging=false`
+keeps persistent data, there was a need for a complete uninstall. As a result,
+there are no changes to `openshift_logging_install_logging`, with the additional
+variable `openshift_logging_purge_logging` available for a complete uninstall.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1467265[*BZ#1467265*])
+
+* In the configuration for the Fluentd systemd input plug-in, the
+`read_from_head` parameter was not set properly based on the environment
+variable `JOURNAL_READ_FROM_HEAD` or its corresponding Ansible parameter
+`openshift_logging_fluentd_journal_read_from_head`. Due to the problem, the full
+contents of pre-existing logs were indexed instead of the latest logs captured
+by "tail" when a `pos_file` did not exist, which happens when the logging
+system is initially deployed or a `pos_file` is deleted. With this bug fix, the
+parameter is correctly set. Based on the setting, if
+`JOURNAL_READ_FROM_HEAD=true`, all the logs are indexed; if
+`JOURNAL_READ_FROM_HEAD=false`, logs read from the "tail" are indexed when a
+`pos_file` does not exist.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1488941[*BZ#1488941*])
+
+* When deploying `logging-fluentd` with `secure-forward` to send the collected
+logs to `logging-mux`, it requires `openshift_logging_mux_client_mode=maximal`
+with `openshift_logging_use_mux=True` in the Ansible inventory if the Fluentd
+container and the `mux` container are on the same node. If
+`openshift_logging_mux_client_mode=maximal` is set without
+`openshift_logging_use_mux=True`, the `mux` secret directory
+*_/etc/fluent/muxkeys_* is mounted in the Fluentd container although the secret
+directory does not exist. This made Fluentd hang when it tried to access the
+`mux` secrets at startup time. This patch checks the values of
+`openshift_logging_mux_client_mode` and `openshift_logging_use_mux` in the
+Ansible playbook, and if the former is true while the latter is false, it
+does not mount the `mux` secret directory in the Fluentd container. Also, if the
+Fluentd start script finds that the `mux` secret directory does not exist, it
+disables `openshift_logging_mux_client_mode` even if it is enabled.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1490647[*BZ#1490647*])
+
+* The `json-file` parser was assuming the "time" field was a Time object instead
+of a String object, which does not have a "utc" method, causing the logs to fill
+with errors. This fix checks the type of the object in the "time" field, and
+converts the String to a Time object if necessary. As a result, `json-file` read
+time values are parsed correctly with no errors.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1491405[*BZ#1491405*])
+
+* The `openshift-elasticsearch-plugin` was creating ACL roles based on the
+provided name, which could include slashes and commas. This caused the dependent
+`lib` to not properly evaluate roles. This fix hashes the name when creating ACL
+roles so they no longer contain the invalid characters. As a result, users can
+use Kibana and logging.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1494239[*BZ#1494239*])
+
+*Web Console*
+
+* Previously, in the web console pod terminal, you could not enter third-level
+characters such as ‘|’ (pipe) using the AltGr key in some keyboard layouts. Now,
+AltGr combinations work properly in the web console pod terminal.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1292507[*BZ#1292507*])
+
+* In the web console, copying and pasting content from the terminal could result
+in extra spaces being added to the end of each line. Now, when you copy content
+from the terminal, no extra spaces are added.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1395564[*BZ#1395564*])
+
+* The left navigation column did not support vertical scrolling. When the browser
+viewport was less than 440 pixels tall and wider than 768 pixels, the bottom
+left navigation link was not accessible. The new left navigation column markup
+supports vertical scrolling. Now, all left navigation links are accessible at
+all browser viewport sizes and zoom levels.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1375134[*BZ#1375134*])
+
+* Previously, on iOS Safari, number inputs used the full keyboard rather than the
+number input. Now inputs that accept only numbers show the iOS number pad for
+easier entry.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1470976[*BZ#1470976*])
+
+* Previously, some requests for templates in the web console could time out or
+ take a long time to complete over high latency network connections. This could
+ cause an error when loading the *Add to Project* page. The web console can now
+ load templates using much less data, which fixes the problem.
+ (link:https://bugzilla.redhat.com/show_bug.cgi?id=1471033[*BZ#1471033*])
+
+* Help text on the Route creation and editing pages has been clarified to make it
+clear that the CA certificates should be certificate chains.
+ (link:https://bugzilla.redhat.com/show_bug.cgi?id=1471155[*BZ#1471155*])
+
+* A known bug in Internet Explorer resulted in the layout of pod charts
+overflowing their containers on the overview page. As a result, the pod charts
+looked misaligned in the UI. The fix involved increasing the specificity on
+some CSS declarations so that they only apply when they are needed, which is
+during a deployment when the pod charts are being animated. As a result, the pod
+charts appear correctly aligned in Internet Explorer.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1473512[*BZ#1473512*])
+
+* A known bug in Internet Explorer resulted in the layout of catalog items taking
+up too much space. As a result, not all the catalog items were visible in
+Internet Explorer. The fix involved adding an additional CSS declaration as a
+workaround for IE. As a result, the catalog items now take up the correct space
+in IE.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1473615[*BZ#1473615*])
+
+* The code was using an empty `envFrom` entry when creating or editing an
+environment variable, causing a validation failure when adding or editing an
+environment variable using the *Deployment Configuration* page of the web
+console. The user would receive an error that the deployment configuration is
+invalid. The `envFrom` entry is now properly submitted and the user can add or
+edit environment variables from the web console.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1502914[*BZ#1502914*])
+
+* Various errors in the source code prevented config maps from being available
+in the drop-down menu on the *Edit Deployment Config* page for pre- and
+post-hooks when using *Add Value from Config Map or Secret*. These errors have
+been corrected, and config maps now appear in the appropriate drop-downs.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1502914[*BZ#1502914*])
+
+* Previously, secrets with null values would display incorrectly when values were
+revealed on the secret details page. Now the web console will correctly display
+the secret key as having no value.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1510346[*BZ#1510346*])
+
+* Previously, there was a quirk in the drag-and-drop behavior of the key value
+editor. While reordering an env var, it might jump more than a single node at a
+time. This bug fix ensures that the drag-and-drop behavior will behave as
+expected.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1428991[*BZ#1428991*])
+
+* On the project overview, the *Application* drop-down menu was incorrectly set to
+`overflow: hidden`. As a result, when the application row was collapsed, the
+menu did not display fully. The `overflow: hidden` parameter has been removed,
+and the menu is now fully visible.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1460153[*BZ#1460153*])
+
+* Previously, deleting a service account would ignore the service account's
+namespace. This meant that the delete action from the web UI could delete
+multiple service account rolebindings under the service account tab if service
+accounts from different namespaces had the same name. The delete action on the
+service account tab now respects the namespace and only deletes the specified
+service account rolebinding from the correct namespace.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1507730[*BZ#1507730*])
+
+* The *Configuration* tab of the *Deployment* page in the web console was laid out
+in such a way that a large gap could appear when the right column contents were
+longer than the left column contents. The fix involved changing the layout
+markup so the gap does not appear. The result is there is no longer a gap
+between Volumes and Triggers when the right column content is longer than the
+left column content.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1505255[*BZ#1505255*])
+
+*Master*
+
+* Ansible installed with a caBundle on the service catalog API service, resulting
+in a _500 Internal Server Error_ on the product overview page in the web
+console. The installer was changed to install with the `insecureSkipTLSVerify`
+flag set to `true`. As a result, the product overview page works as expected.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1473523[*BZ#1473523*])
+
+* CronJobs are placed in the batch/v2alpha1 group, whereas other batch resources
+are placed in batch/v1. Due to this fact, some API machinery did not handle
+multi-versioning properly. The restmapper, which is responsible for matching a
+resource with the appropriate API group version to handle multi-versioned APIs,
+was updated. Describing resources now works as expected.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1480453[*BZ#1480453*])
+
+* The installer was configured to watch specific resources that do not support
+watching. As a result, the *_/var/log/messages_* file was reporting errors and
+warnings related to the issue. The installer has been corrected to not watch
+these resources, and the errors and warnings are no longer generated.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1452206[*BZ#1452206*])
+
+* Creating a project using a project template did not use the substituted project
+name, but the namespace name. As a result, the user was not able to use a
+parameterized name as a project name, as the generated suffix or prefix might be
+dropped. The code was changed to allow the use of the substituted project name
+when creating the namespace.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1454535[*BZ#1454535*])
+
+* Node status information was getting rate-limited during heavy traffic, causing
+some nodes to fall into not-ready status. The code was changed to use a separate
+connection for node healthiness. As a result, node status is reported without
+any problems.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1464653[*BZ#1464653*])
+
+* Running multiple clusters in a single availability zone in AWS requires that
+resources be tagged. If the clusters are not tagged, the clusters will not work
+properly. The master controllers process will require a ClusterID on resources
+in order to run. Existing resources will need to be tagged manually. Multiple
+clusters in one availability zone will work properly once tagged.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1468579[*BZ#1468579*])
+
+* An upstream patch caused an error with the `oc apply` command when a patch
+deleted an element from an array (for example, `env`) and then reordered or
+modified another array (for example, `volumeMounts`). The `kubectl apply`
+command failed with the error _unable to find api field in struct Container for
+the json field "$setElementOrder/env"_. The algorithm was updated so that it
+continues operation under the described condition. The `oc apply` command now
+works without any problems.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1497325[*BZ#1497325*])
+
+*Metrics*
+
+* When either a certificate within the chain at `serviceaccount/ca.crt` or any of
+the certificates within the provided truststore file contain white space after
+the `BEGIN CERTIFICATE` declaration, the Java keytool rejects the certificate
+with an error, causing Origin Metrics to fail to start. As a workaround, Origin
+Metrics will now attempt to remove the spaces before feeding the certificate to
+the Keytool. Admins should ensure their certificates don't contain such spaces.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1503450[*BZ#1503450*])
+
+* When deleting a large number of pods, the *hawkular-metrics* pod log reports
+_Pool is busy_ errors. The condition was fixed upstream in Cassandra, and
+clusters with a large number of pods should not report the _Pool is busy_
+error. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1451209[*BZ#1451209*])
+
+* When opening the metrics page in a disconnected environment, Hawkular attempted
+ to connect to external web sites, such as fonts.googleapis.com. Because the
+ cluster cannot connect to the Internet, the metrics page loaded slowly. Changes
+ were made upstream so that Hawkular does not attempt to connect to external web
+ sites when there is no access to the Internet. As a result, in a disconnected
+ environment, the metrics page loads properly.
+ (link:https://bugzilla.redhat.com/show_bug.cgi?id=1466403[*BZ#1466403*])
+
+* In Cassandra, it is possible that new generation objects (set with the `-Xmn`
+flag) can exceed the maximum size of the Java memory heap (set with the `-Xmx`
+flag). If that happens, the JVM will log a warning at start up, but Cassandra
+still starts. The code was changed to set the size of new generation objects at
+¼ of the maximum heap size.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1471239[*BZ#1471239*])
+
+* Cassandra metrics would not start up if the commit log exceeded the limit
+applied to the log. An out-of-memory (OOM) condition would cause metrics to
+constantly start and stop. The commit log size is now based on total available
+memory. Also, log compression is no longer used, which will reduce the demand on
+resources. As a result, large logs should not affect metrics operation.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1473013[*BZ#1473013*])
+
+*Networking*
+
+* When changes were made to the software-defined network (SDN) plug-in, the
+master controller would fail to start when there were headless services in the
+cluster. As a result, when initializing, {product-title} SDN failed to allow a
+nil service IP, and {product-title} was unable to start. The code was changed to
+allow nil as a valid value of `srv.Spec.ClusterIP`. {product-title} SDN now
+starts properly after changing networks with a headless service.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1451881[*BZ#1451881*])
+
+* The node's local IP address was not part of the Open vSwitch (OVS) rules. If you
+denied 0.0.0.0/0 and allowed a DNS name in the egress network policy, the node
+was not able to reach that allowed address because DNS name resolution was
+blocked. The local node IP has been added to the OVS allow rule so that name
+resolution is not blocked, and a note has been added to the documentation for
+the case when DNS resolution does not happen on the node. {product-title} can
+now successfully block 0.0.0.0/0 as a `cidrSelector` and allow specific DNS
+names through.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1458849[*BZ#1458849*])
+
+* If the `service network restart` command is executed on a machine while the
+{product-title} node process is running, a `stop()` function properly disables
+IP forwarding. However, the `start()` function was not re-enabling it. The code
+was changed to persist IP forwarding on nodes during network restarts.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1477716[*BZ#1477716*])
+
+* While upgrading nodes, if any invalid network CIDRs were detected, nodes might
+be unable to upgrade and would fail. The code was changed to not fail with
+invalid CIDRs.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1506017[*BZ#1506017*])
+
+* The Kubernetes CNI (Container Network Interface) plug-in generated errors if
+`hostNetwork=true` was configured for pods. This issue has been fixed.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1507257[*BZ#1507257*])
+
+* Because of upstream issues in Kubernetes, vSphere had networking problems when
+used with {product-title}. The periodic resync of Kubernetes into
+{product-title} included the required changes. vSphere now works correctly.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1433236[*BZ#1433236*])
+
+* Because of changes in upstream Kubernetes, the `oadm join-projects`, `oadm
+isolate-projects`, and other commands that depend on the pod update operation
+did not work. The code was changed to fetch some required elements from the
+Container Runtime Interface (CRI) directly. As a result, the pod update
+operation works correctly and the commands work as expected.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1453190[*BZ#1453190*])
+
+* Because of default authorization, project administrators (standard users) were
+not able to manage network policies for their own projects. Changes to the code
+now allow project administrators to create, delete, and list the network
+policies in their own projects.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1461208[*BZ#1461208*])
+
+* An invalid HostSubnet could not be fixed. As a result, if a node with an invalid
+HostSubnet was restarted, the node assigned to the HostSubnet would fail to
+start. The code has been changed to allow an invalid HostSubnet to be changed,
+using commands such as `oc edit hostsubnet`.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1466239[*BZ#1466239*])
+
+* Adding an IPv6 address to a host subnet as an egress resulted in a panic error.
+The code has been changed to better handle IPv6 addresses with a meaningful
+error message.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1500664[*BZ#1500664*])
+
+* Using ipfailover when a node fails ensures that a second node receives traffic.
+Previously, traffic went back to the first node once it was back up, potentially
+causing traffic imbalance. Now, the `--preemption-strategy="nopreempt"` option
+allows the administrator to control the default strategy, meaning that the
+strategy of switching back to a higher priority node can be suppressed.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1465987[*BZ#1465987*])
+
+* A log message similar to the following was repeatedly appearing:
++
+----
+LoadBalancerRR: Removing endpoints for ops-health-monitoring/pull-07062050z-ie:8080-tcp
+----
++
+This caused the logs to be filled with information not deemed important. The
+message has been hidden from the logs.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1468420[*BZ#1468420*])
+
+* Previously, the image for the default network diagnostics pod was mismatched,
+causing the diagnostics to fail. The image checking has been fixed, and the
+network diagnostics now work without errors.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1481147[*BZ#1481147*])
+
+* Previously, conntrack entries for UDP traffic were not erased when an endpoint
+was added for a service that previously had no endpoints. This meant that the
+system could end up incorrectly caching a rule that would cause traffic to that
+service to be dropped rather than being sent to the new endpoint. The relevant
+conntrack entries are now deleted at the right time, meaning that UDP services
+work correctly when endpoints are added and removed.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1487438[*BZ#1487438*])
+
+*Pod*
+
+* Previously, network debug tests were showing errors about not being able to
+read stats from a changing pod. This was because, even though the container
+process had exited, the cgroup was not removed, leading to a Docker container
+with no tasks. The log spam has been reduced.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1328913[*BZ#1328913*])
+
+* Because of an outdated Go format, kubemark-scale was consistently failing. The
+version of Golang was updated, stopping the failures.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1454239[*BZ#1454239*])
+
+* Previously, the HPA V1 was unable to get the metrics from the CPU resource.
+This was due to the custom setup of the HPA controller changing. The settings
+have been restored.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1458663[*BZ#1458663*])
+
+* Previously, multi-node environments produced “Failed to watch” errors. This was
+because the controller did not have permission to watch resources, which meant
+its behavior was to retry every second by default. The controller has been
+given the permission to watch resources.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1465361[*BZ#1465361*])
+
+* Previously, the OpenShift master failed to start when using OpenStack
+integration without Neutron LBaaS available. The issue now gives a warning
+instead of a failure, which means the master will start successfully even if
+LBaaS is not available.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1465722[*BZ#1465722*])
+
+* A log message similar to the following was repeatedly appearing:
++
+----
+LoadBalancerRR: Removing endpoints for ops-health-monitoring/pull-07062050z-ie:8080-tcp
+----
++
+This caused the logs to be filled with information not deemed important. The
+message has been hidden from the logs.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1468420[*BZ#1468420*])
+
+* Previously, the image for the default network diagnostics pod was mismatched,
+causing the diagnostics to fail. The image checking has been fixed, and the
+network diagnostics now run without errors.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1481147[*BZ#1481147*])
+
+* Previously, conntrack entries for UDP traffic were not erased when an
+endpoint was added for a service that previously had no endpoints. This meant
+that the system could end up incorrectly caching a rule that would cause
+traffic to that service to be dropped rather than being sent to the new
+endpoint. The relevant conntrack entries are now deleted at the right time, so
+UDP services work correctly when endpoints are added and removed.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1487438[*BZ#1487438*])
+
+*Pod*
+
+* Previously, network debug tests were showing errors about not being able to
+read stats from a changing pod. This was because the container process had
+exited but the cgroup was not removed, leading to a Docker container with no
+tasks. The log spam has been reduced.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1328913[*BZ#1328913*])
+
+* Because of an outdated Go format, the kubemark-scale test was consistently
+failing. The version of Golang was updated, stopping the failures.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1454239[*BZ#1454239*])
+
+* Previously, the HPA v1 was unable to get metrics from the resource CPU. This
+was due to a change in the custom setup of the HPA controller. The settings
+have been restored.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1458663[*BZ#1458663*])
+
+* Previously, multi-node environments produced "Failed to watch" errors. This
+was because the controller did not have permission to watch resources, which
+meant its behavior was to retry every second by default. The controller has
+been given permission to watch resources.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1465361[*BZ#1465361*])
+
+* Previously, the OpenShift master failed to start when using OpenStack
+integration without Neutron LBaaS available. The issue now produces a warning
+instead of a failure, which means the master starts successfully even if LBaaS
+is not available.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1465722[*BZ#1465722*])
+
+* Previously, projected volumes were not included in security context
+constraints, meaning that pods could not use projected volumes. The projected
+volumes have been added to the correct SCCs, and projected volumes can now be
+used as expected.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1448816[*BZ#1448816*])
+
+* Init containers with resource requests or limits were producing error
+messages. This was due to a mismatch in the sum of a pod's container
+resources, resulting in the parent cgroup choosing the incorrect resource. The
+issue has been fixed upstream and the correct resources are now chosen.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1459826[*BZ#1459826*])
+
+* Previously, when a deployment configuration was created without any memory
+information while quota restrictions were in place, no error message would
+appear. The expected result was a "FailedCreate" event, much like with
+replication controllers. The "FailedCreate" event now appears when the pod
+immediately fails.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1465801[*BZ#1465801*])
+
+* A design limitation in previous versions did not account for memory-backed
+volumes against the pod's cumulative memory limit, so it was possible for a
+user to exhaust memory on the node by creating a large file in a memory-backed
+volume, regardless of the memory limit. Pod-level cgroups have now been added
+to, among other things, enforce limits on memory-backed volumes, so
+memory-backed volume sizes are now bound by cumulative pod memory limits.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1422049[*BZ#1422049*])
+
+* Previously, upgrading to 3.4 gave an "insufficient pods" error. This was due
+to a change in configuration from a `max-pods` variable to the smaller of 250
+or 10 pods per core, which broke installations with fewer pods. The change has
+been made so that the `max-pods` variable is again the limiting variable.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1430484[*BZ#1430484*])
+
+* Previously, error messages in the status field of failed builds said "error"
+instead of an actual error message, because the status showed only the failed
+pod message returned by the Docker daemon. The status now returns a more
+helpful error message.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1449820[*BZ#1449820*])
+
+* Previously, registry pods were occasionally reporting liveness and readiness
+probe failures with the message `http2: no cached connection was available`.
+This was due to an upstream issue where the liveness and readiness probes got
+in the way of each other. The problem has been fixed upstream, and the fix is
+included in {product-title} 3.7.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1454858[*BZ#1454858*])
+
+* Large clusters with many HPAs or unhealthy pods sent a large number of
+events if an object was unable to reach its desired state. This bug fix
+updates the event client to protect against spamming master components. As a
+result, this controls traffic to the masters and reduces writes to etcd.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1466933[*BZ#1466933*])
+
+* For all resources other than pods or PVCs, the quota controller would make a
+LIST call per namespace to determine current usage counts, causing quota
+recalculation to take an extended period of time. This bug fix reduces the
+LIST calls made by the resource quota controller by using shared informer
+caches. As a result, LIST operations made to the master were reduced and
+information is pulled from a shared cache in the controller.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1473370[*BZ#1473370*])
+
+* Previously, users were not able to look up PVC information for the Drupal
+database without receiving scheduler log spam. This bug fix prevents
+unnecessary logging of a harmless error from a PVC-related scheduler
+predicate.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1475558[*BZ#1475558*])
+
+* Previously, messages originating from the AWS SDK were causing partial log
+entries because of new lines in the message itself. Error messages are now
+properly quoted so that each message is logged in full as a single entry.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1462445[*BZ#1462445*])
+
+*Routing*
+
+* Previously, the help information included a redundant example. This bug fix
+removed the redundant example. As a result, the help information is now more
+concise.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1440620[*BZ#1440620*])
+
+* Previously, the code path automatically prepended the partition name to the
+vserver name. If the vserver was at a path depth greater than one, the path
+was lost because only the partition name was prepended. This bug fix prepends
+the entire path of the vserver instead of just concatenating the partition
+name and vserver name.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1465304[*BZ#1465304*])
+
+* Previously, with a router from an earlier version of {product-title},
+accessing the router stats without credentials resulted in a 403 HTTP status.
+The web browser did not prompt the user for a password, so the stats were
+inaccessible. The code has been updated to return a 401 status when no
+credentials are passed; the browser now prompts the user for a password, and
+the router stats are visible in a web browser.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1467257[*BZ#1467257*])
+
+* Previously, the IP failover keepalived image did not support IPv6 addresses
+or ranges, or IP address validation. Adding IPv6 addresses to the `oadm
+ipfailover` command resulted in a new VRRP section pertaining to the wrong
+address. The code has been updated, and entering invalid IPv4 or IPv6
+addresses now returns an error as expected.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1459960[*BZ#1459960*])
+
+* Previously, the x-forwarded header and its associated information displayed
+IPv6 addresses in IPv4 form. The `ROUTER_IP_V4_V6_MODE` environment variable
+has been created to control which form is displayed, as shown in the example
+below.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1471255[*BZ#1471255*])
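++
+As a minimal sketch, the variable can be set on the router deployment
+configuration with `oc set env`. The deployment configuration name and the
+mode value shown are illustrative; check the router documentation for the
+supported modes:
++
+----
+$ oc set env dc/router ROUTER_IP_V4_V6_MODE=v4
+----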
+
+* Previously, the locking was overly broad, so events were not processed while
+an HAProxy reload was happening. This meant that route changes could take
+hours to process. The locking has been made more fine-grained so that events
+can be processed in parallel, and changes are now processed within the time of
+two router reloads.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1471899[*BZ#1471899*])
+
+* A missing lock around a router data structure caused errors that made the
+router pod occasionally crash and restart. The locking has been fixed, and the
+router now works as expected.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1473031[*BZ#1473031*])
+
+* When running the `oc adm router --expose-metrics` command, the router
+deployment failed because the generated deployment configuration object was
+not compatible. This was due to an upstream change. The `oc adm router`
+command has been updated, and it can now handle `--expose-metrics`.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1488954[*BZ#1488954*])
+
+* Previously, multiple service catalog objects named "default" were not a
+problem, but a change made them all top level. This bug fix makes the object
+names unique.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1420543[*BZ#1420543*])
+
+*Service Broker*
+
+* Previously, a fresh installation using the `openshift-ansible` method with a
+`service-catalog` resulted in the service class being empty and the stage
+registry giving a bad response. The administrator had to check the ASB logs
+and trigger a manual bootstrap. Now, if the bootstrap fails, the broker fails,
+and the kubelet retries the process until it works correctly.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1468173[*BZ#1468173*])
+
+* Previously, the `service-catalog` binaries for the API server and controller
+manager reported `UNKNOWN` when run with the `--version` option. They now
+report the correct value.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1476134[*BZ#1476134*],
+link:https://bugzilla.redhat.com/show_bug.cgi?id=1475251[*BZ#1475251*])
+
+* Previously, when deleting a namespace, the Ansible Service Broker (ASB)
+attempted to execute deprovision playbook actions using a namespace in a
+"terminating" state. The APB actions were rejected because the namespace was
+terminating; as a result, deprovisioning failed, and both the APB deprovision
+sandbox and the target namespace were not deleted. Now, instead of executing
+APB actions on namespace deletion, the records of the services to be
+deprovisioned are cleaned up, allowing Kubernetes to delete the resources
+normally, so the target namespace is properly deleted.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1476173[*BZ#1476173*])
+
+[[ocp-37-technology-preview]]
+== Technology Preview Features
+
+Some features in this release are currently in Technology Preview. These
+experimental features are not intended for production use. Please note the
+following scope of support on the Red Hat Customer Portal for these features:
+
+https://access.redhat.com/support/offerings/techpreview[Technology Preview
+Features Support Scope]
+
+The following new features are now available in Technology Preview:
+
+- Prometheus Cluster Monitoring
+- xref:ocp-37-advanced-auditing[Advanced Auditing]
+- xref:ocp-37-local-persistent-volumes[Local Storage Persistent Volumes]
+- xref:ocp-37-crio[CRI-O]
+- xref:ocp-37-tenant-driven-storage-snapshotting[Tenant-driven Storage Snapshotting]
+
+The following features that were formerly in Technology Preview from a previous
+{product-title} release are now fully supported:
+
+- xref:../architecture/service_catalog/index.adoc#architecture-additional-concepts-service-catalog[Service Catalog]
+- xref:../install_config/install/advanced_install.adoc#configuring-template-service-broker[Template Service Broker]
+- xref:ocp-37-openshift-ansible-broker[OpenShift Ansible Broker]
+- xref:ocp-37-ansible-playbook-bundles[Ansible Playbook Bundles]
+- xref:../admin_guide/managing_networking.adoc#admin-guide-networking-networkpolicy[Network Policy]
+- xref:ocp-37-initial-experience[Initial Experience]
+- xref:ocp-37-add-from-catalog[Add from Catalog and Add to Project]
+- xref:ocp-37-search-catalog[Search Catalog]
+- xref:ocp-37-install-config-cfme-from-ocp-installer[Automated Installation of CloudForms Inside OpenShift]
+
+The following features that were formerly in Technology Preview from a previous
+{product-title} release remain in Technology Preview:
+
+- xref:../dev_guide/cron_jobs.adoc#dev-guide-cron-jobs[Cron Jobs (formerly called Scheduled Jobs)]
+- xref:../dev_guide/deployments/kubernetes_deployments.adoc#dev-guide-kubernetes-deployments-support[Kubernetes
+Deployments Support]
+- xref:../release_notes/ocp_3_5_release_notes.adoc#ocp-35-statefulsets[`StatefulSets`, formerly known as `PetSets`]
+- xref:../admin_guide/quota.adoc#limited-resources-quota[Require Explicit Quota to Consume a Resource]
+- xref:../architecture/additional_concepts/storage.adoc#pv-mount-options[Mount Options]
+- xref:../install_config/install/advanced_install.adoc#advanced-install-configuring-system-containers[Installation of etcd, Docker Daemon, and Ansible Installer as System Containers]
+- Running OpenShift Installer as a System Container
+- xref:ocp-37-integrated-approach-to-adding-hosa[Integrated Approach to Adding Hawkular OpenShift Agent]
+- Bind in Context
+- `mux`
+
+[[ocp-37-known-issues]]
+== Known Issues
+
+* The installer cannot deploy system container-based installations when the
+specified registry requires authentication credentials in order to pull the
+required system container images. The fix for this depends on an update to the
+`atomic` command, which will be updated after {product-title} 3.7 GA.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1505744[*BZ#1505744*])
+
+* A {product-title} 3.7 master will return an unstructured response instead of
+structured JSON when an action is forbidden. This is a known issue and will be
+fixed in {product-title} 3.8.
+
+* The volume snapshot Technology Preview feature may not be available to
+non-administrator users by default due to API RBAC settings. When the volume
+snapshot controller and provisioner are installed and running, the cluster
+administrator needs to configure API access to the VolumeSnapshot objects by
+creating roles and cluster roles, then assigning them to the desired users or
+user groups, as sketched below.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1502945[*BZ#1502945*])
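++
+As a minimal sketch, a cluster administrator could grant a user access to
+`VolumeSnapshot` objects with a cluster role. The role name, user name, API
+group, and resource name below are illustrative and depend on how the snapshot
+controller and provisioner are deployed:
++
+----
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: ClusterRole
+metadata:
+  name: volume-snapshot-user
+rules:
+- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
+  resources: ["volumesnapshots"]
+  verbs: ["get", "list", "watch", "create", "delete"]
+----
++
+----
+$ oc create -f volume-snapshot-user-role.yaml
+$ oc adm policy add-cluster-role-to-user volume-snapshot-user alice
+----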
+
+* {product-title} is unable to list known health checks.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1509157[*BZ#1509157*])
+
+* The current format of audit logs is difficult to consume. Some keys are
+duplicates, and some are misleading in that they match the wrong keys in the
+linux-audit dictionary.
+(link:https://bugzilla.redhat.com/show_bug.cgi?id=1496176[*BZ#1496176*])
+
+[[ocp-37-asynchronous-errata-updates]]
+== Asynchronous Errata Updates
+
+Security, bug fix, and enhancement updates for {product-title} 3.7 are released
+as asynchronous errata through the Red Hat Network. All {product-title} 3.7
+errata are https://access.redhat.com/downloads/content/290/[available on the
+Red Hat Customer Portal]. See the
+https://access.redhat.com/support/policy/updates/openshift[{product-title}
+Life Cycle] for more information about asynchronous errata.
+
+Red Hat Customer Portal users can enable errata notifications in the account
+settings for Red Hat Subscription Management (RHSM). When errata notifications
+are enabled, users are notified via email whenever new errata relevant to their
+registered systems are released.
+
+[NOTE]
+====
+Red Hat Customer Portal user accounts must have systems registered and
+consuming {product-title} entitlements for {product-title} errata notification
+emails to be generated.
+====
+
+This section will continue to be updated over time to provide notes on
+enhancements and bug fixes for future asynchronous errata releases of
+{product-title} 3.7. Versioned asynchronous releases, for example with the form
+{product-title} 3.7.z, will be detailed in subsections. In addition, releases
+in which the errata text cannot fit in the space provided by the advisory are
+also detailed in subsections.
+
+[IMPORTANT]
+====
+For any {product-title} release, always carefully review the instructions on
+xref:../install_config/upgrading/index.adoc#install-config-upgrading-index[upgrading your cluster].
+====