9 changes: 4 additions & 5 deletions _attributes/common-attributes.adoc
@@ -384,17 +384,16 @@ endif::openshift-origin[]
:ai-first: artificial intelligence (AI)
//RHEL AI attribute listed with RHEL family
//zero trust workload identity manager
:zero-trust-full: Zero Trust Workload Identity Manager for Red{nbsp}Hat OpenShift
:zero-trust-short: Zero Trust Workload Identity Manager
:zero-trust-full: Zero Trust Workload Identity Manager
:spiffe-full: Secure Production Identity Framework for Everyone (SPIFFE)
:svid-full: SPIFFE Verifiable Identity Document (SVID)
:spire-full: SPIFFE Runtime Environment
// Formerly on-cluster image layering
:image-mode-os-caps: Image mode for OpenShift
:image-mode-os-lower: image mode for OpenShift
// Formerly on-cluster layering
:image-mode-os-on-caps: On-cluster image mode
:image-mode-os-on-lower: on-cluster image mode
// Formerly out-of-cluster layering
:image-mode-os-out-caps: Out-of-cluster image mode
:image-mode-os-out-lower: out-of-cluster image mode
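These AsciiDoc attributes are referenced elsewhere in the documentation with curly-brace syntax. The snippet below is a small illustrative sketch of that mechanism; the surrounding sentence is an assumption for the example, not text from this PR:

[source,asciidoc]
----
// {image-mode-os-on-lower} resolves to the attribute value defined above,
// so this sentence renders as "... by using on-cluster image mode."
You can customize the node operating system by using {image-mode-os-on-lower}.
----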
@@ -16,17 +16,14 @@ ifndef::openshift-rosa,openshift-rosa-hcp[]
However, {oadp-short} does not serve as a disaster recovery solution for xref:../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd] or {OCP-short} Operators.
endif::openshift-rosa,openshift-rosa-hcp[]

[IMPORTANT]
====
{oadp-short} support is provided to customer workload namespaces and cluster scope resources.
{oadp-short} support is provided to customer workload namespaces and cluster-scoped resources.

Full cluster xref:../../backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc#backing-up-applications[backup] and xref:../../backup_and_restore/application_backup_and_restore/backing_up_and_restoring/restoring-applications.adoc#restoring-applications[restore] are not supported.
====

[id="oadp-apis_{context}"]
== {oadp-full} APIs

{oadp-short} provides APIs that enable multiple approaches to customizing backups and preventing the inclusion of unnecessary or inappropriate resources.
{oadp-first} provides APIs that enable multiple approaches to customizing backups and preventing the inclusion of unnecessary or inappropriate resources.
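As an illustration of that kind of customization, the following is a minimal sketch of a Velero-style `Backup` CR that limits a backup to a single namespace and filters out unneeded resources. The namespace and filter values are assumptions for the example, not values taken from this PR:

[source,yaml]
----
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: example-backup         # assumed name, for illustration only
  namespace: openshift-adp     # default OADP operator namespace
spec:
  includedNamespaces:
    - my-app                   # back up only this workload namespace (assumed)
  excludedResources:
    - events                   # exclude resources that are not needed in the backup
  storageLocation: default
  ttl: 720h0m0s
----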

OADP provides the following APIs:

7 changes: 4 additions & 3 deletions modules/configuration-ovnk-network-plugin-json-object.adoc
@@ -21,9 +21,10 @@ The CNI specification version. The required value is `0.3.1`.
|`name`
|`string`
|
The name of the network. These networks are not namespaced. For example, a network named `l2-network` can be referenced by `NetworkAttachmentDefinition` custom resources (CRs) that exist in different namespaces.
This configuration allows pods that use the `NetworkAttachmentDefinition` CR in different namespaces to communicate over the same secondary network.
However, the `NetworkAttachmentDefinition` CRs must share the same network-specific parameters, such as `topology`, `subnets`, `mtu`, `excludeSubnets`, and `vlanID`. The `vlanID` parameter applies only when the `topology` field is set to `localnet`.
The name of the network. These networks are not namespaced. For example, you can have a network named
`l2-network` referenced from two different `NetworkAttachmentDefinition` custom resources (CRs) that exist in two
different namespaces. This ensures that pods using the `NetworkAttachmentDefinition` CR in their own
namespaces can communicate over the same secondary network. However, those two `NetworkAttachmentDefinition` CRs must also share the same network-specific parameters, such as `topology`, `subnets`, `mtu`, and `excludeSubnets` (see the example sketch after this table excerpt).

|`type`
|`string`
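As referenced in the `name` description above, the following is a minimal sketch of two `NetworkAttachmentDefinition` CRs in different namespaces that share the network name `l2-network` and the same network-specific parameters. The namespace names, subnet, and MTU values are assumptions for illustration only:

[source,yaml]
----
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network
  namespace: tenant-a                    # assumed namespace
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "l2-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "10.100.200.0/24",
      "mtu": 1300,
      "netAttachDefName": "tenant-a/l2-network"
    }
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network
  namespace: tenant-b                    # assumed namespace
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "l2-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "10.100.200.0/24",
      "mtu": 1300,
      "netAttachDefName": "tenant-b/l2-network"
    }
----

Because both CRs use the same `name` and matching parameters, pods attached to either one join the same `layer2` secondary network.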
4 changes: 0 additions & 4 deletions modules/creating-manifest-file-customized-br-ex-bridge.adoc
@@ -84,8 +84,6 @@ interfaces:
    enabled: false
    dhcp: false
  bridge:
    options:
      mcast-snooping-enable: true
    port:
    - name: enp2s0 <5>
    - name: br-ex
@@ -185,8 +183,6 @@ spec:
    enabled: false
    dhcp: false
  bridge:
    options:
      mcast-snooping-enable: true
    port:
    - name: enp2s0 <6>
    - name: br-ex
22 changes: 9 additions & 13 deletions modules/nw-dual-stack-convert.adoc
@@ -13,14 +13,14 @@ As a cluster administrator, you can convert your single-stack cluster network to
After converting your cluster to use dual-stack networking, you must re-create any existing pods for them to receive IPv6 addresses, because only new pods are assigned IPv6 addresses.
====

Converting a single-stack cluster network to a dual-stack cluster network consists of creating patches and applying them to the network and infrastructure of the cluster. You can convert to a dual-stack cluster network for a cluster that runs on either installer-provisioned infrastructure or user-provisioned infrastructure.
Converting a single-stack cluster network to a dual-stack cluster network consists of creating patches and applying them to the cluster's network and infrastructure. You can convert to a dual-stack cluster network for a cluster that runs on installer-provisioned infrastructure.
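For orientation, the following is a minimal sketch of the kind of JSON-patch file that adds IPv6 address ranges to the cluster network object; the IPv6 CIDR values are assumptions for illustration, not values from this module:

[source,yaml]
----
- op: add
  path: /spec/clusterNetwork/-
  value:
    cidr: fd01::/48      # example IPv6 cluster network CIDR (assumed)
    hostPrefix: 64
- op: add
  path: /spec/serviceNetwork/-
  value: fd02::/112      # example IPv6 service network CIDR (assumed)
----

Such a file is applied with the `oc patch network.config.openshift.io cluster --type='json' --patch-file <file>.yaml` command shown later in this procedure.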

[NOTE]
====
Each patch operation that changes `clusterNetwork`, `serviceNetwork`, `apiServerInternalIPs`, and `ingressIP` objects triggers a restart of the cluster. Changing the `MachineNetworks` object does not cause a reboot of the cluster.
====

On installer-provisioned infrastructure only, if you need to add IPv6 virtual IPs (VIPs) for API and Ingress services to an existing dual-stack-configured cluster, you need to patch only the infrastructure and not the network for the cluster.
If you need to add IPv6 virtual IPs (VIPs) for API and Ingress services to an existing dual-stack-configured cluster, you need to patch only the cluster's infrastructure and not the cluster's network.

[IMPORTANT]
====
@@ -78,9 +78,7 @@ $ oc patch network.config.openshift.io cluster \// <1>
network.config.openshift.io/cluster patched
----

. On installer-provisioned infrastructure where you added IPv6 VIPs for API and Ingress services, complete the following steps:
+
.. Specify IPv6 VIPs for API and Ingress services for your cluster. Create a YAML configuration patch file that has a similar configuration to the following example:
. Specify IPv6 VIPs for API and Ingress services for your cluster. Create a YAML configuration patch file that has a similar configuration to the following example:
+
[source,yaml]
----
@@ -96,18 +94,16 @@ network.config.openshift.io/cluster patched
----
<1> Ensure that you specify an address block for the `machineNetwork` network where your machines operate. You must select both API and Ingress IP addresses for the machine network.
<2> Ensure that you specify each file path according to your platform. The example demonstrates a file path on a bare-metal platform.


. Patch the infrastructure by entering the following command in your CLI:
+
.. Patch the infrastructure by entering the following command in your CLI:
+
[source,terminal,subs="+quotes"]
[source,terminal,subs="+quotes,"]
----
$ oc patch infrastructure cluster \
$ oc patch infrastructure cluster \// <1>
--type='json' --patch-file <file>.yaml
----
+
Where:
+
<file>:: Specifies the name of your created YAML file.
<1> Where `file` specifies the name of your created YAML file.
+
.Example output
[source,text]
45 changes: 45 additions & 0 deletions modules/proc_network-observability-working-with-ipsec.adoc
@@ -0,0 +1,45 @@
// Module included in the following assemblies:
//
// network_observability/observing-network-traffic.adoc

:_mod-docs-content-type: PROCEDURE
[id="network-observability-working-with-ipsec_{context}"]
= Working with IPsec

In {product-title}, IPsec is disabled by default. You can enable IPsec by following the instructions in "Configuring IPsec encryption."
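For reference, the following is a hedged sketch of how IPsec encryption is typically enabled for the OVN-Kubernetes network plugin; follow the linked "Configuring IPsec encryption" documentation for the authoritative procedure:

[source,terminal]
----
$ oc patch networks.operator.openshift.io cluster --type=merge \
  -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":{"mode":"Full"}}}}}'
----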

.Prerequisite

* You have enabled IPsec encryption on {product-title}.

.Procedure
. In the web console, navigate to *Operators* -> *Installed Operators*.
. Under the *Provided APIs* heading for the *NetObserv Operator*, select *Flow Collector*.
. Select *cluster*, and then select the *YAML* tab.
. Configure the `FlowCollector` custom resource for IPsec:
+
.Example configuration of `FlowCollector` for IPsec
[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  namespace: netobserv
  agent:
    type: eBPF
    ebpf:
      features:
        - "IPSec"
----

.Verification

When IPsec is enabled:

* A new column named *IPsec Status* is displayed in the network observability *Traffic flows* view to show whether a flow was successfully encrypted with IPsec or whether an error occurred during encryption or decryption.

* A new dashboard that shows the percentage of encrypted traffic is generated.

//* You can measure traffic between nodes, and view the percentage of encrypted traffic.
5 changes: 0 additions & 5 deletions modules/update-vsphere-virtual-hardware-on-template.adoc
@@ -22,11 +22,6 @@ Once converted from a template, do not power on the virtual machine.
====

. Update the virtual machine (VM) in the {vmw-full} client. Complete the steps outlined in link:https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-60768C2F-72E1-42E0-8A17-CA76849F2950.html[Upgrade the Compatibility of a Virtual Machine Manually] ({vmw-full} documentation).
+
[IMPORTANT]
====
If you modified the VM settings, those changes might be reset after you move to a newer virtual hardware version. Verify that all of your configured settings are still in place after the upgrade before proceeding to the next step.
====
. Convert the VM in the {vmw-short} client to a template by right-clicking on the VM and then selecting **Template -> Convert to Template**.
+
[IMPORTANT]