@@ -0,0 +1,81 @@
// Module included in the following assemblies:

// * networking/network_observability/configuring-operators.adoc

:_mod-docs-content-type: CONCEPT
[id="network-observability-filter-network-flows-at-ingestion_{context}"]
= Filter network flows at ingestion

You can create filters to reduce the number of generated network flows. Filtering network flows can reduce the resource usage of the Network Observability components.

You can configure two kinds of filters:

* eBPF agent filters
* Flowlogs-pipeline filters

[id="ebpf-agent-filters_{context}"]
== eBPF agent filters

eBPF agent filters maximize performance because they take effect at the earliest stage of the network flow collection process.

To configure eBPF agent filters with the Network Observability Operator, see "Filtering eBPF flow data using multiple rules".

[id="flowlogs-pipeline-filters_{context}"]
== Flowlogs-pipeline filters

Flowlogs-pipeline filters provide greater control over traffic selection because they take effect later in the network flow collection process. They are primarily used to reduce the volume of stored data.

Flowlogs-pipeline filters use a simple query language to filter network flows, as shown in the following example:

[source,terminal]
----
(srcnamespace="netobserv" OR (srcnamespace="ingress" AND dstnamespace="netobserv")) AND srckind!="service"
----

The query language uses the following syntax:

.Query language syntax
[cols="1,3", options="header"]
|===
| Category
| Operators

| Logical boolean operators (not case-sensitive)
| `and`, `or`

| Comparison operators
| `=` (equals), +
`!=` (not equals), +
`=~` (matches regexp), +
`!~` (not matches regexp), +
`<` (less than), +
`\<=` (less than or equal to), +
`>` (greater than), +
`>=` (greater than or equal to)

| Unary operations
| `with(field)` (field is present), +
`without(field)` (field is absent)

| Parenthesis-based priority
| `(`, `)`
|===
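To make the operator semantics concrete, the following Python sketch evaluates a single clause of this query language against a flow record. The `match_clause` helper is hypothetical, written only to illustrate the documented operator behavior; it is not part of flowlogs-pipeline, and it handles one clause at a time rather than full boolean expressions with parentheses.

```python
import re

def match_clause(flow, field, op, value):
    """Illustrate one filter clause against a flow record (dict of fields).

    Hypothetical helper: `with`/`without` test field presence; the
    comparison operators follow the syntax table above.
    """
    if op == "with":
        return field in flow
    if op == "without":
        return field not in flow
    if field not in flow:
        # A missing field cannot satisfy a comparison.
        return False
    v = str(flow[field])
    if op in ("<", "<=", ">", ">="):
        # Numeric comparisons, e.g. on byte or packet counters.
        a, b = float(v), float(value)
        return {"<": a < b, "<=": a <= b, ">": a > b, ">=": a >= b}[op]
    return {
        "=":  v == value,
        "!=": v != value,
        "=~": re.search(value, v) is not None,   # matches regexp
        "!~": re.search(value, v) is None,       # does not match regexp
    }[op]

flow = {"srcnamespace": "netobserv", "srckind": "pod"}
match_clause(flow, "srcnamespace", "=", "netobserv")  # True
match_clause(flow, "srckind", "!=", "service")        # True
```

For example, the sample query above would keep this flow: `srcnamespace="netobserv"` holds and `srckind!="service"` holds.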

You can configure flowlogs-pipeline filters in the `spec.processor.filters` section of the `FlowCollector` resource. For example:

.Example YAML Flowlogs-pipeline filter
[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  namespace: netobserv
  processor:
    filters:
      - query: |
          (SrcK8S_Namespace="netobserv" OR (SrcK8S_Namespace="openshift-ingress" AND DstK8S_Namespace="netobserv"))
        outputTarget: Loki <1>
        sampling: 10 <2>
----
<1> Sends matching flows to a specific output, such as Loki, Prometheus, or an external system. If omitted, matching flows are sent to all configured outputs.
<2> Optional. Applies a sampling ratio to limit the number of matching flows to be stored or exported. For example, `sampling: 10` means 1/10 of the flows are kept.
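The sampling ratio in callout 2 can be pictured with a short, hypothetical Python sketch. It keeps every Nth flow for determinism; the real pipeline's selection strategy may differ, but the kept fraction is the same 1/N described above.

```python
def sample_flows(flows, ratio):
    """Keep 1 out of every `ratio` flows, as with `sampling: 10`.

    Deterministic every-Nth selection, for illustration only.
    """
    return flows[::ratio]

kept = sample_flows(list(range(100)), 10)
len(kept)  # 10 of 100 flows are kept
```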
13 changes: 13 additions & 0 deletions modules/network-observability-con_user-defined-networks.adoc
@@ -0,0 +1,13 @@
// Module included in the following assemblies:
//
// * network_observability/observing-network-traffic.adoc

:_mod-docs-content-type: CONCEPT
[id="network-observability-user-defined-networks_{context}"]
= User-defined networks

User-defined networks (UDNs) improve the flexibility and segmentation capabilities of the default Layer 3 topology for a Kubernetes pod network by enabling custom Layer 2 and Layer 3 network segments, where all these segments are isolated by default. These segments act as primary or secondary networks for container pods and virtual machines that use the default OVN-Kubernetes CNI plugin.

UDNs enable a wide range of network architectures and topologies, enhancing network flexibility, security, and performance.

When the `UDNMapping` feature is enabled with Network Observability, the *Traffic* flow table has a *UDN labels* column. You can filter on *Source Network Name* and *Destination Network Name*.
8 changes: 5 additions & 3 deletions modules/network-observability-ebpf-rule-flow-filter.adoc
@@ -5,13 +5,15 @@
:_mod-docs-content-type: CONCEPT
[id="network-observability-ebpf-flow-rule-filter_{context}"]
= eBPF flow rule filter
You can use rule-based filtering to control the volume of packets cached in the eBPF flow table. For example, a filter can specify that only packets coming from port 100 should be captured. Then only the packets that match the filter are captured and the rest are dropped.

You can apply multiple filter rules.

[id="ingress-and-egress-traffic-filtering_{context}"]
== Ingress and egress traffic filtering
Classless Inter-Domain Routing (CIDR) notation efficiently represents IP address ranges by combining the base IP address with a prefix length. For both ingress and egress traffic, the source IP address is first used to match filter rules configured with CIDR notation. If there is a match, then the filtering proceeds. If there is no match, then the destination IP is used to match filter rules configured with CIDR notation.
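The matching order described above (source IP first, destination IP as a fallback) can be sketched in Python with the standard `ipaddress` module. The `matches_rule` helper is hypothetical, not the agent's actual code:

```python
import ipaddress

def matches_rule(src_ip, dst_ip, rule_cidr):
    """Illustrate the documented matching order for one CIDR rule:
    the source IP is tried first; only if it does not match is the
    destination IP tried."""
    net = ipaddress.ip_network(rule_cidr)
    if ipaddress.ip_address(src_ip) in net:
        return True
    return ipaddress.ip_address(dst_ip) in net

matches_rule("10.128.0.5", "172.30.0.9", "172.30.0.0/16")  # True, via the destination IP
```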

After matching either the source IP or the destination IP CIDR, you can pinpoint specific endpoints using the `peerIP` to differentiate the destination IP address of the packet. Based on the provisioned action, the flow data is either cached in the eBPF flow table or not cached.

[id="dashboard-and-metrics-integrations_{context}"]
== Dashboard and metrics integrations
86 changes: 50 additions & 36 deletions modules/network-observability-filtering-ebpf-rule.adoc
@@ -1,22 +1,28 @@
// Module included in the following assemblies:
//
// network_observability/observing-network-traffic.adoc

:_mod-docs-content-type: PROCEDURE
[id="network-observability-filtering-ebpf-rule_{context}"]
= Filtering eBPF flow data using a global rule
= Filtering eBPF flow data using multiple rules
You can configure the `FlowCollector` custom resource to filter eBPF flows using multiple rules to control the flow of packets cached in the eBPF flow table.

[IMPORTANT]
====
* You cannot use duplicate Classless Inter-Domain Routing (CIDR) ranges in filter rules.
* When an IP address matches multiple filter rules, the rule with the most specific CIDR prefix (longest prefix) takes precedence.
====
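The longest-prefix precedence stated in the note above can be sketched with a small, hypothetical Python helper using the standard `ipaddress` module; `pick_rule` is illustrative only, not the eBPF agent's implementation:

```python
import ipaddress

def pick_rule(ip, rules):
    """When an IP address matches several filter rules, return the rule
    with the most specific CIDR prefix (longest prefix length)."""
    addr = ipaddress.ip_address(ip)
    matching = [r for r in rules if addr in ipaddress.ip_network(r["cidr"])]
    if not matching:
        return None
    return max(matching, key=lambda r: ipaddress.ip_network(r["cidr"]).prefixlen)

rules = [
    {"cidr": "0.0.0.0/0", "action": "Accept"},
    {"cidr": "10.128.0.0/14", "action": "Reject"},
]
pick_rule("10.128.0.5", rules)  # the /14 rule wins over the catch-all /0
```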

.Procedure
. In the web console, navigate to *Operators* -> *Installed Operators*.
. Under the *Provided APIs* heading for *Network Observability*, select *Flow Collector*.
. Select *cluster*, then select the *YAML* tab.
. Configure the `FlowCollector` custom resource, similar to the following sample configurations:
+


.Example YAML to sample all North-South traffic and 1:50 East-West traffic

By default, all other flows are rejected.

[source, yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  agent:
    type: eBPF
    ebpf:
      flowFilter:
        enable: true <1>
        rules:
          - action: Accept <2>
            cidr: 0.0.0.0/0 <3>
            sampling: 1 <4>
          - action: Accept
            cidr: 10.128.0.0/14
            peerCIDR: 10.128.0.0/14 <5>
          - action: Accept
            cidr: 172.30.0.0/16
            peerCIDR: 10.128.0.0/14
            sampling: 50
----
<1> To enable eBPF flow filtering, set `spec.agent.ebpf.flowFilter.enable` to `true`.
<2> To define the action for the flow filter rule, set the required `action` parameter. Valid values are `Accept` or `Reject`.
<3> To define the IP address and CIDR mask for the flow filter rule, set the required `cidr` parameter. This parameter supports both IPv4 and IPv6 address formats. To match any IP address, use `0.0.0.0/0` for IPv4 or `::/0` for IPv6.
<4> To define the sampling rate for matched flows and override the global sampling setting `spec.agent.ebpf.sampling`, set the `sampling` parameter.
<5> To filter flows by Peer IP CIDR, set the `peerCIDR` parameter.

.Example YAML to filter flows with packet drops

By default, all other flows are rejected.

[source, yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  deploymentModel: Direct
  agent:
    type: eBPF
    ebpf:
      privileged: true <1>
      features:
        - PacketDrop <2>
      flowFilter:
        enable: true <3>
        rules:
          - action: Accept <4>
            cidr: 172.30.0.0/16
            pktDrops: true <5>
----
<1> To enable packet drops, set `spec.agent.ebpf.privileged` to `true`.
<2> To report packet drops for each network flow, add the `PacketDrop` value to the `spec.agent.ebpf.features` list.
<3> To enable eBPF flow filtering, set `spec.agent.ebpf.flowFilter.enable` to `true`.
<4> To define the action for the flow filter rule, set the required `action` parameter. Valid values are `Accept` or `Reject`.
<5> To filter flows containing drops, set `pktDrops` to `true`.
@@ -444,7 +444,6 @@ To filter two ports, use a "port1,port2" in string format. For example, `ports:
| `rules` defines a list of filtering rules on the eBPF Agents.
When filtering is enabled, by default, flows that don't match any rule are rejected.
To change the default, you can define a rule that accepts everything: `{ action: "Accept", cidr: "0.0.0.0/0" }`, and then refine with rejecting rules.

| `sampling`
| `integer`
@@ -470,7 +469,6 @@ Description::
`rules` defines a list of filtering rules on the eBPF Agents.
When filtering is enabled, by default, flows that don't match any rule are rejected.
To change the default, you can define a rule that accepts everything: `{ action: "Accept", cidr: "0.0.0.0/0" }`, and then refine with rejecting rules.
--

Type::
@@ -483,7 +481,7 @@ Type::
Description::
+
--
`EBPFFlowFilterRules` defines the desired eBPF agent configuration regarding flow filtering rules.
--

Type::
@@ -1480,15 +1478,15 @@ Type::

| `input`
| `string`
|

| `multiplier`
| `integer`
|

| `output`
| `string`
|

|===
== .spec.exporters[].openTelemetry.logs
10 changes: 5 additions & 5 deletions modules/network-observability-multitenancy.adoc
@@ -5,7 +5,7 @@
:_mod-docs-content-type: PROCEDURE
[id="network-observability-multi-tenancy_{context}"]
= Enabling multi-tenancy in Network Observability
Multi-tenancy in the Network Observability Operator allows and restricts individual user or group access to the flows stored in Loki, Prometheus, or both. Access is enabled for project administrators. Project administrators who have limited access to some namespaces can access flows for only those namespaces.

For Developers, multi-tenancy is available for both Loki and Prometheus but requires different access rights.

@@ -15,23 +15,23 @@

.Procedure

* For per-tenant access, you must have the `netobserv-loki-reader` cluster role and the `netobserv-metrics-reader` namespace role to use the developer perspective. Run the following commands for this level of access:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user netobserv-loki-reader <user_group_or_name>
----
+
[source,terminal]
----
$ oc adm policy add-role-to-user netobserv-metrics-reader <user_group_or_name> -n <namespace>
----

* For cluster-wide access, non-cluster-administrators must have the `netobserv-loki-reader`, `cluster-monitoring-view`, and `netobserv-metrics-reader` cluster roles. In this scenario, you can use either the admin perspective or the developer perspective. Run the following commands for this level of access:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user netobserv-loki-reader <user_group_or_name>
----
+
[source,terminal]