diff --git a/modules/serverless-install-kafka-odc.adoc b/modules/serverless-install-kafka-odc.adoc
index af2e5ac519fd..9d6a775d0434 100644
--- a/modules/serverless-install-kafka-odc.adoc
+++ b/modules/serverless-install-kafka-odc.adoc
@@ -6,8 +6,31 @@
 [id="serverless-install-kafka-odc_{context}"]
 = Installing Knative Kafka

-The {ServerlessOperatorName} provides the Knative Kafka API that can be used to create a `KnativeKafka` custom resource:
+Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with {ServerlessProductName}. Knative Kafka functionality is available in an {ServerlessProductName} installation if you have installed the `KnativeKafka` custom resource.

+.Prerequisites
+
+* You have installed the {ServerlessOperatorName} and Knative Eventing on your cluster.
+* You have access to a Red Hat AMQ Streams cluster.
+* Optional: You have installed the OpenShift (`oc`) CLI if you want to use the verification steps.
+* You have cluster administrator permissions on {product-title}.
+* You are logged in to the {product-title} web console.
+
+.Procedure
+
+. In the *Administrator* perspective, navigate to *Operators* -> *Installed Operators*.
+
+. Check that the *Project* dropdown at the top of the page is set to *Project: knative-eventing*.
+
+. In the list of *Provided APIs* for the {ServerlessOperatorName}, find the *Knative Kafka* box and click *Create Instance*.
+
+. Configure the *KnativeKafka* object in the *Create Knative Kafka* page.
++
+[IMPORTANT]
+====
+To use the Kafka channel, source, broker, or sink on your cluster, you must toggle the *enabled* switch to *true* for the options that you want to use. These switches are set to *false* by default. Additionally, to use the Kafka channel, broker, or sink, you must specify the bootstrap servers.
+====
++
 .Example `KnativeKafka` custom resource
 [source,yaml]
 ----
@@ -38,44 +61,29 @@ spec:
 <5> A comma-separated list of bootstrap servers from your Red Hat AMQ Streams cluster.
 <6> Defines the number of partitions of the Kafka topics, backed by the `Broker` objects. The default is `10`.
 <7> Defines the replication factor of the Kafka topics, backed by the `Broker` objects. The default is `3`.
+<8> Enables developers to use a Kafka sink in the cluster.
+
 [NOTE]
 ====
 The `replicationFactor` value must be less than or equal to the number of nodes of your Red Hat AMQ Streams cluster.
 ====
-<8> Enables developers to use a Kafka sink in the cluster.
-
-.Prerequisites
-
-* You have installed the {ServerlessOperatorName} and Knative Eventing on your cluster.
-* You have access to a Red Hat AMQ Streams cluster.
-* You have cluster administrator permissions on {product-title}.
-* You are logged in to the {product-title} web console.
-* Install the OpenShift CLI (`oc`) if you want to use the verification steps.
-.Procedure
-
-. In the *Administrator* perspective, navigate to *Operators* -> *Installed Operators*.
-. Check that the *Project* dropdown at the top of the page is set to *Project: knative-eventing*.
-. In the list of *Provided APIs* for the {ServerlessOperatorName}, find the *Knative Kafka* box and click *Create Instance*.
-. Configure the *KnativeKafka* object in the *Create Knative Kafka* page.
-+
-[IMPORTANT]
-====
-To use the Kafka channel, source, broker, or sink on your cluster, you must toggle the *enabled* switch for the options you want to use to *true*. These switches are set to *false* by default. Additionally, to use the Kafka channel, broker or sink, you must specify the bootstrap servers.
-====
 .. Using the form is recommended for simpler configurations that do not require full control of *KnativeKafka* object creation.
+
 .. Editing the YAML is recommended for more complex configurations that require full control of *KnativeKafka* object creation. You can access the YAML by clicking the *Edit YAML* link in the top right of the *Create Knative Kafka* page.
+
 . Click *Create* after you have completed any of the optional configurations for Kafka. You are automatically directed to the *Knative Kafka* tab where *knative-kafka* is in the list of resources.

 .Verification

 . Click on the *knative-kafka* resource in the *Knative Kafka* tab. You are automatically directed to the *Knative Kafka Overview* page.
+
 . View the list of *Conditions* for the resource and confirm that they have a status of *True*.
+
 image::knative-kafka-overview.png[Kafka Knative Overview page showing Conditions]
+
 If the conditions have a status of *Unknown* or *False*, wait a few moments to refresh the page.
+
 . Check that the Knative Kafka resources have been created:
+
 [source,terminal]
diff --git a/modules/serverless-kafka-broker-sasl-default-config.adoc b/modules/serverless-kafka-broker-sasl-default-config.adoc
index a0137e42dd75..30470b849def 100644
--- a/modules/serverless-kafka-broker-sasl-default-config.adoc
+++ b/modules/serverless-kafka-broker-sasl-default-config.adoc
@@ -6,7 +6,7 @@
 [id="serverless-kafka-broker-sasl-default-config_{context}"]
 = Configuring SASL authentication for Kafka brokers

-As a cluster administrator, you can set up _Simple Authentication and Security Layer_ (SASL) authentication for Kafka brokers by modifying the `KnativeKafka` custom resource (CR).
+_Simple Authentication and Security Layer_ (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster; otherwise, events cannot be produced or consumed. You can set up SASL for Kafka brokers by modifying the `KnativeKafka` custom resource (CR).

 .Prerequisites

diff --git a/modules/serverless-kafka-broker-tls-default-config.adoc b/modules/serverless-kafka-broker-tls-default-config.adoc
index ebd9a9aeb084..72edbf9382cb 100644
--- a/modules/serverless-kafka-broker-tls-default-config.adoc
+++ b/modules/serverless-kafka-broker-tls-default-config.adoc
@@ -6,7 +6,7 @@
 [id="serverless-kafka-broker-tls-default-config_{context}"]
 = Configuring TLS authentication for Kafka brokers

-As a cluster administrator, you can set up _Transport Layer Security_ (TLS) authentication for Kafka brokers by modifying the `KnativeKafka` custom resource (CR).
+_Transport Layer Security_ (TLS) is used by Apache Kafka clients and servers to encrypt traffic between Knative and Kafka, as well as for authentication. You can set up TLS for Kafka brokers by modifying the `KnativeKafka` custom resource (CR).

 .Prerequisites

@@ -15,7 +15,7 @@ As a cluster administrator, you can set up _Transport Layer Security_ (TLS) auth
 * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
 * You have a Kafka cluster CA certificate stored as a `.pem` file.
 * You have a Kafka cluster client certificate and a key stored as `.pem` files.
-* Install the OpenShift CLI (`oc`).
+* Install the OpenShift (`oc`) CLI.

 .Procedure
diff --git a/modules/serverless-kafka-broker.adoc b/modules/serverless-kafka-broker.adoc
index 9563e9141415..88bd48ee8c97 100644
--- a/modules/serverless-kafka-broker.adoc
+++ b/modules/serverless-kafka-broker.adoc
@@ -7,7 +7,7 @@
 [id="serverless-kafka-broker_{context}"]
 = Creating a Kafka broker by using YAML

-You can create a Kafka broker by using YAML files.
+Creating Knative resources by using YAML files uses a declarative API, which enables you to describe applications declaratively and in a reproducible manner. To create a Kafka broker by using YAML, you must create a YAML file that defines a `Broker` object, then apply it by using the `oc apply` command.

 .Prerequisites

diff --git a/modules/serverless-kafka-sink.adoc b/modules/serverless-kafka-sink.adoc
index a31c94c7c721..3fab69e22a1c 100644
--- a/modules/serverless-kafka-sink.adoc
+++ b/modules/serverless-kafka-sink.adoc
@@ -6,14 +6,14 @@
 [id="serverless-kafka-sink_{context}"]
 = Using a Kafka sink

-You can create an event sink called a Kafka sink, that sends events to a Kafka topic. To do this, you must create a `KafkaSink` object. The following procedure explains how you can create a `KafkaSink` object by using YAML files and the `oc` CLI.
+You can create an event sink called a Kafka sink that sends events to a Kafka topic. Creating Knative resources by using YAML files uses a declarative API, which enables you to describe applications declaratively and in a reproducible manner. To create a Kafka sink by using YAML, you must create a YAML file that defines a `KafkaSink` object, then apply it by using the `oc apply` command.

 .Prerequisites

 * The {ServerlessOperatorName}, Knative Eventing, and the `KnativeKafka` custom resource (CR) are installed on your cluster.
 * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
 * You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import.
-* Install the OpenShift CLI (`oc`).
+* Install the OpenShift (`oc`) CLI.

 .Procedure
diff --git a/modules/serverless-kafka-source-kn.adoc b/modules/serverless-kafka-source-kn.adoc
index 3cda06095a8c..5cc6a8e25635 100644
--- a/modules/serverless-kafka-source-kn.adoc
+++ b/modules/serverless-kafka-source-kn.adoc
@@ -7,14 +7,15 @@
 [id="serverless-kafka-source-kn_{context}"]
 = Creating a Kafka event source by using the Knative CLI

-This section describes how to create a Kafka event source by using the `kn` command.
+You can use the `kn source kafka create` command to create a Kafka source by using the Knative (`kn`) CLI. Using the `kn` CLI to create event sources provides a more streamlined and intuitive user interface than modifying YAML files directly.

 .Prerequisites

 * The {ServerlessOperatorName}, Knative Eventing, Knative Serving, and the `KnativeKafka` custom resource (CR) are installed on your cluster.
 * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
 * You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import.
-* You have installed the `kn` CLI.
+* You have installed the Knative (`kn`) CLI.
+* Optional: You have installed the OpenShift (`oc`) CLI if you want to use the verification steps in this procedure.

 .Procedure
diff --git a/modules/serverless-kafka-source-odc.adoc b/modules/serverless-kafka-source-odc.adoc
index 069a8c827400..30ee027dd4f3 100644
--- a/modules/serverless-kafka-source-odc.adoc
+++ b/modules/serverless-kafka-source-odc.adoc
@@ -6,7 +6,7 @@
 [id="serverless-kafka-source-odc_{context}"]
 = Creating a Kafka event source by using the web console

-You can create and verify a Kafka event source from the {product-title} web console.
+After Knative Kafka is installed on your cluster, you can create a Kafka source by using the web console. Using the {product-title} web console provides a streamlined and intuitive user interface to create a Kafka source.

 .Prerequisites

diff --git a/modules/serverless-kafka-source-yaml.adoc b/modules/serverless-kafka-source-yaml.adoc
index 90a3e789b121..fe8dbb83b74e 100644
--- a/modules/serverless-kafka-source-yaml.adoc
+++ b/modules/serverless-kafka-source-yaml.adoc
@@ -6,7 +6,7 @@
 [id="serverless-kafka-source-yaml_{context}"]
 = Creating a Kafka event source by using YAML

-You can create a Kafka event source by using YAML.
+Creating Knative resources by using YAML files uses a declarative API, which enables you to describe applications declaratively and in a reproducible manner. To create a Kafka source by using YAML, you must create a YAML file that defines a `KafkaSource` object, then apply it by using the `oc apply` command.

 .Prerequisites

diff --git a/serverless/admin_guide/serverless-kafka-admin.adoc b/serverless/admin_guide/serverless-kafka-admin.adoc
index 13ec04970460..f121c1ad69f9 100644
--- a/serverless/admin_guide/serverless-kafka-admin.adoc
+++ b/serverless/admin_guide/serverless-kafka-admin.adoc
@@ -6,6 +6,8 @@ include::_attributes/common-attributes.adoc[]
 toc::[]

+Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with {ServerlessProductName}. Kafka provides options for event source, channel, broker, and event sink capabilities.
+
 In addition to the Knative Eventing components that are provided as part of a core {ServerlessProductName} installation, cluster administrators can install the `KnativeKafka` custom resource (CR).

 [NOTE]
 ====
@@ -22,13 +24,8 @@ The `KnativeKafka` CR provides users with additional options, such as:

 include::modules/serverless-install-kafka-odc.adoc[leveloffset=+1]

-[id="serverless-kafka-admin-default-configs"]
-== Configuring default settings for Kafka components
-
-If you have cluster administrator permissions, you can set default options for Knative Kafka components, either for the whole cluster or for a specific namespace.
-
-include::modules/serverless-kafka-broker-tls-default-config.adoc[leveloffset=+2]
-include::modules/serverless-kafka-broker-sasl-default-config.adoc[leveloffset=+2]
+include::modules/serverless-kafka-broker-tls-default-config.adoc[leveloffset=+1]
+include::modules/serverless-kafka-broker-sasl-default-config.adoc[leveloffset=+1]

 [id="additional-resources_serverless-kafka-admin"]
 [role="_additional-resources"]
diff --git a/serverless/develop/serverless-kafka-developer.adoc b/serverless/develop/serverless-kafka-developer.adoc
index 4b7910d98e03..49dfdd023070 100644
--- a/serverless/develop/serverless-kafka-developer.adoc
+++ b/serverless/develop/serverless-kafka-developer.adoc
@@ -6,6 +6,8 @@ include::_attributes/common-attributes.adoc[]
 toc::[]

+Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with {ServerlessProductName}. Kafka provides options for event source, channel, broker, and event sink capabilities.
+
 Knative Kafka functionality is available in an {ServerlessProductName} installation xref:../../serverless/admin_guide/serverless-kafka-admin.adoc#serverless-install-kafka-odc_serverless-kafka-admin[if a cluster administrator has installed the `KnativeKafka` custom resource].

 [NOTE]
 ====
@@ -27,7 +29,7 @@ See the xref:../../serverless/develop/serverless-event-delivery.adoc#serverless-
 [id="serverless-kafka-developer-source"]
 == Kafka source

-You can create a Kafka source that reads events from an Apache Kafka cluster and passes these events to a sink.
+You can create a Kafka source that reads events from an Apache Kafka cluster and passes these events to a sink. To create a Kafka source, you can use the {product-title} web console, the Knative (`kn`) CLI, or you can create a `KafkaSource` object directly as a YAML file and apply it by using the OpenShift (`oc`) CLI.

 // dev console
 include::modules/serverless-kafka-source-odc.adoc[leveloffset=+2]
@@ -58,11 +60,11 @@ include::modules/serverless-create-kafka-channel-yaml.adoc[leveloffset=+1]
 [id="serverless-kafka-developer-sink"]
 == Kafka sink

+Kafka sinks are a type of xref:../../serverless/develop/serverless-event-sinks.adoc#serverless-event-sinks[event sink] that are available if a cluster administrator has enabled Kafka on your cluster. You can send events directly from an xref:../../serverless/discover/knative-event-sources.adoc#knative-event-sources[event source] to a Kafka topic by using a Kafka sink.
+
 :FeatureName: Kafka sink
 include::snippets/technology-preview.adoc[leveloffset=+2]

-Kafka sinks are a type of xref:../../serverless/develop/serverless-event-sinks.adoc#serverless-event-sinks[event sink] that are available if a cluster administrator has enabled Kafka on your cluster. You can send events directly from an xref:../../serverless/discover/knative-event-sources.adoc#knative-event-sources[event source] to a Kafka topic by using a Kafka sink.
-
 // Kafka sink
 include::modules/serverless-kafka-sink.adoc[leveloffset=+2]
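The modules changed above document several Kafka-related objects whose full examples sit outside the diff hunks. The YAML body of the `KnativeKafka` example in `serverless-install-kafka-odc.adoc`, for instance, is not shown; as an illustrative sketch only, an object consistent with callouts <5> to <8> could look like the following. The API version and field layout are assumptions based on the `KnativeKafka` API rather than text from this change, and all values are placeholders.

.Illustrative `KnativeKafka` object
[source,yaml]
----
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  channel:
    enabled: true                           # set to true to use the Kafka channel
    bootstrapServers: <bootstrap_servers>   # <5> comma-separated list from your AMQ Streams cluster
  source:
    enabled: true                           # set to true to use the Kafka source
  broker:
    enabled: true                           # set to true to use the Kafka broker
    defaultConfig:
      bootstrapServers: <bootstrap_servers>
      numPartitions: 10                     # <6> partitions of the topics that back Broker objects
      replicationFactor: 3                  # <7> must not exceed the number of Kafka nodes
  sink:
    enabled: true                           # <8> set to true to use the Kafka sink
----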
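The TLS and SASL modules above change only their introductions, so the configuration steps themselves are not part of this diff. As a heavily hedged sketch of the general pattern: credentials are typically stored in a secret in the `knative-eventing` namespace and then referenced from the `KnativeKafka` CR. The secret key names and the `authSecretName` field below are assumptions about how the Knative broker for Apache Kafka consumes credentials, not text from these modules.

.Illustrative SASL credentials secret and broker default configuration
[source,terminal]
----
# Key names are assumed; values are placeholders for your Kafka cluster credentials.
$ oc create secret generic <secret_name> -n knative-eventing \
    --from-literal=protocol=SASL_SSL \
    --from-literal=sasl.mechanism=SCRAM-SHA-512 \
    --from-file=ca.crt=caroot.pem \
    --from-literal=user=<username> \
    --from-literal=password=<password>
----

[source,yaml]
----
# Reference the secret as the default authentication for Kafka brokers (field name assumed).
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  broker:
    enabled: true
    defaultConfig:
      authSecretName: <secret_name>
----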
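The `serverless-kafka-broker.adoc` module describes creating a `Broker` object that is backed by Kafka. A minimal sketch of such an object, assuming the usual `Kafka` broker class annotation and the default `kafka-broker-config` config map in the `knative-eventing` namespace:

.Illustrative Kafka-backed `Broker` object
[source,yaml]
----
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: example-kafka-broker
  annotations:
    # Selects the Kafka implementation instead of the default broker class
    eventing.knative.dev/broker.class: Kafka
spec:
  config:
    # Config map that holds the topic settings, such as bootstrap servers,
    # number of partitions, and replication factor
    apiVersion: v1
    kind: ConfigMap
    name: kafka-broker-config
    namespace: knative-eventing
----

As the module's introduction describes, you then apply the file by using the `oc apply` command.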
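The `serverless-kafka-source-kn.adoc` module names the `kn source kafka create` command. A sketch of a typical invocation follows; the flag names come from the `kn` Kafka source plugin, and the source name, servers, topic, consumer group, and sink values are placeholders:

.Illustrative `kn source kafka create` command
[source,terminal]
----
$ kn source kafka create <kafka_source_name> \
    --servers <cluster_name>-kafka-bootstrap.kafka.svc:9092 \
    --topics <topic_name> \
    --consumergroup my-consumer-group \
    --sink <sink_name>
----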
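For the `serverless-kafka-source-yaml.adoc` module, a sketch of a `KafkaSource` object that reads from a topic and delivers to a Knative service; the API version shown is an assumption that can differ between releases, and all values are placeholders:

.Illustrative `KafkaSource` object
[source,yaml]
----
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: <source_name>
spec:
  consumerGroup: <group_name>        # consumer group that the source joins
  bootstrapServers:
    - <bootstrap_server>             # for example, my-cluster-kafka-bootstrap.kafka:9092
  topics:
    - <topic_name>                   # topics to read events from
  sink:                              # where the events are delivered
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: <service_name>
----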
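Finally, for the `serverless-kafka-sink.adoc` module, a sketch of a `KafkaSink` object. Because the Kafka sink is a Technology Preview feature in this version, the alpha API version shown is an assumption, and all values are placeholders:

.Illustrative `KafkaSink` object
[source,yaml]
----
apiVersion: eventing.knative.dev/v1alpha1
kind: KafkaSink
metadata:
  name: <sink_name>
  namespace: <namespace>
spec:
  topic: <topic_name>                # Kafka topic that receives the events
  bootstrapServers:
    - <bootstrap_server>
----

After the sink is created, event sources can deliver events to the Kafka topic by referencing the `KafkaSink` object in their `sink` specification.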