Merge pull request #19474 from newrelic/NR-338923-confluent-cloud-integration

Nr 338923 confluent cloud integration
Showing 3 changed files with 318 additions and 0 deletions.
299 changes: 299 additions & 0 deletions
...nfrastructure/other-infrastructure-integrations/confluent-cloud-integration.mdx

@@ -0,0 +1,299 @@
---
title: Confluent Cloud integration
tags:
  - Integrations
  - Confluent Cloud integrations
  - Apache Kafka
metaDescription: "New Relic's Confluent Cloud integration for Kafka: what data it reports, and how to enable it."
freshnessValidatedDate: never
---

New Relic offers an integration for collecting your [Confluent Cloud managed streaming for Apache Kafka](https://www.confluent.io/confluent-cloud/) data. This document explains how to activate the integration and describes the data it can report.

## Prerequisites

* A New Relic account
* An active Confluent Cloud account
* A Confluent Cloud API key and secret
* `MetricsViewer` access on the Confluent Cloud account

## Activate integration [#activate]

To enable this integration, go to <DNT>**Integrations & Agents**</DNT>, select <DNT>**Confluent Cloud -> API Polling**</DNT>, and follow the instructions.

<Callout variant="important">
  If you have IP filtering set up, add the following IP addresses to your filter:

  * `162.247.240.0/22`
  * `152.38.128.0/19`

  For more information about New Relic IP ranges for cloud integrations, refer to [this document](/docs/new-relic-solutions/get-started/networks/#webhooks).
  For instructions on managing IP filters, refer to [this document](https://docs.confluent.io/cloud/current/security/access-control/ip-filtering/manage-ip-filters.html).
</Callout>
## Configuration and polling [#polling]

Default polling information for the Confluent Cloud Kafka integration:

* New Relic polling interval: 5 minutes
* Confluent Cloud data interval: 1 minute

You can change the polling frequency only during the initial configuration.

## View and use data [#find-data]

To view your integration data, go to <DNT>**[one.newrelic.com > All capabilities](https://one.newrelic.com/all-capabilities) > Infrastructure > AWS**</DNT> and select an integration.

You can [query and explore your data](/docs/using-new-relic/data/understand-data/query-new-relic-data) using the following [event type](/docs/data-apis/understand-data/new-relic-data-types/#metrics-in-service-levels):
<table>
  <thead>
    <tr>
      <th>
        Entity
      </th>

      <th>
        Data type
      </th>

      <th>
        Provider
      </th>
    </tr>
  </thead>

  <tbody>
    <tr>
      <td>
        Cluster
      </td>

      <td>
        `Metric`
      </td>

      <td>
        `Confluent`
      </td>
    </tr>
  </tbody>
</table>

For more on how to use your data, see [Understand and use integration data](/docs/infrastructure/integrations/find-use-infrastructure-integration-data).
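As a quick check that data is arriving, you can list the metric names reporting to your account with a NRQL query. This is a sketch: it filters on the `Confluent` provider value shown in the table above, and the `provider` attribute name is an assumption — verify the exact attribute names in the data explorer for your account.

```sql
FROM Metric SELECT uniques(metricName) WHERE provider = 'Confluent' SINCE 1 day ago
```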
## Metric data [#metrics]

This integration collects Confluent Cloud Kafka data for cluster, partition, and topic entities.
<table>
  <thead>
    <tr>
      <th style={{ width: "275px" }}>
        Metric
      </th>

      <th style={{ width: "150px" }}>
        Unit
      </th>

      <th>
        Description
      </th>
    </tr>
  </thead>

  <tbody>
    <tr>
      <td>
        `cluster_load_percent`
      </td>

      <td>
        Percent
      </td>

      <td>
        A measure of the utilization of the cluster. The value is between 0.0 and 1.0. Only dedicated tier clusters have this metric data.
      </td>
    </tr>

    <tr>
      <td>
        `hot_partition_ingress`
      </td>

      <td>
        Percent
      </td>

      <td>
        An indicator of the presence of a hot partition caused by ingress throughput. The value is 1.0 when a hot partition is detected, and empty when no hot partition is detected.
      </td>
    </tr>

    <tr>
      <td>
        `hot_partition_egress`
      </td>

      <td>
        Percent
      </td>

      <td>
        An indicator of the presence of a hot partition caused by egress throughput. The value is 1.0 when a hot partition is detected, and empty when no hot partition is detected.
      </td>
    </tr>

    <tr>
      <td>
        `request_bytes`
      </td>

      <td>
        Bytes
      </td>

      <td>
        The delta count of total request bytes from the specified request types sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
      </td>
    </tr>

    <tr>
      <td>
        `response_bytes`
      </td>

      <td>
        Bytes
      </td>

      <td>
        The delta count of total response bytes from the specified response types sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
      </td>
    </tr>

    <tr>
      <td>
        `received_bytes`
      </td>

      <td>
        Bytes
      </td>

      <td>
        The delta count of bytes of the customer's data received from the network. Each sample is the number of bytes received since the previous data point. The count is sampled every 60 seconds.
      </td>
    </tr>

    <tr>
      <td>
        `sent_bytes`
      </td>

      <td>
        Bytes
      </td>

      <td>
        The delta count of bytes of the customer's data sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
      </td>
    </tr>

    <tr>
      <td>
        `received_records`
      </td>

      <td>
        Count
      </td>

      <td>
        The delta count of records received. Each sample is the number of records received since the previous data point. The count is sampled every 60 seconds.
      </td>
    </tr>

    <tr>
      <td>
        `sent_records`
      </td>

      <td>
        Count
      </td>

      <td>
        The delta count of records sent. Each sample is the number of records sent since the previous data point. The count is sampled every 60 seconds.
      </td>
    </tr>

    <tr>
      <td>
        `partition_count`
      </td>

      <td>
        Count
      </td>

      <td>
        The number of partitions.
      </td>
    </tr>

    <tr>
      <td>
        `consumer_lag_offsets`
      </td>

      <td>
        Count
      </td>

      <td>
        The lag between a group member's committed offset and the partition's high watermark, measured in offsets.
      </td>
    </tr>

    <tr>
      <td>
        `successful_authentication_count`
      </td>

      <td>
        Count
      </td>

      <td>
        The delta count of successful authentications. Each sample is the number of successful authentications since the previous data point. The count is sampled every 60 seconds.
      </td>
    </tr>

    <tr>
      <td>
        `active_connection_count`
      </td>

      <td>
        Count
      </td>

      <td>
        The count of active authenticated connections.
      </td>
    </tr>
  </tbody>
</table>
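These metrics can be charted with NRQL for dashboards and alerts. A sketch, assuming the metric names arrive as listed in the table above — your account may store them with a namespace prefix, so confirm the exact names in the data explorer first:

```sql
FROM Metric SELECT sum(received_bytes), sum(sent_bytes) WHERE provider = 'Confluent' TIMESERIES 5 minutes SINCE 3 hours ago
```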
17 changes: 17 additions & 0 deletions
src/content/whats-new/2024/12/whats-new-03-12-confluent-cloud-integration.md

@@ -0,0 +1,17 @@
---
title: 'Monitor your Kafka Metrics with API Polling for Confluent Cloud Integration'
summary: 'Seamlessly integrate your Kafka Confluent Cloud metrics into New Relic. You can now monitor your clusters and topics with ease via API polling.'
releaseDate: '2024-12-03'
learnMoreLink: 'https://docs.newrelic.com/docs/infrastructure/other-infrastructure-integrations/confluent-cloud-integration'
---

We're excited to announce that you can now monitor the health of your Confluent Cloud Kafka clusters through the new Confluent Cloud integration via API polling.

Confluent Cloud lets developers focus on building applications, microservices, and data pipelines rather than on managing the underlying infrastructure. With this integration, you can now monitor your clusters, topics, and brokers with ease.

New Relic lets you create dashboards using this data. In February, we will release a public preview of a new feature called Message Queues and Streaming, which provides out-of-the-box reports. This feature facilitates a seamless flow from clusters to topics and allows a deep dive into any APM service utilizing Confluent Cloud for Kafka. Until then, all data is available for use, and custom dashboards can be created and saved. For more information on creating custom dashboards, see the documentation.