prometheus-exporter-monitoring
Updated Oct 4, 2024 - Python
A library for creating, deleting, and managing connectors in Kafka Connect. It is being improved incrementally with practical usability in mind.
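A library like the one described above typically wraps the standard Kafka Connect REST API. The sketch below shows the two core calls such a wrapper would make: `PUT /connectors/{name}/config` to create or update a connector and `DELETE /connectors/{name}` to remove one. The base URL, connector name, and config values are illustrative placeholders, not taken from the repository itself.

```python
# Minimal sketch of managing connectors via the Kafka Connect REST API.
# CONNECT_URL is an assumption (the default Connect REST port); the
# connector name and config below are placeholders.
import json
import urllib.request

CONNECT_URL = "http://localhost:8083"  # assumption: default Connect REST port


def build_upsert_request(name: str, config: dict) -> urllib.request.Request:
    """Build a PUT /connectors/{name}/config request.

    This endpoint creates the connector if it does not exist,
    or updates its configuration if it does.
    """
    return urllib.request.Request(
        f"{CONNECT_URL}/connectors/{name}/config",
        data=json.dumps(config).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )


def build_delete_request(name: str) -> urllib.request.Request:
    """Build a DELETE /connectors/{name} request, which removes
    the connector and shuts down its tasks."""
    return urllib.request.Request(f"{CONNECT_URL}/connectors/{name}", method="DELETE")


def upsert_connector(name: str, config: dict) -> None:
    """Create or update a connector (performs the HTTP call)."""
    urllib.request.urlopen(build_upsert_request(name, config))


def delete_connector(name: str) -> None:
    """Delete a connector (performs the HTTP call)."""
    urllib.request.urlopen(build_delete_request(name))
```

Using `PUT .../config` rather than `POST /connectors` makes the call idempotent, which is convenient when the same deployment script runs repeatedly.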
Replicate data from MySQL, Postgres and MongoDB to ClickHouse
Deploy Kafka pipelines to Kubernetes
Explore Apache Kafka data pipelines in Kubernetes.
A data streaming project from a Udacity course that utilizes the Kafka ecosystem stacks and Faust to produce, transform, consume, and display data to a web page in real-time.
Stream CDC into an Amazon S3 data lake in Apache Iceberg format with AWS Glue Streaming using Amazon MSK and MSK Connect (Debezium)
Data Pipeline for CDC data from MySQL DB to Amazon S3 through Amazon MSK Serverless using Amazon MSK Connect (Debezium).
Streaming event pipeline built around Apache Kafka and its ecosystem, simulating real-time data streaming.
A little study project about Kafka and its ecosystem.
Example pipeline to stream the data changes from RDBMS to Apache Iceberg tables
A streaming data pipeline that uses Kafka as its backbone and Flink for data processing and transformations. Kafka Connect writes the streams to S3-compatible blob stores and to Redis (a low-latency KV store for real-time ML inference). Spark handles the batch job that backfills the ML feature data.
Django with Kafka, Debezium, and Faust for Email Sending using Change Data Capture
Guardian for your Kafka Connect connectors. It checks the status of connectors and tasks and restarts them if they have failed.
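The guardian pattern above maps directly onto three documented Kafka Connect REST endpoints: `GET /connectors/{name}/status` to inspect state, `POST /connectors/{name}/restart` for a failed connector, and `POST /connectors/{name}/tasks/{id}/restart` for failed tasks. The sketch below separates the pure decision logic from the HTTP calls; the base URL is an assumption (the default Connect port), and the loop is deliberately minimal.

```python
# Sketch of a connector "guardian" check using the Kafka Connect REST API.
# CONNECT_URL is a placeholder for the real Connect worker address.
import json
import urllib.request

CONNECT_URL = "http://localhost:8083"  # assumption: default Connect REST port


def failed_task_ids(status: dict) -> list:
    """Return the ids of tasks reported in the FAILED state."""
    return [t["id"] for t in status.get("tasks", []) if t.get("state") == "FAILED"]


def connector_failed(status: dict) -> bool:
    """True if the connector itself (not just a task) has failed."""
    return status.get("connector", {}).get("state") == "FAILED"


def guard(name: str) -> None:
    """Check one connector's status and restart whatever has failed."""
    with urllib.request.urlopen(f"{CONNECT_URL}/connectors/{name}/status") as resp:
        status = json.load(resp)
    if connector_failed(status):
        urllib.request.urlopen(
            urllib.request.Request(f"{CONNECT_URL}/connectors/{name}/restart", method="POST")
        )
    for task_id in failed_task_ids(status):
        urllib.request.urlopen(
            urllib.request.Request(
                f"{CONNECT_URL}/connectors/{name}/tasks/{task_id}/restart", method="POST"
            )
        )
```

In practice a guardian would run `guard()` for every connector on a schedule and add backoff, since restarting a task whose failure cause persists (for example, a bad record) will just fail again.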
Repository dedicated to studies of apache-kafka.