Kubernetes Logging with Fluent Bit

⚠️ This repository is no longer maintained. Please use the charts from the Fluent Bit Helm Chart project. If you need any further assistance, reach out to the community on the available channels.

Overview

Fluent Bit is a lightweight and extensible Log and Metrics Processor that comes with full support for Kubernetes:

  • Read Kubernetes/Docker log files from the file system or through systemd Journal
  • Enrich logs with Kubernetes metadata
  • Deliver logs to third-party services such as Elasticsearch, Splunk, Datadog, InfluxDB, HTTP, etc.

This repository contains a set of YAML files to deploy Fluent Bit, covering the namespace, the RBAC setup (Service Account, Role, and Role Binding), the ConfigMap, and the DaemonSet itself.

Getting started

Fluent Bit must be deployed as a DaemonSet so that it is available on every node of your Kubernetes cluster. To get started, run the following commands to create the namespace, Service Account, and role setup:

For Kubernetes v1.21 and below

$ kubectl create namespace logging
$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-service-account.yaml
$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role.yaml
$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role-binding.yaml

For Kubernetes v1.22 and above

$ kubectl create namespace logging
$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-service-account.yaml
$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role-1.22.yaml
$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role-binding-1.22.yaml

If you are deploying Fluent Bit on OpenShift, you additionally need to run:

$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-openshift-security-context-constraints.yaml
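
To verify the setup, you can list the objects that were just created. The names below assume the defaults used by these manifests (a Service Account named fluent-bit and a ClusterRole/ClusterRoleBinding named fluent-bit-read); adjust them if your objects are named differently:

$ kubectl get serviceaccount fluent-bit --namespace=logging
$ kubectl get clusterrole fluent-bit-read
$ kubectl get clusterrolebinding fluent-bit-read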

Fluent Bit to Elasticsearch

The next step is to create a ConfigMap that will be used by our Fluent Bit DaemonSet:

$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-configmap.yaml

If the cluster uses a CRI runtime, such as containerd or CRI-O, change the Parser set in input-kubernetes.conf from docker to cri.
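
For reference, here is a minimal sketch of the input-kubernetes.conf section with the CRI parser selected; the surrounding options mirror the defaults shipped in the ConfigMap, so treat this as illustrative rather than an exact copy:

    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*.log
        # Use "docker" here on Docker-based nodes, "cri" on containerd/CRI-O
        Parser            cri
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10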

Deploy the Fluent Bit DaemonSet ready to be used with Elasticsearch on a standard Kubernetes cluster:

$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-ds.yaml
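
Once the DaemonSet is created, you can check that a Fluent Bit pod is running on each node. The commands below assume the DaemonSet is named fluent-bit and lives in the logging namespace, as in this repository's manifest:

$ kubectl get daemonset fluent-bit --namespace=logging
$ kubectl get pods --namespace=logging
$ kubectl logs --namespace=logging <fluent-bit-pod-name>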

Fluent Bit to Elasticsearch on Minikube

If you are using Minikube for testing purposes, use the following alternative DaemonSet manifest:

$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-ds-minikube.yaml

Fluent Bit to Kafka

Create a ConfigMap that will be used by our Fluent Bit DaemonSet:

$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/kafka/fluent-bit-configmap.yaml
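
The output section of the Kafka ConfigMap follows the shape sketched below. The broker address and topic shown here are placeholders, so set them to match your Kafka cluster before deploying; this is a sketch rather than a copy of the repository file:

    [OUTPUT]
        Name        kafka
        Match       *
        # Placeholder broker and topic: replace with your own values
        Brokers     my-kafka-bootstrap.kafka.svc:9092
        Topics      fluent-bit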

Deploy the Fluent Bit DaemonSet ready to be used with Kafka on a standard Kubernetes cluster:

$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/kafka/fluent-bit-ds.yaml

Details

The default configuration of Fluent Bit ensures the following:

  • Consume all container logs from the running node.
  • The Tail input plugin will not buffer more than 5MB in the engine until the data is flushed to the Elasticsearch backend. This limit provides a safeguard for backpressure scenarios.
  • The Kubernetes filter will enrich the logs with Kubernetes metadata, specifically labels and annotations. The filter only queries the API Server when the required metadata is not already in its cache (see the configuration sketch after this list).
  • The default backend in the configuration is Elasticsearch, set by the Elasticsearch output plugin. It uses the Logstash format to ingest the logs. If you need a different Index or Type, refer to the plugin options and adjust accordingly.
  • The Retry_Limit option is set to False, which means that if Fluent Bit cannot flush the records to Elasticsearch, it will retry indefinitely until it succeeds.
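
As a reference for the points above, here is a sketch of the filter and output sections in the Elasticsearch ConfigMap. It is trimmed to the options discussed here rather than being an exact copy, and the ${FLUENT_ELASTICSEARCH_HOST} and ${FLUENT_ELASTICSEARCH_PORT} variables are expected to be supplied by the DaemonSet's environment:

    [FILTER]
        # Enrich records with Kubernetes metadata; the API Server is only
        # queried when the metadata is not already in the local cache
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc:443
        Merge_Log           On

    [OUTPUT]
        # Elasticsearch backend using the Logstash index format;
        # Retry_Limit False retries failed flushes indefinitely
        Name            es
        Match           *
        Host            ${FLUENT_ELASTICSEARCH_HOST}
        Port            ${FLUENT_ELASTICSEARCH_PORT}
        Logstash_Format On
        Retry_Limit     False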

Get in touch with us!

Your contribution to testing is highly appreciated. We aim to make logging cheaper for everybody, so your feedback is fundamental. Please get in touch with the community on the available channels.
