Add ELK flow collector automation to vagrant setup (#2094)
srikartati committed Apr 21, 2021
1 parent 9bb7179 commit 422d92d
Showing 4 changed files with 80 additions and 29 deletions.
46 changes: 31 additions & 15 deletions docs/network-flow-visibility.md
@@ -236,7 +236,8 @@ kubectl apply -f https://raw.githubusercontent.com/vmware-tanzu/antrea/main/buil

The following configuration parameters have to be provided through the Flow Aggregator
ConfigMap. `externalFlowCollectorAddr` is a mandatory parameter. We provide an example
-value for this parameter in the following snippet.
+value for this parameter in the following snippet. If you have deployed the ELK
+flow collector, then please use the address `<Logstash Cluster IP>:4739:UDP`.

```yaml
flow-aggregator.conf: |
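  # Illustrative example (assumed values, since the full snippet is folded in
  # this diff): externalFlowCollectorAddr is mandatory; with the ELK flow
  # collector deployed, use the Logstash Service Cluster IP.
  externalFlowCollectorAddr: "<Logstash Cluster IP>:4739:UDP"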
```

@@ -331,15 +332,24 @@ different Nodes can be preserved.
## Quick deployment

If you would like to quickly try the Network Flow Visibility feature, you can deploy
-Antrea and the Flow Aggregator Service with the required configuration on a
-[vagrant setup](../test/e2e/README.md). You can use the following command:
+Antrea, the Flow Aggregator Service and the ELK Flow Collector on the
+[Vagrant setup](../test/e2e/README.md). However, the ELK Flow Collector deployment
+requires the Vagrant Nodes to have more memory than the default, so we have to
+provision the Nodes with the `--large` option. You can use the following commands:

```shell
-./infra/vagrant/push_antrea.sh -fc <externalFlowCollectorAddr>
+./infra/vagrant/provision.sh --large
+./infra/vagrant/push_antrea.sh --flow-collector ELK
```
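
If you chose the ELK option, confirm that the collector Pods came up before expecting
records in Kibana; a minimal check, assuming the default `ssh-config` generated under
`test/e2e/infra/vagrant`:

```shell
# The ELK Pods can take a few minutes (2-4 mins.) to become Ready.
ssh -F ssh-config k8s-node-control-plane kubectl get pods -n elk-flow-collector
```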

-For example, the address of ELK Flow Collector can be provided as `externalFlowCollectorAddr`
-after successfully following the steps given in [here](#deployment-steps).
+Alternatively, given any external IPFIX flow collector, you can deploy Antrea and
+the Flow Aggregator Service on a default Vagrant setup by running the following
+commands:

```shell
./infra/vagrant/provision.sh
./infra/vagrant/push_antrea.sh --flow-collector <externalFlowCollectorAddress>
```

## ELK Flow Collector

@@ -369,10 +379,15 @@ exploration.

### Deployment Steps

-First step is to fetch the necessary resources from the Antrea repository. You can
-either clone the entire repo or download the particular folder using the subversion (svn)
-utility. If the deployed version of Antrea has a release `<TAG>` (e.g. `v0.10.0`),
-then you can use the following command:
+If you are looking for steps to deploy the ELK Flow Collector along with a new Antrea
+cluster and the Flow Aggregator Service, then please refer to the
+[quick deployment](#quick-deployment) section.
+
+The following steps deploy the ELK Flow Collector on an existing Kubernetes
+cluster that uses Antrea as the CNI. The first step is to fetch the necessary
+resources from the Antrea repository. You can either clone the entire repo or
+download the particular folder using the subversion (svn) utility. If the deployed
+version of Antrea has a release `<TAG>` (e.g. `v0.10.0`), then you can use the
+following command:

```shell
git clone --depth 1 --branch <TAG> https://github.com/vmware-tanzu/antrea.git && cd antrea/build/yamls/
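# Alternatively (a sketch, assuming GitHub's SVN bridge path layout for tags),
# fetch only the yamls folder:
#   svn export https://github.com/vmware-tanzu/antrea/tags/<TAG>/build/yamls/ && cd yamls/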
```

@@ -398,12 +413,13 @@

```shell
kubectl create configmap logstash-configmap -n elk-flow-collector --from-file=./elk-flow-collector/logstash/
kubectl apply -f ./elk-flow-collector/elk-flow-collector.yml -n elk-flow-collector
```

-Kibana dashboard is exposed as a Nodeport Service, which can be accessed via
-`http://[NodeIP]: 30007`
+Please refer to the [Flow Aggregator Configuration](#configuration-1) to configure
+the external flow collector address as the Logstash Service Cluster IP.

-`elk-flow-collector/kibana.ndjson` is an auto-generated reusable file containing
-pre-built objects for visualizing Pod-to-Pod, Pod-to-Service and Node-to-Node
-flow records. To import the dashboards into Kibana, go to
+The Kibana dashboard is exposed as a NodePort Service, which can be accessed via
+`http://[NodeIP]:30007`. `elk-flow-collector/kibana.ndjson` is an auto-generated
+reusable file containing pre-built objects for visualizing Pod-to-Pod, Pod-to-Service
+and Node-to-Node flow records. To import the dashboards into Kibana, go to
**Management -> Saved Objects** and import `elk-flow-collector/kibana.ndjson`.
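
If you prefer the command line over the UI, Kibana's saved-objects import API can
achieve the same; a sketch, assuming a Kibana 7.x endpoint at the NodePort above:

```shell
# Import the pre-built dashboards; replace <NodeIP> with a real Node IP.
curl -X POST "http://<NodeIP>:30007/api/saved_objects/_import" \
  -H "kbn-xsrf: true" \
  --form file=@elk-flow-collector/kibana.ndjson
```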

### Pre-built Dashboards
7 changes: 6 additions & 1 deletion test/e2e/infra/vagrant/Vagrantfile
@@ -23,11 +23,16 @@ K8S_SERVICE_NETWORK_CIDR = (MODE == "v4") ? K8S_SERVICE_NETWORK_V4_CIDR : K8S_SE
NODE_NETWORK_V4_PREFIX = "192.168.77."
NODE_NETWORK_V6_PREFIX = "fd3b:fcf5:3e92:d732::"

MEMORY = 2048
if ENV['K8S_NODE_LARGE'] == "true"
  MEMORY = 4096
end

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/focal64"

  config.vm.provider "virtualbox" do |v|
-    v.memory = 2048
+    v.memory = MEMORY
    # 2 CPUS required to initialize K8s cluster with "kubeadm init"
    v.cpus = 2
  end
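
For reference, the Vagrantfile reads `K8S_NODE_LARGE` from the environment, so the
larger VMs can also be requested without going through `provision.sh`; a sketch,
assuming Vagrant is run from `test/e2e/infra/vagrant`:

```shell
# provision.sh --large normally exports this variable for you.
cd test/e2e/infra/vagrant
K8S_NODE_LARGE=true vagrant up
```
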
12 changes: 11 additions & 1 deletion test/e2e/infra/vagrant/provision.sh
@@ -1,10 +1,15 @@
#!/usr/bin/env bash

function usage() {
-    echo "Usage: provision.sh [--ip-family <v4|v6>] [-h|--help]"
+    echo "Usage: provision.sh [--ip-family <v4|v6>] [-l|--large] [-h|--help]
+Provisions the Vagrant VMs.
+  --ip-family <v4|v6>    Deploy IPv4 or IPv6 Kubernetes cluster.
+  --large                Deploy large vagrant VMs with 2 vCPUs and 4096MB memory.
+                         By default, we deploy VMs with 2 vCPUs and 2048MB memory."
}

K8S_IP_FAMILY="v4"
K8S_NODE_LARGE=false
while [[ $# -gt 0 ]]
do
key="$1"
@@ -14,6 +19,10 @@ case $key in
K8S_IP_FAMILY="$2"
shift 2
;;
-l|--large)
K8S_NODE_LARGE=true
shift 1
;;
-h|--help)
usage
exit 0
@@ -28,6 +37,7 @@ THIS_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"

pushd $THIS_DIR

export K8S_NODE_LARGE
export K8S_IP_FAMILY

# A few important considerations for IPv6 clusters:
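
Combining the flags above, a large IPv6 cluster would be provisioned as follows:

```shell
./infra/vagrant/provision.sh --ip-family v6 --large
```
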
44 changes: 32 additions & 12 deletions test/e2e/infra/vagrant/push_antrea.sh
@@ -3,12 +3,16 @@
function usage() {
echo "Usage: push_antrea.sh [--prometheus] [-fc|--flow-collector <Address>] [-fa|--flow-aggregator] [-h|--help]
Push the latest Antrea image to all vagrant nodes and restart the Antrea daemons
--prometheus Deploy Prometheus service to scrape metrics from Antrea Agents and Controllers
--flow-collector <Address> Provide the IPFIX flow collector address to collect the flows from the Flow Aggregator service
It should be given in the format IP:port:proto. Example: 192.168.1.100:4739:udp
Please note that with this option we deploy the Flow Aggregator Service along with Antrea.
--flow-aggregator Deploy Flow Aggregator along with Antrea.
It is automatically deployed if --flow-collector is used."
--prometheus Deploy Prometheus service to scrape metrics
from Antrea Agents and Controllers.
--flow-collector <Addr|ELK> Provide either the external IPFIX collector
address or specify 'ELK' to deploy the ELK
flow collector. The address should be given
in the format IP:port:proto. Example: 192.168.1.100:4739:udp.
Please note that with this option we deploy
the Flow Aggregator Service.
--flow-aggregator Upload Flow Aggregator image and manifests
onto the Vagrant nodes to run Flow Aggregator e2e tests."
}

# Process execution flags
@@ -151,17 +155,33 @@ if [ "$FLOW_AGGREGATOR" == "true" ]; then
if [[ $FLOW_COLLECTOR != "" ]]; then
echo "Generating manifest with all features enabled along with FlowExporter feature"
$THIS_DIR/../../../../hack/generate-manifest.sh --mode dev --all-features > "${ANTREA_YML}"

$THIS_DIR/../../../../hack/generate-manifest-flow-aggregator.sh --mode dev -fc $FLOW_COLLECTOR > "${FLOW_AGG_YML}"
if [[ $FLOW_COLLECTOR == "ELK" ]]; then
echo "Deploy ELK flow collector"
echo "Copying ELK flow collector folder"
scp -F ssh-config -r $THIS_DIR/../../../../build/yamls/elk-flow-collector k8s-node-control-plane:~/
echo "Done copying"
# ELK flow collector needs a few minutes (2-4 mins.) to finish its deployment,
# so the Flow Aggregator service will not send any records till then.
ssh -F ssh-config k8s-node-control-plane kubectl create namespace elk-flow-collector
ssh -F ssh-config k8s-node-control-plane kubectl create configmap logstash-configmap -n elk-flow-collector --from-file=./elk-flow-collector/logstash/
ssh -F ssh-config k8s-node-control-plane kubectl apply -f elk-flow-collector/elk-flow-collector.yml -n elk-flow-collector
LOGSTASH_CLUSTER_IP=$(ssh -F ssh-config k8s-node-control-plane kubectl get -n elk-flow-collector svc logstash -o=jsonpath='{.spec.clusterIP}')
ELK_ADDR="${LOGSTASH_CLUSTER_IP}:4739:udp"

$THIS_DIR/../../../../hack/generate-manifest-flow-aggregator.sh --mode dev -fc $ELK_ADDR > "${FLOW_AGG_YML}"
else
$THIS_DIR/../../../../hack/generate-manifest-flow-aggregator.sh --mode dev -fc $FLOW_COLLECTOR > "${FLOW_AGG_YML}"
fi
else
$THIS_DIR/../../../../hack/generate-manifest-flow-aggregator.sh --mode dev > "${FLOW_AGG_YML}"
fi

    copyManifestToNodes "$FLOW_AGG_YML"

echo "Restarting Flow Aggregator deployment"
ssh -F ssh-config k8s-node-control-plane kubectl -n flow-aggregator delete pod --all
ssh -F ssh-config k8s-node-control-plane kubectl apply -f flow-aggregator.yml
if [[ $FLOW_COLLECTOR != "" ]]; then
echo "Restarting Flow Aggregator deployment"
ssh -F ssh-config k8s-node-control-plane kubectl -n flow-aggregator delete pod --all
ssh -F ssh-config k8s-node-control-plane kubectl apply -f flow-aggregator.yml
fi

rm "${FLOW_AGG_YML}"
fi
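
Outside the Vagrant setup, the same wiring the script performs can be reproduced by
hand on any cluster; a sketch reusing the commands above, assuming kubectl access
and a checkout of the Antrea repo:

```shell
# Point the Flow Aggregator at the deployed Logstash Service and apply it.
LOGSTASH_CLUSTER_IP=$(kubectl get -n elk-flow-collector svc logstash -o=jsonpath='{.spec.clusterIP}')
./hack/generate-manifest-flow-aggregator.sh --mode dev -fc "${LOGSTASH_CLUSTER_IP}:4739:udp" > flow-aggregator.yml
kubectl apply -f flow-aggregator.yml
```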
