Commit

Adding initial code to repo
Josh Rickard authored and committed Jun 25, 2020
1 parent 5109f3e commit bafc8f7
Showing 14 changed files with 512 additions and 2 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -3,6 +3,8 @@ __pycache__/
*.py[cod]
*$py.class

*.DS_Store

# C extensions
*.so

92 changes: 92 additions & 0 deletions CERTIFICATES.md
@@ -0,0 +1,92 @@
# Manually Generating Certificates

We first need to get a Certificate Authority (CA) certificate from the elasticsearch container.

Run the [docker-compose.setup.yml](docker-compose.setup.yml) with the following:

```bash
docker-compose -f docker-compose.setup.yml up -d
```

## Get CA Certificate

Once those containers are running, we need to exec into the keystore container:

```bash
docker-compose exec keystore bash
```

Once in the container, we invoke the built-in executable in the `bin` directory to generate our CA certificate:

```bash
bin/elasticsearch-certutil ca
```
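
By default, `elasticsearch-certutil ca` prompts you interactively for an output filename and password. If you would rather script this step, the tool also accepts flags; a sketch, assuming the default filename and an empty password for demo purposes:

```bash
# Non-interactive variant: write the CA to a known path with an empty password
# (an empty password is for demo environments only)
bin/elasticsearch-certutil ca --out elastic-stack-ca.p12 --pass ""
```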

## Getting Certificates

> Please note that I am creating one certificate for all other services (e.g. kibana, logstash); depending on your setup, you should probably create one for each service.

Let's use our recently generated CA certificate to generate a certificate. You should still be inside the same container we used to generate the CA certificate:

```bash
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
```
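
If you do want one certificate per service, as the note above suggests, `certutil` can name each instance and attach hostnames. A sketch, where the `--name` and `--dns` values are placeholders for your own service names:

```bash
# One certificate per service, all signed by the CA generated above
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --name kibana --dns kibana,localhost --out kibana.p12
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --name logstash --dns logstash,localhost --out logstash.p12
```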

While we are still in this container, let's set passwords for all user accounts.

## Set Passwords for all users

> It's probably best to use the same password for all users, but only in a demo environment.

```bash
bin/elasticsearch-setup-passwords interactive
```
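
If you don't want to type a password for every built-in account, the tool can generate random ones instead; they are printed to stdout, so capture them somewhere safe:

```bash
# Auto-generate random passwords for all built-in users without prompting
bin/elasticsearch-setup-passwords auto --batch
```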

We are still inside the container.

## Get PEM for Kibana

> This is actually output as a `.crt` and `.key` inside a zip file.

Run the following command to generate a PEM file for Kibana:

```bash
bin/elasticsearch-certutil cert --pem --ca elastic-stack-ca.p12
```

## Copying Files to local system

Now that we have generated the necessary files, exit the container by typing `exit`. Then, from the same folder as your docker-compose.setup.yml, run the following:

```bash
docker cp {CONTAINER_ID}:/usr/share/elasticsearch/elastic-certificates.p12 secrets/elastic-certificates.p12
docker cp {CONTAINER_ID}:/usr/share/elasticsearch/elastic-stack-ca.p12 secrets/elastic-stack-ca.p12
docker cp {CONTAINER_ID}:/usr/share/elasticsearch/certificate-bundle.zip secrets/certificate-bundle.zip

# Finally let's unzip the contents of the certificate-bundle.zip and put them in the secrets folder
unzip secrets/certificate-bundle.zip -d ./secrets
```
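
If you are unsure what to use for `{CONTAINER_ID}`, docker-compose can resolve it for you; a sketch, assuming the service is named `keystore` as in the exec step above:

```bash
# Resolve the container ID of the keystore service, then reuse it for each copy
CONTAINER_ID=$(docker-compose -f docker-compose.setup.yml ps -q keystore)
docker cp "${CONTAINER_ID}:/usr/share/elasticsearch/elastic-certificates.p12" secrets/elastic-certificates.p12
```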

## Get logstash PEM

Now that we have these files, let's generate the actual `.pem` file needed by logstash. You do this using openssl:

```bash
openssl pkcs12 -in secrets/elastic-certificates.p12 -out secrets/logstash.pem -clcerts -nokeys
```
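
You can sanity-check the resulting PEM before handing it to logstash:

```bash
# Confirm the PEM holds a readable certificate and check its validity window
openssl x509 -in secrets/logstash.pem -noout -subject -dates
```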

## Finish

That's it! Well, it was a pain in the butt and took a while to figure out, but for you, that's it. :)

You should have the following files in your [secrets](secrets) directory:

* elastic-certificates.p12
* elastic-stack-ca.p12
* instance.crt
* instance.key
* logstash.pem

When you are done, you will also have one additional file:

* elasticsearch.keystore
160 changes: 158 additions & 2 deletions README.md
@@ -1,2 +1,158 @@
# elk-tls-docker
This repository contains code to create an ELK stack with certificates and security enabled, using docker-compose.

# Setting up a TLS ELK Stack

This repository will set up an ELK stack whose components communicate over TLS (certificates). This allows you to use features like built-in detections and the SIEM features in Kibana.

We will create three ELK 7.8.0 containers:

* Elasticsearch
* Logstash
* Kibana

## Setup

To use this, first include only the following in the [.env](.env) file:

```
ELK_VERSION=7.8.0
ELASTIC_USERNAME=elastic
ELASTIC_PASSWORD={some_password}
```
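
If you need a value for `{some_password}`, any generator will do; for example:

```bash
# Generate a random password to paste into .env
openssl rand -base64 24
```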

## Certificates

Before we build or create our containers, we first need to get some certificates. You can do this using the [docker-compose.setup.yml](docker-compose.setup.yml) yaml file. Additionally, if you run into issues, you can see the associated documentation [here](CERTIFICATES.md).

```bash
docker-compose -f docker-compose.setup.yml run --rm certs
```
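
When the run completes, it's worth confirming the expected files landed in the secrets directory; the list should match the one at the end of [CERTIFICATES.md](CERTIFICATES.md):

```bash
# Verify the generated certificates and keys
ls -l secrets/
```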

Once this completes, you should have all the necessary certificates/keys. Next, we need to set passwords.

## Running ELK

The first thing you will do is set passwords for all accounts within ELK.

Let's run our ELK stack now:

```bash
docker-compose up -d
```

## Set Passwords

You will need to set passwords for all accounts. In a testing environment, I recommend creating a single password and using it across all accounts; it makes troubleshooting easier.

We need to access the elasticsearch container and generate our passwords:

```bash
docker-compose exec elasticsearch bash
> bin/elasticsearch-setup-passwords interactive
# Set passwords for all accounts when prompted
```

## Finish

Now that you have your keys/certs and passwords set, restart the containers by running:

```bash
docker-compose up -d
```

You should be able to log in to the ELK stack and be on your way.
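
A quick way to confirm elasticsearch is answering over TLS (`-k` because the certificate is self-signed; substitute your own host and password):

```bash
# Check cluster health over HTTPS with the elastic superuser
curl -k -u elastic:{some_password} "https://localhost:9200/_cluster/health?pretty"
```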

## Enabling features

This section talks about enabling features within Kibana and the Security stack.

### Creating a Default SIEM Space

In order to access signals and install pre-packaged rules within elasticsearch & kibana, we first need to create a default space. You can do this in the UI, but below is some Python code to help you create it:

```python
import requests
from requests.auth import HTTPBasicAuth

_HOST = 'https://0.0.0.0:5601'
_USERNAME = 'elastic'
_PASSWORD = 'some_password'

headers = {
    'kbn-xsrf': 'Swimlane',
    'Content-Type': 'application/json'
}

ENDPOINT = '/api/spaces/space'

body = {
    'id': '2',  # space ids must be strings
    'name': '.siem-signals-default',
    'description': 'Default SIEM signals space'
}

# Creating a space is a POST against the Kibana Spaces API
response = requests.post(
    _HOST + ENDPOINT,
    headers=headers,
    auth=HTTPBasicAuth(_USERNAME, _PASSWORD),
    json=body,  # send the body as JSON to match the Content-Type header
    verify=False)  # self-signed certificate in this demo setup
print(response.json())
```
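
To confirm the space was created, you can list the spaces back out of the same API; a sketch, with host and credentials as above:

```bash
# List all Kibana spaces; the new space should appear in the response
curl -k -u elastic:{some_password} https://localhost:5601/api/spaces/space
```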

### Loading pre-packaged rules

To access or load elastic's pre-packaged detection rules (signals), run the following after creating the default space above.

```python
import requests
from requests.auth import HTTPBasicAuth

_HOST = 'https://0.0.0.0:5601'
_USERNAME = 'elastic'
_PASSWORD = 'some_password'

headers = {
    'kbn-xsrf': 'Swimlane',
    'Content-Type': 'application/json'
}

# PUT - Add pre-built rules to Kibana SIEM
ENDPOINT = '/api/detection_engine/rules/prepackaged'

response = requests.put(
    _HOST + ENDPOINT,
    headers=headers,
    auth=HTTPBasicAuth(_USERNAME, _PASSWORD),
    verify=False)  # self-signed certificate in this demo setup
print(response.json())
```

## Adding Data to Kibana

You can also add fake data to Kibana using Swimlane's `soc-faker`. Install it using pip:

```bash
pip install soc-faker
```

Next, you can add fake Windows event log data to elastic by running the following:

```python
from elasticsearch import Elasticsearch
from socfaker import SocFaker

soc_faker = SocFaker()

_HOST = 'https://0.0.0.0:9200'
_USERNAME = 'elastic'
_PASSWORD = 'some_password'

# verify_certs=False because the demo stack uses a self-signed certificate
es = Elasticsearch([_HOST], http_auth=(_USERNAME, _PASSWORD), verify_certs=False)

count = 1
for doc in soc_faker.products.elastic.document(count=1000):
    # adding documents to the winlogbeat- index
    es.index(index='winlogbeat-', id=count, body=doc)
    count += 1
```
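
Once the loop finishes, you can verify the documents landed (again, `-k` because of the self-signed certificate):

```bash
# Count the documents indexed into winlogbeat-
curl -k -u elastic:{some_password} "https://localhost:9200/winlogbeat-/_count?pretty"
```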
15 changes: 15 additions & 0 deletions docker-compose.setup.yml
@@ -0,0 +1,15 @@
version: '3.5'

services:
  certs:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: ${ELK_VERSION}
    command: bash /setup/setup.sh
    user: "0"
    volumes:
      - ./secrets:/secrets
      - ./setup/:/setup/
    environment:
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
97 changes: 97 additions & 0 deletions docker-compose.yml
@@ -0,0 +1,97 @@
version: '3.5'


# will contain all elasticsearch data.
volumes:
  elasticsearch-data:

secrets:
  elasticsearch.keystore:
    file: ./secrets/elasticsearch.keystore
  elastic.certificates:
    file: ./secrets/elastic-certificates.p12
  kibana.certificate:
    file: ./secrets/instance.crt
  kibana.key:
    file: ./secrets/instance.key
  logstash.pem:
    file: ./secrets/logstash.pem

services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: ${ELK_VERSION}
    restart: unless-stopped
    environment:
      ELASTIC_USERNAME: ${ELASTIC_USERNAME}
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
      ELASTIC_CLUSTER_NAME: ${ELASTIC_CLUSTER_NAME}
      ELASTIC_NODE_NAME: ${ELASTIC_NODE_NAME}
      ELASTIC_INIT_MASTER_NODE: ${ELASTIC_INIT_MASTER_NODE}
      ELASTIC_DISCOVERY_SEEDS: ${ELASTIC_DISCOVERY_SEEDS}
      ES_JAVA_OPTS: -Xmx${ELASTICSEARCH_HEAP} -Xms${ELASTICSEARCH_HEAP}
      bootstrap.memory_lock: "true"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    secrets:
      - source: elasticsearch.keystore
        target: /usr/share/elasticsearch/config/elasticsearch.keystore
      - source: elastic.certificates
        target: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    ports:
      - "9200:9200"
      - "9300:9300"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 200000
        hard: 200000

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    restart: unless-stopped
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro
    environment:
      ELASTIC_USERNAME: ${ELASTIC_USERNAME}
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
      ELASTICSEARCH_URL: "https://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"
      ELASTICSEARCH_HOST_PORT: ${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}
      LS_JAVA_OPTS: "-Xmx${LOGSTASH_HEAP} -Xms${LOGSTASH_HEAP}"
    secrets:
      - source: logstash.pem
        target: /etc/logstash/logstash.pem
      - source: kibana.certificate
        target: /etc/logstash/instance.crt
      - source: kibana.key
        target: /etc/logstash/instance.key

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    restart: unless-stopped
    volumes:
      - ./kibana/config/:/usr/share/kibana/config
    environment:
      ELASTIC_USERNAME: ${ELASTIC_USERNAME}
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
      ELASTICSEARCH_HOST_PORT: ${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}
      ELASTICSEARCH_URL: "https://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"
    secrets:
      - source: kibana.certificate
        target: /etc/kibana/instance.crt
      - source: kibana.key
        target: /etc/kibana/instance.key
    ports:
      - "5601:5601"
4 changes: 4 additions & 0 deletions elasticsearch/Dockerfile
@@ -0,0 +1,4 @@
ARG ELK_VERSION

# https://github.com/elastic/elasticsearch-docker
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
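
When built through docker-compose, the `ELK_VERSION` build arg is filled in from `.env`; building the image by hand means passing it explicitly. A sketch (the image tag here is arbitrary):

```bash
# Build the elasticsearch image directly, supplying the version build arg
docker build --build-arg ELK_VERSION=7.8.0 -t elk-tls-docker/elasticsearch elasticsearch/
```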
