DigitalOcean in Action!


Forming DigitalOcean droplets through terraform.

Install terraform and ansible on your favorite laptop/desktop and enjoy having DO in action!

Note that, since DO Cloud Firewalls support has only recently been merged into master, you should either grab the latest DO provider release, 0.1.0, or git clone it yourself, if you're adventurous.

Here is part of the Ansible playbook I wrote for arch-on-air, which git clones and installs the latest terraform and ansible. As both are under very active development, use it at your own risk, with a hacker's mind. :)

Setup

Sign in or sign up to DigitalOcean through cloud.digitalocean.com and grab a DO APIv2 token. Then, set the API token for terraform by defining the TF_VAR_DO_API_TOKEN environment variable, as below:

air$ export TF_VAR_DO_API_TOKEN=$(cat ~/.do/token.pem)

Also, set up the SSH key in the settings section of cloud.digitalocean.com and give its fingerprint to terraform through the TF_VAR_DO_FINGERPRINT environment variable, so that you can ssh into those droplets:

air$ export TF_VAR_DO_FINGERPRINT=$(ssh-keygen -E md5 -lf ~/.ssh/id_rsa.pub|awk '{print $2}'|sed 's/MD5://')
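
For reference, here is a minimal sketch of how main.tf could consume those two variables. The droplet attributes shown are illustrative only, so check the actual main.tf for the real wiring:

# Minimal sketch -- the real main.tf in this repository may differ.
variable "DO_API_TOKEN" {}    # populated by TF_VAR_DO_API_TOKEN
variable "DO_FINGERPRINT" {}  # populated by TF_VAR_DO_FINGERPRINT

provider "digitalocean" {
  token = "${var.DO_API_TOKEN}"
}

resource "digitalocean_droplet" "server" {
  image              = "ubuntu-16-04-x64"
  name               = "server0"
  region             = "nyc3"
  size               = "512mb"
  ipv6               = true
  private_networking = true
  ssh_keys           = ["${var.DO_FINGERPRINT}"]
}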

That's it! Now, let's roll!

Run

As mentioned before, all the droplets are formed through terraform, as you can see in the main.tf file. To make the operation straightforward, however, I've written a simple Makefile.

Plan

terraform can dry-run the planned actions through terraform plan. There is a one-to-one Makefile target, called plan:

air$ make plan

Deploy

The Makefile deploy target is, as you can guess, just a wrapper around terraform apply:

air$ make deploy
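
Both plan and deploy are presumably thin wrappers along these lines; a minimal sketch, since the actual Makefile may also handle state files and variable checks:

plan:
        terraform plan

deploy:
        terraform apply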

This will form the droplets in the cloud. Through the use of terraform output variables, you can check the IP reachability to one of the droplets, as below:

IPv4:

air$ ping -4 -c3 $(terraform output server0_public_ipv4)
PING 104.131.96.15 (104.131.96.15) 56(84) bytes of data.
64 bytes from 104.131.96.15: icmp_seq=1 ttl=56 time=81.9 ms
64 bytes from 104.131.96.15: icmp_seq=2 ttl=56 time=81.2 ms
64 bytes from 104.131.96.15: icmp_seq=3 ttl=56 time=78.3 ms

--- 104.131.96.15 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 78.335/80.504/81.943/1.578 ms
air$

IPv6:

air$ ping -6 -c3 $(terraform output server0_public_ipv6)
PING 2604:a880:0800:0010:0000:0000:319e:1001(2604:a880:800:10::319e:1001) 56 data bytes
64 bytes from 2604:a880:800:10::319e:1001: icmp_seq=1 ttl=49 time=81.4 ms
64 bytes from 2604:a880:800:10::319e:1001: icmp_seq=2 ttl=49 time=81.4 ms
64 bytes from 2604:a880:800:10::319e:1001: icmp_seq=3 ttl=49 time=79.5 ms

--- 2604:a880:0800:0010:0000:0000:319e:1001 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 79.521/80.806/81.467/0.908 ms
air$

and to the floating IP:

air$ ping -c3 $(cd flips && terraform output server_flip)
PING 45.55.97.179 (45.55.97.179) 56(84) bytes of data.
64 bytes from 45.55.97.179: icmp_seq=1 ttl=56 time=99.2 ms
64 bytes from 45.55.97.179: icmp_seq=2 ttl=56 time=97.6 ms
64 bytes from 45.55.97.179: icmp_seq=3 ttl=56 time=98.5 ms

--- 45.55.97.179 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 97.690/98.508/99.278/0.649 ms
air$
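
For reference, the flips directory presumably declares the floating IP and exposes it as an output roughly like this; a sketch under that assumption, not the repository's exact code:

resource "digitalocean_floating_ip" "server" {
  droplet_id = "${var.DO_SERVER_ID}"  # hypothetical input, wired from the main config
  region     = "${var.DO_REGION}"
}

output "server_flip" {
  value = "${digitalocean_floating_ip.server.ip_address}"
}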

And, as we're running a simple HTTP server through the droplet user data, you can check the HTTP reachability to the server, as below:

air$ curl http://$(terraform output server0_public_ipv4)
<h1>Hello world from server0</h1>
air$

and through the floating IP:

air$ curl http://$(terraform output server_flip)
<h1>Hello world from server0</h1>
air$
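
That Hello world page comes from the user data passed at droplet creation. The real work is in scripts/server_user_data.sh; purely as an illustration, a script serving such a page could be as small as this (a hypothetical sketch, not the repository's actual script):

#!/bin/sh
# Hypothetical sketch, not the actual scripts/server_user_data.sh.
mkdir -p /var/www
echo "<h1>Hello world from $(hostname)</h1>" > /var/www/index.html
cd /var/www
nohup python3 -m http.server 80 >/dev/null 2>&1 &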

Test

Of course, you need tests to check those droplets. I use ansible to drive all those tests, dynamically retrieving the variables, IP addresses, etc., through inventory.py. All you need to do is call make test, which deploys the droplets, if they're not there already, and runs all the ansible test playbooks:

air$ make test | tail -4

PLAY RECAP *********************************************************************
45.55.207.205              : ok=32   changed=0    unreachable=0    failed=0

air$
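
Ansible treats any executable that prints JSON for --list as a dynamic inventory, and that is all inventory.py does: it pulls the droplet addresses out of terraform and prints them in that shape. The group and variable names below are illustrative only:

air$ ./inventory.py --list
{
    "servers": {
        "hosts": ["45.55.207.205"],
        "vars": {"ansible_user": "root"}
    }
}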

Test matrix

Here is the test matrix covered by make test-all:

| Firewall type | Non firewall | Inbound deny-all | Inbound allow-all | Outbound deny-all | Outbound allow-all | In/Out-bound deny-all | In/Out-bound allow-all |
|---|---|---|---|---|---|---|---|
| ICMP | PASS | PASS | PASS | PASS | PASS | PASS | PASS |
| ICMP port unreach | PASS | PASS | PASS | PASS | PASS | PASS | PASS |
| HTTP | PASS | PASS | PASS | PASS | PASS | PASS | PASS |
| TCP | PASS | PASS | PASS | PASS | PASS | PASS | PASS |
| UDP | PASS | PASS | PASS | PASS | PASS | PASS | PASS |
| FTP | PASS | PASS | PASS | PASS | PASS | PASS | PASS |

Note that each of those Ansible test playbooks covers four different IP reachabilities: public IPv4, private IPv4, public IPv6, and floating IPv4.
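
Each firewall column in the matrix corresponds to a Cloud Firewall applied to the droplets through the DO provider. As an illustration of the shape such a resource takes (the name and rule below are assumptions, not the repository's exact configuration):

resource "digitalocean_firewall" "inbound_allow_all" {
  name        = "inbound-allow-all"
  droplet_ids = ["${digitalocean_droplet.server.id}"]

  inbound_rule {
    protocol         = "tcp"
    port_range       = "1-65535"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }
}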

You can also run those tests individually through the standard ansible command. For example, the following command runs the HTTP test against the non-firewall setup:

air$ ansible-playbook tests/http.yml

Test against the entire DO

Here is a simple shell script to run the tests against every DO region!

air$ for region in ams2 ams3 blr1 fra1 lon1 nyc1 nyc2 nyc3 sfo1 sfo2 sgp1 tor1
do
        # Cleanup the environment before the test.
        if ! make clean-all test-all TF_VAR_DO_REGION=$region
        then
                echo 'oh, no!'
                break
        fi
done

Import

Terraform has a feature called import, which brings existing infrastructure, e.g. droplets, under its management through the terraform import command. In this section, I'll demonstrate how to use the import subcommand to import existing droplets and apply firewall rules to them through terraform.

Please note that the imported droplets and the ones described in main.tf should share the same attributes, e.g. droplet images, or terraform will destroy the droplet you imported and re-create a new one, as it treats the mismatch as a resource conflict.

Client

Let's import an existing droplet in the client role. For the pre-creation, I use doctl, but you can use any cloud orchestration tool of your choice.

air$ doctl compute droplet create client0 --image ubuntu-16-04-x64 --region nyc3 \
--size 512mb --enable-ipv6 --enable-private-networking \
--ssh-keys $TF_VAR_DO_FINGERPRINT --user-data-file ./scripts/client_user_data.sh

I'm using ./scripts/client_user_data.sh as the user data, which sets up python and nmap.

Once it's up and running, you can import the droplet with the terraform import command, as below. The only argument you need is the droplet ID, and there are multiple ways to get it: in the doctl case, it shows up in the console output, or through doctl compute droplet list.

air$ terraform import digitalocean_droplet.client 12345

Once it's successfully imported, all you have to do is type make to let terraform do the rest, with one caveat.

As terraform checks the remote infrastructure against your local intention to make sure the two are in sync, it compares all the attributes to make its decision. But there is one field that forces terraform to re-create droplets every time: the user data. This is because DigitalOcean's APIv2 doesn't expose the user data, which makes terraform always destroy and re-create the droplet to realize your intention in the cloud. Since this is not what I want, I've defined an environment variable called TF_VAR_DO_CLIENT_USER_DATA and pass it nil, as below, to fool terraform.

air$ TF_VAR_DO_CLIENT_USER_DATA= terraform apply
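
In main.tf this presumably boils down to the droplet's user_data being fed from a terraform variable, so that an empty TF_VAR_DO_CLIENT_USER_DATA produces an empty diff; a sketch under that assumption:

variable "DO_CLIENT_USER_DATA" {
  # Overridden by the TF_VAR_DO_CLIENT_USER_DATA environment variable.
  default = ""
}

resource "digitalocean_droplet" "client" {
  # ...other attributes elided...
  user_data = "${var.DO_CLIENT_USER_DATA}"
}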

To run the full test suite, do the same with make test-all, as below:

air$ TF_VAR_DO_CLIENT_USER_DATA= make test-all

Server

Using an existing droplet in the server role is almost identical.

First, create one with doctl:

air$ doctl compute droplet create server0 --image ubuntu-16-04-x64 --region nyc3 \
--size 512mb --enable-ipv6 --enable-private-networking \
--ssh-keys $TF_VAR_DO_FINGERPRINT --user-data-file ./scripts/server_user_data.sh

then, import it as the server:

air$ terraform import digitalocean_droplet.server 54321

now, run it with TF_VAR_DO_SERVER_USER_DATA=:

air$ TF_VAR_DO_SERVER_USER_DATA= terraform apply

To run the full test suite, do the same with make test-all, as below:

air$ TF_VAR_DO_SERVER_USER_DATA= make test-all

Benchmark

You can run the Cloud Bandwidth Performance Monitoring based benchmark against the server-side droplet. Currently, it executes both the upload and the download bandwidth benchmarks with iperf3. Each single benchmark lasts 10 seconds, and the suite runs for an hour at 40-second intervals to avoid putting excessive load on the cloud.

You can execute it for each IP address type, public IPv4, private IPv4, and IPv6, with a simple make target.

Here is the public IPv4 benchmark:

air$ make bench-ipv4

and the private IPv4 equivalent:

air$ make bench-ipv4-private

Those make targets kick off the ansible playbooks, ipv4.yml and ipv4_private.yml respectively, which first spin up the data visualization containers on the client droplet. This takes a while, as it fetches the docker images and runs them fresh. Once that ansible task is complete, you can point your browser at the Grafana dashboard, for example:

air$ chromium $(terraform output monitor_public_ipv4):8000

The playbook keeps the iperf3 server running on the server droplet and runs the iperf3-based bandwidth polling container for an hour. You can monitor the trend through the Grafana dashboard, as mentioned above.
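
Under the hood, those two targets presumably reduce to something like the following; a sketch, since the playbook locations are assumptions:

bench-ipv4:
        ansible-playbook -i inventory.py ipv4.yml

bench-ipv4-private:
        ansible-playbook -i inventory.py ipv4_private.yml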

Cleanup

Of course, we can destroy all those instances through make clean, which just calls terraform destroy in addition to cleaning up the terraform state files:

air$ make clean
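
A minimal sketch of what that target amounts to; note that -force skips the interactive confirmation in the terraform releases of this era, and the actual Makefile may do more:

clean:
        terraform destroy -force
        rm -f terraform.tfstate terraform.tfstate.backup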

References

Many thanks to Yevgeniy (Jim) Brikman and Mitchell Anicas for their terraform insights! And to Brent Salisbury for the cloud monitoring demonstrated in his Cloud Bandwidth Performance Monitoring GitHub repository.

ASCII casts

I've recorded an asciinema cast of the droplet creation part. I'll upload it to the official site soon, but in the meantime, you can watch it with:

$ git clone https://github.com/keinohguchi/do-in-action
$ cd do-in-action
$ asciinema play ./casts/do-in-action.json

Happy Hacking!
