
Commit 4c26851

added run and play for root installations and setup. renamed mount play to emphasize format step
1 parent 988b12f commit 4c26851

5 files changed: +257 −12 lines changed

playbooks/play03--mount-vol-to-ec2.yaml → playbooks/play03--format-mount-volume.yaml (renamed, +3 −3)

@@ -1,5 +1,5 @@
 ---
-# play03--mount-vol-to-ec2.yaml
+# play03--format-mount-volume.yaml

 # The EC2 instance and the attached volume must already exist and the EC2 must be a valid
 # managed node which has the Ansible public key authorized. This playbook to mount the
@@ -9,12 +9,12 @@
 # must have already authorized the local public ssh key on that new host, then we can run this
 # playbook, using the standard ssh connection method to automate against it. THE MANUAL
 # STEPS WHICH WORK FOR MOUNTING THE NEW PERSISTENT VOLUME ARE DETAILED IN THE FILE:
-# /playbooks/todo--play03--mount-vol-to-ec2.txt
+# /playbooks/todo--play03--format-mount-volume.txt


 - name: Play03 - AWS CLI - Format and Mount the attached persistent volume of a new EC2 instance
   # vars:
-  #   format_mount_vol_host: fresh-new-inst-AA
+  #   format_mount_vol_host: new-instance-a
   hosts: "{{ format_mount_vol_host }}"
   tasks:
     ####

playbooks/play04--ec2-root-installations-and-setup.yaml (+8 −7)

@@ -5,14 +5,15 @@
 # In my opinion, using sudo like this is more straightforward, predictable and makes for a cleaner playbook.
 # Using Ansible 'become' involves unnecessary complexity and clutters the playbook unnecessarily, IMHO.

+# TODO: Break the single Block into 2 maybe.

 - name: Play04 - New EC2 Instance - Root Installations and Setup
-  vars:
-    hosts_new_ec2_instance_for_installs: TODO--ABSTRACT-THE-VARIABLE-HERE-AND-MAKE-THE-RUN-SCRIPT-FOR-THIS-PLAYBOOK
-  hosts: "{{ hosts_new_ec2_instance_for_installs }}"
+  # vars:
+  #   instance_for_installs: new-instance-a
+  hosts: "{{ instance_for_installs }}"
   tasks:
     ########
-    - name: Block01 - AWS Info
+    - name: Block01 - Installs, Setup
       block:
         ####
         - name: B01T01 - Update Yum Package Manager -- yum update -y
@@ -50,8 +51,8 @@
           debug:
             var: out_docker_start.stdout_lines
         ####
-        - name: B01T06 - Sleep 10 while Docker Service Starts -- sleep 10
-          command: sudo sleep 10
+        - name: B01T06 - Sleep 8 seconds while Docker Service Starts -- sleep 8
+          command: sudo sleep 8
           register: out_sleep
         - name: B01T06-out
           debug:
@@ -65,7 +66,7 @@
             var: out_docker_status.stdout_lines
         ####
         - name: B01T08 - Docker Info -- docker info
-          command: sudo service docker status
+          command: sudo docker info
           register: out_docker_info
         - name: B01T08-out
           debug:
New file: manual-steps notes for play03 (+230 lines)

# THESE ARE NOTES WITH COMMANDS AND OUTPUT - DO NOT RUN THIS AS A SCRIPT.
# THESE STEPS ARE -GOOD- AND PROVEN AT LEAST 3 TIMES ON EC2 DEPLOYMENTS.

STARTING FROM HERE TO CONTINUE AUTOMATION IN ONE OR TWO MORE PLAYBOOKS:
THESE TWO STEPS ARE THE LAST TWO ADDED TO:
playbooks/play03--format-mount-volume.yaml


* * * * * * STANDARDIZED MOUNT POINT:
/data
* Create directory to mount /dev/xvdb to.
sudo mkdir /data


* * * * * * MOUNT:
sudo mount /dev/xvdb /data

*************************************** CONTINUE AUTOMATION WORK FROM HERE ********************************************
IN: playbooks/play03--format-mount-volume.yaml
** MAYBE SPLIT IT INTO TWO PLAYBOOKS

TODO: MAKE A PLAYBOOK FOR SIMPLE REMOUNTING UNTIL WE GET THIS FSTAB AUTO-REMOUNTING SETUP AUTOMATED.
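Until that remount playbook exists, the manual step can be sketched as a small idempotent shell helper (a sketch only; the function name is mine, the device and mount point defaults are this project's, and `mountpoint` comes from util-linux, which Amazon Linux ships):

```shell
# Sketch of an idempotent remount helper for the /data volume.
# Safe to run repeatedly: it only mounts when the path is not already a mountpoint.
remount_data() {
    local device="${1:-/dev/xvdb}"
    local mnt="${2:-/data}"
    if mountpoint -q "$mnt"; then
        # Nothing to do; mounting again would fail or shadow the existing mount.
        echo "already mounted: $mnt"
    else
        sudo mkdir -p "$mnt"
        sudo mount "$device" "$mnt"
        echo "mounted $device on $mnt"
    fi
}
```

The same check-before-mount shape is what an Ansible play would need for idempotence.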
HERE ARE THE PREVIOUS STEPS LOGGED:
[ec2-user@ip-172-31-8-87 ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        2.0G     0  2.0G   0% /dev
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           2.0G  444K  2.0G   1% /run
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/xvda1       30G  2.1G   28G   7% /
tmpfs           393M     0  393M   0% /run/user/1000
[ec2-user@ip-172-31-8-87 ~]$
[ec2-user@ip-172-31-8-87 ~]$ sudo mkdir /data
[ec2-user@ip-172-31-8-87 ~]$
[ec2-user@ip-172-31-8-87 ~]$ sudo mount /dev/xvdb /data
[ec2-user@ip-172-31-8-87 ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        2.0G     0  2.0G   0% /dev
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           2.0G  444K  2.0G   1% /run
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/xvda1       30G  2.1G   28G   7% /
tmpfs           393M     0  393M   0% /run/user/1000
/dev/xvdb        30G  247M   30G   1% /data


* * * * * * ENSURE REMOUNTING AFTER EVERY REBOOT:
Add an entry to /etc/fstab, but first make a backup:
sudo cp /etc/fstab /etc/fstab.orig


[ec2-user@ip-172-31-8-87 ~]$ sudo cp /etc/fstab /etc/fstab.orig
[ec2-user@ip-172-31-8-87 ~]$ cat /etc/fstab.orig
#
UUID=1377e573-627c-46ee-b7ca-9b86138b39db     /           xfs    defaults,noatime  1   1


Now we need the new UUID:
Use the blkid command to find the UUID of the device. Make a note of the UUID of the
device that you want to mount after reboot. You'll need it in the following step.


(Commands differ on Ubuntu vs RH/other variants)
[ec2-user@ip-172-31-8-87 ~]$ sudo blkid
/dev/xvda1: LABEL="/" UUID="1377e573-627c-46ee-b7ca-9b86138b39db" TYPE="xfs" PARTLABEL="Linux" PARTUUID="9a2c3f9e-8213-4d6a-8591-a0bc2666b3f9"
/dev/xvdb: UUID="5a189975-cd19-43e8-b5ff-6f3bef05ccc5" TYPE="xfs"
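For the automation, the "make a note of the UUID" step can be done programmatically instead. A sketch (the function name is mine), parsing captured blkid output for one device:

```shell
# Sketch: extract the UUID for one device from `blkid`-style output on stdin.
# Usage idea in a playbook step: sudo blkid | uuid_for_device /dev/xvdb
uuid_for_device() {
    local device="$1"
    awk -v dev="$device:" '$1 == dev {
        # scan the fields of the matching line for the UUID="..." tag
        for (i = 2; i <= NF; i++)
            if ($i ~ /^UUID=/) { gsub(/UUID=|"/, "", $i); print $i }
    }'
}
```

Note that `sudo blkid -s UUID -o value /dev/xvdb` prints just the UUID directly, which is even simpler when you can run blkid per device.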
* * * * * * ADD THE ENTRY WITH THE CORRECT UUID TO /etc/fstab
sudo vim /etc/fstab

This is the file I made, by adding the last line shown below.


I made backups of the original and also made 2 extra backup files with the new
entry, one with it active and one with it commented out.
[ec2-user@ip-172-31-8-87 ~]$ sudo cp /etc/fstab /etc/fstab.newvol-prepped-but-disabled
[ec2-user@ip-172-31-8-87 ~]$ sudo vim /etc/fstab
[ec2-user@ip-172-31-8-87 ~]$ sudo cp /etc/fstab /etc/fstab.newvol-prepped-but-disabled
[ec2-user@ip-172-31-8-87 ~]$ sudo vim /etc/fstab
[ec2-user@ip-172-31-8-87 ~]$ sudo cp /etc/fstab /etc/fstab.datavol-active-backup

This is the /etc/fstab to use normally:
--------
#
UUID=1377e573-627c-46ee-b7ca-9b86138b39db     /           xfs    defaults,noatime  1   1
UUID=5a189975-cd19-43e8-b5ff-6f3bef05ccc5     /data       xfs    defaults,nofail   0   2
--------

OPTIONS EXPLAINED:
nofail means the instance will be allowed to boot even if the vol can't mount.
2 (the sixth field) is the fsck pass order: the root vol gets 1, all other vols get 2.
0 (the fifth field) is for the legacy dump(8) backup utility; 0 means dump skips this filesystem.
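For the eventual automation, the manual vim edit can be replaced by generating the line from the UUID. A minimal sketch (the function name is mine; the mount options are the ones chosen above):

```shell
# Sketch: build the /data fstab entry for a given UUID.
# Usage idea (not run here): fstab_entry "$uuid" | sudo tee -a /etc/fstab
fstab_entry() {
    printf 'UUID=%s /data xfs defaults,nofail 0 2\n' "$1"
}
```

Ansible's ansible.posix.mount module can also manage the fstab entry idempotently, which may be the cleaner route once this is in a playbook.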
====================================================================================
====================================================================================

Above here we complete setup of the /data volume.
Now we do an ssh-keygen in case we need a public key later.

* * * * * * SSH-KEYGEN FOR FUTURE POSSIBLE USE:
ssh-keygen -t rsa -q -N ""

[ec2-user ~]$ ssh-keygen -t rsa -q -N ""
Enter file in which to save the key (/home/ec2-user/.ssh/id_rsa):

* TODO: The ssh-keygen could possibly be made promptless using -f for file loc.
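That TODO works: when -f supplies the file location, there is nothing left to prompt for. A sketch (the temp directory here is only for illustration; on the instance the path would be /home/ec2-user/.ssh/id_rsa):

```shell
# Promptless keygen sketch: -f supplies the file location, -N "" the empty
# passphrase, and -q keeps it quiet, so no interaction is needed at all.
keydir=$(mktemp -d)
ssh-keygen -t rsa -q -N "" -f "$keydir/id_rsa"
ls "$keydir"    # id_rsa and id_rsa.pub are created
```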
====================================================================================
====================================================================================

* * * * * * AWS CONFIGURE * * * * * *

[ec2-user ~]$ aws configure
AWS Access Key ID [None]: ********************
AWS Secret Access Key [None]: *****************************************
Default region name [None]: us-west-2
Default output format [None]:

For output format, just pressed enter (None).

* * * * * * * PERFORM aws ec2 describe-volumes TO PROVE AWS CONFIGURE AUTH WORKS * * * * *

[ec2-user ~]$
[ec2-user ~]$ aws ec2 describe-volumes
{
    "Volumes": [
.. truncated by me .. it does work.
    ]
}
[ec2-user ~]$

* * * * * * TEST AWS GET LOGIN - FOR REPO USAGE * * * * * *

aws ecr get-login --no-include-email --region us-west-2
* UPDATE: I actually have scripts which get the password and include it in the pull
command, so this has evolved a little. See Nucleus Stack.

* We will keep using the old AWS CLI version for now. I might have tried upgrading it
and ran into issues. I think BotFolk repo has notes on this.
Whatever we do currently on BF, we will do on SM.
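If the CLI is ever upgraded, note that `aws ecr get-login` was replaced by `aws ecr get-login-password`, which pipes into docker login. A sketch of that newer flow (the function name, region default, and registry URL shape are mine for illustration):

```shell
# Sketch of the newer ECR login flow (AWS CLI v2 / late v1).
# registry looks like: <account-id>.dkr.ecr.<region>.amazonaws.com
ecr_docker_login() {
    local region="${1:-us-west-2}"
    local registry="$2"
    aws ecr get-login-password --region "$region" |
        docker login --username AWS --password-stdin "$registry"
}
```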
143+
144+
145+
====================================================================================
146+
====================================================================================
147+
148+
* * * * * * DOCKER COMPOSE * * * * * *
149+
150+
Latest version on GitHub currenty 2.14.2
151+
WE WILL DO THE CLASSIC "STANDALONE" INSTALL, NOT THE NEW PLUGIN.
152+
(Because BotFolk.ai works great and so we are not changing anything unless
153+
part of the new technology stack plan. We can experiment in peripheral/support areas
154+
later when dev time is less-scarce.)
155+
156+
157+
LATEST COMMAND TO USE:
158+
curl -SL https://github.com/docker/compose/releases/download/v2.14.2/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose
159+
COMMANDS I USED ON BOTFOLK (approximate, taken from ec2 history):
160+
cd /usr/local/bin
161+
sudo curl -L https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
162+
163+
Wow, major version change and differences CMD but essentially the same.
164+
I will do this one:
165+
cd /usr/local/bin
166+
sudo curl -SL https://github.com/docker/compose/releases/download/v2.14.2/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose
167+
----------------------------------------------------------
168+
/usr/local/bin/ starts out empty on this VM, which is nice!
169+
170+
[ec2-user ~]$ ls -alt /usr/local/bin
171+
total 0
172+
drwxr-xr-x 12 root root 131 Nov 12 01:06 ..
173+
drwxr-xr-x 2 root root 6 Apr 9 2019 .
174+
175+
------------------------------------------
176+
177+
178+
[ec2-user ~]$ cd /usr/local/bin
179+
[ec2-user bin]$ ls -alt
180+
total 0
181+
drwxr-xr-x 12 root root 131 Nov 12 01:06 ..
182+
drwxr-xr-x 2 root root 6 Apr 9 2019 .
183+
[ec2-user bin]$ sudo curl -SL https://github.com/docker/compose/releases/download/v2.14.2/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose
184+
% Total % Received % Xferd Average Speed Time Time Time Current
185+
Dload Upload Total Spent Left Speed
186+
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
187+
100 42.8M 100 42.8M 0 0 46.9M 0 --:--:-- --:--:-- --:--:-- 72.6M
188+
[ec2-user bin]$ docker-compose
189+
-bash: /usr/local/bin/docker-compose: Permission denied
190+
191+
192+
-----------------------------------------------------
193+
194+
So we need some perms changed.
195+
196+
* * * * * * SET DOCKER-COMPOSE BINARY PERMISSIONS
197+
198+
We are in /usr/local/bin
199+
200+
[ec2-user bin]$ sudo chmod 755 docker-compose
201+
202+
And the command help now works for the ec2-user without sudo, so I think we are good.
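For the eventual playbook, the download and the chmod can be folded into one step; a sketch (the function name is mine; version pinned to the one used above, URL and flags as in the log):

```shell
# Sketch: standalone docker-compose install, download + executable bit in one go.
install_docker_compose() {
    local version="${1:-v2.14.2}"
    local dest=/usr/local/bin/docker-compose
    sudo curl -SL \
        "https://github.com/docker/compose/releases/download/${version}/docker-compose-linux-x86_64" \
        -o "$dest"
    sudo chmod 755 "$dest"     # avoids the Permission denied seen above
    docker-compose version      # smoke test
}
```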
===================================================================================

* NOTE: It seems we don't need to use sudo when running the site.
I think we can just do docker-compose up as ec2-user. I don't know of any difference
aside from security differences. But this test using sudo also works:
[ec2-user bin]$ sudo /usr/local/bin/docker-compose version
Docker Compose version v2.14.2

====================================================================================
====================================================================================

* * * * * * DOCKER VOLUME SETUP * * * * * *

Create the directories for dbvolume and datavolume
mkdir /data/datavolume
mkdir /data/dbvolume

These paths will only be mapped to in the production docker-compose.production.yaml
which lives on the EC2 instance.

TODO: See if any perms need to be changed.
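For the playbook, those two mkdirs can be made idempotent with -p; a tiny sketch (the function name is mine; on the instance the base is /data and the call needs sudo):

```shell
# Sketch: create the two Docker bind-mount directories under a base dir.
# mkdir -p makes the step idempotent, so re-running the play is harmless.
make_volume_dirs() {
    local base="$1"
    mkdir -p "$base/datavolume" "$base/dbvolume"
}
```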
====================================================================================
====================================================================================

run-play03--mount-vol-to-ec2.sh → run-play03--format-mount-volume.sh (renamed, +2 −2)

@@ -7,15 +7,15 @@ echo "TODO: The final piece of automation for this goal will edit the fstab file
 echo "For the time being, if the instance is stopped and restarted, you will need to DO: sudo mount /dev/xvdb /data"
 echo "TODO: In the interim, while the fstab solution is in the works, I will provide a simple Ansible remount play."
 echo "The required manual steps are in the following file:"
-echo "**** See: playbooks/todo--play03--mount-vol-to-ec2.txt"
+echo "**** See: playbooks/todo--play03--format-mount-volume.txt"

 echo
 echo "Enter the host value for the managed node for which to format and mount the new attached volume."
 echo "Enter 'format_mount_vol_host' value: "
 read -r format_mount_vol_host


-ansible-playbook -i inventory.yaml playbooks/play03--mount-vol-to-ec2.yaml \
+ansible-playbook -i inventory.yaml playbooks/play03--format-mount-volume.yaml \
     --extra-vars="format_mount_vol_host=$format_mount_vol_host"

New file: run script for play04 (+14 lines)

#! /usr/bin/env bash

echo
echo "PERFORM ROOT INSTALLATIONS AND SETUP."

echo
echo "Enter the host value for the managed node for which to install packages and libraries and perform setup."
echo "Enter 'instance_for_installs' value: "
read -r instance_for_installs


ansible-playbook -i inventory.yaml playbooks/play04--ec2-root-installations-and-setup.yaml \
    --extra-vars="instance_for_installs=$instance_for_installs"
