AUTOMATING PHP DOCKER IMAGE DEPLOYMENT TO ECR USING JENKINS AND PROVISIONING INFRASTRUCTURE WITH TERRAFORM/PULUMI.

To deploy many small applications, such as a web front-end, web back-end, processing jobs, monitoring, and logging solutions, some of the applications will require different OS versions, runtimes, and conflicting dependencies. In such cases you would need to spin up servers for each group of applications with the exact OS/runtime/dependency requirements. When this scales out to tens, hundreds, or even thousands of applications, the approach becomes very tedious and challenging to maintain.

Also, when a developer builds an application and hands it to another developer or DevOps engineer on the team, there is the familiar problem where the code runs on the developer's computer but doesn't work on a team member's machine.

SOLUTION:

Containerization solves this problem. Unlike a VM, Docker does not allocate a whole guest OS to your application, but only an isolated, minimal part of one. This isolated container has everything the application needs, and at the same time is lighter, faster, and can be shipped as a Docker image to multiple physical or virtual environments, as long as the environment can run the Docker engine. This approach also solves the environment-incompatibility issue.

In other words, if an application is shipped as a container, it has its own isolated environment and will always work the same way on any server that runs the Docker engine.

This project provides a step-by-step process for automating PHP Docker image deployment to ECR using Jenkins, and for provisioning the infrastructure with Terraform or Pulumi.

THE ARCHITECTURE

Prerequisites

  • A Jenkins server installed and configured. You can install Jenkins by following the official documentation - jenkins.io
  • Docker installed on the machine where Jenkins is running.
  • The AWS CLI installed on the Jenkins server.
  • A Pulumi or Terraform account.

TASK

The process involves creating a Docker image and verifying that it works as expected, then using Jenkins CI/CD together with Terraform and Packer for infrastructure provisioning and the AMI build, respectively.

Alternatively, Pulumi is employed for infrastructure provisioning, enabling the smooth deployment of the Docker image to Amazon Elastic Container Registry (ECR).

Setup using Terraform

  • Create the AMI for the Jenkins server using Packer.
  • Use Terraform to deploy the necessary infrastructure components for Amazon Elastic Container Registry (ECR) and a Jenkins server. The Jenkins server will use the AMI created in the previous step.
  • Configure Jenkins to build and push the Docker image to ECR.

Setup using Pulumi

  • Use Pulumi to deploy the necessary infrastructure components for Amazon Elastic Container Registry (ECR) and a Jenkins server. Pulumi uses the jenkins-docker-setup.sh script to install Jenkins and Docker on the instance during provisioning.
  • Configure Jenkins to build and push the Docker image to ECR.

Building and testing the Docker image before setting up the deployment pipeline to Amazon Elastic Container Registry (ECR) is good practice: it ensures the application is correctly containerized before the pipeline takes over building, pushing, and versioning images to ECR.

Create an EC2 instance, install Docker, and add the user to the docker group.

$ sudo usermod -aG docker ubuntu

Use the Docker documentation to install Docker.

Log out and log back in for the group change to take effect.

Create a network:

Creating a custom network is not strictly necessary; if we do not create one, Docker will attach all containers to the default network. However, there are use cases where it is needed, for example when you must control the CIDR range of the containers running the application stack. In that case, create a network and specify the --subnet.

Create a network with a subnet dedicated to this project and use it for both MySQL and the application so that they can connect.

$ docker network create --subnet=10.0.0.0/24 tooling_app_network

Verify this by running

$ docker network ls

Run the MySQL Server container using the created network.

First, let us create an environment variable to store the root password:

$ export MYSQL_PW=<password>

Verify that the environment variable is created:

$ echo $MYSQL_PW

Then, pull the image and run the container in the network that was created earlier.

$ docker run --network tooling_app_network -h mysqlserverhost --name=toolingdb -e MYSQL_ROOT_PASSWORD=$MYSQL_PW -d mysql/mysql-server:latest

To verify that the image has been pulled successfully and that the container is running:

$ docker images

$ docker ps

To adhere to security best practice, it is not recommended to establish remote connections to the MySQL server as the root user. Instead, we create an SQL script that generates a new user dedicated to remote connections, which enhances the overall security of the system.

Create a file named create_user.sql and add the following code to it:

CREATE USER 'dybran'@'%' IDENTIFIED BY '<password>';

GRANT ALL PRIVILEGES ON *.* TO 'dybran'@'%';

Ensure you are in the directory where the create_user.sql file is located, or provide its full path.

Run the script

$ docker exec -i toolingdb mysql -uroot -p$MYSQL_PW < create_user.sql

Connect to the MySQL server from a second container running the MySQL client utility.

The advantage of this approach is that you do not need to install any client-side tools on your machine, and you avoid connecting directly to the container hosting the MySQL server.

Run the MySQL Client Container:

$ docker run --network tooling_app_network --name toolingdb-client -it --rm mysql mysql -h mysqlserverhost -u dybran -p

Prepare database schema

Prepare a database schema so that the Tooling application can connect.

Clone the Tooling-app repository here.

$ git clone https://github.com/dybran/tooling-2.git

In the terminal, export the location of the SQL file:

$ export DB_SCHEMA=/home/ubuntu/tooling-2/html/tooling_db_schema.sql

Use the SQL script to create the database and prepare the schema. With the docker exec command, you can execute a command in a running container.

$ docker exec -i toolingdb mysql -uroot -p$MYSQL_PW < $DB_SCHEMA

Containerizing the Tooling Application

Write the Dockerfile
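A minimal sketch of such a Dockerfile, assuming a PHP/Apache base image serving the code from the html/ directory (the base image tag and extension choice here are assumptions, not the repository's exact file):

# Dockerfile (sketch) for the tooling app
FROM php:8-apache

# The app connects to MySQL, so enable the mysqli extension
RUN docker-php-ext-install mysqli

# Copy the application code into Apache's web root
COPY html/ /var/www/html/

# Apache listens on port 80 inside the container
EXPOSE 80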

N/B: Make sure to comment out line 20 in the tooling-app/html/db_conn.php.

Make sure you are in the directory that contains the Dockerfile, i.e. /home/ubuntu/tooling-2.

Then run the command

$ docker build -t tooling:1.0 .

Update the .env file with connection details to the database.

The .env file is located in the tooling-2/html directory; because its name begins with a dot, it is hidden and will not show up in a plain ls.

$ cd tooling-2/html

$ sudo vi .env

To get more information about the toolingdb container, run:

$ docker inspect toolingdb

We can use the output to find the server name (IP address) and much other information about the container.

Open the db_conn.php file and update the credentials used to connect to the tooling database.

Environment variables are stored outside the codebase and are specific to the environment in which the application runs. The values are set on the server or in the hosting environment and are accessible to the PHP code via the $_ENV superglobal array.
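As an illustration, the connection code might read its values along these lines (the variable names are assumptions, not necessarily those used in the repository; getenv() works as well):

<?php
// db_conn.php (sketch): read connection details from the environment
// instead of hard-coding them in the codebase.
// Note: $_ENV requires 'E' in PHP's variables_order setting; getenv() avoids that.
$servername = $_ENV["MYSQL_IP"];     // e.g. mysqlserverhost
$username   = $_ENV["MYSQL_USER"];
$password   = $_ENV["MYSQL_PASS"];
$dbname     = $_ENV["MYSQL_DBNAME"];

// Open the connection to the tooling database
$conn = new mysqli($servername, $username, $password, $dbname);
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}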

Run the container

$ docker run --network tooling_app_network -p 9080:80 -it -d tooling:1.0

Access the application through the browser

<Ip-address>:9080

Display the running containers for the tooling application and the toolingdb

$ docker ps

Stop the containers using the command

$ docker stop <container-id>

To remove the network

$ docker network rm tooling_app_network

AUTOMATE INFRASTRUCTURE PROVISIONING UTILIZING TERRAFORM OR PULUMI

I will provision the infrastructure using Terraform.

I will also demonstrate the provisioning process using Pulumi, so you can choose either of these Infrastructure as Code (IaC) tools.

Provisioning the Infrastructure using Terraform

First, we build the AMI for the Jenkins server using Packer, with the jenkins-docker.sh script as the provisioner.

#!/bin/bash

# The jenkins & docker shell script that will run on instance initialization


# Install jenkins and java
sudo apt-get update
sudo apt install openjdk-17-jre -y

curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee \
  /usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
  https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
  /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins -y


# Install docker
sudo apt-get install ca-certificates curl gnupg -y

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y


# Add ubuntu & Jenkins to the Docker group
sudo usermod -aG docker ubuntu
sudo usermod -aG docker jenkins

# run docker test container 
sudo docker run hello-world

# install aws cli
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" 
sudo apt install unzip -y
sudo unzip awscliv2.zip  
sudo ./aws/install
aws --version

# start & enable jenkins
sudo systemctl start jenkins
sudo systemctl enable jenkins

The above script:

  • Installs Java and Jenkins
  • Installs Docker
  • Installs the AWS CLI
  • Adds the jenkins and ubuntu users to the docker group

N/B: Always refer to the official Jenkins and Docker documentation.

Run the command

$ cd AMI

$ packer build jenkins-docker.pkr.hcl
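For reference, the Packer template could look something like this minimal sketch (the region, instance type, and source-AMI filter are assumptions; the actual template is in the linked repository):

# jenkins-docker.pkr.hcl (sketch)
packer {
  required_plugins {
    amazon = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

source "amazon-ebs" "jenkins" {
  ami_name      = "jenkins-docker-{{timestamp}}"
  instance_type = "t2.medium"
  region        = "us-east-1"
  ssh_username  = "ubuntu"

  # Start from the most recent Ubuntu 22.04 image published by Canonical
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"]
  }
}

build {
  sources = ["source.amazon-ebs.jenkins"]

  # Bake Jenkins, Docker and the AWS CLI into the image using the script above
  provisioner "shell" {
    script = "jenkins-docker.sh"
  }
}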

Then modify the Terraform script to reference the resulting AMI ID. The AMI code is accessible here.

Compose the Terraform script for infrastructure provisioning, then add the AMI ID to terraform.auto.tfvars. The Terraform code is accessible here.
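As an illustration, the core resources might look like this (the variable name, repository name, and instance size are assumptions; the actual code is in the linked repository):

# main.tf (sketch)
variable "ami" {}   # set in terraform.auto.tfvars to the AMI ID produced by Packer

# ECR repository that Jenkins will push the tooling image to
resource "aws_ecr_repository" "tooling" {
  name = "tooling"
}

# Jenkins server built from the Packer AMI
resource "aws_instance" "jenkins" {
  ami           = var.ami
  instance_type = "t2.medium"

  tags = {
    Name = "jenkins-server"
  }
}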

Migrate the Terraform code to Terraform Cloud

For the process of migrating Terraform code to Terraform Cloud, refer to Project 19.

Plan and apply the configuration.

The resources are created.

Establish an SSH connection to the Jenkins server and proceed with the configuration required to enable building and pushing to the Elastic Container Registry (ECR).

$ sudo systemctl start jenkins

$ sudo systemctl enable jenkins

Provisioning the Infrastructure using Pulumi

Provisioning infrastructure using Pulumi involves using code to define and manage cloud resources across various cloud providers like AWS, Azure, Google Cloud, and others. Pulumi allows you to define your infrastructure as code (IaC) using your preferred programming language, such as JavaScript, TypeScript, Python, Go, and more.

Install Pulumi:

Start by installing Pulumi on your development machine. You can find installation instructions for your specific operating system on the Pulumi website. Click here.

Open a terminal and create a new directory. Navigate to that directory and run the following command to create a new Pulumi project:

$ pulumi new <template>

Replace <template> with the appropriate template for your chosen programming language and cloud provider. For example, to create a new Python project for AWS, you would run:

$ pulumi new aws-python

Follow the prompts to set up the Pulumi project.

Open __main__.py and write the Pulumi code for provisioning the infrastructure, including the Jenkins server bootstrapped with the provided script (see the sketch after the list below). The script:

  • Installs Java and Jenkins
  • Installs Docker
  • Installs the AWS CLI
  • Adds the jenkins and ubuntu users to the docker group
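A minimal sketch of what __main__.py might contain (the AMI ID, instance size, and resource names are placeholders and assumptions; the actual code is in the linked repository):

# __main__.py (sketch)
import pulumi
import pulumi_aws as aws

# ECR repository that Jenkins will push the tooling image to
repo = aws.ecr.Repository("tooling")

# Pass jenkins-docker-setup.sh as user data so Jenkins, Docker and the
# AWS CLI are installed when the instance boots
with open("jenkins-docker-setup.sh") as f:
    user_data = f.read()

jenkins = aws.ec2.Instance(
    "jenkins-server",
    ami="ami-xxxxxxxxxxxxxxxxx",  # an Ubuntu AMI for your region
    instance_type="t2.medium",
    user_data=user_data,
)

pulumi.export("jenkins_public_ip", jenkins.public_ip)
pulumi.export("ecr_repository_url", repo.repository_url)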

Then run the command

$ pulumi up

Select "yes" to confirm the provisioning of the resources.

Use the displayed link to follow the provisioning in the Pulumi console.

Now we can SSH into the Jenkins server and set up an automated Docker image build: Jenkins is triggered to start a build whenever a push event is detected in the GitHub repository, via a webhook.

Go to Manage Jenkins > Security and check Enable proxy compatibility.

Access the Jenkins server, start Jenkins and Docker, and verify the successful installation of Jenkins, Docker, and the AWS CLI.

$ sudo systemctl start jenkins

$ sudo systemctl start docker

$ aws --version

Configure the jenkins server

On the GitHub repository, configure a webhook that points to the Jenkins server (the payload URL is typically http://<jenkins-ip>:8080/github-webhook/).

A few plugins must be installed, alongside some other mandatory configuration adjustments, for this process to work seamlessly.

Go to Manage Jenkins > Plugins and install the following plugins:

  • Docker Pipeline Plugin: This plugin allows you to define your Jenkins pipeline using Docker commands. It integrates Docker functionality directly into your Jenkins pipeline script.

  • Amazon ECR Plugin: This plugin provides integration with Amazon ECR. It allows you to easily push and pull Docker images to and from ECR repositories.

  • Blue Ocean: Blue Ocean aims to simplify the way you create, visualize, and manage your Jenkins pipelines. It offers a more user-friendly and visual approach to building and monitoring pipeline workflows.

Then open Blue Ocean and configure the Jenkins server to use the GitHub repository.

Generate a GitHub access token when prompted.
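The Jenkinsfile itself can be a short declarative pipeline along these lines (the registry URL, region, and repository name are placeholders, not the project's exact values):

// Jenkinsfile (sketch)
pipeline {
    agent any

    environment {
        AWS_REGION = 'us-east-1'
        ECR_REPO   = '<account-id>.dkr.ecr.us-east-1.amazonaws.com/tooling'
    }

    stages {
        stage('Build image') {
            steps {
                // Tag each image with the build number for versioning
                sh 'docker build -t $ECR_REPO:$BUILD_NUMBER .'
            }
        }
        stage('Push to ECR') {
            steps {
                sh '''
                  aws ecr get-login-password --region $AWS_REGION | \
                    docker login --username AWS --password-stdin $ECR_REPO
                  docker push $ECR_REPO:$BUILD_NUMBER
                '''
            }
        }
    }
}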

Push the Jenkinsfile to GitHub:

$ git add .

$ git commit -m "updated jenkinsfile"

$ git push

Jenkins automatically builds and pushes the Docker image to ECR.

When modifications are made to the code and pushed, a build is triggered automatically. The build pushes the resulting image to Amazon Elastic Container Registry (ECR) and applies versioning to the generated images.

In builds 6 and 7, no changes were made to the underlying code, so the codebase was identical in both builds.

In build 8, however, the code was changed before the build: the developers introduced modifications such as bug fixes, new features, performance improvements, or other adjustments to the source code.

The code for this project can be accessed here.