CruzanCaramele/ZeroDownTime

Automating the building and launching of a highly available, fault-tolerant application infrastructure, with continuous application delivery, to achieve zero downtime.

About

While many tools that enable DevOps practice and culture are available for deploying applications to machines in the cloud, doing so with no downtime, while keeping the deployment process simple, is a real test of how useful those tools are at enabling DevOps.

Though a single tool may enable methods such as infrastructure as code, continuous integration and deployment, and immutability, it is important to measure the strengths of each tool against the value it adds to the DevOps culture in place, since methods are subservient to values.

Tools should simplify methods, and methods in turn should encourage learning throughout the DevOps lifecycle.

ZeroDownTime is:

  • highly available: nodes span multiple availability zones
  • secure: instances run in private subnets behind a NAT gateway
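
A minimal Terraform sketch of that layout is shown below; the resource names, CIDR blocks, and availability zones are illustrative placeholders rather than the project's actual configuration, and the internet gateway and route tables are omitted for brevity.

```hcl
# Illustrative sketch only; names, CIDRs, and AZs are placeholders,
# not the actual configuration in the terraform folder.

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# Public subnet that hosts the NAT gateway
resource "aws_subnet" "public" {
  vpc_id            = "${aws_vpc.main.id}"
  cidr_block        = "10.0.0.0/24"
  availability_zone = "us-east-1a"
}

# One private subnet per availability zone for high availability
resource "aws_subnet" "private_a" {
  vpc_id            = "${aws_vpc.main.id}"
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "private_b" {
  vpc_id            = "${aws_vpc.main.id}"
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"
}

# Application instances live in the private subnets and reach the
# internet only through the NAT gateway.
resource "aws_eip" "nat" {
  vpc = true
}

resource "aws_nat_gateway" "nat" {
  allocation_id = "${aws_eip.nat.id}"
  subnet_id     = "${aws_subnet.public.id}"
}
```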

Infrastructure Design Overview

The application infrastructure built in this project is designed for fault tolerance and high availability.

[Infrastructure design diagram]

Infrastructure Resource Dependency Diagram

[Infrastructure resource dependency diagram]

The aim of this project is to use industry-standard tools to eliminate the window of time during which a node serving an application in the cloud is switched from the current version to a new version. In a production environment, that window is downtime: the node serving the application is unavailable to users.

These are the methods implemented in this project and the tools that help achieve these methods:

Methods

  • infrastructure as code
  • continuous deployment
  • immutability
  • idempotence

Tools

  • Apache
  • Puppet
  • Packer
  • Terraform
  • Atlas
  • Amazon Web Services (AWS)
  • Git
  • Ubuntu

Requirements to Run this Project

Content

  • app folder : a simple sample application to be deployed
  • packer folder : the Packer template that builds the Amazon Machine Image (AMI) used for launching nodes
  • terraform folder : the infrastructure expressed as code (Terraform configuration)

Delivery Pipeline and Image Deployment Process

[Delivery flow diagram]

Packer runs in Atlas to build the application AMI, which is provisioned and configured using Puppet. This creates an artifact that is stored in Atlas.

Terraform then reads from the artifact registry and deploys new instances using this AMI. When the application AMI is updated, the process starts again: continuous delivery for immutable infrastructure. New nodes are created whenever the AMI or the infrastructure itself changes, and old nodes are then destroyed, with creation happening before destruction so there is no downtime during the process.
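
Sketched in Atlas-era Terraform, that flow roughly amounts to an atlas_artifact resource that looks up the AMI Packer pushed to the registry, plus a create_before_destroy lifecycle rule so a replacement node exists before the old one is removed. The artifact name, region, and instance settings below are placeholders, not the project's actual configuration.

```hcl
# Illustrative sketch; artifact name, region, and instance settings are placeholders.

# Look up the AMI that Packer pushed to the Atlas artifact registry.
resource "atlas_artifact" "web" {
  name    = "your_atlas_username/zerodowntime"
  type    = "amazon.image"
  version = "latest"
}

resource "aws_instance" "web" {
  # metadata_full maps regions to AMI IDs, e.g. "region.us-east-1"
  ami           = "${atlas_artifact.web.metadata_full.region-us-east-1}"
  instance_type = "t2.micro"

  # Create the replacement node before destroying the old one,
  # so a new AMI version rolls out without downtime.
  lifecycle {
    create_before_destroy = true
  }
}
```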

How To Run This Project (Part 1: Building the Artifact)

  • Ensure you have an Atlas Account and AWS Account.
  • Install Packer and Terraform
  • Clone this repository.
  • From the directory Zero_DownTime/ops/packer, access the main.json file and enter your Atlas and AWS credentials in the variables section, or follow these instructions to set the variables as environment variables.
  • On a command-line program such as Git Bash, from within the directory Zero_DownTime/ops/packer, execute the command packer push main.json (see the command sketch after this list). This uploads the Packer template and provisioners to your Atlas account.
  • From your Atlas account, navigate to the Packer link and access the newly built configuration that you just pushed.
  • On the left side of the page, access the Variables link and enter all the environment variables from the main.json file and their corresponding values.
  • Navigate back to the Builds link and queue a build. A successful build results in the creation of an artifact, in this case an AWS artifact. This can be viewed from the Artifacts Registry.
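
Put together, and assuming you opted for environment variables rather than editing main.json, the build step from the command line looks roughly like this (the token and keys are placeholders for your own credentials):

```sh
# Placeholders; substitute your own Atlas token and AWS credentials.
export ATLAS_TOKEN="your_atlas_token"
export AWS_ACCESS_KEY_ID="your_aws_access_key"
export AWS_SECRET_ACCESS_KEY="your_aws_secret_key"

cd Zero_DownTime/ops/packer
packer push main.json   # uploads the template and provisioners to Atlas
```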

Part 2: Launching the Infrastructure & Deploying the application

  • On a command-line program such as Git Bash, navigate to the directory Zero_DownTime/ops/terraform.
  • Create an environment for the infrastructure remotely on Atlas by executing the command terraform remote config -backend-config "name=your_atlas_username/name_of_your_environment"
  • Upload the Terraform files by executing the command terraform push -name "your_atlas_username/name_of_your_environment" (the full command sequence is sketched after this list).
  • On your Atlas account, navigate to the Terraform link and access your newly created environment.
  • Queue a plan to plan the infrastructure, then apply the plan to spin up the nodes and deploy the application on AWS.
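
The corresponding command sequence, with your own Atlas username and environment name substituted, is roughly:

```sh
cd Zero_DownTime/ops/terraform

# Point Terraform's remote state at an Atlas environment
terraform remote config -backend-config "name=your_atlas_username/name_of_your_environment"

# Upload the Terraform files so plans and applies can be queued from Atlas
terraform push -name "your_atlas_username/name_of_your_environment"
```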

Empathy is part of DevOps

As a DevOps practitioner, making the lives of the people one works with easier is essential; as such, creating a local development environment that mirrors the production environment for developers is necessary.

In this project the local.json file is used to build a development environment identical to the one used in production. It can be shared among developers and used regardless of the operating system they work on. This maintains consistency and avoids moments such as "that code worked well on my system, it's an ops problem now that it does not execute in production."
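
As a rough sketch of the idea (the builder, base image, and manifest path here are assumptions for illustration; the repository's actual local.json may use a different local builder), such a template reuses the same Puppet provisioning step against a local target instead of the amazon-ebs builder:

```json
{
  "builders": [
    {
      "type": "docker",
      "image": "ubuntu:14.04",
      "commit": true
    }
  ],
  "provisioners": [
    {
      "type": "puppet-masterless",
      "manifest_file": "puppet/manifests/default.pp"
    }
  ]
}
```

Because the provisioning code is shared, the machine a developer runs locally ends up configured the same way as the AMI built for production.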
