
Commit

Create CD pipeline
rgallardone committed Jun 28, 2024
1 parent 73b799b commit 3b89d52
Showing 4 changed files with 143 additions and 4 deletions.
86 changes: 86 additions & 0 deletions .github/workflows/cd.yml
@@ -0,0 +1,86 @@
name: 'Continuous Deployment'

on:
  push:
    branches:
      - main
      - develop
      - release/*

jobs:
  deployment:
    runs-on: ubuntu-latest
    environment: dev
    env:
      branch: main

    steps:
      - uses: actions/checkout@v4

      - name: Get the branch name
        id: get_branch_name
        run: |
          echo "branch=${GITHUB_HEAD_REF:-${GITHUB_REF#refs/heads/}}" >> $GITHUB_OUTPUT

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.9'

      - name: Authenticate to GCP
        uses: 'google-github-actions/auth@v2'
        with:
          credentials_json: '${{ secrets.CD_SA_KEYS }}'

      - name: Install dependencies
        run: |
          pip install -r requirements.txt -r requirements-dev.txt

      - name: Run training script
        run: |
          python train.py

      - name: Authenticate Docker to GAR
        uses: docker/login-action@v3
        with:
          registry: '${{ vars.GCP_REGION }}-docker.pkg.dev'
          username: _json_key
          password: ${{ secrets.CD_SA_KEYS }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v6
        with:
          push: true
          tags: '${{ vars.GAR_REPOSITORY }}/${{ vars.GAR_IMAGE_NAME }}-${{ steps.get_branch_name.outputs.branch }}'

      - name: Deploy the service to Cloud Run
        id: 'deploy'
        uses: 'google-github-actions/deploy-cloudrun@v2'
        with:
          service: '${{ vars.GCR_SERVICE_NAME }}-${{ steps.get_branch_name.outputs.branch }}'
          image: '${{ vars.GAR_REPOSITORY }}/${{ vars.GAR_IMAGE_NAME }}-${{ steps.get_branch_name.outputs.branch }}'
          region: '${{ vars.GCP_REGION }}'
          flags: '--allow-unauthenticated'

    outputs:
      service_url: ${{ steps.deploy.outputs.url }}

  stress_test:
    runs-on: ubuntu-latest
    needs: deployment

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.9'

      - name: Install dependencies
        run: |
          pip install -r requirements-test.txt

      - name: Run stress test
        run: |
          make stress-test API_URL=${{ needs.deployment.outputs.service_url }}
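
A note on the branch-name step: it relies on standard shell parameter expansion, where `GITHUB_HEAD_REF` is used when set (pull requests) and the `refs/heads/` prefix is otherwise stripped from `GITHUB_REF`. A minimal sketch of that behavior:

# On a push event, GITHUB_HEAD_REF is empty and GITHUB_REF holds the full ref
GITHUB_HEAD_REF=""
GITHUB_REF="refs/heads/develop"

# ':-' falls back when the first variable is unset or empty; '#' strips a prefix
echo "${GITHUB_HEAD_REF:-${GITHUB_REF#refs/heads/}}"  # prints: develop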
2 changes: 1 addition & 1 deletion Makefile
@@ -23,7 +23,7 @@ install: ## Install dependencies
	pip install -r requirements-test.txt
	pip install -r requirements.txt

-STRESS_URL = https://delay-model-dpmrk4cwxq-uw.a.run.app
+STRESS_URL = $(API_URL)
.PHONY: stress-test
stress-test:
	# change stress url to your deployed app
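
With this change, an `API_URL` value given on the `make` command line is expanded into `STRESS_URL`; if it is omitted, `STRESS_URL` expands to an empty string. For example, pointing the test at the previously hardcoded service:

# Any deployed instance can now be targeted per invocation
make stress-test API_URL=https://delay-model-dpmrk4cwxq-uw.a.run.app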
32 changes: 29 additions & 3 deletions docs/challenge.md
@@ -128,8 +128,34 @@ The results of the stress test are an error rate of 0%, an average response time
In this final step, the goal is to set up a proper CI/CD pipeline.
The Continuous Integration (CI) workflow focuses on running the tests and assessing the quality of the code each time there's a push to the repository, with the goal of detecting bugs earlier, correcting code faster and ensuring good code quality practices.
The Continuous Deployment (CD) workflow focuses on training the model, deploying the API and running the stress test against it. This workflow only runs when there's a push to the `main`, `develop` or `release` branches.
Let's describe each workflow in more detail.
### Continuous Integration
The goals of this workflow are to check code quality and to test the code. For the first goal, the code is checked with `black`, `flake8` and `isort` to ensure that the style and format are correct and fit the repository standards. For the second goal, the provided test suites (`model-test` and `api-test`) are run to ensure that changes to the code don't break the functionality of the `DelayModel` class and the API.
**Observation:** The test suites require a trained model to be available. However, these test suites run on Github workers and don't have access to local models. To circumvent this, the model checkpoint is tracked with Git and uploaded to the remote. This is not desirable, since models can grow rapidly in size and managing them inside the repository can become a problem. The ideal solution would be to maintain a proper Model Registry, with remote storage and good version management, so that trained models can be uploaded to it and downloaded for testing or deployment. Due to time restrictions, and since the model checkpoint is lightweight in this case, the decision was made to track the model.
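For reference, the checks this workflow performs amount to roughly the following shell commands, assuming `black`, `flake8` and `isort` pick up their configuration from the repository and that `model-test` and `api-test` are the corresponding Make targets:

# Style and format checks (fail on violations instead of rewriting files)
black --check .
flake8 .
isort --check-only .

# Provided test suites (assumed to be Make targets)
make model-test
make api-test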
### Continuous Deployment
The goal of this workflow is to train the model, build the Docker image with it and deploy it to a Cloud Run service. This workflow only runs when there's a push to the `main`, `develop` or `release` branches, and it deploys a different API for each of them. The reasoning is that separate deployments for the different stages of feature development and releases make it possible to test how changes affect the deployment while keeping the `main` API intact, serving only released features. For example, a push to `develop` deploys a `delay-model-develop` service, leaving the `delay-model-main` service untouched.
Here are the most important steps taken to develop this workflow:
* A small and simple training script (`train.py`) was created so that the Github Action (GA) runner trains the model before deploying it. This script loads all the available data, preprocesses it, trains the model and writes it to the location from which the Dockerfile copies the model into the Docker image. This is a simplification of a real scenario. Ideally, the data would be stored remotely and there would be separate remote jobs for preprocessing the data, training the model and uploading it to a Model Registry. These remote jobs could be triggered by the same events that trigger this workflow, but none of the preprocessing or training would run synchronously inside the GA.
* A GCP Service Account `cd-pipeline-sa` was created to grant the Github Actions runner permission to push the Docker image to the Artifact Registry repository and to deploy the Cloud Run service. The roles given to this SA are:
  - `Artifact Registry Writer`: enables the SA to push Docker images to the Artifact Registry repositories
  - `Cloud Run Admin`: gives the SA full control over the deployed Cloud Run services
  - `Service Account User`: gives the SA the necessary permissions to act as the default Cloud Run service account. This permission is needed for deploying from the Github Action.
We created a single SA for simplicity, since it is only used in one workflow. Ideally, there would be multiple SAs, each with more granular, reduced permissions; for example, a "Cloud Run SA" with control over the services and nothing else, and a separate "Artifact Registry SA" with access only to the repository. A sketch of this one-time setup (service account and environment configuration) appears after this list.
* A `dev` environment was created on the Github repository, containing various configuration variables (mostly names used throughout the GCP deployment) and secrets (the key to access the SA `cd-pipeline-sa`). The created configuration variables are:
- `GAR_IMAGE_NAME=delay-model-api`
- `GAR_REPOSITORY=us-west1-docker.pkg.dev/rodrigo-tryolabs-latam/delay-model-service`
- `GCP_PROJECT_ID=rodrigo-tryolabs-latam`
- `GCP_REGION=us-west1`
- `GCR_SERVICE_NAME=delay-model`
* After the service is deployed, the stress test runs against the deployed API. As mentioned, different APIs are deployed depending on the branch. To point the stress test at the correct API, a small modification was made to the `Makefile` so that the URL of the API is passed as an argument to the `make stress-test` command. The final command is `make stress-test API_URL=<api-url>`.
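For reference, the one-time setup of the service account and the `dev` environment described above could be scripted roughly as follows, assuming authenticated `gcloud` and Github (`gh`) CLIs; exact flags may vary between CLI versions:

# Create the deployment service account
gcloud iam service-accounts create cd-pipeline-sa \
    --project=rodrigo-tryolabs-latam

# Grant the three roles listed above
for role in roles/artifactregistry.writer roles/run.admin roles/iam.serviceAccountUser; do
    gcloud projects add-iam-policy-binding rodrigo-tryolabs-latam \
        --member="serviceAccount:cd-pipeline-sa@rodrigo-tryolabs-latam.iam.gserviceaccount.com" \
        --role="${role}"
done

# Export a JSON key and store it as the CD_SA_KEYS secret of the dev environment
gcloud iam service-accounts keys create key.json \
    --iam-account=cd-pipeline-sa@rodrigo-tryolabs-latam.iam.gserviceaccount.com
gh secret set CD_SA_KEYS --env dev < key.json

# Configuration variables of the dev environment
gh variable set GAR_IMAGE_NAME --env dev --body "delay-model-api"
gh variable set GAR_REPOSITORY --env dev --body "us-west1-docker.pkg.dev/rodrigo-tryolabs-latam/delay-model-service"
gh variable set GCP_PROJECT_ID --env dev --body "rodrigo-tryolabs-latam"
gh variable set GCP_REGION --env dev --body "us-west1"
gh variable set GCR_SERVICE_NAME --env dev --body "delay-model"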
27 changes: 27 additions & 0 deletions train.py
@@ -0,0 +1,27 @@
import pandas as pd

from challenge.model import DelayModel

print("Loading data...")
# Read the data
df = pd.read_csv("data/data.csv")
print("-> Data loaded")

# Create the model
model = DelayModel()

print("Preprocessing data...")
# Preprocess the data
X_train, y_train = model.preprocess(df, "delay")
print("-> Preprocessed data")


print("Training model...")
# Train the model
model.fit(X_train, y_train)
print("-> Model trained")

print("Saving model...")
# Store the model
model.save("challenge/tmp/model_checkpoint.pkl")
print("-> Model saved")
