Aspir8 from Scratch

Let's deploy Aspire-flavoured apps to a Kubernetes cluster, through Aspir8! Are you new to Kubernetes? Don't worry. Let's start from scratch.

Table of Contents

Prerequisites

Local Kubernetes Cluster Setup through Docker Desktop

  1. Install Docker Desktop on your local machine.
  2. Enable Kubernetes in Docker Desktop.
  3. Deploy sample app to a Kubernetes cluster.
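Once Kubernetes is enabled, you can sanity-check from the terminal that the local cluster is up before deploying anything. This is a quick verification sketch; `docker-desktop` is the context name Docker Desktop creates by default.

```shell
# Switch kubectl to the Docker Desktop cluster context
kubectl config use-context docker-desktop

# Confirm the single-node cluster is up and in Ready state
kubectl get nodes
```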

Kubernetes Dashboard Setup

Use Kubernetes Dashboard v2.x

Note: This is only applicable for Kubernetes Dashboard v2.x.

  1. Get dashboard version.

    # Bash
    dashboard_version=$(curl 'https://api.github.com/repos/kubernetes/dashboard/releases' | \
        jq -r '[.[] | select(.name | contains("-") | not)] | .[0].name')
    
    # PowerShell
    $dashboard_version = $($(Invoke-RestMethod https://api.github.com/repos/kubernetes/dashboard/releases) | `
        Where-Object { $_.name -notlike "*-*" } | Select-Object -First 1).name
  2. Install dashboard.

    # Bash
    kubectl apply -f \
      https://raw.githubusercontent.com/kubernetes/dashboard/$dashboard_version/aio/deploy/recommended.yaml
    
    # PowerShell
    kubectl apply -f `
      https://raw.githubusercontent.com/kubernetes/dashboard/$dashboard_version/aio/deploy/recommended.yaml
  3. Create an admin user.

    kubectl apply -f ./admin-user.yaml
  4. Get the access token. Take note of it; you will need it to sign in to the dashboard.

    # Bash
    kubectl get secret admin-user \
        -n kubernetes-dashboard \
        -o jsonpath='{.data.token}' | base64 -d
    
    # PowerShell
    kubectl get secret admin-user `
        -n kubernetes-dashboard `
        -o jsonpath='{.data.token}' | `
        % { [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($_)) }
  5. Run the proxy server.

    kubectl proxy
  6. Access the dashboard using the following URL:

    http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
    
  7. Enter the access token to sign in to the dashboard.
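The `admin-user.yaml` manifest referenced above is expected to create a service account bound to the `cluster-admin` role, plus the long-lived token Secret that the `kubectl get secret` step reads. A minimal sketch, assuming the `kubernetes-dashboard` namespace, applied inline instead of from a file:

```shell
# Create the dashboard admin user inline (equivalent in spirit to ./admin-user.yaml)
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: admin-user
type: kubernetes.io/service-account-token
EOF
```

Binding `cluster-admin` is convenient for a local demo but far too broad for anything shared; scope the role down for real clusters.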

Use Helm Charts

Note: From Kubernetes Dashboard v3.x onwards, use the Helm Charts approach.

  1. Install Helm.

  2. Run the following commands to install the Kubernetes Dashboard.

    # Add kubernetes-dashboard repository
    helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
    
    # Deploy a Helm Release named "kubernetes-dashboard" using the kubernetes-dashboard chart
    helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
  3. Create an admin user.

    kubectl apply -f ./admin-user.yaml
  4. Get the access token. Take note of it; you will need it to sign in to the dashboard.

    # Bash
    kubectl get secret admin-user \
        -n kubernetes-dashboard \
        -o jsonpath='{.data.token}' | base64 -d
    
    # PowerShell
    kubectl get secret admin-user `
        -n kubernetes-dashboard `
        -o jsonpath='{.data.token}' | `
        % { [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($_)) }
  5. Run the proxy server.

    kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
  6. Access the dashboard using the following URL:

    https://localhost:8443
    
  7. Enter the access token to sign in to the dashboard.
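If the dashboard does not come up, it is worth confirming the Helm release is actually healthy before debugging further. A quick check, using the release and namespace names from the commands above:

```shell
# Check the Helm release status
helm status kubernetes-dashboard -n kubernetes-dashboard

# Block until all dashboard pods are ready (or time out after two minutes)
kubectl wait pods --all \
    -n kubernetes-dashboard \
    --for=condition=Ready \
    --timeout=120s
```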

Aspire-flavoured App Build

  1. Install .NET Aspire workload.

    # Bash
    sudo dotnet workload update && sudo dotnet workload install aspire
    
    # PowerShell
    dotnet workload update && dotnet workload install aspire
  2. Create a new Aspire starter app.

    dotnet new aspire-starter -n Aspir8
  3. Build the app.

    dotnet restore && dotnet build
  4. Run the app locally.

    dotnet run --project Aspir8.AppHost
  5. Open the app in a browser, and go to the weather page to check that the API is working. The port number might be different from the example below.

    http://localhost:17008
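You can also hit the API service directly from the terminal. This is a sketch assuming the default `aspire-starter` template, whose API project exposes a `/weatherforecast` endpoint; the port below is only an example, so read the actual one off the Aspire dashboard for the `apiservice` resource.

```shell
# Query the weather API directly; replace the port with the one shown
# in the Aspire dashboard for the apiservice resource
curl http://localhost:5395/weatherforecast
```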
    

Aspire-flavoured App Deployment to Kubernetes Cluster through Aspir8

Use local container registry

  1. Install Distribution (formerly known as Docker Registry) as a local container registry.

    docker run -d -p 6000:5000 --name registry registry:latest

    Note: The port number 6000 is arbitrary; you can choose your own.

  2. Install Aspir8.

    dotnet tool install -g aspirate
  3. Initialise Aspir8.

    cd Aspir8.AppHost
    aspirate init -cr localhost:6000 -ct latest --disable-secrets true --non-interactive
  4. Build and publish the app to the local container registry.

    aspirate generate --image-pull-policy Always --include-dashboard true --disable-secrets true --non-interactive
  5. Deploy the app to the Kubernetes cluster.

    aspirate apply -k docker-desktop --non-interactive
  6. Check the services in the Kubernetes cluster.

    kubectl get services
  7. Install a load balancer for webfrontend to the local Kubernetes cluster.

    kubectl apply -f ../load-balancer.yaml
  8. Install a load balancer for aspire-dashboard to the local Kubernetes cluster.

    kubectl apply -f ../aspire-dashboard.yaml
  9. Open the app in a browser, and go to the dashboard page to see the logs.

    http://localhost:18888
    
  10. Open the app in a browser, and go to the weather page to see whether the API is working or not.

    http://localhost/weather
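Two optional checks for the steps above. First, the Distribution registry exposes a small HTTP API, so you can confirm the images were pushed (the port comes from step 1; the repository name assumes the defaults used elsewhere in this guide). Second, `load-balancer.yaml` is expected to be a Service of type LoadBalancer in front of the webfrontend; a minimal inline sketch follows, where the selector label and target port are assumptions about what aspirate generated, so check them against your manifests.

```shell
# List repositories pushed to the local registry (port 6000 from step 1)
curl http://localhost:6000/v2/_catalog

# List tags for the webfrontend image
curl http://localhost:6000/v2/webfrontend/tags/list

# A minimal load-balancer Service sketch (selector and targetPort are
# assumptions; compare with the repository's load-balancer.yaml)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: webfrontend-lb
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: webfrontend
EOF
```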
    

Use Azure Kubernetes Services (AKS)

Note: It uses Azure CLI, which is the imperative approach. The declarative approach using Bicep is TBD.

  1. Set environment variables. Make sure you use the closest or preferred location for provisioning resources (e.g. koreacentral).

    # Bash
    export AZURE_ENV_NAME="aspir8$RANDOM"
    export AZ_RESOURCE_GROUP=rg-$AZURE_ENV_NAME
    export AZ_NODE_RESOURCE_GROUP=rg-$AZURE_ENV_NAME-mc
    export AZ_LOCATION=koreacentral
    export ACR_NAME=acr$AZURE_ENV_NAME
    export AKS_CLUSTER_NAME=aks-$AZURE_ENV_NAME
    
    # PowerShell
    $AZURE_ENV_NAME = "aspir8$(Get-Random -Minimum 1000 -Maximum 9999)"
    $AZ_RESOURCE_GROUP = "rg-$AZURE_ENV_NAME"
    $AZ_NODE_RESOURCE_GROUP = "rg-$AZURE_ENV_NAME-mc"
    $AZ_LOCATION = "koreacentral"
    $ACR_NAME = "acr$AZURE_ENV_NAME"
    $AKS_CLUSTER_NAME = "aks-$AZURE_ENV_NAME"
  2. Create a resource group.

    az group create -n $AZ_RESOURCE_GROUP -l $AZ_LOCATION
  3. Create an Azure Container Registry (ACR).

    # Bash
    az acr create \
        -g $AZ_RESOURCE_GROUP \
        -n $ACR_NAME \
        -l $AZ_LOCATION \
        --sku Basic \
        --admin-enabled true
    
    # PowerShell
    az acr create `
        -g $AZ_RESOURCE_GROUP `
        -n $ACR_NAME `
        -l $AZ_LOCATION `
        --sku Basic `
        --admin-enabled true
  4. Get ACR credentials.

    # Bash
    export ACR_LOGIN_SERVER=$(az acr show \
        -g $AZ_RESOURCE_GROUP \
        -n $ACR_NAME \
        --query "loginServer" -o tsv)
    export ACR_USERNAME=$(az acr credential show \
        -g $AZ_RESOURCE_GROUP \
        -n $ACR_NAME \
        --query "username" -o tsv)
    export ACR_PASSWORD=$(az acr credential show \
        -g $AZ_RESOURCE_GROUP \
        -n $ACR_NAME \
        --query "passwords[0].value" -o tsv)
    
    # PowerShell
    $ACR_LOGIN_SERVER = $(az acr show `
        -g $AZ_RESOURCE_GROUP `
        -n $ACR_NAME `
        --query "loginServer" -o tsv)
    $ACR_USERNAME = $(az acr credential show `
        -g $AZ_RESOURCE_GROUP `
        -n $ACR_NAME `
        --query "username" -o tsv)
    $ACR_PASSWORD = $(az acr credential show `
        -g $AZ_RESOURCE_GROUP `
        -n $ACR_NAME `
        --query "passwords[0].value" -o tsv)
  5. Create an AKS cluster.

    Note: Depending on the location you create the cluster, the VM size might vary.

    # Bash
    az aks create \
        -g $AZ_RESOURCE_GROUP \
        -n $AKS_CLUSTER_NAME \
        -l $AZ_LOCATION \
        --node-resource-group $AZ_NODE_RESOURCE_GROUP \
        --node-vm-size Standard_B2s \
        --network-plugin azure \
        --generate-ssh-keys \
        --attach-acr $ACR_NAME
    
    # PowerShell
    az aks create `
        -g $AZ_RESOURCE_GROUP `
        -n $AKS_CLUSTER_NAME `
        -l $AZ_LOCATION `
        --node-resource-group $AZ_NODE_RESOURCE_GROUP `
        --node-vm-size Standard_B2s `
        --network-plugin azure `
        --generate-ssh-keys `
        --attach-acr $ACR_NAME
  6. Connect to the AKS cluster.

    # Bash
    az aks get-credentials \
        -g $AZ_RESOURCE_GROUP \
        -n $AKS_CLUSTER_NAME
    
    # PowerShell
    az aks get-credentials `
        -g $AZ_RESOURCE_GROUP `
        -n $AKS_CLUSTER_NAME
  7. Connect to ACR.

    Note: This is for demo purposes only. In practice, enter the username and password interactively instead of passing them on the command line.

    docker login $ACR_LOGIN_SERVER -u $ACR_USERNAME -p $ACR_PASSWORD
  8. Install Aspir8.

    dotnet tool install -g aspirate
  9. Initialise Aspir8.

    cd Aspir8.AppHost
    aspirate init -cr $ACR_LOGIN_SERVER -ct latest --non-interactive

    Note: If you are asked to enter or skip the repository prefix, enter n to skip it.

  10. Build and publish the app to ACR.

    aspirate generate --image-pull-policy IfNotPresent --non-interactive
  11. Deploy the app to the AKS cluster.

    aspirate apply -k $AKS_CLUSTER_NAME --non-interactive
  12. Install a load balancer to the AKS cluster.

    kubectl apply -f ../load-balancer.yaml
  13. Confirm the webfrontend-lb service type is LoadBalancer, and note the external IP address of the webfrontend-lb service.

    kubectl get services
  14. Open the app in a browser, and go to the weather page to see whether the API is working or not.

    http://<EXTERNAL_IP_ADDRESS>
    
  15. Once you are done, delete all the resources from Azure.

    az group delete -n $AZ_RESOURCE_GROUP -f Microsoft.Compute/virtualMachineScaleSets -y --no-wait
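Because of `--no-wait`, the command returns immediately and deletion continues in the background. You can check on its progress with:

```shell
# Returns "true" while the resource group still exists, "false" once the
# background deletion has finished
az group exists -n $AZ_RESOURCE_GROUP
```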

Use Amazon Elastic Kubernetes Service (EKS)

Note:

  • It uses both the AWS Console and the AWS CLI to provision resources to AWS.
  • It uses the root account for demo purposes only. You should use an IAM user with the least privilege.
  1. Set environment variables. Make sure you use the closest or preferred location for provisioning resources (e.g. ap-northeast-2).

    # Bash
    export AWS_ENV_NAME="aspir8$RANDOM"
    export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
    export AWS_LOCATION=ap-northeast-2 # Seoul
    export ECR_LOGIN_SERVER=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_LOCATION.amazonaws.com
    export EKS_STACK_NAME=aspir8-stack
    export EKS_CLUSTER_NAME=eks-$AWS_ENV_NAME
    export EKS_NODE_GROUP_NAME=aspir8-nodegroup
    
    # PowerShell
    $AWS_ENV_NAME = "aspir8$(Get-Random -Minimum 1000 -Maximum 9999)"
    $AWS_ACCOUNT_ID = $(aws sts get-caller-identity --query "Account" --output text)
    $AWS_LOCATION = "ap-northeast-2" # Seoul
    $ECR_LOGIN_SERVER = "$($AWS_ACCOUNT_ID).dkr.ecr.$($AWS_LOCATION).amazonaws.com"
    $EKS_STACK_NAME = "aspir8-stack"
    $EKS_CLUSTER_NAME = "eks-$AWS_ENV_NAME"
    $EKS_NODE_GROUP_NAME = "aspir8-nodegroup"
  2. Create a VPC stack for EKS.

    # Bash
    aws cloudformation create-stack \
        --region $AWS_LOCATION \
        --stack-name $EKS_STACK_NAME \
        --template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml
    
    # PowerShell
    aws cloudformation create-stack `
        --region $AWS_LOCATION `
        --stack-name $EKS_STACK_NAME `
        --template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml
  3. Create an EKS cluster role and attach it to the policy.

    # Bash
    aws iam create-role \
        --role-name Aspir8AmazonEKSClusterRole \
        --assume-role-policy-document file://"eks-cluster-role-trust-policy.json"
    aws iam attach-role-policy \
        --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy \
        --role-name Aspir8AmazonEKSClusterRole
    
    # PowerShell
    aws iam create-role `
        --role-name Aspir8AmazonEKSClusterRole `
        --assume-role-policy-document file://"eks-cluster-role-trust-policy.json"
    aws iam attach-role-policy `
        --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy `
        --role-name Aspir8AmazonEKSClusterRole
  4. Create an EKS cluster node role and attach it to the policies.

    # Bash
    aws iam create-role \
        --role-name Aspir8AmazonEKSNodeRole \
        --assume-role-policy-document file://"eks-node-role-trust-policy.json"
    aws iam attach-role-policy \
        --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy \
        --role-name Aspir8AmazonEKSNodeRole
    aws iam attach-role-policy \
        --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly \
        --role-name Aspir8AmazonEKSNodeRole
    aws iam attach-role-policy \
        --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
        --role-name Aspir8AmazonEKSNodeRole
    
    # PowerShell
    aws iam create-role `
        --role-name Aspir8AmazonEKSNodeRole `
        --assume-role-policy-document file://"eks-node-role-trust-policy.json"
    aws iam attach-role-policy `
        --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy `
        --role-name Aspir8AmazonEKSNodeRole
    aws iam attach-role-policy `
        --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly `
        --role-name Aspir8AmazonEKSNodeRole
    aws iam attach-role-policy `
        --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy `
        --role-name Aspir8AmazonEKSNodeRole
  5. Create an EKS cluster by following this document.

  6. Create the EKS cluster nodes by following this document.

  7. Connect to the EKS cluster.

    aws eks update-kubeconfig --name $EKS_CLUSTER_NAME --region $AWS_LOCATION
  8. Connect to ECR.

    aws ecr get-login-password --region $AWS_LOCATION | docker login --username AWS --password-stdin $ECR_LOGIN_SERVER
  9. Create repositories in ECR.

    aws ecr create-repository --repository-name apiservice --region $AWS_LOCATION
    aws ecr create-repository --repository-name webfrontend --region $AWS_LOCATION
  10. Install Aspir8.

    dotnet tool install -g aspirate
  11. Initialise Aspir8.

    cd Aspir8.AppHost
    aspirate init -cr $ECR_LOGIN_SERVER -ct latest --non-interactive

    Note: If you are asked to enter or skip the repository prefix, enter n to skip it.

  12. Build and publish the app to ECR.

    aspirate generate --image-pull-policy IfNotPresent --non-interactive
  13. Deploy the app to the EKS cluster.

    aspirate apply -k $EKS_CLUSTER_NAME --non-interactive
  14. Install a load balancer to the EKS cluster.

    kubectl apply -f ../load-balancer.yaml
  15. Confirm the webfrontend-lb service type is LoadBalancer, and note the URL under the external IP address column of the webfrontend-lb service.

    kubectl get services
  16. Open the app in a browser, and go to the weather page to see whether the API is working or not.

    http://<xxxx.ap-northeast-2.elb.amazonaws.com>
    
  17. Once you are done, delete all the resources from AWS.

    # Delete EKS node group
    aws eks delete-nodegroup --nodegroup-name $EKS_NODE_GROUP_NAME --cluster-name $EKS_CLUSTER_NAME --region $AWS_LOCATION
    
    # Delete EKS cluster
    aws eks delete-cluster --name $EKS_CLUSTER_NAME --region $AWS_LOCATION
    
    # Delete ECR repositories
    aws ecr delete-repository --repository-name apiservice --force --region $AWS_LOCATION
    aws ecr delete-repository --repository-name webfrontend --force --region $AWS_LOCATION
    
    # Delete CloudFormation stack
    aws cloudformation delete-stack --stack-name $EKS_STACK_NAME --region $AWS_LOCATION

    Note:

    • Deleting the EKS node group takes 5-10 minutes.
    • The EKS cluster can only be deleted after the node group has been deleted.
    • Deleting the CloudFormation stack might fail; this is most likely because of the Elastic Load Balancer. If so, go to the EC2 Dashboard and delete the existing load balancer instance first.
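The ordering constraints in the note can be scripted with the AWS CLI's built-in waiters, so each delete only starts once the previous one has actually finished:

```shell
# Block until the node group is fully deleted (takes 5-10 minutes)
aws eks wait nodegroup-deleted \
    --cluster-name $EKS_CLUSTER_NAME \
    --nodegroup-name $EKS_NODE_GROUP_NAME \
    --region $AWS_LOCATION

# Only now can the cluster be deleted; wait on it in turn before
# tearing down the CloudFormation stack
aws eks delete-cluster --name $EKS_CLUSTER_NAME --region $AWS_LOCATION
aws eks wait cluster-deleted --name $EKS_CLUSTER_NAME --region $AWS_LOCATION
```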

Use Google Kubernetes Engine (GKE)

TBD

Use NHN Kubernetes Services (NKS)

  1. Add the following Docker Hub repository details to Aspir8.ApiService/Aspir8.ApiService.csproj.

    <PropertyGroup>
      <ContainerRepository>{{DOCKER_USERNAME}}/apiservice</ContainerRepository>
    </PropertyGroup>
  2. Add the following Docker Hub repository details to Aspir8.Web/Aspir8.Web.csproj.

    <PropertyGroup>
      <ContainerRepository>{{DOCKER_USERNAME}}/webfrontend</ContainerRepository>
    </PropertyGroup>
  3. Set environment variables.

    export NHN_ENV_NAME="aspir8$RANDOM"
    export NKS_CLUSTER_NAME=nks-$NHN_ENV_NAME
  4. Create an NKS cluster from the console.

  5. Get the kubeconfig of the NKS cluster from the console.

  6. Connect to the NKS cluster using the kubeconfig.

    export KUBECONFIG=~/.kube/config:~/path/to/downloaded/kubeconfig
    kubectl config view --merge --flatten > ~/.kube/merged_kubeconfig
    mv ~/.kube/config ~/.kube/config.bak
    mv ~/.kube/merged_kubeconfig ~/.kube/config
  7. Change the context to the NKS cluster.

    kubectl config use-context default
  8. Connect to Docker Hub.

    Note: This is for demo purposes only. In practice, enter the username and password interactively instead of passing them on the command line.

    docker login registry.hub.docker.com -u <DOCKER_USERNAME> -p <DOCKER_PASSWORD>
  9. Initialise Aspir8.

    cd Aspir8.AppHost
    aspirate init -cr registry.hub.docker.com -ct latest --non-interactive
  10. Build and publish the app to Docker Hub.

    aspirate generate --image-pull-policy IfNotPresent --non-interactive
  11. Deploy the app to the NKS cluster.

    aspirate apply -k toast-$NKS_CLUSTER_NAME --non-interactive
  12. Install a load balancer to the NKS cluster.

    kubectl apply -f ./load-balancer.yaml
  13. Confirm the webfrontend service type is LoadBalancer, and note the external IP address of the webfrontend service.

    kubectl get services
  14. Open the app in a browser, and go to the weather page to see whether the API is working or not.

    http://<EXTERNAL_IP_ADDRESS>
    
  15. Once you are done, delete all the resources from the console and the container images from Docker Hub.
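Step 6 backed up the original kubeconfig before merging; once the NKS cluster is gone, you can restore it and switch back to the local context (assuming the Docker Desktop cluster from earlier in this guide):

```shell
# Restore the kubeconfig backed up in step 6
mv ~/.kube/config.bak ~/.kube/config

# Switch back to the local Docker Desktop cluster
kubectl config use-context docker-desktop
```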
