nnbaocuong99/k8s
[FULL GUIDES] How to install k8s + CI/CD for beginners

© Spagbo 8 Aug, 2022 // 2022-2024

❗️ Introducing

1. Credit & Usage // Summary:

Credit & Usage

  • This project was written by me and wouldn't be possible without the hard work and contributions of the following individuals:
    • @QuocNVC - Bug fixes and enhancements.
    • @TruongLM - VM script writer.
    • Special thanks to the open-source community for their support and inspiration!
  • This project is for learning purposes only and is meant for educational, non-commercial use. Feel free to study it and learn from it.
  • No unauthorized copying, please: refrain from copying it directly or using it for commercial or production purposes without proper authorization.
  • If you find this project helpful, consider giving credit by linking back to this repository. Mentioning it in your own project's documentation or README is appreciated.

Summary

  • This is my research report and project template on how to install a Kubernetes (k8s) cluster and set up a CI/CD pipeline for a Java project. It's designed with beginners in mind, especially those who are new to Docker and want to learn about backend development and CI/CD pipelines.
  • In this project I use both Windows and macOS (similar to Linux). Keep in mind that the images in your project might look slightly different from mine, but don't worry about it. Take your time to research; they serve the same function.
  • I'd also love to hear others' opinions on what might be missing or not listed in this project. Remember, though, that this is just a template. Feel free to create your own unique content, but please don't stalk, copy, or claim someone else's work as your own. Let's avoid that kind of behavior! 😊
  • I'd like to express my sincere thanks to @QuocNVC and @TruongLM for their invaluable assistance. It's a pleasure to collaborate with such talented individuals on this project.

2. Tools and software I'm using for this project:

Name           Official Website                            Note
Kubernetes     https://kubernetes.io                       also needs K3s and RKE (log stack checked with ELK)
Rancher        https://rancher.com/docs/
Apache Maven   https://maven.apache.org
Docker         https://www.docker.com
Helm           https://helm.sh
ArgoCD         https://argo-cd.readthedocs.io/en/stable/
VirtualBox     https://www.virtualbox.org

OS               Package Manager      Official Website
Windows          Winget, Chocolatey   https://github.com/microsoft/winget-cli , https://chocolatey.org/
macOS            Homebrew             https://brew.sh/
Ubuntu (v16.04)  APT

3. Updating features (this project has been discontinued):

  • Mindmap and more example images for this project (WIP).
  • Known errors: errors encountered while doing the project will be listed here. Feel free to submit your problem on the Issues tab.

4. Table of contents:

  • ❗️Introducing
    • Summary & Credit
    • Tools in project
    • Updating features



❗️ Guides step by step

Setup VirtualBox

1. Install:

2. Create virtual machines:

  • In the document folder, download the 2 script files named Vagrantfile-masternode and Vagrantfile-workernode.
  • You can also create your own; copy my scripts into it if you want more features for your VM.

Caution

  • Modify my script at your own risk. Things in the script I suggest you modify: OS version, vm.network, hostname, password.
  • Remember to put your files in 2 separate folders.
  • Remember to rename them and change the file type to Vagrantfile, not .txt or anything else.
Vagrantfile on Windows will look like this

  • Open 2 terminals, one in each folder, then run the command below. While your VMs finish these setup steps, you'll see that they're running like in the image below.
    $ vagrant up
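  • If you want to double-check a machine from the terminal, the standard Vagrant commands below should work (a quick sketch; run them from the folder containing that machine's Vagrantfile):
    $ vagrant status   # show the state of the VM defined in this folder
    $ vagrant ssh      # log into the running VM
    $ vagrant halt     # stop it again when you're done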


Setup Rancher, Cluster

Warning

  • Please, always use the root user at first.
    $ sudo su
  • Your username and ip_address are based on how you modified your Vagrantfile (check this for more).
  • Remember to double-check whether Docker has been installed.
    $ docker version

1:

  • SSH into the master-node
    $ ssh username@your_ip_address

2:

  • Based on your OS, choose your Rancher version from rancher/rancher Tags (take a look at the version guides here if you don't know how).

  • Copy the command below and replace tag with the version you chose:

    $ docker run -d --name=rancher-server --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:tag
  • Wait until the image is successfully pulled, then run these commands to get your container id.

    $ docker ps
    $ sudo docker ps -aqf "name=containername"
  • Replace the container id you just got into the command below, then save the code on THE RED LINE: that's your password!

    $ docker logs  container-id  2>&1 | grep "Bootstrap Password:"
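  • If you don't want to copy the id by hand, the two commands can be combined; a small sketch, assuming the container name rancher-server from the docker run command above:

    $ docker logs $(sudo docker ps -aqf "name=rancher-server") 2>&1 | grep "Bootstrap Password:"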

3:

  • Navigate to the IP address (based on your config) of your masternode:
    https://192.168.56.200
    https://192.168.56.200/g (recommended because it's friendlier for beginners)
  • Log in to Rancher with username admin and the password you just got earlier.
  • Choose Custom mode
Setting 1

  • Set a name for it, configure the settings like in the image below, and copy the scripts.
Setting 2

4:

  • Add --address worker_IP (replace worker_IP with your real workernode IP) before --etcd, and you'll get a final script like this:
    $ sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run  rancher/rancher-agent:v2.7-091ed163cc5c53efc50bd1a580cb4e54fa097e82-head --server https://192.168.56.200/ --token p5zcnnpcb5cx8pg89vkk5nkx8gbzltk9wbkmfjp6rsn9n6kf729vjp --ca-checksum 37bde28c0dc9fbd360146f727ff4b1cd254d9f17490789f93775fb2ce15b58da --address your_worker_IP --etcd --controlplane --worker
  • Follow the same steps to SSH into the worker-node.
  • Run the script you copied above.

5:

  • Get back to your masternode and click the Kubeconfig File button in the top right corner to open your cluster config. It will look like this:

  • Always save your configuration files and name them, as they are important. If you have more than one cluster, keep in mind that kubectl can only talk to one cluster (context) at a time.
  • Find the default kubeconfig file on your device and paste the cluster config into it to connect to and work with your cluster.
  • You can also set the KUBECONFIG environment variable or use the --kubeconfig flag with kubectl to specify a custom location. Check the docs for more.
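  • For example (a sketch; the file name rancher-cluster.yaml is just a placeholder for wherever you saved the downloaded config):
    $ export KUBECONFIG=~/rancher-cluster.yaml   # point kubectl at this cluster for the current shell
    $ kubectl config get-contexts                # list the contexts kubectl knows about
    $ kubectl get nodes                          # quick check that the cluster answers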

Tip

How to find your Kubeconfig file?

1. On Linux, Ubuntu, and macOS
You can check whether it exists by running: ls ~/.kube/config
The default config file is stored at ~/.kube/config


2. On Windows
Manual method: open C:\Users\%USERNAME% and create a folder named .kube with a file named config inside it.
Alternatively, you can verify its existence by running: dir %USERPROFILE%\.kube\config
The default config file is stored at %USERPROFILE%\.kube\config

6:

Prepare for the next steps: make sure you've installed everything below, following the installation guides for your OS.



ArgoCD // Setup Pipelines

1:

Install ArgoCD on your OS first:
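For example, on macOS the ArgoCD CLI can be installed with Homebrew (a quick sketch; on other systems, grab the binary from the ArgoCD releases page instead):

    $ brew install argocd        # install the argocd CLI
    $ argocd version --client    # confirm the client is on your PATH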


2:

Setup steps

  • Once that's installed, create a namespace for ArgoCD and install it by running:
    $ kubectl create namespace argocd
    $ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
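  • Before moving on, it's worth confirming the ArgoCD pods are up (standard kubectl; the timeout value is just an example):
    $ kubectl get pods -n argocd
    $ kubectl wait --for=condition=Ready pod --all -n argocd --timeout=300s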

  • Only if you need it (if everything went well, skip this step), change the argocd-server service type to LoadBalancer:

    $ kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'


  • Make sure you expose the service before using it by running:

    $ kubectl port-forward svc/argocd-server -n argocd 8080:443


Note

🌟 BONUS // Let me explain real quick

Depending on the project you're working on and the context, you'll choose one of the two options below to ensure your services run smoothly and avoid errors.

Simply put, in this project I'm going to access it directly via the node IP.


Port-forwarding

  • Allows you to access services running inside a Kubernetes cluster from your local machine.
  • When you run kubectl port-forward svc/argocd-server -n argocd 8080:443, it sets up a proxy so that you can communicate with the ArgoCD server through port 8080 on your local machine.
  • Use Case: Useful for debugging, testing, or accessing the ArgoCD API server without exposing it externally.
  • Access: You can then access the ArgoCD API server using localhost:8080.

Node-port

  • NodePort exposes a service on a specific port on each node in the cluster.
  • When you create a NodePort service, Kubernetes allocates a port (usually in the range 30000-32767) on each node. Requests to that port are forwarded to the service.
  • Typically used for exposing services externally, especially when you need to access them from outside the cluster.
  • You can access the ArgoCD service using the node’s IP address and the assigned NodePort.

In short

  • In the case of NodePort, once your cluster starts, the specified port is automatically exposed, ensuring seamless external access to your services.
  • Port forwarding is more suitable for local development and debugging, while NodePort is better for exposing services externally. Choose the approach that aligns with your use case!
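  • Since this project goes the node-IP route, note that if your argocd-server service is still ClusterIP you can switch it to NodePort with the same kind of kubectl patch shown earlier (Kubernetes then assigns a port in the 30000-32767 range):
    $ kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
    $ kubectl get svc argocd-server -n argocd   # note the port mapped to 443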

3:

Work steps

  • Use kubectl to get your port:

    $ kubectl get service -n argocd

  • Once you've got your port (in this case, 32294 was mine), combine it with your worker-node IP and paste it into your browser like this:

    # the URL
    https://192.168.56.201:32294/login?return_url=https%3A%2F%2F192.168.56.201%3A32294%2Fapplications

  • Retrieve your ArgoCD password for the next steps // (install base64 if you haven't)

    $ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

  • Once the service is exposed, navigate to https://localhost:8080 and log in.

    admin
    6vH7QkjCQFiPPHPZ

  • Once logged in successfully, this is what your result looks like:



CI/CD

1. Before you start // Get to know more

  • Please make sure you understand and have successfully followed all the steps above before continuing.
  • Beginner? Read CI/CD Explained on the official GitLab website.
  • Or read my short Workflows Explanation if those websites feel too complicated.

2. CI

  • Create an account on GitLab and make a repository.

  • Use the content from my repository if you've already cloned it, or push your own content instead.

  • (Optional) If you're using your own content, all you need to do is create a file named .gitlab-ci.yml in the root of your repository. Copy the script below into it and commit to trigger the pipeline, then let it start automatically.

    stages:
      - build
    
    build-image:
      stage: build
      image: docker:latest
      services:
        - docker:dind
      before_script:
        - docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWD
      script:
        - >
          docker build
          --build-arg DOCKER_USERNAME=$DOCKER_USERNAME
          --build-arg DOCKER_PASSWD=$DOCKER_PASSWD
          -t $DOCKER_USERNAME/demo-gitlabci:1.0 .
        - docker push $DOCKER_USERNAME/demo-gitlabci:1.0

Note

  • You can absolutely use other platforms like GitHub or whatever, but in this case I highly recommend GitLab because its CI/CD tools are extremely easy to use.
  • The .yml file is basically a recipe that specifies how GitLab should execute pipelines.

Add Variables

  • In your repository: Settings/CI/CD/Variables/Add Variables

  • Edit and fill them in as:

    $DOCKER_USERNAME = your Docker username
    $DOCKER_PASSWD   = your Docker password

  • Result



Register a Runner

  • To make sure your pipelines run correctly, you must use runners to run the jobs. Read this for more if this is your first time hearing about them.

  • Head to your repository's Settings/CI/CD/Runners, where you can choose between Validate account and use shared runners or Register an individual runner.

  • It's up to you; you can skip this or follow the guide How to Install and Register Gitlab runners (a rough registration sketch also follows below).

  • This is how runners look when they have successfully registered:

    (screenshots of successfully registered runners)
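  • For reference, a minimal non-interactive registration might look like this (a sketch, assuming the Docker executor and gitlab-runner already installed on the host; the URL and token come from your project's Runners settings page, and newer GitLab versions use --token with an authentication token instead of --registration-token):

    $ sudo gitlab-runner register \
        --non-interactive \
        --url https://gitlab.com/ \
        --registration-token <your-token> \
        --executor docker \
        --docker-image docker:latest \
        --docker-privileged   # needed for the docker:dind service used in the .gitlab-ci.yml above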

3. Some photos taken during the pipeline process.


Pic 1. Commit anything to trigger the pipelines

Pic 2. Jobs running

Pic 3. Logs

Pic 4. Pipeline finished

Pic 5. Check your Docker Hub to make sure that the job ran correctly



4. CD

Connect Repository

  • Now, get back to the tab where you left off with ArgoCD logged in.

  • Click Settings in the left menu, then Connect Repo, fill in your information, and connect.
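  • The same connection can also be made from the ArgoCD CLI if you prefer (a sketch; the GitLab URL, user, and token are placeholders for your own repository and credentials, and the server address is the node IP and port from earlier):

    $ argocd login 192.168.56.201:32294 --username admin --password <your-argocd-password> --insecure
    $ argocd repo add https://gitlab.com/<your-namespace>/<your-repo>.git --username <gitlab-user> --password <gitlab-token>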

Create Chart and values files

  • Get back to your repo and create 2 files named Chart.yaml and values.yaml (a quick local sanity check with Helm follows after the file contents).
    • Chart.yaml might look like:

      apiVersion: v2
      name: demo-app
      description: A Helm chart for Kubernetes
      
      # A chart can be either an 'application' or a 'library' chart.
      #
      # Application charts are a collection of templates that can be packaged into versioned archives
      # to be deployed.
      #
      # Library charts provide useful utilities or functions for the chart developer. They're included as
      # a dependency of application charts to inject those utilities and functions into the rendering
      # pipeline. Library charts do not define any templates and therefore cannot be deployed.
      type: application
      
      # This is the chart version. This version number should be incremented each time you make changes
      # to the chart and its templates, including the app version.
      # Versions are expected to follow Semantic Versioning (https://semver.org/)
      version: 0.1.0
      
      # This is the version number of the application being deployed. This version number should be
      # incremented each time you make changes to the application. Versions are not expected to
      # follow Semantic Versioning. They should reflect the version the application is using.
      # It is recommended to use it with quotes.
      appVersion: "1.16.0"

    • values.yaml might look like this (you can change the repository and tag to whatever you want):

      # Default values for demo-app.
      # This is a YAML-formatted file.
      # Declare variables to be passed into your templates.
      
      replicaCount: 1
      
      image:
        repository: nnbaocuong99/details-k8s-project
        pullPolicy: Always
        # Overrides the image tag whose default is the chart appVersion.
        tag: "1.0"
      
      imagePullSecrets: []
      nameOverride: ""
      fullnameOverride: ""
      
      serviceAccount:
        # Specifies whether a service account should be created
        create: true
        # Annotations to add to the service account
        annotations: {}
        # The name of the service account to use.
        # If not set and create is true, a name is generated using the fullname template
        name: ""
      
      podAnnotations: {}
      
      podSecurityContext: {}
        # fsGroup: 2000
      
      securityContext: {}
        # capabilities:
        #   drop:
        #   - ALL
        # readOnlyRootFilesystem: true
        # runAsNonRoot: true
        # runAsUser: 1000
      
      service:
        type: NodePort
        port: 80
      
      ingress:
        enabled: false
        className: ""
        annotations: {}
          # kubernetes.io/ingress.class: nginx
          # kubernetes.io/tls-acme: "true"
        hosts:
          - host: chart-example.local
            paths:
              - path: /
                pathType: ImplementationSpecific
        tls: []
        #  - secretName: chart-example-tls
        #    hosts:
        #      - chart-example.local
      
      resources:
        # We usually recommend not to specify default resources and to leave this as a conscious
        # choice for the user. This also increases chances charts run on environments with little
        # resources, such as Minikube. If you do want to specify resources, uncomment the following
        # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
        limits:
          cpu: 100m
          memory: 128Mi
        requests:
          cpu: 100m
          memory: 128Mi
      
      autoscaling:
        enabled: false
        minReplicas: 1
        maxReplicas: 100
        targetCPUUtilizationPercentage: 80
        # targetMemoryUtilizationPercentage: 80
      
      nodeSelector: {}
      
      tolerations: []
      
      affinity: {}
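    • Before wiring the chart into ArgoCD, you can sanity-check it locally with Helm (a sketch, assuming Helm is installed and that you also have a templates/ directory, e.g. generated by helm create; run it from the chart's folder):

      $ helm lint .                 # static checks on Chart.yaml and values.yaml
      $ helm template demo-app .    # render the manifests locally without installing anything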

Create an application

Caution

  • Check the docs for more if you want advanced settings or the CLI deploy method (a rough CLI sketch follows below).
  • The application name should be lowercase, with - between words (Kubernetes resource names don't allow _ or spaces).
  • https://gitlab.com/nnbaocuong99/k8s is MY REPOSITORY link. Replace it with your own repository link.
  • Destination should be the default https://kubernetes.default.svc, with namespace argocd.
  • If everything is done correctly, the values.yaml file will be automatically detected.
  • Get back to the ArgoCD main screen, click + NEW APP or CREATE APPLICATION, and fill in the information and configurations.
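  • If you'd rather use the CLI deploy method mentioned above, a rough sketch looks like this (the app name demo-app, --path ., and the repo URL placeholder are assumptions; adjust them to your repository layout, and log in with the CLI first as shown earlier):

    $ argocd app create demo-app \
        --repo https://gitlab.com/<your-namespace>/<your-repo>.git \
        --path . \
        --revision HEAD \
        --dest-server https://kubernetes.default.svc \
        --dest-namespace argocd
    $ argocd app sync demo-app   # same effect as pressing the Sync button in the UI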





  • If your result looks like this, congrats! You've just created your app.
If you have more than 1 app, they will appear here.

When you open your app, it's going to look like this.


  • Now, simply click the Sync button and your app is ready to work.



If you've made it this far, congratulations! You've completed a comprehensive exercise on CI/CD using the GitOps + ArgoCD + Helm approach. I hope this exercise has been helpful to you. Thank you for taking time out of your day to read these guides and my project. And most importantly, good luck with your CI/CD pipelines!

Best wishes,
𝓃𝓃𝒷𝒸,