My Kubernetes Resume Challenge

Part 1: Initial focus

Given the impact of the Cloud Resume Challenge (CRC) and the demand for a Kubernetes edition, Forrest Brazeal, in collaboration with KodeKloud, launched a spin-off of the CRC focused on Kubernetes. The Kubernetes Resume Challenge aims to highlight proficiency in Kubernetes and containerization by demonstrating the ability to deploy, scale, and manage web applications efficiently in a Kubernetes environment, emphasizing cloud-native deployment skills. As is customary, a CRC is not complete without publishing an article detailing the process and how the challenge unfolded; this article serves that purpose. If this is your first encounter with the CRC, I invite you to click here to learn more. For insights into my previous CRC experiences, particularly with GCP and AWS, I encourage you to read those articles. Without further ado, let's delve into how I approached this version of the CRC: the Kubernetes Resume Challenge.

The first phase is the certification phase, or as I interpret it, the phase of acquiring knowledge and skills, since the goal is to ensure you have a solid understanding of Kubernetes concepts and some practical experience. For this purpose, the challenge recommends completing the Certified Kubernetes Application Developer (CKAD) course by KodeKloud. I had already been playing with Kubernetes for a while, so I jumped straight into the challenge. Beyond what the challenge recommends, I personally suggest Techworld with Nana's Kubernetes crash course. I really appreciate Nana's clarity and her ability to break down complex topics and present them in an engaging way. Take a look at her crash course.

Now the hands-on part begins with containerization: containerizing both the application and the database. Containerizing the application is fairly straightforward if you follow the instructions provided on the CRC website. Where attention is needed, however, is in interfacing with the database. For the database itself, there's no need to build a custom image, since the official MariaDB Docker image is already available and ready to use. The minor adjustment required here is to load the data into the database using the initialization script, delivered via a Kubernetes ConfigMap. This can be done imperatively with a command resembling kubectl create configmap db-init-script --from-file=db-load-script.sql. However, I've developed a habit of configuring almost all Kubernetes resources declaratively, so I created a manifest for the ConfigMap, and did the same for all the resources I needed to deploy right from this early stage.
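
For reference, here's a minimal sketch of what that ConfigMap manifest could look like, assuming the SQL script is embedded under a db-load-script.sql key (the actual SQL statements are omitted here):

apiVersion: v1
kind: ConfigMap
metadata:
  name: db-init-script
data:
  db-load-script.sql: |
    -- contents of the initialization script go here
    -- (e.g. the CREATE TABLE and INSERT statements for the products table)

The official MariaDB image executes any .sql files it finds under /docker-entrypoint-initdb.d on first startup, so mounting this ConfigMap as a volume at that path is enough to load the data.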

Before exposing the application, it's essential to make the underlying adjustments: setting the password used to connect to the database, providing the database connection details to the application, creating the Service that allows the connection, and so on. To set the root password of the database, I created a Kubernetes Secret containing the base64-encoded value of the password, then referenced it in the environment variables of the MariaDB deployment manifest. It should look something like this:

env:
  - name: MYSQL_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mariadb-root-password
        key: password
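
The Secret referenced above is a plain Opaque Secret; a minimal sketch, with a placeholder value rather than a real password:

apiVersion: v1
kind: Secret
metadata:
  name: mariadb-root-password
type: Opaque
data:
  password: c3VwZXItc2VjcmV0   # base64 of a placeholder value, not a real password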

This initializes the root password when the database starts. Then, to establish communication between the application and the database, it's necessary to create the MariaDB Service and to expose its name, along with the credentials from the Secret, as environment variables of the application, so that this information is used to initiate the connection to the database when the application starts. It will look something like this:

env:
  - name: DB_HOST
    value: mariadb-service
  - name: DB_USER
    value: root
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mariadb-root-password
        key: password
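
For completeness, the mariadb-service referenced by DB_HOST can be a simple ClusterIP Service sitting in front of the MariaDB deployment. A minimal sketch, assuming the MariaDB pods carry the label app: mariadb:

apiVersion: v1
kind: Service
metadata:
  name: mariadb-service
spec:
  selector:
    app: mariadb       # assumed label on the MariaDB pods
  ports:
    - port: 3306       # default MariaDB port
      targetPort: 3306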

Next, to expose the application on the internet, I used a Service of type LoadBalancer associated with the deployment. In my case, the Kubernetes environment is GKE on Google Cloud, so I reserved a static public IP address and used it for the load balancer. I then attached the Service to the application pods using the app selector. Once all of this was applied, the application was ready to serve traffic on the internet.
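
A minimal sketch of that Service, assuming the application pods carry the label app: website and that the reserved static IP is attached via the loadBalancerIP field (the name, label, and IP below are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: website-service          # assumed name
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # placeholder for the reserved static IP
  selector:
    app: website                 # assumed label on the application pods
  ports:
    - port: 80
      targetPort: 80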

Next comes the second chunk of the project, which starts with setting up a feature toggle using a ConfigMap. Here's how I proceeded. The goal of the feature toggle is to activate dark mode on the site, so it was necessary to create another CSS file for the dark mode. Then, in the PHP code, I added a condition: if the environment variable FEATURE_DARK_MODE is set to true, the dark-mode CSS file is used; otherwise, the default file is used (a sketch of this condition is shown a bit further down). To make this environment variable known to the application, I created the ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-toggle-config
data:
  FEATURE_DARK_MODE: "true"

I then referenced it in the application deployment manifest, in the environment section:

env:
  - name: FEATURE_DARK_MODE
    valueFrom:
      configMapKeyRef:
        name: feature-toggle-config
        key: FEATURE_DARK_MODE
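
On the application side, the stylesheet selection boils down to a simple condition on that environment variable. A minimal sketch in PHP, assuming a default style.css and a dark-style.css for dark mode (the file names are assumptions):

<?php
// Pick the stylesheet based on the FEATURE_DARK_MODE environment variable
$darkMode = getenv('FEATURE_DARK_MODE') === 'true';
$stylesheet = $darkMode ? 'dark-style.css' : 'style.css'; // assumed file names
?>
<link rel="stylesheet" href="<?php echo $stylesheet; ?>">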

With this, once the ConfigMap was applied, dark mode was activated on the site. The next steps were to manually scale the application by increasing the number of replicas, then perform a rolling update to a new version of the application that includes a promotional banner on the site, and finally roll back to the initial state, all as indicated in the instructions on the CRC website.
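
For reference, these steps map to standard kubectl operations. A sketch, assuming the application Deployment is called website-deployment and its container website (the names, replica count, and image tag are placeholders):

# Scale the deployment manually
kubectl scale deployment website-deployment --replicas=6

# Roll out the new image containing the promotional banner
kubectl set image deployment/website-deployment website=myrepo/website:v2

# Roll back to the previous revision
kubectl rollout undo deployment/website-deployment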

The next task was to implement autoscaling so that the pods scale up to a maximum of 10 replicas if their CPU usage exceeds 50%. I created a HorizontalPodAutoscaler manifest containing these requirements. Of course, this can be done imperatively as indicated in the CRC guide, but as I mentioned earlier, I prefer to set up my configs declaratively whenever possible to benefit from reusability. To simulate the load, I used Siege, a lightweight and easy-to-use load testing and benchmarking tool.
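
A minimal sketch of such a HorizontalPodAutoscaler manifest, assuming the application Deployment is named website-deployment (the names and the minimum replica count are assumptions; the maximum of 10 replicas and the 50% CPU target come from the requirements):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: website-hpa              # assumed name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: website-deployment     # assumed Deployment name
  minReplicas: 2                 # assumed minimum
  maxReplicas: 10                # maximum from the requirements
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50 # scale out when average CPU usage exceeds 50%

Running something like siege -c 100 -t 2M http://<external-ip>/ against the site and watching kubectl get hpa -w is enough to see the replica count climb.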

Now, let's move on to liveness and readiness probes, which are Kubernetes mechanisms designed to check the health of a Pod.

  1. Liveness Probe: This probe verifies whether your application is running properly. If the liveness probe fails, Kubernetes will terminate the Pod and create a new one as a replacement. This feature is particularly useful if your application has deadlocked and cannot recover without a restart.

    In my setup, the liveness probe performs an HTTP GET request to the /live.php endpoint on port 80 of my Pod. I created another PHP file (live.php) in the same location as index.php:

<?php
http_response_code(200);
echo "Application is running";

The probe will start 15 seconds after the Pod initializes (initialDelaySeconds) and repeat every 20 seconds (periodSeconds). It checks whether a 200 response is returned; if not, the Pod will be terminated and recreated.
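
In the deployment manifest, this translates to a livenessProbe block on the application container along these lines (the values match what is described above):

livenessProbe:
  httpGet:
    path: /live.php
    port: 80
  initialDelaySeconds: 15   # start probing 15 seconds after the container starts
  periodSeconds: 20         # probe every 20 seconds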

  2. Readiness Probe: This probe checks whether your application is ready to handle incoming traffic. If the readiness probe fails, Kubernetes will halt traffic to the Pod until it becomes ready.

    In my setup, the readiness probe performs an HTTP GET request to the /status.php endpoint on port 80 of my Pod. I also created another PHP file (status.php):

<?php
$dbHost = getenv('DB_HOST');
$dbUser = getenv('DB_USER');
$dbPassword = getenv('DB_PASSWORD');
$dbName = getenv('DB_NAME');

$link = mysqli_connect($dbHost, $dbUser, $dbPassword, $dbName);

if ($link) {
    $res = mysqli_query($link, "SELECT * FROM products LIMIT 1;");
    if ($res) {
        http_response_code(200);
        echo "Application is healthy";
    } else {
        http_response_code(500);
        echo "Application is not healthy: unable to query the database";
    }
} else {
    http_response_code(500);
    echo "Application is not healthy: unable to connect to the database";
}

The probe will start 5 seconds after the Pod initializes (initialDelaySeconds) and repeat every 10 seconds (periodSeconds). This probe goes further by testing the application's connection to the database.
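
The corresponding readinessProbe block on the application container looks something like this (values as described above):

readinessProbe:
  httpGet:
    path: /status.php
    port: 80
  initialDelaySeconds: 5   # start probing 5 seconds after the container starts
  periodSeconds: 10        # probe every 10 seconds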

Explore the GitHub repository to delve into manifests and code files: GitHub Repo

To be continued ...

Currently, I'm delving into the extra credit part, which includes Package Everything in Helm, Implement Persistent Storage, and Implement CI/CD Pipeline. Stay tuned for the next updates!