Kubernetes

Own the Kubernetes By Deploying It

Author

Kasper

Let’s take the matter into our own hands and master the basics of Kubernetes by deploying a personal, single-node cluster for side projects on the cheapest VPS you can find.

As the great Admin Mickiewicz (1798–1855) wrote in the “Ode to Juniors”:

Down from the Cloud where the Vendor locks

A self-taught DevOps drops.

He is his own rudder, sailor, and vessel.

Oh… so you mean OWN the cluster, like literally?

Literally figuratively own the k8s

FROM note:latest AS introduction

This article is based on my personal notes on building a single-node Kubernetes cluster. It’s one of those you-don’t-need-it projects, primarily for educational purposes.

This is the first part of the series and covers:

  • Installing Kubernetes
  • Importing a Docker image into the cluster manually, then integrating the cluster with GHCR
  • Finally, deploying a Hello World service

In the next parts, I’ll guide you through installing SSL and enabling persistence for databases and other types of storage.

Before we start

To complete this project you will need a low-cost VPS. I am using an Ubuntu VPS with 1 vCore CPU, 2GB RAM, and 20GB storage for less than $4 per month — and honestly, that’s sufficient.

I won’t recommend any specific hosting services. Take some time to find what works best for you. You might even be eligible for a free tier!

You will need root privileges and SSH access on your VPS. We won’t cover installing the OS or configuring SSH access.

Lastly, you’ll need a domain pointing to your VPS’s address.

A word of caution

I’ll do my best to explain the process clearly, but be aware that if you get stuck, you’re on your own 😉. I suggest working on a fresh, disposable VPS, so if something breaks, you can start over easily.

This project is not suited for production use that requires high availability or handles large traffic. After all, we’re building a single-node k8s instance on a low-cost VPS.


Installing Kubernetes

When I started this project, I already had experience with Minikube on my local machine. I wanted to take advantage of its simplicity but soon ran into resource issues — primarily insufficient storage.

After some research, I discovered k3s by Rancher — a lightweight Kubernetes distribution designed for IoT and edge devices. It requires fewer resources and is very easy to install.

A small side note on the code snippets and shell commands.

This article acts more as a guide through official resources than as a self-contained tutorial. Whenever a shell command or code snippet is copied from documentation, I’ll include a link to the source. I encourage you to use original documentation, as it ensures the most up-to-date version of the snippet and helps improve your understanding.

For instance, the next command installs k3s, but the official documentation also covers how to use k3s to verify the installation.

Log in to your VPS with SSH and install k3s according to the documentation:

```shell
curl -sfL https://get.k3s.io | sh -
```

That’s it! Kubernetes is installed!
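To confirm that the installation worked, you can ask the cluster about its nodes; k3s bundles its own kubectl on the server, and runs as a regular systemd service:

```shell
# list cluster nodes; the single node should report STATUS "Ready"
sudo k3s kubectl get nodes

# the k3s service itself should be active
sudo systemctl status k3s --no-pager
```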

Managing the cluster

A Kubernetes cluster exposes an API that lets end users control it. Tools like kubectl (command line), Lens (GUI), and k9s (TUI) use it to talk to the cluster.

Follow these instructions to install kubectl on your local computer.

To access the cluster, kubectl needs some configuration, including credentials and the server address. A default user with credentials was already created for us during the installation of k3s. Let’s get back to our server and grab that data.

On the server print out the contents of the k3s.yaml file:

```shell
sudo cat /etc/rancher/k3s/k3s.yaml

# or, if you don't remember the path
kubectl config view --flatten
```

This file contains the necessary data to access the cluster. You should see something similar to this.

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <your-certificate-authority-data>
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: <your-client-certificate-data>
    client-key-data: <your-client-key-data>
```

Copy the entire file and paste it into a text editor. Before we can use it, we have to modify a few lines.

Change the server URL from the local 127.0.0.1 to a public IP or domain pointing to the server. Leave the port number unchanged.

Since we may manage multiple clusters from our local machine in the future, let’s also rename the cluster, context, and user to something more informative: the user becomes “k3s-user”, and the context and cluster become “k3s”. You can skip this step if you are not planning to add another cluster.

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <your-certificate-authority-data>
    server: https://your.server.com:6443 # change the URL
  name: k3s # cluster name
contexts:
- context:
    cluster: k3s # cluster name
    user: k3s-user # user name
  name: k3s # context name
current-context: k3s # context name
kind: Config
preferences: {}
users:
- name: k3s-user # user name
  user:
    client-certificate-data: <your-client-certificate-data>
    client-key-data: <your-client-key-data>
```

Tools like kubectl and Lens look for ~/.kube/config in your local file system to get their configuration. If you don’t manage another cluster, you can save your configuration to this file. However, to keep things organized, let’s save our configuration to a separate file, e.g. ~/.kube/k3s.yaml, and add its path to the list of kubectl config files:

```shell
export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/k3s.yaml
```
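The export only lasts for the current shell session. To make it permanent, you can append it to your shell profile (assuming bash; adjust for your shell of choice):

```shell
# persist the KUBECONFIG setting across shell sessions
echo 'export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/k3s.yaml' >> ~/.bashrc
```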

Then check if the context for our cluster is visible:

```shell
kubectl config get-contexts
```
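If the k3s context is listed but not marked as active (the asterisk in the get-contexts output marks the active one), you can switch to it with a standard kubectl command:

```shell
# make the k3s context the active one
kubectl config use-context k3s
```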

Hello World

Before we move on to publishing the Docker image in the GitHub registry, I want to show you how to deploy our service manually. This may come in handy for small projects or just to play around with k8s. This part is optional, and you can skip right to “Integrating with a private image registry”.

Our Hello World application is an Nginx server with a simple HTML page.

```dockerfile
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

The contents of index.html:

```html
<!DOCTYPE html>
<html>
<head>
  <title>Hello</title>
</head>
<body>
  Hello World!
</body>
</html>
```

With those two files in place, we can build the image.

```shell
docker build -t hello-world:1.0.0 .
```

Or, if you are on an ARM processor (like Apple Silicon) but your VPS runs amd64:

```shell
docker buildx build -t hello-world:1.0.0 --platform=linux/amd64 .
```

The above flag comes in handy when you build images on a different processor architecture than your server’s. If during image import you see the error “failed resolving platform for image”, or after deployment on k8s you see something like “Error: failed to create containerd container: error unpacking image: failed to resolve rootfs: content”, check whether the machine you are building on and the server share the same architecture.

After the image is built, we can save it to a tar archive.

```shell
docker image save hello-world > hello-world.tar
```

Now upload the tar file to your server. I am using scp.

```shell
scp hello-world.tar your_username@your.server.com:~/
```

k3s ships with additional tools. One of them is ctr, for managing containers run by containerd. We can use it to import our image directly into the local registry of our cluster.

```shell
k3s ctr images import ./hello-world.tar
```

Verify that the image is available on the cluster.

```shell
k3s ctr images ls | grep hello-world
```

Deploying the application

Making the image available to our cluster puts us halfway to our goal; the application still needs to be deployed.

Kubernetes uses a declarative paradigm of managing the infrastructure. Instead of giving a direct command to create an application, we describe its desired state.

In the following example, the Deployment object describes an application named hello-world running a single container based on the hello-world:1.0.0 image and exposing port 80.

```yaml
# hello-world.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  labels:
    name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      name: hello-world
  template:
    metadata:
      labels:
        name: hello-world
    spec:
      containers:
      - name: hello-world
        image: hello-world:1.0.0
        imagePullPolicy: Never
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
```

Notice that imagePullPolicy is set to Never for this deployment. That means our cluster won’t look for images outside its registry during container creation. This is expected behavior in our case since the image is available only locally.

Apply the configuration to create the Deployment.

```shell
kubectl apply -f hello-world.yaml
```

After a few moments, you should be able to see the pods.

```shell
kubectl get pods
```
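If the pod is not in the Running state, a couple of standard kubectl commands help with troubleshooting (selecting by the name label from our Deployment):

```shell
# show events and state transitions for the pod
kubectl describe pod -l name=hello-world

# print the container logs
kubectl logs -l name=hello-world
```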

Integrating with a private image registry

Importing an image directly into the cluster quickly becomes awkward, especially if we plan frequent updates, versioning, and rollbacks, or want to work with multiple clusters.

To reduce this burden, we will integrate with the GitHub Container Registry (GHCR). First, follow this guide on how to log in to the registry with your local Docker client.

Then rebuild the image with a proper tag.

```shell
docker build -t ghcr.io/<github_username>/hello-world:1.0.0 -f Dockerfile .
```

Once it's done, push it to GitHub.

```shell
docker push ghcr.io/<github_username>/hello-world:1.0.0
```

Add the registry to k3s

To make the image available to our cluster, we have to grant it access. k3s checks registries.yaml for custom registry configuration. By default this file does not exist, so we have to create it:

```shell
sudo vim /etc/rancher/k3s/registries.yaml
```

The contents for GHCR will look as follows:

```yaml
mirrors:
  ghcr.io:
    endpoint:
      - "https://ghcr.io"
configs:
  "ghcr.io":
    auth:
      username: <github_username>
      password: <secret>
```

The <secret> used for authentication is a personal access token created under the Developer settings of your GitHub account. Our cluster only needs read privileges for packages.
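Note that k3s reads registries.yaml on startup, so restart the service after editing the file for the change to take effect:

```shell
sudo systemctl restart k3s
```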

Lastly, modify the Deployment by changing the image and imagePullPolicy.

```yaml
# Modify the containers section of the Deployment
containers:
- name: hello-world
  image: ghcr.io/<github_username>/hello-world:1.0.0
  imagePullPolicy: Always
  ports:
  - name: http
    containerPort: 80
    protocol: TCP
```
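Apply the updated manifest the same way as before; kubectl rollout reports when the new pods are ready:

```shell
kubectl apply -f hello-world.yaml
kubectl rollout status deployment/hello-world
```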

Define an Ingress

At this point our Deployment is not accessible from outside the cluster. We did expose port 80 on our server, but it is reachable only from within the cluster.

The object that describes external traffic into k8s is an Ingress. An Ingress routes to a Service, which acts as an abstraction over our application and provides a stable endpoint for Pods, which are ephemeral.

We can draw the following diagram to describe the logical relationship between objects in our cluster:

```
[Ingress] --> [Service] --> [Pod 1]...[Pod n]
```

You can explore the Ingress concept for a more in-depth understanding.

Apply the Service declaration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
spec:
  selector:
    name: hello-world # must match the Pod labels from the Deployment
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```
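Before wiring up the Ingress, you can sanity-check the Service with a port-forward from your local machine (local port 8080 is an arbitrary choice):

```shell
# forward local port 8080 to the Service's port 80
kubectl port-forward service/hello-world-service 8080:80

# in another terminal
curl http://localhost:8080
```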

And finally the Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
  namespace: default
spec:
  ingressClassName: traefik
  rules:
  - host: your.server.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world-service
            port:
              number: 80
```

Notice that we are using traefik as the ingressClassName. k3s ships with Traefik as its default Ingress Controller. You may choose from many available ingress controllers, some of them listed here.
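With the Ingress applied (kubectl apply -f, as before), the application should be reachable from outside the cluster, assuming your domain already points to the VPS:

```shell
# should return the contents of index.html
curl http://your.server.com
```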

Summary

Congratulations! You are eligible for a belly rub today.

We covered a lot of topics here; however, we only scratched the surface. Each step deserves a deeper dive in the future. Luckily, since you are the owner of the Kubernetes cluster, it will be all the more satisfying from now on.

If you enjoyed this article and want to receive updates in the future consider following me :) I would much appreciate it.

Do not remove your cluster yet. In the next part, we will be configuring HTTPS.