In the previous article we discussed how to prepare a good application image.
Now we need to talk about where to run those well-designed containers.
Kubernetes is the de facto industry standard nowadays and, despite its complexity, it does a very decent job.
There are several flavors of it, managed and unmanaged, with different scaling capabilities.
Examples of managed Kubernetes:

- Amazon EKS
- Google GKE
- Azure AKS
Examples of unmanaged Kubernetes runtimes:

- Official Kubernetes (kubeadm)
- Minikube
- Kind
Official Kubernetes is a heavy beast, and Minikube is based on virtual machines; Kind, on the other hand, uses your current Docker installation to provision a small, single-node cluster, so it's possible to test IaC artifacts locally with little effort.
Before creating a cluster, you MUST meet the following prerequisites:

- Go is installed, so the `go install` command below works
- `$HOME/go/bin` is present on `$PATH`
- Docker is installed and running (remember, Kind provisions the cluster on top of it)
The easiest way to install kind is:
```bash
go install sigs.k8s.io/kind@latest
```
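Once installed, a quick sanity check (the reported version will vary):

```bash
kind version
```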
Once you get the CLI app, you can create a cluster with:
```bash
kind create cluster
```
There! Cluster created and configured in `~/.kube/config`; you can use `kubectl` to interact with your brand new cluster:
```bash
sombriks@thanatos:~/git/simple-knex-koa-example> kubectl get nodes -o wide
NAME                 STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION    CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane   6h21m   v1.27.3   172.18.0.2    <none>        Debian GNU/Linux 11 (bullseye)   6.5.9-1-default   containerd://1.7.1
```
For starters, a Kubernetes cluster is, as the name says, a cluster of nodes.
Usually there is more than one node, so your services can offer high availability and zero downtime.
Workloads, which is what Kubernetes calls your applications, are distributed across the nodes.
In order to make those applications available to each other inside the cluster, a network service must be defined.
There is much more to it, but let's focus on those two concepts, workloads and services, since they are the minimum needed to spin up our application.
```
infrastructure/
├── docker-compose.yml
├── Dockerfile
├── k8s
│   ├── network
│   │   ├── http-routes
│   │   │   └── app-route.yml
│   │   └── service
│   │       ├── app-service.yml
│   │       └── db-service.yml
│   └── workloads
│       ├── deployment
│       │   └── app-deployment.yml
│       └── stateful-set
│           └── db-stateful-set.yml
└── README.md
```
Our sample application, when in development mode, uses a small SQLite database. When in production mode, however, the task falls to the PostgreSQL database engine.
The workload we'll use to deploy the database is called a stateful set.
It has this name because, unlike a deployment, the app state isn't considered discardable.
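For reference, here is a minimal sketch of what such a manifest can look like; image tag, labels and credentials below are illustrative, the real manifest is `db-stateful-set.yml` in the repository:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db-stateful-set
spec:
  serviceName: db-service
  replicas: 1
  selector:
    matchLabels:
      app: db # illustrative label
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16 # illustrative tag
          env:
            - name: POSTGRES_PASSWORD # illustrative credentials
              value: postgres
          ports:
            - containerPort: 5432 # default postgres port
```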
To create the stateful set in the cluster, use the following command:
```bash
# cd infrastructure
kubectl apply -f https://raw.githubusercontent.com/sombriks/simple-knex-koa-example/manual-tag-workflow/infrastructure/k8s/workloads/stateful-set/db-stateful-set.yml
```
Then you create the service which will expose this workload:
```bash
# cd infrastructure
kubectl apply -f https://raw.githubusercontent.com/sombriks/simple-knex-koa-example/manual-tag-workflow/infrastructure/k8s/network/service/db-service.yml
```
There! We got the database. We'll address ways to test it in a moment.
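A quick first check is to list what we just created (your names and ages will differ):

```bash
kubectl get statefulsets,services
```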
First of all, you must build the Docker image using the provided Dockerfile:
```bash
# cd simple-knex-koa-example
docker build -f infrastructure/Dockerfile \
  -t sombriks/simple-knex-koa-example:development .
```
In order to offer high availability, we can use deployments to define replica sets and keep versions of them. That way, if a deployment update fails, it's always possible to roll back to a previous known stable state.
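For reference, a minimal sketch of what such a deployment manifest can look like; labels and replica count here are illustrative, the real manifest is `app-deployment.yml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-knex-koa-deployment
spec:
  replicas: 2 # illustrative replica count
  selector:
    matchLabels:
      app: simple-knex-koa # illustrative label
  template:
    metadata:
      labels:
        app: simple-knex-koa
    spec:
      containers:
        - name: simple-knex-koa
          # the image we just built and will load into kind below
          image: sombriks/simple-knex-koa-example:development
          imagePullPolicy: IfNotPresent # use the locally loaded image
          ports:
            - containerPort: 3000
```

And if an update goes wrong, `kubectl rollout undo deployment/simple-knex-koa-deployment` brings back the previous revision.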
Use the following command to apply the deployment manifest file:
```bash
# cd infrastructure
kubectl apply -f k8s/workloads/deployment/app-deployment.yml
```
This command will probably end up in a strange error:
```bash
sombriks@thanatos:~/git/simple-knex-koa-example/infrastructure> kubectl get pods
NAME                                         READY   STATUS         RESTARTS   AGE
db-stateful-set-0                            1/1     Running        0          171m
simple-knex-koa-deployment-5585dc79b-q55wc   0/1     ErrImagePull   0          3m13s
sombriks@thanatos:~/git/simple-knex-koa-example/infrastructure> kubectl logs simple-knex-koa-deployment-5585dc79b-q55wc
Error from server (BadRequest): container "simple-knex-koa" in pod "simple-knex-koa-deployment-5585dc79b-q55wc" is waiting to start: trying and failing to pull image
```
This happens because the image we created isn't available in the public Docker registry.
Fortunately, kind offers a way to import our local image:
```bash
kind load docker-image sombriks/simple-knex-koa-example:development
```
Sample output:
```bash
sombriks@thanatos:~/git/simple-knex-koa-example> kind load docker-image sombriks/simple-knex-koa-example:development
Image: "sombriks/simple-knex-koa-example:development" with ID "sha256:5cd14aad3bc9dc9487fe72a9f9f3fb11f902bb3c58bbfbc3c9b7f8676976cd51" not yet present on node "kind-control-plane", loading...
```
Now we're good to go to the next step: the service configuration.
As with the database, we need to deploy a service if we want to make the application available:
```bash
# cd infrastructure
kubectl apply -f k8s/network/service/app-service.yml
```
And there we have it: workload and service properly deployed.
Take a detailed look at the manifest YAML files for a thorough comprehension of what just happened here.
Once workloads are deployed, they can communicate with each other using IP addresses. However, that isn't the proper way to make workloads talk to each other.
Instead, define services, and the service name will be resolved inside the pods.
This is why the `PG_CONNECTION_URL` variable has `db-service` in the hostname part: this is the value of `metadata.name` inside `db-service.yml`.
Labels are important. It's thanks to labels that services can connect with pods.
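To make that concrete, here is a minimal sketch of a database service selecting pods by label; values are illustrative and follow the stateful set sketch above, the real file is `db-service.yml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-service # resolvable as a hostname inside the cluster
spec:
  selector:
    app: db # must match the labels on the stateful set's pod template
  ports:
    - port: 5432
      targetPort: 5432
```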
If you noticed, all we did was put our application inside a Kubernetes cluster, with no contact with the outside world.
The quickest way to check if everything is working as expected is to perform a port-forward:
```bash
kubectl port-forward services/simple-knex-koa-service 3000
```
Then you can get the service output in your browser or via curl:
```bash
curl http://localhost:3000/books
```
You can port-forward pods, deployments and services.
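For example (pod and deployment names taken from the outputs above):

```bash
kubectl port-forward pods/db-stateful-set-0 5432
kubectl port-forward deployments/simple-knex-koa-deployment 3000
```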
One useful parameter of port-forward is the address. Sometimes you want to reach the service from outside the node, so you must bind the service port to a less restrictive address:
```bash
kubectl port-forward --address 0.0.0.0 services/simple-knex-koa-service 3000
```
It is possible to set the service type to NodePort and that way get an exposed port on a public IP for the service.
However, `kind` has some trouble with the public IP part, so we're not using it today.
An Ingress controller acts as an API gateway and exposes the services through HTTP requests and URIs.
In order to use an ingress with kind, we would need to recreate our cluster, so we'll see that approach later.
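For the curious, that recreation boils down to `kind create cluster --config` with a config file along these lines (a sketch based on the kind documentation; file name and port mappings are illustrative):

```yaml
# kind-config.yml - maps host ports 80/443 to the node so an
# ingress controller can listen on them
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
      - containerPort: 443
        hostPort: 443
```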
The kind cluster is good for testing things locally before applying them to a real production cluster, and it covers almost all scenarios.
Noteworthy missing ones are a simple ingress setup and public IPs for the pods inside the cluster.
Besides that, in order to properly implement a continuous deployment scenario, one could install FluxCD or ArgoCD inside the cluster, configure it to observe the infrastructure folder, and then use it as the source of truth for the cluster's desired state.