Kubernetes setup
We will use Minikube with the VirtualBox driver.
```shell
minikube config set memory 3819
minikube config set driver virtualbox
minikube start
minikube addons enable metrics-server
minikube dashboard &
minikube tunnel
```
We will also be using a dedicated Namespace, schildcafe, defined by this very simple ns.yaml:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: schildcafe
```
and applied with kubectl:

```shell
kubectl apply -f ns.yaml
```
MySQL
We’ll also run a MySQL server in our Kubernetes cluster for demonstration purposes.
The Deployment is:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: schildcafe-mysql
  namespace: schildcafe
spec:
  selector:
    matchLabels:
      app: schildcafe-mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: schildcafe-mysql
    spec:
      containers:
        - image: mysql:5.6
          name: schildcafe-mysql
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
          ports:
            - containerPort: 3306
              name: mysql
```
We then define the actual MySQL Service:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: schildcafe-mysql
  namespace: schildcafe
spec:
  ports:
    - port: 3306
  selector:
    app: schildcafe-mysql
  type: NodePort
```
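Inside the cluster, this Service becomes reachable under a DNS name derived from its name and namespace. A small sketch of the naming pattern (assuming the default cluster domain cluster.local):

```python
def cluster_dns(service: str, namespace: str) -> str:
    # Kubernetes cluster DNS pattern: <service>.<namespace>.svc.<cluster-domain>
    return f"{service}.{namespace}.svc.cluster.local"

print(cluster_dns("schildcafe-mysql", "schildcafe"))
# schildcafe-mysql.schildcafe.svc.cluster.local
```

This is the host name we will later hand to the other components as MYSQL_HOST.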
Exercise
Implement the MySQL root password as a Kubernetes Secret.
Solution
We define the MySQL root password as a Secret of type kubernetes.io/basic-auth:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: schildcafe-mysql-secret
  namespace: schildcafe
type: kubernetes.io/basic-auth
stringData:
  password: root
```
Then load it in the Deployment using:
```yaml
...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - image: mysql:5.6
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: schildcafe-mysql-secret
                  key: password
```
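Note that stringData is only a write-time convenience: Kubernetes stores Secret values base64-encoded in the data field, which is what you will see when inspecting the Secret. A quick sketch of the encoding:

```python
import base64

plaintext = "root"
# This is the value Kubernetes stores under data.password
encoded = base64.b64encode(plaintext.encode()).decode()
print(encoded)  # cm9vdA==

# Decoding recovers the original password
assert base64.b64decode(encoded).decode() == plaintext
```

Base64 is an encoding, not encryption, so access to Secrets should still be restricted via RBAC.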
Exercise
Implement MySQL database initialisation.
Solution
We create a ConfigMap with an init.sql, as described in the MySQL Docker image documentation:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: schildcafe-mysql-initdb-config
  namespace: schildcafe
data:
  init.sql: "CREATE DATABASE IF NOT EXISTS cafe;"
```
which then needs to be mounted at /docker-entrypoint-initdb.d in the MySQL Deployment:
```yaml
...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - image: mysql:5.6
          ...
          volumeMounts:
            - name: schildcafe-mysql-initdb
              mountPath: /docker-entrypoint-initdb.d
      ...
      volumes:
        - name: schildcafe-mysql-initdb
          configMap:
            name: schildcafe-mysql-initdb-config
```
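Should the initialisation ever grow beyond a single statement, a YAML block scalar keeps the ConfigMap readable. A sketch with the same statement written in block form:

```yaml
data:
  init.sql: |
    CREATE DATABASE IF NOT EXISTS cafe;
```

The `|` preserves newlines, so the file mounted into the container contains the SQL exactly as written.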
Exercise
Add persistence to MySQL.
Solution
We create a PersistentVolume under /data/, which Minikube persists across restarts.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  # PersistentVolumes are cluster-scoped, so no namespace is set
  name: schildcafe-mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/schildcafe/mysql"
```
with a corresponding PersistentVolumeClaim:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: schildcafe-mysql-pv-claim
  namespace: schildcafe
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
```
and then mount it in the MySQL Deployment at /var/lib/mysql:
```yaml
...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - image: mysql:5.6
          ...
          volumeMounts:
            - name: schildcafe-mysql-persistent-storage
              mountPath: /var/lib/mysql
      ...
      volumes:
        - name: schildcafe-mysql-persistent-storage
          persistentVolumeClaim:
            claimName: schildcafe-mysql-pv-claim
```
The Café’s Coffee Machine and Servitør
The basic Deployments for the Coffee Machine and the Servitør are straightforward:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: schildcafe-servitor
  namespace: schildcafe
  labels:
    app: schildcafe-servitor
spec:
  selector:
    matchLabels:
      app: schildcafe-servitor
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: schildcafe-servitor
    spec:
      containers:
        - name: servitor
          image: "schildwaechter/schildcafe.servitor:main"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 1333
              name: servitor
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: schildcafe-coffee
  namespace: schildcafe
  labels:
    app: schildcafe-coffee
spec:
  selector:
    matchLabels:
      app: schildcafe-coffee
  template:
    metadata:
      labels:
        app: schildcafe-coffee
    spec:
      containers:
        - name: schildcafe-coffee
          image: "schildwaechter/cessda.cafe.coffee.carsten:main"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 1337
```
The two services support liveness and readiness probes. We also know they don’t need many resources, so the Servitør gets:
```yaml
...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - image: "schildwaechter/schildcafe.servitor:main"
          ...
          ports:
            - containerPort: 1333
          livenessProbe:
            httpGet:
              path: /healthcheck
              port: 1333
          readinessProbe:
            httpGet:
              path: /healthcheck
              port: 1333
          resources:
            requests:
              memory: "8Mi"
              cpu: "10m"
            limits:
              memory: "32Mi"
              cpu: "100m"
```
and similarly for the Coffee Machine.
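Kubernetes resource quantities use milli-CPU (the "m" suffix) and binary memory suffixes ("Mi"). A small sketch of what the values above amount to:

```python
def parse_cpu(quantity: str) -> float:
    """Convert a Kubernetes CPU quantity to cores ('100m' -> 0.1)."""
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000
    return float(quantity)

def parse_memory_mi(quantity: str) -> int:
    """Convert a 'Mi' (mebibyte) memory quantity to bytes."""
    assert quantity.endswith("Mi")
    return int(quantity[:-2]) * 1024 ** 2

print(parse_cpu("10m"))         # 0.01 cores requested
print(parse_cpu("100m"))        # 0.1 cores as the limit
print(parse_memory_mi("32Mi"))  # 33554432 bytes
```

So the Servitør is guaranteed a hundredth of a core and may burst to a tenth, while its memory is capped at 32 MiB.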
Both also need some configuration settings. For the Servitør we use:
```yaml
...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - image: "schildwaechter/schildcafe.servitor:main"
          ...
          env:
            - name: GELF_LOGGING
              value: "false"
            - name: GIN_MODE
              value: "release"
            - name: MYSQL_PORT
              value: "3306"
            - name: MYSQL_HOST
              value: "schildcafe-mysql.schildcafe.svc.cluster.local"
            - name: MYSQL_DB
              value: "cafe"
            - name: MYSQL_USER
              value: "root"
            - name: MYSQL_PASS
              value: "root"
```
The Coffee Machine can live without the MySQL parameters.
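Inside the container, the application assembles its database connection from these variables. A hedged sketch of that step (the variable names match the Deployment above; the Go-style DSN format is illustrative, not necessarily what the Servitør uses internally):

```python
# Defaults mirror the Deployment's env block; in the cluster they come from the pod spec.
env = {
    "MYSQL_HOST": "schildcafe-mysql.schildcafe.svc.cluster.local",
    "MYSQL_PORT": "3306",
    "MYSQL_DB": "cafe",
    "MYSQL_USER": "root",
    "MYSQL_PASS": "root",
}

def mysql_dsn(cfg: dict = env) -> str:
    # Illustrative DSN: user:pass@tcp(host:port)/db
    return (f"{cfg['MYSQL_USER']}:{cfg['MYSQL_PASS']}"
            f"@tcp({cfg['MYSQL_HOST']}:{cfg['MYSQL_PORT']})/{cfg['MYSQL_DB']}")

print(mysql_dsn())
# root:root@tcp(schildcafe-mysql.schildcafe.svc.cluster.local:3306)/cafe
```

MYSQL_PASS should of course also come from the Secret in the same way as in the MySQL Deployment.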
Finally, the corresponding Services are:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: schildcafe-servitor
  namespace: schildcafe
spec:
  type: NodePort
  ports:
    - port: 1333
  selector:
    app: schildcafe-servitor
```
and
```yaml
apiVersion: v1
kind: Service
metadata:
  name: schildcafe-coffee
  namespace: schildcafe
spec:
  clusterIP: "None"
  ports:
    - port: 1337
  selector:
    app: schildcafe-coffee
```
Note that the Servitør is a Service of type NodePort, while the Coffee Machine has clusterIP set to "None", making it a headless Service.
We’ll make use of that for scaling purposes later.
The Barista
The final component that needs to be deployed to the SchildCafé is the Barista. As this is a script that needs to run regularly, we’ll define a CronJob that runs every minute.
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: schildcafe-barista
  namespace: schildcafe
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: schildcafe-barista
              image: schildwaechter/schildcafe.barista:main
              imagePullPolicy: IfNotPresent
              env:
                - name: COFFEE_MACHINES
                  value: '["http://schildcafe-coffee.schildcafe.svc.cluster.local:1337"]'
                - name: MYSQL_PORT
                  value: "3306"
                ...
              command:
                - python3
                - ./barista.py
          restartPolicy: Never
```
Of course, the Barista needs the same MySQL parameters as the Servitør.
Note that we don’t want more than one instance running at the same time:
concurrencyPolicy: Forbid prevents a new run from starting while the previous one is still active, and restartPolicy: Never stops failed containers from being restarted in place.
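COFFEE_MACHINES is a JSON-encoded list rather than a single URL, so the Barista can be pointed at several coffee machines at once. A sketch of how such a value parses:

```python
import json

# The value exactly as set in the CronJob's env block
raw = '["http://schildcafe-coffee.schildcafe.svc.cluster.local:1337"]'

machines = json.loads(raw)
print(machines[0])   # http://schildcafe-coffee.schildcafe.svc.cluster.local:1337
print(len(machines)) # 1
```

Adding a second machine is then just a matter of appending another URL to the JSON array in the manifest.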
Ordering
Once everything is up and running, check with:

```shell
kubectl get all -n schildcafe
```
You should now be able to send an order to the IP of the Servitør service and retrieve it after a few minutes.