I recently set up Umami on this blog to get some basic analytics. The deployment on Kubernetes was easy, but it required some spelunking through the documentation. Here are my notes.
Architecture
The setup is intentionally simple: one Umami pod, and one PostgreSQL pod backed by a ZFS filesystem.
This design has no redundancy: when the node running the pods goes down, so does Umami. Visits will not be recorded for the duration, and the web UI will be unavailable.
This design also has no replication. Since the DB stores everything in a mirrored zpool, the analytics data will survive a single disk failure, but it will not survive the datacenter burning down. This is fine for our use case.
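For reference, a mirror like that is created with a single zpool command; the pool and device names below are placeholders, not my actual setup:

zpool create tank mirror /dev/disk/by-id/ata-DISK0 /dev/disk/by-id/ata-DISK1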
Configuration
We store everything in the umami namespace so that it is easy to find.
apiVersion: v1
kind: Namespace
metadata:
  name: umami
  labels:
    name: umami
01-namespace.yml
We store the configuration for Umami and PostgreSQL in a secret. We have a password to share between the DB and Umami, and the latter also needs a random string for auth security.
apiVersion: v1
kind: Secret
metadata:
  name: umami-config
  namespace: umami
type: Opaque
stringData:
  # Same password in pg-password and the middle of database-url
  pg-password: DB-PASSWORD
  database-url: "postgresql://umami:DB-PASSWORD@postgres:5432/umami"
  # Generate hash-salt with `openssl rand -base64 32`
  hash-salt: RANDOM-STRING
02-secret.yml
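One way to generate the two values; using hex for the password avoids characters that would need URL-escaping inside database-url:

openssl rand -hex 24     # use as DB-PASSWORD
openssl rand -base64 32  # use as RANDOM-STRING for hash-salt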
PostgreSQL
The PostgreSQL part of the setup consists of a service, a StatefulSet with a single replica, and a ConfigMap with the DB initialization script.
The service just exposes the standard PostgreSQL port of 5432. It also gives the DB a stable DNS name inside the namespace, postgres, which is exactly the host that the database-url in the secret points at.
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: umami
spec:
  ports:
    - port: 5432
      name: postgres
  selector:
    app: postgres
03-postgres.yml
We do not bother with any redundancy here, so we have a single PostgreSQL pod. It has a persistent volume, so we configure it as part of a StatefulSet. The interesting bits here are the volume mounts and the environment variables.
We mount the persistent volume to /var/lib/postgresql/data. In my case, this is provisioned by zfs-localpv. Practically, this creates a new ZFS filesystem in a zpool and mounts it into the pod. Conceptually, this is just a hostPath volume with better separation from the rest of the underlying system.
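For completeness, the StorageClass that zfs-localpv consumes looks roughly like this. The poolname is specific to each setup, so treat this as a sketch rather than something to copy verbatim:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  fstype: "zfs"
  poolname: "zfspv-pool"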
We need to initialize the database with the Umami schema. We can do this in the postgres Docker image by mounting a directory of scripts to /docker-entrypoint-initdb.d/. For simplicity, we store the init script in a ConfigMap and mount that.
As for environment variables, we need to specify the username and password that Umami will use to connect. We hardcode the user to umami, and we grab the password from the secret defined earlier. The database name defaults to the username.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: umami
spec:
  selector:
    matchLabels:
      app: postgres
  serviceName: postgres
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: postgres
          image: registry.hub.docker.com/library/postgres:14.1
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - name: pgdata
              mountPath: /var/lib/postgresql/data
            - name: initdb-scripts
              mountPath: /docker-entrypoint-initdb.d/
          env:
            - name: POSTGRES_USER
              value: umami
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: umami-config
                  key: pg-password
      volumes:
        - name: initdb-scripts
          configMap:
            name: initdb-scripts
            items:
              - key: "schema.postgresql.sql"
                path: "schema.postgresql.sql"
      nodeSelector:
        kubernetes.io/hostname: fsn-qws-app2
  volumeClaimTemplates:
    - metadata:
        name: pgdata
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "openebs-zfspv"
        resources:
          requests:
            storage: 10Gi
03-postgres.yml (continued)
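Once the pod is up, a quick sanity check is to list the tables from inside it; the pod is named postgres-0 because it is replica 0 of the postgres StatefulSet:

kubectl -n umami exec -it postgres-0 -- psql -U umami -d umami -c '\dt'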
Lastly, we add the ConfigMap with the DB init script. It is a bit ugly to copy-paste a long block of SQL like this, but we only have to do it once, and it keeps all the configuration self-contained.
apiVersion: v1
kind: ConfigMap
metadata:
  name: initdb-scripts
  namespace: umami
data:
  # Copied from https://github.com/mikecao/umami/blob/master/sql/schema.postgresql.sql
  schema.postgresql.sql: |
    drop table if exists event;
    drop table if exists pageview;
    ... rest of schema.postgresql.sql from umami github repo ...
03-postgres.yml (continued)
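As an aside, if the copy-pasting bothers you, kustomize can generate the ConfigMap straight from the SQL file instead. A sketch, assuming schema.postgresql.sql sits next to the kustomization.yaml shown later:

configMapGenerator:
  - name: initdb-scripts
    namespace: umami
    files:
      - schema.postgresql.sql
generatorOptions:
  disableNameSuffixHash: true

The disableNameSuffixHash option keeps the generated name as plain initdb-scripts, matching the reference in the StatefulSet.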
Umami
Umami itself is stateless, so we configure it as part of a Deployment. That funny-looking container image is the official Umami image, pinned to a specific version. We grab the DATABASE_URL and HASH_SALT environment variables from the secret defined earlier.
Note that we pinned both the postgres and umami pods to a specific node with nodeSelector. This is not strictly necessary, but they cannot function separately, and the postgres pod cannot move to other nodes anyway, so we might as well remove any network hop between them.
apiVersion: v1
kind: Service
metadata:
  name: umami
  namespace: umami
spec:
  ports:
    - port: 3000
      name: web
  selector:
    app: umami
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: umami
  namespace: umami
spec:
  selector:
    matchLabels:
      app: umami
  replicas: 1
  template:
    metadata:
      labels:
        app: umami
    spec:
      containers:
        - name: umami
          image: ghcr.io/mikecao/umami:postgresql-b756fcd
          ports:
            - containerPort: 3000
              name: umami
          env:
            - name: DATABASE_TYPE
              value: postgresql
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: umami-config
                  key: database-url
            - name: HASH_SALT
              valueFrom:
                secretKeyRef:
                  name: umami-config
                  key: hash-salt
      nodeSelector:
        kubernetes.io/hostname: fsn-qws-app2
04-umami.yml
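Before adding the ingress, you can check that Umami is serving by port-forwarding to the service and browsing to localhost:

kubectl -n umami port-forward svc/umami 3000:3000
# now open http://localhost:3000 in a browser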
Ingress
Finally, we need to expose Umami to the public Internet. In my case, I do this with ingress-nginx. TLS certificates are automatically provisioned by cert-manager.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: umami
  namespace: umami
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: 'letsencrypt'
spec:
  tls:
    - hosts:
        - umami.scvalex.net
      secretName: umami-certs
  rules:
    - host: umami.scvalex.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: umami
                port:
                  number: 3000
05-ingress.yml
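Once DNS for the hostname points at the ingress, a quick check from the outside (substitute your own hostname):

curl -sI https://umami.scvalex.net | head -n 1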
Putting it all together
To deploy all the components with kubectl apply -k, we list them in a kustomization.yaml.
resources:
  - 01-namespace.yml
  - 02-secret.yml
  - 03-postgres.yml
  - 04-umami.yml
  - 05-ingress.yml
kustomization.yaml
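Deploying everything is then a single command, followed by a check that both pods come up:

kubectl apply -k .
kubectl -n umami get pods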
And that is all there is to it: we now have Umami analytics. As with all Kubernetes configuration, it is verbose with lots of duplicated strings. On the bright side, it is readable, and we can specify it all in a single repo.