This repository contains the setup and scripts used to benchmark different container runtimes on a Raspberry Pi 5 using Kubernetes (K3s).
The setup consists of two parts:

- Single-node K3s cluster
  - Runs the benchmark workload (C HTTP server)
  - Deployments use different `RuntimeClass` values: `runc`, `runsc` (gVisor), `kata` (QEMU / Firecracker), `urunc` (Linux + unikernel variants)
- Prometheus on a separate machine
  - Uses Kubernetes service discovery to discover pods on the RPi cluster
  - Uses the blackbox exporter to probe pods over HTTP
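As a sketch of how Prometheus typically queries the blackbox exporter (the module name `http_2xx`, the port, and the pod IP here are illustrative assumptions, not values taken from this repo's config):

```python
from urllib.parse import urlencode

def probe_url(exporter: str, target: str, module: str = "http_2xx") -> str:
    """Build the URL Prometheus scrapes on the blackbox exporter.

    The exporter probes `target` on Prometheus's behalf and exposes
    metrics such as probe_success and probe_duration_seconds.
    """
    query = urlencode({"target": target, "module": module})
    return f"http://{exporter}/probe?{query}"

# Example: probing a (hypothetical) benchmark pod IP
print(probe_url("BLACKBOX_EXPORTER_IP:9115", "http://10.42.0.17:8080"))
# → http://BLACKBOX_EXPORTER_IP:9115/probe?target=http%3A%2F%2F10.42.0.17%3A8080&module=http_2xx
```

In the actual setup, the `target` parameter is filled in per pod by the Prometheus relabeling rules rather than constructed by hand.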
```
charts/                       # Helm chart for HTTP server workload
script/orchestrator.py        # Benchmarking script
prometheus-config/            # Example configs for remote Prometheus
manifests/remote-prometheus/  # RBAC for remote Prometheus access
grafana-dashboards/           # Example dashboards
```
Before running the benchmarks, make sure the required RuntimeClass resources are already deployed on the cluster.
For example, depending on the runtimes you want to test, this may include:
- `runc`
- `runsc`
- `kata-qemu`
- `kata-fc`
- `urunc`
The benchmark chart expects runtimeClassName to refer to an existing RuntimeClass.
You can verify the available runtime classes with:
```
kubectl get runtimeclass
```

The Prometheus and blackbox exporter configuration assumes that monitoring components are deployed in the `monitoring` namespace.
Create it if it does not already exist:
```
kubectl create namespace monitoring
```

The example Prometheus configuration assumes Prometheus is deployed using kube-prometheus-stack.
The files under prometheus-config/ are example Helm values used with:
- `kube-prometheus-stack`
- `prometheus-blackbox-exporter`
They are templates and must be adapted to the target environment before use.
On the Raspberry Pi cluster:
```
kubectl apply -f manifests/remote-prometheus/
```

This creates:

- a `ServiceAccount`
- a `ClusterRole` + `ClusterRoleBinding`
- a token `Secret` used by Prometheus to authenticate to the cluster
On the Prometheus machine:
Use:

- `prometheus-config/values-prometheus-remote.example.yaml`
- `prometheus-config/additional-scrape-config.example.yaml`

These must be adapted, replacing the placeholders:

- `<RPI5_K3S_API_IP>`
- `<BLACKBOX_EXPORTER_IP>`
- `<PROMETHEUS_IP>`
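The placeholder substitution is plain text replacement; a minimal sketch (the file contents and IP below are examples, not the actual template):

```python
# Minimal sketch: fill the <PLACEHOLDER> tokens in an example config.
def render(template: str, values: dict) -> str:
    """Replace each <KEY> token with its value."""
    for key, value in values.items():
        template = template.replace(f"<{key}>", value)
    return template

example = "api_server: https://<RPI5_K3S_API_IP>:6443"
print(render(example, {"RPI5_K3S_API_IP": "192.168.1.50"}))
# → api_server: https://192.168.1.50:6443
```

In practice a `sed` one-liner or editing the files by hand works just as well; the point is only that every placeholder must be resolved before the values files are passed to Helm.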
The workload is deployed via Helm:
`charts/http-server-bench/`
Each deployment is labeled for Prometheus discovery and filtering.
These labels are critical:
```
app=c-http-server
test_id=<test_id>
runtime_class=<runtime>
variant=<variant>
```
They are used by the Prometheus relabeling rules to:
- select only benchmark pods
- group results by runtime
- distinguish different test runs
If labels are missing or changed, metrics will not be collected correctly.
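Conceptually, the relabeling performs a label-based filter and grouping. The sketch below mimics that logic in plain Python (the pod data is made up; the real selection happens inside Prometheus):

```python
# Sketch of what the Prometheus relabeling rules accomplish:
# keep only pods carrying the benchmark labels, then group by runtime.
pods = [
    {"app": "c-http-server", "test_id": "t001", "runtime_class": "runc", "variant": "normal"},
    {"app": "c-http-server", "test_id": "t001", "runtime_class": "urunc", "variant": "hvt-mirage"},
    {"app": "coredns"},  # unrelated pod: dropped by the relabel "keep" rule
]

# Select only benchmark pods from the current test run
benchmark = [p for p in pods if p.get("app") == "c-http-server" and p.get("test_id") == "t001"]

# Group results by runtime
by_runtime = {}
for p in benchmark:
    by_runtime.setdefault(p["runtime_class"], []).append(p)

print(sorted(by_runtime))  # → ['runc', 'urunc']
```

This is why missing or renamed labels break collection: a pod without `app=c-http-server` never passes the "keep" step, and a wrong `runtime_class` ends up in the wrong group.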
The normal variant refers to the standard Linux container image used for non-unikernel runtimes.
It is used with:
- `runc`
- `runsc`
- `kata-qemu`
- `kata-fc`
| Runtime | Variant | Memory Limit |
|---|---|---|
| runc | - | 16 MiB |
| kata | qemu | 16 MiB |
| kata | fc | 16 MiB |
| runsc | - | 16 MiB |
| urunc | unikraft/qemu | 16 MiB |
| urunc | linux/qemu | 20 MiB |
| urunc | linux/fc | 20 MiB |
| urunc | rumprun/spt | 16 MiB |
| urunc | rumprun/hvt | 16 MiB |
| urunc | mirage/spt | 32 MiB |
| urunc | mirage/hvt | 32 MiB |
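The table above can be encoded as a small lookup for choosing the `--memory` flag per run (a convenience sketch only; the orchestrator itself does not necessarily contain such a table):

```python
# Memory limits per (runtime, variant), transcribed from the table above.
MEMORY_LIMITS = {
    ("runc", None): "16Mi",
    ("kata", "qemu"): "16Mi",
    ("kata", "fc"): "16Mi",
    ("runsc", None): "16Mi",
    ("urunc", "unikraft/qemu"): "16Mi",
    ("urunc", "linux/qemu"): "20Mi",
    ("urunc", "linux/fc"): "20Mi",
    ("urunc", "rumprun/spt"): "16Mi",
    ("urunc", "rumprun/hvt"): "16Mi",
    ("urunc", "mirage/spt"): "32Mi",
    ("urunc", "mirage/hvt"): "32Mi",
}

print(MEMORY_LIMITS[("urunc", "mirage/hvt")])  # → 32Mi
```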
Example command:
```
python3 script/orchestrator.py \
  --chart ./charts/http-server-bench \
  --release chttp-urunc-hvt-mirage-t001 \
  --namespace default \
  --memory 32Mi \
  --set replicas=100 \
  --set runtimeClassName=urunc \
  --set image='REGISTRY/IMAGE:TAG' \
  --set testId=t001 \
  --set variant=hvt-mirage \
  --selector 'app=c-http-server,test_id=t001,runtime_class=urunc,variant=hvt-mirage' \
  --test-id t001 \
  --runtime-class urunc \
  --variant hvt-mirage
```

Replace `'REGISTRY/IMAGE:TAG'` with a container image compatible with the selected runtime. For urunc variants, this requires images built accordingly (using bunny).
After each run, the script waits before starting the next one:
```python
run(["helm", "uninstall", args.release, "-n", args.namespace])
time.sleep(45)
```

This delay ensures:
- old pods are fully deleted
- Prometheus updates its targets
- blackbox exporter stops probing old pods
If runs overlap, increase the sleep time.
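An alternative to tuning a fixed sleep is to poll until cleanup is actually observed. A minimal sketch with an injected check function (in the real orchestrator, `check()` would query the Kubernetes API or run `kubectl get pods -l <selector>` and return `True` once no pods remain):

```python
import time

def wait_until(check, timeout: float = 120.0, interval: float = 2.0) -> bool:
    """Poll check() until it returns True or timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Stand-in check simulating a pod count that drains to zero
remaining = [3, 1, 0]
print(wait_until(lambda: remaining.pop(0) == 0, timeout=10, interval=0.01))  # → True
```

Polling still needs a generous timeout, but it avoids both overlap (sleep too short) and wasted idle time (sleep too long).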
The setup measures:
- Pod readiness (Kubernetes API)
- HTTP availability (blackbox exporter)
- Timeline of pods becoming ready/responding
All metrics are sampled every 0.5 seconds.
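The 0.5-second sampling cadence can be sketched as a drift-compensated loop (illustrative only; in this setup the sampling is done by the Prometheus scrape interval and the orchestrator, not by this snippet):

```python
import time

def sample(collect, period: float = 0.5, count: int = 5) -> list:
    """Call collect() every `period` seconds, compensating for drift
    by scheduling against an absolute deadline rather than sleeping
    a fixed amount after each sample."""
    samples = []
    next_t = time.monotonic()
    for _ in range(count):
        samples.append(collect())
        next_t += period
        time.sleep(max(0.0, next_t - time.monotonic()))
    return samples

print(sample(lambda: "ready", period=0.01, count=3))  # → ['ready', 'ready', 'ready']
```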
Example dashboards are provided in:
grafana-dashboards/
They visualize:
- readiness timelines
- HTTP availability timelines
- latency breakdowns
The dashboards are exported from the original setup and include example data and datasource references.
They require adjustments after import, such as:
- updating the Prometheus datasource
- adapting datasource UIDs
No secrets or tokens are included in this repository.
All configs under prometheus-config/ are templates.
Replace placeholder values before use.