Runtime Benchmarking on Raspberry Pi 5 (K3s)

This repository contains the setup and scripts used to benchmark different container runtimes on a Raspberry Pi 5 using Kubernetes (K3s).


Overview

The setup consists of two parts:

Node (Raspberry Pi 5)

  • Single-node K3s cluster
  • Runs the benchmark workload (C HTTP server)
  • Deployments use different RuntimeClass values:
    • runc
    • runsc (gVisor)
    • kata (QEMU / Firecracker)
    • urunc (Linux + unikernel variants)

Monitoring (remote machine)

  • Prometheus runs on a separate machine
  • Uses Kubernetes service discovery to find benchmark pods on the RPi cluster
  • Uses blackbox exporter to probe pods over HTTP

Repository structure

charts/                     # Helm chart for HTTP server workload
script/orchestrator.py      # Benchmarking script
prometheus-config/          # Example configs for remote Prometheus
manifests/remote-prometheus # RBAC for remote Prometheus access
grafana-dashboards/         # Example dashboards

RuntimeClass setup

Before running the benchmarks, make sure the required RuntimeClass resources are already deployed on the cluster.

For example, depending on the runtimes you want to test, this may include:

  • runc
  • runsc
  • kata-qemu
  • kata-fc
  • urunc

The benchmark chart expects runtimeClassName to refer to an existing RuntimeClass.

You can verify the available runtime classes with:

kubectl get runtimeclass
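
If a runtime class is missing, it can be registered with a manifest like the following sketch (kata-qemu shown as an example; the handler name must match the runtime configured in containerd on the node):

```yaml
# Sketch of a RuntimeClass manifest. The handler value is an assumption
# and must match the containerd runtime configuration on the node.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-qemu
handler: kata-qemu
```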

Monitoring namespace

The Prometheus and blackbox exporter configuration assumes that monitoring components are deployed in the monitoring namespace.

Create it if it does not already exist:

kubectl create namespace monitoring

Prometheus stack

The example Prometheus configuration assumes Prometheus is deployed using kube-prometheus-stack.

The files under prometheus-config/ are example Helm values used with:

  • kube-prometheus-stack
  • prometheus-blackbox-exporter

They are templates and must be adapted to the target environment before use.
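
As an illustration, the HTTP probing relies on a standard blackbox exporter http module; a minimal prometheus-blackbox-exporter values sketch might look like the following (module name and fields are shown as an assumption, not copied from the repository configs):

```yaml
# Illustrative prometheus-blackbox-exporter Helm values; adapt to the
# actual files under prometheus-config/.
config:
  modules:
    http_2xx:
      prober: http
      timeout: 2s
      http:
        preferred_ip_protocol: ip4
```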


Remote Prometheus setup

On the Raspberry Pi cluster:

kubectl apply -f manifests/remote-prometheus/

This creates:

  • ServiceAccount
  • ClusterRole + ClusterRoleBinding
  • token Secret used by Prometheus to authenticate to the cluster

On the Prometheus machine:

Use:

prometheus-config/values-prometheus-remote.example.yaml
prometheus-config/additional-scrape-config.example.yaml

Replace the following placeholders in them:

  • <RPI5_K3S_API_IP>
  • <BLACKBOX_EXPORTER_IP>
  • <PROMETHEUS_IP>
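
For orientation, a scrape job using these placeholders might look like the following sketch (field values and paths are illustrative assumptions; the real settings live in additional-scrape-config.example.yaml):

```yaml
# Illustrative scrape job discovering pods on the remote K3s API,
# authenticating with the ServiceAccount token created above.
- job_name: rpi5-benchmark-pods
  kubernetes_sd_configs:
    - role: pod
      api_server: https://<RPI5_K3S_API_IP>:6443
      bearer_token_file: /etc/prometheus/secrets/rpi5-token/token
      tls_config:
        insecure_skip_verify: true
```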

Workload deployment

The workload is deployed via Helm:

charts/http-server-bench/

Each deployment is labeled for Prometheus discovery and filtering.

Required labels

These labels are critical:

app=c-http-server
test_id=<test_id>
runtime_class=<runtime>
variant=<variant>

Prometheus relabeling rules use them to:

  • select only benchmark pods
  • group results by runtime
  • distinguish different test runs

If labels are missing or changed, metrics will not be collected correctly.
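
For example, relabeling rules keyed on these labels typically look like the following sketch (rule details are illustrative, not copied from the repository configs):

```yaml
relabel_configs:
  # Keep only benchmark pods
  - source_labels: [__meta_kubernetes_pod_label_app]
    regex: c-http-server
    action: keep
  # Copy the grouping labels onto the scraped metrics
  - source_labels: [__meta_kubernetes_pod_label_test_id]
    target_label: test_id
  - source_labels: [__meta_kubernetes_pod_label_runtime_class]
    target_label: runtime_class
  - source_labels: [__meta_kubernetes_pod_label_variant]
    target_label: variant
```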


Workload variants

The normal variant refers to the standard Linux container image used for non-unikernel runtimes.

It is used with:

  • runc
  • runsc
  • kata-qemu
  • kata-fc

Workload memory limits per runtime

Runtime   Variant         Memory Limit
runc      -               16 MiB
kata      qemu            16 MiB
kata      fc              16 MiB
runsc     -               16 MiB
urunc     unikraft/qemu   16 MiB
urunc     linux/qemu      20 MiB
urunc     linux/fc        20 MiB
urunc     rumprun/spt     16 MiB
urunc     rumprun/hvt     16 MiB
urunc     mirage/spt      32 MiB
urunc     mirage/hvt      32 MiB
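
In the chart, these limits end up as standard Kubernetes resource limits on the workload pods; a sketch of the rendered pod spec fragment (field layout is standard Kubernetes, the value follows the table above):

```yaml
# Fragment of a pod spec with the per-runtime memory limit applied
# (32Mi shown, matching the mirage variants above).
resources:
  limits:
    memory: 32Mi
```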

Running a benchmark

Example command:

python3 script/orchestrator.py \
  --chart ./charts/http-server-bench \
  --release chttp-urunc-hvt-mirage-t001 \
  --namespace default \
  --memory 32Mi \
  --set replicas=100 \
  --set runtimeClassName=urunc \
  --set image='REGISTRY/IMAGE:TAG' \
  --set testId=t001 \
  --set variant=hvt-mirage \
  --selector 'app=c-http-server,test_id=t001,runtime_class=urunc,variant=hvt-mirage' \
  --test-id t001 \
  --runtime-class urunc \
  --variant hvt-mirage

Replace 'REGISTRY/IMAGE:TAG' with a container image compatible with the selected runtime. For urunc variants, this requires images built accordingly (using bunny).
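
The --selector string above simply joins the required labels; a small helper (hypothetical, not part of orchestrator.py) makes that construction explicit:

```python
def build_selector(test_id: str, runtime_class: str, variant: str,
                   app: str = "c-http-server") -> str:
    """Build the label selector used to target benchmark pods."""
    labels = {
        "app": app,
        "test_id": test_id,
        "runtime_class": runtime_class,
        "variant": variant,
    }
    return ",".join(f"{k}={v}" for k, v in labels.items())

# build_selector("t001", "urunc", "hvt-mirage")
# → "app=c-http-server,test_id=t001,runtime_class=urunc,variant=hvt-mirage"
```

This matches the selector passed in the example command, so the same parameters can drive both the Helm values and the pod selection.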

Important behavior

After each run, the script waits before starting the next one:

run(["helm", "uninstall", args.release, "-n", args.namespace])  # tear down the release
time.sleep(45)                                                  # let pods and targets settle

This delay ensures:

  • old pods are fully deleted
  • Prometheus updates its targets
  • blackbox exporter stops probing old pods

If runs overlap, increase the sleep time.

Metrics collected

The setup measures:

  • Pod readiness (Kubernetes API)
  • HTTP availability (blackbox exporter)
  • Timeline of pods becoming ready/responding

All metrics are sampled every 0.5 seconds.

Grafana dashboards

Example dashboards are provided in:

grafana-dashboards/

They visualize:

  • readiness timelines
  • HTTP availability timelines
  • latency breakdowns

The dashboards are exported from the original setup and include example data and datasource references.

They require adjustments after import, such as:

  • updating the Prometheus datasource
  • adapting datasource UIDs

Notes

No secrets or tokens are included in this repository.

All configs under prometheus-config/ are templates.

Replace placeholder values before use.
