# Kubernetes Deployment Example
Scott Robertson edited this page Jan 18, 2020
Here is an example of how to deploy the Faktory Server to Kubernetes.

This manifest tells Kubernetes to deploy a single replica of the Faktory Server. A few things to note:

- We mount a volume to store Faktory's data so it persists across restarts
- We mount the ConfigMap as a volume
- A sidecar watches that ConfigMap and, when it changes, sends a SIGHUP to the main Faktory process
- The deployment strategy is Recreate, so we only ever run one instance of Faktory at a time
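The sidecar's change detection boils down to hashing every file in the mounted config directory and comparing against the previous hash. A standalone sketch of that technique, using a scratch directory in place of the mounted ConfigMap volume:

```shell
#!/bin/sh
# Sketch of the sidecar's change-detection technique, run against a
# temporary directory instead of the mounted ConfigMap volume.
dir=$(mktemp -d)
echo 'schedule = "*/1 * * * *"' > "$dir/cron.toml"

# Hash of every file's hash: changes when any file is added, removed, or edited.
sum() {
  find "$dir" -type f -exec md5sum {} \; | sort -k 2 | md5sum
}

before=$(sum)
echo 'concurrency = 1' > "$dir/throttles.toml"   # simulate a ConfigMap update
after=$(sum)

if [ "$before" != "$after" ]; then
  echo "changes detected"   # this is where the sidecar would SIGHUP Faktory
fi
rm -rf "$dir"
```

Running this prints `changes detected`, since adding a file changes the aggregate checksum.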
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: faktory-server
  labels:
    app: faktory-server
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: faktory-server
  template:
    metadata:
      labels:
        app: faktory-server
    spec:
      shareProcessNamespace: true
      terminationGracePeriodSeconds: 10
      containers:
      - name: faktory-server-config-watcher
        image: busybox
        command:
        - sh
        - "-c"
        - |
          sum() {
            current=$(find /conf -type f -exec md5sum {} \; | sort -k 2 | md5sum)
          }
          sum
          last="$current"
          while true; do
            sum
            if [ "$current" != "$last" ]; then
              pid=$(pidof faktory)
              echo "$(date -Iseconds) [conf.d] changes detected - signaling Faktory with pid=$pid"
              kill -HUP "$pid"
              last="$current"
            fi
            sleep 1
          done
        volumeMounts:
        - name: faktory-server-configs-volume
          mountPath: "/conf"
      - image: docker.contribsys.com/contribsys/faktory-ent:1.2.0
        name: faktory-server
        command:
        - "/faktory"
        - "-b"
        - ":7419"
        - "-w"
        - ":7420"
        - "-e"
        - production
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: production-config
        volumeMounts:
        - name: faktory-server-configs-volume
          mountPath: "/etc/faktory/conf.d"
        - name: faktory-server-storage-volume
          mountPath: "/var/lib/faktory"
      volumes:
      - name: faktory-server-configs-volume
        configMap:
          name: faktory-server-configmap
      - name: faktory-server-storage-volume
        persistentVolumeClaim:
          claimName: faktory-server-storage-pv-claim
      imagePullSecrets:
      - name: docker-registry-key
```
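Assuming the manifests are saved under a local `k8s/` directory (a hypothetical layout), a typical apply-and-verify sequence looks like:

```shell
# Apply the Deployment (plus the ConfigMap, PVC, and Service shown below)
kubectl apply -f k8s/

# Wait for the single replica to come up; Recreate means a brief gap on updates
kubectl rollout status deployment/faktory-server

# Both containers (faktory-server and the config watcher) should be Running
kubectl get pods -l app=faktory-server

# Tail the sidecar to see it signal Faktory after ConfigMap changes
kubectl logs -l app=faktory-server -c faktory-server-config-watcher -f
```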
An example ConfigMap that will be mounted into the deployment above.
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: faktory-server-configmap
data:
  cron.toml: |2
    [[cron]]
    schedule = "*/1 * * * *"
    [cron.job]
    queue = "default"
    reserve_for = 60
    retry = -1
    type = "Cron::SomeRandomCron"
  throttles.toml: |2
    [throttles.default]
    concurrency = 1
    timeout = 60
  statsd.toml: |2
    [statsd]
    location = "datadog-agent-svc.default.svc.cluster.local:8125"
    namespace = "faktory"
    tags = ["env:production"]
```
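To exercise the reload path, edit one of the keys and watch the sidecar react. This assumes the manifest is saved as `configmap.yaml` (a hypothetical filename); note that the kubelet can take a minute or so to propagate ConfigMap updates into the mounted volume:

```shell
# Edit cron.toml (e.g. change the schedule), then re-apply the ConfigMap
kubectl apply -f configmap.yaml

# Once the kubelet refreshes the mounted volume, the sidecar's checksum loop
# fires and sends SIGHUP to Faktory; its log should show a line like:
#   ...[conf.d] changes detected - signaling Faktory with pid=...
kubectl logs -l app=faktory-server -c faktory-server-config-watcher --tail=5
```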
The PersistentVolumeClaim for the volume that stores the Faktory data.
```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: faktory-server-storage-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: [storage_class_here]
  resources:
    requests:
      storage: 5Gi
```
This Service exposes the Faktory Server to the rest of your cluster. You can then use, for example, `tcp://faktory-server-svc.default.svc.cluster.local:7419` as the host for your Faktory clients.
```yaml
---
kind: Service
apiVersion: v1
metadata:
  name: faktory-server-svc
spec:
  selector:
    app: faktory-server
  ports:
  - name: faktory
    protocol: TCP
    port: 7419
  - name: dashboard
    protocol: TCP
    port: 7420
```
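A quick way to check the Service from inside the cluster is to probe the port with netcat from a debug pod; clients then typically read the server location from the `FAKTORY_URL` environment variable (Faktory's standard convention):

```shell
# From a debug pod in the cluster: confirm the Service resolves and accepts
# TCP connections on the Faktory port (-z: connect only, -w: timeout seconds)
nc -z -w 2 faktory-server-svc.default.svc.cluster.local 7419 && echo "faktory reachable"

# Point worker pods at the server via the standard env var, e.g.:
export FAKTORY_URL=tcp://faktory-server-svc.default.svc.cluster.local:7419
```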