This guide explains how to add database support to your project using the k8s-ephemeral-environments platform.
If you're using the reusable workflow with `k8s-ee.yaml`, enabling a database is simple:
```yaml
# k8s-ee.yaml
projectId: myapp
databases:
  postgresql: true  # Enable PostgreSQL
  # mongodb: true   # Enable MongoDB
  # redis: true     # Enable Redis
  # minio: true     # Enable MinIO (S3-compatible storage)
  # mariadb: true   # Enable MariaDB
```

Connection details are automatically injected as environment variables (e.g., `DATABASE_URL` for PostgreSQL).
See the Configuration Reference for all options.
The platform provides a database-agnostic approach where:
- Platform team installs database operators cluster-wide
- Project teams enable databases via simple configuration
- Operators auto-generate credentials and manage lifecycle
| Database | Use Case | Operator |
|---|---|---|
| PostgreSQL | Relational data, most applications | CloudNativePG |
| MariaDB | Relational data (alternative to PostgreSQL) | Bitnami MariaDB |
| MongoDB | Document store, flexible schemas | MongoDB Community |
| MinIO | S3-compatible object storage | MinIO Operator |
| Redis | Caching, sessions, queues | Simple deployment |
Note: Only one SQL database (PostgreSQL OR MariaDB) should be enabled at a time.
In your project's `charts/my-app/Chart.yaml`:

```yaml
dependencies:
  - name: postgresql
    version: "1.0.0"
    repository: "file://../../../charts/postgresql"
    condition: postgresql.enabled
```

In your project's `charts/my-app/values.yaml`:

```yaml
postgresql:
  enabled: true
  storage:
    size: 1Gi  # Optional, default is 1Gi
```

In your `charts/my-app/templates/deployment.yaml`:

```yaml
spec:
  containers:
    - name: {{ .Chart.Name }}
      env:
        {{- if .Values.postgresql.enabled }}
        {{- include "postgresql.envVars" .Subcharts.postgresql | nindent 8 }}
        {{- end }}
```

Note: Use `.Subcharts.postgresql` to access the sub-chart context for the `envVars` template.
```javascript
// Node.js
const { Pool } = require('pg');
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
```

```python
# Python
import os
from sqlalchemy import create_engine
engine = create_engine(os.environ['DATABASE_URL'])
```

Java Spring Boot (`application.properties`):

```properties
spring.datasource.url=${DATABASE_URL}
```

```yaml
# values.yaml
postgresql:
  enabled: true
  storage:
    size: 1Gi
    storageClass: local-path  # Optional, uses cluster default if empty
  resources:
    requests:
      memory: 256Mi
      cpu: 100m
    limits:
      memory: 512Mi
      cpu: 500m
```

Environment Variables:
| Variable | Description | Example |
|---|---|---|
| `DATABASE_URL` | Full connection URI | `postgresql://user:pass@host:5432/db` |
| `PGHOST` | Database host | `myapp-postgresql` |
| `PGPORT` | Database port | `5432` |
| `PGDATABASE` | Database name | `app` |
| `PGUSER` | Database user | `app` |
| `PGPASSWORD` | Database password | `<auto-generated>` |
```yaml
# values.yaml
mongodb:
  enabled: true
  storage:
    size: 1Gi
    storageClass: ""  # Optional, uses cluster default if empty
  resources:
    requests:
      memory: 256Mi
      cpu: 100m
    limits:
      memory: 512Mi
      cpu: 500m
```

Environment Variables:
| Variable | Description | Example |
|---|---|---|
| `MONGODB_URI` | Full connection string (uses `/admin` for auth) | `mongodb://user:pass@host:27017/admin` |
| `MONGODB_URL` | Alias for `MONGODB_URI` | Same as above |
| `MONGODB_DATABASE` | Database name for application data | `app` |

Note: The connection string uses `/admin` for MongoDB authentication. Use `MONGODB_DATABASE` to specify the actual database for your collections.
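The split between the auth URI and the application database can be handled once at startup. A minimal sketch (pure stdlib; the commented usage assumes the `pymongo` package, which is not part of the platform):

```python
import os

def mongo_target() -> tuple[str, str]:
    """Return (connection URI, application database name).

    MONGODB_URI authenticates against /admin; application
    collections live in the database named by MONGODB_DATABASE.
    """
    uri = os.environ["MONGODB_URI"]
    database = os.environ.get("MONGODB_DATABASE", "app")
    return uri, database

# With pymongo installed:
# from pymongo import MongoClient
# uri, database = mongo_target()
# users = MongoClient(uri)[database]["users"]
```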
```yaml
# values.yaml
minio:
  enabled: true
  storage:
    size: 5Gi
    storageClass: ""  # Optional, uses cluster default if empty
  resources:
    requests:
      memory: 256Mi
      cpu: 100m
    limits:
      memory: 512Mi
      cpu: 500m
```

Environment Variables:
| Variable | Description | Example |
|---|---|---|
| `S3_ENDPOINT` | MinIO endpoint URL | `http://myapp-minio:9000` |
| `S3_ACCESS_KEY` | Access key | `<auto-generated>` |
| `S3_SECRET_KEY` | Secret key | `<auto-generated>` |
| `S3_BUCKET` | Default bucket name | `data` |
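These variables map directly onto an S3 client configuration. A sketch that collects them into keyword arguments (pure stdlib; the commented usage assumes the `boto3` package, which is not part of the platform):

```python
import os

def s3_client_kwargs() -> dict:
    """Collect MinIO connection settings from the injected variables."""
    return {
        "endpoint_url": os.environ["S3_ENDPOINT"],
        "aws_access_key_id": os.environ["S3_ACCESS_KEY"],
        "aws_secret_access_key": os.environ["S3_SECRET_KEY"],
    }

# With boto3 installed:
# import boto3
# s3 = boto3.client("s3", **s3_client_kwargs())
# s3.upload_file("report.pdf", os.environ["S3_BUCKET"], "report.pdf")
```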
```yaml
# values.yaml
mariadb:
  enabled: true
  storage:
    size: 1Gi
    storageClass: ""  # Optional, uses cluster default if empty
  resources:
    requests:
      memory: 256Mi
      cpu: 100m
    limits:
      memory: 512Mi
      cpu: 500m
```

Environment Variables:
| Variable | Description | Example |
|---|---|---|
| `DATABASE_TYPE` | Database type indicator | `mariadb` |
| `MYSQL_URL` | Full connection URI | `mysql://user:pass@host:3306/db` |
| `MYSQL_HOST` | Database host | `myapp-mariadb` |
| `MYSQL_PORT` | Database port | `3306` |
| `MYSQL_DATABASE` | Database name | `app` |
| `MYSQL_USER` | Database user | `app` |
| `MYSQL_PASSWORD` | Database password | `<auto-generated>` |
Note: Only one SQL database (PostgreSQL OR MariaDB) should be enabled at a time. The `DATABASE_TYPE` environment variable is automatically set based on which database is enabled.
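Because `DATABASE_TYPE` identifies the active SQL backend, application code can select the right connection URL without hardcoding either database. A minimal sketch (assumes PostgreSQL injects `DATABASE_URL`, as documented above):

```python
import os

def sql_url() -> str:
    """Pick the connection URL for whichever SQL database is enabled."""
    if os.environ.get("DATABASE_TYPE") == "mariadb":
        return os.environ["MYSQL_URL"]
    # PostgreSQL injects DATABASE_URL
    return os.environ["DATABASE_URL"]
```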
Using MariaDB in Your Application:
```typescript
// TypeScript/Node.js with Drizzle ORM
import { drizzle } from 'drizzle-orm/mysql2';
import mysql from 'mysql2/promise';

const pool = mysql.createPool(process.env.MYSQL_URL);
const db = drizzle(pool);
```

```python
# Python with SQLAlchemy
import os
from sqlalchemy import create_engine
engine = create_engine(os.environ['MYSQL_URL'])
```

```yaml
# values.yaml
redis:
  enabled: true
  auth:
    enabled: true  # Enable password authentication (default: true)
  resources:
    requests:
      memory: 64Mi
      cpu: 50m
    limits:
      memory: 128Mi
      cpu: 200m
```

Environment Variables:
| Variable | Description | When Available |
|---|---|---|
| `REDIS_HOST` | Redis host | Always |
| `REDIS_PORT` | Redis port | Always |
| `REDIS_PASSWORD` | Redis password | When `auth.enabled: true` |
| `REDIS_URL` | Full connection URL | When `auth.enabled: false` |
Constructing Redis URL with Authentication:
When `auth.enabled: true`, you need to construct the Redis URL in your application:
```javascript
// Node.js
const redisUrl = process.env.REDIS_URL ||
  `redis://:${process.env.REDIS_PASSWORD}@${process.env.REDIS_HOST}:${process.env.REDIS_PORT}`;
```

```python
# Python
import os
redis_url = os.environ.get('REDIS_URL') or \
    f"redis://:{os.environ['REDIS_PASSWORD']}@{os.environ['REDIS_HOST']}:{os.environ['REDIS_PORT']}"
```

```java
// Java
String redisUrl = System.getenv("REDIS_URL") != null ? System.getenv("REDIS_URL") :
    String.format("redis://:%s@%s:%s",
        System.getenv("REDIS_PASSWORD"),
        System.getenv("REDIS_HOST"),
        System.getenv("REDIS_PORT"));
```

Projects can enable multiple databases simultaneously:
```yaml
# values.yaml
postgresql:
  enabled: true
redis:
  enabled: true  # For caching
minio:
  enabled: true  # For file uploads
```

```yaml
# deployment.yaml
env:
  {{- if .Values.postgresql.enabled }}
  {{- include "postgresql.envVars" .Subcharts.postgresql | nindent 8 }}
  {{- end }}
  {{- if .Values.redis.enabled }}
  {{- include "redis.envVars" .Subcharts.redis | nindent 8 }}
  {{- end }}
  {{- if .Values.minio.enabled }}
  {{- include "minio.envVars" .Subcharts.minio | nindent 8 }}
  {{- end }}
```

| Approach | Best For | Schema Versioning | Supports Changes |
|---|---|---|---|
| Drizzle ORM Migrations | Production apps, evolving schemas | Yes | Yes |
| Bootstrap SQL | Simple demos, static schemas | No | No |
Recommendation: Use migrations for production-grade applications. See Database Migrations Guide for full documentation.
Use Drizzle ORM for type-safe, versioned migrations:
```typescript
// Run migrations at app startup
import { migrate } from 'drizzle-orm/node-postgres/migrator';

await migrate(db, { migrationsFolder: './drizzle' });
```

See Database Migrations Guide and Database Seeding Guide for complete setup instructions.
For simple demos with static schemas that never change, use CloudNativePG's bootstrap configuration:
```yaml
# values.yaml
postgresql:
  enabled: true
  bootstrap:
    # Run on application database after it's created
    postInitApplicationSQL:
      - |
        CREATE TABLE IF NOT EXISTS users (
          id SERIAL PRIMARY KEY,
          name VARCHAR(255)
        );
        INSERT INTO users (name) VALUES ('Test User');
    # Run on postgres database (for extensions, etc.)
    initSQL:
      - CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
```

Important Notes:
- Use named dollar-quote delimiters (`$func$`) instead of `$$` for function bodies - CloudNativePG's template processing consumes `$$`.
- Bootstrap SQL runs as the `postgres` superuser, but your app connects as the `app` user. You must explicitly grant permissions:

```sql
-- Add at the end of your postInitApplicationSQL
GRANT ALL PRIVILEGES ON your_table TO app;
GRANT USAGE, SELECT ON SEQUENCE your_table_id_seq TO app;
```

See `demo-app/migrations/` for documented SQL reference files.
Ephemeral environments use conservative defaults to fit within PR namespace quotas:
| Database | Memory Request | Memory Limit | CPU Request | CPU Limit |
|---|---|---|---|---|
| PostgreSQL | 256Mi | 512Mi | 100m | 500m |
| MariaDB | 256Mi | 512Mi | 100m | 500m |
| MongoDB | 256Mi | 512Mi | 100m | 500m |
| Redis | 64Mi | 128Mi | 50m | 200m |
| MinIO | 256Mi | 512Mi | 100m | 500m |
Adjust in values.yaml if needed, but stay within namespace ResourceQuota.
```shell
# Check pod status
kubectl get pods -n <namespace> -l app.kubernetes.io/component=database

# Check pod events
kubectl describe pod <pod-name> -n <namespace>

# Check operator logs
kubectl logs -n cnpg-system -l app.kubernetes.io/name=cloudnative-pg
```

- Verify the database pod is running and ready
- Check the service exists: `kubectl get svc -n <namespace>`
- Verify credentials secret exists: `kubectl get secrets -n <namespace>`
Operators generate secrets asynchronously. Add a wait or init container:
```yaml
initContainers:
  - name: wait-for-db
    image: busybox
    command: ['sh', '-c', 'until nc -z $PGHOST $PGPORT; do sleep 2; done']
    env:
      - name: PGHOST
        value: "{{ .Release.Name }}-postgresql"
      - name: PGPORT
        value: "5432"
```

Ephemeral environments use emptyDir storage by default - data is intentionally lost when the PR closes. This is correct behavior for PR environments.
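As an application-side alternative to an init container, startup code can poll the database port itself. A minimal stdlib sketch, taking host and port from the injected `PGHOST`/`PGPORT` variables:

```python
import socket
import time

def wait_for_db(host: str, port: int, timeout: float = 60.0) -> bool:
    """Poll until a TCP connection to the database succeeds, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(1)
    return False

# At startup, e.g.:
# import os
# if not wait_for_db(os.environ["PGHOST"], int(os.environ["PGPORT"])):
#     raise SystemExit("database never became reachable")
```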
For staging environments that need persistence, configure a PVC:
```yaml
postgresql:
  storage:
    size: 5Gi
    class: local-path
    persistent: true  # Use PVC instead of emptyDir
```

- Credentials are auto-generated by operators (cryptographically secure)
- Secrets exist only within the PR namespace
- Namespace deletion cascades to all secrets
- No credentials are committed to git
- Cross-namespace access is prevented by RBAC
- NetworkPolicies allow same-namespace communication (app to database)
- Cross-namespace database access is blocked by default
The database charts use Helm's `lookup` function to preserve credentials across `helm upgrade` operations. This means:
- First install: Random credentials are generated
- Subsequent upgrades: Existing credentials are preserved
- Namespace recreation: New credentials are generated (expected for ephemeral environments)
The `lookup` function has limitations you should be aware of:

| Operation | Behavior |
|---|---|
| `helm install` | Generates new credentials |
| `helm upgrade` | Preserves existing credentials |
| `helm template` | Generates new credentials (no cluster access) |
| `helm diff` | Shows credential changes (false positive) |
| `helm lint` | Generates new credentials |
| `--dry-run` | Generates new credentials |
Why this matters:
- `helm diff` will always show password changes even when no actual change will occur
- CI/CD validation using `helm template` will produce different manifests than actual deployments
- Users running Helm without cluster RBAC permissions to read secrets will get new credentials
Mitigation: This is acceptable for ephemeral PR environments where:
- Each PR gets a fresh namespace anyway
- Credentials don't need to persist across PR updates (database is recreated)
- The deployment pipeline runs with cluster access (actual upgrades work correctly)
- Check the user story docs
- Review task breakdowns
- Open an issue on the repository