diff --git a/README.md b/README.md
index f28970d..b603499 100644
--- a/README.md
+++ b/README.md
@@ -1,46 +1,63 @@
-> [!WARNING]
-> This chart is currently redesigned (branch name [next](https://github.com/8gears/n8n-helm-chart/tree/next)) to better support the changed config option of n8n. Please follow the issue https://github.com/8gears/n8n-helm-chart/pull/125 for more details and the current progress of development. New Chart should be ready in January 2025.
-
-
-> [!IMPORTANT]
-> The n8n Helm chart is growing, and we need your help! We're looking for passionate maintainers and contributors to improve its automation, governance, and documentation. If you're interested in making a difference, [join the discussion](https://github.com/8gears/n8n-helm-chart/discussions/90).
-
-
-
+> [!NOTE]
+> The n8n Helm chart is growing and needs your help!
+> We're looking for additional passionate maintainers and contributors
+> to help improve and maintain this chart: its governance, development, documentation, and CI/CD workflows.
+> If you're interested in making a difference,
+> [join the discussion](https://github.com/8gears/n8n-helm-chart/discussions/90).
+
+> [!WARNING]
+> Version 1.0.0 of this chart includes breaking changes and is not backwards compatible with previous versions.
+> Please review the migration guide below before upgrading.
 # n8n Helm Chart for Kubernetes

 [n8n](https://github.com/n8n-io/n8n) is an extendable workflow automation tool.

-
 [![Artifact HUB](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/n8n)](https://artifacthub.io/packages/helm/open-8gears/n8n)
-
-The Helm chart source code location
-is [github.com/8gears/n8n-helm-chart](https://github.com/8gears/n8n-helm-chart)
-
-
-
+The Helm chart source code is located at [github.com/8gears/n8n-helm-chart](https://github.com/8gears/n8n-helm-chart).

 ## Requirements

 Before you start, make sure you have the following tools ready:

 - Helm >= 3.8
-- external Postgres DB or external MySQL | embedded SQLite (bundled with n8n)
+- external Postgres DB or embedded SQLite (SQLite is bundled with n8n)
 - Helmfile (Optional)

-## Configuration
+## Overview
+
+The `values.yaml` file is divided into multiple n8n- and Kubernetes-specific sections:
+
+1. Global and chart-wide values, like the image repository, image tag, etc.
+2. Ingress (default is nginx, but you can change it to your own ingress controller)
+3. Main n8n app configuration + Kubernetes-specific settings
+4. Worker-related settings + Kubernetes-specific settings
+5. Webhook-related settings + Kubernetes-specific settings
+6. Raw resources to pass through your own manifests, like GatewayAPI, ServiceMonitor, etc.
+7. Redis-related settings + Kubernetes-specific settings
+
+## Setting Configuration Values and Environment Variables
+
+These n8n-specific settings should be added under `main.config:` or `main.secret:` in the `values.yaml` file.
+
+See the [examples](#examples) section and the other examples in the `/examples` directory of this repo.

-The `values.yaml` file is divided into a n8n specific configuration section, and
-a Kubernetes deployment-specific section.
+> [!IMPORTANT]
+> The YAML nodes `config` and `secret` in `values.yaml` are transformed 1:1 into environment variables.

-The shown values represent Helm Chart defaults, not the application defaults.
-In many cases, the Helm Chart defaults are empty.
-The comments behind the values provide a description and display the application
-default.
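+For a rough idea of what happens under the hood (a simplified sketch based on this chart's templates, not verbatim
+rendered output): everything under `main.config` is rendered into a ConfigMap and everything under `main.secret`
+into a Secret (values base64-encoded), and both are injected into the n8n containers via `envFrom`.
+For the example below, the rendered ConfigMap data would look roughly like this:
+
+```yaml
+# Sketch of the ConfigMap data rendered from the values example below.
+# The ConfigMap is named app-config-<release fullname>; the exact name depends on your release.
+N8N_ENCRYPTION_KEY: "my_secret"
+DB_TYPE: "postgresdb"
+DB_POSTGRESDB_HOST: "192.168.0.52"
+NODE_FUNCTION_ALLOW_BUILTIN: "*"
+```
+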
+```yaml +main: + config: + n8n: + encryption_key: "my_secret" # ==> turns into ENV: N8N_ENCRYPTION_KEY=my_secret + db: + type: postgresdb # ==> turns into ENV: DB_TYPE=postgresdb + postgresdb: + host: 192.168.0.52 # ==> turns into ENV: DB_POSTGRESDB_HOST=192.168.0.52 + node: + function_allow_builtin: "*" # ==> turns into ENV: NODE_FUNCTION_ALLOW_BUILTIN="*" +``` -These n8n config options should be attached below the root elements `secret:` -or `config:` in the `values.yaml`. -(See the [typical-values-example](#typical-values-example) section). +Consult the [n8n Environment Variables Documentation]( https://docs.n8n.io/hosting/configuration/environment-variables/) You decide what should go into `secret` and what should be a `config`. There is no restriction, mix and match as you like. @@ -50,280 +67,704 @@ There is no restriction, mix and match as you like. Install chart ```shell -helm install my-n8n oci://8gears.container-registry.com/library/n8n --version 0.25.2 +helm install my-n8n oci://8gears.container-registry.com/library/n8n --version 1.0.0 ``` -# N8N Specific Config Section +# Examples -Every possible n8n config value can be set, -even if it is not mentioned in the excerpt below. -All application config settings are described in the: -[n8n configuration options](https://github.com/n8n-io/n8n/blob/master/packages/cli/src/config/schema.ts). -Treat the n8n provided config documentation as the source of truth, -this Charts just forwards everything to n8n. +A typical example of a config in combination with a secret. +You can find various other examples in the `examples` directory of this repository. ```yaml -database: - type: # Type of database to use - Other possible types ['sqlite', 'mariadb', 'mysqldb', 'postgresdb'] - default: sqlite - tablePrefix: # Prefix for table names - default: '' - postgresdb: - database: # PostgresDB Database - default: n8n - host: # PostgresDB Host - default: localhost - password: # PostgresDB Password - default: '' - port: # PostgresDB Port - default: 5432 - user: # PostgresDB User - default: root - schema: # PostgresDB Schema - default: public - ssl: - ca: # SSL certificate authority - default: '' - cert: # SSL certificate - default: '' - key: # SSL key - default: '' - rejectUnauthorized: # If unauthorized SSL connections should be rejected - default: true - mysqldb: - database: # MySQL Database - default: n8n - host: # MySQL Host - default: localhost - password: # MySQL Password - default: '' - port: # MySQL Port - default: 3306 - user: # MySQL User - default: root -credentials: - overwrite: - data: # Overwrites for credentials - default: "{}" - endpoint: # Fetch credentials from API - default: '' - -executions: - process: # In what process workflows should be executed - possible values [main, own] - default: own - timeout: # Max run time (seconds) before stopping the workflow execution - default: -1 - maxTimeout: # Max execution time (seconds) that can be set for a workflow individually - default: 3600 - saveDataOnError: # What workflow execution data to save on error - possible values [all , none] - default: all - saveDataOnSuccess: # What workflow execution data to save on success - possible values [all , none] - default: all - saveDataManualExecutions: # Save data of executions when started manually via editor - default: false - pruneData: # Delete data of past executions on a rolling basis - default: false - pruneDataMaxAge: # How old (hours) the execution data has to be to get deleted - default: 336 - pruneDataTimeout: # Timeout (seconds) after 
execution data has been pruned - default: 3600 -generic: - timezone: # The timezone to use - default: America/New_York -path: # Path n8n is deployed to - default: "/" -host: # Host name n8n can be reached - default: localhost -port: # HTTP port n8n can be reached - default: 5678 -listen_address: # IP address n8n should listen on - default: 0.0.0.0 -protocol: # HTTP Protocol via which n8n can be reached - possible values [http , https] - default: http -ssl_key: # SSL Key for HTTPS Protocol - default: '' -ssl_cert: # SSL Cert for HTTPS Protocol - default: '' -security: - excludeEndpoints: # Additional endpoints to exclude auth checks. Multiple endpoints can be separated by colon - default: '' - basicAuth: - active: # If basic auth should be activated for editor and REST-API - default: false - user: # The name of the basic auth user - default: '' - password: # The password of the basic auth user - default: '' - hash: # If password for basic auth is hashed - default: false - jwtAuth: - active: # If JWT auth should be activated for editor and REST-API - default: false - jwtHeader: # The request header containing a signed JWT - default: '' - jwtHeaderValuePrefix: # The request header value prefix to strip (optional) default: '' - jwksUri: # The URI to fetch JWK Set for JWT authentication - default: '' - jwtIssuer: # JWT issuer to expect (optional) - default: '' - jwtNamespace: # JWT namespace to expect (optional) - default: '' - jwtAllowedTenantKey: # JWT tenant key name to inspect within JWT namespace (optional) - default: '' - jwtAllowedTenant: # JWT tenant to allow (optional) - default: '' -endpoints: - rest: # Path for rest endpoint default: rest - webhook: # Path for webhook endpoint default: webhook - webhookTest: # Path for test-webhook endpoint default: webhook-test - webhookWaiting: # Path for test-webhook endpoint default: webhook-waiting -externalHookFiles: # Files containing external hooks. Multiple files can be separated by colon - default: '' -nodes: - exclude: # Nodes not to load - default: "[]" - errorTriggerType: # Node Type to use as Error Trigger - default: n8n-nodes-base.errorTrigger -# the list goes on... +#small deployment with nodeport for local testing or small deployments +main: + config: + n8n: + hide_usage_page: true + secret: + n8n: + encryption_key: "" + resources: + limits: + memory: 2048Mi + requests: + memory: 512Mi + service: + type: NodePort + port: 5678 ``` -### Values +# Values File -The values file consists of n8n specific sections `config` and `secret` where -you paste the n8n config like shown above. +## N8N Specific Config Section + +Every possible n8n config value can be set, +even if it is not mentioned in the excerpt below. +Treat the n8n provided configuration documentation as the source of truth, +this Charts just forwards everything down to the n8n pods. ```yaml -# The n8n related part of the config -config: # Dict with all n8n config options -# database: -# type: postgresdb -# postgresdb: -# database: n8n -# host: localhost # -# existingSecret and secret are exclusive, with existingSecret taking priority. -# existingSecret: "" # Use an existing Kubernetes secret, e.g created by hand or Vault operator. -secret: # Dict with all n8n config options, unlike config the values here will end up in a secret. 
-# database: -# postgresdb: -# password: here_db_root_password - -## -## -## Common Kubernetes Config Settings -persistence: - ## If true, use a Persistent Volume Claim, If false, use emptyDir - ## - enabled: false - type: emptyDir # what type volume, possible options are [existing, emptyDir, dynamic] dynamic for Dynamic Volume Provisioning, existing for using an existing Claim - ## Persistent Volume Storage Class - ## If defined, storageClassName: - ## If set to "-", storageClassName: "", which disables dynamic provisioning - ## If undefined (the default) or set to null, no storageClassName spec is - ## set, choosing the default provisioner. (gp2 on AWS, standard on - ## GKE, AWS & OpenStack) - ## - # storageClass: "-" - ## PVC annotations - # - # If you need this annotation include it under values.yml file and pvc.yml template will add it. - # This is not maintained at Helm v3 anymore. - # https://github.com/8gears/n8n-helm-chart/issues/8 - # - # annotations: - # helm.sh/resource-policy: keep - ## Persistent Volume Access Mode - ## - accessModes: - - ReadWriteOnce - ## Persistent Volume size - ## - size: 1Gi - ## Use an existing PVC - ## - # existingClaim: - -# Set additional environment variables on the Deployment -extraEnv: { } -# Set this if running behind a reverse proxy and the external port is different from the port n8n runs on -# WEBHOOK_TUNNEL_URL: "https://n8n.myhost.com/ - -replicaCount: 1 +# General Config +# + +# default .Chart.Name +nameOverride: + +# default .Chart.Name or .Values.nameOverride +fullnameOverride: + +# +# Common Kubernetes Config Settings for this entire n8n deployment +# image: repository: n8nio/n8n pullPolicy: IfNotPresent # Overrides the image tag whose default is the chart appVersion. tag: "" +imagePullSecrets: [] -imagePullSecrets: [ ] -nameOverride: "" -fullnameOverride: "" - -serviceAccount: - # Specifies whether a service account should be created - create: true - # Annotations to add to the service account - annotations: { } - # The name of the service account to use. - # If not set and create is true, a name is generated using the fullname template - name: "" - -podAnnotations: { } - -podSecurityContext: { } -# fsGroup: 2000 - -securityContext: { } - # capabilities: - # drop: -# - ALL -# readOnlyRootFilesystem: true -# runAsNonRoot: true -# runAsUser: 1000 - -service: - type: ClusterIP - port: 80 - annotations: { } - +# +# Ingress +# ingress: enabled: false - annotations: { } - # kubernetes.io/ingress.class: nginx - # kubernetes.io/tls-acme: "true" + annotations: {} + # define a custom ingress class Name, like "traefik" or "nginx" + className: "" hosts: - - host: chart-example.local - paths: [ ] - tls: [ ] - # - secretName: chart-example-tls - # hosts: - # - chart-example.local - -resources: { } - # We usually recommend not to specify default resources and to leave this as a conscious + - host: workflow.example.com + paths: [] + tls: + - hosts: + - workflow.example.com + secretName: host-domain-cert + +# the main (n8n) application related configuration + Kubernetes specific settings +# The config: {} dictionary is converted to environmental variables in the ConfigMap. +main: + # See https://docs.n8n.io/hosting/configuration/environment-variables/ for all values. + config: + # n8n: + # db: + # type: postgresdb + # postgresdb: + # host: 192.168.0.52 + + # Dictionary for secrets, unlike config:, the values here will end up in the secret file. 
+ # The YAML entry db.postgresdb.password: my_secret is transformed DB_POSTGRESDB_password=bXlfc2VjcmV0 + # See https://docs.n8n.io/hosting/configuration/environment-variables/ + secret: + # n8n: + # if you run n8n stateless, you should provide an encryption key here. + # encryption_key: + # + # database: + # postgresdb: + # password: 'big secret' + + # Extra environmental variables, so you can reference other configmaps and secrets into n8n as env vars. + extraEnv: + # N8N_DB_POSTGRESDB_NAME: + # valueFrom: + # secretKeyRef: + # name: db-app + # key: dbname + # + # N8n Kubernetes specific settings + # + persistence: + # If true, use a Persistent Volume Claim, If false, use emptyDir + enabled: false + # what type volume, possible options are [existing, emptyDir, dynamic] dynamic for Dynamic Volume Provisioning, existing for using an existing Claim + type: emptyDir + # Persistent Volume Storage Class + # If defined, storageClassName: + # If set to "-", storageClassName: "", which disables dynamic provisioning + # If undefined (the default) or set to null, no storageClassName spec is + # set, choosing the default provisioner. (gp2 on AWS, standard on + # GKE, AWS & OpenStack) + # + # storageClass: "-" + # PVC annotations + # + # If you need this annotation include it under `values.yml` file and pvc.yml template will add it. + # This is not maintained at Helm v3 anymore. + # https://github.com/8gears/n8n-helm-chart/issues/8 + # + # annotations: + # helm.sh/resource-policy: keep + # Persistent Volume Access Mode + # + accessModes: + - ReadWriteOnce + # Persistent Volume size + size: 1Gi + # Use an existing PVC + # existingClaim: + + extraVolumes: [] + # - name: db-ca-cert + # secret: + # secretName: db-ca + # items: + # - key: ca.crt + # path: ca.crt + + extraVolumeMounts: [] + # - name: db-ca-cert + # mountPath: /etc/ssl/certs/postgresql + # readOnly: true + + + # Number of desired pods. + replicaCount: 1 + + # here you can specify the deployment strategy as Recreate or RollingUpdate with optional maxSurge and maxUnavailable + # If these options are not set, default values are 25% + # deploymentStrategy: + # type: Recreate | RollingUpdate + # maxSurge: "50%" + # maxUnavailable: "50%" + + deploymentStrategy: + type: "Recreate" + # maxSurge: "50%" + # maxUnavailable: "50%" + + serviceAccount: + # Specifies whether a service account should be created + create: true + # Annotations to add to the service account + annotations: {} + # The name of the service account to use. 
+ # If not set and create is true, a name is generated using the fullname template + name: "" + + podAnnotations: {} + podLabels: {} + + podSecurityContext: + runAsNonRoot: true + runAsUser: 1000 + runAsGroup: 1000 + fsGroup: 1000 + + securityContext: {} + # capabilities: + # drop: + # - ALL + # readOnlyRootFilesystem: true + # runAsNonRoot: true + # runAsUser: 1000 + + # here you can specify lifecycle hooks - it can be used e.g., to easily add packages to the container without building + # your own docker image + # see https://github.com/8gears/n8n-helm-chart/pull/30 + lifecycle: {} + + # here's the sample configuration to add mysql-client to the container + # lifecycle: + # postStart: + # exec: + # command: ["/bin/sh", "-c", "apk add mysql-client"] + + # here you can override a command for main container + # it may be used to override a starting script (e.g., to resolve issues like https://github.com/n8n-io/n8n/issues/6412) or run additional preparation steps (e.g., installing additional software) + command: [] + + # sample configuration that overrides starting script and solves above issue (also it runs n8n as root, so be careful): + # command: + # - tini + # - -- + # - /bin/sh + # - -c + # - chmod o+rx /root; chown -R node /root/.n8n || true; chown -R node /root/.n8n; ln -s /root/.n8n /home/node; chown -R node /home/node || true; node /usr/local/bin/n8n + + # here you can override the livenessProbe for the main container + # it may be used to increase the timeout for the livenessProbe (e.g., to resolve issues like + + livenessProbe: + httpGet: + path: /healthz + port: http + # initialDelaySeconds: 30 + # periodSeconds: 10 + # timeoutSeconds: 5 + # failureThreshold: 6 + # successThreshold: 1 + + # here you can override the readinessProbe for the main container + # it may be used to increase the timeout for the readinessProbe (e.g., to resolve issues like + + readinessProbe: + httpGet: + path: /healthz + port: http + # initialDelaySeconds: 30 + # periodSeconds: 10 + # timeoutSeconds: 5 + # failureThreshold: 6 + # successThreshold: 1 + + # List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. + # See https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ + initContainers: [] + # - name: init-data-dir + # image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" + # command: [ "/bin/sh", "-c", "mkdir -p /home/node/.n8n/" ] + # volumeMounts: + # - name: data + # mountPath: /home/node/.n8n + + + service: + annotations: {} + # -- Service types allow you to specify what kind of Service you want. + # E.g., ClusterIP, NodePort, LoadBalancer, ExternalName + type: ClusterIP + # -- Service port + port: 80 + + resources: {} + # We usually recommend not specifying default resources and to leave this as a conscious # choice for the user. This also increases chances charts run on environments with little # resources, such as Minikube. If you do want to specify resources, uncomment the following # lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
# limits: # cpu: 100m -# memory: 128Mi -# requests: -# cpu: 100m -# memory: 128Mi - -autoscaling: - enabled: false - minReplicas: 1 - maxReplicas: 100 - targetCPUUtilizationPercentage: 80 - # targetMemoryUtilizationPercentage: 80 - -nodeSelector: { } + # memory: 128Mi + # requests: + # cpu: 100m + # memory: 128Mi -tolerations: [ ] + autoscaling: + enabled: false + minReplicas: 1 + maxReplicas: 100 + targetCPUUtilizationPercentage: 80 + # targetMemoryUtilizationPercentage: 80 -affinity: { } + nodeSelector: {} + tolerations: [] + affinity: {} -scaling: +# # # # # # # # # # # # # # # # +# +# Worker related settings +# +worker: enabled: false + count: 2 + # You can define the number of jobs a worker can run in parallel by using the concurrency flag. It defaults to 10. To change it: + concurrency: 10 + + # + # Worker Kubernetes specific settings + # + persistence: + # If true, use a Persistent Volume Claim, If false, use emptyDir + enabled: false + # what type volume, possible options are [existing, emptyDir, dynamic] dynamic for Dynamic Volume Provisioning, existing for using an existing Claim + type: emptyDir + # Persistent Volume Storage Class + # If defined, storageClassName: + # If set to "-", storageClassName: "", which disables dynamic provisioning + # If undefined (the default) or set to null, no storageClassName spec is + # set, choosing the default provisioner. (gp2 on AWS, standard on + # GKE, AWS & OpenStack) + # + # storageClass: "-" + # PVC annotations + # + # If you need this annotation include it under `values.yml` file and pvc.yml template will add it. + # This is not maintained at Helm v3 anymore. + # https://github.com/8gears/n8n-helm-chart/issues/8 + # + # annotations: + # helm.sh/resource-policy: keep + # Persistent Volume Access Mode + accessModes: + - ReadWriteOnce + # Persistent Volume size + size: 1Gi + # Use an existing PVC + # existingClaim: + # Number of desired pods. + replicaCount: 1 + + # here you can specify the deployment strategy as Recreate or RollingUpdate with optional maxSurge and maxUnavailable + # If these options are not set, default values are 25% + # deploymentStrategy: + # type: RollingUpdate + # maxSurge: "50%" + # maxUnavailable: "50%" + + deploymentStrategy: + type: "Recreate" + # maxSurge: "50%" + # maxUnavailable: "50%" + + serviceAccount: + # Specifies whether a service account should be created + create: true + # Annotations to add to the service account + annotations: {} + # The name of the service account to use. 
+ # If not set and create is true, a name is generated using the fullname template + name: "" + + podAnnotations: {} + + podLabels: {} + + podSecurityContext: + runAsNonRoot: true + runAsUser: 1000 + runAsGroup: 1000 + fsGroup: 1000 + + securityContext: {} + # capabilities: + # drop: + # - ALL + # readOnlyRootFilesystem: true + # runAsNonRoot: true + # runAsUser: 1000 + + # here you can specify lifecycle hooks - it can be used e.g., to easily add packages to the container without building + # your own docker image + # see https://github.com/8gears/n8n-helm-chart/pull/30 + lifecycle: {} + + # here's the sample configuration to add mysql-client to the container + # lifecycle: + # postStart: + # exec: + # command: ["/bin/sh", "-c", "apk add mysql-client"] + + # here you can override a command for worker container + # it may be used to override a starting script (e.g., to resolve issues like https://github.com/n8n-io/n8n/issues/6412) or + # run additional preparation steps (e.g., installing additional software) + command: [] + + # sample configuration that overrides starting script and solves above issue (also it runs n8n as root, so be careful): + # command: + # - tini + # - -- + # - /bin/sh + # - -c + # - chmod o+rx /root; chown -R node /root/.n8n || true; chown -R node /root/.n8n; ln -s /root/.n8n /home/node; chown -R node /home/node || true; node /usr/local/bin/n8n + + # command args + commandArgs: [] + + # here you can override the livenessProbe for the main container + # it may be used to increase the timeout for the livenessProbe (e.g., to resolve issues like + livenessProbe: + httpGet: + path: /healthz + port: http + # initialDelaySeconds: 30 + # periodSeconds: 10 + # timeoutSeconds: 5 + # failureThreshold: 6 + # successThreshold: 1 + + # here you can override the readinessProbe for the main container + # it may be used to increase the timeout for the readinessProbe (e.g., to resolve issues like + + readinessProbe: + httpGet: + path: /healthz + port: http + # initialDelaySeconds: 30 + # periodSeconds: 10 + # timeoutSeconds: 5 + # failureThreshold: 6 + # successThreshold: 1 + + # List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. + # See https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ + initContainers: [] + + service: + annotations: {} + # -- Service types allow you to specify what kind of Service you want. + # E.g., ClusterIP, NodePort, LoadBalancer, ExternalName + type: ClusterIP + # -- Service port + port: 80 + + resources: {} + # We usually recommend not specifying default resources and to leave this as a conscious + # choice for the user. This also increases chances charts run on environments with little + # resources, such as Minikube. If you do want to specify resources, uncomment the following + # lines, adjust them as necessary, and remove the curly braces after 'resources:'. + # limits: + # cpu: 100m + # memory: 128Mi + # requests: + # cpu: 100m + # memory: 128Mi - worker: - count: 2 - concurrency: 2 + autoscaling: + enabled: false + minReplicas: 1 + maxReplicas: 100 + targetCPUUtilizationPercentage: 80 + # targetMemoryUtilizationPercentage: 80 + + nodeSelector: {} + tolerations: [] + affinity: {} + +# Webhook related settings +# With .Values.scaling.webhook.enabled=true you disable Webhooks from the main process, but you enable the processing on a different Webhook instance. 
+# See https://github.com/8gears/n8n-helm-chart/issues/39#issuecomment-1579991754 for the full explanation. +# Webhook processes rely on Redis too. +webhook: + enabled: false + # additional (to main) config for webhook + config: {} + # additional (to main) config for webhook + secret: {} + + # Extra environmental variables, so you can reference other configmaps and secrets into n8n as env vars. + extraEnv: {} + # WEBHOOK_URL: + # value: "http://webhook.domain.tld" + - webhook: + # + # Webhook Kubernetes specific settings + # + persistence: + # If true, use a Persistent Volume Claim, If false, use emptyDir enabled: false - count: 1 + # what type volume, possible options are [existing, emptyDir, dynamic] dynamic for Dynamic Volume Provisioning, existing for using an existing Claim + type: emptyDir + # Persistent Volume Storage Class + # If defined, storageClassName: + # If set to "-", storageClassName: "", which disables dynamic provisioning + # If undefined (the default) or set to null, no storageClassName spec is + # set, choosing the default provisioner. (gp2 on AWS, standard on + # GKE, AWS & OpenStack) + # + # storageClass: "-" + # PVC annotations + # + # If you need this annotation include it under `values.yml` file and pvc.yml template will add it. + # This is not maintained at Helm v3 anymore. + # https://github.com/8gears/n8n-helm-chart/issues/8 + # + # annotations: + # helm.sh/resource-policy: keep + # Persistent Volume Access Mode + # + accessModes: + - ReadWriteOnce + # Persistent Volume size + # + size: 1Gi + # Use an existing PVC + # + # existingClaim: + + # Number of desired pods. + replicaCount: 1 + + # here you can specify the deployment strategy as Recreate or RollingUpdate with optional maxSurge and maxUnavailable + # If these options are not set, default values are 25% + # deploymentStrategy: + # type: RollingUpdate + # maxSurge: "50%" + # maxUnavailable: "50%" + + deploymentStrategy: + type: "Recreate" + + nameOverride: "" + fullnameOverride: "" + + serviceAccount: + # Specifies whether a service account should be created + create: true + # Annotations to add to the service account + annotations: {} + # The name of the service account to use. 
+ # If not set and create is true, a name is generated using the fullname template + name: "" + + podAnnotations: {} + + podLabels: {} + + podSecurityContext: + runAsNonRoot: true + runAsUser: 1000 + runAsGroup: 1000 + fsGroup: 1000 + + securityContext: {} + # capabilities: + # drop: + # - ALL + # readOnlyRootFilesystem: true + # runAsNonRoot: true + # runAsUser: 1000 + + # here you can specify lifecycle hooks - it can be used e.g., to easily add packages to the container without building + # your own docker image + # see https://github.com/8gears/n8n-helm-chart/pull/30 + lifecycle: {} + + # here's the sample configuration to add mysql-client to the container + # lifecycle: + # postStart: + # exec: + # command: ["/bin/sh", "-c", "apk add mysql-client"] + + # here you can override a command for main container + # it may be used to override a starting script (e.g., to resolve issues like https://github.com/n8n-io/n8n/issues/6412) or + # run additional preparation steps (e.g., installing additional software) + command: [] + + # sample configuration that overrides starting script and solves above issue (also it runs n8n as root, so be careful): + # command: + # - tini + # - -- + # - /bin/sh + # - -c + # - chmod o+rx /root; chown -R node /root/.n8n || true; chown -R node /root/.n8n; ln -s /root/.n8n /home/node; chown -R node /home/node || true; node /usr/local/bin/n8n + # Command Arguments + commandArgs: [] + + # here you can override the livenessProbe for the main container + # it may be used to increase the timeout for the livenessProbe (e.g., to resolve issues like + + livenessProbe: + httpGet: + path: /healthz + port: http + # initialDelaySeconds: 30 + # periodSeconds: 10 + # timeoutSeconds: 5 + # failureThreshold: 6 + # successThreshold: 1 + + # here you can override the readinessProbe for the main container + # it may be used to increase the timeout for the readinessProbe (e.g., to resolve issues like + + readinessProbe: + httpGet: + path: /healthz + port: http + # initialDelaySeconds: 30 + # periodSeconds: 10 + # timeoutSeconds: 5 + # failureThreshold: 6 + # successThreshold: 1 + + # List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. + # See https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ + initContainers: [] + + service: + annotations: {} + # -- Service types allow you to specify what kind of Service you want. + # E.g., ClusterIP, NodePort, LoadBalancer, ExternalName + type: ClusterIP + # -- Service port + port: 80 + + resources: {} + # We usually recommend not specifying default resources and to leave this as a conscious + # choice for the user. This also increases chances charts run on environments with little + # resources, such as Minikube. If you do want to specify resources, uncomment the following + # lines, adjust them as necessary, and remove the curly braces after 'resources:'. + # limits: + # cpu: 100m + # memory: 128Mi + # requests: + # cpu: 100m + # memory: 128Mi + autoscaling: + enabled: false + minReplicas: 1 + maxReplicas: 100 + targetCPUUtilizationPercentage: 80 + # targetMemoryUtilizationPercentage: 80 + nodeSelector: {} + tolerations: [] + affinity: {} - redis: - host: - password: +# +# Additional resources +# +# Takes a list of Kubernetes resources and merges each resource with a default metadata.labels map and +# installs the result. +# Use this to add any arbitrary Kubernetes manifests alongside this chart instead of kubectl and scripts. 
+resources: [] +# - apiVersion: v1 +# kind: ConfigMap +# metadata: +# name: example-config +# data: +# example.property.1: "value1" +# example.property.2: "value2" +# As an alternative to the above, you can also use a string as the value of the data field. +# - | +# apiVersion: v1 +# kind: ConfigMap +# metadata: +# name: example-config-string +# data: +# example.property.1: "value1" +# example.property.2: "value2" + +# Add additional templates. +# In contrast to the resources field, these templates are not merged with the default metadata.labels map. +# The templates are rendered with the values.yaml file as the context. +templates: [] +# - | +# apiVersion: v1 +# kind: ConfigMap +# metadata: +# name: my-config +# stringData: +# image_name: {{ .Values.image.repository }} + +# Bitnami Valkey configuration +# https://artifacthub.io/packages/helm/bitnami/valkey redis: enabled: false - # Other default redis values: https://github.com/bitnami/charts/blob/master/bitnami/redis/values.yaml -``` + architecture: standalone -# Typical Values Example + primary: + persistence: + enabled: false + existingClaim: "" + size: 2Gi -A typical example of a config in combination with a secret. - -```yaml -# values.yaml - -config: - database: - type: postgresdb - postgresdb: - host: 192.168.0.52 -secret: - database: - postgresdb: - password: 'big secret' ``` +## Migration Guide to Version 1.0.0 -## Setup +This version includes a complete redesign of the chart to better accommodate n8n configuration options. +Key changes include: +- Values restructured under `.Values.main`, `.Values.worker`, and `.Values.webhook` +- Updated deployment configurations +- New Redis integration requirements -```shell -helm install -f values.yaml -n n8n deploymentname n8n -``` -## Scaling +## Scaling and Advanced Configuration Options n8n provides a **queue-mode**, where the workload is shared between multiple -instances of same n8n installation. -This provides a shared load over multiple instances and a limited high +instances of the same n8n installation. +This provides a shared load over multiple instances and limited high availability, because the controller instance remains as Single-Point-Of-Failure. With the help of an internal/external redis server and by using the excellent diff --git a/charts/n8n/Chart.lock b/charts/n8n/Chart.lock index 1ddcb6f..3e64ca0 100644 --- a/charts/n8n/Chart.lock +++ b/charts/n8n/Chart.lock @@ -1,6 +1,6 @@ dependencies: -- name: redis - repository: https://charts.bitnami.com/bitnami - version: 18.6.1 -digest: sha256:679512a5d6167cd529b9b6d861a6605f62683c3497b8f920fc344dd00bf0ba82 -generated: "2024-06-30T21:39:40.024252056+02:00" +- name: valkey + repository: oci://registry-1.docker.io/bitnamicharts + version: 2.2.3 +digest: sha256:3bed83ffaec482e8e31eae77f966ecbd95258b6124ced0824d825805dc23e197 +generated: "2025-01-18T14:00:37.663128+01:00" diff --git a/charts/n8n/Chart.yaml b/charts/n8n/Chart.yaml index 29ab842..5595cc1 100644 --- a/charts/n8n/Chart.yaml +++ b/charts/n8n/Chart.yaml @@ -1,7 +1,7 @@ apiVersion: v2 name: n8n -version: 0.25.2 -appVersion: 1.62.5 +version: 1.0.0 +appVersion: 1.76.1 type: application description: "A Kubernetes Helm chart for n8n a free and open fair-code licensed node based Workflow Automation Tool. Easily automate tasks across different services." 
home: https://github.com/8gears/n8n-helm-chart @@ -25,13 +25,13 @@ maintainers: url: https://github.com/n8n-io dependencies: - - name: redis - version: 18.6.1 - repository: https://charts.bitnami.com/bitnami + - name: valkey + version: 2.2.3 + repository: oci://registry-1.docker.io/bitnamicharts condition: redis.enabled annotations: - artifacthub.io/prerelease: "false" + artifacthub.io/prerelease: "true" # supported kinds are added, changed, deprecated, removed, fixed and security. artifacthub.io/changes: | - kind: changed - description: "Updated application to version 1.62.5" + description: "Major chart redesign and also update to n8n version 1.74.1. This version is not compatible with the previous version. Please read the release notes for more details." diff --git a/charts/n8n/templates/NOTES.txt b/charts/n8n/templates/NOTES.txt index d8818c9..106b2f4 100644 --- a/charts/n8n/templates/NOTES.txt +++ b/charts/n8n/templates/NOTES.txt @@ -5,16 +5,16 @@ http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }} {{- end }} {{- end }} -{{- else if contains "NodePort" .Values.service.type }} +{{- else if contains "NodePort" (default "ClusterIP" .Values.main.service.type) }} export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "n8n.fullname" . }}) export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}") echo http://$NODE_IP:$NODE_PORT -{{- else if contains "LoadBalancer" .Values.service.type }} +{{- else if contains "LoadBalancer" (default "ClusterIP" .Values.main.service.type) }} NOTE: It may take a few minutes for the LoadBalancer IP to be available. You can watch the status of by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "n8n.fullname" . }}' export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "n8n.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}") - echo http://$SERVICE_IP:{{ .Values.service.port }} -{{- else if contains "ClusterIP" .Values.service.type }} + echo http://$SERVICE_IP:{{ .Values.main.service.port }} +{{- else if contains "ClusterIP" (default "ClusterIP" .Values.main.service.type) }} export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "n8n.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}") export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}") echo "Visit http://127.0.0.1:8080 to use your application" diff --git a/charts/n8n/templates/_helpers.tpl b/charts/n8n/templates/_helpers.tpl index 2b2cf48..2bb246f 100644 --- a/charts/n8n/templates/_helpers.tpl +++ b/charts/n8n/templates/_helpers.tpl @@ -50,92 +50,55 @@ app.kubernetes.io/name: {{ include "n8n.name" . }} app.kubernetes.io/instance: {{ .Release.Name }} {{- end }} -{{/* -Selector labels -*/}} -{{- define "n8n.deploymentPodEnvironments" -}} -{{- range $key, $value := .Values.extraEnv }} -- name: {{ $key }} - value: {{ $value | quote}} -{{ end }} -{{- range $key, $value := .Values.extraEnvSecrets }} -- name: {{ $key }} - valueFrom: - secretKeyRef: - name: {{ $value.name | quote }} - key: {{ $value.key | quote }} -{{ end }} -- name: "N8N_PORT" #! 
we better set the port once again as ENV Var, see: https://community.n8n.io/t/default-config-is-not-set-or-the-port-to-be-more-precise/3158/3?u=vad1mo - value: {{ get .Values.config "port" | default "5678" | quote }} -{{- if .Values.n8n.encryption_key }} -- name: "N8N_ENCRYPTION_KEY" - valueFrom: - secretKeyRef: - key: N8N_ENCRYPTION_KEY - name: {{ include "n8n.fullname" . }} -{{- end }} -{{- if or .Values.config .Values.secret }} -- name: "N8N_CONFIG_FILES" - value: {{ include "n8n.configFiles" . | quote }} -{{ end }} -{{- if .Values.scaling.enabled }} -- name: "QUEUE_BULL_REDIS_HOST" - {{- if .Values.scaling.redis.host }} - value: "{{ .Values.scaling.redis.host }}" - {{ else }} - value: "{{ .Release.Name }}-redis-master" - {{ end }} -- name: "EXECUTIONS_MODE" - value: "queue" -{{ end }} -{{- if .Values.scaling.redis.password }} -- name: "QUEUE_BULL_REDIS_PASSWORD" - value: "{{ .Values.scaling.redis.password }}" -{{ end }} -{{- if .Values.scaling.webhook.enabled }} -- name: "N8N_DISABLE_PRODUCTION_MAIN_PROCESS" - value: "true" -{{ end }} -{{- end }} -{{/* -Create the name of the service account to use -*/}} +{{/* Create the name of the service account to use */}} {{- define "n8n.serviceAccountName" -}} -{{- if .Values.serviceAccount.create }} -{{- default (include "n8n.fullname" .) .Values.serviceAccount.name }} +{{- if .Values.main.serviceAccount.create }} +{{- default (include "n8n.fullname" .) .Values.main.serviceAccount.name }} {{- else }} -{{- default "default" .Values.serviceAccount.name }} +{{- default "default" .Values.main.serviceAccount.name }} {{- end }} {{- end }} -{{/* Create a list of config files for n8n */}} -{{- define "n8n.configFiles" -}} - {{- $conf_val := "" }} - {{- $sec_val := "" }} - {{- $separator := "" }} - {{- if .Values.config }} - {{- $conf_val = "/n8n-config/config.json" }} - {{- end }} - {{- if or .Values.secret .Values.existingSecret }} - {{- $sec_val = "/n8n-secret/secret.json" }} - {{- end }} - {{- if and .Values.config (or .Values.secret .Values.existingSecret) }} - {{- $separator = "," }} - {{- end }} - {{- print $conf_val $separator $sec_val }} -{{- end }} - - {{/* PVC existing, emptyDir, Dynamic */}} {{- define "n8n.pvc" -}} -{{- if or (not .Values.persistence.enabled) (eq .Values.persistence.type "emptyDir") -}} +{{- if or (not .Values.main.persistence.enabled) (eq .Values.main.persistence.type "emptyDir") -}} emptyDir: {} -{{- else if and .Values.persistence.enabled .Values.persistence.existingClaim -}} +{{- else if and .Values.main.persistence.enabled .Values.main.persistence.existingClaim -}} persistentVolumeClaim: - claimName: {{ .Values.persistence.existingClaim }} -{{- else if and .Values.persistence.enabled (eq .Values.persistence.type "dynamic") -}} + claimName: {{ .Values.main.persistence.existingClaim }} +{{- else if and .Values.main.persistence.enabled (eq .Values.main.persistence.type "dynamic") -}} persistentVolumeClaim: claimName: {{ include "n8n.fullname" . 
}} {{- end }} {{- end }} + + +{{/* Create environment variables from yaml tree */}} +{{- define "toEnvVars" -}} + {{- $prefix := "" }} + {{- if .prefix }} + {{- $prefix = printf "%s_" .prefix }} + {{- end }} + {{- range $key, $value := .values }} + {{- if kindIs "map" $value -}} + {{- dict "values" $value "prefix" (printf "%s%s" $prefix ($key | upper)) "isSecret" $.isSecret | include "toEnvVars" -}} + {{- else -}} + {{- if $.isSecret -}} +{{ $prefix }}{{ $key | upper }}: {{ $value | b64enc }}{{ "\n" }} + {{- else -}} +{{ $prefix }}{{ $key | upper }}: {{ $value | quote }}{{ "\n" }} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end }} + + +{{/* Validate Redis configuration when webhooks are enabled*/}} +{{- define "n8n.validateRedis" -}} +{{- if and .Values.webhook.enabled (not .Values.redis.enabled) -}} +{{- fail "Webhook processes rely on Redis. Please set redis.enabled=true when webhook.enabled=true" -}} +{{- end -}} +{{- end -}} + + diff --git a/charts/n8n/templates/configmap.webhook.yaml b/charts/n8n/templates/configmap.webhook.yaml new file mode 100644 index 0000000..7c734e5 --- /dev/null +++ b/charts/n8n/templates/configmap.webhook.yaml @@ -0,0 +1,10 @@ +{{- if .Values.webhook.config }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: webhook-config-{{ include "n8n.fullname" . }} + labels: + {{- include "n8n.labels" . | nindent 4 }} +data: + {{- include "toEnvVars" (dict "values" .Values.webhook.config "prefix" "") | nindent 2 }} +{{- end }} diff --git a/charts/n8n/templates/configmap.yaml b/charts/n8n/templates/configmap.yaml index 98d5102..f46430c 100644 --- a/charts/n8n/templates/configmap.yaml +++ b/charts/n8n/templates/configmap.yaml @@ -1,12 +1,10 @@ -{{- if .Values.config }} +{{- if .Values.main.config }} apiVersion: v1 kind: ConfigMap metadata: - name: {{ include "n8n.fullname" . }} + name: app-config-{{ include "n8n.fullname" . }} labels: {{- include "n8n.labels" . | nindent 4 }} data: - config.json: | -{{ .Values.config | toPrettyJson | indent 4 }} - + {{- include "toEnvVars" (dict "values" .Values.main.config "prefix" "") | nindent 2 }} {{- end }} diff --git a/charts/n8n/templates/deployment.webhook.yaml b/charts/n8n/templates/deployment.webhook.yaml new file mode 100644 index 0000000..c0c141b --- /dev/null +++ b/charts/n8n/templates/deployment.webhook.yaml @@ -0,0 +1,126 @@ +{{- if .Values.webhook.enabled }} +{{- /* Validate Redis configuration */}} +{{- include "n8n.validateRedis" . }} +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{ include "n8n.fullname" . }}-webhook + labels: + {{- include "n8n.labels" . | nindent 4 }} +spec: + {{- if not .Values.webhook.autoscaling.enabled }} + replicas: {{ .Values.webhook.replicaCount }} + {{- end }} + strategy: + type: {{ .Values.webhook.deploymentStrategy.type }} + {{- if eq .Values.webhook.deploymentStrategy.type "RollingUpdate" }} + rollingUpdate: + maxSurge: {{ default "25%" .Values.webhook.deploymentStrategy.maxSurge }} + maxUnavailable: {{ default "25%" .Values.webhook.deploymentStrategy.maxUnavailable }} + {{- end }} + selector: + matchLabels: + {{- include "n8n.selectorLabels" . | nindent 6 }} + app.kubernetes.io/type: webhook + template: + metadata: + annotations: + checksum/config: {{ print .Values.webhook .Values.main | sha256sum }} + {{- with .Values.webhook.podAnnotations }} + {{- toYaml . | nindent 8 }} + {{- end }} + labels: + {{- include "n8n.selectorLabels" . 
| nindent 8 }} + app.kubernetes.io/type: webhook + {{- if .Values.webhook.podLabels }} + {{ toYaml .Values.webhook.podLabels | nindent 8 }} + {{- end }} + spec: + {{- with .Values.imagePullSecrets }} + imagePullSecrets: + {{- toYaml . | nindent 8 }} + {{- end }} + serviceAccountName: {{ include "n8n.serviceAccountName" . }} + securityContext: + {{- toYaml .Values.webhook.podSecurityContext | nindent 8 }} + {{- if .Values.webhook.initContainers }} + initContainers: + {{ tpl (toYaml .Values.webhook.initContainers) . | nindent 8 }} + {{- end }} + containers: + - name: {{ .Chart.Name }}-webhook + image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" + imagePullPolicy: {{ .Values.image.pullPolicy }} + {{- if .Values.webhook.command }} + command: + {{- toYaml .Values.webhook.command | nindent 12 }} + {{- else }} + command: ["n8n"] + {{- end }} + {{- if .Values.webhook.commandArgs }} + args: + {{- toYaml .Values.webhook.commandArgs | nindent 12 }} + {{- else }} + args: + - "webhook" + {{- end }} + ports: + - name: http + containerPort: {{ get .Values.webhook.config "port" | default 5678 }} + protocol: TCP + {{- with .Values.webhook.livenessProbe }} + livenessProbe: + {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.webhook.readinessProbe }} + readinessProbe: + {{- toYaml . | nindent 12 }} + {{- end }} + envFrom: + {{- if .Values.main.config }} + - configMapRef: + name: app-config-{{ include "n8n.fullname" . }} + {{- end }} + {{- if .Values.main.secret }} + - secretRef: + name: app-secret-{{ include "n8n.fullname" . }} + {{- end }} + {{- if .Values.webhook.config }} + - configMapRef: + name: webhook-config-{{ include "n8n.fullname" . }} + {{- end }} + {{- if .Values.webhook.secret }} + - secretRef: + name: webhook-secret-{{ include "n8n.fullname" . }} + {{- end }} + + env: {{ not (empty .Values.webhook.extraEnv) | ternary nil "[]" }} + {{- range $key, $value := .Values.webhook.extraEnv }} + - name: {{ $key }} + {{- toYaml $value | nindent 14 }} + {{- end }} + lifecycle: + {{- toYaml .Values.webhook.lifecycle | nindent 12 }} + securityContext: + {{- toYaml .Values.webhook.securityContext | nindent 12 }} + resources: + {{- toYaml .Values.webhook.resources | nindent 12 }} + volumeMounts: + - name: data + mountPath: /home/node/.n8n + {{- with .Values.webhook.nodeSelector }} + nodeSelector: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.webhook.affinity }} + affinity: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.webhook.tolerations }} + tolerations: + {{- toYaml . | nindent 8 }} + {{- end }} + volumes: + - name: "data" + {{ include "n8n.pvc" . }} +{{- end }} diff --git a/charts/n8n/templates/deployment.webhooks.yaml b/charts/n8n/templates/deployment.webhooks.yaml deleted file mode 100644 index 3b7f22e..0000000 --- a/charts/n8n/templates/deployment.webhooks.yaml +++ /dev/null @@ -1,104 +0,0 @@ -{{- if .Values.scaling.webhook.enabled }} -apiVersion: apps/v1 -kind: Deployment -metadata: - name: {{ include "n8n.fullname" . }}-webhook - labels: - {{- include "n8n.labels" . 
| nindent 4 }} -spec: - {{- if not .Values.autoscaling.enabled }} - replicas: {{ .Values.scaling.webhook.count }} - {{- end }} - strategy: - type: {{ .Values.deploymentStrategy.type }} - {{- if eq .Values.deploymentStrategy.type "RollingUpdate" }} - rollingUpdate: - maxSurge: {{ default "25%" .Values.deploymentStrategy.maxSurge }} - maxUnavailable: {{ default "25%" .Values.deploymentStrategy.maxUnavailable }} - {{- end }} - selector: - matchLabels: - {{- include "n8n.selectorLabels" . | nindent 6 }} - app.kubernetes.io/type: webhooks - template: - metadata: - annotations: - checksum/config: {{ print .Values | sha256sum }} - {{- with .Values.podAnnotations }} - {{- toYaml . | nindent 8 }} - {{- end }} - labels: - {{- include "n8n.selectorLabels" . | nindent 8 }} - app.kubernetes.io/type: webhooks - {{- if .Values.podLabels }} - {{ toYaml .Values.podLabels | nindent 8 }} - {{- end }} - - spec: - {{- with .Values.imagePullSecrets }} - imagePullSecrets: - {{- toYaml . | nindent 8 }} - {{- end }} - serviceAccountName: {{ include "n8n.serviceAccountName" . }} - securityContext: - {{- toYaml .Values.podSecurityContext | nindent 8 }} - {{- if .Values.initContainers }} - initContainers: - {{ tpl (toYaml .Values.initContainers) . | nindent 8 }} - {{- end }} - containers: - - name: {{ .Chart.Name }} - securityContext: - {{- toYaml .Values.securityContext | nindent 12 }} - image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" - imagePullPolicy: {{ .Values.image.pullPolicy }} - command: ["n8n"] - args: ["webhook"] - env: - {{- include "n8n.deploymentPodEnvironments" . | nindent 12 }} - ports: - - name: http - containerPort: {{ get .Values.config "port" | default 5678 }} - protocol: TCP - resources: - {{- toYaml .Values.webhookResources | nindent 12 }} - volumeMounts: - - name: data - mountPath: {{ if semverCompare ">=1.0" (.Values.image.tag | default .Chart.AppVersion) }}/home/node/.n8n{{ else }}/root/.n8n{{ end }} - {{- if .Values.config }} - - name: config-volume - mountPath: /n8n-config - {{- end }} - {{- if or (.Values.secret) (.Values.existingSecret) }} - - name: secret-volume - mountPath: /n8n-secret - {{- end }} - {{- with .Values.nodeSelector }} - nodeSelector: - {{- toYaml . | nindent 8 }} - {{- end }} - {{- with .Values.affinity }} - affinity: - {{- toYaml . | nindent 8 }} - {{- end }} - {{- with .Values.tolerations }} - tolerations: - {{- toYaml . | nindent 8 }} - {{- end }} - volumes: - - name: "data" - {{ include "n8n.pvc" . }} - {{- if .Values.config }} - - name: config-volume - configMap: - name: {{ include "n8n.fullname" . }} - {{- end }} - {{- if or (.Values.secret) (.Values.existingSecret) }} - - name: secret-volume - secret: - secretName: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{ else }}{{ include "n8n.fullname" . }}{{ end }} - items: - - key: "secret.json" - path: "secret.json" - {{- end }} -{{- end }} diff --git a/charts/n8n/templates/deployment.worker.yaml b/charts/n8n/templates/deployment.worker.yaml index 0014d3e..3db3e27 100644 --- a/charts/n8n/templates/deployment.worker.yaml +++ b/charts/n8n/templates/deployment.worker.yaml @@ -1,4 +1,4 @@ -{{- if .Values.scaling.enabled }} +{{- if .Values.worker.enabled }} apiVersion: apps/v1 kind: Deployment metadata: @@ -6,15 +6,15 @@ metadata: labels: {{- include "n8n.labels" . 
| nindent 4 }} spec: - {{- if not .Values.autoscaling.enabled }} - replicas: {{ .Values.scaling.worker.count }} + {{- if not .Values.worker.autoscaling.enabled }} + replicas: {{ .Values.worker.count }} {{- end }} strategy: - type: {{ .Values.deploymentStrategy.type }} - {{- if eq .Values.deploymentStrategy.type "RollingUpdate" }} + type: {{ .Values.worker.deploymentStrategy.type }} + {{- if eq .Values.worker.deploymentStrategy.type "RollingUpdate" }} rollingUpdate: - maxSurge: {{ default "25%" .Values.deploymentStrategy.maxSurge }} - maxUnavailable: {{ default "25%" .Values.deploymentStrategy.maxUnavailable }} + maxSurge: {{ default "25%" .Values.worker.deploymentStrategy.maxSurge }} + maxUnavailable: {{ default "25%" .Values.worker.deploymentStrategy.maxUnavailable }} {{- end }} selector: matchLabels: @@ -24,14 +24,14 @@ spec: metadata: annotations: checksum/config: {{ print .Values | sha256sum }} - {{- with .Values.podAnnotations }} + {{- with .Values.worker.podAnnotations }} {{- toYaml . | nindent 8 }} {{- end }} labels: {{- include "n8n.selectorLabels" . | nindent 8 }} app.kubernetes.io/type: worker - {{- if .Values.podLabels }} - {{ toYaml .Values.podLabels | nindent 8 }} + {{- if .Values.worker.podLabels }} + {{ toYaml .Values.worker.podLabels | nindent 8 }} {{- end }} spec: @@ -41,64 +41,61 @@ spec: {{- end }} serviceAccountName: {{ include "n8n.serviceAccountName" . }} securityContext: - {{- toYaml .Values.podSecurityContext | nindent 8 }} - {{- if .Values.initContainers }} + {{- toYaml .Values.worker.podSecurityContext | nindent 8 }} + {{- if .Values.worker.initContainers }} initContainers: - {{ tpl (toYaml .Values.initContainers) . | nindent 8 }} + {{ tpl (toYaml .Values.worker.initContainers) . | nindent 8 }} {{- end }} containers: - - name: {{ .Chart.Name }} + - name: {{ .Chart.Name }}-worker securityContext: - {{- toYaml .Values.securityContext | nindent 12 }} + {{- toYaml .Values.worker.securityContext | nindent 12 }} image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" imagePullPolicy: {{ .Values.image.pullPolicy }} + {{- if .Values.worker.command }} + command: + {{- toYaml .Values.worker.command | nindent 12 }} + {{- else }} command: ["n8n"] - args: ["worker", "--concurrency={{ .Values.scaling.worker.concurrency }}"] - env: - {{- include "n8n.deploymentPodEnvironments" . | nindent 12 }} + {{- end }} + {{- if .Values.worker.commandArgs }} + args: + {{- toYaml .Values.worker.commandArgs | nindent 12 }} + {{- else }} + args: + - "worker" + - "--concurrency={{ .Values.worker.concurrency }}" + {{- end }} ports: - name: http - containerPort: {{ get .Values.config "port" | default 5678 }} + containerPort: {{ get .Values.worker.config "port" | default 5678 }} protocol: TCP + {{- with .Values.main.livenessProbe }} + livenessProbe: + {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.main.readinessProbe }} + readinessProbe: + {{- toYaml . 
| nindent 12 }} + {{- end }} resources: - {{- toYaml .Values.workerResources | nindent 12 }} + {{- toYaml .Values.worker.workerResources | nindent 12 }} volumeMounts: - name: data - mountPath: {{ if semverCompare ">=1.0" (.Values.image.tag | default .Chart.AppVersion) }}/home/node/.n8n{{ else }}/root/.n8n{{ end }} - {{- if .Values.config }} - - name: config-volume - mountPath: /n8n-config - {{- end }} - {{- if or (.Values.secret) (.Values.existingSecret) }} - - name: secret-volume - mountPath: /n8n-secret - {{- end }} - {{- with .Values.nodeSelector }} + mountPath: /home/node/.n8n + {{- with .Values.worker.nodeSelector }} nodeSelector: {{- toYaml . | nindent 8 }} {{- end }} - {{- with .Values.affinity }} + {{- with .Values.worker.affinity }} affinity: {{- toYaml . | nindent 8 }} {{- end }} - {{- with .Values.tolerations }} + {{- with .Values.worker.tolerations }} tolerations: {{- toYaml . | nindent 8 }} {{- end }} volumes: - name: "data" {{ include "n8n.pvc" . }} - {{- if .Values.config }} - - name: config-volume - configMap: - name: {{ include "n8n.fullname" . }} - {{- end }} - {{- if or (.Values.secret) (.Values.existingSecret) }} - - name: secret-volume - secret: - secretName: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{ else }}{{ include "n8n.fullname" . }}{{ end }} - items: - - key: "secret.json" - path: "secret.json" - {{- end }} - {{- end }} +{{- end }} diff --git a/charts/n8n/templates/deployment.yaml b/charts/n8n/templates/deployment.yaml index a458441..907bcc7 100644 --- a/charts/n8n/templates/deployment.yaml +++ b/charts/n8n/templates/deployment.yaml @@ -5,15 +5,15 @@ metadata: labels: {{- include "n8n.labels" . | nindent 4 }} spec: - {{- if not .Values.autoscaling.enabled }} - replicas: {{ .Values.replicaCount }} + {{- if not .Values.main.autoscaling.enabled }} + replicas: {{ .Values.main.replicaCount }} {{- end }} strategy: - type: {{ .Values.deploymentStrategy.type }} - {{- if eq .Values.deploymentStrategy.type "RollingUpdate" }} + type: {{ .Values.main.deploymentStrategy.type }} + {{- if eq .Values.main.deploymentStrategy.type "RollingUpdate" }} rollingUpdate: - maxSurge: {{ default "25%" .Values.deploymentStrategy.maxSurge }} - maxUnavailable: {{ default "25%" .Values.deploymentStrategy.maxUnavailable }} + maxSurge: {{ default "25%" .Values.main.deploymentStrategy.maxSurge }} + maxUnavailable: {{ default "25%" .Values.main.deploymentStrategy.maxUnavailable }} {{- end }} selector: matchLabels: @@ -22,17 +22,16 @@ spec: template: metadata: annotations: - checksum/config: {{ print .Values | sha256sum }} - {{- with .Values.podAnnotations }} + checksum/config: {{ print .Values.main | sha256sum }} + {{- with .Values.main.podAnnotations }} {{- toYaml . | nindent 8 }} {{- end }} labels: {{- include "n8n.selectorLabels" . | nindent 8 }} app.kubernetes.io/type: master - {{- if .Values.podLabels }} - {{ toYaml .Values.podLabels | nindent 8 }} + {{- if .Values.main.podLabels }} + {{ toYaml .Values.main.podLabels | nindent 8 }} {{- end }} - spec: {{- with .Values.imagePullSecrets }} imagePullSecrets: @@ -40,85 +39,74 @@ spec: {{- end }} serviceAccountName: {{ include "n8n.serviceAccountName" . 
}} securityContext: - {{- toYaml .Values.podSecurityContext | nindent 8 }} - {{- if or .Values.initContainers (and .Values.persistence.enabled (eq .Values.persistence.type "dynamic")) }} + {{- toYaml .Values.main.podSecurityContext | nindent 8 }} + {{- if or .Values.main.initContainers }} initContainers: - {{- if and .Values.persistence.enabled (eq .Values.persistence.type "dynamic") }} - - name: init-data-dir - image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" - command: ["/bin/sh", "-c", "mkdir -p /home/node/.n8n/"] - volumeMounts: - - name: data - mountPath: /home/node/.n8n - {{- end }} - {{- if .Values.initContainers }} - {{ tpl (toYaml .Values.initContainers) . | nindent 8 }} + {{- if .Values.main.initContainers }} + {{ tpl (toYaml .Values.main.initContainers) . | nindent 10 }} {{- end }} {{- end }} containers: - name: {{ .Chart.Name }} - {{- with .Values.command }} + {{- with .Values.main.command }} command: {{- toYaml . | nindent 12 }} {{- end }} securityContext: - {{- toYaml .Values.securityContext | nindent 12 }} + {{- toYaml .Values.main.securityContext | nindent 12 }} image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" imagePullPolicy: {{ .Values.image.pullPolicy }} - env: - {{- include "n8n.deploymentPodEnvironments" . | nindent 12 }} + envFrom: + {{- if .Values.main.config }} + - configMapRef: + name: app-config-{{ include "n8n.fullname" . }} + {{- end }} + {{- if .Values.main.secret }} + - secretRef: + name: app-secret-{{ include "n8n.fullname" . }} + {{- end }} + env: {{ not (empty .Values.main.extraEnv) | ternary nil "[]" }} + {{- range $key, $value := .Values.main.extraEnv }} + - name: {{ $key }} + {{- toYaml $value | nindent 14 }} + {{- end }} lifecycle: - {{- toYaml .Values.lifecycle | nindent 12 }} + {{- toYaml .Values.main.lifecycle | nindent 12 }} ports: - name: http - containerPort: {{ get .Values.config "port" | default 5678 }} + containerPort: {{ get .Values.main.config "port" | default 5678 }} protocol: TCP - {{- with .Values.livenessProbe }} + {{- with .Values.main.livenessProbe }} livenessProbe: {{- toYaml . | nindent 12 }} {{- end }} - {{- with .Values.readinessProbe }} + {{- with .Values.main.readinessProbe }} readinessProbe: {{- toYaml . | nindent 12 }} {{- end }} resources: - {{- toYaml .Values.resources | nindent 12 }} + {{- toYaml .Values.main.resources | nindent 12 }} volumeMounts: - name: data - mountPath: {{ if semverCompare ">=1.0" (.Values.image.tag | default .Chart.AppVersion) }}/home/node/.n8n{{ else }}/root/.n8n{{ end }} - {{- if .Values.config }} - - name: config-volume - mountPath: /n8n-config - {{- end }} - {{- if or (.Values.secret) (.Values.existingSecret) }} - - name: secret-volume - mountPath: /n8n-secret - {{- end }} - {{- with .Values.nodeSelector }} + mountPath: /home/node/.n8n + {{- if .Values.main.extraVolumeMounts }} + {{- toYaml .Values.main.extraVolumeMounts | nindent 12 }} + {{- end }} + {{- with .Values.main.nodeSelector }} nodeSelector: {{- toYaml . | nindent 8 }} {{- end }} - {{- with .Values.affinity }} + {{- with .Values.main.affinity }} affinity: {{- toYaml . | nindent 8 }} {{- end }} - {{- with .Values.tolerations }} + {{- with .Values.main.tolerations }} tolerations: {{- toYaml . | nindent 8 }} {{- end }} volumes: - name: "data" {{ include "n8n.pvc" . }} - {{- if .Values.config }} - - name: config-volume - configMap: - name: {{ include "n8n.fullname" . 
}} - {{- end }} - {{- if or (.Values.secret) (.Values.existingSecret) }} - - name: secret-volume - secret: - secretName: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{ else }}{{ include "n8n.fullname" . }}{{ end }} - items: - - key: "secret.json" - path: "secret.json" + {{- if .Values.main.extraVolumes }} + {{- toYaml .Values.main.extraVolumes | nindent 8 }} {{- end }} diff --git a/charts/n8n/templates/hpa.yaml b/charts/n8n/templates/hpa.yaml index df58416..346ecf2 100644 --- a/charts/n8n/templates/hpa.yaml +++ b/charts/n8n/templates/hpa.yaml @@ -1,9 +1,5 @@ -{{- if .Values.autoscaling.enabled }} -{{- if semverCompare ">=1.25-0" .Capabilities.KubeVersion.GitVersion -}} +{{- if .Values.main.autoscaling.enabled }} apiVersion: autoscaling/v2 -{{- else -}} -apiVersion: autoscaling/v2beta1 -{{- end }} kind: HorizontalPodAutoscaler metadata: name: {{ include "n8n.fullname" . }} @@ -14,38 +10,23 @@ spec: apiVersion: apps/v1 kind: Deployment name: {{ include "n8n.fullname" . }} - minReplicas: {{ .Values.autoscaling.minReplicas }} - maxReplicas: {{ .Values.autoscaling.maxReplicas }} + minReplicas: {{ .Values.main.autoscaling.minReplicas }} + maxReplicas: {{ .Values.main.autoscaling.maxReplicas }} metrics: -{{- if semverCompare ">=1.25-0" .Capabilities.KubeVersion.GitVersion -}} - {{- if .Values.autoscaling.targetCPUUtilizationPercentage }} + {{- if .Values.main.autoscaling.targetCPUUtilizationPercentage }} - type: Resource resource: name: cpu target: type: Utilization - averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }} + averageUtilization: {{ .Values.main.autoscaling.targetCPUUtilizationPercentage }} {{- end }} - {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }} + {{- if .Values.main.autoscaling.targetMemoryUtilizationPercentage }} - type: Resource resource: name: memory target: type: Utilization - averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }} - {{- end }} -{{- else -}} - {{- if .Values.autoscaling.targetCPUUtilizationPercentage }} - - type: Resource - resource: - name: cpu - targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }} + averageUtilization: {{ .Values.main.autoscaling.targetMemoryUtilizationPercentage }} {{- end }} - {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }} - - type: Resource - resource: - name: memory - targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }} - {{- end }} -{{- end }} {{- end }} diff --git a/charts/n8n/templates/ingress.yaml b/charts/n8n/templates/ingress.yaml index 946b945..f917513 100644 --- a/charts/n8n/templates/ingress.yaml +++ b/charts/n8n/templates/ingress.yaml @@ -1,14 +1,7 @@ {{- if .Values.ingress.enabled -}} {{- $fullName := include "n8n.fullname" . -}} -{{- $svcPort := .Values.service.port -}} -{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}} +{{- $svcPort := .Values.main.service.port -}} apiVersion: networking.k8s.io/v1 -{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}} -apiVersion: networking.k8s.io/v1beta1 -{{- else -}} -apiVersion: extensions/v1beta1 -{{- end }} - kind: Ingress metadata: name: {{ $fullName }} @@ -19,7 +12,7 @@ metadata: {{- toYaml . 
| nindent 4 }} {{- end }} spec: - {{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }} + {{- if .Values.ingress.className }} ingressClassName: {{ .Values.ingress.className }} {{- end }} {{- if .Values.ingress.tls }} @@ -39,20 +32,13 @@ spec: paths: {{- range .paths }} - path: {{ . }} - {{- if semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion }} pathType: Prefix - {{- end }} backend: - {{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }} service: name: {{ $fullName }} port: number: {{ $svcPort }} - {{- else }} - serviceName: {{ $fullName }} - servicePort: {{ $svcPort }} - {{- end }} - {{- if $.Values.scaling.webhook.enabled }} + {{- if $.Values.webhook.enabled }} - path: {{ . }}webhook/ pathType: Prefix backend: diff --git a/charts/n8n/templates/pvc.yaml b/charts/n8n/templates/pvc.yaml index aec7bbc..deff860 100644 --- a/charts/n8n/templates/pvc.yaml +++ b/charts/n8n/templates/pvc.yaml @@ -1,9 +1,9 @@ -{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }} +{{- if and .Values.main.persistence.enabled (not .Values.main.persistence.existingClaim) }} apiVersion: v1 kind: PersistentVolumeClaim metadata: name: {{ include "n8n.fullname" . }} - {{- with .Values.persistence.annotations }} + {{- with .Values.main.persistence.annotations }} annotations: {{- range $key, $value := . }} {{ $key }}: {{ $value }} @@ -13,17 +13,17 @@ metadata: {{- include "n8n.labels" . | nindent 4 }} spec: accessModes: - {{- range .Values.persistence.accessModes }} + {{- range .Values.main.persistence.accessModes }} - {{ . | quote }} {{- end }} resources: requests: - storage: {{ .Values.persistence.size }} - {{- if .Values.persistence.storageClass }} - {{- if eq "-" .Values.persistence.storageClass }} + storage: {{ .Values.main.persistence.size }} + {{- if .Values.main.persistence.storageClass }} + {{- if eq "-" .Values.main.persistence.storageClass }} storageClassName: "" {{- else }} - storageClassName: {{ .Values.persistence.storageClass }} + storageClassName: {{ .Values.main.persistence.storageClass }} {{- end }} {{- end }} {{- end }} diff --git a/charts/n8n/templates/resources.yaml b/charts/n8n/templates/resources.yaml new file mode 100644 index 0000000..e4a64aa --- /dev/null +++ b/charts/n8n/templates/resources.yaml @@ -0,0 +1,48 @@ +{{/* +raw.resource will create a resource template that can be +merged with each item in `.Values.resources`. +*/}} +{{- define "raw.metadata" -}} +metadata: + labels: + app: {{ template "n8n.name" . }} + chart: {{ template "n8n.chart" . }} + release: {{ .Release.Name }} + heritage: {{ .Release.Service }} +{{- end }} + +{{- $metadata := fromYaml (include "raw.metadata" .) -}} + +{{- $resources := list -}} +{{- range .Values.resources }} + {{- if typeIs "string" . }} + {{/* Handle multi-part */}} + {{- range . | splitList "---\n" }} + {{- with . | fromYaml }} + {{- $resources = append $resources . }} + {{- end }} + {{- end }} + {{- else }} + {{- $resources = append $resources . }} + {{- end }} +{{- end }} + +{{- $templates := list -}} +{{- range .Values.templates }} + {{/* Handle multi-part */}} + {{- range . | splitList "---\n" }} + {{- if . | fromYaml }} + {{- $templates = append $templates . }} + {{- end }} + {{- end }} +{{- end }} + +{{- range $resources }} +--- +{{ merge . $metadata | toYaml }} +{{- end }} + +{{- range $templates }} +--- +{{ merge (tpl . 
$ | fromYaml) $metadata | toYaml }} +{{- end }} diff --git a/charts/n8n/templates/secret.webhook.yaml b/charts/n8n/templates/secret.webhook.yaml new file mode 100644 index 0000000..38d2bb4 --- /dev/null +++ b/charts/n8n/templates/secret.webhook.yaml @@ -0,0 +1,12 @@ +{{- if .Values.webhook.secret }} +apiVersion: v1 +kind: Secret +metadata: + name: webhook-secret-{{ include "n8n.fullname" . }} + labels: + {{- include "n8n.labels" . | nindent 4 }} +type: Opaque +data: + {{- include "toEnvVars" (dict "values" .Values.webhook.secret "prefix" "" "isSecret" true) | nindent 4 }} + +{{- end }} diff --git a/charts/n8n/templates/secret.yaml b/charts/n8n/templates/secret.yaml index 1a5520e..1cdc16a 100644 --- a/charts/n8n/templates/secret.yaml +++ b/charts/n8n/templates/secret.yaml @@ -1,14 +1,12 @@ -{{- if or .Values.secret .Values.n8n.encryption_key }} +{{- if .Values.main.secret }} apiVersion: v1 kind: Secret metadata: - name: {{ include "n8n.fullname" . }} + name: app-secret-{{ include "n8n.fullname" . }} labels: {{- include "n8n.labels" . | nindent 4 }} +type: Opaque data: - secret.json: {{ .Values.secret | toPrettyJson | b64enc }} - {{- if .Values.n8n.encryption_key }} - N8N_ENCRYPTION_KEY: {{ .Values.n8n.encryption_key | b64enc }} - {{- end }} + {{- include "toEnvVars" (dict "values" .Values.main.secret "prefix" "" "isSecret" true) | nindent 4 }} {{- end }} diff --git a/charts/n8n/templates/service.webhooks.yaml b/charts/n8n/templates/service.webhook.yaml similarity index 65% rename from charts/n8n/templates/service.webhooks.yaml rename to charts/n8n/templates/service.webhook.yaml index 8579651..2ebd84e 100644 --- a/charts/n8n/templates/service.webhooks.yaml +++ b/charts/n8n/templates/service.webhook.yaml @@ -1,18 +1,18 @@ -{{- if .Values.scaling.webhook.enabled }} +{{- if .Values.webhook.enabled }} apiVersion: v1 kind: Service metadata: name: {{ include "n8n.fullname" . }}-webhooks labels: {{- include "n8n.labels" . | nindent 4 }} - {{- with .Values.service.annotations }} + {{- with .Values.webhook.service.annotations }} annotations: {{- toYaml . | nindent 4 }} {{- end }} spec: - type: {{ .Values.service.type }} + type: {{ .Values.webhook.service.type | default "ClusterIP" }} ports: - - port: {{ .Values.service.port }} + - port: {{ .Values.webhook.service.port | default 80 }} targetPort: http protocol: TCP name: http diff --git a/charts/n8n/templates/service.yaml b/charts/n8n/templates/service.yaml index 239642d..e2c93cd 100644 --- a/charts/n8n/templates/service.yaml +++ b/charts/n8n/templates/service.yaml @@ -4,14 +4,14 @@ metadata: name: {{ include "n8n.fullname" . }} labels: {{- include "n8n.labels" . | nindent 4 }} - {{- with .Values.service.annotations }} + {{- with .Values.main.service.annotations }} annotations: {{- toYaml . | nindent 4 }} {{- end }} spec: - type: {{ .Values.service.type }} + type: {{ default "ClusterIP" .Values.main.service.type }} ports: - - port: {{ .Values.service.port }} + - port: {{ default 80 .Values.main.service.port }} targetPort: http protocol: TCP name: http diff --git a/charts/n8n/templates/serviceaccount.yaml b/charts/n8n/templates/serviceaccount.yaml index 5c629a4..137ea35 100644 --- a/charts/n8n/templates/serviceaccount.yaml +++ b/charts/n8n/templates/serviceaccount.yaml @@ -1,11 +1,11 @@ -{{- if .Values.serviceAccount.create -}} +{{- if .Values.main.serviceAccount.create -}} apiVersion: v1 kind: ServiceAccount metadata: name: {{ include "n8n.serviceAccountName" . }} labels: {{- include "n8n.labels" . 
| nindent 4 }} - {{- with .Values.serviceAccount.annotations }} + {{- with .Values.main.serviceAccount.annotations }} annotations: {{- toYaml . | nindent 4 }} {{- end }} diff --git a/charts/n8n/templates/tests/test-connection.yaml b/charts/n8n/templates/tests/test-connection.yaml index 5dbe7df..5a0d655 100644 --- a/charts/n8n/templates/tests/test-connection.yaml +++ b/charts/n8n/templates/tests/test-connection.yaml @@ -9,7 +9,11 @@ metadata: spec: containers: - name: wget - image: busybox - command: ['wget'] - args: ['{{ include "n8n.fullname" . }}:{{ .Values.service.port }}'] + image: alpine + command: ['sh', '-c'] + args: + - | + echo "Testing connection to {{ include "n8n.fullname" . }}:{{ default 80 .Values.main.service.port }}"; + nslookup {{ include "n8n.fullname" . }}; + wget {{ include "n8n.fullname" . }}:{{ default 80 .Values.main.service.port }}; restartPolicy: Never diff --git a/charts/n8n/values.yaml b/charts/n8n/values.yaml index 849ba2c..f0bbc47 100644 --- a/charts/n8n/values.yaml +++ b/charts/n8n/values.yaml @@ -1,193 +1,161 @@ -# Default helm values for n8n. -# Default values within the n8n application can be found under https://github.com/n8n-io/n8n/blob/master/packages/cli/src/config/index.ts -n8n: - # if not specified, n8n on first launch creates a random encryption key for encrypting saved credentials and saves it in the ~/.n8n folder - encryption_key: -defaults: - -config: {} - - -# existingSecret and secret are exclusive, with existingSecret taking priority. -# existingSecret: "" # Use an existing Kubernetes secret, e.g., created by hand or Vault operator. -# Dict with all n8n JSON config options, unlike config, the values here will end up in a secret. -secret: {} - -# Typical Example of a config in combination with a secret. -# config: -# database: -# type: postgresdb -# postgresdb: -# host: 192.168.0.52 -# secret: -# database: -# postgresdb: -# password: 'big secret' - -## ALL possible n8n Values - -# database: -# type: # Type of database to use - Other possible types ['sqlite', 'mariadb', 'mysqldb', 'postgresdb'] - default: sqlite -# tablePrefix: # Prefix for table names - default: '' -# postgresdb: -# database: # PostgresDB Database - default: n8n -# host: # PostgresDB Host - default: localhost -# password: # PostgresDB Password - default: '' -# port: # PostgresDB Port - default: 5432 -# user: # PostgresDB User - default: root -# schema: # PostgresDB Schema - default: public -# ssl: -# ca: # SSL certificate authority - default: '' -# cert: # SSL certificate - default: '' -# key: # SSL key - default: '' -# rejectUnauthorized: # If unauthorized SSL connections should be rejected - default: true -# mysqldb: -# database: # MySQL Database - default: n8n -# host: # MySQL Host - default: localhost -# password: # MySQL Password - default: '' -# port: # MySQL Port - default: 3306 -# user: # MySQL User - default: root -# credentials: -# overwrite: -# data: # Overwrites for credentials - default: "{}" -# endpoint: # Fetch credentials from API - default: '' +# README +# High level values structure, overview and explanation of the values.yaml file. +# 1. Global and chart wide values, like the image repository, image tag, etc. +# 2. Ingress, (default is nginx, but you can change it to your own ingress controller) +# 3. Main n8n app configuration + kubernetes specific settings +# 4. Worker related settings + kubernetes specific settings +# 5. Webhook related settings + kubernetes specific settings +# 6. 
Raw Resources to pass through your own manifests like GatewayAPI, ServiceMonitor etc. +# 7. Redis related settings + kubernetes specific settings + # -# executions: -# process: # In what process workflows should be executed - possible values [main, own] - default: own -# timeout: # Max run time (seconds) before stopping the workflow execution - default: -1 -# maxTimeout: # Max execution time (seconds) that can be set for a workflow individually - default: 3600 -# saveDataOnError: # What workflow execution data to save on error - possible values [all , none] - default: all -# saveDataOnSuccess: # What workflow execution data to save on success - possible values [all , none] - default: all -# saveDataManualExecutions: # Save data of executions when started manually via editor - default: false -# pruneData: # Delete data of past executions on a rolling basis - default: false -# pruneDataMaxAge: # How old (hours) the execution data has to be to get deleted - default: 336 -# pruneDataTimeout: # Timeout (seconds) after execution data has been pruned - default: 3600 -# generic: -# timezone: # The timezone to use - default: America/New_York -# path: # Path n8n is deployed to - default: "/" -# host: # Host name n8n can be reached - default: localhost -# port: # HTTP port n8n can be reached - default: 5678 -# listen_address: # IP address n8n should listen on - default: 0.0.0.0 -# protocol: # HTTP Protocol via which n8n can be reached - possible values [http , https] - default: http -# ssl_key: # SSL Key for HTTPS Protocol - default: '' -# ssl_cert: # SSL Cert for HTTPS Protocol - default: '' -# security: -# excludeEndpoints: # Additional endpoints to exclude auth checks. Multiple endpoints can be separated by colon - default: '' -# basicAuth: -# active: # If basic auth should be activated for editor and REST-API - default: false -# user: # The name of the basic auth user - default: '' -# password: # The password of the basic auth user - default: '' -# hash: # If password for basic auth is hashed - default: false -# jwtAuth: -# active: # If JWT auth should be activated for editor and REST-API - default: false -# jwtHeader: # The request header containing a signed JWT - default: '' -# jwtHeaderValuePrefix: # The request header value prefix to strip (optional) default: '' -# jwksUri: # The URI to fetch JWK Set for JWT authentication - default: '' -# jwtIssuer: # JWT issuer to expect (optional) - default: '' -# jwtNamespace: # JWT namespace to expect (optional) - default: '' -# jwtAllowedTenantKey: # JWT tenant key name to inspect within JWT namespace (optional) - default: '' -# jwtAllowedTenant: # JWT tenant to allow (optional) - default: '' -# endpoints: -# rest: # Path for rest endpoint default: rest -# webhook: # Path for webhook endpoint default: webhook -# webhookTest: # Path for test-webhook endpoint default: webhook-test -# webhookWaiting: # Path for waiting-webhook endpoint default: webhook-waiting -# externalHookFiles: # Files containing external hooks. 
Multiple files can be separated by colon - default: '' -# nodes: -# exclude: # Nodes not to load - default: "[]" -# errorTriggerType: # Node Type to use as Error Trigger - default: n8n-nodes-base.errorTrigger - -# Set additional environment variables on the Deployment -extraEnv: {} -# Set this if running behind a reverse proxy and the external port is different from the port n8n runs on -# WEBHOOK_URL: "https://n8n.myhost.com/ - -# Set additional environment from existing secrets -extraEnvSecrets: {} -# for example, to reuse existing postgres authentication secrets: -# DB_POSTGRESDB_USER: -# name: postgres-user-auth -# key: username -# DB_POSTGRESDB_PASSWORD: -# name: postgres-user-auth -# key: password - -## Common Kubernetes Config Settings -persistence: - ## If true, use a Persistent Volume Claim, If false, use emptyDir - ## - enabled: false - # what type volume, possible options are [existing, emptyDir, dynamic] dynamic for Dynamic Volume Provisioning, existing for using an existing Claim - type: emptyDir - ## Persistent Volume Storage Class - ## If defined, storageClassName: - ## If set to "-", storageClassName: "", which disables dynamic provisioning - ## If undefined (the default) or set to null, no storageClassName spec is - ## set, choosing the default provisioner. (gp2 on AWS, standard on - ## GKE, AWS & OpenStack) - ## - # storageClass: "-" - ## PVC annotations - # - # If you need this annotation include it under `values.yml` file and pvc.yml template will add it. - # This is not maintained at Helm v3 anymore. - # https://github.com/8gears/n8n-helm-chart/issues/8 - # - # annotations: - # helm.sh/resource-policy: keep - ## Persistent Volume Access Mode - ## - accessModes: - - ReadWriteOnce - ## Persistent Volume size - ## - size: 1Gi - ## Use an existing PVC - ## - # existingClaim: - -replicaCount: 1 - -# here you can specify the deployment strategy as Recreate or RollingUpdate with optional maxSurge and maxUnavailable -# If these options are not set, default values are 25% -# deploymentStrategy: -# type: RollingUpdate -# maxSurge: "50%" -# maxUnavailable: "50%" - -deploymentStrategy: - type: "Recreate" +# General Config +# + +# default .Chart.Name +nameOverride: +# default .Chart.Name or .Values.nameOverride +fullnameOverride: + + +# +# Common Kubernetes Config Settings for this entire n8n deployment +# image: repository: n8nio/n8n pullPolicy: IfNotPresent # Overrides the image tag whose default is the chart appVersion. tag: "" - imagePullSecrets: [] -nameOverride: "" -fullnameOverride: "" -serviceAccount: - # Specifies whether a service account should be created - create: true - # Annotations to add to the service account +# +# Ingress +# +ingress: + enabled: false annotations: {} - # The name of the service account to use. - # If not set and create is true, a name is generated using the fullname template - name: "" - -podAnnotations: {} - -podLabels: {} - -podSecurityContext: - runAsNonRoot: true - runAsUser: 1000 - runAsGroup: 1000 - fsGroup: 1000 - -securityContext: {} + # define a custom ingress class Name, like "traefik" or "nginx" + className: "" + hosts: + - host: workflow.example.com + paths: [] + tls: + - hosts: + - workflow.example.com + secretName: host-domain-cert + +# the main (n8n) application related configuration + Kubernetes specific settings +# The config: {} dictionary is converted to environmental variables in the ConfigMap. +main: + # See https://docs.n8n.io/hosting/configuration/environment-variables/ for all values. 
+ config: {} + # n8n: + # db: + # type: postgresdb + # postgresdb: + # host: 192.168.0.52 + + # Dictionary for secrets, unlike config:, the values here will end up in the secret file. + # The YAML entry db.postgresdb.password: my_secret is transformed DB_POSTGRESDB_password=bXlfc2VjcmV0 + # See https://docs.n8n.io/hosting/configuration/environment-variables/ + secret: {} + # n8n: + # if you run n8n stateless, you should provide an encryption key here. + # encryption_key: + # + # database: + # postgresdb: + # password: 'big secret' + + # Extra environmental variables, so you can reference other configmaps and secrets into n8n as env vars. + extraEnv: + # N8N_DB_POSTGRESDB_NAME: + # valueFrom: + # secretKeyRef: + # name: db-app + # key: dbname + # + # N8n Kubernetes specific settings + # + persistence: + # If true, use a Persistent Volume Claim, If false, use emptyDir + enabled: false + # what type volume, possible options are [existing, emptyDir, dynamic] dynamic for Dynamic Volume Provisioning, existing for using an existing Claim + type: emptyDir + # Persistent Volume Storage Class + # If defined, storageClassName: + # If set to "-", storageClassName: "", which disables dynamic provisioning + # If undefined (the default) or set to null, no storageClassName spec is + # set, choosing the default provisioner. (gp2 on AWS, standard on + # GKE, AWS & OpenStack) + # + # storageClass: "-" + # PVC annotations + # + # If you need this annotation include it under `values.yml` file and pvc.yml template will add it. + # This is not maintained at Helm v3 anymore. + # https://github.com/8gears/n8n-helm-chart/issues/8 + # + # annotations: + # helm.sh/resource-policy: keep + # Persistent Volume Access Mode + # + accessModes: + - ReadWriteOnce + # Persistent Volume size + size: 1Gi + # Use an existing PVC + # existingClaim: + + extraVolumes: [] + # - name: db-ca-cert + # secret: + # secretName: db-ca + # items: + # - key: ca.crt + # path: ca.crt + + extraVolumeMounts: [] + # - name: db-ca-cert + # mountPath: /etc/ssl/certs/postgresql + # readOnly: true + + + # Number of desired pods. More than one pod is supported in n8n enterprise. + replicaCount: 1 + + # here you can specify the deployment strategy as Recreate or RollingUpdate with optional maxSurge and maxUnavailable + # If these options are not set, default values are 25% + # deploymentStrategy: + # type: Recreate | RollingUpdate + # maxSurge: "50%" + # maxUnavailable: "50%" + + deploymentStrategy: + type: "Recreate" + # maxSurge: "50%" + # maxUnavailable: "50%" + + serviceAccount: + # Specifies whether a service account should be created + create: true + # Annotations to add to the service account + annotations: {} + # The name of the service account to use. 
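As a concrete illustration of the `secret:` mapping described above, this is roughly the Secret that `templates/secret.yaml` should render for the commented `db.postgresdb.password` example. The release name `my-n8n` is a placeholder, the upper-cased key is an assumption about the `toEnvVars` helper, and the chart labels are omitted for brevity.

```yaml
# values.yaml (hypothetical):
# main:
#   secret:
#     db:
#       postgresdb:
#         password: "my_secret"
#
# Sketch of the rendered object:
apiVersion: v1
kind: Secret
metadata:
  name: app-secret-my-n8n   # app-secret-<fullname>, per templates/secret.yaml
type: Opaque
data:
  DB_POSTGRESDB_PASSWORD: bXlfc2VjcmV0   # base64("my_secret"); key casing assumed
```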
+ # If not set and create is true, a name is generated using the fullname template + name: "" + + podAnnotations: {} + podLabels: {} + + podSecurityContext: + runAsNonRoot: true + runAsUser: 1000 + runAsGroup: 1000 + fsGroup: 1000 + + securityContext: {} # capabilities: # drop: # - ALL @@ -195,89 +163,245 @@ securityContext: {} # runAsNonRoot: true # runAsUser: 1000 -# here you can specify lifecycle hooks - it can be used e.g., to easily add packages to the container without building -# your own docker image -# see https://github.com/8gears/n8n-helm-chart/pull/30 -lifecycle: - {} - -# here's the sample configuration to add mysql-client to the container -# lifecycle: -# postStart: -# exec: -# command: ["/bin/sh", "-c", "apk add mysql-client"] - -# here you can override a command for main container -# it may be used to override a starting script (e.g., to resolve issues like https://github.com/n8n-io/n8n/issues/6412) or -# run additional preparation steps (e.g., installing additional software) -command: [] - -# sample configuration that overrides starting script and solves above issue (also it runs n8n as root, so be careful): -# command: -# - tini -# - -- -# - /bin/sh -# - -c -# - chmod o+rx /root; chown -R node /root/.n8n || true; chown -R node /root/.n8n; ln -s /root/.n8n /home/node; chown -R node /home/node || true; node /usr/local/bin/n8n - -# here you can override the livenessProbe for the main container -# it may be used to increase the timeout for the livenessProbe (e.g., to resolve issues like - -livenessProbe: - httpGet: - path: /healthz - port: http - # initialDelaySeconds: 30 - # periodSeconds: 10 - # timeoutSeconds: 5 - # failureThreshold: 6 - # successThreshold: 1 - -# here you can override the readinessProbe for the main container -# it may be used to increase the timeout for the readinessProbe (e.g., to resolve issues like - -readinessProbe: - httpGet: - path: /healthz - port: http - # initialDelaySeconds: 30 - # periodSeconds: 10 - # timeoutSeconds: 5 - # failureThreshold: 6 - # successThreshold: 1 - -# here you can add init containers to the various deployments -initContainers: [] - -service: - type: ClusterIP - port: 80 - annotations: {} + # here you can specify lifecycle hooks - it can be used e.g., to easily add packages to the container without building + # your own docker image + # see https://github.com/8gears/n8n-helm-chart/pull/30 + lifecycle: {} + + # here's the sample configuration to add mysql-client to the container + # lifecycle: + # postStart: + # exec: + # command: ["/bin/sh", "-c", "apk add mysql-client"] + + # here you can override a command for main container + # it may be used to override a starting script (e.g., to resolve issues like https://github.com/n8n-io/n8n/issues/6412) or run additional preparation steps (e.g., installing additional software) + command: [] + + # sample configuration that overrides starting script and solves above issue (also it runs n8n as root, so be careful): + # command: + # - tini + # - -- + # - /bin/sh + # - -c + # - chmod o+rx /root; chown -R node /root/.n8n || true; chown -R node /root/.n8n; ln -s /root/.n8n /home/node; chown -R node /home/node || true; node /usr/local/bin/n8n + + # here you can override the livenessProbe for the main container + # it may be used to increase the timeout for the livenessProbe (e.g., to resolve issues like + + livenessProbe: + httpGet: + path: /healthz + port: http + # initialDelaySeconds: 30 + # periodSeconds: 10 + # timeoutSeconds: 5 + # failureThreshold: 6 + # successThreshold: 1 + + # 
here you can override the readinessProbe for the main container + # it may be used to increase the timeout for the readinessProbe (e.g., to resolve issues like + + readinessProbe: + httpGet: + path: /healthz + port: http + # initialDelaySeconds: 30 + # periodSeconds: 10 + # timeoutSeconds: 5 + # failureThreshold: 6 + # successThreshold: 1 + + # List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. + # See https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ + initContainers: [] + # - name: init-data-dir + # image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" + # command: [ "/bin/sh", "-c", "mkdir -p /home/node/.n8n/" ] + # volumeMounts: + # - name: data + # mountPath: /home/node/.n8n + + + service: + annotations: {} + # -- Service types allow you to specify what kind of Service you want. + # E.g., ClusterIP, NodePort, LoadBalancer, ExternalName + type: ClusterIP + # -- Service port + port: 80 + + resources: {} + # We usually recommend not specifying default resources and to leave this as a conscious + # choice for the user. This also increases chances charts run on environments with little + # resources, such as Minikube. If you do want to specify resources, uncomment the following + # lines, adjust them as necessary, and remove the curly braces after 'resources:'. + # limits: + # cpu: 100m + # memory: 128Mi + # requests: + # cpu: 100m + # memory: 128Mi -ingress: - enabled: false - annotations: {} - # kubernetes.io/ingress.class: nginx - # kubernetes.io/tls-acme: "true" - hosts: - - host: chart-example.local - paths: [] - tls: [] - # - secretName: chart-example-tls - # hosts: - # - chart-example.local + autoscaling: + enabled: false + minReplicas: 1 + maxReplicas: 100 + targetCPUUtilizationPercentage: 80 + # targetMemoryUtilizationPercentage: 80 - # define a custom incressClassName, like "traefik" or "nginx" - className: "" + nodeSelector: {} + tolerations: [] + affinity: {} -workerResources: - {} +# # # # # # # # # # # # # # # # +# +# Worker related settings +# +worker: + enabled: false + count: 2 + # You can define the number of jobs a worker can run in parallel by using the concurrency flag. It defaults to 10. To change it: + concurrency: 10 -webhookResources: - {} + # + # Worker Kubernetes specific settings + # + persistence: + # If true, use a Persistent Volume Claim, If false, use emptyDir + enabled: false + # what type volume, possible options are [existing, emptyDir, dynamic] dynamic for Dynamic Volume Provisioning, existing for using an existing Claim + type: emptyDir + # Persistent Volume Storage Class + # If defined, storageClassName: + # If set to "-", storageClassName: "", which disables dynamic provisioning + # If undefined (the default) or set to null, no storageClassName spec is + # set, choosing the default provisioner. (gp2 on AWS, standard on + # GKE, AWS & OpenStack) + # + # storageClass: "-" + # PVC annotations + # + # If you need this annotation include it under `values.yml` file and pvc.yml template will add it. + # This is not maintained at Helm v3 anymore. + # https://github.com/8gears/n8n-helm-chart/issues/8 + # + # annotations: + # helm.sh/resource-policy: keep + # Persistent Volume Access Mode + accessModes: + - ReadWriteOnce + # Persistent Volume size + size: 1Gi + # Use an existing PVC + # existingClaim: + # Number of desired pods. 
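Note that the deployment template earlier in this diff no longer injects an `init-data-dir` init container automatically when dynamic persistence is used; anything you need now has to be listed under `main.initContainers` yourself. A minimal sketch, mirroring the commented example above (the 2Gi size is an arbitrary choice):

```yaml
main:
  persistence:
    enabled: true
    type: dynamic    # dynamically provision a PVC for /home/node/.n8n
    size: 2Gi
  initContainers:
    # re-create the data-dir init container that older chart versions added automatically
    - name: init-data-dir
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
      command: ["/bin/sh", "-c", "mkdir -p /home/node/.n8n/"]
      volumeMounts:
        - name: data
          mountPath: /home/node/.n8n
```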
+ replicaCount: 1 + + # here you can specify the deployment strategy as Recreate or RollingUpdate with optional maxSurge and maxUnavailable + # If these options are not set, default values are 25% + # deploymentStrategy: + # type: RollingUpdate + # maxSurge: "50%" + # maxUnavailable: "50%" + + deploymentStrategy: + type: "Recreate" + # maxSurge: "50%" + # maxUnavailable: "50%" + + serviceAccount: + # Specifies whether a service account should be created + create: true + # Annotations to add to the service account + annotations: {} + # The name of the service account to use. + # If not set and create is true, a name is generated using the fullname template + name: "" + + podAnnotations: {} + + podLabels: {} + + podSecurityContext: + runAsNonRoot: true + runAsUser: 1000 + runAsGroup: 1000 + fsGroup: 1000 + + securityContext: {} + # capabilities: + # drop: + # - ALL + # readOnlyRootFilesystem: true + # runAsNonRoot: true + # runAsUser: 1000 -resources: - {} + # here you can specify lifecycle hooks - it can be used e.g., to easily add packages to the container without building + # your own docker image + # see https://github.com/8gears/n8n-helm-chart/pull/30 + lifecycle: {} + + # here's the sample configuration to add mysql-client to the container + # lifecycle: + # postStart: + # exec: + # command: ["/bin/sh", "-c", "apk add mysql-client"] + + # here you can override a command for worker container + # it may be used to override a starting script (e.g., to resolve issues like https://github.com/n8n-io/n8n/issues/6412) or + # run additional preparation steps (e.g., installing additional software) + command: [] + + # sample configuration that overrides starting script and solves above issue (also it runs n8n as root, so be careful): + # command: + # - tini + # - -- + # - /bin/sh + # - -c + # - chmod o+rx /root; chown -R node /root/.n8n || true; chown -R node /root/.n8n; ln -s /root/.n8n /home/node; chown -R node /home/node || true; node /usr/local/bin/n8n + + # command args + commandArgs: [] + + # here you can override the livenessProbe for the main container + # it may be used to increase the timeout for the livenessProbe (e.g., to resolve issues like + livenessProbe: + httpGet: + path: /healthz + port: http + # initialDelaySeconds: 30 + # periodSeconds: 10 + # timeoutSeconds: 5 + # failureThreshold: 6 + # successThreshold: 1 + + # here you can override the readinessProbe for the main container + # it may be used to increase the timeout for the readinessProbe (e.g., to resolve issues like + + readinessProbe: + httpGet: + path: /healthz + port: http + # initialDelaySeconds: 30 + # periodSeconds: 10 + # timeoutSeconds: 5 + # failureThreshold: 6 + # successThreshold: 1 + + # List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. + # See https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ + initContainers: [] + + service: + annotations: {} + # -- Service types allow you to specify what kind of Service you want. + # E.g., ClusterIP, NodePort, LoadBalancer, ExternalName + type: ClusterIP + # -- Service port + port: 80 + + resources: {} # We usually recommend not specifying default resources and to leave this as a conscious # choice for the user. This also increases chances charts run on environments with little # resources, such as Minikube. 
If you do want to specify resources, uncomment the following @@ -289,44 +413,242 @@ resources: # cpu: 100m # memory: 128Mi -autoscaling: + autoscaling: + enabled: false + minReplicas: 1 + maxReplicas: 100 + targetCPUUtilizationPercentage: 80 + # targetMemoryUtilizationPercentage: 80 + + nodeSelector: {} + tolerations: [] + affinity: {} + +# Webhook related settings +# With .Values.scaling.webhook.enabled=true you disable Webhooks from the main process, but you enable the processing on a different Webhook instance. +# See https://github.com/8gears/n8n-helm-chart/issues/39#issuecomment-1579991754 for the full explanation. +# Webhook processes rely on Redis too. +webhook: enabled: false - minReplicas: 1 - maxReplicas: 100 - targetCPUUtilizationPercentage: 80 - # targetMemoryUtilizationPercentage: 80 - -nodeSelector: {} + # additional (to main) config for webhook + config: {} + # additional (to main) config for webhook + secret: {} -tolerations: [] + # Extra environmental variables, so you can reference other configmaps and secrets into n8n as env vars. + extraEnv: {} + # WEBHOOK_URL: + # value: "http://webhook.domain.tld" -affinity: {} -scaling: - enabled: false - - worker: - count: 2 - concurrency: 2 - # With .Values.scaling.webhook.enabled=true you disable Webhooks from the main process, but you enable the processing on a different Webhook instance. - # See https://github.com/8gears/n8n-helm-chart/issues/39#issuecomment-1579991754 for the full explanation. - webhook: + # + # Webhook Kubernetes specific settings + # + persistence: + # If true, use a Persistent Volume Claim, If false, use emptyDir enabled: false - count: 1 + # what type volume, possible options are [existing, emptyDir, dynamic] dynamic for Dynamic Volume Provisioning, existing for using an existing Claim + type: emptyDir + # Persistent Volume Storage Class + # If defined, storageClassName: + # If set to "-", storageClassName: "", which disables dynamic provisioning + # If undefined (the default) or set to null, no storageClassName spec is + # set, choosing the default provisioner. (gp2 on AWS, standard on + # GKE, AWS & OpenStack) + # + # storageClass: "-" + # PVC annotations + # + # If you need this annotation include it under `values.yml` file and pvc.yml template will add it. + # This is not maintained at Helm v3 anymore. + # https://github.com/8gears/n8n-helm-chart/issues/8 + # + # annotations: + # helm.sh/resource-policy: keep + # Persistent Volume Access Mode + # + accessModes: + - ReadWriteOnce + # Persistent Volume size + # + size: 1Gi + # Use an existing PVC + # + # existingClaim: + + # Number of desired pods. + replicaCount: 1 + + # here you can specify the deployment strategy as Recreate or RollingUpdate with optional maxSurge and maxUnavailable + # If these options are not set, default values are 25% + # deploymentStrategy: + # type: RollingUpdate + # maxSurge: "50%" + # maxUnavailable: "50%" + + deploymentStrategy: + type: "Recreate" + + nameOverride: "" + fullnameOverride: "" + + serviceAccount: + # Specifies whether a service account should be created + create: true + # Annotations to add to the service account + annotations: {} + # The name of the service account to use. 
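When `webhook.enabled` is set, the ingress template earlier in this diff adds an extra `<path>webhook/` route alongside each configured path, so webhook traffic is served by the separate `<release>-webhooks` Service instead of the main instance (the exact backend wiring lives in the ingress template). A minimal, hedged sketch with a placeholder hostname:

```yaml
webhook:
  enabled: true
  extraEnv:
    WEBHOOK_URL:
      value: "https://n8n.example.com/"   # externally reachable URL, placeholder
ingress:
  enabled: true
  hosts:
    - host: n8n.example.com
      paths:
        - /
# Note: webhook processors rely on queue mode with Redis/Valkey,
# see the redis settings further down in this values.yaml.
```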
+ # If not set and create is true, a name is generated using the fullname template + name: "" + + podAnnotations: {} + + podLabels: {} + + podSecurityContext: + runAsNonRoot: true + runAsUser: 1000 + runAsGroup: 1000 + fsGroup: 1000 + + securityContext: {} + # capabilities: + # drop: + # - ALL + # readOnlyRootFilesystem: true + # runAsNonRoot: true + # runAsUser: 1000 - redis: - host: - password: + # here you can specify lifecycle hooks - it can be used e.g., to easily add packages to the container without building + # your own docker image + # see https://github.com/8gears/n8n-helm-chart/pull/30 + lifecycle: {} + + # here's the sample configuration to add mysql-client to the container + # lifecycle: + # postStart: + # exec: + # command: ["/bin/sh", "-c", "apk add mysql-client"] + + # here you can override a command for main container + # it may be used to override a starting script (e.g., to resolve issues like https://github.com/n8n-io/n8n/issues/6412) or + # run additional preparation steps (e.g., installing additional software) + command: [] + + # sample configuration that overrides starting script and solves above issue (also it runs n8n as root, so be careful): + # command: + # - tini + # - -- + # - /bin/sh + # - -c + # - chmod o+rx /root; chown -R node /root/.n8n || true; chown -R node /root/.n8n; ln -s /root/.n8n /home/node; chown -R node /home/node || true; node /usr/local/bin/n8n + # Command Arguments + commandArgs: [] + + # here you can override the livenessProbe for the main container + # it may be used to increase the timeout for the livenessProbe (e.g., to resolve issues like + + livenessProbe: + httpGet: + path: /healthz + port: http + # initialDelaySeconds: 30 + # periodSeconds: 10 + # timeoutSeconds: 5 + # failureThreshold: 6 + # successThreshold: 1 + + # here you can override the readinessProbe for the main container + # it may be used to increase the timeout for the readinessProbe (e.g., to resolve issues like + + readinessProbe: + httpGet: + path: /healthz + port: http + # initialDelaySeconds: 30 + # periodSeconds: 10 + # timeoutSeconds: 5 + # failureThreshold: 6 + # successThreshold: 1 + + # List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. + # See https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ + initContainers: [] + + service: + annotations: {} + # -- Service types allow you to specify what kind of Service you want. + # E.g., ClusterIP, NodePort, LoadBalancer, ExternalName + type: ClusterIP + # -- Service port + port: 80 + + resources: {} + # We usually recommend not specifying default resources and to leave this as a conscious + # choice for the user. This also increases chances charts run on environments with little + # resources, such as Minikube. If you do want to specify resources, uncomment the following + # lines, adjust them as necessary, and remove the curly braces after 'resources:'. + # limits: + # cpu: 100m + # memory: 128Mi + # requests: + # cpu: 100m + # memory: 128Mi + autoscaling: + enabled: false + minReplicas: 1 + maxReplicas: 100 + targetCPUUtilizationPercentage: 80 + # targetMemoryUtilizationPercentage: 80 + nodeSelector: {} + tolerations: [] + affinity: {} +# +# Additional resources +# -## Bitnami Redis configuration -## https://github.com/bitnami/charts/tree/master/bitnami/redis +# Takes a list of Kubernetes resources and merges each resource with a default metadata.labels map and +# installs the result. 
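For illustration, given a `resources:` entry like the commented `example-config` below, the rendered manifest should come out with the chart's default labels merged into its metadata. The exact label values depend on the chart helpers and the release; everything shown here is illustrative only.

```yaml
# Input entry (values.yaml):
# resources:
#   - apiVersion: v1
#     kind: ConfigMap
#     metadata:
#       name: example-config
#     data:
#       example.property.1: "value1"
#
# Roughly what gets installed for a release named "my-n8n":
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
  labels:
    app: n8n
    chart: n8n-1.0.0
    release: my-n8n
    heritage: Helm
data:
  example.property.1: "value1"
```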
+# Use this to add any arbitrary Kubernetes manifests alongside this chart instead of kubectl and scripts. +resources: [] +# - apiVersion: v1 +# kind: ConfigMap +# metadata: +# name: example-config +# data: +# example.property.1: "value1" +# example.property.2: "value2" +# As an alternative to the above, you can also use a string as the value of the data field. +# - | +# apiVersion: v1 +# kind: ConfigMap +# metadata: +# name: example-config-string +# data: +# example.property.1: "value1" +# example.property.2: "value2" + +# Add additional templates. +# In contrast to the resources field, these templates are not merged with the default metadata.labels map. +# The templates are rendered with the values.yaml file as the context. +templates: [] +# - | +# apiVersion: v1 +# kind: ConfigMap +# metadata: +# name: my-config +# stringData: +# image_name: {{ .Values.image.repository }} + +# Bitnami Valkey configuration +# https://artifacthub.io/packages/helm/bitnami/valkey redis: enabled: false architecture: standalone - master: + primary: persistence: - enabled: true + enabled: false existingClaim: "" size: 2Gi diff --git a/examples/8gears.yaml b/examples/8gears.yaml new file mode 100644 index 0000000..7247332 --- /dev/null +++ b/examples/8gears.yaml @@ -0,0 +1,92 @@ +#Prod like set up with CloudNativePG and nginx-ingress +image: + repository: 8gears.container-registry.com/ops/n8n + tag: 1.76.1 +imagePullSecrets: + - name: 8gears-registry-n8n + +n8n: + config: + db: + type: postgresdb + postgresdb: + host: db-rw + user: n8n +# password: password is read from cnpg db-app secretKeyRef + pool: + size: 10 + ssl: + enabled: true + reject_Unauthorized: true + ca_file: "/home/ssl/certs/postgresql/ca.crt" + secret: + n8n: + encryption_key: "" + + extraEnv: + DB_POSTGRESDB_PASSWORD: + valueFrom: + secretKeyRef: + name: db-app + key: password + # Mount CA Cert to N8N container + extraVolumeMounts: + - name: db-ca-cert + mountPath: /home/ssl/certs/postgresql + readOnly: true + + extraVolumes: + - name: db-ca-cert + secret: + secretName: db-ca + items: + - key: ca.crt + path: ca.crt + resources: + limits: + memory: 2048Mi + requests: + memory: 512Mi + +ingress: + enabled: true + className: nginx + annotations: + cert-manager.io/cluster-issuer: letsencrypt-prod + nginx.ingress.kubernetes.io/proxy-body-size: "0" + nginx.ingress.kubernetes.io/proxy-buffering: "off" + nginx.ingress.kubernetes.io/proxy-request-buffering: "off" + nginx.ingress.kubernetes.io/ssl-redirect: "true" + hosts: + - host: n8n-dev.8gears.com + paths: + - / + tls: + - secretName: n8n-ingress-tls + hosts: + - n8n-dev.8gears.com + + + +# cnpg DB cluster request +resources: + - apiVersion: postgresql.cnpg.io/v1 + kind: Cluster + metadata: + name: db + spec: + instances: 1 + bootstrap: + initdb: + database: n8n + owner: n8n + postgresql: + parameters: + shared_buffers: "64MB" + resources: + requests: + memory: "512Mi" + limits: + memory: "512Mi" + storage: + size: 1Gi diff --git a/examples/README.md b/examples/README.md new file mode 100644 index 0000000..a343234 --- /dev/null +++ b/examples/README.md @@ -0,0 +1,13 @@ +# Example Values Files + +this directory contains some example values +for to help you get started with a setup for your particular use case. 
+ + +## Examples + +* values_local.yaml - n8n on local kind/k3s cluster for testing on localhost +* aws - n8n on AWS with EKS, ingress-nginx +* simple-prod - simple production setup with AWS + + diff --git a/examples/values_full.yaml b/examples/values_full.yaml new file mode 100644 index 0000000..d5217b6 --- /dev/null +++ b/examples/values_full.yaml @@ -0,0 +1,109 @@ +#Prod like set up with CloudNativePG and nginx-ingress +image: + repository: 8gears.container-registry.com/ops/n8n + tag: 1.76.1 +imagePullSecrets: + - name: 8gears-registry-n8n + +main: + config: + executions_mode: queue + db: + type: postgresdb + postgresdb: + host: db-rw + user: n8n +# password: password is read from cnpg db-app secretKeyRef + pool: + size: 10 + ssl: + enabled: true + reject_Unauthorized: true + ca_file: "/home/ssl/certs/postgresql/ca.crt" + queue: + bull: + redis: + host: redis + port: 6379 + secret: + n8n: + encryption_key: "" + + extraEnv: + DB_POSTGRESDB_PASSWORD: + valueFrom: + secretKeyRef: + name: db-app + key: password + # Mount the CNPG CA Cert into N8N container + extraVolumeMounts: + - name: db-ca-cert + mountPath: /home/ssl/certs/postgresql + readOnly: true + + extraVolumes: + - name: db-ca-cert + secret: + secretName: db-ca + items: + - key: ca.crt + path: ca.crt + resources: + limits: + memory: 2048Mi + requests: + memory: 512Mi + +webhook: + enabled: true + config: + ASDF: "1234" + extraEnv: + WEBHOOK_URL: + value: "http://webhook.domain.tld" +redis: + enabled: true + + +ingress: + enabled: true + className: nginx + annotations: + cert-manager.io/cluster-issuer: letsencrypt-prod + nginx.ingress.kubernetes.io/proxy-body-size: "0" + nginx.ingress.kubernetes.io/proxy-buffering: "off" + nginx.ingress.kubernetes.io/proxy-request-buffering: "off" + nginx.ingress.kubernetes.io/ssl-redirect: "true" + hosts: + - host: n8n-dev.8gears.com + paths: + - / + tls: + - secretName: n8n-ingress-tls + hosts: + - n8n-dev.8gears.com + + + +# cnpg DB cluster request +resources: + - apiVersion: postgresql.cnpg.io/v1 + kind: Cluster + metadata: + name: db + spec: + instances: 1 + bootstrap: + initdb: + database: n8n + owner: n8n + postgresql: + parameters: + shared_buffers: "64MB" + resources: + requests: + memory: "512Mi" + limits: + memory: "512Mi" + storage: + size: 1Gi diff --git a/examples/values_small.yaml b/examples/values_small.yaml new file mode 100644 index 0000000..2acf3a2 --- /dev/null +++ b/examples/values_small.yaml @@ -0,0 +1,16 @@ +#small deployment with nodeport for local testing or small deployments +main: + config: + n8n: + hide_usage_page: true + secret: + n8n: + encryption_key: "" + resources: + limits: + memory: 2048Mi + requests: + memory: 512Mi + service: + type: NodePort + port: 5678 diff --git a/examples/values_small_prod.yaml b/examples/values_small_prod.yaml new file mode 100644 index 0000000..1413b18 --- /dev/null +++ b/examples/values_small_prod.yaml @@ -0,0 +1,92 @@ +#Prod like set up with CloudNativePG and nginx-ingress +image: + repository: 8gears.container-registry.com/ops/n8n + tag: 1.76.1 +imagePullSecrets: + - name: 8gears-registry-n8n + +main: + config: + db: + type: postgresdb + postgresdb: + host: db-rw + user: n8n +# password: password is read from cnpg db-app secretKeyRef + pool: + size: 10 + ssl: + enabled: true + reject_Unauthorized: true + ca_file: "/home/ssl/certs/postgresql/ca.crt" + secret: + n8n: + encryption_key: "" + + extraEnv: + DB_POSTGRESDB_PASSWORD: + valueFrom: + secretKeyRef: + name: db-app + key: password + # Mount the CNPG CA Cert into N8N container + 
extraVolumeMounts: + - name: db-ca-cert + mountPath: /home/ssl/certs/postgresql + readOnly: true + + extraVolumes: + - name: db-ca-cert + secret: + secretName: db-ca + items: + - key: ca.crt + path: ca.crt + resources: + limits: + memory: 2048Mi + requests: + memory: 512Mi + +ingress: + enabled: true + className: nginx + annotations: + cert-manager.io/cluster-issuer: letsencrypt-prod + nginx.ingress.kubernetes.io/proxy-body-size: "0" + nginx.ingress.kubernetes.io/proxy-buffering: "off" + nginx.ingress.kubernetes.io/proxy-request-buffering: "off" + nginx.ingress.kubernetes.io/ssl-redirect: "true" + hosts: + - host: n8n-dev.8gears.com + paths: + - / + tls: + - secretName: n8n-ingress-tls + hosts: + - n8n-dev.8gears.com + + + +# cnpg DB cluster request +resources: + - apiVersion: postgresql.cnpg.io/v1 + kind: Cluster + metadata: + name: db + spec: + instances: 1 + bootstrap: + initdb: + database: n8n + owner: n8n + postgresql: + parameters: + shared_buffers: "64MB" + resources: + requests: + memory: "512Mi" + limits: + memory: "512Mi" + storage: + size: 1Gi
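To tie the sections above together, here is a condensed sketch of a queue-mode setup: the main instance configured for queue execution, a worker fleet, dedicated webhook processors and the bundled Valkey as broker. It follows `examples/values_full.yaml`; the Redis host and the encryption key are placeholders, and the env-var comment assumes the usual `config`-to-environment mapping.

```yaml
main:
  config:
    executions_mode: queue        # should translate to EXECUTIONS_MODE=queue
    queue:
      bull:
        redis:
          host: redis             # placeholder: service name of your Redis/Valkey instance
          port: 6379
  secret:
    n8n:
      encryption_key: "change-me" # share one key across main, worker and webhook instances

worker:
  enabled: true
  concurrency: 10                 # jobs a single worker runs in parallel

webhook:
  enabled: true                   # see the webhook/ingress sketch above for WEBHOOK_URL wiring

redis:
  enabled: true                   # deploys the bundled Valkey subchart
  architecture: standalone
```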