---
title: K3s Cluster Configuration Reference
---
This section covers the configuration options that are available in Rancher for a new or existing K3s Kubernetes cluster.
You can configure the Kubernetes options one of two ways:
- Rancher UI: Use the Rancher UI to select options that are commonly customized when setting up a Kubernetes cluster.
- Cluster Config File: Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create a K3s config file. Using a config file allows you to set any of the options available in a K3s installation.
The Rancher UI provides two ways to edit a cluster:
- With a form.
- With YAML.
The form covers the most frequently needed options for clusters.
To edit your cluster:
- Click ☰ > Cluster Management.
- Go to the cluster you want to configure and click ⋮ > Edit Config.
For a complete reference of configurable options for K3s clusters in YAML, see the K3s documentation.
To edit your cluster with YAML:
- Click ☰ > Cluster Management.
- Go to the cluster you want to configure and click ⋮ > Edit as YAML.
- Edit the K3s options under the `rkeConfig` directive.
This subsection covers generic machine pool configurations. For specific infrastructure provider configurations, refer to the machine configuration reference for your provider.
- Pool Name: The name of the machine pool.
- Machine Count: The number of machines in the pool.
- Roles: Option to assign etcd, control plane, and worker roles to nodes.
- Auto Replace: The amount of time nodes can be unreachable before they are automatically deleted and replaced.
- Drain Before Delete: Enables draining nodes by evicting all pods before the node is deleted.
- Labels: Add labels to nodes to help with organization and object selection. For details on label syntax requirements, see the Kubernetes documentation.
- Taints: Add taints to nodes to prevent pods from being scheduled to or executed on the nodes, unless the pods have matching tolerations. Labels and taints can also be set in YAML, as shown in the sketch after this list.
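Labels and taints can also be set per machine pool in the cluster YAML. The following is a minimal sketch under the `provisioning.cattle.io/v1` cluster spec; the pool name, label, and taint values are illustrative:

```yaml
spec:
  rkeConfig:
    machinePools:
      - name: pool1              # illustrative pool name
        quantity: 3
        workerRole: true
        labels:
          environment: test      # example label for organization and selection
        taints:
          - key: dedicated       # example taint; only pods with a matching toleration schedule here
            value: gpu
            effect: NoSchedule
```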
The version of Kubernetes installed on your cluster nodes. Rancher packages its own version of Kubernetes based on hyperkube.
For more detail, see Upgrading Kubernetes.
The default pod security admission configuration template for the cluster.
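In the cluster YAML, this template is referenced by name on the cluster spec. A minimal sketch, assuming the built-in `rancher-restricted` template:

```yaml
spec:
  defaultPodSecurityAdmissionConfigurationTemplateName: rancher-restricted
```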
Option to enable or disable secrets encryption. When enabled, secrets are encrypted using an AES-CBC key. If disabled, any previously encrypted secrets are not readable until encryption is enabled again. Refer to the K3s documentation for details.
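Secrets encryption corresponds to the `secrets-encryption` option shown in the example cluster config file later on this page. A minimal sketch that enables it:

```yaml
spec:
  rkeConfig:
    machineGlobalConfig:
      secrets-encryption: true   # encrypt secrets at rest with an AES-CBC key
```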
If your network provider allows project network isolation, you can choose whether to enable or disable inter-project communication.
Option to enable or disable SELinux support.
CoreDNS is installed as the default DNS provider. If CoreDNS is not installed, you must install an alternate DNS provider yourself. Refer to the K3s documentation for details.
Option to enable or disable the Klipper service load balancer. Refer to the K3s documentation for details.
Option to enable or disable the Traefik HTTP reverse proxy and load balancer. For more details and configuration options, see the K3s documentation.
Option to enable or disable local storage on the node(s).
Option to enable or disable the metrics server. If enabled, ensure port 10250 is opened for inbound TCP traffic.
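The packaged components above (CoreDNS, Klipper service load balancer, Traefik, local storage, and the metrics server) map to the standalone K3s `--disable` option. A minimal sketch, assuming the `disable` list is passed through `machineGlobalConfig`; the component names follow the K3s flag:

```yaml
spec:
  rkeConfig:
    machineGlobalConfig:
      disable:
        - traefik      # skip the packaged Traefik ingress controller
        - servicelb    # skip the Klipper service load balancer
```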
Additional Kubernetes manifests, managed as an add-on, to apply to the cluster on startup. Refer to the K3s documentation for details.
Option to set environment variables for K3s agents. The environment variables can be set as key-value pairs. Refer to the K3s documentation for more details.
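In the cluster YAML, agent environment variables are set under `agentEnvVars` as name/value pairs. A minimal sketch; the proxy address is an illustrative value:

```yaml
spec:
  agentEnvVars:
    - name: HTTP_PROXY                       # example variable name
      value: http://proxy.example.com:8080   # illustrative value
```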
Option to enable or disable recurring etcd snapshots. If enabled, users have the option to configure the frequency of snapshots. For details, refer to the K3s documentation.
Option to choose whether to expose etcd metrics to the public or only within the cluster.
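Both etcd settings appear under `rkeConfig` in the example cluster config file later on this page. A focused sketch:

```yaml
spec:
  rkeConfig:
    etcd:
      snapshotScheduleCron: 0 */5 * * *   # take a snapshot every 5 hours
      snapshotRetention: 5                # keep the 5 most recent snapshots
    machineGlobalConfig:
      etcd-expose-metrics: false          # keep etcd metrics cluster-internal
```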
IPv4/IPv6 network CIDRs to use for pod IPs (default: 10.42.0.0/16).
IPv4/IPv6 network CIDRs to use for service IPs (default: 10.43.0.0/16).
IPv4 Cluster IP for coredns service. Should be in your service-cidr range (default: 10.43.0.10).
Select the domain for the cluster. The default is `cluster.local`.
Option to change the range of ports that can be used for NodePort services. The default is `30000-32767`.
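These networking options correspond to standalone K3s server flags. A minimal sketch, assuming they are passed through `machineGlobalConfig`, shown here with their default values:

```yaml
spec:
  rkeConfig:
    machineGlobalConfig:
      cluster-cidr: 10.42.0.0/16            # pod IP range
      service-cidr: 10.43.0.0/16            # service IP range
      cluster-dns: 10.43.0.10               # cluster IP of the coredns service
      cluster-domain: cluster.local         # cluster domain
      service-node-port-range: 30000-32767  # NodePort range
```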
Option to truncate hostnames to 15 characters or less. You can only set this field during the initial creation of the cluster. You can't enable or disable the 15 character limit after cluster creation.
This setting only affects machine-provisioned clusters. Since custom clusters set hostnames during their own node creation process, which occurs outside of Rancher, this field doesn't restrict custom cluster hostname length.
Truncating hostnames in a cluster improves compatibility with Windows-based systems. Although Kubernetes allows hostnames up to 63 characters in length, systems that use NetBIOS restrict hostnames to 15 characters or less.
Add hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert.
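TLS alternate names map to the K3s `tls-san` option. A minimal sketch, assuming it is passed through `machineGlobalConfig`; the hostname and IP address are illustrative:

```yaml
spec:
  rkeConfig:
    machineGlobalConfig:
      tls-san:
        - k3s.example.com   # illustrative hostname
        - 203.0.113.10      # illustrative IP address
```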
Authorized Cluster Endpoint can be used to directly access the Kubernetes API server, without requiring communication through Rancher.
For more detail on how an authorized cluster endpoint works and why it is used, refer to the architecture section.
We recommend using a load balancer with the authorized cluster endpoint. For details, refer to the recommended architecture section.
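In the cluster YAML, the authorized cluster endpoint is configured under `localClusterAuthEndpoint` (shown empty in the example config file later on this page). A minimal sketch; the FQDN and CA certificate are placeholders for your load balancer's values:

```yaml
spec:
  localClusterAuthEndpoint:
    enabled: true
    fqdn: k3s-ace.example.com   # placeholder: FQDN that resolves to your load balancer
    caCerts: |-
      -----BEGIN CERTIFICATE-----
      ...placeholder CA certificate for the endpoint...
      -----END CERTIFICATE-----
```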
Select the image repository to pull Rancher images from. For more details and configuration options, see the K3s documentation.
Control Plane Concurrency: Select how many control plane nodes can be upgraded at the same time. Can be a fixed number or a percentage.
Worker Concurrency: Select how many worker nodes can be upgraded at the same time. Can be a fixed number or a percentage.
Control Plane Drain Nodes: Option to remove all pods from control plane nodes prior to upgrading.
Worker Drain Nodes: Option to remove all pods from worker nodes prior to upgrading.
Option to set kubelet options for different nodes. For available options, refer to the Kubernetes documentation.
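Kubelet options can be scoped to groups of nodes through `machineSelectorConfig` (described later on this page) using the K3s `kubelet-arg` option. A minimal sketch; the argument and label selector are illustrative:

```yaml
spec:
  rkeConfig:
    machineSelectorConfig:
      - config:
          kubelet-arg:
            - max-pods=250   # illustrative kubelet argument
        machineLabelSelector:
          matchLabels:
            rke.cattle.io/worker-role: 'true'   # illustrative selector: worker nodes only
```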
Editing clusters in YAML allows you to set configurations that are already listed in Configuration Options in the Rancher UI, as well as set Rancher-specific parameters.
Example Cluster Config File Snippet
```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
spec:
  cloudCredentialSecretName: cattle-global-data:cc-fllv6
  clusterAgentDeploymentCustomization: {}
  fleetAgentDeploymentCustomization: {}
  kubernetesVersion: v1.26.7+k3s1
  localClusterAuthEndpoint: {}
  rkeConfig:
    additionalManifest: ""
    chartValues: {}
    etcd:
      snapshotRetention: 5
      snapshotScheduleCron: 0 */5 * * *
    machineGlobalConfig:
      disable-apiserver: false
      disable-cloud-controller: false
      disable-controller-manager: false
      disable-etcd: false
      disable-kube-proxy: false
      disable-network-policy: false
      disable-scheduler: false
      etcd-expose-metrics: false
      kube-apiserver-arg:
        - audit-policy-file=/etc/rancher/k3s/user-audit-policy.yaml
        - audit-log-path=/etc/rancher/k3s/user-audit.logs
      profile: null
      secrets-encryption: false
    machinePools:
      - controlPlaneRole: true
        etcdRole: true
        machineConfigRef:
          kind: Amazonec2Config
          name: nc-test-pool1-pwl5h
        name: pool1
        quantity: 1
        unhealthyNodeTimeout: 0s
        workerRole: true
    machineSelectorConfig:
      - config:
          docker: false
          protect-kernel-defaults: false
          selinux: false
    machineSelectorFiles:
      - fileSources:
          - configMap:
              name: ''
            secret:
              name: audit-policy
              items:
                - key: audit-policy
                  path: /etc/rancher/k3s/user-audit-policy.yaml
        machineLabelSelector:
          matchLabels:
            rke.cattle.io/control-plane-role: 'true'
    registries: {}
    upgradeStrategy:
      controlPlaneConcurrency: '1'
      controlPlaneDrainOptions:
        deleteEmptyDirData: true
        disableEviction: false
        enabled: false
        force: false
        gracePeriod: -1
        ignoreDaemonSets: true
        ignoreErrors: false
        postDrainHooks: null
        preDrainHooks: null
        skipWaitForDeleteTimeoutSeconds: 0
        timeout: 120
      workerConcurrency: '1'
      workerDrainOptions:
        deleteEmptyDirData: true
        disableEviction: false
        enabled: false
        force: false
        gracePeriod: -1
        ignoreDaemonSets: true
        ignoreErrors: false
        postDrainHooks: null
        preDrainHooks: null
        skipWaitForDeleteTimeoutSeconds: 0
        timeout: 120
```
Specify additional manifests to deliver to the control plane nodes.
The value is a string, and will be placed at the path `/var/lib/rancher/k3s/server/manifests/rancher/addons.yaml` on target nodes.
Example:
```yaml
additionalManifest: |-
  apiVersion: v1
  kind: Namespace
  metadata:
    name: name-xxxx
```
:::note
If you want to customize system charts, you should use the `chartValues` field as described below. Alternatives, such as using a HelmChartConfig to customize the system charts via `additionalManifest`, can cause unexpected behavior due to having multiple HelmChartConfigs for the same chart.
:::
Specify the values for the system charts installed by K3s.
For more information about how K3s manages packaged components, refer to the K3s documentation.
Example:
```yaml
chartValues:
  chart-name:
    key: value
```
Specify K3s configuration. Any configuration change made here applies to every node. The configuration options available in the standalone version of K3s can be applied here.
Example:
```yaml
machineGlobalConfig:
  etcd-arg:
    - key1=value1
    - key2=value2
```
To make it easier to place files on nodes beforehand, Rancher expects the file's content as the value for the following options, while standalone K3s expects the values to be file paths:
- private-registry
- flannel-conf
Rancher delivers the files to the path `/var/lib/rancher/k3s/etc/config-files/<option>` on target nodes, and sets the proper options in the K3s server.
Example:
```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
spec:
  rkeConfig:
    machineGlobalConfig:
      private-registry: |
        mirrors:
          docker.io:
            endpoint:
              - "http://mycustomreg.com:5000"
        configs:
          "mycustomreg:5000":
            auth:
              username: xxxxxx # this is the registry username
              password: xxxxxx # this is the registry password
```
`machineSelectorConfig` is the same as `machineGlobalConfig`, except that a label selector can be specified with the configuration. The configuration will only be applied to nodes that match the provided label selector.
Multiple `config` entries are allowed, each specifying their own `machineLabelSelector`. A user can specify `matchExpressions`, `matchLabels`, both, or neither. Omitting the `machineLabelSelector` section of this field has the same effect as putting the config in the `machineGlobalConfig` section.
Example:
```yaml
machineSelectorConfig:
  - config:
      config-key: config-value
    machineLabelSelector:
      matchExpressions:
        - key: example-key
          operator: string # Valid operators are In, NotIn, Exists and DoesNotExist.
          values:
            - example-value1
            - example-value2
      matchLabels:
        key1: value1
        key2: value2
```
:::note
This feature is available in Rancher v2.7.2 and later.
:::
Deliver files to nodes, so that the files can be in place before initiating K3s server or agent processes.
The content of the file is retrieved from either a secret or a configmap. The target nodes are filtered by the `machineLabelSelector`.
Example:
```yaml
machineSelectorFiles:
  - fileSources:
      - secret:
          items:
            - key: example-key
              path: path-to-put-the-file-on-nodes
              permissions: 644 (optional)
              hash: base64-encoded-hash-of-the-content (optional)
          name: example-secret-name
    machineLabelSelector:
      matchExpressions:
        - key: example-key
          operator: string # Valid operators are In, NotIn, Exists and DoesNotExist.
          values:
            - example-value1
            - example-value2
      matchLabels:
        key1: value1
        key2: value2
  - fileSources:
      - configMap:
          items:
            - key: example-key
              path: path-to-put-the-file-on-nodes
              permissions: 644 (optional)
              hash: base64-encoded-hash-of-the-content (optional)
          name: example-configmap-name
    machineLabelSelector:
      matchExpressions:
        - key: example-key
          operator: string # Valid operators are In, NotIn, Exists and DoesNotExist.
          values:
            - example-value1
            - example-value2
      matchLabels:
        key1: value1
        key2: value2
```
The secret or configmap must meet the following requirements:
- It must be in the `fleet-default` namespace where the Cluster object exists.
- It must have the annotation `rke.cattle.io/object-authorized-for-clusters: cluster-name1,cluster-name2`, which permits the target clusters to use it.
:::tip
Rancher Dashboard provides an easy-to-use form for creating the secret or configmap.
:::
Example:
```yaml
apiVersion: v1
data:
  audit-policy: >-
    IyBMb2cgYWxsIHJlcXVlc3RzIGF0IHRoZSBNZXRhZGF0YSBsZXZlbC4KYXBpVmVyc2lvbjogYXVkaXQuazhzLmlvL3YxCmtpbmQ6IFBvbGljeQpydWxlczoKLSBsZXZlbDogTWV0YWRhdGE=
kind: Secret
metadata:
  annotations:
    rke.cattle.io/object-authorized-for-clusters: cluster1
  name: name1
  namespace: fleet-default
```
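For reference, the `audit-policy` value in the example above is the base64 encoding of the following audit policy:

```yaml
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
```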