modules/en/pages/advanced-resources/backing-image/backing-image.adoc (+1 -1)
@@ -187,7 +187,7 @@ Since v1.3.0, users can download existing backing image files to the local via U
 [discrete]
 ==== Clean up backing images in disks

-* Longhorn automatically cleans up the unused backing image files in the disks based on xref:references/settings.adoc#backing-image-cleanup-wait-interval[the setting `Backing Image Cleanup Wait Interval`]. But Longhorn will retain at least one file in a disk for each backing image anyway.
+* Longhorn automatically cleans up the unused backing image files in the disks based on xref:references/settings.adoc#_backing_image_cleanup_wait_interval[the setting `Backing Image Cleanup Wait Interval`]. But Longhorn will retain at least one file in a disk for each backing image anyway.
 * You can manually remove backing images from disks using the Longhorn UI. Go to *Setting* > *Backing Image*, and then click the name of a specific backing image. In the window that opens, select one or more disks and then click *Clean Up*.
 * Once there is one replica in a disk using a backing image, no matter what the replica's current state is, the backing image file in this disk cannot be cleaned up.
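Context for the anchor rewrite repeated throughout this changeset: Asciidoctor auto-generates an ID for every section title using the default `idprefix` (`_`) and `idseparator` (`_`), so xrefs must use the underscore form unless the target declares an explicit anchor. A minimal sketch of the mechanism (the snippet is illustrative, not copied from `references/settings.adoc`):

[source,asciidoc]
----
// In references/settings.adoc, this title auto-generates the ID
// _backing_image_cleanup_wait_interval:
== Backing Image Cleanup Wait Interval

// From any other page, the xref must target the generated ID:
xref:references/settings.adoc#_backing_image_cleanup_wait_interval[the setting `Backing Image Cleanup Wait Interval`]
----

The old `#backing-image-cleanup-wait-interval` form resolves only when the target page defines that ID explicitly (for example, with `[#backing-image-cleanup-wait-interval]` above the title), which is presumably why every xref below was switched to the generated form.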
modules/en/pages/advanced-resources/cluster-restore/rancher-cluster-restore.adoc (+1 -1)
@@ -8,7 +8,7 @@ This doc describes what users need to do after restoring the cluster with a Ranc

 * Most of the data and the underlying disks still exist in the cluster before the restore and can be directly reused then.
 * There is a backupstore holding all volume data.
-* The setting xref:references/settings.adoc#disable-revision-counter[`Disable Revision Counter`] is false. (It's false by default.) Otherwise, users need to manually check if the data among volume replicas are consistent, or directly restore volumes from backup.
+* The setting xref:references/settings.adoc#_disable_revision_counter[`Disable Revision Counter`] is false. (It's false by default.) Otherwise, users need to manually check if the data among volume replicas are consistent, or directly restore volumes from backup.
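The reuse assumption above hinges on `Disable Revision Counter` still being `false` after the restore. Longhorn exposes settings as custom resources in the `longhorn-system` namespace, so the value can be confirmed from the CLI; a sketch, assuming a default installation:

[source,shell]
----
# Print the current value; "false" is the default.
kubectl -n longhorn-system get settings.longhorn.io disable-revision-counter \
  -o jsonpath='{.value}'
----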
modules/en/pages/advanced-resources/data-cleanup/orphaned-data-cleanup.adoc (+1 -1)
@@ -159,4 +159,4 @@ Longhorn will not create an `orphan` resource for an orphaned directory when
 ** The volume volume.meta file is missing.
 * The orphaned replica directory is on an evicted node.
 * The orphaned replica directory is in an evicted disk.
-* The orphaned data cleanup mechanism does not clean up a stale replica, also known as an error replica. Instead, the stale replica is cleaned up according to the xref:nodes-and-volumes/volumes/create-volumes.adoc#creating-longhorn-volumes-with-kubectl[staleReplicaTimeout] setting.
+* The orphaned data cleanup mechanism does not clean up a stale replica, also known as an error replica. Instead, the stale replica is cleaned up according to the xref:nodes-and-volumes/volumes/create-volumes.adoc#_creating_longhorn_volumes_with_kubectl[staleReplicaTimeout] setting.
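The `staleReplicaTimeout` referenced in that last bullet is a per-volume value, commonly supplied through the CSI attributes of a `PersistentVolume` when creating a volume with `kubectl`. A hedged sketch of where it appears (names and values are illustrative; the timeout is in minutes):

[source,yaml]
----
apiVersion: v1
kind: PersistentVolume
metadata:
  name: longhorn-vol-pv            # illustrative name
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: driver.longhorn.io     # the Longhorn CSI driver
    fsType: ext4
    volumeAttributes:
      numberOfReplicas: "3"
      staleReplicaTimeout: "2880"  # minutes before an errored replica is cleaned up
    volumeHandle: existing-longhorn-volume  # illustrative volume name
----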
modules/en/pages/advanced-resources/deploy/storage-network.adoc (+4 -4)
@@ -43,7 +43,7 @@ If this is the case, provide a valid `NetworkAttachmentDefinition` and re-run Lo

 === Setting Storage Network After Longhorn Installation

-Set the setting xref:references/settings.adoc#storage-network[Storage Network].
+Set the setting xref:references/settings.adoc#_storage_network[Storage Network].

 [WARNING]
 ====
@@ -54,13 +54,13 @@ Longhorn is not aware of the updates. Hence this will cause malfunctioning and e

 === Setting Storage Network For RWX Volumes

-Configure the setting xref:references/settings.adoc#storage-network-for-rwx-volume-enabled[Storage Network For RWX Volume Enabled].
+Configure the setting xref:references/settings.adoc#_storage_network_for_rwx_volume_enabled[Storage Network For RWX Volume Enabled].

 = Limitation

-When an RWX volume is created with the storage network, the NFS mount point connection must be re-established when the CSI plugin pod restarts. Longhorn provides the xref:references/settings.adoc#automatically-delete-workload-pod-when-the-volume-is-detached-unexpectedly[Automatically Delete Workload Pod when The Volume Is Detached Unexpectedly] setting, which automatically deletes RWX volume workload pods when the CSI plugin pod restarts. However, the workload pod's NFS mount point could become unresponsive when the setting is disabled or the pod is not managed by a controller. In such cases, you must manually restart the CSI plugin pod.
+When an RWX volume is created with the storage network, the NFS mount point connection must be re-established when the CSI plugin pod restarts. Longhorn provides the xref:references/settings.adoc#_automatically_delete_workload_pod_when_the_volume_is_detached_unexpectedly[Automatically Delete Workload Pod when The Volume Is Detached Unexpectedly] setting, which automatically deletes RWX volume workload pods when the CSI plugin pod restarts. However, the workload pod's NFS mount point could become unresponsive when the setting is disabled or the pod is not managed by a controller. In such cases, you must manually restart the CSI plugin pod.

-For more information, see xref:important-notes/index.adoc#storage-network-support-for-read-write-many-rwx-volumes[Storage Network Support for Read-Write-Many (RWX) Volume] in Important Notes.
+For more information, see xref:important-notes/index.adoc#_storage_network_support_for_read_write_many_rwx_volumes[Storage Network Support for Read-Write-Many (RWX) Volume] in Important Notes.
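For reference when applying the `Storage Network` setting named in the first hunk: it takes a Multus `NetworkAttachmentDefinition` in `<namespace>/<name>` form. A sketch of a matching definition (the interface, CNI type, and IPAM range are assumptions for illustration):

[source,yaml]
----
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: storage-network            # illustrative name
  namespace: longhorn-system
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "ipam": { "type": "whereabouts", "range": "192.168.100.0/24" }
    }
----

With this definition, the setting value would be `longhorn-system/storage-network`.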
modules/en/pages/advanced-resources/os-distro-specific/okd-support.adoc (+2 -2)
@@ -52,7 +52,7 @@ For more information, see xref:deploy/install/install-with-kubectl.adoc[Install

 == Prepare A Customized Default Longhorn Disk (Optional)

-To understand more about configuring the disks for Longhorn, please refer to the section xref:nodes-and-volumes/nodes/default-disk-and-node-config.adoc#launch-longhorn-with-multiple-disks[Configuring Defaults for Nodes and Disks]
+To understand more about configuring the disks for Longhorn, please refer to the section xref:nodes-and-volumes/nodes/default-disk-and-node-config.adoc#_launch_longhorn_with_multiple_disks[Configuring Defaults for Nodes and Disks]

 Longhorn will use the directory `/var/lib/longhorn` as default storage mount point and that means Longhorn use the root device as the default storage. If you don't want to use the root device as the Longhorn storage, set *_defaultSettings.createDefaultDiskLabeledNodes_* true when installing Longhorn by helm:

@@ -124 +124 @@
-Please refer to the section xref:nodes-and-volumes/nodes/default-disk-and-node-config.adoc#customizing-default-disks-for-new-nodes[Customizing Default Disks for New Nodes] to label and annotate storage node on where your device is by oc commands:
+Please refer to the section xref:nodes-and-volumes/nodes/default-disk-and-node-config.adoc#_customizing_default_disks_for_new_nodes[Customizing Default Disks for New Nodes] to label and annotate storage node on where your device is by oc commands:
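The label-and-annotate flow referenced in the last hunk uses Longhorn's default-disk node metadata; with `oc` it might look like the following sketch (the node name and disk path are placeholders):

[source,shell]
----
# Opt the node into config-driven default-disk creation.
oc label node <storage-node> node.longhorn.io/create-default-disk='config'

# Describe the disk Longhorn should create on that node.
oc annotate node <storage-node> \
  node.longhorn.io/default-disks-config='[{"path":"/var/mnt/longhorn","allowScheduling":true}]'
----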
modules/en/pages/advanced-resources/os-distro-specific/talos-linux-support.adoc (+2 -2)
@@ -30,9 +30,9 @@ For detailed instructions, see the Talos documentation on https://www.talos.dev/

 Longhorn requires pod security `enforce: "privileged"`.

-By default, Talos Linux applies a `baseline` pod security profile across namespaces, except for the kube-system namespace. This default setting restricts Longhorn's ability to manage and access system resources. For more information, see xref:deploy/install/index.adoc#root-and-privileged-permission[Root and Privileged Permission].
+By default, Talos Linux applies a `baseline` pod security profile across namespaces, except for the kube-system namespace. This default setting restricts Longhorn's ability to manage and access system resources. For more information, see xref:deploy/install/index.adoc#_root_and_privileged_permission[Root and Privileged Permission].

-For detailed instructions, see xref:important-notes/index.adoc#pod-security-policies-disabled--pod-security-admission-introduction[Pod Security Policies Disabled & Pod Security Admission Introduction] and the Talos documentation on https://www.talos.dev/v1.6/kubernetes-guides/configuration/pod-security/[Pod Security].
+For detailed instructions, see xref:important-notes/index.adoc#_pod_security_policies_disabled__pod_security_admission_introduction[Pod Security Policies Disabled & Pod Security Admission Introduction] and the Talos documentation on https://www.talos.dev/v1.6/kubernetes-guides/configuration/pod-security/[Pod Security].
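The `enforce: "privileged"` requirement in this hunk corresponds to the standard Pod Security Admission namespace labels. One way to satisfy it for Longhorn's namespace (a sketch, assuming the default `longhorn-system` namespace):

[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: longhorn-system
  labels:
    pod-security.kubernetes.io/enforce: privileged
----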
modules/en/pages/advanced-resources/support-managed-k8s-service/upgrade-k8s-on-eks.adoc (+1 -1)
@@ -6,4 +6,4 @@ In Longhorn, set `replica-replenishment-wait-interval` to `0`.

 See https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html[Updating a cluster] for instructions.

-NOTE: If you have created xref:advanced-resources/support-managed-k8s-service/manage-node-group-on-eks.adoc#create-additional-volume[addition disks] for Longhorn, you will need to manually add the path of the mounted disk into the disk list of the upgraded nodes.
+NOTE: If you have created xref:advanced-resources/support-managed-k8s-service/manage-node-group-on-eks.adoc#_create_additional_volume[addition disks] for Longhorn, you will need to manually add the path of the mounted disk into the disk list of the upgraded nodes.
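The hunk header's instruction to set `replica-replenishment-wait-interval` to `0` can also be applied from the CLI; a sketch against a default installation (verify the patch path against your Longhorn version):

[source,shell]
----
# Setting the interval to 0 triggers immediate replica rebuilding
# instead of waiting for the upgraded node to come back.
kubectl -n longhorn-system patch settings.longhorn.io \
  replica-replenishment-wait-interval --type merge -p '{"value": "0"}'
----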
modules/en/pages/advanced-resources/system-backup-restore/restore-longhorn-system.adoc (+6 -6)
@@ -19,7 +19,7 @@

 == Longhorn System Restore Rollouts

-* Longhorn restores the resource from the xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#longhorn-system-backup-bundle[Longhorn System Backup Bundle].
+* Longhorn restores the resource from the xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#_longhorn_system_backup_bundle[Longhorn System Backup Bundle].
 * Longhorn does not restore existing `Volumes` and their associated `PersistentVolume` and `PersistentVolumeClaim`.
 * Longhorn automatically restores a `Volume` from its latest backup.
 * To prevent overwriting eligible settings, Longhorn does not restore the `ConfigMap/longhorn-default-setting`.
@@ -41,7 +41,7 @@ You can restore the Longhorn system using Longhorn UI. Or with the `kubectl` com
 * Set up the `Nodes` and disk tags for `StorageClass`.
 * Have a Longhorn system backup.
 +
-See xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#create-longhorn-system-backup[Backup Longhorn System - Create Longhorn System Backup] for instructions.
+See xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#_create_longhorn_system_backup[Backup Longhorn System - Create Longhorn System Backup] for instructions.
modules/en/pages/advanced-resources/system-backup-restore/restore-to-a-cluster-contains-data-using-Rancher-snapshot.adoc (+1 -1)
@@ -8,7 +8,7 @@ This doc describes what users need to do after restoring the cluster with a Ranc

 * *Most of the data and the underlying disks still exist* in the cluster before the restore and can be directly reused then.
 * There is a backupstore holding all volume data.
-* The setting xref:references/settings.adoc#disable-revision-counter[`Disable Revision Counter`] is false. (It's false by default.) Otherwise, users need to manually check if the data among volume replicas are consistent, or directly restore volumes from backup.
+* The setting xref:references/settings.adoc#_disable_revision_counter[`Disable Revision Counter`] is false. (It's false by default.) Otherwise, users need to manually check if the data among volume replicas are consistent, or directly restore volumes from backup.
modules/en/pages/best-practices.adoc (+9 -9)
@@ -93,7 +93,7 @@ in particular, benefit from usage of specific kernel versions.

 * Optimizing or improving the filesystem: Use a kernel with version `v5.8` or later. See https://github.com/longhorn/longhorn/issues/2507#issuecomment-857195496[Issue
 #2507] for details.
-* Enabling the xref:references/settings.adoc#freeze-filesystem-for-snapshot[Freeze Filesystem for Snapshot] setting: Use a
+* Enabling the xref:references/settings.adoc#_freeze_filesystem_for_snapshot[Freeze Filesystem for Snapshot] setting: Use a
 kernel with version `5.17` or later to ensure that a volume crash during a filesystem freeze cannot lock up a node.
 * Enabling the xref:v2-data-engine/prerequisites.adoc[V2 Data Engine]: Use a kernel with version `5.19` or later to ensure
169

 Any extra disks must be written in the `/etc/fstab` file to allow automatic mounting after the machine reboots.

-Don't use a symbolic link for the extra disks. Use `mount --bind` instead of `ln -s` and make sure it's in the `fstab` file. For details, see xref:nodes-and-volumes/nodes/multidisk.adoc#use-an-alternative-path-for-a-disk-on-the-node[the section about multiple disk support.]
+Don't use a symbolic link for the extra disks. Use `mount --bind` instead of `ln -s` and make sure it's in the `fstab` file. For details, see xref:nodes-and-volumes/nodes/multidisk.adoc#_use_an_alternative_path_for_a_disk_on_the_node[the section about multiple disk support.]

 == Configuring Default Disks Before and After Installation

@@ -189,9 +189,9 @@ The following sections outline other recommendations for production environments

 === IO Performance

-* *Storage network*: Use a xref:advanced-resources/deploy/storage-network.adoc#setting-storage-network[dedicated storage network] to improve IO performance and stability.
-* *Longhorn disk*: Use a xref:nodes-and-volumes/nodes/multidisk.adoc#add-a-disk[dedicated disk] for Longhorn storage instead of using the root disk.
-* *Replica count*: Set the xref:references/settings.adoc#default-replica-count[default replica count] to "2" to achieve data availability with better disk space usage or less impact to system performance. This practice is especially beneficial to data-intensive applications.
+* *Storage network*: Use a xref:advanced-resources/deploy/storage-network.adoc#_setting_storage_network[dedicated storage network] to improve IO performance and stability.
+* *Longhorn disk*: Use a xref:nodes-and-volumes/nodes/multidisk.adoc#_add_a_disk[dedicated disk] for Longhorn storage instead of using the root disk.
+* *Replica count*: Set the xref:references/settings.adoc#_default_replica_count[default replica count] to "2" to achieve data availability with better disk space usage or less impact to system performance. This practice is especially beneficial to data-intensive applications.
 * *Storage tag*: Use xref:nodes-and-volumes/nodes/storage-tags.adoc[storage tags] to define storage tiering for data-intensive applications. For example, only high-performance disks can be used for storing performance-sensitive data.
 * *Data locality*: Use `best-effort` as the default xref:high-availability/data-locality.adoc[data locality] of Longhorn StorageClasses.
 +
@@ -203,15 +203,15 @@ For data-intensive applications, you can use pod scheduling functions such as no

 * *Recurring snapshots*: Periodically clean up system-generated snapshots and retain only the number of snapshots that makes sense for your implementation.
 +
-For applications with replication capability, periodically xref:concepts.adoc#243-deleting-snapshots[delete all types of snapshots].
+For applications with replication capability, periodically xref:concepts.adoc#_2_4_3_deleting_snapshots[delete all types of snapshots].

 * *Recurring filesystem trim*: Periodically xref:nodes-and-volumes/volumes/trim-filesystem.adoc[trim the filesystem] inside volumes to reclaim disk space.
 * *Snapshot space management*: xref:snapshots-and-backups/snapshot-space-management.adoc[Configure global and volume-specific settings] to prevent unexpected disk space exhaustion.
@@ -237,15 +237,15 @@ You can also set a specific milli CPU value for instance manager pods on a parti

 NOTE: This field will overwrite the above setting for the specified node.

-Refer to xref:references/settings.adoc#guaranteed-instance-manager-cpu[Guaranteed Instance Manager CPU] for more details.
+Refer to xref:references/settings.adoc#_guaranteed_instance_manager_cpu[Guaranteed Instance Manager CPU] for more details.

 === V2 Data Engine

 The `Guaranteed Instance Manager CPU for V2 Data Engine` setting allows you to reserve a specific number of millicpus on each node for each instance manager pod when the V2 Data Engine is enabled. By default, the Storage Performance Development Kit (SPDK) target daemon within each instance manager pod uses 1 CPU core. Configuring a minimum CPU usage value is essential for maintaining engine and replica stability, especially during periods of high node workload. The default value is 1250.

 == StorageClass

-We don't recommend modifying the default StorageClass named `longhorn`, since the change of parameters might cause issues during an upgrade later. If you want to change the parameters set in the StorageClass, you can create a new StorageClass by referring to the xref:references/examples.adoc#storageclass[StorageClass examples].
+We don't recommend modifying the default StorageClass named `longhorn`, since the change of parameters might cause issues during an upgrade later. If you want to change the parameters set in the StorageClass, you can create a new StorageClass by referring to the xref:references/examples.adoc#_storageclass[StorageClass examples].
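Instead of editing the default `longhorn` StorageClass, the recommendations in the hunks above (default replica count of "2", `best-effort` data locality) can be captured in a new class; a sketch, with the name illustrative:

[source,yaml]
----
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-best-practice   # illustrative name
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "2"          # recommended replica count from this page
  dataLocality: "best-effort"    # recommended data locality from this page
----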