This diff updates AsciiDoc cross-references across the Longhorn docs, replacing kebab-case anchors with the auto-generated underscore-prefixed section IDs (for example, `#disable-revision-counter` becomes `#_disable_revision_counter`).

File: modules/en/pages/advanced-resources/backing-image/backing-image.adoc (+1 -1)

@@ -159,7 +159,7 @@ Since v1.3.0, users can download existing backing image files to the local via U
 [discrete]
 ==== Clean up backing images in disks
 
-* Longhorn automatically cleans up the unused backing image files in the disks based on xref:references/settings.adoc#backing-image-cleanup-wait-interval[the setting `Backing Image Cleanup Wait Interval`]. But Longhorn will retain at least one file in a disk for each backing image anyway.
+* Longhorn automatically cleans up the unused backing image files in the disks based on xref:references/settings.adoc#_backing_image_cleanup_wait_interval[the setting `Backing Image Cleanup Wait Interval`]. But Longhorn will retain at least one file in a disk for each backing image anyway.
 * You can manually remove backing images from disks using the Longhorn UI. Go to *Setting* > *Backing Image*, and then click the name of a specific backing image. In the window that opens, select one or more disks and then click *Clean Up*.
 * Once there is one replica in a disk using a backing image, no matter what the replica's current state is, the backing image file in this disk cannot be cleaned up.
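
For context on the setting this hunk re-anchors: a minimal sketch of inspecting and tuning it with kubectl, assuming Longhorn exposes its settings as `settings.longhorn.io` resources in the `longhorn-system` namespace and that the setting ID matches the old kebab-case anchor; the interval value is illustrative.

    # Show the current wait interval (in minutes) before unused
    # backing image files are removed from a disk.
    kubectl -n longhorn-system get settings.longhorn.io \
      backing-image-cleanup-wait-interval

    # Lengthen the interval; Longhorn still retains at least one
    # file per backing image regardless of this value.
    kubectl -n longhorn-system patch settings.longhorn.io \
      backing-image-cleanup-wait-interval \
      --type merge -p '{"value": "120"}'
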
File: modules/en/pages/advanced-resources/cluster-restore/rancher-cluster-restore.adoc (+1 -1)

@@ -8,7 +8,7 @@ This doc describes what users need to do after restoring the cluster with a Ranc
 
 * Most of the data and the underlying disks still exist in the cluster before the restore and can be directly reused then.
 * There is a backupstore holding all volume data.
-* The setting xref:references/settings.adoc#disable-revision-counter[`Disable Revision Counter`] is false. (It's false by default.) Otherwise, users need to manually check if the data among volume replicas are consistent, or directly restore volumes from backup.
+* The setting xref:references/settings.adoc#_disable_revision_counter[`Disable Revision Counter`] is false. (It's false by default.) Otherwise, users need to manually check if the data among volume replicas are consistent, or directly restore volumes from backup.
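
To verify the precondition this bullet re-anchors, the setting can be read directly; a minimal sketch, assuming the setting ID matches the old kebab-case anchor:

    # Expect "false" (the default). If "true", check replica data
    # consistency manually or restore the volumes from backup instead.
    kubectl -n longhorn-system get settings.longhorn.io \
      disable-revision-counter -o jsonpath='{.value}'
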
File: modules/en/pages/advanced-resources/data-cleanup/orphaned-data-cleanup.adoc (+1 -1)

@@ -159,4 +159,4 @@ Longhorn will not create an `orphan` resource for an orphaned directory when
 ** The volume volume.meta file is missing.
 * The orphaned replica directory is on an evicted node.
 * The orphaned replica directory is in an evicted disk.
-* The orphaned data cleanup mechanism does not clean up a stale replica, also known as an error replica. Instead, the stale replica is cleaned up according to the xref:nodes-and-volumes/volumes/create-volumes.adoc#creating-longhorn-volumes-with-kubectl[staleReplicaTimeout] setting.
+* The orphaned data cleanup mechanism does not clean up a stale replica, also known as an error replica. Instead, the stale replica is cleaned up according to the xref:nodes-and-volumes/volumes/create-volumes.adoc#_creating_longhorn_volumes_with_kubectl[staleReplicaTimeout] setting.
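
For reference, `staleReplicaTimeout` is supplied at provisioning time, typically as a StorageClass parameter. A minimal sketch, assuming the standard Longhorn CSI provisioner name; the class name and values are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: longhorn-stale-demo        # hypothetical class name
    provisioner: driver.longhorn.io
    parameters:
      numberOfReplicas: "3"
      staleReplicaTimeout: "30"        # minutes before an errored replica is deleted
    EOF
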
File: modules/en/pages/advanced-resources/os-distro-specific/okd-support.adoc (+2 -2)

@@ -28,7 +28,7 @@ And then install Longhorn with setting *_openshift.enabled_* true:
 
 == Prepare A Customized Default Longhorn Disk (Optional)
 
-To understand more about configuring the disks for Longhorn, please refer to the section xref:nodes-and-volumes/nodes/default-disk-and-node-config.adoc#launch-longhorn-with-multiple-disks[Configuring Defaults for Nodes and Disks]
+To understand more about configuring the disks for Longhorn, please refer to the section xref:nodes-and-volumes/nodes/default-disk-and-node-config.adoc#_launch_longhorn_with_multiple_disks[Configuring Defaults for Nodes and Disks]
 
 Longhorn will use the directory `/var/lib/longhorn` as default storage mount point and that means Longhorn use the root device as the default storage. If you don't want to use the root device as the Longhorn storage, set *_defaultSettings.createDefaultDiskLabeledNodes_* true when installing Longhorn by helm:

@@ … @@
-Please refer to the section xref:nodes-and-volumes/nodes/default-disk-and-node-config.adoc#customizing-default-disks-for-new-nodes[Customizing Default Disks for New Nodes] to label and annotate storage node on where your device is by oc commands:
+Please refer to the section xref:nodes-and-volumes/nodes/default-disk-and-node-config.adoc#_customizing_default_disks_for_new_nodes[Customizing Default Disks for New Nodes] to label and annotate storage node on where your device is by oc commands:
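
The surrounding steps pair a Helm flag with node labeling. A minimal sketch, assuming the `node.longhorn.io/create-default-disk` label and `node.longhorn.io/default-disks-config` annotation documented for customized default disks; the node name and mount path are illustrative:

    # Install with OpenShift support and label-driven default disks.
    helm install longhorn longhorn/longhorn \
      --namespace longhorn-system --create-namespace \
      --set openshift.enabled=true \
      --set defaultSettings.createDefaultDiskLabeledNodes=true

    # Point the default disk at the dedicated device's mount point
    # instead of /var/lib/longhorn on the root device.
    oc label node storage-node-1 node.longhorn.io/create-default-disk=config
    oc annotate node storage-node-1 \
      node.longhorn.io/default-disks-config='[{"path":"/mnt/longhorn","allowScheduling":true}]'
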
File: modules/en/pages/advanced-resources/os-distro-specific/talos-linux-support.adoc (+2 -2)

@@ -30,9 +30,9 @@ For detailed instructions, see the Talos documentation on https://www.talos.dev/
 
 Longhorn requires pod security `enforce: "privileged"`.
 
-By default, Talos Linux applies a `baseline` pod security profile across namespaces, except for the kube-system namespace. This default setting restricts Longhorn's ability to manage and access system resources. For more information, see xref:deploy/install/index.adoc#root-and-privileged-permission[Root and Privileged Permission].
+By default, Talos Linux applies a `baseline` pod security profile across namespaces, except for the kube-system namespace. This default setting restricts Longhorn's ability to manage and access system resources. For more information, see xref:deploy/install/index.adoc#_root_and_privileged_permission[Root and Privileged Permission].
 
-For detailed instructions, see xref:deploy/important-notes/index.adoc#pod-security-policies-disabled--pod-security-admission-introduction[Pod Security Policies Disabled & Pod Security Admission Introduction] and Talos' documentation on https://www.talos.dev/v1.6/kubernetes-guides/configuration/pod-security/[Pod Security].
+For detailed instructions, see xref:deploy/important-notes/index.adoc#_pod_security_policies_disabled__pod_security_admission_introduction[Pod Security Policies Disabled & Pod Security Admission Introduction] and Talos' documentation on https://www.talos.dev/v1.6/kubernetes-guides/configuration/pod-security/[Pod Security].
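
On the Kubernetes side, the `enforce: "privileged"` requirement is a standard Pod Security Admission namespace label; a minimal sketch:

    # Allow Longhorn's privileged pods in its namespace, which Talos
    # otherwise holds to the baseline profile.
    kubectl label namespace longhorn-system \
      pod-security.kubernetes.io/enforce=privileged
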
File: modules/en/pages/advanced-resources/support-managed-k8s-service/upgrade-k8s-on-eks.adoc (+1 -1)

@@ -6,4 +6,4 @@ In Longhorn, set `replica-replenishment-wait-interval` to `0`.
 
 See https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html[Updating a cluster] for instructions.
 
-NOTE: If you have created xref:advanced-resources/support-managed-k8s-service/manage-node-group-on-eks.adoc#create-additional-volume[addition disks] for Longhorn, you will need to manually add the path of the mounted disk into the disk list of the upgraded nodes.
+NOTE: If you have created xref:advanced-resources/support-managed-k8s-service/manage-node-group-on-eks.adoc#_create_additional_volume[addition disks] for Longhorn, you will need to manually add the path of the mounted disk into the disk list of the upgraded nodes.
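
The hunk header's first context line is the page's key instruction; as a concrete form of it, a minimal sketch assuming the `settings.longhorn.io` resource style used elsewhere in these docs:

    # Rebuild replicas immediately during the node replacement instead
    # of waiting for the old node to come back.
    kubectl -n longhorn-system patch settings.longhorn.io \
      replica-replenishment-wait-interval \
      --type merge -p '{"value": "0"}'
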
File: modules/en/pages/advanced-resources/system-backup-restore/backup-longhorn-system.adoc (+1 -1)

@@ -38,7 +38,7 @@ It includes below resources associating with the Longhorn system:
 * StorageClasses
 * Volumes
 
-WARNING: Longhorn does not backup `BackingImages`. We will improve this part in the future. See xref:advanced-resources/system-backup-restore/restore-longhorn-system.adoc#prerequisite[Restore Longhorn System - Prerequisite] for restoring volumes created with the backing image.
+WARNING: Longhorn does not backup `BackingImages`. We will improve this part in the future. See xref:advanced-resources/system-backup-restore/restore-longhorn-system.adoc#_prerequisite[Restore Longhorn System - Prerequisite] for restoring volumes created with the backing image.
 
 NOTE: Longhorn does not backup `Nodes`. The Longhorn manager on the target cluster is responsible for creating its own Longhorn `Node` custom resources.
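
For orientation, a system backup like the one this page covers can be created from a custom resource as well as from the UI. A minimal sketch, assuming a `SystemBackup` kind in the `longhorn.io/v1beta2` API; the name and policy value are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: longhorn.io/v1beta2
    kind: SystemBackup
    metadata:
      name: demo-system-backup
      namespace: longhorn-system
    spec:
      # Back up volumes that do not already have a backup.
      volumeBackupPolicy: if-not-present
    EOF
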
File: modules/en/pages/advanced-resources/system-backup-restore/restore-longhorn-system.adoc (+6 -6)

@@ -19,7 +19,7 @@
 
 == Longhorn System Restore Rollouts
 
-* Longhorn restores the resource from the xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#longhorn-system-backup-bundle[Longhorn System Backup Bundle].
+* Longhorn restores the resource from the xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#_longhorn_system_backup_bundle[Longhorn System Backup Bundle].
 * Longhorn does not restore existing `Volumes` and their associated `PersistentVolume` and `PersistentVolumeClaim`.
 * Longhorn automatically restores a `Volume` from its latest backup.
 * To prevent overwriting eligible settings, Longhorn does not restore the `ConfigMap/longhorn-default-setting`.

@@ -41,7 +41,7 @@ You can restore the Longhorn system using Longhorn UI. Or with the `kubectl` command.
 * Set up the `Nodes` and disk tags for `StorageClass`.
 * Have a Longhorn system backup.
 +
-See xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#create-longhorn-system-backup[Backup Longhorn System - Create Longhorn System Backup] for instructions.
+See xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#_create_longhorn_system_backup[Backup Longhorn System - Create Longhorn System Backup] for instructions.
 
 * Have volume `BackingImages` available in the cluster.
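
The kubectl path mentioned in the second hunk amounts to creating a restore resource that names an existing system backup. A minimal sketch, assuming a `SystemRestore` kind in the `longhorn.io/v1beta2` API; both names are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: longhorn.io/v1beta2
    kind: SystemRestore
    metadata:
      name: demo-system-restore
      namespace: longhorn-system
    spec:
      systemBackup: demo-system-backup   # an existing system backup
    EOF
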
File: modules/en/pages/advanced-resources/system-backup-restore/restore-to-a-cluster-contains-data-using-Rancher-snapshot.adoc (+1 -1)

@@ -8,7 +8,7 @@ This doc describes what users need to do after restoring the cluster with a Ranc
 
 * *Most of the data and the underlying disks still exist* in the cluster before the restore and can be directly reused then.
 * There is a backupstore holding all volume data.
-* The setting xref:references/settings.adoc#disable-revision-counter[`Disable Revision Counter`] is false. (It's false by default.) Otherwise, users need to manually check if the data among volume replicas are consistent, or directly restore volumes from backup.
+* The setting xref:references/settings.adoc#_disable_revision_counter[`Disable Revision Counter`] is false. (It's false by default.) Otherwise, users need to manually check if the data among volume replicas are consistent, or directly restore volumes from backup.
File: modules/en/pages/best-practices.adoc (+8 -8)

@@ -158,7 +158,7 @@ Since Longhorn doesn't currently support sharding between the different disks, w
 
 Any extra disks must be written in the `/etc/fstab` file to allow automatic mounting after the machine reboots.
 
-Don't use a symbolic link for the extra disks. Use `mount --bind` instead of `ln -s` and make sure it's in the `fstab` file. For details, see xref:nodes-and-volumes/nodes/multidisk.adoc#use-an-alternative-path-for-a-disk-on-the-node[the section about multiple disk support.]
+Don't use a symbolic link for the extra disks. Use `mount --bind` instead of `ln -s` and make sure it's in the `fstab` file. For details, see xref:nodes-and-volumes/nodes/multidisk.adoc#_use_an_alternative_path_for_a_disk_on_the_node[the section about multiple disk support.]
 
 == Configuring Default Disks Before and After Installation
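
The bind-mount advice in this hunk translates to fstab entries rather than a symlink; a minimal sketch, with illustrative device and paths:

    # /etc/fstab -- mount the dedicated disk, then bind-mount it into
    # the path configured in Longhorn (no `ln -s`).
    /dev/sdb1        /mnt/disk1           ext4  defaults  0 2
    /mnt/disk1       /var/lib/longhorn    none  bind      0 0
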
@@ -178,9 +178,9 @@ The following sections outline other recommendations for production environments
 
 === IO Performance
 
-* *Storage network*: Use a xref:advanced-resources/deploy/storage-network.adoc#setting-storage-network[dedicated storage network] to improve IO performance and stability.
-* *Longhorn disk*: Use a xref:nodes-and-volumes/nodes/multidisk.adoc#add-a-disk[dedicated disk] for Longhorn storage instead of using the root disk.
-* *Replica count*: Set the xref:references/settings.adoc#default-replica-count[default replica count] to "2" to achieve data availability with better disk space usage or less impact to system performance. This practice is especially beneficial to data-intensive applications.
+* *Storage network*: Use a xref:advanced-resources/deploy/storage-network.adoc#_setting_storage_network[dedicated storage network] to improve IO performance and stability.
+* *Longhorn disk*: Use a xref:nodes-and-volumes/nodes/multidisk.adoc#_add_a_disk[dedicated disk] for Longhorn storage instead of using the root disk.
+* *Replica count*: Set the xref:references/settings.adoc#_default_replica_count[default replica count] to "2" to achieve data availability with better disk space usage or less impact to system performance. This practice is especially beneficial to data-intensive applications.
 * *Storage tag*: Use xref:nodes-and-volumes/nodes/storage-tags.adoc[storage tags] to define storage tiering for data-intensive applications. For example, only high-performance disks can be used for storing performance-sensitive data.
 * *Data locality*: Use `best-effort` as the default xref:high-availability/data-locality.adoc[data locality] of Longhorn StorageClasses.
 +

@@ -192,15 +192,15 @@ For data-intensive applications, you can use pod scheduling functions such as no
 
 * *Recurring snapshots*: Periodically clean up system-generated snapshots and retain only the number of snapshots that makes sense for your implementation.
 +
-For applications with replication capability, periodically xref:concepts.adoc#243-deleting-snapshots[delete all types of snapshots].
+For applications with replication capability, periodically xref:concepts.adoc#_2_4_3_deleting_snapshots[delete all types of snapshots].
 
 * *Recurring filesystem trim*: Periodically xref:nodes-and-volumes/volumes/trim-filesystem.adoc[trim the filesystem] inside volumes to reclaim disk space.
 * *Snapshot space management*: xref:snapshots-and-backups/snapshot-space-management.adoc[Configure global and volume-specific settings] to prevent unexpected disk space exhaustion.

@@ -226,15 +226,15 @@ You can also set a specific milli CPU value for instance manager pods on a parti
 
 NOTE: This field will overwrite the above setting for the specified node.
 
-Refer to xref:references/settings.adoc#guaranteed-instance-manager-cpu[Guaranteed Instance Manager CPU] for more details.
+Refer to xref:references/settings.adoc#_guaranteed_instance_manager_cpu[Guaranteed Instance Manager CPU] for more details.
 
 === V2 Data Engine
 
 The `Guaranteed Instance Manager CPU for V2 Data Engine` setting allows you to reserve a specific number of millicpus on each node for each instance manager pod when the V2 Data Engine is enabled. By default, the Storage Performance Development Kit (SPDK) target daemon within each instance manager pod uses 1 CPU core. Configuring a minimum CPU usage value is essential for maintaining engine and replica stability, especially during periods of high node workload. The default value is 1250.
 
 == StorageClass
 
-We don't recommend modifying the default StorageClass named `longhorn`, since the change of parameters might cause issues during an upgrade later. If you want to change the parameters set in the StorageClass, you can create a new StorageClass by referring to the xref:references/examples.adoc#storageclass[StorageClass examples].
+We don't recommend modifying the default StorageClass named `longhorn`, since the change of parameters might cause issues during an upgrade later. If you want to change the parameters set in the StorageClass, you can create a new StorageClass by referring to the xref:references/examples.adoc#_storageclass[StorageClass examples].
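
Because the last hunk advises creating a new StorageClass instead of editing the default `longhorn` one, here is a minimal sketch that also applies the replica-count and data-locality recommendations above; the class name is illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: longhorn-prod              # leaves the default `longhorn` class untouched
    provisioner: driver.longhorn.io
    allowVolumeExpansion: true
    parameters:
      numberOfReplicas: "2"            # recommended default above
      dataLocality: "best-effort"      # recommended default above
    EOF
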
File: modules/en/pages/concepts.adoc (+2 -2)

@@ -9,7 +9,7 @@ The storage controller and replicas are themselves orchestrated using Kubernetes
 
 For an overview of Longhorn features, refer to xref:what-is-longhorn.adoc[this section.]
 
-For the installation requirements, go to xref:deploy/install/index.adoc#installation-requirements[this section.]
+For the installation requirements, go to xref:deploy/install/index.adoc#_installation_requirements[this section.]
 
 ____
 This section assumes familiarity with Kubernetes persistent storage concepts. For more information on these concepts, refer to the <<appendix-how-persistent-storage-works-in-kubernetes,appendix.>> For help with the terminology used in this page, refer to xref:terminology.adoc[this section.]

@@ -142,7 +142,7 @@ Each replica contains a chain of snapshots of a Longhorn volume. A snapshot is l
 
 For each Longhorn volume, multiple replicas of the volume should run in the Kubernetes cluster, each on a separate node. All replicas are treated the same, and the Longhorn Engine always runs on the same node as the pod, which is also the consumer of the volume. In that way, we make sure that even if the Pod is down, the Engine can be moved to another Pod and your service will continue undisrupted.
 
-The default replica count can be changed in the xref:references/settings.adoc#default-replica-count[settings.] When a volume is attached, the replica count for the volume can be changed in the UI.
+The default replica count can be changed in the xref:references/settings.adoc#_default_replica_count[settings.] When a volume is attached, the replica count for the volume can be changed in the UI.
 
 If the current healthy replica count is less than specified replica count, Longhorn will start rebuilding new replicas.
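
As a concrete form of the replica-count hunk: the default comes from a setting, while an attached volume's own count lives on its Volume resource. A minimal sketch, assuming the `settings.longhorn.io` and `volumes.longhorn.io` resources used elsewhere in these docs; the volume name is hypothetical:

    # Change the cluster-wide default applied to new volumes.
    kubectl -n longhorn-system patch settings.longhorn.io \
      default-replica-count --type merge -p '{"value": "2"}'

    # Change the replica count of one existing volume.
    kubectl -n longhorn-system patch volumes.longhorn.io demo-vol \
      --type merge -p '{"spec": {"numberOfReplicas": 3}}'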
0 commit comments