Commit b816bb5

Merge pull request #56 from jhkrug/1.7.1

fix anchors

2 parents fb87314 + cf847ea


46 files changed (+111 -111)
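The new fragments throughout this commit match Asciidoctor's auto-generated section IDs (with the default `idprefix` and `idseparator` of `_`), replacing the old GitHub-style kebab-case anchors. A minimal Python sketch of that ID scheme, assuming Asciidoctor defaults — `asciidoctor_auto_id` is a hypothetical helper, not part of Longhorn or Asciidoctor:

```python
import re

def asciidoctor_auto_id(title, idprefix="_", idseparator="_"):
    """Approximate Asciidoctor's default auto-generated ID for a section title.

    Hypothetical helper: lowercases the title, drops invalid characters
    (e.g. '&', parentheses), maps each space/dot/hyphen to the separator,
    and prepends the idprefix.
    """
    s = title.lower()
    s = re.sub(r"[^ a-z0-9_.\-]", "", s)   # drop chars like '&', '(', ')'
    s = re.sub(r"[ .\-]", idseparator, s)  # each space/dot/hyphen -> "_"
    return idprefix + s.strip(idseparator)
```

For example, the section title `Backing Image Cleanup Wait Interval` yields `_backing_image_cleanup_wait_interval`, which is exactly the fragment the diffs below switch to.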

modules/en/pages/advanced-resources/backing-image/backing-image.adoc (+1 -1)

@@ -187,7 +187,7 @@ Since v1.3.0, users can download existing backing image files to the local via U
 [discrete]
 ==== Clean up backing images in disks

-* Longhorn automatically cleans up the unused backing image files in the disks based on xref:references/settings.adoc#backing-image-cleanup-wait-interval[the setting `Backing Image Cleanup Wait Interval`]. But Longhorn will retain at least one file in a disk for each backing image anyway.
+* Longhorn automatically cleans up the unused backing image files in the disks based on xref:references/settings.adoc#_backing_image_cleanup_wait_interval[the setting `Backing Image Cleanup Wait Interval`]. But Longhorn will retain at least one file in a disk for each backing image anyway.
 * You can manually remove backing images from disks using the Longhorn UI. Go to *Setting* > *Backing Image*, and then click the name of a specific backing image. In the window that opens, select one or more disks and then click *Clean Up*.
 * Once there is one replica in a disk using a backing image, no matter what the replica's current state is, the backing image file in this disk cannot be cleaned up.

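Seen side by side, the old and new fragments in hunks like the one above differ only by a leading `_` and `-` becoming `_`. A hypothetical one-off migration sketch in Python (all names assumed, not a tool from this repo; fragments derived from punctuated headings, such as `243-deleting-snapshots` becoming `_2_4_3_deleting_snapshots`, would still need manual fixes):

```python
import re

# Match the fragment of an AsciiDoc xref, e.g.
# xref:references/settings.adoc#disable-revision-counter[...]
# Requiring the fragment to start with [a-z0-9] skips already-converted
# "_..." fragments, so the rewrite is idempotent.
XREF_FRAGMENT = re.compile(r"(xref:[^\[#]+#)([a-z0-9][a-z0-9-]*)(\[)")

def rewrite_fragments(adoc_text):
    """Prefix '_' and map '-' to '_' in every old-style xref fragment."""
    def repl(m):
        return m.group(1) + "_" + m.group(2).replace("-", "_") + m.group(3)
    return XREF_FRAGMENT.sub(repl, adoc_text)
```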
modules/en/pages/advanced-resources/cluster-restore/rancher-cluster-restore.adoc (+1 -1)

@@ -8,7 +8,7 @@ This doc describes what users need to do after restoring the cluster with a Ranc

 * Most of the data and the underlying disks still exist in the cluster before the restore and can be directly reused then.
 * There is a backupstore holding all volume data.
-* The setting xref:references/settings.adoc#disable-revision-counter[`Disable Revision Counter`] is false. (It's false by default.) Otherwise, users need to manually check if the data among volume replicas are consistent, or directly restore volumes from backup.
+* The setting xref:references/settings.adoc#_disable_revision_counter[`Disable Revision Counter`] is false. (It's false by default.) Otherwise, users need to manually check if the data among volume replicas are consistent, or directly restore volumes from backup.

 == Expectation:

modules/en/pages/advanced-resources/data-cleanup/orphaned-data-cleanup.adoc (+1 -1)

@@ -159,4 +159,4 @@ Longhorn will not create an `orphan` resource for an orphaned directory when
 ** The volume volume.meta file is missing.
 * The orphaned replica directory is on an evicted node.
 * The orphaned replica directory is in an evicted disk.
-* The orphaned data cleanup mechanism does not clean up a stale replica, also known as an error replica. Instead, the stale replica is cleaned up according to the xref:nodes-and-volumes/volumes/create-volumes.adoc#creating-longhorn-volumes-with-kubectl[staleReplicaTimeout] setting.
+* The orphaned data cleanup mechanism does not clean up a stale replica, also known as an error replica. Instead, the stale replica is cleaned up according to the xref:nodes-and-volumes/volumes/create-volumes.adoc#_creating_longhorn_volumes_with_kubectl[staleReplicaTimeout] setting.

modules/en/pages/advanced-resources/deploy/storage-network.adoc (+4 -4)

@@ -43,7 +43,7 @@ If this is the case, provide a valid `NetworkAttachmentDefinition` and re-run Lo

 === Setting Storage Network After Longhorn Installation

-Set the setting xref:references/settings.adoc#storage-network[Storage Network].
+Set the setting xref:references/settings.adoc#_storage_network[Storage Network].

 [WARNING]
 ====

@@ -54,13 +54,13 @@ Longhorn is not aware of the updates. Hence this will cause malfunctioning and e

 === Setting Storage Network For RWX Volumes

-Configure the setting xref:references/settings.adoc#storage-network-for-rwx-volume-enabled[Storage Network For RWX Volume Enabled].
+Configure the setting xref:references/settings.adoc#_storage_network_for_rwx_volume_enabled[Storage Network For RWX Volume Enabled].

 = Limitation

-When an RWX volume is created with the storage network, the NFS mount point connection must be re-established when the CSI plugin pod restarts. Longhorn provides the xref:references/settings.adoc#automatically-delete-workload-pod-when-the-volume-is-detached-unexpectedly[Automatically Delete Workload Pod when The Volume Is Detached Unexpectedly] setting, which automatically deletes RWX volume workload pods when the CSI plugin pod restarts. However, the workload pod's NFS mount point could become unresponsive when the setting is disabled or the pod is not managed by a controller. In such cases, you must manually restart the CSI plugin pod.
+When an RWX volume is created with the storage network, the NFS mount point connection must be re-established when the CSI plugin pod restarts. Longhorn provides the xref:references/settings.adoc#_automatically_delete_workload_pod_when_the_volume_is_detached_unexpectedly[Automatically Delete Workload Pod when The Volume Is Detached Unexpectedly] setting, which automatically deletes RWX volume workload pods when the CSI plugin pod restarts. However, the workload pod's NFS mount point could become unresponsive when the setting is disabled or the pod is not managed by a controller. In such cases, you must manually restart the CSI plugin pod.

-For more information, see xref:important-notes/index.adoc#storage-network-support-for-read-write-many-rwx-volumes[Storage Network Support for Read-Write-Many (RWX) Volume] in Important Notes.
+For more information, see xref:important-notes/index.adoc#_storage_network_support_for_read_write_many_rwx_volumes[Storage Network Support for Read-Write-Many (RWX) Volume] in Important Notes.

 = History

modules/en/pages/advanced-resources/os-distro-specific/okd-support.adoc (+2 -2)

@@ -52,7 +52,7 @@ For more information, see xref:deploy/install/install-with-kubectl.adoc[Install

 == Prepare A Customized Default Longhorn Disk (Optional)

-To understand more about configuring the disks for Longhorn, please refer to the section xref:nodes-and-volumes/nodes/default-disk-and-node-config.adoc#launch-longhorn-with-multiple-disks[Configuring Defaults for Nodes and Disks]
+To understand more about configuring the disks for Longhorn, please refer to the section xref:nodes-and-volumes/nodes/default-disk-and-node-config.adoc#_launch_longhorn_with_multiple_disks[Configuring Defaults for Nodes and Disks]

 Longhorn will use the directory `/var/lib/longhorn` as default storage mount point and that means Longhorn use the root device as the default storage. If you don't want to use the root device as the Longhorn storage, set *_defaultSettings.createDefaultDiskLabeledNodes_* true when installing Longhorn by helm:

@@ -121,7 +121,7 @@ oc apply -f auto-mount-machineconfig.yaml

 ==== Label and Annotate The Node

-Please refer to the section xref:nodes-and-volumes/nodes/default-disk-and-node-config.adoc#customizing-default-disks-for-new-nodes[Customizing Default Disks for New Nodes] to label and annotate storage node on where your device is by oc commands:
+Please refer to the section xref:nodes-and-volumes/nodes/default-disk-and-node-config.adoc#_customizing_default_disks_for_new_nodes[Customizing Default Disks for New Nodes] to label and annotate storage node on where your device is by oc commands:

 [subs="+attributes",bash]
 ----

modules/en/pages/advanced-resources/os-distro-specific/talos-linux-support.adoc (+2 -2)

@@ -30,9 +30,9 @@ For detailed instructions, see the Talos documentation on https://www.talos.dev/

 Longhorn requires pod security `enforce: "privileged"`.

-By default, Talos Linux applies a `baseline` pod security profile across namespaces, except for the kube-system namespace. This default setting restricts Longhorn's ability to manage and access system resources. For more information, see xref:deploy/install/index.adoc#root-and-privileged-permission[Root and Privileged Permission].
+By default, Talos Linux applies a `baseline` pod security profile across namespaces, except for the kube-system namespace. This default setting restricts Longhorn's ability to manage and access system resources. For more information, see xref:deploy/install/index.adoc#_root_and_privileged_permission[Root and Privileged Permission].

-For detailed instructions, see xref:important-notes/index.adoc#pod-security-policies-disabled--pod-security-admission-introduction[Pod Security Policies Disabled & Pod Security Admission Introduction] and the Talos documentation on https://www.talos.dev/v1.6/kubernetes-guides/configuration/pod-security/[Pod Security].
+For detailed instructions, see xref:important-notes/index.adoc#_pod_security_policies_disabled__pod_security_admission_introduction[Pod Security Policies Disabled & Pod Security Admission Introduction] and the Talos documentation on https://www.talos.dev/v1.6/kubernetes-guides/configuration/pod-security/[Pod Security].

 === Data Path Mounts

modules/en/pages/advanced-resources/security/volume-encryption.adoc (+1 -1)

@@ -103,7 +103,7 @@ A newly-created PVC remains in the `Pending` state until the associated Secret i

 = Filesystem Expansion

-Longhorn supports xref:nodes-and-volumes/volumes/expansion.adoc#encrypted-volume[offline expansion] for encrypted volumes.
+Longhorn supports xref:nodes-and-volumes/volumes/expansion.adoc#_encrypted_volume[offline expansion] for encrypted volumes.

 = History

modules/en/pages/advanced-resources/support-managed-k8s-service/upgrade-k8s-on-eks.adoc (+1 -1)

@@ -6,4 +6,4 @@ In Longhorn, set `replica-replenishment-wait-interval` to `0`.

 See https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html[Updating a cluster] for instructions.

-NOTE: If you have created xref:advanced-resources/support-managed-k8s-service/manage-node-group-on-eks.adoc#create-additional-volume[addition disks] for Longhorn, you will need to manually add the path of the mounted disk into the disk list of the upgraded nodes.
+NOTE: If you have created xref:advanced-resources/support-managed-k8s-service/manage-node-group-on-eks.adoc#_create_additional_volume[addition disks] for Longhorn, you will need to manually add the path of the mounted disk into the disk list of the upgraded nodes.

modules/en/pages/advanced-resources/system-backup-restore/restore-longhorn-system.adoc (+6 -6)

@@ -19,7 +19,7 @@

 == Longhorn System Restore Rollouts

-* Longhorn restores the resource from the xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#longhorn-system-backup-bundle[Longhorn System Backup Bundle].
+* Longhorn restores the resource from the xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#_longhorn_system_backup_bundle[Longhorn System Backup Bundle].
 * Longhorn does not restore existing `Volumes` and their associated `PersistentVolume` and `PersistentVolumeClaim`.
 * Longhorn automatically restores a `Volume` from its latest backup.
 * To prevent overwriting eligible settings, Longhorn does not restore the `ConfigMap/longhorn-default-setting`.

@@ -41,7 +41,7 @@ You can restore the Longhorn system using Longhorn UI. Or with the `kubectl` com
 * Set up the `Nodes` and disk tags for `StorageClass`.
 * Have a Longhorn system backup.
 +
-See xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#create-longhorn-system-backup[Backup Longhorn System - Create Longhorn System Backup] for instructions.
+See xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#_create_longhorn_system_backup[Backup Longhorn System - Create Longhorn System Backup] for instructions.

 * All existing `Volumes` are detached.

@@ -119,10 +119,10 @@ systemrestore.longhorn.io "restore-demo" deleted

 Some settings are excluded as configurable before the Longhorn system restore.

-* xref:references/settings.adoc#concurrent-volume-backup-restore-per-node-limit[Concurrent volume backup restore per node limit]
-* xref:references/settings.adoc#concurrent-replica-rebuild-per-node-limit[Concurrent replica rebuild per node limit]
-* xref:references/settings.adoc#backup-target[Backup Target]
-* xref:references/settings.adoc#backup-target-credential-secret[Backup Target Credential Secret]
+* xref:references/settings.adoc#_concurrent_volume_backup_restore_per_node_limit[Concurrent volume backup restore per node limit]
+* xref:references/settings.adoc#_concurrent_replica_rebuild_per_node_limit[Concurrent replica rebuild per node limit]
+* xref:references/settings.adoc#_backup_target[Backup Target]
+* xref:references/settings.adoc#_backup_target_credential_secret[Backup Target Credential Secret]

 == Troubleshoot

modules/en/pages/advanced-resources/system-backup-restore/restore-to-a-cluster-contains-data-using-Rancher-snapshot.adoc (+1 -1)

@@ -8,7 +8,7 @@ This doc describes what users need to do after restoring the cluster with a Ranc

 * *Most of the data and the underlying disks still exist* in the cluster before the restore and can be directly reused then.
 * There is a backupstore holding all volume data.
-* The setting xref:references/settings.adoc#disable-revision-counter[`Disable Revision Counter`] is false. (It's false by default.) Otherwise, users need to manually check if the data among volume replicas are consistent, or directly restore volumes from backup.
+* The setting xref:references/settings.adoc#_disable_revision_counter[`Disable Revision Counter`] is false. (It's false by default.) Otherwise, users need to manually check if the data among volume replicas are consistent, or directly restore volumes from backup.

 == Expectation:

modules/en/pages/best-practices.adoc (+9 -9)

@@ -93,7 +93,7 @@ in particular, benefit from usage of specific kernel versions.

 * Optimizing or improving the filesystem: Use a kernel with version `v5.8` or later. See https://github.com/longhorn/longhorn/issues/2507#issuecomment-857195496[Issue
 #2507] for details.
-* Enabling the xref:references/settings.adoc#freeze-filesystem-for-snapshot[Freeze Filesystem for Snapshot] setting: Use a
+* Enabling the xref:references/settings.adoc#_freeze_filesystem_for_snapshot[Freeze Filesystem for Snapshot] setting: Use a
 kernel with version `5.17` or later to ensure that a volume crash during a filesystem freeze cannot lock up a node.
 * Enabling the xref:v2-data-engine/prerequisites.adoc[V2 Data Engine]: Use a kernel with version `5.19` or later to ensure

@@ -169,7 +169,7 @@ Since Longhorn doesn't currently support sharding between the different disks, w

 Any extra disks must be written in the `/etc/fstab` file to allow automatic mounting after the machine reboots.

-Don't use a symbolic link for the extra disks. Use `mount --bind` instead of `ln -s` and make sure it's in the `fstab` file. For details, see xref:nodes-and-volumes/nodes/multidisk.adoc#use-an-alternative-path-for-a-disk-on-the-node[the section about multiple disk support.]
+Don't use a symbolic link for the extra disks. Use `mount --bind` instead of `ln -s` and make sure it's in the `fstab` file. For details, see xref:nodes-and-volumes/nodes/multidisk.adoc#_use_an_alternative_path_for_a_disk_on_the_node[the section about multiple disk support.]

 == Configuring Default Disks Before and After Installation

@@ -189,9 +189,9 @@ The following sections outline other recommendations for production environments

 === IO Performance

-* *Storage network*: Use a xref:advanced-resources/deploy/storage-network.adoc#setting-storage-network[dedicated storage network] to improve IO performance and stability.
-* *Longhorn disk*: Use a xref:nodes-and-volumes/nodes/multidisk.adoc#add-a-disk[dedicated disk] for Longhorn storage instead of using the root disk.
-* *Replica count*: Set the xref:references/settings.adoc#default-replica-count[default replica count] to "2" to achieve data availability with better disk space usage or less impact to system performance. This practice is especially beneficial to data-intensive applications.
+* *Storage network*: Use a xref:advanced-resources/deploy/storage-network.adoc#_setting_storage_network[dedicated storage network] to improve IO performance and stability.
+* *Longhorn disk*: Use a xref:nodes-and-volumes/nodes/multidisk.adoc#_add_a_disk[dedicated disk] for Longhorn storage instead of using the root disk.
+* *Replica count*: Set the xref:references/settings.adoc#_default_replica_count[default replica count] to "2" to achieve data availability with better disk space usage or less impact to system performance. This practice is especially beneficial to data-intensive applications.
 * *Storage tag*: Use xref:nodes-and-volumes/nodes/storage-tags.adoc[storage tags] to define storage tiering for data-intensive applications. For example, only high-performance disks can be used for storing performance-sensitive data.
 * *Data locality*: Use `best-effort` as the default xref:high-availability/data-locality.adoc[data locality] of Longhorn StorageClasses.
 +

@@ -203,15 +203,15 @@ For data-intensive applications, you can use pod scheduling functions such as no

 * *Recurring snapshots*: Periodically clean up system-generated snapshots and retain only the number of snapshots that makes sense for your implementation.
 +
-For applications with replication capability, periodically xref:concepts.adoc#243-deleting-snapshots[delete all types of snapshots].
+For applications with replication capability, periodically xref:concepts.adoc#_2_4_3_deleting_snapshots[delete all types of snapshots].

 * *Recurring filesystem trim*: Periodically xref:nodes-and-volumes/volumes/trim-filesystem.adoc[trim the filesystem] inside volumes to reclaim disk space.
 * *Snapshot space management*: xref:snapshots-and-backups/snapshot-space-management.adoc[Configure global and volume-specific settings] to prevent unexpected disk space exhaustion.

 === Disaster Recovery

 * *Recurring backups*: Create xref:snapshots-and-backups/scheduling-backups-and-snapshots.adoc[recurring backup jobs] for mission-critical application volumes.
-* *System backup*: Create periodic xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#create-longhorn-system-backup[system backups].
+* *System backup*: Create periodic xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#_create_longhorn_system_backup[system backups].

 == Deploying Workloads

@@ -237,15 +237,15 @@ You can also set a specific milli CPU value for instance manager pods on a parti

 NOTE: This field will overwrite the above setting for the specified node.

-Refer to xref:references/settings.adoc#guaranteed-instance-manager-cpu[Guaranteed Instance Manager CPU] for more details.
+Refer to xref:references/settings.adoc#_guaranteed_instance_manager_cpu[Guaranteed Instance Manager CPU] for more details.

 === V2 Data Engine

 The `Guaranteed Instance Manager CPU for V2 Data Engine` setting allows you to reserve a specific number of millicpus on each node for each instance manager pod when the V2 Data Engine is enabled. By default, the Storage Performance Development Kit (SPDK) target daemon within each instance manager pod uses 1 CPU core. Configuring a minimum CPU usage value is essential for maintaining engine and replica stability, especially during periods of high node workload. The default value is 1250.

 == StorageClass

-We don't recommend modifying the default StorageClass named `longhorn`, since the change of parameters might cause issues during an upgrade later. If you want to change the parameters set in the StorageClass, you can create a new StorageClass by referring to the xref:references/examples.adoc#storageclass[StorageClass examples].
+We don't recommend modifying the default StorageClass named `longhorn`, since the change of parameters might cause issues during an upgrade later. If you want to change the parameters set in the StorageClass, you can create a new StorageClass by referring to the xref:references/examples.adoc#_storageclass[StorageClass examples].

 == Scheduling Settings

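To catch stale fragments like the ones this commit fixes before they ship, one could recompute the auto-generated IDs from a page's section titles and flag xref fragments that resolve to none of them. A single-page Python sketch under assumed Asciidoctor defaults (explicit `[[id]]` anchors and cross-page targets are ignored for brevity; all names are hypothetical):

```python
import re

# AsciiDoc section titles ("= Title", "== Title", ...) and xref fragments.
HEADING = re.compile(r"^=+ +(.+?)\s*$", re.M)
FRAGMENT = re.compile(r"xref:[^\[#]*#([^\[\]]+)\[")

def auto_ids(page_text):
    """Auto-generated IDs for every section title, Asciidoctor defaults assumed."""
    ids = set()
    for title in HEADING.findall(page_text):
        t = re.sub(r"[^ a-z0-9_.\-]", "", title.lower())
        ids.add("_" + re.sub(r"[ .\-]", "_", t).strip("_"))
    return ids

def broken_fragments(page_text):
    """Fragments in same-page xrefs that match no section's auto ID."""
    ids = auto_ids(page_text)
    return [f for f in FRAGMENT.findall(page_text) if f not in ids]
```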