
Commit 35a9ce9

Merge pull request #61 from jhkrug/1.6.0
fix anchors
2 parents (ac466a8 + 36127ee) · commit 35a9ce9

Some content is hidden: large commits have part of the diff collapsed by default, so not all 44 changed files are shown below.

44 files changed: 108 additions and 108 deletions
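
Every change in this commit follows the same pattern: an explicit kebab-case anchor in an `xref:` target is replaced by the underscore-style ID that Asciidoctor auto-generates from the section title (the default `idprefix` and `idseparator` are both `_`). A minimal before/after sketch, adapted from the first file below:

[source,asciidoc]
----
// old explicit-style anchor, no longer resolving
xref:references/settings.adoc#backing-image-cleanup-wait-interval[Backing Image Cleanup Wait Interval]

// new anchor, matching the auto-generated section ID
xref:references/settings.adoc#_backing_image_cleanup_wait_interval[Backing Image Cleanup Wait Interval]
----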

modules/en/pages/advanced-resources/backing-image/backing-image.adoc

Lines changed: 1 addition & 1 deletion

@@ -159,7 +159,7 @@ Since v1.3.0, users can download existing backing image files to the local via U
 [discrete]
 ==== Clean up backing images in disks

-* Longhorn automatically cleans up the unused backing image files in the disks based on xref:references/settings.adoc#backing-image-cleanup-wait-interval[the setting `Backing Image Cleanup Wait Interval`]. But Longhorn will retain at least one file in a disk for each backing image anyway.
+* Longhorn automatically cleans up the unused backing image files in the disks based on xref:references/settings.adoc#_backing_image_cleanup_wait_interval[the setting `Backing Image Cleanup Wait Interval`]. But Longhorn will retain at least one file in a disk for each backing image anyway.
 * The unused backing images can be also cleaned up manually via the Longhorn UI: Click menu:Setting[Backing Image > Operation list of one backing image > Clean Up]. Then choose disks.
 * Once there is one replica in a disk using a backing image, no matter what the replica's current state is, the backing image file in this disk cannot be cleaned up.

modules/en/pages/advanced-resources/cluster-restore/rancher-cluster-restore.adoc

Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@ This doc describes what users need to do after restoring the cluster with a Ranc

 * Most of the data and the underlying disks still exist in the cluster before the restore and can be directly reused then.
 * There is a backupstore holding all volume data.
-* The setting xref:references/settings.adoc#disable-revision-counter[`Disable Revision Counter`] is false. (It's false by default.) Otherwise, users need to manually check if the data among volume replicas are consistent, or directly restore volumes from backup.
+* The setting xref:references/settings.adoc#_disable_revision_counter[`Disable Revision Counter`] is false. (It's false by default.) Otherwise, users need to manually check if the data among volume replicas are consistent, or directly restore volumes from backup.

 == Expectation:

modules/en/pages/advanced-resources/data-cleanup/orphaned-data-cleanup.adoc

Lines changed: 1 addition & 1 deletion

@@ -159,4 +159,4 @@ Longhorn will not create an `orphan` resource for an orphaned directory when
 ** The volume volume.meta file is missing.
 * The orphaned replica directory is on an evicted node.
 * The orphaned replica directory is in an evicted disk.
-* The orphaned data cleanup mechanism does not clean up a stale replica, also known as an error replica. Instead, the stale replica is cleaned up according to the xref:nodes-and-volumes/volumes/create-volumes.adoc#creating-longhorn-volumes-with-kubectl[staleReplicaTimeout] setting.
+* The orphaned data cleanup mechanism does not clean up a stale replica, also known as an error replica. Instead, the stale replica is cleaned up according to the xref:nodes-and-volumes/volumes/create-volumes.adoc#_creating_longhorn_volumes_with_kubectl[staleReplicaTimeout] setting.

modules/en/pages/advanced-resources/deploy/storage-network.adoc

Lines changed: 1 addition & 1 deletion

@@ -43,7 +43,7 @@ If this is the case, provide a valid `NetworkAttachmentDefinition` and re-run Lo

 === Setting Storage Network After Longhorn Installation

-Set the setting xref:references/settings.adoc#storage-network[Storage Network].
+Set the setting xref:references/settings.adoc#_storage_network[Storage Network].

 [WARNING]
 ====

modules/en/pages/advanced-resources/os-distro-specific/okd-support.adoc

Lines changed: 2 additions & 2 deletions

@@ -28,7 +28,7 @@ And then install Longhorn with setting *_openshift.enabled_* true:

 == Prepare A Customized Default Longhorn Disk (Optional)

-To understand more about configuring the disks for Longhorn, please refer to the section xref:nodes-and-volumes/nodes/default-disk-and-node-config.adoc#launch-longhorn-with-multiple-disks[Configuring Defaults for Nodes and Disks]
+To understand more about configuring the disks for Longhorn, please refer to the section xref:nodes-and-volumes/nodes/default-disk-and-node-config.adoc#_launch_longhorn_with_multiple_disks[Configuring Defaults for Nodes and Disks]

 Longhorn will use the directory `/var/lib/longhorn` as default storage mount point and that means Longhorn use the root device as the default storage. If you don't want to use the root device as the Longhorn storage, set *_defaultSettings.createDefaultDiskLabeledNodes_* true when installing Longhorn by helm:

@@ -97,7 +97,7 @@ oc apply -f auto-mount-machineconfig.yaml

 ==== Label and Annotate The Node

-Please refer to the section xref:nodes-and-volumes/nodes/default-disk-and-node-config.adoc#customizing-default-disks-for-new-nodes[Customizing Default Disks for New Nodes] to label and annotate storage node on where your device is by oc commands:
+Please refer to the section xref:nodes-and-volumes/nodes/default-disk-and-node-config.adoc#_customizing_default_disks_for_new_nodes[Customizing Default Disks for New Nodes] to label and annotate storage node on where your device is by oc commands:

 [subs="+attributes",bash]
 ----

modules/en/pages/advanced-resources/os-distro-specific/talos-linux-support.adoc

Lines changed: 2 additions & 2 deletions

@@ -30,9 +30,9 @@ For detailed instructions, see the Talos documentation on https://www.talos.dev/

 Longhorn requires pod security `enforce: "privileged"`.

-By default, Talos Linux applies a `baseline` pod security profile across namespaces, except for the kube-system namespace. This default setting restricts Longhorn's ability to manage and access system resources. For more information, see xref:deploy/install/index.adoc#root-and-privileged-permission[Root and Privileged Permission].
+By default, Talos Linux applies a `baseline` pod security profile across namespaces, except for the kube-system namespace. This default setting restricts Longhorn's ability to manage and access system resources. For more information, see xref:deploy/install/index.adoc#_root_and_privileged_permission[Root and Privileged Permission].

-For detailed instructions, see xref:deploy/important-notes/index.adoc#pod-security-policies-disabled--pod-security-admission-introduction[Pod Security Policies Disabled & Pod Security Admission Introduction] and Talos' documentation on https://www.talos.dev/v1.6/kubernetes-guides/configuration/pod-security/[Pod Security].
+For detailed instructions, see xref:deploy/important-notes/index.adoc#_pod_security_policies_disabled__pod_security_admission_introduction[Pod Security Policies Disabled & Pod Security Admission Introduction] and Talos' documentation on https://www.talos.dev/v1.6/kubernetes-guides/configuration/pod-security/[Pod Security].

 === Data Path Mounts
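
The renamed anchors are consistent with how Asciidoctor appears to derive section IDs by default: in these diffs the title is lowercased and prefixed with `_`, spaces and dots become `_`, and punctuation such as `&` is dropped (which is why a double underscore appears where " & " used to be). A sketch using the heading referenced in the hunk above (the heading level is assumed):

[source,asciidoc]
----
== Pod Security Policies Disabled & Pod Security Admission Introduction

// auto-generated ID with Asciidoctor defaults:
//   _pod_security_policies_disabled__pod_security_admission_introduction
// anchor used by the xref before this commit:
//   pod-security-policies-disabled--pod-security-admission-introduction
----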

modules/en/pages/advanced-resources/security/volume-encryption.adoc

Lines changed: 1 addition & 1 deletion

@@ -102,7 +102,7 @@ After a PVC is created, it remains in the `Pending` state until the associated s

 = Filesystem Expansion

-Longhorn supports xref:nodes-and-volumes/volumes/expansion.adoc#encrypted-volume[offline expansion] for encrypted volumes.
+Longhorn supports xref:nodes-and-volumes/volumes/expansion.adoc#_encrypted_volume[offline expansion] for encrypted volumes.

 = History

modules/en/pages/advanced-resources/support-managed-k8s-service/upgrade-k8s-on-eks.adoc

Lines changed: 1 addition & 1 deletion

@@ -6,4 +6,4 @@ In Longhorn, set `replica-replenishment-wait-interval` to `0`.

 See https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html[Updating a cluster] for instructions.

-NOTE: If you have created xref:advanced-resources/support-managed-k8s-service/manage-node-group-on-eks.adoc#create-additional-volume[addition disks] for Longhorn, you will need to manually add the path of the mounted disk into the disk list of the upgraded nodes.
+NOTE: If you have created xref:advanced-resources/support-managed-k8s-service/manage-node-group-on-eks.adoc#_create_additional_volume[addition disks] for Longhorn, you will need to manually add the path of the mounted disk into the disk list of the upgraded nodes.

modules/en/pages/advanced-resources/system-backup-restore/backup-longhorn-system.adoc

Lines changed: 1 addition & 1 deletion

@@ -38,7 +38,7 @@ It includes below resources associating with the Longhorn system:
 * StorageClasses
 * Volumes

-WARNING: Longhorn does not backup `BackingImages`. We will improve this part in the future. See xref:advanced-resources/system-backup-restore/restore-longhorn-system.adoc#prerequisite[Restore Longhorn System - Prerequisite] for restoring volumes created with the backing image.
+WARNING: Longhorn does not backup `BackingImages`. We will improve this part in the future. See xref:advanced-resources/system-backup-restore/restore-longhorn-system.adoc#_prerequisite[Restore Longhorn System - Prerequisite] for restoring volumes created with the backing image.

 NOTE: Longhorn does not backup `Nodes`. The Longhorn manager on the target cluster is responsible for creating its own Longhorn `Node` custom resources.

modules/en/pages/advanced-resources/system-backup-restore/restore-longhorn-system.adoc

Lines changed: 6 additions & 6 deletions

@@ -19,7 +19,7 @@

 == Longhorn System Restore Rollouts

-* Longhorn restores the resource from the xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#longhorn-system-backup-bundle[Longhorn System Backup Bundle].
+* Longhorn restores the resource from the xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#_longhorn_system_backup_bundle[Longhorn System Backup Bundle].
 * Longhorn does not restore existing `Volumes` and their associated `PersistentVolume` and `PersistentVolumeClaim`.
 * Longhorn automatically restores a `Volume` from its latest backup.
 * To prevent overwriting eligible settings, Longhorn does not restore the `ConfigMap/longhorn-default-setting`.

@@ -41,7 +41,7 @@ You can restore the Longhorn system using Longhorn UI. Or with the `kubectl` com
 * Set up the `Nodes` and disk tags for `StorageClass`.
 * Have a Longhorn system backup.
 +
-See xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#create-longhorn-system-backup[Backup Longhorn System - Create Longhorn System Backup] for instructions.
+See xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#_create_longhorn_system_backup[Backup Longhorn System - Create Longhorn System Backup] for instructions.

 * Have volume `BackingImages` available in the cluster.
 +

@@ -123,10 +123,10 @@ systemrestore.longhorn.io "restore-demo" deleted

 Some settings are excluded as configurable before the Longhorn system restore.

-* xref:references/settings.adoc#concurrent-volume-backup-restore-per-node-limit[Concurrent volume backup restore per node limit]
-* xref:references/settings.adoc#concurrent-replica-rebuild-per-node-limit[Concurrent replica rebuild per node limit]
-* xref:references/settings.adoc#backup-target[Backup Target]
-* xref:references/settings.adoc#backup-target-credential-secret[Backup Target Credential Secret]
+* xref:references/settings.adoc#_concurrent_volume_backup_restore_per_node_limit[Concurrent volume backup restore per node limit]
+* xref:references/settings.adoc#_concurrent_replica_rebuild_per_node_limit[Concurrent replica rebuild per node limit]
+* xref:references/settings.adoc#_backup_target[Backup Target]
+* xref:references/settings.adoc#_backup_target_credential_secret[Backup Target Credential Secret]

 == Troubleshoot

modules/en/pages/advanced-resources/system-backup-restore/restore-to-a-cluster-contains-data-using-Rancher-snapshot.adoc

Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@ This doc describes what users need to do after restoring the cluster with a Ranc

 * *Most of the data and the underlying disks still exist* in the cluster before the restore and can be directly reused then.
 * There is a backupstore holding all volume data.
-* The setting xref:references/settings.adoc#disable-revision-counter[`Disable Revision Counter`] is false. (It's false by default.) Otherwise, users need to manually check if the data among volume replicas are consistent, or directly restore volumes from backup.
+* The setting xref:references/settings.adoc#_disable_revision_counter[`Disable Revision Counter`] is false. (It's false by default.) Otherwise, users need to manually check if the data among volume replicas are consistent, or directly restore volumes from backup.

 == Expectation:

modules/en/pages/best-practices.adoc

Lines changed: 8 additions & 8 deletions

@@ -132,7 +132,7 @@ Since Longhorn doesn't currently support sharding between the different disks, w

 Any extra disks must be written in the `/etc/fstab` file to allow automatic mounting after the machine reboots.

-Don't use a symbolic link for the extra disks. Use `mount --bind` instead of `ln -s` and make sure it's in the `fstab` file. For details, see xref:nodes-and-volumes/nodes/multidisk.adoc#use-an-alternative-path-for-a-disk-on-the-node[the section about multiple disk support.]
+Don't use a symbolic link for the extra disks. Use `mount --bind` instead of `ln -s` and make sure it's in the `fstab` file. For details, see xref:nodes-and-volumes/nodes/multidisk.adoc#_use_an_alternative_path_for_a_disk_on_the_node[the section about multiple disk support.]

 == Configuring Default Disks Before and After Installation

@@ -152,9 +152,9 @@ The following sections outline other recommendations for production environments

 === IO Performance

-* *Storage network*: Use a xref:advanced-resources/deploy/storage-network.adoc#setting-storage-network[dedicated storage network] to improve IO performance and stability.
-* *Longhorn disk*: Use a xref:nodes-and-volumes/nodes/multidisk.adoc#add-a-disk[dedicated disk] for Longhorn storage instead of using the root disk.
-* *Replica count*: Set the xref:references/settings.adoc#default-replica-count[default replica count] to "2" to achieve data availability with better disk space usage or less impact to system performance. This practice is especially beneficial to data-intensive applications.
+* *Storage network*: Use a xref:advanced-resources/deploy/storage-network.adoc#_setting_storage_network[dedicated storage network] to improve IO performance and stability.
+* *Longhorn disk*: Use a xref:nodes-and-volumes/nodes/multidisk.adoc#_add_a_disk[dedicated disk] for Longhorn storage instead of using the root disk.
+* *Replica count*: Set the xref:references/settings.adoc#_default_replica_count[default replica count] to "2" to achieve data availability with better disk space usage or less impact to system performance. This practice is especially beneficial to data-intensive applications.
 * *Storage tag*: Use xref:nodes-and-volumes/nodes/storage-tags.adoc[storage tags] to define storage tiering for data-intensive applications. For example, only high-performance disks can be used for storing performance-sensitive data.
 * *Data locality*: Use `best-effort` as the default xref:high-availability/data-locality.adoc[data locality] of Longhorn StorageClasses.
 +

@@ -166,15 +166,15 @@ For data-intensive applications, you can use pod scheduling functions such as no

 * *Recurring snapshots*: Periodically clean up system-generated snapshots and retain only the number of snapshots that makes sense for your implementation.
 +
-For applications with replication capability, periodically xref:concepts.adoc#243-deleting-snapshots[delete all types of snapshots].
+For applications with replication capability, periodically xref:concepts.adoc#_2_4_3_deleting_snapshots[delete all types of snapshots].

 * *Recurring filesystem trim*: Periodically xref:nodes-and-volumes/volumes/trim-filesystem.adoc[trim the filesystem] inside volumes to reclaim disk space.
 * *Snapshot space management*: xref:snapshots-and-backups/snapshot-space-management.adoc[Configure global and volume-specific settings] to prevent unexpected disk space exhaustion.

 === Disaster Recovery

 * *Recurring backups*: Create xref:snapshots-and-backups/scheduling-backups-and-snapshots.adoc[recurring backup jobs] for mission-critical application volumes.
-* *System backup*: Create periodic xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#create-longhorn-system-backup[system backups].
+* *System backup*: Create periodic xref:advanced-resources/system-backup-restore/backup-longhorn-system.adoc#_create_longhorn_system_backup[system backups].

 == Deploying Workloads

@@ -200,15 +200,15 @@ You can also set a specific milli CPU value for instance manager pods on a parti

 NOTE: This field will overwrite the above setting for the specified node.

-Refer to xref:references/settings.adoc#guaranteed-instance-manager-cpu[Guaranteed Instance Manager CPU] for more details.
+Refer to xref:references/settings.adoc#_guaranteed_instance_manager_cpu[Guaranteed Instance Manager CPU] for more details.

 === V2 Data Engine

 The `Guaranteed Instance Manager CPU for V2 Data Engine` setting allows you to reserve a specific number of millicpus on each node for each instance manager pod when the V2 Data Engine is enabled. By default, the Storage Performance Development Kit (SPDK) target daemon within each instance manager pod uses 1 CPU core. Configuring a minimum CPU usage value is essential for maintaining engine and replica stability, especially during periods of high node workload. The default value is 1250.

 == StorageClass

-We don't recommend modifying the default StorageClass named `longhorn`, since the change of parameters might cause issues during an upgrade later. If you want to change the parameters set in the StorageClass, you can create a new StorageClass by referring to the xref:references/examples.adoc#storageclass[StorageClass examples].
+We don't recommend modifying the default StorageClass named `longhorn`, since the change of parameters might cause issues during an upgrade later. If you want to change the parameters set in the StorageClass, you can create a new StorageClass by referring to the xref:references/examples.adoc#_storageclass[StorageClass examples].

 == Scheduling Settings

modules/en/pages/concepts.adoc

Lines changed: 2 additions & 2 deletions

@@ -9,7 +9,7 @@ The storage controller and replicas are themselves orchestrated using Kubernetes

 For an overview of Longhorn features, refer to xref:what-is-longhorn.adoc[this section.]

-For the installation requirements, go to xref:deploy/install/index.adoc#installation-requirements[this section.]
+For the installation requirements, go to xref:deploy/install/index.adoc#_installation_requirements[this section.]

 ____
 This section assumes familiarity with Kubernetes persistent storage concepts. For more information on these concepts, refer to the <<appendix-how-persistent-storage-works-in-kubernetes,appendix.>> For help with the terminology used in this page, refer to xref:terminology.adoc[this section.]

@@ -142,7 +142,7 @@ Each replica contains a chain of snapshots of a Longhorn volume. A snapshot is l

 For each Longhorn volume, multiple replicas of the volume should run in the Kubernetes cluster, each on a separate node. All replicas are treated the same, and the Longhorn Engine always runs on the same node as the pod, which is also the consumer of the volume. In that way, we make sure that even if the Pod is down, the Engine can be moved to another Pod and your service will continue undisrupted.

-The default replica count can be changed in the xref:references/settings.adoc#default-replica-count[settings.] When a volume is attached, the replica count for the volume can be changed in the UI.
+The default replica count can be changed in the xref:references/settings.adoc#_default_replica_count[settings.] When a volume is attached, the replica count for the volume can be changed in the UI.

 If the current healthy replica count is less than specified replica count, Longhorn will start rebuilding new replicas.
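
A note on the design choice: auto-generated IDs track the section titles, so renaming a heading in `settings.adoc` (or any other target) would silently change these anchors again. If a stable anchor is ever needed, AsciiDoc also allows pinning an explicit ID on the target section, which overrides the auto-generated one. A hypothetical illustration (not part of this commit; the heading level and ID are assumed):

[source,asciidoc]
----
// an explicit block anchor keeps the old-style ID working after title edits
[#backup-target]
== Backup Target
----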
