Closes #673

Removes remaining `asciidocalypse` links from Markdown files and removes `asciidocalypse` from the `cross_links` list in `docset.yml` to prevent any new `asciidocalypse` links from being added.
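For reviewers' context, the `cross_links` side of this change in `docset.yml` is roughly the following shape. This is a hypothetical excerpt: the surrounding entries and repository names are assumptions, not the actual file contents.

```yaml
# docset.yml (hypothetical excerpt; the other entries are illustrative, not the real file)
cross_links:
  - docs-content
  - elasticsearch
  # - asciidocalypse   # removed so that new asciidocalypse:// links no longer resolve at build time
```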
deploy-manage/autoscaling/autoscaling-deciders.md (+11 −11)
@@ -18,7 +18,7 @@ applies_to:
 [Autoscaling](/deploy-manage/autoscaling.md) in Elasticsearch enables dynamic resource allocation based on predefined policies. A key component of this mechanism is autoscaling deciders, which independently assess resource requirements and determine when scaling actions are necessary. Deciders analyze various factors, such as storage usage, indexing rates, and machine learning workloads, to ensure clusters maintain optimal performance without manual intervention.
-::::{admonition} Indirect use only
+::::{admonition} Indirect use only
 This feature is designed for indirect use by {{ech}}, {{ece}}, and {{eck}}. Direct use is not supported.
 ::::
@@ -49,7 +49,7 @@ The [autoscaling](../../deploy-manage/autoscaling.md) reactive storage decider (
 The reactive storage decider is enabled for all policies governing data nodes and has no configuration options.
-The decider relies partially on using [data tier preference](../../manage-data/lifecycle/data-tiers.md#data-tier-allocation) allocation rather than node attributes. In particular, scaling a data tier into existence (starting the first node in a tier) will result in starting a node in any data tier that is empty if not using allocation based on data tier preference. Using the [ILM migrate](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/index-lifecycle-actions/ilm-migrate.md) action to migrate between tiers is the preferred way of allocating to tiers and fully supports scaling a tier into existence.
+The decider relies partially on using [data tier preference](../../manage-data/lifecycle/data-tiers.md#data-tier-allocation) allocation rather than node attributes. In particular, scaling a data tier into existence (starting the first node in a tier) will result in starting a node in any data tier that is empty if not using allocation based on data tier preference. Using the [ILM migrate](elasticsearch://reference/elasticsearch/index-lifecycle-actions/ilm-migrate.md) action to migrate between tiers is the preferred way of allocating to tiers and fully supports scaling a tier into existence.

-: (Optional, [time value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#time-units)) The window of time to use for forecasting. Defaults to 30 minutes.
+: (Optional, [time value](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#time-units)) The window of time to use for forecasting. Defaults to 30 minutes.

-: (Optional, [byte value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#byte-units)) The memory needed per shard, in bytes. Defaults to 2000 shards per 64 GB node (roughly 32 MB per shard). Notice that this is total memory, not heap, assuming that the Elasticsearch default heap sizing mechanism is used and that nodes are not bigger than 64 GB.
+: (Optional, [byte value](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#byte-units)) The memory needed per shard, in bytes. Defaults to 2000 shards per 64 GB node (roughly 32 MB per shard). Notice that this is total memory, not heap, assuming that the Elasticsearch default heap sizing mechanism is used and that nodes are not bigger than 64 GB.
@@ -121,8 +121,8 @@ The [autoscaling](../../deploy-manage/autoscaling.md) {{ml}} decider (`ml`) calc
 The {{ml}} decider is enabled for policies governing `ml` nodes.
-::::{note}
-For {{ml}} jobs to open when the cluster is not appropriately scaled, set `xpack.ml.max_lazy_ml_nodes` to the largest number of possible {{ml}} nodes (refer to [Advanced machine learning settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/machine-learning-settings.md#advanced-ml-settings) for more information). In {{ess}}, this is automatically set.
+::::{note}
+For {{ml}} jobs to open when the cluster is not appropriately scaled, set `xpack.ml.max_lazy_ml_nodes` to the largest number of possible {{ml}} nodes (refer to [Advanced machine learning settings](elasticsearch://reference/elasticsearch/configuration-reference/machine-learning-settings.md#advanced-ml-settings) for more information). In {{ess}}, this is automatically set.
 ::::
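As a side note for reviewers, the `xpack.ml.max_lazy_ml_nodes` setting referenced in this hunk is an ordinary cluster setting. A minimal sketch of setting it in `elasticsearch.yml`, with an illustrative value only:

```yaml
# elasticsearch.yml (illustrative value; use the largest number of ML nodes the deployment can scale to)
xpack.ml.max_lazy_ml_nodes: 3
```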
@@ -137,7 +137,7 @@ Both `num_anomaly_jobs_in_queue` and `num_analytics_jobs_in_queue` are designed
 : (Optional, integer) Specifies the number of queued {{dfanalytics-jobs}} to allow. Defaults to `0`.
 `down_scale_delay`
-: (Optional, [time value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#time-units)) Specifies the time to delay before scaling down. Defaults to 1 hour. If a scale down is possible for the entire time window, then a scale down is requested. If the cluster requires a scale up during the window, the window is reset.
+: (Optional, [time value](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#time-units)) Specifies the time to delay before scaling down. Defaults to 1 hour. If a scale down is possible for the entire time window, then a scale down is requested. If the cluster requires a scale up during the window, the window is reset.
@@ -168,12 +168,12 @@ The API returns the following result:
 ## Fixed decider [autoscaling-fixed-decider]
-::::{warning}
+::::{warning}
 This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
 ::::
-::::{warning}
+::::{warning}
 The fixed decider is intended for testing only. Do not use this decider in production.
 ::::
@@ -183,10 +183,10 @@ The [autoscaling](../../deploy-manage/autoscaling.md) `fixed` decider responds w
deploy-manage/deploy.md (+4 −4)
@@ -23,10 +23,10 @@ Your choice of deployment type determines how you'll set up and manage these cor
 This section focuses on deploying and managing {{es}} and {{kib}}, as well as supporting orchestration technologies. However, depending on your use case, you might need to deploy [other {{stack}} components](/get-started/the-stack.md). For example, you might need to add components to ingest logs or metrics.
 To learn how to deploy optional {{stack}} components, refer to the following sections:
-*[Fleet and Elastic Agent](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/index.md)
+*[Fleet and Elastic Agent](/reference/ingestion-tools/fleet/index.md)
deploy-manage/deploy/cloud-enterprise/air-gapped-install.md (+1 −1)
@@ -19,7 +19,7 @@ Before you start, you must:
 * Follow the same prerequisites described in [](./install.md#ece-install-prerequisites). This includes [](./identify-deployment-scenario.md) and [](./prepare-environment.md) steps.
 *[Configure your operating system](./configure-operating-system.md) in all ECE hosts.
 * Be part of the `docker` group to run the installation script. You should not install Elastic Cloud Enterprise as the `root` user.
-* Set up and run a local copy of the Elastic Package Repository, otherwise your deployments with APM server and Elastic agent won’t work. Refer to the [Running EPR in air-gapped environments](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/air-gapped.md#air-gapped-diy-epr) documentation.
+* Set up and run a local copy of the Elastic Package Repository, otherwise your deployments with APM server and Elastic agent won’t work. Refer to the [Running EPR in air-gapped environments](/reference/ingestion-tools/fleet/air-gapped.md#air-gapped-diy-epr) documentation.

 When you are ready to install ECE, you can proceed:
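For reviewers unfamiliar with the air-gapped flow, the local Elastic Package Repository mentioned in this bullet is typically run from the `docker.elastic.co/package-registry/distribution` image. A minimal Docker Compose sketch with an assumed version tag; the linked air-gapped guide covers the supported tags and the Kibana configuration that points Fleet at the local registry:

```yaml
# docker-compose.yml sketch for a self-hosted Elastic Package Registry (version tag is an assumption)
services:
  package-registry:
    image: docker.elastic.co/package-registry/distribution:8.17.0
    restart: unless-stopped
    ports:
      - "8080:8080"   # Fleet/Kibana must then be configured to use this registry URL
```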
deploy-manage/deploy/cloud-enterprise/manage-integrations-server.md (+1 −1)
@@ -8,7 +8,7 @@ mapped_urls:
 # Manage your Integrations Server [ece-manage-integrations-server]
-For deployments that are version 8.0 and later, you have the option to add a combined [Application Performance Monitoring (APM) Server](/solutions/observability/apps/application-performance-monitoring-apm.md) and [Fleet Server](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/index.md) to your deployment. APM allows you to monitor software services and applications in real time, turning that data into documents stored in the Elasticsearch cluster. Fleet allows you to centrally manage Elastic Agents on many hosts.
+For deployments that are version 8.0 and later, you have the option to add a combined [Application Performance Monitoring (APM) Server](/solutions/observability/apps/application-performance-monitoring-apm.md) and [Fleet Server](/reference/ingestion-tools/fleet/index.md) to your deployment. APM allows you to monitor software services and applications in real time, turning that data into documents stored in the Elasticsearch cluster. Fleet allows you to centrally manage Elastic Agents on many hosts.
 As part of provisioning, the APM Server and Fleet Server are already configured to work with Elasticsearch and Kibana. At the end of provisioning, you are shown the secret token to configure communication between the APM Server and the backend [APM Agents](https://www.elastic.co/guide/en/apm/agent/index.html). The APM Agents get deployed within your services and applications.

 With the API console you can interact with a specific {{es}} deployment directly from the Cloud UI without having to authenticate again. This RESTful API access is limited to the specific cluster and works only for Elasticsearch API calls.
-::::{important}
+::::{important}
 API console is intended for admin purposes. Avoid running normal workload like indexing or search requests.
deploy-manage/deploy/self-managed/_snippets/enroll-nodes.md (+1 −1)
@@ -2,7 +2,7 @@ When {{es}} starts for the first time, the security auto-configuration process b
 Before enrolling a new node, additional actions such as binding to an address other than `localhost` or satisfying bootstrap checks are typically necessary in production clusters. During that time, an auto-generated enrollment token could expire, which is why enrollment tokens aren’t generated automatically.
-Additionally, only nodes on the same host can join the cluster without additional configuration. If you want nodes from another host to join your cluster, you need to set `transport.host` to a [supported value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/networking-settings.md#network-interface-values) (such as uncommenting the suggested value of `0.0.0.0`), or an IP address that’s bound to an interface where other hosts can reach it. Refer to [transport settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/networking-settings.md#transport-settings) for more information.
+Additionally, only nodes on the same host can join the cluster without additional configuration. If you want nodes from another host to join your cluster, you need to set `transport.host` to a [supported value](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md#network-interface-values) (such as uncommenting the suggested value of `0.0.0.0`), or an IP address that’s bound to an interface where other hosts can reach it. Refer to [transport settings](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md#transport-settings) for more information.
 To enroll new nodes in your cluster, create an enrollment token with the `elasticsearch-create-enrollment-token` tool on any existing node in your cluster. You can then start a new node with the `--enrollment-token` parameter so that it joins an existing cluster.
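For context on the `transport.host` guidance in this snippet, the corresponding `elasticsearch.yml` change on the joining node is a single line. A minimal sketch; the `0.0.0.0` value comes from the snippet itself, and any other value should be an address on an interface the existing nodes can reach:

```yaml
# elasticsearch.yml on the node being enrolled (illustrative)
transport.host: 0.0.0.0   # or an IP bound to an interface that other hosts can reach
```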